[HACKERS] Help needed regarding DSA based in-memory index!

2017-10-24 Thread Gaddam Sai Ram
Hi Munro,

  Thanks for cautioning us about possible memory leaks (during error cases)
in case of long-lived DSA segments (see the thread below for more
details).



  
https://www.postgresql.org/message-id/CAEepm%3D3c4WAtSQG4tAF7Y_VCnO5cKh7KuFYZhpKbwGQOF%3DdZ4A%40mail.gmail.com


  Actually, we are following an approach to avoid these DSA memory leaks. Let
me explain our implementation; please validate it and correct us in case we
  missed anything.



  Implementation:

  

  Basically, we have to put our index data (index column value
vs. ctid) into memory, which we get in the aminsert callback function.

  

  Coming to the implementation, in the aminsert callback function:

We switch to CurTransactionContext.

We cache the DMLs of a transaction in a dlist (global, per process).

Even if different clients work in parallel, it won't be a problem, because
every client gets one dlist in a separate process, and each process has its
own CurTransactionContext.

We have registered a transaction callback (using the RegisterXactCallback()
function). During the pre-commit event (XACT_EVENT_PRE_COMMIT), we populate
all the transaction-specific DMLs (from the dlist) into our in-memory index
(DSA), obviously inside a PG_TRY/PG_CATCH block.

In case we get an error (because of dsa_allocate() or something else) while
processing the dlist (while populating the in-memory index), we clean up, in
the PG_CATCH block, the DSA memory that was allocated/used up to that point.

In other error cases, the transaction typically gets aborted and the
PRE_COMMIT event is not called, so we don't touch the DSA at that time. Hence
there is no need to worry about leaks.

Even the subtransaction case is handled, with subtransaction callbacks.

The CurTransactionContext (the dlist, basically) is automatically cleared
after that particular transaction.



I want to know whether this approach is sound and works well in all cases.
Kindly provide your feedback.



 


 
Regards

G. Sai Ram








Re: [HACKERS] Help required to debug pg_repack breaking logical replication

2017-10-12 Thread Robert Haas
On Sat, Oct 7, 2017 at 2:37 PM, Daniele Varrazzo
 wrote:
> (with a
> custom addition to update the relfrozenxid which seems backwards to me
> as it sets the older frozen xid on the new table [3]).
>
> [3] https://github.com/reorg/pg_repack/issues/152

Wow.  That's really bad.  It will corrupt your database.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Help required to debug pg_repack breaking logical replication

2017-10-10 Thread Petr Jelinek
On 08/10/17 15:21, Craig Ringer wrote:
> On 8 October 2017 at 02:37, Daniele Varrazzo  
> wrote:
>> Hello,
>>
>> we have been reported, and I have experienced a couple of times,
>> pg_repack breaking logical replication.
>>
>> - https://github.com/reorg/pg_repack/issues/135
>> - https://github.com/2ndQuadrant/pglogical/issues/113
> 
> Yeah, I was going to say I've seen reports of this with pglogical, but
> I see you've linked to them.
> 
> I haven't had a chance to look into it though, and haven't had a
> suitable reproducible test case.
> 
>> In the above issue #113, Petr Jelinek commented:
>>
>>> From quick look at pg_repack, the way it does table rewrite is almost 
>>> guaranteed
>>> to break logical decoding unless there is zero unconsumed changes for a 
>>> given table
>>> as it does not build the necessary mappings info for logical decoding that 
>>> standard
>>> heap rewrite in postgres does.
>>
>> unfortunately he didn't follow up to requests for further details.
> 
> At a guess he's referring to src/backend/access/heap/rewriteheap.c .
> 
> I'd explain better if I understood what was going on myself, but I
> haven't really understood the logical decoding parts of that code.
> 
>> - Is Petr's diagnosis right, and is the freezing of logical replication to
>> be blamed on the missing mapping?
>> - Can you suggest a test to reproduce the issue reliably?
>> - What are mapped relations anyway?
> 
> I can't immediately give you the answers you seek, but start by
> studying src/backend/access/heap/rewriteheap.c . Notably
> logical_end_heap_rewrite, logical_rewrite_heap_tuple,
> logical_begin_heap_rewrite.
> 

Yes, that's exactly it. When a table is rewritten, we need to create a mapping
for every tuple that was created or removed (i.e., an insert, update or
delete operation happened on it) since the oldest replication slot's xmin,
for logical decoding to continue to work on that table after the
rewrite. And pg_repack doesn't create that mapping.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Help required to debug pg_repack breaking logical replication

2017-10-08 Thread Craig Ringer
On 8 October 2017 at 02:37, Daniele Varrazzo  wrote:
> Hello,
>
> we have been reported, and I have experienced a couple of times,
> pg_repack breaking logical replication.
>
> - https://github.com/reorg/pg_repack/issues/135
> - https://github.com/2ndQuadrant/pglogical/issues/113

Yeah, I was going to say I've seen reports of this with pglogical, but
I see you've linked to them.

I haven't had a chance to look into it though, and haven't had a
suitable reproducible test case.

> In the above issue #113, Petr Jelinek commented:
>
>> From quick look at pg_repack, the way it does table rewrite is almost 
>> guaranteed
>> to break logical decoding unless there is zero unconsumed changes for a 
>> given table
>> as it does not build the necessary mappings info for logical decoding that 
>> standard
>> heap rewrite in postgres does.
>
> unfortunately he didn't follow up to requests for further details.

At a guess he's referring to src/backend/access/heap/rewriteheap.c .

I'd explain better if I understood what was going on myself, but I
haven't really understood the logical decoding parts of that code.

> - Is Petr's diagnosis right, and is the freezing of logical replication to
> be blamed on the missing mapping?
> - Can you suggest a test to reproduce the issue reliably?
> - What are mapped relations anyway?

I can't immediately give you the answers you seek, but start by
studying src/backend/access/heap/rewriteheap.c . Notably
logical_end_heap_rewrite, logical_rewrite_heap_tuple,
logical_begin_heap_rewrite.

At a wild "I haven't read any of the relevant code in detail yet" stab
in the dark, pg_repack is failing to do the bookkeeping required by
logical decoding around relfilenode changes, cmin/cmax, etc.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Help required to debug pg_repack breaking logical replication

2017-10-07 Thread Daniele Varrazzo
Hello,

we have been reported, and I have experienced a couple of times,
pg_repack breaking logical replication.

- https://github.com/reorg/pg_repack/issues/135
- https://github.com/2ndQuadrant/pglogical/issues/113

In my experience, after the botched run the replication slot was
"stuck", and any attempt at reading from it (including
pg_logical_slot_peek_changes()) blocked until ctrl-C. I've tried
replicating the issue, but my first attempts have failed to fail.

In the above issue #113, Petr Jelinek commented:

> From quick look at pg_repack, the way it does table rewrite is almost 
> guaranteed
> to break logical decoding unless there is zero unconsumed changes for a given 
> table
> as it does not build the necessary mappings info for logical decoding that 
> standard
> heap rewrite in postgres does.

unfortunately he didn't follow up to requests for further details.

I've started drilling down into the problem, observing that the swap
function, swap_heap_or_index_files() [1], was cargo-culted by the
original author from the CLUSTER command code as of PG 8.2 [2] (with a
custom addition to update relfrozenxid, which seems backwards to me,
as it sets the older frozen xid on the new table [3]).

[1] https://github.com/reorg/pg_repack/blob/ver_1.4.1/lib/repack.c#L1082
[2] 
https://github.com/postgres/postgres/blob/REL8_2_STABLE/src/backend/commands/cluster.c#L783
[3] https://github.com/reorg/pg_repack/issues/152

so that code is effectively missing a good 10 years of development.
Before jumping into fast-forwarding it, I would like to ask for some
help, i.e.:

- Is Petr's diagnosis right, and is the freezing of logical replication to be
blamed on the missing mapping?
- Can you suggest a test to reproduce the issue reliably?
- What are mapped relations, anyway?

Thank you in advance for any help (either info about how to fix the
issue properly, or a patch from someone who happens to really understand
what we are talking about).

-- Daniele




Re: [HACKERS] help to identify the reason that extension's C function returns array get segmentation fault

2017-02-28 Thread 钱新林
Thanks for your clues.

The system I used to debug the code is x86-64 based, Ubuntu 14.04
with PostgreSQL 9.3.13. I have revised the code, and it now looks like the following:

Datum vquery(PG_FUNCTION_ARGS) {

int array_len = PG_GETARG_INT32(0);
int64 * node_ids;
ArrayType * retarr;
Datum * vals;
int i;

SPI_connect();

//some code to retrieve data from various tables
// node_ids are allocated and filled up

/* SPI_palloc allocates in the upper executor context, which survives
   SPI_finish() */
vals = SPI_palloc(array_len * sizeof(Datum));
memset(vals, 0, array_len * sizeof(Datum));

// fill the vals up
for (i = 0; i < array_len; i++)
  vals[i] = Int64GetDatum(node_ids[i]);

retarr = construct_array(vals, array_len, INT8OID, sizeof(int64), true, 'd');

SPI_finish();

PG_RETURN_ARRAYTYPE_P(retarr);
}

This seems to solve the problem; I have tested the code for a while, and no
more segmentation faults are reported.

I have built PostgreSQL with --enable-debug and --enable-cassert, but when I
use the binary with gdb, no symbol file is loaded. I will do further
research and use it to facilitate debugging. Thanks.

On Tue, Feb 28, 2017 at 12:54 PM, Tom Lane  wrote:

> 钱新林  writes:
> > I have written an extension to manage openstreetmap data. There is a C
> > function to perform spatial top k query on several  tables and return an
> > array of int8 type as result. The code skeleton of this function is as
> > follows:
>
> There are a remarkable lot of bugs in this code fragment.  Many of them
> would not bite you as long as you are running on 64-bit Intel hardware,
> but that doesn't make them not bugs.
>
> > Datum vquery(PG_FUNCTION_ARGS) {
>
> > int array_len = PG_GETARG_INT32(0);
> > long * node_ids;
>
> > SPI_connect();
>
> > //some code to retrieve data from various tables
> > // node_ids are allocated and filled up
>
> > ArrayType * retarr;
> > Datum * vals ;
>
> > vals = palloc0(array_len * sizeof(long));
>
> Datum is not necessarily the same as "long".
>
> > // fill the vals up
> > for (i = 0 ; i < array_len ; i++)
> >   vals[i] = Int64GetDatum((node_ids[i]));
>
> int64 is not necessarily the same as "long", either.
>
> > retarr = construct_array(vals, retcnt, INT8OID, sizeof(long), true, 'i');
>
> Again, INT8 is not the same size as "long", and it's not necessarily
> pass-by-val, and it's *certainly* not integer alignment.
>
> > SPI_finish();
>
> > PG_RETURN_ARRAYTYPE_P(retarr);
>
> But I think what's really biting you, probably, is that construct_array()
> made the array in CurrentMemoryContext which at that point was the SPI
> execution context; which would be deleted by SPI_finish.  So you're
> returning a dangling pointer.  You need to do something to either copy
> the array value out to the caller's context, or build it there in the
> first place.
>
> BTW, this failure would be a lot less intermittent if you were testing
> in a CLOBBER_FREED_MEMORY build.  I would go so far as to say you should
> *never* develop or test C code for the Postgres backend without using
> the --enable-cassert configure option for your build.  You're simply
> tossing away a whole lot of debug support if you don't.
>
> regards, tom lane
>


Re: [HACKERS] help to identify the reason that extension's C function returns array get segmentation fault

2017-02-27 Thread Tom Lane
钱新林  writes:
> I have written an extension to manage openstreetmap data. There is a C
> function to perform spatial top k query on several  tables and return an
> array of int8 type as result. The code skeleton of this function is as
> follows:

There are a remarkable lot of bugs in this code fragment.  Many of them
would not bite you as long as you are running on 64-bit Intel hardware,
but that doesn't make them not bugs.

> Datum vquery(PG_FUNCTION_ARGS) {

> int array_len = PG_GETARG_INT32(0);
> long * node_ids;

> SPI_connect();

> //some code to retrieve data from various tables
> // node_ids are allocated and filled up

> ArrayType * retarr;
> Datum * vals ;

> vals = palloc0(array_len * sizeof(long));

Datum is not necessarily the same as "long".

> // fill the vals up
> for (i = 0 ; i < array_len ; i++)
>   vals[i] = Int64GetDatum((node_ids[i]));

int64 is not necessarily the same as "long", either.

> retarr = construct_array(vals, retcnt, INT8OID, sizeof(long), true, 'i');

Again, INT8 is not the same size as "long", and it's not necessarily
pass-by-val, and it's *certainly* not integer alignment.

> SPI_finish();

> PG_RETURN_ARRAYTYPE_P(retarr);

But I think what's really biting you, probably, is that construct_array()
made the array in CurrentMemoryContext which at that point was the SPI
execution context; which would be deleted by SPI_finish.  So you're
returning a dangling pointer.  You need to do something to either copy
the array value out to the caller's context, or build it there in the
first place.

BTW, this failure would be a lot less intermittent if you were testing
in a CLOBBER_FREED_MEMORY build.  I would go so far as to say you should
*never* develop or test C code for the Postgres backend without using
the --enable-cassert configure option for your build.  You're simply
tossing away a whole lot of debug support if you don't.

regards, tom lane




[HACKERS] help to identify the reason that extension's C function returns array get segmentation fault

2017-02-27 Thread 钱新林
I have written an extension to manage openstreetmap data. There is a C
function to perform spatial top k query on several  tables and return an
array of int8 type as result. The code skeleton of this function is as
follows:

Datum vquery(PG_FUNCTION_ARGS) {

int array_len = PG_GETARG_INT32(0);
long * node_ids;

SPI_connect();

//some code to retrieve data from various tables
// node_ids are allocated and filled up

ArrayType * retarr;
Datum * vals ;

vals = palloc0(array_len * sizeof(long));

// fill the vals up
for (i = 0 ; i < array_len ; i++)
  vals[i] = Int64GetDatum((node_ids[i]));

retarr = construct_array(vals, retcnt, INT8OID, sizeof(long), true, 'i');

SPI_finish();

PG_RETURN_ARRAYTYPE_P(retarr);
}

the function runs smoothly when called with a relatively small parameter,
such as select(unnest(vquery(1000))); but when called with a large
parameter, such as select(unnest(vquery(5))), it sometimes runs
normally and sometimes runs into a "Segmentation fault" error. The larger the
parameter is, the more likely it is to run into a segmentation fault.

The back trace of the process is as follows:

Program received signal SIGSEGV, Segmentation fault.
pg_detoast_datum (datum=0x55d4e7e43bc0) at
/build/postgresql-9.3-xceQkK/postgresql-9.3-9.3.16/build/../src/backend/utils/fmgr/fmgr.c:2241
2241 
/build/postgresql-9.3-xceQkK/postgresql-9.3-9.3.16/build/../src/backend/utils/fmgr/fmgr.c:
No such file or directory.
(gdb) backtrace full
#0  pg_detoast_datum (datum=0x55d4e7e43bc0) at
/build/postgresql-9.3-xceQkK/postgresql-9.3-9.3.16/build/../src/backend/utils/fmgr/fmgr.c:2241
No locals.
#1  0x55d4e485a29f in array_out (fcinfo=0x7ffd0fdb9f30) at
/build/postgresql-9.3-xceQkK/postgresql-9.3-9.3.16/build/../src/backend/utils/adt/arrayfuncs.c:958
v = <optimized out>
element_type = <optimized out>
typlen = <optimized out>
typbyval = <optimized out>
typalign = <optimized out>
typdelim = <optimized out>
p = <optimized out>
tmp = <optimized out>
retval = <optimized out>
values = <optimized out>
dims_str =
"\346\263\346U\000\000\000\000f\021\352\312\342\177\000\000
%q\003\000\000\000\000\270\347\026\313\342\177\000\000\000\000\002\000\000\000\000\000\211O\343\312\342\177\000\000(?\a\000\000\000\000\000\371\336\342\312\342\177\000\000\001\000\000\000\000\000\000\000f\021\352\312\342\177\000\000\200\204\004\001\000\000\000\000\270\347\026\313\342\177\000\000\000\000\002\000\000\000\000\000\211O\343\312\342\177\000\000`1\354\352\324U\000\000\371\336\342\312\342\177\000\000\001\000\000\000\000\000\000\000\000\200\355\347\324U\000\000\300;\344\347\324U\000\000\200\373\350\346\324U\000\000\200\204\004\001\000\000\000\000`\347\026\313\342\177\000\000\200;\344\347\324U\000\000\200D\t\000\000\000\000\000\001\000\000\000\000\000\000"
bitmap = <optimized out>
bitmask = <optimized out>
needquotes = <optimized out>
needdims = <optimized out>
nitems = <optimized out>
overall_length = <optimized out>
i = <optimized out>
j = <optimized out>
k = <optimized out>
indx = {125, 0, 1638826752, 1007657037, 266051136, 32765}
ndim = <optimized out>
dims = <optimized out>
lb = <optimized out>
my_extra = <optimized out>
#2  0x55d4e491bf77 in FunctionCall1Coll
(flinfo=flinfo@entry=0x55d4e6281608,
collation=collation@entry=0, arg1=arg1@entry=94372911922112)
at
/build/postgresql-9.3-xceQkK/postgresql-9.3-9.3.16/build/../src/backend/utils/fmgr/fmgr.c:1301
fcinfo = {flinfo = 0x55d4e6281608, context = 0x0, resultinfo = 0x0,
fncollation = 0, isnull = 0 '\000', nargs = 1, arg = {94372911922112, 16,
94372911922112,
94372881863664, 81000, 140612075225152, 140724869504928,
94372856290490, 20, 94372911922112, 140724869504960, 94372855395365, 59362,
59362,
140724869505552, 140611821448277, 140724869505184,
140724869505104, 140724869505056, 25309696688, 4630392398271114606,
463877674369024,
4628742354541509959, 4638236535588651008, 140612075225152,
140724869505184, 140724869505264, 140724869505344, 8192, 1340029796386,
94372903805648,
94372905445328, 4412211000755930201, 4295079941117417898,
4212081119735560672, 94372856202527, 2087976960, 94372882833728,
94372882836912, 1,
140724869505296, 4327854021138088704, 3599182594146, 1,
140724869505248, 94372856077425, 1016, 94372882392848, 94372860652912,
140612076116712,
140724869505680, 94372856082219, 94372882407728,
140724869505360, 140724869505343, 3886087214688, 140612076116712,
140724869505344, 140728326873992,
94372882407736, 94372882817344, 94372882817312,
1125891316908032, 0, 94372855675584, 281483566645432, 2, 0, 94372881959504,
0, 1016, 0, 0, 8192,
18446603348840046049, 513, 128, 176, 140724869505568, 16,
459561500672, 2, 0, 511101108336, 0, 140724869505567, 0, 0, 124, 0, 0, 0,
0, 0, 0,
140612046612320, 8192, 1024, 1024, 1072},
  argnull = "\000

Re: [HACKERS] Help text for pg_basebackup -R

2017-02-18 Thread Magnus Hagander
On Fri, Feb 17, 2017 at 5:21 PM, Tom Lane  wrote:

> Stephen Frost  writes:
> > * Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
> >> Magnus Hagander wrote:
> >>> I'm guessing if we backpatch something like that, it would cause
> issues for
> >>> translations, right? So we should make it head only?
>
> >> We've had the argument a number of times.  My stand is that many
> >> translators are active in the older branches, so this update would be
> >> caught there too; and even if not, an updated English message is better
> >> than an outdated native-language message.
>
> > That makes sense to me, at least, so +1, for my part.
>
> Yeah, if the existing message text is actually wrong or misleading,
> we should back-patch.  I'm not sure I would do that if it's just a
> cosmetic improvement.  In this particular case, +1.
>

OK. Applied and backpatched.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Help text for pg_basebackup -R

2017-02-17 Thread Tom Lane
Stephen Frost  writes:
> * Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
>> Magnus Hagander wrote:
>>> I'm guessing if we backpatch something like that, it would cause issues for
>>> translations, right? So we should make it head only?

>> We've had the argument a number of times.  My stand is that many
>> translators are active in the older branches, so this update would be
>> caught there too; and even if not, an updated English message is better
>> than an outdated native-language message.

> That makes sense to me, at least, so +1, for my part.

Yeah, if the existing message text is actually wrong or misleading,
we should back-patch.  I'm not sure I would do that if it's just a
cosmetic improvement.  In this particular case, +1.

regards, tom lane




Re: [HACKERS] Help text for pg_basebackup -R

2017-02-17 Thread Stephen Frost
* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
> Magnus Hagander wrote:
> > On Wednesday, February 15, 2017, Alvaro Herrera 
> > wrote:
> > 
> > > Magnus Hagander wrote:
> > >
> > > >   printf(_("  -R, --write-recovery-conf\n"
> > > > -  " write recovery.conf
> > > after backup\n"));
> > > > +  " write recovery.conf for
> > > replication\n"));
> > > >   printf(_("  -S, --slot=SLOTNAMEreplication slot to use\n"));
> > >
> > > LGTM.
> > >
> > I'm guessing if we backpatch something like that, it would cause issues for
> > translations, right? So we should make it head only?
> 
> We've had the argument a number of times.  My stand is that many
> translators are active in the older branches, so this update would be
> caught there too; and even if not, an updated English message is better
> than an outdated native-language message.

That makes sense to me, at least, so +1 for my part.  Of course, I'm
not a translation-using user, but I have heard from people, when I've
spoken in other countries, that a correct English message is better than
an outdated native-language message, and further, that having the English
message makes it easier to get Google results.

Thanks!

Stephen




Re: [HACKERS] Help text for pg_basebackup -R

2017-02-17 Thread Alvaro Herrera
Magnus Hagander wrote:
> On Wednesday, February 15, 2017, Alvaro Herrera 
> wrote:
> 
> > Magnus Hagander wrote:
> >
> > >   printf(_("  -R, --write-recovery-conf\n"
> > > -  " write recovery.conf
> > after backup\n"));
> > > +  " write recovery.conf for
> > replication\n"));
> > >   printf(_("  -S, --slot=SLOTNAMEreplication slot to use\n"));
> >
> > LGTM.
> >
> I'm guessing if we backpatch something like that, it would cause issues for
> translations, right? So we should make it head only?

We've had the argument a number of times.  My stand is that many
translators are active in the older branches, so this update would be
caught there too; and even if not, an updated English message is better
than an outdated native-language message.

Now, that's been argued in the context of a bug fix that introduces new
messages or changes an existing message for other reasons.  I'm not sure
how strongly we think it applies to a change that's *only* about
updating a message.  I'm +0.5 on back-patching the change in this case.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Help text for pg_basebackup -R

2017-02-17 Thread Magnus Hagander
On Wednesday, February 15, 2017, Alvaro Herrera 
wrote:

> Magnus Hagander wrote:
>
> >   printf(_("  -R, --write-recovery-conf\n"
> > -  " write recovery.conf
> after backup\n"));
> > +  " write recovery.conf for
> replication\n"));
> >   printf(_("  -S, --slot=SLOTNAMEreplication slot to use\n"));
>
> LGTM.
>
>
I'm guessing if we backpatch something like that, it would cause issues for
translations, right? So we should make it head only?




-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Help text for pg_basebackup -R

2017-02-15 Thread Alvaro Herrera
Magnus Hagander wrote:

>   printf(_("  -R, --write-recovery-conf\n"
> -  " write recovery.conf after 
> backup\n"));
> +  " write recovery.conf for 
> replication\n"));
>   printf(_("  -S, --slot=SLOTNAMEreplication slot to use\n"));

LGTM.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Help text for pg_basebackup -R

2017-02-15 Thread Magnus Hagander
The current help text for pg_basebackup -R is "write recovery.conf after
backup".

This says nothing about what the option actually does. A number of people
have now asked me why it's not the default, "because you need a
recovery.conf to restore from backup". The point is that the text doesn't
mention that the file is written *for replication*. The help page does, but
the message does not.

I propose a new message per the attached patch.

Comments?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
index 0c82bc9..9020fb1 100644
--- a/src/bin/pg_basebackup/pg_basebackup.c
+++ b/src/bin/pg_basebackup/pg_basebackup.c
@@ -337,7 +337,7 @@ usage(void)
 	printf(_("  -r, --max-rate=RATEmaximum transfer rate to transfer data directory\n"
 	  " (in kB/s, or use suffix \"k\" or \"M\")\n"));
 	printf(_("  -R, --write-recovery-conf\n"
-			 " write recovery.conf after backup\n"));
+			 " write recovery.conf for replication\n"));
 	printf(_("  -S, --slot=SLOTNAMEreplication slot to use\n"));
 	printf(_("  --no-slot  prevent creation of temporary replication slot\n"));
 	printf(_("  -T, --tablespace-mapping=OLDDIR=NEWDIR\n"



[HACKERS] help for old extension, first_last_agg bug in install with pg9.5

2016-03-28 Thread Peter Krauss
The C implementation is simple; the problem is with the Makefile:
   https://github.com/wulczer/first_last_agg/issues/2

Any clues?

- - -

make
Makefile:25: /usr/lib/postgresql/9.5/lib/pgxs/src/makefiles/pgxs.mk:
  No such file or directory
make: *** No rule to make target
  `/usr/lib/postgresql/9.5/lib/pgxs/src/makefiles/pgxs.mk'.
  Stop.

Using Ubuntu 14.04 LTS

See also https://github.com/wulczer/first_last_agg/blob/master/Makefile


Re: [HACKERS] HELP!!! The WAL Archive is taking up all space

2015-12-08 Thread FattahRozzaq
Hi all,

Thank you for all of your responses.
Meanwhile, I will repost this at pgsql-gene...@postgresql.org


Regards,
Fattah

On 09/12/2015, David G. Johnston  wrote:
> On Tue, Dec 8, 2015 at 4:43 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Tue, Dec 8, 2015 at 3:33 AM, FattahRozzaq  wrote:
>>
>>> Hi all,
>>>
>>> Please help...
>>>
>>> I have 1 master PostgreSQL and 1 standby PostgreSQL.
>>> Both servers has the same OS Linux Debian Wheezy, the same hardware.
>>>
>>> Both server hardware:
>>> CPU: 24 cores
>>> RAM: 128GB
>>> Disk-1: 800GB SAS (for OS, logs, WAL archive directory)
>>> Disk-2: 330GB SSD (for PostgreSQL data directory, except WAL archive
>>> and except pg_log)
>>>
>>> The part of the configuration are as below:
>>> checkpoint_segments = 64
>>> checkpoint_completion_target = 0.9
>>> default_statistics_target = 10
>>> maintenance_work_mem = 1GB
>>> effective_cache_size = 64GB
>>> shared_buffers = 24GB
>>> work_mem = 5MB
>>> wal_buffers = 8MB
>>> wal_keep_segments = 4096
>>> wal_level = hot_standby
>>> max_wal_senders = 10
>>> archive_mode = on
>>> archive_command = 'cp -i %p /home/postgres/archive/master/%f'
>>>
>>>
>>> The WAL archive is at /home/postgres/archive/master/, right?
>>> This directory consume more than 750GB of Disk-1.
>>> Each segment in the /home/postgres/archive/master/ is 16MB each
>>> There are currently 47443 files in this folder.
>>>
>>> I want to limit the total size use by WAL archive to around 200-400 GB.
>>>
>>> Do I set the segment too big?
>>> wal_keep_segments = 4096
>>> checkpoint_segments = 64
>>>
>>> What value should I set for it?
>>>
>>
>> In which case, you need to calculate how long it takes to accumulate that
>> much archive data, and then perform a base backup roughly that often; after
>> that point, any WAL older than the point at which you began the backup can
>> be removed.
>>
>> You cannot just limit how large the WAL archive is, since removing any WAL
>> file will pretty much make any attempt at restoration fail.
>>
>> David J.
>>
>>
> While valid, I missed that you have a streaming replica on the other end
> that should be removing files as they are loaded, pending the retention
> setting... see Michael's response for better information.
>
> David J.
>




Re: [HACKERS] HELP!!! The WAL Archive is taking up all space

2015-12-08 Thread Michael Paquier
On Tue, Dec 8, 2015 at 7:33 PM, FattahRozzaq  wrote:
> The WAL archive is at /home/postgres/archive/master/, right?
> This directory consume more than 750GB of Disk-1.
> Each segment in the /home/postgres/archive/master/ is 16MB each
> There are currently 47443 files in this folder.
>
> I want to limit the total size use by WAL archive to around 200-400 GB.

This kind of question is better suited to pgsql-general. pgsql-hackers
is where discussions related to features and development happen.

There is no magic value. This depends on the data retention policy you
want to have for your backups. More information here:
http://www.postgresql.org/docs/devel/static/continuous-archiving.html
If you don't need this many segments, you should just decrease the setting.
If you need more, buy more disk space.

> Do I set the segment too big?
> wal_keep_segments = 4096
> What value should I set for it?

That's a lot, but it depends on what you seek; with 16MB segments it pins up
to 64GB of WAL. Here this would be useful if you expect to be able to
recover large instances, i.e. a base backup takes a lot of time, and a
standby that replays behind will still be able to connect to its
root node within this window of segments.
-- 
Michael




[HACKERS] HELP!!! The WAL Archive is taking up all space

2015-12-08 Thread FattahRozzaq
Hi all,

Please help...

I have 1 master PostgreSQL and 1 standby PostgreSQL.
Both servers have the same OS, Linux Debian Wheezy, and the same hardware.

Both server hardware:
CPU: 24 cores
RAM: 128GB
Disk-1: 800GB SAS (for OS, logs, WAL archive directory)
Disk-2: 330GB SSD (for PostgreSQL data directory, except WAL archive
and except pg_log)

The part of the configuration are as below:
checkpoint_segments = 64
checkpoint_completion_target = 0.9
default_statistics_target = 10
maintenance_work_mem = 1GB
effective_cache_size = 64GB
shared_buffers = 24GB
work_mem = 5MB
wal_buffers = 8MB
wal_keep_segments = 4096
wal_level = hot_standby
max_wal_senders = 10
archive_mode = on
archive_command = 'cp -i %p /home/postgres/archive/master/%f'


The WAL archive is at /home/postgres/archive/master/, right?
This directory consumes more than 750GB of Disk-1.
Each segment in /home/postgres/archive/master/ is 16MB.
There are currently 47443 files in this folder.

I want to limit the total size used by the WAL archive to around 200-400 GB.

Did I set these settings too high?
wal_keep_segments = 4096
checkpoint_segments = 64

What values should I set for them?



Regards,
Fattah




Re: [HACKERS] HELP!!! The WAL Archive is taking up all space

2015-12-08 Thread David G. Johnston
On Tue, Dec 8, 2015 at 3:33 AM, FattahRozzaq  wrote:

> Hi all,
>
> Please help...
>
> I have 1 master PostgreSQL and 1 standby PostgreSQL.
> Both servers has the same OS Linux Debian Wheezy, the same hardware.
>
> Both server hardware:
> CPU: 24 cores
> RAM: 128GB
> Disk-1: 800GB SAS (for OS, logs, WAL archive directory)
> Disk-2: 330GB SSD (for PostgreSQL data directory, except WAL archive
> and except pg_log)
>
> The part of the configuration are as below:
> checkpoint_segments = 64
> checkpoint_completion_target = 0.9
> default_statistics_target = 10
> maintenance_work_mem = 1GB
> effective_cache_size = 64GB
> shared_buffers = 24GB
> work_mem = 5MB
> wal_buffers = 8MB
> wal_keep_segments = 4096
> wal_level = hot_standby
> max_wal_senders = 10
> archive_mode = on
> archive_command = 'cp -i %p /home/postgres/archive/master/%f'
>
>
> The WAL archive is at /home/postgres/archive/master/, right?
> This directory consume more than 750GB of Disk-1.
> Each segment in the /home/postgres/archive/master/ is 16MB each
> There are currently 47443 files in this folder.
>
> I want to limit the total size use by WAL archive to around 200-400 GB.
>
> Do I set the segment too big?
> wal_keep_segments = 4096
> checkpoint_segments = 64
>
> What value should I set for it?
>

In that case you need to calculate how long it takes to accumulate that
much archive data, then perform a base backup roughly that often; at
that point, any WAL older than the point at which you began the backup
can be removed.

You cannot simply cap how large the WAL archive is, since removing an
arbitrary WAL file will pretty much make any attempt at restoration fail.
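
The sizing arithmetic behind this advice can be sketched as follows (a
rough sketch: the 16MB segment size is PostgreSQL's default, and the
daily-rate figure is an assumed example, not from this thread):

```python
SEGMENT_MB = 16  # default WAL segment size

def archive_size_gb(n_segments: int) -> float:
    """Total archive size occupied by n 16MB WAL segments, in GB."""
    return n_segments * SEGMENT_MB / 1024

def segments_for_budget(budget_gb: float) -> int:
    """How many archived segments fit within a disk budget."""
    return int(budget_gb * 1024 // SEGMENT_MB)

# The 47443 archived files reported in this thread come to ~741 GB,
# matching the ">750GB" observation:
print(round(archive_size_gb(47443)))

# A 300 GB budget holds 19200 segments.  If the server archived, say,
# 1000 segments/day (an assumed rate), a base backup roughly every
# 19 days would let all older WAL be removed within the budget:
print(segments_for_budget(300))
```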

David J.


Re: [HACKERS] HELP!!! The WAL Archive is taking up all space

2015-12-08 Thread David G. Johnston
On Tue, Dec 8, 2015 at 4:43 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Dec 8, 2015 at 3:33 AM, FattahRozzaq  wrote:
>
>> Hi all,
>>
>> Please help...
>>
>> I have 1 master PostgreSQL and 1 standby PostgreSQL.
>> Both servers has the same OS Linux Debian Wheezy, the same hardware.
>>
>> Both server hardware:
>> CPU: 24 cores
>> RAM: 128GB
>> Disk-1: 800GB SAS (for OS, logs, WAL archive directory)
>> Disk-2: 330GB SSD (for PostgreSQL data directory, except WAL archive
>> and except pg_log)
>>
>> The part of the configuration are as below:
>> checkpoint_segments = 64
>> checkpoint_completion_target = 0.9
>> default_statistics_target = 10
>> maintenance_work_mem = 1GB
>> effective_cache_size = 64GB
>> shared_buffers = 24GB
>> work_mem = 5MB
>> wal_buffers = 8MB
>> wal_keep_segments = 4096
>> wal_level = hot_standby
>> max_wal_senders = 10
>> archive_mode = on
>> archive_command = 'cp -i %p /home/postgres/archive/master/%f'
>>
>>
>> The WAL archive is at /home/postgres/archive/master/, right?
>> This directory consume more than 750GB of Disk-1.
>> Each segment in the /home/postgres/archive/master/ is 16MB each
>> There are currently 47443 files in this folder.
>>
>> I want to limit the total size use by WAL archive to around 200-400 GB.
>>
>> Do I set the segment too big?
>> wal_keep_segments = 4096
>> checkpoint_segments = 64
>>
>> What value should I set for it?
>>
>
> In which case you need to calculate how long it takes to accumulate that
> much archive data and then perform a base backup roughly that often after
> which point any WAL older that the point at which you began the backup can
> be removed.
>
> You cannot just limit how large the WAL archive is since removing any WAL
> file will pretty much make any attempt at restoration fail.
>
> David J.
>
>
While that advice is valid, I missed that you have a streaming replica
on the other end that should be removing files as they are loaded,
pending the retention setting. See Michael's response for better
information.

David J.


[HACKERS] Help on creating C function that reads int2[] (using "int2vector")

2015-12-01 Thread Rodrigo Hjort
Hello PG Hackers,


I created a custom C function with this signature:

CREATE FUNCTION calculate_hash(numbers int2[])
RETURNS int8
AS 'MODULE_PATHNAME', 'pg_calculate_hash'
LANGUAGE C
IMMUTABLE STRICT;


And here is the function source code (inspired by code I found in
src/backend/utils/adt/int.c):

PG_FUNCTION_INFO_V1(pg_calculate_hash);
Datum
pg_calculate_hash(PG_FUNCTION_ARGS)
{
  int2vector *int2Array = (int2vector *) PG_GETARG_POINTER(0);
  const int qtd = int2Array->dim1;

  elog(DEBUG1, "pg_calculate_hash(qtd=%d)", qtd);

  elog(DEBUG2, "  [ndim=%d, dataoffset=%d, elemtype=%d, dim1=%d,
lbound1=%d]",
int2Array->ndim, int2Array->dataoffset, int2Array->elemtype,
int2Array->dim1, int2Array->lbound1);

  [...]
}


In order to test it against a table structure, I executed these
instructions on psql:

db=# create table ss (s int2[]);
CREATE TABLE

db=# \d+ ss
Table "public.ss"
 Column |Type| Modifiers | Storage  | Stats target | Description
++---+--+--+-
 s  | smallint[] |   | extended |  |
Has OIDs: no

db=# insert into ss values ('[0:5]={58,17,15,36,59,54}');
INSERT 0 1

db=# select * from ss;
 s
---
 [0:5]={58,17,15,36,59,54}
(1 row)


Then, when calling the function with the int2[] column directly,
garbage values are read into the "int2vector" object:

db=# set client_min_messages to debug2;
SET

db=# select s, calculate_hash(s) from ss;
DEBUG:  pg_calculate_hash(qtd=0)
DEBUG:[ndim=0, dataoffset=5376, elemtype=1536, dim1=0,
lbound1=285227520]
 s | calculate_hash
---+---
 [0:5]={58,17,15,36,59,54} | 0
(1 row)


On the other hand, when I double-cast the int2[] column value, it works as
expected (reading the proper "int2vector" structure):

db=# select s, calculate_hash(s::varchar::int2[]) from ss;
DEBUG:  pg_calculate_hash(qtd=6)
DEBUG:[ndim=1, dataoffset=0, elemtype=21, dim1=6, lbound1=0]
 s |   calculate_hash
---+
 [0:5]={58,17,15,36,59,54} | 441352797842128896
(1 row)


Please, what is wrong with that function code?

Thanks in advance.

The whole project is on GitHub:
https://github.com/hjort/mega-sena/tree/master/src


Best regards,

-- 
Rodrigo Hjort
www.hjort.co
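
A likely explanation, offered here as a hedged sketch rather than a
definitive fix: an int2[] argument arrives as an ordinary varlena
ArrayType, not an int2vector, so fetching it with PG_GETARG_POINTER
skips detoasting and misreads the header — which would produce exactly
the garbage ndim/elemtype fields shown above. Using the array accessors
instead would look roughly like this (untested, assumes a standard
PostgreSQL extension build):

```c
#include "postgres.h"
#include "fmgr.h"
#include "catalog/pg_type.h"
#include "utils/array.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(pg_calculate_hash);

Datum
pg_calculate_hash(PG_FUNCTION_ARGS)
{
    /* PG_GETARG_ARRAYTYPE_P detoasts the argument if necessary */
    ArrayType  *a = PG_GETARG_ARRAYTYPE_P(0);
    int16      *values;
    int         nitems;

    /* Defensive checks (assumptions, not strictly required) */
    if (ARR_ELEMTYPE(a) != INT2OID ||
        ARR_NDIM(a) > 1 ||
        array_contains_nulls(a))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("expected a one-dimensional int2[] without nulls")));

    nitems = ArrayGetNItems(ARR_NDIM(a), ARR_DIMS(a));
    values = (int16 *) ARR_DATA_PTR(a);

    elog(DEBUG1, "pg_calculate_hash(qtd=%d)", nitems);

    /* ... compute and return the hash over values[0..nitems-1] ... */
    PG_RETURN_INT64(0);
}
```

With the array fetched this way, the SELECT should work without the
varchar double-cast, which presumably only "worked" because it forced a
fresh, untoasted array to be built.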


Re: [HACKERS] Help needed for PL/Ruby

2015-05-01 Thread Szymon Guz
On 29 April 2015 at 21:45, Szymon Guz mabew...@gmail.com wrote:

 Hi Devrim,
 I will take a look at this.

 regards,
 Szymon

 On 29 April 2015 at 18:24, Devrim Gündüz dev...@gunduz.org wrote:


 Hi,

 Anyone? :)

 Regards, Devrim

 On Wed, 2015-03-18 at 15:19 +0200, Devrim Gündüz wrote:
  Hi,
 
  Background info first: PL/Ruby was originally maintained by Guy Decoux,
  who passed away in 2008: https://www.ruby-forum.com/topic/166658 .
 After
  his death, Akinori MUSHA forked the project and maintained it until
  2010: https://github.com/knu/postgresql-plruby . Last release was on
 Jan
  2010, and recent distros started throwing build errors.
 
  I was having similar build issues while trying to RPMify PL/Ruby, and
  finally stepped up the plate and tried to fix those build issues by
  forking the project:
 
  https://github.com/devrimgunduz/postgresql-plruby/
 
  I mainly applied patches from Fedora, and also did some basic cleanup.
  However, I don't know Ruby and have limited C skills, so I need some
  help on these:
 
  * Complete the extension support: I committed initial infrastructure for
  it, but I don't think it is ready yet.
 
  * Fix the FIXME:
 
 https://github.com/devrimgunduz/postgresql-plruby/blob/master/src/plpl.c#L844
 
  * Documentation review and update: The recent example is against
  PostgreSQL 8.0. A recent Ruby example would be good, and also we need
  updates for creating the language.
 
  Any contributions are welcome. I recently released 0.5.5 that at least
  is ready for testing.
 
  I want to remind that I am not a Ruby guy, so this is really a community
  stuff for me.
 
  Thanks by now.
 
  Regards,


 --
 Devrim GÜNDÜZ
 Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
 PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
 Twitter: @DevrimGunduz , @DevrimGunduzTR





Hi Devrim,
I checked the code, and it seems there is a lot of work needed to make
plruby work with recent postgres.

I think we need to get rid of the code supporting postgres versions
older than 8.4, or even 9.0 (if there is any code for that), as that
version becomes obsolete in September this year, and I'm sure that I
alone will not be able to make all the changes by then.

Compiling the code gives me lots of warnings, which should also be
cleaned up.

The previous release was made a long time ago, so it is not up to date
with the latest postgres changes - a lot of work too.

So I will work on it; however, fixing and updating the code will take
some time.

Oh, and documentation of course.

At this moment I will do all the work on my github account.

regards,
Szymon


Re: [HACKERS] Help needed for PL/Ruby

2015-04-29 Thread Devrim Gündüz

Hi,

Anyone? :)

Regards, Devrim

On Wed, 2015-03-18 at 15:19 +0200, Devrim Gündüz wrote:
 Hi,
 
 Background info first: PL/Ruby was originally maintained by Guy Decoux,
 who passed away in 2008: https://www.ruby-forum.com/topic/166658 . After
 his death, Akinori MUSHA forked the project and maintained it until
 2010: https://github.com/knu/postgresql-plruby . Last release was on Jan
 2010, and recent distros started throwing build errors.
 
 I was having similar build issues while trying to RPMify PL/Ruby, and
 finally stepped up to the plate and tried to fix those build issues by
 forking the project:
 
 https://github.com/devrimgunduz/postgresql-plruby/
 
 I mainly applied patches from Fedora, and also did some basic cleanup.
 However, I don't know Ruby and have limited C skills, so I need some
 help on these:
 
 * Complete the extension support: I committed initial infrastructure for
 it, but I don't think it is ready yet.
 
 * Fix the FIXME:
 https://github.com/devrimgunduz/postgresql-plruby/blob/master/src/plpl.c#L844
 
 * Documentation review and update: The recent example is against
 PostgreSQL 8.0. A recent Ruby example would be good, and also we need
 updates for creating the language.
 
 Any contributions are welcome. I recently released 0.5.5 that at least
 is ready for testing.
 
 I want to remind that I am not a Ruby guy, so this is really a community
 stuff for me.
 
 Thanks by now.
 
 Regards,


-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR





[HACKERS] Help needed for PL/Ruby

2015-03-18 Thread Devrim Gündüz

Hi,

Background info first: PL/Ruby was originally maintained by Guy Decoux,
who passed away in 2008: https://www.ruby-forum.com/topic/166658 . After
his death, Akinori MUSHA forked the project and maintained it until
2010: https://github.com/knu/postgresql-plruby . Last release was on Jan
2010, and recent distros started throwing build errors.

I was having similar build issues while trying to RPMify PL/Ruby, and
finally stepped up to the plate and tried to fix those build issues by
forking the project:

https://github.com/devrimgunduz/postgresql-plruby/

I mainly applied patches from Fedora, and also did some basic cleanup.
However, I don't know Ruby and have limited C skills, so I need some
help on these:

* Complete the extension support: I committed initial infrastructure for
it, but I don't think it is ready yet.

* Fix the FIXME:
https://github.com/devrimgunduz/postgresql-plruby/blob/master/src/plpl.c#L844

* Documentation review and update: The recent example is against
PostgreSQL 8.0. A recent Ruby example would be good, and also we need
updates for creating the language.

Any contributions are welcome. I recently released 0.5.5 that at least
is ready for testing.

I want to remind that I am not a Ruby guy, so this is really a community
stuff for me.

Thanks for now.

Regards,
-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR





Re: [HACKERS] Help me! Why did the slave stop suddenly?

2015-02-13 Thread Robert Haas
On Thu, Feb 12, 2015 at 3:27 AM, hailong Li shuhujiaol...@gmail.com wrote:
 Hi, dear pgsql-hackers

Please have a look at
https://wiki.postgresql.org/wiki/Guide_to_reporting_problems

This is the wrong mailing list for this sort of question, and your
report is pretty unclear, so it's hard to tell what might have gone
wrong.  Please repost to a more appropriate mailing list taking note
of the above guidelines, and you may get a more helpful answer.
Alternatively, contact a commercial PostgreSQL support company and
contract with them for help, as again suggested in the above guide.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Help me! Why did the slave stop suddenly?

2015-02-12 Thread hailong Li
Hi, dear pgsql-hackers


*1. environment*

*DB Master*

$ cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m

$ uname -av
Linux l-x1.xx.cnx 3.14.29-3.centos6.x86_64 #1 SMP Tue Jan 20 17:48:32
CST 2015 x86_64 x86_64 x86_64 GNU/Linux

$ psql -U postgres
psql (9.3.5)
Type help for help.

postgres=# select version();

version
--
 PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-3), 64-bit
(1 row)

$ pg_config
BINDIR = /opt/pg93/bin
DOCDIR = /opt/pg93/share/doc/postgresql
HTMLDIR = /opt/pg93/share/doc/postgresql
INCLUDEDIR = /opt/pg93/include
PKGINCLUDEDIR = /opt/pg93/include/postgresql
INCLUDEDIR-SERVER = /opt/pg93/include/postgresql/server
LIBDIR = /opt/pg93/lib
PKGLIBDIR = /opt/pg93/lib/postgresql
LOCALEDIR = /opt/pg93/share/locale
MANDIR = /opt/pg93/share/man
SHAREDIR = /opt/pg93/share/postgresql
SYSCONFDIR = /opt/pg93/etc/postgresql
PGXS = /opt/pg93/lib/postgresql/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--prefix=/opt/pg93' '--with-perl' '--with-libxml'
'--with-libxslt' '--with-ossp-uuid' 'CFLAGS= -march=core2 -O2 '
CC = gcc
CPPFLAGS = -D_GNU_SOURCE -I/usr/include/libxml2
CFLAGS = -march=core2 -O2  -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv
CFLAGS_SL = -fpic
LDFLAGS = -L../../../src/common -Wl,--as-needed
-Wl,-rpath,'/opt/pg93/lib',--enable-new-dtags
LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgport -lpgcommon -lxslt -lxml2 -lz -lreadline -lcrypt -ldl -lm
VERSION = PostgreSQL 9.3.5





*DB Slave*

$ cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m

$ uname -av
Linux l-2.xx.cnx 3.14.31-3.centos6.x86_64 #1 SMP Mon Feb 2 15:26:04 CST
2015 x86_64 x86_64 x86_64 GNU/Linux

$ psql -U postgres
psql (9.3.5)
Type help for help.

postgres=# select version();

version
--
 PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-3), 64-bit
(1 row)

postgres=# show log_line_prefix ;
   log_line_prefix
--
 [%u %d %a %h %m %p %c %l %x]
(1 row)


$ pg_config
BINDIR = /opt/pg93/bin
DOCDIR = /opt/pg93/share/doc/postgresql
HTMLDIR = /opt/pg93/share/doc/postgresql
INCLUDEDIR = /opt/pg93/include
PKGINCLUDEDIR = /opt/pg93/include/postgresql
INCLUDEDIR-SERVER = /opt/pg93/include/postgresql/server
LIBDIR = /opt/pg93/lib
PKGLIBDIR = /opt/pg93/lib/postgresql
LOCALEDIR = /opt/pg93/share/locale
MANDIR = /opt/pg93/share/man
SHAREDIR = /opt/pg93/share/postgresql
SYSCONFDIR = /opt/pg93/etc/postgresql
PGXS = /opt/pg93/lib/postgresql/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--prefix=/opt/pg93' '--with-perl' '--with-libxml'
'--with-libxslt' '--with-ossp-uuid' 'CFLAGS= -march=core2 -O2 '
CC = gcc
CPPFLAGS = -D_GNU_SOURCE -I/usr/include/libxml2
CFLAGS = -march=core2 -O2  -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv
CFLAGS_SL = -fpic
LDFLAGS = -L../../../src/common -Wl,--as-needed
-Wl,-rpath,'/opt/pg93/lib',--enable-new-dtags
LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgport -lpgcommon -lxslt -lxml2 -lz -lreadline -lcrypt -ldl -lm
VERSION = PostgreSQL 9.3.5



*2. the DB log in the Slave's log_directory*

[2015-02-05 15:38:51.406 CST 2328 54d08abc.918 6 0]WARNING:  will not overwrite a used ItemId
[2015-02-05 15:38:51.406 CST 2328 54d08abc.918 7 0]CONTEXT:  xlog
redo insert: rel 38171461/16384/57220350; tid 1778398/9
[2015-02-05 15:38:51.406 CST 2328 54d08abc.918 8 0]PANIC:
heap_insert_redo: failed to add tuple
[2015-02-05 15:38:51.406 CST 2328 54d08abc.918 9 0]CONTEXT:  xlog
redo insert: rel 38171461/16384/57220350; tid 1778398/9
[2015-02-05 15:38:51.765 CST 2320 54d08abb.910 6 0]LOG:  startup
process (PID 2328) was terminated by signal 6: Aborted
[2015-02-05 15:38:51.765 CST 2320 54d08abb.910 7 0]LOG:
terminating any other active server processes
[DBusesr DBname [unknown] 192.168.xxx.x 2015-02-05 15:38:51.765 CST
61450 54d31d48.f00a 3 0]WARNING:  terminating connection because of
crash of another server process
[DBusesr DBname [unknown] 192.168.xxx.x 2015-02-05 15:38:51.765 CST
61450 54d31d48.f00a 4 0]DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
[DBusesr DBname [unknown] 192.168.xxx.x 2015-02-05 15:38:51.765 CST
61450 54d31d48.f00a 5 0]HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
[DBusesr DBname [unknown] 192.168.xxx.x 2015-02-05 15:38:51.765 CST
51208 54d315b6.c808 7 0]WARNING:  terminating connection because of
crash of another server process
[DBusesr 

Re: [HACKERS] Help to startup

2014-09-22 Thread Alvaro Herrera
Michael Paquier wrote:
 On Sun, Sep 21, 2014 at 4:52 PM, Craig Ringer cr...@2ndquadrant.com wrote:
  On 09/17/2014 01:51 AM, Tapan Halani wrote:
  Hello everyone..i am new to PostgreSQL project. I had prior experience
  with sql+ , with oracle 11g database server. Kindly help me grasp more
  about the project.
 
  Since you're asking on pgsql-hackers, you're presumably interested in
  getting involved in developing extensions or feature work?
 
  See:
 
  http://www.postgresql.org/developer/
 There is a group picture of 2006 :)

Maybe we could turn that into a rotating display of each year's
developers meeting pictures or something like that.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Help to startup

2014-09-21 Thread Craig Ringer
On 09/17/2014 01:51 AM, Tapan Halani wrote:
 Hello everyone..i am new to PostgreSQL project. I had prior experience
 with sql+ , with oracle 11g database server. Kindly help me grasp more
 about the project.

Since you're asking on pgsql-hackers, you're presumably interested in
getting involved in developing extensions or feature work?

See:

http://www.postgresql.org/developer/

If that's *not* what you're trying to do, then perhaps you want the
tutorial, main documentation, and pgsql-general mailing list?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Help to startup

2014-09-21 Thread Michael Paquier
On Sun, Sep 21, 2014 at 4:52 PM, Craig Ringer cr...@2ndquadrant.com wrote:
 On 09/17/2014 01:51 AM, Tapan Halani wrote:
 Hello everyone..i am new to PostgreSQL project. I had prior experience
 with sql+ , with oracle 11g database server. Kindly help me grasp more
 about the project.

 Since you're asking on pgsql-hackers, you're presumably interested in
 getting involved in developing extensions or feature work?

 See:

 http://www.postgresql.org/developer/
There is a group picture of 2006 :)
-- 
Michael




[HACKERS] Help to startup

2014-09-20 Thread Tapan Halani
Hello everyone. I am new to the PostgreSQL project. I have prior experience with
SQL*Plus and the Oracle 11g database server. Kindly help me grasp more about the
project.

Thank you,
Tapan Halani


Re: [HACKERS] Help to startup

2014-09-20 Thread Andrew Dunstan


On 09/16/2014 01:51 PM, Tapan Halani wrote:


Hello everyone..i am new to PostgreSQL project. I had prior experience 
with sql+ , with oracle 11g database server. Kindly help me grasp more 
about the project.






The first thing you need to do is learn to ask your question in the 
right forum. pgsql-hackers is not that forum. pgsql-general is, but you 
should ask specific questions there, not just completely general and 
essentially unanswerable questions such as this. Before you ask 
questions there you should read the excellent PostgreSQL documentation 
at http://www.postgresql.org/docs/current/static/index.html


cheers

andrew





[HACKERS] [help] Is it possible to support remote COPY operation on PG?

2012-10-24 Thread Xiong He
Is it possible to copy some table data from a remote client to the PG database
server directly, without uploading the data file to the server side in advance?


---
Thanks & Regards,
Xiong He

Re: [HACKERS] [help] Is it possible to support remote COPY operation on PG?

2012-10-24 Thread Dann Corbit
You can use the libpq API:
http://www.postgresql.org/docs/9.2/interactive/libpq-copy.html

The Postgresql JDBC driver exposes COPY, IIRC.


From: pgsql-hackers-ow...@postgresql.org 
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Xiong He
Sent: Tuesday, October 23, 2012 11:55 PM
To: PostgreSQL-development
Subject: [HACKERS] [help] Is it possible to support remote COPY operation on PG?

Is it possible to copy some table data from remote client to the PG database 
server directly without upload the data file to the server side in advance?
---
Thanks & Regards,
Xiong He



Re: [HACKERS] [help] Is it possible to support remote COPY operation on PG?

2012-10-24 Thread Michael Paquier
On Wed, Oct 24, 2012 at 3:54 PM, Xiong He iih...@qq.com wrote:

 Is it possible to copy some table data from remote client to the PG
 database server directly without upload the data file to the server side in
 advance?

With a psql client you can use the command \copy to perform that.
-- 
Michael Paquier
http://michael.otacoo.com
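
For illustration, the client-side copy being discussed looks like this
(hypothetical database, table, and file names; \copy reads the file on
the client, so no prior upload to the server is needed):

```shell
# COPY runs on the server, but \copy streams the client-side file to it
psql -d mydb -c "\copy mytable FROM 'data.csv' WITH (FORMAT csv)"
```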


Re: [HACKERS] Help me develop new commit_delay advice

2012-08-02 Thread Amit Kapila
 From: Peter Geoghegan [mailto:pe...@2ndquadrant.com] 
 Sent: Wednesday, August 01, 2012 8:49 PM

On 1 August 2012 15:14, Amit Kapila amit.kap...@huawei.com wrote:
 I shall look into this aspect also(setting commit_delay based on raw
sync).
 You also suggest if you want to run the test with different
configuration.

 Well, I was specifically interested in testing if half of raw sync
 time was a widely useful setting, across a variety of different,
 though representative I/O subsystems. Unfortunately, without some
 context about raw sync speed to go along with your numbers, I cannot
 advance or disprove that idea.

Raw sync speed data
--
2 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync                 n/a
fdatasync            165.506 ops/sec
fsync                166.647 ops/sec
fsync_writethrough            n/a
open_sync            164.654 ops/sec

165.506 8kB operations can be performed in one second,
so one 8kB operation takes about 6.042 msec (1/165.506 s).
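
Peter's half-of-raw-sync heuristic applied to these numbers can be
sketched as follows (commit_delay is expressed in microseconds; this is
an illustration of the heuristic under discussion, not an official
formula):

```python
def suggested_commit_delay(ops_per_sec: float) -> int:
    """Half of the raw sync time, in microseconds (the heuristic
    discussed in this thread)."""
    sync_time_us = 1_000_000 / ops_per_sec  # time for one 8kB sync
    return round(sync_time_us / 2)

# fdatasync at 165.506 ops/sec: one sync takes ~6042 us, so the
# heuristic suggests a commit_delay of roughly 3000
print(suggested_commit_delay(165.506))
```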

 It would also have been nice to see a baseline number of 0 too, to get
 an idea of how effective commit_delay may now be. However, that's
 secondary.

In the data sent yesterday commit_delay=0 was there.


With Regards,
Amit Kapila.






Re: [HACKERS] Help me develop new commit_delay advice

2012-08-02 Thread Josh Berkus
Peter,

For some reason I didn't receive the beginning of this thread.  Can you
resend it to me, or (better) post it to the pgsql-performance mailing list?

I have a linux system where I can test both on regular disk and on SSD.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] Help me develop new commit_delay advice

2012-08-02 Thread Peter Geoghegan
On 29 July 2012 16:39, Peter Geoghegan pe...@2ndquadrant.com wrote:
 Many of you will be aware that the behaviour of commit_delay was
 recently changed. Now, the delay only occurs within the group commit
 leader backend, and not within each and every backend committing a
 transaction:

I've moved this to the pgsql-performance list. Please continue the
discussion there.

-- 
Peter Geoghegan   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] Help me develop new commit_delay advice

2012-08-01 Thread Amit Kapila
 From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Peter Geoghegan
 Sent: Sunday, July 29, 2012 9:09 PM


 I made what may turn out to be a useful observation during the
 development of the patch, which was that for both the tpc-b.sql and
 insert.sql pgbench-tools scripts, a commit_delay of half of my
 wal_sync_method's reported raw sync speed looked optimal. I use Linux,
 so my wal_sync_method happened to have been fdatasync. I measured this
 using pg_test_fsync.

I have done some basic tests for the commit_delay parameter.
OS version: suse linux 10.3
postgresql version: 9.3 dev on x86-64, compiled by gcc (GCC) 4.1.2 20070115
Machine details: 8 core cpu, 24GB RAM.
Testcase: pgbench tpc-b test.

Before running the benchmark suite, the buffers are loaded using the
pg_prewarm utility.

Test Results are attached with this mail.
Run1,Run2,Run3 means the same test has ran 3 times.


 It would be useful, for a start, if I had numbers for a battery-backed
 write cache. I don't have access to one right now though, nor do I
 have access to any more interesting hardware, which is one reason why
 I'm asking for help with this.

 I like to run sync prior to running pg_test_fsync, just in case.

 [peter@peterlaptop pg_test_fsync]$ sync

I then interpret the following output:

 [peter@peterlaptop pg_test_fsync]$ pg_test_fsync
 2 seconds per test
 O_DIRECT supported on this platform for open_datasync and open_sync.

 Compare file sync methods using one 8kB write:
 (in wal_sync_method preference order, except fdatasync
 is Linux's default)
 open_datasync 112.940 ops/sec
 fdatasync 114.035 ops/sec
 fsync 21.291 ops/sec
 *** SNIP ***

I shall also look into this aspect (setting commit_delay based on raw sync).
Please also suggest if you want the test run with a different configuration.

With Regards,
Amit Kapila.















 
 
 
 
 
 
Machine details: CPU: 8 core, RAM: 24GB, OS: suse linux 10.3
Server configuration: sync commit: on, shared buffers: 1GB
pgbench tpc-b benchmark suite: threads: 8, clients: 16, scale: 75, fill factor: 100

pgbench   delay = 0          delay = 3000       delay = 4000       delay = 5000
          TPS   Total ops    TPS   Total ops    TPS   Total ops    TPS   Total ops
Run1      968   583060       973   584130       935   561516       916   550043
Run2      971   583035       963   578560       971   585143       921   556471
Run3      995   597368       978   590085       998   599371       872   523890
Avg       978   587821       971.3 584258.3     968   582010       903   543468













Re: [HACKERS] Help me develop new commit_delay advice

2012-08-01 Thread Peter Geoghegan
On 1 August 2012 15:14, Amit Kapila amit.kap...@huawei.com wrote:
 I shall look into this aspect also(setting commit_delay based on raw sync).
 You also suggest if you want to run the test with different configuration.

Well, I was specifically interested in testing if half of raw sync
time was a widely useful setting, across a variety of different,
though representative I/O subsystems. Unfortunately, without some
context about raw sync speed to go along with your numbers, I cannot
advance or disprove that idea.

It would also have been nice to see a baseline number of 0 too, to get
an idea of how effective commit_delay may now be. However, that's
secondary.

-- 
Peter Geoghegan   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



[HACKERS] Help me develop new commit_delay advice

2012-07-31 Thread Peter Geoghegan
Many of you will be aware that the behaviour of commit_delay was
recently changed. Now, the delay only occurs within the group commit
leader backend, and not within each and every backend committing a
transaction:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f11e8be3e812cdbbc139c1b4e49141378b118dee

For those of you that didn't follow this development, I should point
out that I wrote a blogpost that described the idea, which will serve
as a useful summary:

http://pgeoghegan.blogspot.com/2012/06/towards-14000-write-transactions-on-my.html

I made what may turn out to be a useful observation during the
development of the patch, which was that for both the tpc-b.sql and
insert.sql pgbench-tools scripts, a commit_delay of half of my
wal_sync_method's reported raw sync speed looked optimal. I use Linux,
so my wal_sync_method happened to have been fdatasync. I measured this
using pg_test_fsync.

The devel docs still say of commit_delay and commit_siblings: "Good
values for these parameters are not yet clear; experimentation is
encouraged". This has been the case since Postgres 7.1 (i.e. it has
never been clear what good values were - the folk wisdom was actually
that commit_delay should always be set to 0). I hope to be able to
formulate some folk wisdom about setting commit_delay from 9.3 on,
that may go on to be accepted as an official recommendation within the
docs.

I am rather curious as to what experimentation shows optimal values
for commit_delay to be for a representative cross-section of hardware.
In particular, I'd like to see if setting commit_delay to half of raw
sync time appears to be optimal for both insert.sql and tpc-b.sql
workloads across different types of hardware with different sync
times. Now, it may be sort of questionable to take those workloads as
general proxies for performance, not least since they will literally
give Postgres as many *completely uniform* transactions as it can
handle. However, it is hard to think of another, better general proxy
for performance that is likely to be accepted as such, and will allow
us to reason about setting commit_delay.

While I am not completely confident that we can formulate a widely
useful, simple piece of advice, I am encouraged by the fact that a
commit_delay of 4,000 worked very well for both tpc-b.sql and
insert.sql workloads on my laptop, beating out settings of 3,000 and
5,000 on each benchmark. I am also encouraged by the fact that in some
cases, including both the insert.sql and tpc-b.sql cases that I've
already described elsewhere, there is actually no downside to setting
commit_delay - transaction throughput naturally improves, but
transaction latency is actually improved a bit too (or at least the
average and worst-cases). This is presumably due to the amelioration
of resource contention (from greater commit batching) more than
compensating for the obvious downside of adding a delay.

It would be useful, for a start, if I had numbers for a battery-backed
write cache. I don't have access to one right now though, nor do I
have access to any more interesting hardware, which is one reason why
I'm asking for help with this.

I like to run sync prior to running pg_test_fsync, just in case.

[peter@peterlaptop pg_test_fsync]$ sync

I then interpret the following output:

[peter@peterlaptop pg_test_fsync]$ pg_test_fsync
2 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 112.940 ops/sec
fdatasync 114.035 ops/sec
fsync 21.291 ops/sec
*** SNIP ***

So if I can perform 114.035 8KiB sync operations per second, that's an
average of about 1 per 8.77 milliseconds, or 8770 microseconds to put
it in the units that commit_delay speaks. It is my hope that we will
find that when this number is halved, we will arrive at a figure that
is worth recommending as a generally useful setting for commit_delay for
the system. I guess I could gain some additional insight by simply
changing my wal_sync_method, but I'd find it more interesting to look
at organic setups with faster (not slower) sync times than my system's
fdatasync. For those who are able to help me here, I'd like to see
pgbench-tools workloads for both tpc-b.sql and insert.sql with
incrementing values of commit_delay (increments of, say, 1000
microseconds, perhaps with less granularity where it isn't needed),
from 0 to $(1.5 times raw sync speed) microseconds.
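The arithmetic above is simple enough to sketch as a small helper. This is
not part of any PostgreSQL tooling - just an illustration of the proposed
rule of thumb, taking the ops/sec figure reported by pg_test_fsync for your
wal_sync_method:

```python
def suggest_commit_delay(sync_ops_per_sec: float) -> int:
    """Suggest a commit_delay (in microseconds) as half the raw sync time,
    per the rule of thumb proposed above."""
    # One sync operation takes 1,000,000 / ops_per_sec microseconds.
    raw_sync_us = 1_000_000 / sync_ops_per_sec
    return round(raw_sync_us / 2)

# fdatasync at 114.035 ops/sec -> ~8770 us per sync -> ~4385 us suggested
print(suggest_commit_delay(114.035))
```

For the laptop numbers above this lands near the commit_delay of 4,000 that
tested well, which is what makes the half-of-raw-sync-time hypothesis worth
checking on other hardware.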

Thanks
-- 
Peter Geoghegan   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



[HACKERS] Help with pg_locks query

2011-09-05 Thread Bruce Momjian
I am writing a talk about the lock manager for PG Open and I would like
suggestions on how to improve a query in my talk.  The query creates a
lockinfo_hierarchy view of a recursive query on other views.  The output
shows the locks held and the locks being waited for:

\! psql -e -c 'SELECT * FROM lockinfo_hierarchy;' | sed 's/^/\t/g'
SELECT * FROM lockinfo_hierarchy;
 ?column? |  pid  |  vxid  | granted | xid_lock |   lock_type   | relname  | page | tuple
----------+-------+--------+---------+----------+---------------+----------+------+-------
 1        | 24860 | 2/3106 | t       | 828      | transactionid |          |      |
 1        | 24864 | 3/42   | t       | 829      | transactionid |          |      |
 1        | 24868 | 4/78   | t       | 830      | transactionid |          |      |
 1        | 24872 | 5/22   | t       | 831      | transactionid |          |      |
 2        | 24864 | 3/42   | f       | 828      | transactionid |          |      |
 3        | 24864 | 3/42   | t       |          | tuple         | lockdemo |    0 |     1
 4        | 24868 | 4/78   | f       |          | tuple         | lockdemo |    0 |     1
 4        | 24872 | 5/22   | f       |          | tuple         | lockdemo |    0 |     1
(8 rows)

The SQL needed to reproduce this output is attached, and must be run
in your personal database, e.g. postgres.

What this output shows are four transactions holding locks on their own
xids, transaction 3/42 waiting for 828 to complete, and 3/42 holding a
row lock that 4/78 and 5/22 are waiting on.

When there are multiple waiters, one transaction waits on the real xid
and the others sleep waiting to be woken up later.

Is there any better way to show this?  (The first column is just there
for debugging so you can see what part of the query generated the row.)

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
-- cannot be a temporary view because other sessions must see it
DROP VIEW IF EXISTS lockview CASCADE;

CREATE VIEW lockview AS
SELECT  pid, virtualtransaction AS vxid, locktype AS lock_type,
        mode AS lock_mode, granted,
        CASE
            WHEN virtualxid IS NOT NULL AND transactionid IS NOT NULL
            THEN    virtualxid || ' ' || transactionid
            WHEN virtualxid::text IS NOT NULL
            THEN    virtualxid
            ELSE    transactionid::text
        END AS xid_lock, relname,
        page, tuple, classid, objid, objsubid
FROM    pg_locks LEFT OUTER JOIN pg_class ON (pg_locks.relation = pg_class.oid)
WHERE   -- do not show our view's locks
pid != pg_backend_pid() AND
-- no need to show self-vxid locks
virtualtransaction IS DISTINCT FROM virtualxid
-- granted is ordered earlier
ORDER BY 1, 2, 5 DESC, 6, 3, 4, 7;

DROP VIEW IF EXISTS lockview1 CASCADE;

CREATE VIEW lockview1 AS
SELECT  pid, vxid, lock_type, lock_mode, granted, xid_lock, relname
FROM    lockview
-- granted is ordered earlier
ORDER BY 1, 2, 5 DESC, 6, 3, 4, 7;

DROP VIEW IF EXISTS lockview2 CASCADE;

CREATE VIEW lockview2 AS
SELECT  pid, vxid, lock_type, page, tuple, classid, objid, objsubid
FROM    lockview
-- granted is first
-- add non-display columns to match ordering of lockview
ORDER BY 1, 2, granted DESC, vxid, xid_lock::text, 3, 4, 5, 6, 7, 8;

DROP TABLE IF EXISTS lockdemo;
CREATE TABLE lockdemo (col int);
INSERT INTO lockdemo VALUES (1);
-- do two UPDATEs to cause a wait

CREATE VIEW lockinfo_hierarchy AS
WITH RECURSIVE lockinfo1 AS (
SELECT '1', pid, vxid, granted, xid_lock, lock_type, relname, 
page, tuple
FROM lockview
WHERE xid_lock IS NOT NULL AND
  relname IS NULL AND
  granted
UNION ALL
SELECT '2', lockview.pid, lockview.vxid, lockview.granted, 
lockview.xid_lock, 
lockview.lock_type, lockview.relname, lockview.page, 
lockview.tuple
FROM lockinfo1 JOIN lockview ON (lockinfo1.xid_lock = 
lockview.xid_lock)
WHERE lockview.xid_lock IS NOT NULL AND
  lockview.relname IS NULL AND
  NOT lockview.granted AND
  lockinfo1.granted),
lockinfo2 AS (
SELECT '3', pid, vxid, granted, xid_lock, lock_type, relname, 
page, tuple
FROM lockview
WHERE lock_type = 'tuple' AND
  granted
UNION ALL
SELECT '4', lockview.pid, lockview.vxid, lockview.granted, 
lockview.xid_lock,
lockview.lock_type, lockview.relname, lockview.page, 
lockview.tuple
FROM lockinfo2 JOIN lockview ON (

Re: [HACKERS] help with plug-in function for additional (partition/shard) visibility checks

2011-09-02 Thread PostgreSQL - Hans-Jürgen Schönig
hello …

i have been thinking about this issue for quite a while ...
given your idea i am not sure how this can work at all.

consider:
begin;
insert 1
insert 2
commit

assume this ends up in the same node,
now you split it into two …
1 and 2 will have exactly the same visibility to any transaction.
i am not sure how you can get this right without looking at the data.

alternative idea: what if the proxy would add / generate a filter by looking at 
the data?
a quick idea would be that once you split you add a simple directive such as 
FILTER GENERATOR $1 or so to the PL/proxy code.
it would then behind the scene arrange the filter passed on.
what do you think?

regards,

hans



On Sep 1, 2011, at 10:13 AM, Hannu Krosing wrote:

 Hallow hackers
 
 I have the following problem to solve and would like to get advice on
 the best way to do it.
 
 The problem:
 
 When growing a pl/proxy based database cluster, one of the main
 operations is splitting a partition. The standard flow is as follows:
 
 1) make a copy of the partitions table(s) to another database
 2) reconfigure pl/proxy to use 2 partitions instead of one
 
 The easy part is making a copy of all or half of the table to another
 database. The hard part is fast deletion (i mean milliseconds,
 comparable to TRUNCATE) the data that should not be in a partition (so
 that RUN ON ALL functions will continue to return right results).
 
 It would be relatively easy, if we still had the RULES for select
 available for plain tables, but even then the eventual cleanup would
 usually mean at least 3 passes of disk writes (set xmax, write deleted
 flag, vacuum and remove)
 
 What I would like to have is possibility for additional visibility
 checks, which would run some simple C function over tuple data (usually
 hash(fieldval) + and + or ) and return visibility (is in this partition)
 as a result. It would be best if this is run at so low level that also
 vacuum would use it and can clean up the foreign partition data in one
 pass, without doing the delete dance first.
 
 So finally the QUESTION :
 
 where in code would be the best place to check for this so that
 
 1) both regular queries and VACUUM see it
 2) the tuple data (and not only system fields or just xmin/xmax) would
 be available for the function to use
 
 
 -- 
 ---
 Hannu Krosing
 PostgreSQL Unlimited Scalability and Performance Consultant
 2ndQuadrant Nordic
 PG Admin Book: http://www.2ndQuadrant.com/books/
 
 
 -- 
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers
 

--
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de




Re: [HACKERS] help with plug-in function for additional (partition/shard) visibility checks

2011-09-02 Thread Hannu Krosing
On Fri, 2011-09-02 at 14:01 +0200, PostgreSQL - Hans-Jürgen Schönig
wrote:
 hello …
 
 i have been thinking about this issue for quite a while ...
 given your idea i am not sure how this can work at all.
 
 consider:
   begin;
   insert 1
   insert 2
   commit
 
 assume this ends up in the same node,
 now you split it into two …
 1 and 2 will have exactly the same visibility to and transaction.
 i am not sure how you can get this right without looking at the data.

It has to consider the data when determining visibility; that's the
whole point of the plug-in.

The idea is that each row belongs to a certain partition, as
determined by some function over its fields. Most often this function
is a hash of the primary key, OR-ed with a bitmap representing cluster
size and AND-ed with a bitmap for the partition(s) stored in this
database.

When you split the partition, some rows don't belong in the old
partition database anymore (and if you did a full copy, the other
half don't belong to the new one), so they should be handled as
invisible / deleted. As this can only be done by looking at the tuple
data, it needs an additional visibility function. And as this is only
needed for partitioned databases, it makes sense to implement it as a
plugin, so it would not waste cycles on non-partitioned databases.

 alternative idea: what if the proxy would add / generate a filter by 
 looking at the data?
 a quick idea would be that once you split you add a simple directive 
 such as FILTER GENERATOR $1 or so to the PL/proxy code.
 it would then behind the scene arrange the filter passed on.
 what do you think?

Hmm. I'm not sure I understand what you are trying to say. Can you
elaborate a little ?

 
   regards,
 
   hans
 
 
 
 On Sep 1, 2011, at 10:13 AM, Hannu Krosing wrote:
 
  Hallow hackers
  
  I have the following problem to solve and would like to get advice on
  the best way to do it.
  
  The problem:
  
  When growing a pl/proxy based database cluster, one of the main
  operations is splitting a partition. The standard flow is as follows:
  
  1) make a copy of the partitions table(s) to another database
  2) reconfigure pl/proxy to use 2 partitions instead of one
  
  The easy part is making a copy of all or half of the table to another
  database. The hard part is fast deletion (i mean milliseconds,
  comparable to TRUNCATE) the data that should not be in a partition (so
  that RUN ON ALL functions will continue to return right results).
  
  It would be relatively easy, if we still had the RULES for select
  available for plain tables, but even then the eventual cleanup would
  usually mean at least 3 passes of disk writes (set xmax, write deleted
  flag, vacuum and remove)
  
  What I would like to have is possibility for additional visibility
  checks, which would run some simple C function over tuple data (usually
  hash(fieldval) + and + or ) and return visibility (is in this partition)
  as a result. It would be best if this is run at so low level that also
  vacuum would use it and can clean up the foreign partition data in one
  pass, without doing the delete dance first.
  
  So finally the QUESTION :
  
  where in code would be the best place to check for this so that
  
  1) both regular queries and VACUUM see it
  2) the tuple data (and not only system fields or just xmin/xmax) would
  be available for the function to use
  
  
  -- 
  ---
  Hannu Krosing
  PostgreSQL Unlimited Scalability and Performance Consultant
  2ndQuadrant Nordic
  PG Admin Book: http://www.2ndQuadrant.com/books/
  
  
  -- 
  Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-hackers
  
 
 --
 Cybertec Schönig  Schönig GmbH
 Gröhrmühlgasse 26
 A-2700 Wiener Neustadt, Austria
 Web: http://www.postgresql-support.de
 
 





Re: [HACKERS] help with plug-in function for additional (partition/shard) visibility checks

2011-09-02 Thread PostgreSQL - Hans-Jürgen Schönig
hello …

the goal of the entire proxy thing is to make the right query go to the right 
node / nodes.
we determine this by using a partitioning function and so on …
currently PL/proxy has only a handful of commands - one is RUN ON … which tells 
us where to put things.
assume you issue a select … some select will fall out on the target node. 
to restrict the data coming from the node you could add an additional 
constraint on the way …

say:
SELECT * FROM proxy_table WHERE a = 20;

what you want to reach the node after a split is …

SELECT * FROM proxy_table WHERE a = 20 AND col = <filter the wrong half
away>

my idea is to add an additional command to the PL/proxy command set.
it should call a function generating this additional filter.
maybe somehow like that …

RUN ON hashtext($1) 
-- this one already knows about the increased cluster
GENERATE FILTER my_create_the_bloody_filter_func($1)-- this one 
would massage the query going to the node.

it would actually open the door for a lot of additional trickery.
the function would tell the proxy what to append - and: this what would be 
under your full control.

what do you think?
i got to think about it futher but i can envision that this could be feasible 
...

hans


On Sep 2, 2011, at 2:36 PM, Hannu Krosing wrote:

 On Fri, 2011-09-02 at 14:01 +0200, PostgreSQL - Hans-Jürgen Schönig
 wrote:
 hello …
 
 i have been thinking about this issue for quite a while ...
 given your idea i am not sure how this can work at all.
 
 consider:
  begin;
  insert 1
  insert 2
  commit
 
 assume this ends up in the same node,
 now you split it into two …
 1 and 2 will have exactly the same visibility to and transaction.
 i am not sure how you can get this right without looking at the data.
 
 It has to consider the data when determining visibility, that's the
 whole point of the plug-in .
 
 The idea is, that each row belongs to a certain partition, as
 determined by some function over it's fields. Most often this function
 is hash of primary key OR-ed by a bitmap representing cluster size and
 AND-ed by bitmap for partition(s) stored in this database. 
 
 when you split the parition, then some row's don't belong in the old
 partition database anymore (and if you did a full copy, then the other
 half dont belong to the new one), so they should be handled as
 invisible / deleted. As this can be only done by looking at the tuple
 data, this needs an additional visibility function. And as this is only
 needed for partitioned databases, it makes sense to implement it as a
 plogin, so it would not wast cycles on non-partitioned databases
 
 alternative idea: what if the proxy would add / generate a filter by 
 looking at the data?
 a quick idea would be that once you split you add a simple directive 
 such as FILTER GENERATOR $1 or so to the PL/proxy code.
 it would then behind the scene arrange the filter passed on.
 what do you think?
 
 Hmm. I'm not sure I understand what you are trying to say. Can you
 elaborate a little ?
 
 
  regards,
 
  hans
 
 
 
 On Sep 1, 2011, at 10:13 AM, Hannu Krosing wrote:
 
 Hallow hackers
 
 I have the following problem to solve and would like to get advice on
 the best way to do it.
 
 The problem:
 
 When growing a pl/proxy based database cluster, one of the main
 operations is splitting a partition. The standard flow is as follows:
 
 1) make a copy of the partitions table(s) to another database
 2) reconfigure pl/proxy to use 2 partitions instead of one
 
 The easy part is making a copy of all or half of the table to another
 database. The hard part is fast deletion (i mean milliseconds,
 comparable to TRUNCATE) the data that should not be in a partition (so
 that RUN ON ALL functions will continue to return right results).
 
 It would be relatively easy, if we still had the RULES for select
 available for plain tables, but even then the eventual cleanup would
 usually mean at least 3 passes of disk writes (set xmax, write deleted
 flag, vacuum and remove)
 
 What I would like to have is possibility for additional visibility
 checks, which would run some simple C function over tuple data (usually
 hash(fieldval) + and + or ) and return visibility (is in this partition)
 as a result. It would be best if this is run at so low level that also
 vacuum would use it and can clean up the foreign partition data in one
 pass, without doing the delete dance first.
 
 So finally the QUESTION :
 
 where in code would be the best place to check for this so that
 
 1) both regular queries and VACUUM see it
 2) the tuple data (and not only system fields or just xmin/xmax) would
 be available for the function to use
 
 
 -- 
 ---
 Hannu Krosing
 PostgreSQL Unlimited Scalability and Performance Consultant
 2ndQuadrant Nordic
 PG Admin Book: http://www.2ndQuadrant.com/books/
 
 
 -- 
 Sent via 

Re: [HACKERS] help with plug-in function for additional (partition/shard) visibility checks

2011-09-02 Thread Hannu Krosing
On Fri, 2011-09-02 at 14:51 +0200, PostgreSQL - Hans-Jürgen Schönig
wrote:
 hello …
 
 the goal of the entire proxy thing is to make the right query go to the right 
 node / nodes.
 we determine this by using a partitioning function and so on …
 currently PL/proxy has only a handful of commands - one is RUN ON … which 
 tells us where to put things.
 assume you issue a select … some select will fall out on the target node. 
 to restrict the data coming from the node you could add an additional 
 constraint on the way …
 
 say:
   SELECT * FROM proxy_table WHERE a = 20;
 
 what you want to reach the node after a split is …
 
   SELECT * FROM proxy_table WHERE a = 20 AND col = filter the wrong half 
 away
 
 my idea is to add an additional command to the PL/proxy command set.
 it should call a function generating this additional filter.
 maybe somehow like that …
 
   RUN ON hashtext($1) 
 -- this one already knows about the increased cluster
   GENERATE FILTER my_create_the_bloody_filter_func($1)-- this one 
 would massage the query going to the node.
 
 it would actually open the door for a lot of additional trickery.
 the function would tell the proxy what to append - and: this what would be 
 under your full control.
 
 what do you think?

Hmm, could work for simplest cases, but this has 2 main problems:

1) you need a full SQL parser to make this generally useful for plain
SQL

and

2) it still won't work for pl/proxy's main use case - calling the same
_function_ on each partition.

 i got to think about it futher but i can envision that this could be feasible 
 ...
 
   hans
 
 
 On Sep 2, 2011, at 2:36 PM, Hannu Krosing wrote:
 
  On Fri, 2011-09-02 at 14:01 +0200, PostgreSQL - Hans-Jürgen Schönig
  wrote:
  hello …
  
  i have been thinking about this issue for quite a while ...
  given your idea i am not sure how this can work at all.
  
  consider:
 begin;
 insert 1
 insert 2
 commit
  
  assume this ends up in the same node,
  now you split it into two …
  1 and 2 will have exactly the same visibility to and transaction.
  i am not sure how you can get this right without looking at the data.
  
  It has to consider the data when determining visibility, that's the
  whole point of the plug-in .
  
  The idea is, that each row belongs to a certain partition, as
  determined by some function over it's fields. Most often this function
  is hash of primary key OR-ed by a bitmap representing cluster size and
  AND-ed by bitmap for partition(s) stored in this database. 
  
  when you split the parition, then some row's don't belong in the old
  partition database anymore (and if you did a full copy, then the other
  half dont belong to the new one), so they should be handled as
  invisible / deleted. As this can be only done by looking at the tuple
  data, this needs an additional visibility function. And as this is only
  needed for partitioned databases, it makes sense to implement it as a
  plogin, so it would not wast cycles on non-partitioned databases
  
  alternative idea: what if the proxy would add / generate a filter by 
  looking at the data?
  a quick idea would be that once you split you add a simple directive 
  such as FILTER GENERATOR $1 or so to the PL/proxy code.
  it would then behind the scene arrange the filter passed on.
  what do you think?
  
  Hmm. I'm not sure I understand what you are trying to say. Can you
  elaborate a little ?
  
  
 regards,
  
 hans
  
  
  
  On Sep 1, 2011, at 10:13 AM, Hannu Krosing wrote:
  
  Hallow hackers
  
  I have the following problem to solve and would like to get advice on
  the best way to do it.
  
  The problem:
  
  When growing a pl/proxy based database cluster, one of the main
  operations is splitting a partition. The standard flow is as follows:
  
  1) make a copy of the partitions table(s) to another database
  2) reconfigure pl/proxy to use 2 partitions instead of one
  
  The easy part is making a copy of all or half of the table to another
  database. The hard part is fast deletion (i mean milliseconds,
  comparable to TRUNCATE) the data that should not be in a partition (so
  that RUN ON ALL functions will continue to return right results).
  
  It would be relatively easy, if we still had the RULES for select
  available for plain tables, but even then the eventual cleanup would
  usually mean at least 3 passes of disk writes (set xmax, write deleted
  flag, vacuum and remove)
  
  What I would like to have is possibility for additional visibility
  checks, which would run some simple C function over tuple data (usually
  hash(fieldval) + and + or ) and return visibility (is in this partition)
  as a result. It would be best if this is run at so low level that also
  vacuum would use it and can clean up the foreign partition data in one
  pass, without doing the delete dance first.
  
  So finally the 

Re: [HACKERS] help with plug-in function for additional (partition/shard) visibility checks

2011-09-02 Thread PostgreSQL - Hans-Jürgen Schönig

On Sep 2, 2011, at 2:59 PM, Hannu Krosing wrote:

 On Fri, 2011-09-02 at 14:51 +0200, PostgreSQL - Hans-Jürgen Schönig
 wrote:
 hello …
 
 the goal of the entire proxy thing is to make the right query go to the 
 right node / nodes.
 we determine this by using a partitioning function and so on …
 currently PL/proxy has only a handful of commands - one is RUN ON … which 
 tells us where to put things.
 assume you issue a select … some select will fall out on the target node. 
 to restrict the data coming from the node you could add an additional 
 constraint on the way …
 
 say:
  SELECT * FROM proxy_table WHERE a = 20;
 
 what you want to reach the node after a split is …
 
  SELECT * FROM proxy_table WHERE a = 20 AND col = filter the wrong half 
 away
 
 my idea is to add an additional command to the PL/proxy command set.
 it should call a function generating this additional filter.
 maybe somehow like that …
 
  RUN ON hashtext($1) 
 -- this one already knows about the increased cluster
  GENERATE FILTER my_create_the_bloody_filter_func($1)-- this one 
 would massage the query going to the node.
 
 it would actually open the door for a lot of additional trickery.
 the function would tell the proxy what to append - and: this what would be 
 under your full control.
 
 what do you think?
 
 Hmm, could work for simplest cases, but this has 2 main problems:
 
 1) you need a full SQL parser to make this generally useful for plain
 SQL


i think that everything beyond a simple case is pretty hard to achieve anyway.
to me it looks pretty impossible to solve this in a generic way without some
insane amount of labor input - at least given the ideas that have come to me
in the past.
and yes, functions are an issue. unless you have some sort of virtually
private database thing it is close to impossible (unless you want to try some
nightmare based on views / constraint exclusion on the partitions or so).

regards,

hans


--
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de




[HACKERS] help with plug-in function for additional (partition/shard) visibility checks

2011-09-01 Thread Hannu Krosing
Hello hackers

I have the following problem to solve and would like to get advice on
the best way to do it.

The problem:

When growing a pl/proxy based database cluster, one of the main
operations is splitting a partition. The standard flow is as follows:

1) make a copy of the partitions table(s) to another database
2) reconfigure pl/proxy to use 2 partitions instead of one

The easy part is making a copy of all or half of the table to another
database. The hard part is fast deletion (I mean milliseconds,
comparable to TRUNCATE) of the data that should not be in a partition (so
that RUN ON ALL functions will continue to return the right results).

It would be relatively easy if we still had RULES for SELECT
available for plain tables, but even then the eventual cleanup would
usually mean at least 3 passes of disk writes (set xmax, write the deleted
flag, vacuum and remove).

What I would like to have is the possibility of additional visibility
checks, which would run some simple C function over the tuple data (usually
hash(fieldval) + and + or) and return visibility (is in this partition)
as the result. It would be best if this ran at so low a level that
vacuum would also use it and could clean up the foreign partition data in
one pass, without doing the delete dance first.

So finally the QUESTION :

where in code would be the best place to check for this so that

1) both regular queries and VACUUM see it
2) the tuple data (and not only system fields or just xmin/xmax) would
be available for the function to use


-- 
---
Hannu Krosing
PostgreSQL Unlimited Scalability and Performance Consultant
2ndQuadrant Nordic
PG Admin Book: http://www.2ndQuadrant.com/books/




[HACKERS] help with sending email

2011-07-14 Thread Fernando Acosta Torrelly
Hi everybody:

I am using pgmail to send email in an application, but I would like to use
HTML format.

Does anybody have an example of how to do this? Or what do you recommend I
use for doing this?

Thanks in advance for your attention.

Best Regards,

Fernando Acosta

Lima - Perú



Re: [HACKERS] help with sending email

2011-07-14 Thread Robert Haas
On Jul 13, 2011, at 12:29 PM, Fernando Acosta Torrelly fgaco...@gmail.com 
wrote:
 Hi everybody:
 
 I am using pgmail to send email in an application, but I would like to use 
 html format
 
 Does anybody has an example how to do this?, or what do you recommend me to 
 use por doing this?
 
 Thanks in advance for your attention.
 
This is the mailing list for PostgreSQL, not pgmail.  And it is also a 
development mailing list; there are others for user questions.

...Robert

Re: [HACKERS] Help: regarding patch development

2011-05-12 Thread Kevin Grittner
nil nil unprecedente...@yahoo.com wrote:
 
 i want to develop a service for postgresql related to querry
 processing.but i dont know how to develop it. plz guide me so that
 i can take step.
 
I think the root of all information on that is here:
 
http://www.postgresql.org/developer/
 
with the linked developer FAQ being the first thing to read:
 
http://wiki.postgresql.org/wiki/Developer_FAQ
 
One other very important source is README files in the source code
directories.
 
-Kevin



[HACKERS] Help required regarding patch development

2011-05-12 Thread nil nil
Hello,
   Sir, I am a student. I want to develop a patch for PostgreSQL, but I
don't know how to take the first step. Kindly guide me on this.

   Thanks
   Emman

Re: [HACKERS] Help required regarding patch development

2011-05-12 Thread Bruce Momjian
nil nil wrote:
 Hello,
  Sir, i am a student. i want to develop a patch for postgresql. but 
  i dont know how to take the step. kindly guide me for this. 

Sure, look here:

http://developer.postgresql.org


-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



[HACKERS] Help: regarding patch development

2011-05-11 Thread nil nil
Hello,
  Sir, I want to develop a service for PostgreSQL related to query
processing, but I don't know how to develop it. Please guide me so that I
can take the first step.
I will be really thankful to you.
 
Regards
Emman

Re: [HACKERS] help: collation support on Windows

2011-04-07 Thread Peter Eisentraut
On fre, 2011-02-25 at 21:32 +0200, Peter Eisentraut wrote:
 According to the online documentation, the APIs are there:
 http://msdn.microsoft.com/en-ca/library/a7cwbx4t.aspx
 
 Now we'd need someone brave to try to make it work.  The starting point
 would be to define HAVE_LOCALE_T and then make it build.  Microsoft has
 all the relevant functions and types with an underscore in front
 (_strcoll_l, etc.), so some extra #defining will probably be necessary.

OK, I got that working now.  Patch attached.

 Also, initdb will need to be patched to get a list of OS locales to
 populate the pg_collation catalog with.

That still needs work, but you can run CREATE COLLATION manually.

 Finally, a regression test customized for Windows, but I can help with
 that later.

If you doctor the existing Linux test to create appropriately named
collations before running the actual tests, and you hack the
vcregress.pl driver script to run the tests in UTF8 instead of
SQL_ASCII, then all the tests except the Turkish case conversion tests
pass.  So it looks pretty good so far.

diff --git a/src/backend/utils/adt/pg_locale.c b/src/backend/utils/adt/pg_locale.c
index 163856d..ff7de38 100644
--- a/src/backend/utils/adt/pg_locale.c
+++ b/src/backend/utils/adt/pg_locale.c
@@ -962,8 +962,12 @@ pg_newlocale_from_collation(Oid collid)
 		if (strcmp(collcollate, collctype) == 0)
 		{
 			/* Normal case where they're the same */
+#ifndef WIN32
 			result = newlocale(LC_COLLATE_MASK | LC_CTYPE_MASK, collcollate,
 			   NULL);
+#else
+			result = _create_locale(LC_ALL, collcollate);
+#endif
 			if (!result)
 ereport(ERROR,
 		(errcode_for_file_access(),
@@ -972,6 +976,7 @@ pg_newlocale_from_collation(Oid collid)
 		}
 		else
 		{
+#ifndef WIN32
 			/* We need two newlocale() steps */
 			locale_t loc1;
 
@@ -987,6 +992,9 @@ pg_newlocale_from_collation(Oid collid)
 		(errcode_for_file_access(),
 		 errmsg("could not create locale \"%s\": %m",
 								collctype)));
+#else
+			elog(ERROR, "not supported");
+#endif
 		}
 
 		cache_entry->locale = result;
diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c
index 3587fe4..04b0326 100644
--- a/src/backend/utils/adt/varlena.c
+++ b/src/backend/utils/adt/varlena.c
@@ -1374,7 +1374,10 @@ varstr_cmp(char *arg1, int len1, char *arg2, int len2, Oid collid)
 			((LPWSTR) a2p)[r] = 0;
 
 			errno = 0;
-			result = wcscoll((LPWSTR) a1p, (LPWSTR) a2p);
+			if (mylocale)
+result = _wcscoll_l((LPWSTR) a1p, (LPWSTR) a2p, mylocale);
+			else
+result = wcscoll((LPWSTR) a1p, (LPWSTR) a2p);
 			if (result == 2147483647)	/* _NLSCMPERROR; missing from mingw
 		 * headers */
 ereport(ERROR,
diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32
index 79b8036..8eae0b4 100644
--- a/src/include/pg_config.h.win32
+++ b/src/include/pg_config.h.win32
@@ -683,3 +683,5 @@
 /* Define to empty if the keyword `volatile' does not work. Warning: valid
code using `volatile' can become incorrect without. Disable with care. */
 /* #undef volatile */
+
+#define HAVE_LOCALE_T 1
diff --git a/src/include/utils/pg_locale.h b/src/include/utils/pg_locale.h
index 4c72fd0..370d691 100644
--- a/src/include/utils/pg_locale.h
+++ b/src/include/utils/pg_locale.h
@@ -17,6 +17,17 @@
 #include <xlocale.h>
 #endif
 
+#ifdef WIN32
+#define locale_t _locale_t
+#define towlower_l _towlower_l
+#define towupper_l _towupper_l
+#define toupper_l _toupper_l
+#define tolower_l _tolower_l
+#define iswalnum_l _iswalnum_l
+#define isalnum_l _isalnum_l
+#define strcoll_l _strcoll_l
+#endif
+
 #include "utils/guc.h"
 
 



[HACKERS] help: collation support on Windows

2011-02-25 Thread Peter Eisentraut
According to the online documentation, the APIs are there:
http://msdn.microsoft.com/en-ca/library/a7cwbx4t.aspx

Now we'd need someone brave to try to make it work.  The starting point
would be to define HAVE_LOCALE_T and then make it build.  Microsoft has
all the relevant functions and types with an underscore in front
(_strcoll_l, etc.), so some extra #defining will probably be necessary.

Also, initdb will need to be patched to get a list of OS locales to
populate the pg_collation catalog with.

Finally, a regression test customized for Windows, but I can help with
that later.





Re: [HACKERS] help: collation support on Windows

2011-02-25 Thread Andrew Dunstan



On 02/25/2011 02:32 PM, Peter Eisentraut wrote:

According to the online documentation, the APIs are there:
http://msdn.microsoft.com/en-ca/library/a7cwbx4t.aspx

Now we'd need someone brave to try to make it work.  The starting point
would be to define HAVE_LOCALE_T and then make it build.  Microsoft has
all the relevant functions and types with an underscore in front
(_strcoll_l, etc.), so some extra #defining will probably be necessary.

Also, initdb will need to be patched to get a list of OS locales to
populate the pg_collation catalog with.

Finally, a regression test customized for Windows, but I can help with
that later.






What is the equivalent of locale -a?

cheers

andrew



Re: [HACKERS] help: collation support on Windows

2011-02-25 Thread Dave Page
On Fri, Feb 25, 2011 at 9:54 PM, Andrew Dunstan and...@dunslane.net wrote:


 On 02/25/2011 02:32 PM, Peter Eisentraut wrote:

 According to the online documentation, the APIs are there:
 http://msdn.microsoft.com/en-ca/library/a7cwbx4t.aspx

 Now we'd need someone brave to try to make it work.  The starting point
 would be to define HAVE_LOCALE_T and then make it build.  Microsoft has
 all the relevant functions and types with an underscore in front
 (_strcoll_l, etc.), so some extra #defining will probably be necessary.

 Also, initdb will need to be patched to get a list of OS locales to
 populate the pg_collation catalog with.

 Finally, a regression test customized for Windows, but I can help with
 that later.





 What is the equivalent of locale -a?

There isn't a command that I know of, but the API function you
probably need is EnumSystemLocales.


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] help: collation support on Windows

2011-02-25 Thread Magnus Hagander
On Fri, Feb 25, 2011 at 22:58, Dave Page dp...@pgadmin.org wrote:
 On Fri, Feb 25, 2011 at 9:54 PM, Andrew Dunstan and...@dunslane.net wrote:


 On 02/25/2011 02:32 PM, Peter Eisentraut wrote:

 According to the online documentation, the APIs are there:
 http://msdn.microsoft.com/en-ca/library/a7cwbx4t.aspx

 Now we'd need someone brave to try to make it work.  The starting point
 would be to define HAVE_LOCALE_T and then make it build.  Microsoft has
 all the relevant functions and types with an underscore in front
 (_strcoll_l, etc.), so some extra #defining will probably be necessary.

 Also, initdb will need to be patched to get a list of OS locales to
 populate the pg_collation catalog with.

 Finally, a regression test customized for Windows, but I can help with
 that later.





 What is the equivalent of locale -a?

 There isn't a command that I know of, but the API function you
 probably need is EnumSystemLocales.

Yeah. If you want example code, you can probably lift something from
the old pginstaller project on pgfoundry - we use that code to
populate the dropdown box for initdb there.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-28 Thread Tom Lane
Itagaki Takahiro itagaki.takah...@gmail.com writes:
 On Tue, Sep 28, 2010 at 12:12 PM, Robert Haas robertmh...@gmail.com wrote:
 Oh - I didn't realize this meant marking lots of things in contrib
 that didn't otherwise need to be marked.  Why do other people need
 this if we don't?

 As I mentioned, we don't need the marks in our build environment at all.

In that case, anybody who does need it should fix their build
environment.

I grow really weary of the idea that we should submit to arbitrary
amounts of uglification of our source code so that it will deal with
this week's Windows oddness.  The Windows folk need to be willing to
do a bit of work on their end.

regards, tom lane



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-28 Thread Itagaki Takahiro
On Tue, Sep 28, 2010 at 3:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 As I mentioned, we don't need the marks in our build environment at all.

 In that case, anybody who does need it should fix their build
 environment.

 I grow really weary of the idea that we should submit to arbitrary
 amounts of uglification of our source code so that it will deal with
 this week's Windows oddness.  The Windows folk need to be willing to
 do a bit of work on their end.

Windows has 3 levels of function visibilities in a DLL:
  1. exported from the DLL
  2. module global, but not exported
  3. static (private in file), of course not exported

On UNIX-like systems, level 2 is effectively level 1, so we don't have
to distinguish between global and exported functions. But standard DLL
projects on Windows require marking which functions should be exported.

I'm not sure why we can build modules without any EXPORT marks, though
we actually can; it is very unusual on Windows.


If we want to avoid adding PGDLLEXPORTs, we could provide a sample MSVC
project with the proper settings in the tutorial or documentation
instead. It would be opened with the VC++ GUI (not from the command
line!) and would generate DLLs in the same way we use to build the core.

-- 
Itagaki Takahiro



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-28 Thread Craig Ringer
On 28/09/10 11:09, Itagaki Takahiro wrote:
 On Tue, Sep 28, 2010 at 9:51 AM, Robert Haas robertmh...@gmail.com wrote:
 Since we have PGDLLEXPORT in 9.0, we can mark some of exported
 functions with it in tutorial codes and maybe contrib modules.

 If that (a) works and (b) reduces user confusion, +1 from me.  We've
 gotten this question a few times lately.
 
 If we do so, many PGDLLEXPORT will be added:
   *  17 in src/tutorial
   * 507 in contrib
 for each exported PGFunction, _PG_init, and _PG_fini.
 
 Any objections? Am I missing something?

For what it's worth, these macros are useful for more than Windows.

They can be used with gcc's visibility features to reduce the size of
the symbol table and therefore speed linking and backend startup. This
isn't as important in a C-only program as it is in a C++ program (which
has huge collections of private symbols) but it still won't hurt. If
building with gcc4 on a non-Windows platform, the PGDLLEXPORT macro can
be defined as:

  #define PGDLLEXPORT __attribute__ ((visibility ("default")))

and gcc can be invoked with -fvisibility=hidden to make symbols not
explicitly exported hidden by default. A handy advantage of this is that
it'll cause code that would compile and run correctly on *nix and fail
on Windows to also fail on *nix, making it easier for *nix based
developers (ie sane, happy people) to catch issues that'll break Windows.

Such macros also serve as documentation markers indicating public API.
They're ugly, but they're not purely Windows ugly.

-- 
Craig Ringer

Tech-related writing: http://soapyfrogs.blogspot.com/



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-28 Thread Magnus Hagander
On Tue, Sep 28, 2010 at 09:26, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 On Tue, Sep 28, 2010 at 3:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 As I mentioned, we don't need the marks in our build environment at all.

 In that case, anybody who does need it should fix their build
 environment.

 I grow really weary of the idea that we should submit to arbitrary
 amounts of uglification of our source code so that it will deal with
 this week's Windows oddness.  The Windows folk need to be willing to
 do a bit of work on their end.

 Windows has 3 levels of function visibilities in a DLL:
  1. exported from the DLL
  2. module global, but not exported
  3. static (private in file), of course not exported

 On UNIXes, 2 is always 1. So we don't have to distinguish between
 global and exported functions. But standard DLL projects on Windows
 require marking which functions should be exported.

 I'm not sure why we can build modules without any EXPORT marks,
 though we can do it actually... It is very unnatural on Windows.


 If we want to avoid adding PGDLLEXPORTs, we could have sample MSVC
 project with proper settings in tutorial or documentation instead.
 It should be opened with VC++ GUI (not from command line!) and can
 be generate DLLs in the same way we're using to build the core.

We're talking about the export all symbols thing, right? I *don't*
think we want to recommend people to do that - it creates bloated DLL
files, for no really good reason. Also, it's not just a matter of a
msvc project - we do that with a perl hack
(http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/tools/msvc/gendef.pl;h=b8538dd79b8baf21ede87b2ec1aba0276fd3b3d9;hb=62b6aaa40b2abb26edf18d1cd00dffcac090f67a).
It's not a good way.

We might, however, want to add a specific section to the
*documentation* about building extensions on Windows. We have section
35.9.6, which lists a bunch of OSes, and Windows is clearly missing
from it. But perhaps a complete section of its own, like the one for
pgxs in 35.9.6, would be even better?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-28 Thread Euler Taveira de Oliveira
Magnus Hagander escreveu:
 We might, however, want to add a specific section to the
 *documentation* about building extensions on Windows.
 
+1.


-- 
  Euler Taveira de Oliveira
  http://www.timbira.com/



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-28 Thread Itagaki Takahiro
On Tue, Sep 28, 2010 at 6:16 PM, Magnus Hagander mag...@hagander.net wrote:
 We're talking about the export all symbols thing, right? I *don't*
 think we want to recommend people to do that - it creates bloated DLL
 files, for no really good reason. Also, it's not just a matter of a
 msvc project - we do that with a perl hack
 (http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/tools/msvc/gendef.pl;h=b8538dd79b8baf21ede87b2ec1aba0276fd3b3d9;hb=62b6aaa40b2abb26edf18d1cd00dffcac090f67a).
 It's not a good way.

What great hacking!  I agree that it is not recommendable, but if so,
users need to build their code in a build environment different from ours.

 We might, however, want to add a specific section to the
 *documentation* about building extensions on Windows

+1. It will be a longer section than the ones for other platforms.

-- 
Itagaki Takahiro



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-27 Thread Euler Taveira de Oliveira
Itagaki Takahiro escreveu:
 I had the same problems before, and I wrote some hacks for VC++.
 
Isn't there such code in core, or am I missing something? Is it worth
supporting standalone VC++ projects?


-- 
  Euler Taveira de Oliveira
  http://www.timbira.com/



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-27 Thread Itagaki Takahiro
On Tue, Sep 28, 2010 at 5:13 AM, Euler Taveira de Oliveira
eu...@timbira.com wrote:
 Itagaki Takahiro escreveu:
 I had the same problems before, and I wrote some hacks for VC++.

 Isn't there such a code in core or am i missing something? Is it worth
 supporting the VC++ standalone projects?

Since we have PGDLLEXPORT in 9.0, we can mark some of exported
functions with it in tutorial codes and maybe contrib modules.

Is it worth doing?  We don't need the marks in our build environment,
but they might help users referencing our code in standalone VC++ projects.

-- 
Itagaki Takahiro



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-27 Thread Robert Haas
On Mon, Sep 27, 2010 at 8:45 PM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 On Tue, Sep 28, 2010 at 5:13 AM, Euler Taveira de Oliveira
 eu...@timbira.com wrote:
 Itagaki Takahiro escreveu:
 I had the same problems before, and I wrote some hacks for VC++.

 Isn't there such a code in core or am i missing something? Is it worth
 supporting the VC++ standalone projects?

 Since we have PGDLLEXPORT in 9.0, we can mark some of exported
 functions with it in tutorial codes and maybe contrib modules.

 Is it worth doing?  We don't need the marks in our build environment,
 but they might help users referencing our codes in standalone VC++ projects.

If that (a) works and (b) reduces user confusion, +1 from me.  We've
gotten this question a few times lately.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-27 Thread Itagaki Takahiro
On Tue, Sep 28, 2010 at 9:51 AM, Robert Haas robertmh...@gmail.com wrote:
 Since we have PGDLLEXPORT in 9.0, we can mark some of exported
 functions with it in tutorial codes and maybe contrib modules.

 If that (a) works and (b) reduces user confusion, +1 from me.  We've
 gotten this question a few times lately.

If we do so, many PGDLLEXPORT marks will be added:
  *  17 in src/tutorial
  * 507 in contrib
for each exported PGFunction, _PG_init, and _PG_fini.

Any objections? Am I missing something?

-- 
Itagaki Takahiro



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-27 Thread Robert Haas
On Mon, Sep 27, 2010 at 11:09 PM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 On Tue, Sep 28, 2010 at 9:51 AM, Robert Haas robertmh...@gmail.com wrote:
 Since we have PGDLLEXPORT in 9.0, we can mark some of exported
 functions with it in tutorial codes and maybe contrib modules.

 If that (a) works and (b) reduces user confusion, +1 from me.  We've
 gotten this question a few times lately.

 If we do so, many PGDLLEXPORT will be added:
  *  17 in src/tutorial
  * 507 in contrib
 for each exported PGFunction, _PG_init, and _PG_fini.

 Any objections? Am I missing something?

Oh - I didn't realize this meant marking lots of things in contrib
that didn't otherwise need to be marked.  Why do other people need
this if we don't?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-27 Thread Itagaki Takahiro
On Tue, Sep 28, 2010 at 12:12 PM, Robert Haas robertmh...@gmail.com wrote:
 If we do so, many PGDLLEXPORT will be added:
  *  17 in src/tutorial
  * 507 in contrib
 for each exported PGFunction, _PG_init, and _PG_fini.

 Oh - I didn't realize this meant marking lots of things in contrib
 that didn't otherwise need to be marked.  Why do other people need
 this if we don't?

As I mentioned, we don't need the marks in our build environment at all.
So the PGDLLEXPORT marks are for users who refer to our tutorials and
contrib modules as sample code.

Personally, I learned many idioms from the contrib modules, but didn't
notice the tutorial directory. I think code in contrib is often
copied and pasted. So if we add PGDLLEXPORT in some places,
I'd like to add it to contrib, too.

-- 
Itagaki Takahiro



Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-27 Thread Robert Haas
On Mon, Sep 27, 2010 at 11:26 PM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 On Tue, Sep 28, 2010 at 12:12 PM, Robert Haas robertmh...@gmail.com wrote:
 If we do so, many PGDLLEXPORT will be added:
  *  17 in src/tutorial
  * 507 in contrib
 for each exported PGFunction, _PG_init, and _PG_fini.

 Oh - I didn't realize this meant marking lots of things in contrib
 that didn't otherwise need to be marked.  Why do other people need
 this if we don't?

 As I mentioned, we don't need the marks in our build environment at all.

Why not?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



[HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-25 Thread Esteban Zimanyi
Hello

I have problems defining user-defined types and functions in
PostgreSQL using a Visual C++ development environment.

In order to look for the solution I took the complex.c and
complex.source files coming with the PostgreSQL distribution
(src/tutorial).

When I run the examples there is a server crash. I used both Visual
C++ 2008 and 2005 to build the DLL. I also used both PostgreSQL
versions 8.4 and 9.0. All versions produce the same problem.

Do you know how I can solve the problem? Does someone have a working
Visual C++ solution they can send me?

Many, many thanks for your help! I have been struggling with this
problem for several weeks without any success :-(

Regards

Esteban

--

Prof. Esteban Zimanyi
Department of Computer  Decision Engineering  (CoDE) CP 165/15
Universite Libre de Bruxelles
Avenue F. D. Roosevelt 50
B-1050 Brussels, Belgium
fax: + 32.2.650.47.13
tel: + 32.2.650.31.85
e-mail: ezima...@ulb.ac.be
Internet: http://code.ulb.ac.be/








Re: [HACKERS] Help with User-defined function in PostgreSQL with Visual C++

2010-09-25 Thread Itagaki Takahiro
On Sun, Sep 26, 2010 at 12:56 AM, Esteban Zimanyi
estebanzima...@gmail.com wrote:
 When I run the examples there is a server crash. I used both Visual
 C++ 2008 and 2005 for building the dll. I also used both PostgreSQL
 versions 8.4 and 9.0. All versions produce the same problem.

 Do you know how can I solve the problem ? Someone has a Visual C++
 solution that works that can send me ?

If you're developing your project as a standalone VC++ project, the
code in the tutorials and contrib modules won't work at all, because
it doesn't have __declspec(dllexport) on each function and variable
to be exported. It will work as long as you use MinGW or the special
VC++ environment used to build the core.

I had the same problems before, and I wrote some hacks for VC++.
The codes in pg_reorg/lib in pgFoundry might be a help:
  http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/reorg/pg_reorg/lib/reorg.c
  http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/reorg/pg_reorg/lib/pgut/pgut-be.h

-- 
Itagaki Takahiro



[HACKERS] Help with Join Performance Testing

2009-01-21 Thread Lawrence, Ramon
A hash join modification patch that needs performance testing is under
review for 8.4.  We would appreciate help with this testing.

 

A testing version of the patch is attached in addition to testing
instructions and where to retrieve a sample data set.   The basic idea
of the patch is that it reduces disk operations for large multi-batch
hash joins where there is skew in the probe relation.  The patch
collects statistics on performance benefits when using the optimization.

 

--

Ramon Lawrence and Bryce Cutt

Overview


This document provides an overview of how to test the histojoin patch.  The 
patch performs skew optimization for large, multi-batch hash joins.

Installation

The patch should compile cleanly against CVS head.


Execution
-
The skew optimization can be turned on by:

set enable_hashjoin_usestatmcvs = on;

and off by:

set enable_hashjoin_usestatmcvs = off;

If a hash join has detectable skew in the larger probe relation, then the skew 
optimization will output the amount of skew it sees and the number of tuples it 
will buffer in memory to exploit that skew.  When the hash join completes, it 
will output statistics on the number of tuples actually matched by the 
in-memory (IM) skew partition and the number of tuples in partition 0. The 
improvement in join I/Os is also given.

Sample (from LI-P TPCH 10G 1Z):

Values: 100 Skew: 0.27  Est. tuples: 59986052.00 Batches: 512  Est. Save: 
16114709.99
Total Inner Tuples: 200
IM Inner Tuples: 83
Batch Zero Inner Tuples: 3941
Batch Zero Potential Inner Tuples: 3941
Total Outer Tuples: 59986052
IM Outer Tuples: 16074146
Batch Zero Outer Tuples: 98778
Batch Zero Potential Outer Tuples: 98778 
Total Output Tuples: 59986052 
IM Output Tuples: 16074146
Batch Zero Output Tuples: 98778
Batch Zero Potential Output Tuples: 98778
Percentage less tuple IOs than HHJ: 25.98


Data Set

A sample test data set is TPC-H scale factor 1 GB.  A pg_dump can be downloaded 
from:

http://people.ok.ubc.ca/rlawrenc/tpch1g1z.zip

The larger 10 GB data sets are available on request.  You can also download the 
generator itself (works only on Windows) at:

http://people.ok.ubc.ca/rlawrenc/TPCHSkew.zip

The only joins with significant skew in the database are Part-LineItem and 
Supplier-LineItem.


Result Notes


1) The percentage benefit increases with the amount of skew.  Relations with no 
skew are not affected.  Relations with minimal skew show no noticeable 
improvement or negative impact.

2) Since disk I/Os in the join is only one part of the query execution time, 
overall execution times do not improve the same amount as the reduction in disk 
I/Os.  For CPU-bound queries, the disk I/O improvement may not have a 
significant effect on the overall time.

3) The relations are quite large.  Thus, queries with SELECT * that join 
several relations are very costly and the generation of the tuples dominates 
the execution time (especially if executing the query through a client such as 
pgAdmin).


Previous Results


The join with LineItem-Part on TPCH 1G 1Z shows about a 26% improvement in I/Os 
performed during the join and about a 5-10% improvement in overall time.  The 
join with LineItem-Supplier is similar.  Data sets with higher skew show even 
better performance.  For example, Lineitem-Part on TPCH 10G 2Z has 90% of probe 
relation tuples matching the 100 most common values.  The improvement in I/Os is 
about 90%, and in time about 50%.

Some sample test queries:

Query #1a:
SELECT * FROM Part, Lineitem WHERE p_partkey = l_partkey;

Query #1b:
SELECT count(*) FROM Part, Lineitem WHERE p_partkey = l_partkey;

Query #2a:
SELECT * FROM Supplier, Lineitem WHERE s_suppkey = l_suppkey;

Query #2b:
SELECT count(*) FROM Supplier, Lineitem WHERE s_suppkey = l_suppkey;

Query #3a:
SELECT * FROM Part, Lineitem, Supplier WHERE p_partkey = l_partkey and 
s_suppkey = l_suppkey;

Query #3b:
SELECT count(*) FROM Part, Lineitem, Supplier WHERE p_partkey = l_partkey and 
s_suppkey = l_suppkey;



histojoin_testing.patch
Description: histojoin_testing.patch



Re: [HACKERS] Help in processing multiple parse nodes.

2008-10-31 Thread Somasekhar Ramadurgam
Hi all,
 I am working on a project in Postgres where I have modified code in
gram.y. I added a new command along with its grammar, and the action
block returns a list of parse nodes instead of a single node. I want to
process each node in postgres.c, but I can't find any difference in the
parsetree_list structure whether I return a single node or a list of
nodes: it always reports a list of length 1. I don't know how to walk
the list, which is where I am stuck. I tried to add a foreach() loop on
my own and failed miserably.

Looking forward to some help from experts.
Thanks in advance


Best regards,
Somasekhar.



[HACKERS] HELP

2008-03-26 Thread Pedro Belmino
Hello,
I am creating an agent, written in C++, that runs alongside Postgres,
and I have a question: how do I send SQL queries directly to the
database without needing to make a connection? Which function do I
call, and which file do I use?

Thanks,

-- 
Pedro Belmino.

# Ciência da Computação - UNIFOR
# [EMAIL PROTECTED]



Re: [HACKERS] HELP

2008-03-26 Thread Gurjeet Singh
On Wed, Mar 26, 2008 at 11:10 PM, Pedro Belmino [EMAIL PROTECTED]
wrote:

 Hello,
 I am creating an agent, written in C++, that runs alongside Postgres,
 and I have a question: how do I send SQL queries directly to the
 database without needing to make a connection?
 Which function do I call, and which file do I use?


Try looking for the documentation of libpq or libpqxx.
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [HACKERS] Help with release note items

2007-12-01 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 I need help understanding the following two release note items (see XXX):

I've tweaked the text for the first one.  I do not think the second one
needs any changes; the matter is discussed elsewhere in the docs, and
the release notes are not the place to go into such detail.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[HACKERS] Help with release note items

2007-11-27 Thread Bruce Momjian
I need help understanding the following two release note items (see XXX):

 <listitem>
  <para>
   Create a general mechanism that supports casts to and from the
   standard string types (<type>TEXT</type>, <type>VARCHAR</type>,
   <type>CHAR</type>) for <emphasis>every</emphasis> datatype, by
   invoking the datatype's I/O functions (Tom)  XXX examples?
  </para>

  <para>
   These new casts are assignment-only in the to-string direction,
   explicit-only in the other direction, and therefore should create no
   surprising behavior.  Various
   datatype-specific casting functions that were equivalent to this
   behavior were removed.
  </para>
 </listitem>

 <listitem>
  <para>
   Allow limited hashing when using two different data types (Tom)
  </para>

  <para>
   This allows hash joins, hash indexes, hashed subplans, and hash
   aggregation to be used in situations involving cross-data-type
   comparisons, if the data types have compatible hash functions.
   Currently, cross-data-type hashing support exists for
   <type>smallint</type>/<type>integer</type>/<type>bigint</type>,
   and for <type>float4</type>/<type>float8</type>.  XXX how do you do
   hash aggregate with two different data types
  </para>
 </listitem>


-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



[HACKERS] Help! DLL make faild under windows mingw.

2007-05-29 Thread Wang Haiyong
I am working on a DLL project; it lives in backend/utils/mb, but it
cannot be built under Windows with MinGW. I would appreciate any help.
Thank you very much.

The error messages are as follows.

Info: resolving _CurrentMemoryContext by linking to __imp__CurrentMemoryContext 
(auto-import)
Info: resolving _SPI_processed by linking to __imp__SPI_processed (auto-import)
Info: resolving _SPI_tuptable by linking to __imp__SPI_tuptable (auto-import)
Info: resolving _default_statistics_target by linking to 
__imp__default_statistics_target (auto-import)
fu01.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
fu02.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
fu03.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
fu04.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
fu05.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
fu06.o(.idata$3+0xc): more undefined references to `libpostgres_a_iname' 
follow
nmth00.o(.idata$4+0x0): undefined reference to `_nm__CurrentMemoryContext'
nmth000111.o(.idata$4+0x0): undefined reference to `_nm__SPI_processed'
nmth000118.o(.idata$4+0x0): undefined reference to `_nm__SPI_tuptable'
nmth000127.o(.idata$4+0x0): undefined reference to 
`_nm__default_statistics_target'
d:\MinGW\bin\dllwrap.exe: d:\MinGW\bin\gcc exited with status 1
make: *** [libnsqlgeom.dll] Error 1




--
Confidentiality Notice: The information contained in this e-mail and any 
accompanying attachment(s) is intended only for the use of the intended 
recipient and may be confidential and/or privileged of Neusoft Group Ltd., its 
subsidiaries and/or its affiliates. If any reader of this communication is not 
the intended recipient, unauthorized use, forwarding, printing, storing, 
disclosure or copying is strictly prohibited, and may be unlawful. If you have 
received this communication in error, please immediately notify the sender by 
return e-mail, and delete the original message and all copies from your system. 
Thank you. 
---



Re: [HACKERS] Help! DLL make faild under windows mingw.

2007-05-29 Thread Andrei Kovalevski
Hi!

What exactly are you trying to build? If you have written your own
Makefile for this, post it here. Does your code contain #define
BUILDING_DLL before the inclusion of the PostgreSQL headers?

Wang Haiyong wrote:
 I am working on a DLL project; it lives in backend/utils/mb, but it
 cannot be built under Windows with MinGW. I would appreciate any help.
 Thank you very much.
 The error messages are as follows.
 Info: resolving _CurrentMemoryContext by linking to
 __imp__CurrentMemoryContext (auto-import)
 Info: resolving _SPI_processed by linking to __imp__SPI_processed
 (auto-import)
 Info: resolving _SPI_tuptable by linking to __imp__SPI_tuptable
 (auto-import)
 Info: resolving _default_statistics_target by linking to
 __imp__default_statistics_target (auto-import)
 fu01.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
 fu02.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
 fu03.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
 fu04.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
 fu05.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
 fu06.o(.idata$3+0xc): more undefined references to
 `libpostgres_a_iname' follow
 nmth00.o(.idata$4+0x0): undefined reference to
 `_nm__CurrentMemoryContext'
 nmth000111.o(.idata$4+0x0): undefined reference to `_nm__SPI_processed'
 nmth000118.o(.idata$4+0x0): undefined reference to `_nm__SPI_tuptable'
 nmth000127.o(.idata$4+0x0): undefined reference to
 `_nm__default_statistics_target'
 d:\MinGW\bin\dllwrap.exe: d:\MinGW\bin\gcc exited with status 1
 make: *** [libnsqlgeom.dll] Error 1





Re: [HACKERS] Help! DLL make faild under windows mingw.

2007-05-29 Thread Wang Haiyong
You are right. Thanks.

I found the BUILDING_DLL macro in the file src/Makefile.port.
I moved the DLL source code out of the backend path, and now it builds
successfully.

The file Makefile.port contains 
ifneq (,$(findstring backend,$(subdir)))
ifeq (,$(findstring conversion_procs,$(subdir)))
override CPPFLAGS+= -DBUILDING_DLL
endif
endif


- Original Message - 
From: Andrei Kovalevski [EMAIL PROTECTED]
To: pgsql-hackers@postgresql.org
Sent: Wednesday, May 30, 2007 1:35 AM
Subject: Re: [HACKERS] Help! DLL make faild under windows mingw.


 Hi!
 
 What exactly are you trying to build? If you have own-written Makefile
 for this - post it here. Does your code contain #define BUILDING_DLL
 before inclusion of PostgreSQL headers?
 
 Wang Haiyong wrote:
  [original message and error output snipped]






[HACKERS] Help Required regarding implementing operators in postgreSQL

2007-05-24 Thread Tasneem Memon

Dear Developers,
Hi, I am Tasneem Memon, doing an MS in Computer Science. I am working
on my thesis these days, for which I have defined some operators
theoretically, as well as their algorithms, and I intend to implement
them in PostgreSQL. These operators (NEAR, NOT NEAR) are binary and work
with NUMBER and DATE data types. Now I need to implement support for
them in the parser. But I haven't worked on the optimization of the
operators, and as I am new to database systems programming, I am quite
confused about what to do now and whether I am going in the right
direction! Please guide me on what I should do next.

With best regards,

Tasneem Ishaque Memon, MS (Computer Science), Institute of Mathematics
and Computer Science, University of Sindh, Jamshoro, Pakistan
http://www.geocities.com/tasneememon/
- The process of gaining knowledge is not producing results but clearing
away ignorance.
_
News, entertainment and everything you care about at Live.com. Get it now!
http://www.live.com/getstarted.aspx

Re: [HACKERS] Help Required regarding implementing operators in postgreSQL

2007-05-24 Thread Tom Lane
Tasneem Memon [EMAIL PROTECTED] writes:
 Hi, I am Tasneem Memon doing MS Computer Science. I am working on my
 thesis these days, for which I have defined some operators theoretically
 as well as their algorithms, and intend to implement them in PostgreSQL.
 These operators (NEAR, NOT NEAR) are binary and work with NUMBER and DATE
 data types. Now I need to implement support for them in the parser. But I
 haven't worked on the optimization of the operators. And also I am new to
 database systems programming, I am quite confused about what to do now and
 if I am going the right direction! Please guide me what should I do next.
 With best regards

It sounds like you could use LIKE and its underlying operators/functions
as a template to follow.

regards, tom lane



Re: [HACKERS] help required regarding queryin postgis database from google maps

2007-02-26 Thread Andrew Hammond
On Feb 25, 9:34 am, [EMAIL PROTECTED] (Andrew Dunstan) wrote:
 Phani Kishore wrote:

  hi !

  I think you could probably help me with how to query the
  pgsql/postgis database from the Google Maps API, to display markers
  stored in the postgis database on Google Maps.
  Phani Kishore
  Tata Consultancy Services
  Mailto: [EMAIL PROTECTED]
  Website:http://www.tcs.com

 This list is not about how to use postgres. Please ask in the correct forum.

pgsql-general would be a good place to start.

Andrew




[HACKERS] help required regarding queryin postgis database from google maps

2007-02-25 Thread Phani Kishore
hi !

I think you could probably help me with how to query pgsql/postgis from
the Google Maps API, to display markers on Google Maps that are stored
in the postgis database.
Phani Kishore
Tata Consultancy Services
Mailto: [EMAIL PROTECTED]
Website: http://www.tcs.com
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you




Re: [HACKERS] help required regarding queryin postgis database from google maps

2007-02-25 Thread Andrew Dunstan



Phani Kishore wrote:


hi !

I think you could probably help me with how to query pgsql/postgis
from the Google Maps API, to display markers on Google Maps that are
stored in the postgis database.

Phani Kishore
Tata Consultancy Services
Mailto: [EMAIL PROTECTED]
Website: http://www.tcs.com
  


This list is not about how to use postgres. Please ask in the correct forum.

cheers

andrew



[HACKERS] Help with Degree Work

2007-01-24 Thread Werner Echezuria

Hello,

I've subscribed to this mailing list for help: I will be working on a
Specialization Degree Thesis, which will be a PostgreSQL implementation
of FSQL, or fuzzy queries.

http://www.lcc.uma.es/~ppgg/FSQL.html is a link to a project that did
this for Oracle, though not inside the server itself, because it was
made by other people.

My work will be to do this in the PostgreSQL kernel, so I'll appreciate
all the help you can give.

Thanks.


Re: [HACKERS] Help with Degree Work

2007-01-24 Thread Jim C. Nasby
You should take a look at http://pgfoundry.org/projects/qbe, which deals
with querying data by providing sample data that matches what you're
looking for.

On Wed, Jan 24, 2007 at 01:40:04PM -0400, Werner Echezuria wrote:
 Hello,
 
 I've subscribed to this mailing list for help: I will be working on a
 Specialization Degree Thesis, which will be a PostgreSQL implementation
 of FSQL, or fuzzy queries.
 
 http://www.lcc.uma.es/~ppgg/FSQL.html is a link to a project that did
 this for Oracle, though not inside the server itself, because it was
 made by other people.
 
 My work will be to do this in the PostgreSQL kernel, so I'll appreciate
 all the help you can give.
 
 Thanks.

-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



[HACKERS] Help me pack up postgresDB with my windows application.

2006-11-14 Thread du li
Dear hackers,

I'm working on a Windows application in C# and use Npgsql to connect to
a Postgres DB. I'm eager to learn how to make a single setup file which
includes both the Windows application and the Postgres DB. My
development environment is Visual Studio 2003 and Framework 1.1.

I don't know if there is a convenient way to pack up a Postgres DB, and
I'm not sure which files and registry keys I need to pack, as well as
how to start up a Postgres service after installation.

I know many people like you have done a wonderful job on Postgres, and
that is beyond question. I just want to reduce the steps and
configuration work, and keep the database password, when our users
install the application.

Very much appreciate your help.

James Duan


Re: [HACKERS] Help me pack up postgresDB with my windows application.

2006-11-14 Thread Shane Ambler

du li wrote:

Dear hackers,
   
  I'm working on a windows application with C# language and use npgsql to connect postgres DB. I'm eager to learn how to make a solo setup file which included windows application and postgres DB. My develop environment is Visual Studio 2003 and Framework 1.1
   
  I don't know if there are a convenient way to pack up a postgres DB, and I'm not sure which files and register key I need to pack, as well as how to turn up a postgres service after installation.
   
  I know many people like you have done wonderful job on postgres and it is unsuspectingly. I just want to reduce steps, config-operation and keep database password when our user install applications.
   
   


http://pgfoundry.org/projects/pginstaller/

contains a link to the source used to build the Windows binary
installer; what you want will be in there.



--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz



[HACKERS] help wanted (fmgr.c)

2006-09-11 Thread Gevik Babakhani
I am a little stuck on a question.

In fmgr.c:1698, function InputFunctionCall, at line 1718:

/* Should get null result if and only if str is NULL */
if (str == NULL)
{
 
 

What are we testing to be NULL here?
Do we expect str to have changed at line 1715
(  result = FunctionCallInvoke(&fcinfo);  )?


Please advise,
Gevik.









Re: [HACKERS] help wanted (fmgr.c)

2006-09-11 Thread Abhijit Menon-Sen
At 2006-09-11 10:25:22 +0200, [EMAIL PROTECTED] wrote:

 What are we testing to be NULL here?
 Do we expect str to changed at line 1715 

No. (Read the comment just above the function.)

The code is like this, starting from line 1703:

    if (str == NULL && flinfo->fn_strict)
        return (Datum) 0;

That is, if the function is declared strict, and the argument (str) is
0, just return NULL straightaway. Then it sets up the fcinfo and calls
the function, and then:

    ...

    /* Should get null result if and only if str is NULL */
    if (str == NULL)
    {
        if (!fcinfo.isnull)
            elog(ERROR, "input function %u returned non-NULL",
                 fcinfo.flinfo->fn_oid);
    }
    else
    {
        if (fcinfo.isnull)
            elog(ERROR, "input function %u returned NULL",
                 fcinfo.flinfo->fn_oid);
    }

This says: If the argument is NULL and the input function didn't return
a NULL, log an error; but if the argument is non-NULL and the function
returned NULL, log this other error. (Note that a function would set
fcinfo->isnull to indicate that it wants to return an SQL NULL, as
explained in $DOC/plhandler.html.)

-- ams



Re: [HACKERS] help wanted (fmgr.c)

2006-09-11 Thread Gevik Babakhani
Thank you for your reply.
I found the bug in my code that made the function behave strangely.

On Mon, 2006-09-11 at 14:23 +0530, Abhijit Menon-Sen wrote:
 [full explanation quoted upthread, snipped]



Re: [HACKERS] Help! - Slony-I - saving/setting/restoring GUC

2006-07-25 Thread Chris Browne
[EMAIL PROTECTED] (Peter Eisentraut) writes:
 Chris Browne wrote:
 In support of PG 8.2, we need to have the log trigger function do the
 following:
   - Save value of standards_conforming_string
   - Set value of standards_conforming_string to FALSE
   - proceed with saving data to sl_log_?
   - Recover value of standards_conforming_string

 Would SET LOCAL help you?

Not really.  The log trigger function is an SPI function, and I don't
think I want to be invoking an extra SQL request every time a tuple is
updated.

Consider our present handling of date localizations...  Slony-I
prefers to operate using ISO dates.  So the log trigger function must
force the datestyle correspondingly.

So our code looks like...  (eliding irrelevant code)

int OldDateStyle = DateStyle;
DateStyle = USE_ISO_DATES;

/* code that generates data to stow in sl_log_n */

DateStyle = OldDateStyle;  /* Retrieve user's local settings */

---

At one point, I thought that we'd modify this to:
int OldDateStyle = DateStyle;
int OldSCS = standards_conforming_strings;
DateStyle = USE_ISO_DATES;
standards_conforming_strings = FALSE; 

/* code that generates data to stow in sl_log_n */

DateStyle = OldDateStyle;  /* Retrieve user's local settings */
standards_conforming_strings = OldSCS;

Unfortunately (perhaps) standards_conforming_strings does not appear
to be exported, so I'm not sure how to do this otherwise.
-- 
(reverse (concatenate 'string gro.mca @ enworbbc))
http://www.ntlug.org/~cbbrowne/wp.html
Editing is a rewording activity.
-- Alan J. Perlis
[And EMACS a rewording editor.  Ed.]



Re: [HACKERS] Help! - Slony-I - saving/setting/restoring GUC

2006-07-25 Thread Tom Lane
Chris Browne [EMAIL PROTECTED] writes:
 Unfortunately (perhaps) standards_conforming_strings does not appear
 to be exported, so I'm not sure how to do this otherwise.

Perhaps your problem is one of spelling?  It's
standard_conforming_strings, and it's certainly a global variable.

You still haven't explained why you need this, though.  There are
no datatype output functions that examine this variable.

regards, tom lane



Re: [HACKERS] Help! - Slony-I - saving/setting/restoring GUC

2006-07-25 Thread Chris Browne
[EMAIL PROTECTED] (Tom Lane) writes:
 Chris Browne [EMAIL PROTECTED] writes:
 Unfortunately (perhaps) standards_conforming_strings does not appear
 to be exported, so I'm not sure how to do this otherwise.

 Perhaps your problem is one of spelling?  It's
 standard_conforming_strings, and it's certainly a global variable.

 You still haven't explained why you need this, though.  There are
 no datatype output functions that examine this variable.

Apologies.  I was reading too much into the problem report.

After some discussion on IRC, I think we've simplified things back
to what the problem is.

Our logtrigger() function is already applying a non-SCS policy to
quote backslashes.  No need to declare anything there, as had been the
case with a recent DateStyle issue.

With DateStyle, we had to control the style in two places:
 1.  In logtrigger(), when PostgreSQL is asked to generate the
 string version of a datestamp.  We do so by temporarily
 setting DateStyle.
 2.  When loading data, we need to make sure the connection
 uses a consistent DateStyle.  We do so by setting the GUC
 variable using SET.

I was under the misapprehension that we needed something akin to that
step 1; apparently we only need worry about step 2.  To which Peter's
suggestion is entirely sufficient.

Sorry about the confusion.
-- 
let name=cbbrowne and tld=ntlug.org in name ^ @ ^ tld;;
http://cbbrowne.com/info/internet.html
Who is General Failure and why is he reading my hard disk? 
-- [EMAIL PROTECTED], Felix von Leitner



[HACKERS] Help! - Slony-I - saving/setting/restoring GUC

2006-07-24 Thread Chris Browne
In support of PG 8.2, we need to have the log trigger function do the
following:
  - Save value of standards_conforming_string
  - Set value of standards_conforming_string to FALSE
  - proceed with saving data to sl_log_? 
  - Recover value of standards_conforming_string

The variable, standards_conforming_string, does not appear to be
widely exported, but rather seems pretty localized to guc.c

This is the prime thing holding us from RC2.

Tom, Peter, you have touched guc.c in the context of
standards_conforming_string; perhaps you can suggest something?  Darcy
and I aren't seeing what to do...
-- 
let name=cbbrowne and tld=cbbrowne.com in name ^ @ ^ tld;;
http://www3.sympatico.ca/cbbrowne/
Why do we drive on parkways and park on driveways?



Re: [HACKERS] Help! - Slony-I - saving/setting/restoring GUC

2006-07-24 Thread Peter Eisentraut
Chris Browne wrote:
 In support of PG 8.2, we need to have the log trigger function do the
 following:
   - Save value of standards_conforming_string
   - Set value of standards_conforming_string to FALSE
   - proceed with saving data to sl_log_?
   - Recover value of standards_conforming_string

Would SET LOCAL help you?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/


