Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Andres Freund
On 2013-04-10 19:29:06 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2013-04-10 19:06:12 -0400, Tom Lane wrote:
> >> And the answer is they're not testing this code path at all, because if
> >> you do
> >> DECLARE c CURSOR WITH HOLD FOR ...
> >> FETCH ALL FROM c;
> >> then the second query executes with a portal (and resource owner)
> >> created to execute the FETCH command, not directly on the held portal.
> 
> > But in that path CurrentResourceOwner gets reset to portal->resowner as
> > well (see PortalRunFetch())?
> 
> Right, but that's the FETCH's portal, which is a regular "live" portal
> that has a ResOwner.

I don't think so?

standard_ProcessUtility:
  PerformPortalFetch:
    /* get the portal from the portal name */
    portal = GetPortalByName(stmt->portalname);
    ...
    /* Do it */
    nprocessed = PortalRunFetch(portal,
                                stmt->direction,
                                stmt->howMany,
                                dest);

  PortalRunFetch:
    PG_TRY();
    {
        ActivePortal = portal;
        CurrentResourceOwner = portal->resowner;

So it seems to trigger a very similar codepath?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Joshua Berry
> Hm. Make that a
> print *(DR_printtup *) self
> print *((DR_printtup *) self)->attrinfo
> print *slot->tts_tupleDescriptor


(gdb) print *(DR_printtup *) self
$2 = {pub = {receiveSlot = 0x459390 ,
rStartup = 0x459550 ,
rShutdown = 0x458a20 ,
rDestroy = 0x458a10 , mydest = DestRemoteExecute},
  portal = 0x2aa9360, sendDescrip = 0 '\000', attrinfo = 0x2e7fc50,
  nattrs = 42, myinfo = 0x2a8ac30}
(gdb) print *((DR_printtup *) self)->attrinfo
$3 = {natts = 42, attrs = 0x2e7fc78, constr = 0x0, tdtypeid = 2249,
  tdtypmod = -1, tdhasoid = 0 '\000', tdrefcount = -1}
(gdb) print *slot->tts_tupleDescriptor
$4 = {natts = 42, attrs = 0x2e7fc78, constr = 0x0, tdtypeid = 2249,
  tdtypmod = -1, tdhasoid = 0 '\000', tdrefcount = -1}
(gdb)


Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Andres Freund
On 2013-04-10 19:06:12 -0400, Tom Lane wrote:
> I wrote:
> > (Wanders away wondering just how much the regression tests exercise
> > holdable cursors.)
> 
> And the answer is they're not testing this code path at all, because if
> you do
>   DECLARE c CURSOR WITH HOLD FOR ...
>   FETCH ALL FROM c;
> then the second query executes with a portal (and resource owner)
> created to execute the FETCH command, not directly on the held portal.
> 
> After a little bit of thought I'm not sure it's even possible to
> reproduce this problem with libpq, because it doesn't expose any way to
> issue a bare protocol Execute command against a pre-existing portal.
> (I had thought psqlODBC went through libpq, but maybe it's playing some
> games here.)

Hm. PQexecPrepared() looks like it can do that.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Tom Lane
Andres Freund writes:
> On 2013-04-10 19:06:12 -0400, Tom Lane wrote:
>> And the answer is they're not testing this code path at all, because if
>> you do
>> DECLARE c CURSOR WITH HOLD FOR ...
>> FETCH ALL FROM c;
>> then the second query executes with a portal (and resource owner)
>> created to execute the FETCH command, not directly on the held portal.

> But in that path CurrentResourceOwner gets reset to portal->resowner as
> well (see PortalRunFetch())?

Right, but that's the FETCH's portal, which is a regular "live" portal
that has a ResOwner.

regards, tom lane




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Andres Freund
On 2013-04-10 18:25:24 -0500, Joshua Berry wrote:
> > Ok, I might be seeing what's going on here. Could you go to 'printtup'
> > and print *myState, *myState->attrinfo, *typeinfo?
> >
> 
> #4  0x004593c4 in printtup (slot=0x2d14618, self=0x2a50c40)
> at printtup.c:297
> 297 printtup_prepare_info(myState, typeinfo, natts);
> (gdb) print *myState, *myState->attrinfo, *typeinfo
> value has been optimized out

Hm. Make that a
print *(DR_printtup *) self
print *((DR_printtup *) self)->attrinfo
print *slot->tts_tupleDescriptor

then.


Thanks,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Joshua Berry
> Ok, I might be seeing what's going on here. Could you go to 'printtup'
> and print *myState, *myState->attrinfo, *typeinfo?
>

#4  0x004593c4 in printtup (slot=0x2d14618, self=0x2a50c40)
at printtup.c:297
297 printtup_prepare_info(myState, typeinfo, natts);
(gdb) print *myState, *myState->attrinfo, *typeinfo
value has been optimized out
(gdb)


Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Tom Lane
Andres Freund writes:
> Tom: It looks to me like printtup_prepare_info won't normally be called
> in a held cursor. But if some concurrent DDL changed the number of
> columns in typeinfo vs. those in the receiver, that could explain the
> issue and why it's not seen all the time, right?

It looks to me like there are probably two triggering conditions:

1. Client is doing a direct protocol Execute on a held-cursor portal.

2. Cache flush occurs to drop the syscache entries needed by
getTypeOutputInfo.  (Otherwise, they'd still be around from when the
held cursor was created.)

The first of these explains why we don't see it in the
CLOBBER_CACHE_ALWAYS buildfarm runs, and the second one explains why
Joshua is only seeing it intermittently and not every darn time his
application does that.

regards, tom lane




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Andres Freund
On 2013-04-10 19:06:12 -0400, Tom Lane wrote:
> I wrote:
> > (Wanders away wondering just how much the regression tests exercise
> > holdable cursors.)
> 
> And the answer is they're not testing this code path at all, because if
> you do
>   DECLARE c CURSOR WITH HOLD FOR ...
>   FETCH ALL FROM c;
> then the second query executes with a portal (and resource owner)
> created to execute the FETCH command, not directly on the held portal.

But in that path CurrentResourceOwner gets reset to portal->resowner as
well (see PortalRunFetch())?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Andres Freund
On 2013-04-10 17:31:09 -0500, Joshua Berry wrote:
> > Ok, so while we have a valid resource owner up to here, portal->resowner
> > is NULL.
> >
> > Could you 'up' until you're in this frame and then do 'print *portal'?
> >
> 
> #7  0x00638c78 in PortalRun (portal=0x2aa9360, count=10,
> isTopLevel=1 '\001', dest=0x2a50c40, altdest=0x2a50c40,
> completionTag=0x7fffd193d0f0 "") at pquery.c:787
> 787 nprocessed =
> PortalRunSelect(portal, true, count, dest);
> (gdb) print *portal
> $1 = {name = 0x2a3fe78 "SQL_CUR017CB040", prepStmtName = 0x0,
>   heap = 0x2aca6f0, resowner = 0x0, cleanup = 0x535d30 ,
>   createSubid = 0,
>   sourceText = 0x2d141b8 "declare \"SQL_CUR017CB040\" cursor with hold for
> SELECT
> anl.LAB,anl.JOB,anl.COMPANYCODE,anl.SAMPLETYPE,anl.COMPANYLAB,anl.CYLNO,anl.CONTAINERCODE,anl.DATEGCANALYSIS,anl.DATEREPORT,anl.SAMPLENAME,anl.FIE"...,
>   commandTag = 0x828f32 "SELECT", stmts = 0x2d145e8, cplan = 0x0,
>   portalParams = 0x0, strategy = PORTAL_ONE_SELECT, cursorOptions = 52,
>   status = PORTAL_ACTIVE, portalPinned = 0 '\000', queryDesc = 0x0,
>   tupDesc = 0x2e7fc50, formats = 0x0, holdStore = 0x2d0d430,
>   holdContext = 0x2f4e290, atStart = 0 '\000', atEnd = 0 '\000',
>   posOverflow = 0 '\000', portalPos = 10, creation_time = 418933112934458,
>   visible = 1 '\001'}
> (gdb)

Ok, I might be seeing what's going on here. Could you go to 'printtup'
and print *myState, *myState->attrinfo, *typeinfo?

Tom: It looks to me like printtup_prepare_info won't normally be called
in a held cursor. But if some concurrent DDL changed the number of
columns in typeinfo vs. those in the receiver, that could explain the
issue and why it's not seen all the time, right?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Tom Lane
I wrote:
> (Wanders away wondering just how much the regression tests exercise
> holdable cursors.)

And the answer is they're not testing this code path at all, because if
you do
DECLARE c CURSOR WITH HOLD FOR ...
FETCH ALL FROM c;
then the second query executes with a portal (and resource owner)
created to execute the FETCH command, not directly on the held portal.

After a little bit of thought I'm not sure it's even possible to
reproduce this problem with libpq, because it doesn't expose any way to
issue a bare protocol Execute command against a pre-existing portal.
(I had thought psqlODBC went through libpq, but maybe it's playing some
games here.)
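
(For reference, the sequence the ODBC driver presumably issues -- inferred from
the backtrace and portal dump elsewhere in this thread; the cursor name and row
count come from the crashed backend:)

    DECLARE "SQL_CUR017CB040" CURSOR WITH HOLD FOR SELECT ... ;
    COMMIT;   -- PreCommit_Portals() nulls the held portal's resowner
    -- followed by a bare protocol-level Execute("SQL_CUR017CB040", max_rows = 10)
    -- directly against the held portal, which has no plain-SQL equivalent.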

Anyway, I'm thinking the appropriate fix might be like this

-   CurrentResourceOwner = portal->resowner;
+   if (portal->resowner)
+   CurrentResourceOwner = portal->resowner;

in several places in pquery.c; that is, keep using
TopTransactionResourceOwner if the portal doesn't have its own.

A more general but probably much more invasive solution would be to fake
up an intermediate portal when pulling data from a held portal, to
more closely approximate the explicit-FETCH case.

regards, tom lane




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Tom Lane
Andres Freund writes:
> Ok, so while we have a valid resource owner up to here, portal->resowner
> is NULL.

Yeah, that's what I'm guessing.  Given the exposed reference to a cursor
WITH HOLD, it seems likely that the reason the portal has no resowner
is that PreCommit_Portals() got rid of it when the cursor was held.
However, if that were the explanation, it's not clear how come this
isn't falling over all the time for everybody who uses holdable cursors.
Any cache flush happening just before fetching from a holdable cursor
ought to do it ...

(Wanders away wondering just how much the regression tests exercise
holdable cursors.)

regards, tom lane




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Joshua Berry
> Ok, so while we have a valid resource owner up to here, portal->resowner
> is NULL.
>
> Could you 'up' until you're in this frame and then do 'print *portal'?
>

#7  0x00638c78 in PortalRun (portal=0x2aa9360, count=10,
isTopLevel=1 '\001', dest=0x2a50c40, altdest=0x2a50c40,
completionTag=0x7fffd193d0f0 "") at pquery.c:787
787 nprocessed =
PortalRunSelect(portal, true, count, dest);
(gdb) print *portal
$1 = {name = 0x2a3fe78 "SQL_CUR017CB040", prepStmtName = 0x0,
  heap = 0x2aca6f0, resowner = 0x0, cleanup = 0x535d30 ,
  createSubid = 0,
  sourceText = 0x2d141b8 "declare \"SQL_CUR017CB040\" cursor with hold for
SELECT
anl.LAB,anl.JOB,anl.COMPANYCODE,anl.SAMPLETYPE,anl.COMPANYLAB,anl.CYLNO,anl.CONTAINERCODE,anl.DATEGCANALYSIS,anl.DATEREPORT,anl.SAMPLENAME,anl.FIE"...,
  commandTag = 0x828f32 "SELECT", stmts = 0x2d145e8, cplan = 0x0,
  portalParams = 0x0, strategy = PORTAL_ONE_SELECT, cursorOptions = 52,
  status = PORTAL_ACTIVE, portalPinned = 0 '\000', queryDesc = 0x0,
  tupDesc = 0x2e7fc50, formats = 0x0, holdStore = 0x2d0d430,
  holdContext = 0x2f4e290, atStart = 0 '\000', atEnd = 0 '\000',
  posOverflow = 0 '\000', portalPos = 10, creation_time = 418933112934458,
  visible = 1 '\001'}
(gdb)


Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Andres Freund
On 2013-04-10 16:53:12 -0500, Joshua Berry wrote:
> #7  0x00638c78 in PortalRun (portal=0x2aa9360, count=10,
> isTopLevel=1 '\001', dest=0x2a50c40, altdest=0x2a50c40,
> completionTag=0x7fffd193d0f0 "") at pquery.c:787
> save_exception_stack = 0x7fffd193cfe0
> save_context_stack = 0x0
> local_sigjmp_buf = {{__jmpbuf = {140736709513424,
> 8412518331319730861,
>   44733280, 0, 4294967295, 3, -8412469024494566739,
>   8412517527857939117}, __mask_was_saved = 0, __saved_mask = {
>   __val = {76, 44316880, 5, 0, 44804496, 767, 5, 0, 4294967295,
> 3,
> 5565893, 11495616, 4746857, 140736709513424,
> 140736709513424,
> 0
> result = 
> nprocessed = 
> saveTopTransactionResourceOwner = 0x2a439e0
> saveTopTransactionContext = 0x2a438d0
> saveActivePortal = 0x0
> saveResourceOwner = 0x2a439e0
> savePortalContext = 0x0
> saveMemoryContext = 0x2a438d0
> __func__ = "PortalRun"

Ok, so while we have a valid resource owner up to here, portal->resowner
is NULL.

Could you 'up' until you're in this frame and then do 'print *portal'?

Thanks,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Joshua Berry
Hi Andres!

On Wed, Apr 10, 2013 at 4:49 PM, Andres Freund wrote:

> Could you show the output of 'bt full'?
>

Program terminated with signal 11, Segmentation fault.
#0  ResourceOwnerEnlargeCatCacheRefs (owner=0x0) at resowner.c:605
605 if (owner->ncatrefs < owner->maxcatrefs)
(gdb) bt
#0  ResourceOwnerEnlargeCatCacheRefs (owner=0x0) at resowner.c:605
#1  0x006e1382 in SearchCatCache (cache=0x2a1aad0,
v1=, v2=,
v3=, v4=) at catcache.c:1143
#2  0x006ec69e in getTypeOutputInfo (type=20, typOutput=0x2a8ac30,
typIsVarlena=0x2a8ac38 "") at lsyscache.c:2438
#3  0x00459027 in printtup_prepare_info (myState=0x2a50c40,
typeinfo=0x2e7fc50, numAttrs=42) at printtup.c:263
#4  0x004593c4 in printtup (slot=0x2d14618, self=0x2a50c40)
at printtup.c:297
#5  0x006376ca in RunFromStore (portal=0x2aa9360,
direction=, count=10, dest=0x2a50c40) at
pquery.c:1121
#6  0x006377b2 in PortalRunSelect (portal=0x2aa9360,
forward=, count=10, dest=0x2a50c40) at pquery.c:939
#7  0x00638c78 in PortalRun (portal=0x2aa9360, count=10,
isTopLevel=1 '\001', dest=0x2a50c40, altdest=0x2a50c40,
completionTag=0x7fffd193d0f0 "") at pquery.c:787
#8  0x0063661e in exec_execute_message (argc=,
argv=, dbname=0x298a150 "[dbname]",
username=) at postgres.c:1965
#9  PostgresMain (argc=, argv=,
dbname=0x298a150 "[dbname]", username=)
at postgres.c:4026
#10 0x005f6c61 in BackendRun () at postmaster.c:3612
#11 BackendStartup () at postmaster.c:3302
#12 ServerLoop () at postmaster.c:1466
#13 0x005f9431 in PostmasterMain (argc=,
argv=) at postmaster.c:1127
#14 0x0059a9b0 in main (argc=5, argv=0x2988480) at main.c:199
(gdb) bt full
#0  ResourceOwnerEnlargeCatCacheRefs (owner=0x0) at resowner.c:605
newmax = 
#1  0x006e1382 in SearchCatCache (cache=0x2a1aad0,
v1=, v2=,
v3=, v4=) at catcache.c:1143
res = 1 '\001'
cur_skey = {{sk_flags = 0, sk_attno = -2, sk_strategy = 3,
sk_subtype = 0, sk_collation = 0, sk_func = {
  fn_addr = 0x686640 , fn_oid = 184, fn_nargs = 2,
  fn_strict = 1 '\001', fn_retset = 0 '\000', fn_stats = 2
'\002',
  fn_extra = 0x0, fn_mcxt = 0x298b108, fn_expr = 0x0},
sk_argument = 20}, {sk_flags = 0, sk_attno = 0, sk_strategy = 0,
sk_subtype = 0, sk_collation = 0, sk_func = {fn_addr = 0,
  fn_oid = 0, fn_nargs = 0, fn_strict = 0 '\000',
  fn_retset = 0 '\000', fn_stats = 0 '\000', fn_extra = 0x0,
  fn_mcxt = 0x0, fn_expr = 0x0}, sk_argument = 0}, {sk_flags =
0,
sk_attno = 0, sk_strategy = 0, sk_subtype = 0, sk_collation = 0,
sk_func = {fn_addr = 0, fn_oid = 0, fn_nargs = 0,
  fn_strict = 0 '\000', fn_retset = 0 '\000', fn_stats = 0
'\000',
  fn_extra = 0x0, fn_mcxt = 0x0, fn_expr = 0x0}, sk_argument =
0},
  {sk_flags = 0, sk_attno = 0, sk_strategy = 0, sk_subtype = 0,
sk_collation = 0, sk_func = {fn_addr = 0, fn_oid = 0,
  fn_nargs = 0, fn_strict = 0 '\000', fn_retset = 0 '\000',
  fn_stats = 0 '\000', fn_extra = 0x0, fn_mcxt = 0x0,
  fn_expr = 0x0}, sk_argument = 0}}
hashValue = 2280326203
hashIndex = 59
elt = 0x7fc28648fb88
ct = 0x7fc28648fb78
relation = 
scandesc = 
ntp = 
#2  0x006ec69e in getTypeOutputInfo (type=20, typOutput=0x2a8ac30,
typIsVarlena=0x2a8ac38 "") at lsyscache.c:2438
typeTuple = 
pt = 
__func__ = "getTypeOutputInfo"
#3  0x00459027 in printtup_prepare_info (myState=0x2a50c40,
typeinfo=0x2e7fc50, numAttrs=42) at printtup.c:263
thisState = 
format = 
formats = 0x0
i = 
__func__ = "printtup_prepare_info"
#4  0x004593c4 in printtup (slot=0x2d14618, self=0x2a50c40)
at printtup.c:297
typeinfo = 
myState = 0x2a50c40
buf = {data = 0x2aa9360 "x\376\243\002", len = 47269400, maxlen = 0,
  cursor = 47269400}
natts = 42
i = 
#5  0x006376ca in RunFromStore (portal=0x2aa9360,
direction=, count=10, dest=0x2a50c40) at
pquery.c:1121
oldcontext = 0x2aca6f0
ok = 
current_tuple_count = 0
slot = 0x2d14618
#6  0x006377b2 in PortalRunSelect (portal=0x2aa9360,
forward=, count=10, dest=0x2a50c40) at pquery.c:939
queryDesc = 0x0
direction = ForwardScanDirection
nprocessed = 
__func__ = "PortalRunSelect"
#7  0x00638c78 in PortalRun (portal=0x2aa9360, count=10,
isTopLevel=1 '\001', dest=0x2a50c40, altdest=0x2a50c40,
completionTag=0x7fffd193d0f0 "") at pquery.c:787
save_exception_stack = 0x7fffd193cfe0
save_context_stack = 0x0
local_sigjmp_buf = {{__jmpbuf = {140736709513424,
8412518331319730861,
  44733280, 0, 42949

Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Andres Freund
Hi,

On 2013-04-10 16:34:40 -0500, Joshua Berry wrote:
> Below are the relevant details. I'm not terribly savvy with gdb, so please
> let me know what else I could/should examine from the core dump, as well as
> anything else about the system/configuration.


> Kind Regards,
> -Joshua
> 
> #NB: some info in square brackets has been [redacted]
> # grep postmaster /var/log/messages
> Apr 10 13:18:32 [hostname] kernel: postmaster[17356]: segfault at 40 ip
> 00710e2e sp 7fffd193ca70 error 4 in postgres[40+4ea000]
> 
> gdb /usr/pgsql-9.1/bin/postmaster -c core.17356
> [...loading/reading symbols...]
> Core was generated by `postgres: [username] [databasename]
> [client_ipaddress](1500) SELECT  '.
> Program terminated with signal 11, Segmentation fault.
> #0  ResourceOwnerEnlargeCatCacheRefs (owner=0x0) at resowner.c:605
> 605 if (owner->ncatrefs < owner->maxcatrefs)
> (gdb) q

Could you show the output of 'bt full'?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Segmentation fault with core dump

2013-04-10 Thread Alvaro Herrera
Joshua Berry escribió:

> gdb /usr/pgsql-9.1/bin/postmaster -c core.17356
> [...loading/reading symbols...]
> Core was generated by `postgres: [username] [databasename]
> [client_ipaddress](1500) SELECT  '.
> Program terminated with signal 11, Segmentation fault.
> #0  ResourceOwnerEnlargeCatCacheRefs (owner=0x0) at resowner.c:605
> 605 if (owner->ncatrefs < owner->maxcatrefs)
> (gdb) q

Run "bt" here please (maybe "bt full" is even more illuminating)

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[GENERAL] Segmentation fault with core dump

2013-04-10 Thread Joshua Berry
Hi Group,

I'm using PG 9.1.9 with a client application using various versions of the
psqlODBC driver on Windows. Cursors are used heavily, as well as some
pretty heavy trigger queries on db writes which update several materialized
views.

The server has 48GB RAM installed, PG is configured for 12GB shared
buffers, 8MB max_stack_depth, 32MB temp_buffers, and 2MB work_mem. Most of
the other settings are defaults.

The server will seg fault anywhere from every few days to once every two weeks. Each time
one of the postgres server processes seg faults, the server gets terminated
by signal 11, restarts in recovery for up to 30 seconds, after which time
it accepts connections as if nothing ever happened. Unfortunately all the
open cursors and connections are lost, so the client apps are left in a bad
state.

Seg faults have also occurred with PG 8.4. However, that server's DELL OMSA
(hardware health monitoring system) began to report RAM parity errors, so I
figured that the seg faults were due to hardware issues and I did not
configure the system to save core files in order to debug. I migrated the
database to a server running PG9.1 with the hopes that the problem would
disappear, but it has not. So now I'm starting to debug.

Below are the relevant details. I'm not terribly savvy with gdb, so please
let me know what else I could/should examine from the core dump, as well as
anything else about the system/configuration.

Kind Regards,
-Joshua

#NB: some info in square brackets has been [redacted]
# grep postmaster /var/log/messages
Apr 10 13:18:32 [hostname] kernel: postmaster[17356]: segfault at 40 ip
00710e2e sp 7fffd193ca70 error 4 in postgres[40+4ea000]

gdb /usr/pgsql-9.1/bin/postmaster -c core.17356
[...loading/reading symbols...]
Core was generated by `postgres: [username] [databasename]
[client_ipaddress](1500) SELECT  '.
Program terminated with signal 11, Segmentation fault.
#0  ResourceOwnerEnlargeCatCacheRefs (owner=0x0) at resowner.c:605
605 if (owner->ncatrefs < owner->maxcatrefs)
(gdb) q

# uname -a
Linux [hostname] 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT
2013 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
Scientific Linux release 6.3 (Carbon)

# psql -U jberry
psql (9.1.9)
Type "help" for help.

jberry=# select version();
   version
------------------------------------------------------------------------------------------
 PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-3), 64-bit
(1 row)


[GENERAL] Creating a somewhat complicated cartesian result

2013-04-10 Thread Nick

I have a table called "where_clauses" which contains a bunch of conditions I 
would like to use for building dynamic queries. I would like to know all 
possible queries I could perform using this data. Here is my "where_clauses" 
data...

INSERT INTO where_clauses (id,col_name,clause) VALUES (1,'x','x < 1');
INSERT INTO where_clauses (id,col_name,clause) VALUES (2,'x','x < 2');
INSERT INTO where_clauses (id,col_name,clause) VALUES (3,'x','x < 3');
INSERT INTO where_clauses (id,col_name,clause) VALUES (4,'y','y < 1');
INSERT INTO where_clauses (id,col_name,clause) VALUES (5,'y','y < 2');
INSERT INTO where_clauses (id,col_name,clause) VALUES (6,'y','y < 3');
INSERT INTO where_clauses (id,col_name,clause) VALUES (7,'z','z < 1');

Ideally I would like the "all possible queries" in the form of an array of ids. 
For example, the "all possible queries" result would be...

{1}
{1,4}
{1,4,7}
{1,5}
{1,5,7}
{1,6}
{1,6,7}
{2}
{2,4}
{2,4,7}
{2,5}
{2,5,7}
{2,6}
{2,6,7}
{3}
{3,4}
{3,4,7}
{3,5}
{3,5,7}
{3,6}
{3,6,7}
{4}
{4,7}
{5}
{5,7}
{6}
{6,7}
{7}

Note that I'm throwing out joins on equal columns. What is a query or function
that would give all possible where_clauses?
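
(One possible shape for such a query -- a sketch assuming PostgreSQL 8.4 or
later for WITH RECURSIVE; it keeps the ids ascending and uses each col_name at
most once per combination:)

    WITH RECURSIVE combos (ids, cols, last_id) AS (
        SELECT ARRAY[id], ARRAY[col_name], id
        FROM where_clauses
      UNION ALL
        SELECT c.ids || w.id, c.cols || w.col_name, w.id
        FROM combos c
        JOIN where_clauses w
          ON w.id > c.last_id
         AND w.col_name <> ALL (c.cols)
    )
    SELECT ids FROM combos ORDER BY ids;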




Re: [GENERAL] How to convert US date format to European date format ?

2013-04-10 Thread John R Pierce

On 4/10/2013 12:35 PM, Thomas Kellerer wrote:

But the *display* is done by the client.
And if Postgres (the server) did the conversion, I would not be able 
to see a different date formatting in e.g. a JDBC based tool. So I 
guess psql is reading that database/server setting.


psql is letting postgres send it as text rather than the binary internal 
date format.


JDBC has its own extensive date munging with its own database-independent
rules. In particular, it does NOT use libpq.





--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL] SOLVED Statistics query

2013-04-10 Thread Steve Crawford

On 04/10/2013 10:51 AM, Steve Crawford wrote:

...


Given a point in time I would like to:

1. Identify all distinct unit_ids with an entry that exists in
the preceding hour then

2. Count both the total events and sum the status=1 events for
the most recent 50 events for each unit_id that fall within a
limited period (e.g. don't look at data earlier than midnight).
So unit_id 60 might have 50 events in the last 15 minutes while
unit_id 4 might have only 12 events after midnight.


...

Guess I needed to wait for the coffee to absorb. I've come up with an 
initial working solution (perhaps excess use of CTE but it's useful for 
testing/debugging over different portions of collected data):


with

report_time as (
select
1365526800::int as list_end
),

report_ranges as (
select
extract(epoch from date_trunc('day', abstime(list_end)))::int as day_start,
greatest(list_end-3600, extract(epoch from date_trunc('day', abstime(list_end)))::int) as list_start,
list_end
from
report_time
),

today_events as (
select
unit_id,
event_time,
status
from
event_log d,
report_ranges r
where
d.event_time >= r.day_start and
d.event_time <= r.list_end
),

unit_id_list as (
select
distinct unit_id,
coalesce((select
 i.event_time
 from
 today_events i
 where
 i.unit_id = o.unit_id and
 i.event_time <= r.list_end
 order by
 event_time desc
 limit 1
 offset 49), r.day_start) as first_event
from
event_log o,
report_ranges r
where
event_time between r.list_start and r.list_end
)

select
unit_id,
(select
 count(*)
 from
 today_events ii
 where
 ii.unit_id = oo.unit_id and
 ii.event_time >= oo.first_event) as events,
(select
 sum (case when status = -6 then 1 else 0 end)
 from
 today_events ii
 where
 ii.unit_id = oo.unit_id and
 ii.event_time >= oo.first_event) as live_answer
from
unit_id_list oo
order by
unit_id
;

Cheers,
Steve



[GENERAL] Announcement: German-speaking PostgreSQL Conference 2013

2013-04-10 Thread Andreas 'ads' Scherbaum


PGConf.DE 2013 is the sequel of the highly successful German-speaking 
PostgreSQL Conference 2011. We maintain the proven concept: November 
8th, 2013, the Rhineland Industrial Museum in Oberhausen.


http://2013.pgconf.de/

The call for papers will open in a few days. Registration for the 
conference will be possible well in advance. Tickets may be purchased 
online in advance as well as at the entrance. Due to limited capacity, 
early booking is advisable. For sponsors, we have put together a package
that includes, among other things, a number of discounted tickets. More in
the Call for Sponsors in a separate e-mail.


Following this conference, the OpenRheinRuhr will take place at the same 
venue over the weekend:


http://openrheinruhr.de/

--
Andreas 'ads' Scherbaum
German PostgreSQL User Group
European PostgreSQL User Group - Board of Directors
Volunteer Regional Contact, Germany - PostgreSQL Project




[GENERAL] Call for Sponsors: German-speaking PostgreSQL Conference 2013

2013-04-10 Thread Andreas 'ads' Scherbaum


The German-speaking PostgreSQL Conference 2013 is an excellent way for 
businesses to reach users and developers, or to give something back to 
the community. If you are interested in becoming a sponsor, we have put 
together several options for you:


http://2013.pgconf.de/cfs

The offers for Gold, Silver and Bronze sponsorship are described on the 
website. Please contact us if you are interested in other forms of 
sponsorship.


--
Andreas 'ads' Scherbaum
German PostgreSQL User Group
European PostgreSQL User Group - Board of Directors
Volunteer Regional Contact, Germany - PostgreSQL Project




Re: [GENERAL] How to convert US date format to European date format ?

2013-04-10 Thread Thomas Kellerer

John R Pierce wrote on 10.04.2013 21:28:

On 4/10/2013 6:15 AM, Thomas Kellerer wrote:

psql (one of the possible client applications) uses the "datestyle"
parameter to decide on how to format a date column when displaying
it.

If you change the "datestyle" parameter in postgresql.conf, it will
influence the way psql displays the date values. Probably pgAdmin
will also check that setting (as I don't use pgAdmin I can't really
tell).


psql doesn't use that; postgres itself does. It can be set on the
fly with SET on a per-connection basis, or with ALTER DATABASE on a
per-database basis.
 


But the *display* is done by the client.
And if Postgres (the server) did the conversion, I would not be able to see a 
different date formatting in e.g. a JDBC based tool. So I guess psql is reading 
that database/server setting.







Re: [GENERAL] How to convert US date format to European date format ?

2013-04-10 Thread John R Pierce

On 4/10/2013 6:15 AM, Thomas Kellerer wrote:
psql (one of the possible client applications) uses the "datestyle" 
parameter to decide on how to format a date column when displaying it.


If you change the "datestyle" parameter in postgresql.conf, it will 
influence the way psql displays the date values. Probably pgAdmin will 
also check that setting (as I don't use pgAdmin I can't really tell). 


psql doesn't use that; postgres itself does. It can be set on the fly
with SET on a per-connection basis, or with ALTER DATABASE on a
per-database basis.
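
(For example -- "mydb" below is an illustrative database name:)

    SET datestyle = 'SQL,DMY';                      -- per connection
    ALTER DATABASE mydb SET datestyle = 'SQL,DMY';  -- per database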




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Statistics query

2013-04-10 Thread Steve Crawford

On 04/10/2013 10:31 AM, Chris Curvey wrote:


On Wed, Apr 10, 2013 at 12:30 PM, Steve Crawford wrote:


I'm seeking ideas on the best way to craft the following query.
I've stripped everything down to the bare essentials and
simplified it below.

Input data has a timestamp (actually an int received from the
system in the form of a Unix epoch), a unit identifier and a status:

 event_time | unit_id | status
------------+---------+--------
 1357056011 |  60 |  1
 1357056012 | 178 |  0
 1357056019 | 168 |  0
 1357056021 |   3 |  0
 1357056021 |   4 |  1
 1357056021 | 179 |  0
 1357056022 |   0 |  1
 1357056022 |   1 |  0
 1357056023 |   2 |  0
 1357056024 |   9 |  0
 1357056025 |   5 |  0
 1357056025 |   6 |  0
 1357056026 |   7 |  1
...

A given unit_id cannot have two events at the same time (enforced
by constraints).

Given a point in time I would like to:

1. Identify all distinct unit_ids with an entry that exists in the
preceding hour then

2. Count both the total events and sum the status=1 events for the
most recent 50 events for each unit_id that fall within a limited
period (e.g. don't look at data earlier than midnight). So unit_id
60 might have 50 events in the last 15 minutes while unit_id 4
might have only 12 events after midnight.

The output would look something like:

 unit_id | events | status_1_count
---------+--------+----------------
  1  | 50 | 34
  2  | 27 | 18
  1  | 50 | 34
  1  |  2 |  0
...

Each sub-portion is easy and while I could use external processing
or set-returning functions I was hoping first to find the
secret-sauce to glue everything together into a single query.

Cheers,
Steve


something like

select unit_id, count(*), sum(status)
from mytable a
where event_time >= [whatever unix epoch translates to "last midnight"]
and exists
(  select *
   from mytable b
   where b.unit_id = a.unit_id
   and b.event_time >= [unix epoch that translates to "one hour ago"])
group by unit_id;

1) I think figuring out the unix epoch should be reasonable...but I 
don't know how to do it off the top of my head.
2) I could completely be misunderstanding this.  I'm not sure why the 
example results would have unit id 1 repeated. (which my suggestion 
WON'T do)


Because I screwed up cutting and pasting to make an example. The unit_id 
in the output should, in fact, be distinct:


 unit_id | events | status_1_count
---------+--------+----------------
  1  | 50 | 34
  2  | 27 | 18
  3  | 50 | 34
  4  |  2 |  0

You are correct, epoch is easy:
abstime(epoch)
or
extract(epoch from timestamptz)
depending on which direction you are going; for an hour difference,
just subtract 3600.
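
For example, a quick sketch of both directions:

    SELECT abstime(1357056011);                    -- epoch -> timestamp
    SELECT extract(epoch FROM now())::int;         -- timestamp -> epoch
    SELECT extract(epoch FROM now())::int - 3600;  -- "one hour ago" as an epoch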


The solution, however, misses the important complicating gotcha. The 
units I want listed are only those that have had at least one event in 
the last hour. But for each such unit, I only want the statistics to 
reflect the most-recent 50 events (even if those events occurred earlier 
than the current hour) provided the event occurred on the current date. 
So the events column can never be less than 1 nor more than 50.


For example...

One unit might have a single event at the start of the last hour but 49 
more in the preceding 10 minutes. I want to see that unit and the stats 
for those 50 events.


Same thing if a unit has 50 events clustered at the end of an hour - I 
don't want the earlier ones.


Another might have 50 events early in the day but none this hour. I 
don't want to see that one.


But I do want to see the one that had an event in the last hour late in 
the day along with the 48 other events that have accumulated since midnight.


Cheers,
Steve


Re: [GENERAL] Statistics query

2013-04-10 Thread Chris Curvey
On Wed, Apr 10, 2013 at 12:30 PM, Steve Crawford <
scrawf...@pinpointresearch.com> wrote:

> I'm seeking ideas on the best way to craft the following query. I've
> stripped everything down to the bare essentials and simplified it below.
>
> Input data has a timestamp (actually an int received from the system in
> the form of a Unix epoch), a unit identifier and a status:
>
>  event_time | unit_id | status
> ------------+---------+--------
>  1357056011 |  60 |  1
>  1357056012 | 178 |  0
>  1357056019 | 168 |  0
>  1357056021 |   3 |  0
>  1357056021 |   4 |  1
>  1357056021 | 179 |  0
>  1357056022 |   0 |  1
>  1357056022 |   1 |  0
>  1357056023 |   2 |  0
>  1357056024 |   9 |  0
>  1357056025 |   5 |  0
>  1357056025 |   6 |  0
>  1357056026 |   7 |  1
> ...
>
> A given unit_id cannot have two events at the same time (enforced by
> constraints).
>
> Given a point in time I would like to:
>
> 1. Identify all distinct unit_ids with an entry that exists in the
> preceding hour then
>
> 2. Count both the total events and sum the status=1 events for the most
> recent 50 events for each unit_id that fall within a limited period (e.g.
> don't look at data earlier than midnight). So unit_id 60 might have 50
> events in the last 15 minutes while unit_id 4 might have only 12 events
> after midnight.
>
> The output would look something like:
>
>  unit_id | events | status_1_count
> ---------+--------+----------------
>   1  | 50 | 34
>   2  | 27 | 18
>   1  | 50 | 34
>   1  |  2 |  0
> ...
>
> Each sub-portion is easy and while I could use external processing or
> set-returning functions I was hoping first to find the secret-sauce to glue
> everything together into a single query.
>
> Cheers,
> Steve
>

something like

select unit_id, count(*), sum(status)
from mytable a
where event_time >= [whatever unix epoch translates to "last midnight"]
and exists
(  select *
   from mytable b
   where b.unit_id = a.unit_id
   and b.event_time >= [unix epoch that translates to "one hour ago"])
group by unit_id;

1) I think figuring out the unix epoch should be reasonable...but I don't
know how to do it off the top of my head.
2) I could completely be misunderstanding this.  I'm not sure why the
example results would have unit id 1 repeated. (which my suggestion WON'T
do)


[GENERAL] Statistics query

2013-04-10 Thread Steve Crawford
I'm seeking ideas on the best way to craft the following query. I've 
stripped everything down to the bare essentials and simplified it below.


Input data has a timestamp (actually an int received from the system in 
the form of a Unix epoch), a unit identifier and a status:


 event_time | unit_id | status
------------+---------+--------
 1357056011 |  60 |  1
 1357056012 | 178 |  0
 1357056019 | 168 |  0
 1357056021 |   3 |  0
 1357056021 |   4 |  1
 1357056021 | 179 |  0
 1357056022 |   0 |  1
 1357056022 |   1 |  0
 1357056023 |   2 |  0
 1357056024 |   9 |  0
 1357056025 |   5 |  0
 1357056025 |   6 |  0
 1357056026 |   7 |  1
...

A given unit_id cannot have two events at the same time (enforced by 
constraints).


Given a point in time I would like to:

1. Identify all distinct unit_ids with an entry that exists in the 
preceding hour then


2. Count both the total events and sum the status=1 events for the most 
recent 50 events for each unit_id that fall within a limited period 
(e.g. don't look at data earlier than midnight). So unit_id 60 might 
have 50 events in the last 15 minutes while unit_id 4 might have only 12 
events after midnight.


The output would look something like:

 unit_id | events | status_1_count
---------+--------+----------------
  1  | 50 | 34
  2  | 27 | 18
  1  | 50 | 34
  1  |  2 |  0
...

Each sub-portion is easy and while I could use external processing or 
set-returning functions I was hoping first to find the secret-sauce to 
glue everything together into a single query.


Cheers,
Steve








Re: [GENERAL] how to create materialized view in postgresql 8.3

2013-04-10 Thread Julian
On 10/04/13 23:33, Vincent Veyron wrote:
> On Monday, April 8, 2013 at 09:36 -0700, Mike Christensen wrote:
>> This is the number one requested feature on Uservoice:
>>
>>
>> http://postgresql.uservoice.com/forums/21853-general/suggestions/247548-materialized-views
>>
>>
> 
> I find this rather surprising, considering the fact that a properly
> tuned database will return queries over millions of rows and multiple
> joins in milliseconds, given the proper hardware.
> 
> I can see how a data warehouse with terabytes of data can make use of
> them, but that is hardly a common situation. It seems to me many of
> these people clamouring for materialized views would be better off
> simply using a proper data structure (and taking a few SQL courses).
> 
> Am I misguided?
> 

There are database and application systems (literally everywhere on the web,
IMO) where people think that throwing extra hardware at a problem will
solve what proper caching solutions would achieve with no upgrades at all.

IMO, for most things "web", data is retrieved more than it is set or
modified.

MVs, which cache a query result, will always perform better than a query
(VIEW), and MVs and tablespaces seem to be made for each other.

As for proper data structures, for whatever reason (migrating,
upgrading, maintaining) really bad query code exists (including mine).

Jules.




Re: [GENERAL] Characters

2013-04-10 Thread P. Broennimann
Hi Adrian

Thanks a lot! After spending the day checking all sorts of things you saved
my day :)

-> I just added the unit 'cwstring' and now it works.

Thanks & good day

Cheers,
Peter

--
Peter Broennimann
Untergasse 11 a
CH-8476 Unterstammheim
Switzerland
--


2013/4/10 Adrian Klaver 

> On 04/10/2013 02:53 AM, P. Broennimann wrote:
>
>> Hi there
>>
>> I am using PG 9.2 and 'pg_database' shows 'en_US.UTF-8'.
>>
>> I have a text 'Piqué' but this is shown in my application as 'Piqu?' so
>> I was wondering where the problem is?
>>
>> I am using Devart's 'PostgreSQL Data Access Components' to access PG
>> from FreePascal.
>>
>> In my FreePascal code I do use AnsiStrings and I cast the database
>> results/text 'UTF8ToAnsi(...)'
>>
>> Thanks for a hint!
>>
>
> Just guessing, but from FreePascal docs:
>
> http://www.freepascal.org/docs-html/rtl/system/utf8toansi.html
>
> "
> Description
>
> Utf8ToAnsi converts a utf8-encoded unicode string to an ansistring. It
> converts the string to a widestring and then converts the widestring to an
> ansistring.
>
> For this function to work, a widestring manager must be installed.
> "
>
> Is there a widestring manager?
>
>
> http://www.freepascal.org/docs-html/rtl/system/setwidestringmanager.html
>
>
>> Cheers,
>> P.
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] how to find which tables required indexes in postgresql

2013-04-10 Thread Shaun Thomas

On 04/10/2013 06:31 AM, Zahid Quadri wrote:


please suggest if there is any way which i can find which tables need
indexes in postgresql.


If you have 8.4 or newer, you can look in pg_stat_statements for queries 
that use a lot of time. Check the where clauses for columns or 
transforms that can be indexed.


Of course, you have to install it first. Take a look here:

http://www.postgresql.org/docs/8.4/static/pgstatstatements.html
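
(For instance -- a minimal sketch against the pg_stat_statements view once the
module is installed:)

    SELECT query, calls, total_time
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;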

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] Characters

2013-04-10 Thread Adrian Klaver

On 04/10/2013 02:53 AM, P. Broennimann wrote:

Hi there

I am using PG 9.2 and 'pg_database' shows 'en_US.UTF-8'.

I have a text 'Piqué' but this is shown in my application as 'Piqu?' so
I was wondering where the problem is?

I am using Devart's 'PostgreSQL Data Access Components' to access PG
from FreePascal.

In my FreePascal code I do use AnsiStrings and I cast the database
results/text 'UTF8ToAnsi(...)'

Thanks for a hint!


Just guessing, but from FreePascal docs:

http://www.freepascal.org/docs-html/rtl/system/utf8toansi.html

"
Description

Utf8ToAnsi converts a utf8-encoded unicode string to an ansistring. It
converts the string to a widestring and then converts the widestring to 
an ansistring.


For this function to work, a widestring manager must be installed.
"

Is there a widestring manager?


http://www.freepascal.org/docs-html/rtl/system/setwidestringmanager.html



Cheers,
P.





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] how to find which tables required indexes in postgresql

2013-04-10 Thread Michael Paquier
On Wed, Apr 10, 2013 at 10:19 PM, JotaComm wrote:

> Hello,
>
>
> 2013/4/10 Thomas Kellerer 
>
>> Zahid Quadri, 10.04.2013 13:31:
>>
>>  hi,
>>>
>>> please suggest if there is any way in which I can find which tables need
>>> indexes in PostgreSQL.
>>>
>>
> You have some possibilities:
>
> - the log file (slow queries)
>
> - statistics with old information (see the ANALYZE command)
>
> - statistics tables, for example: pg_stat_user_tables
>
> You can run this SQL:
>
> SELECT pg_stat_user_tables.schemaname,
> pg_stat_user_tables.relname,
> pg_stat_user_tables.seq_scan,
> pg_stat_user_tables.seq_tup_read,
> pg_stat_user_tables.idx_scan,
> pg_stat_user_tables.idx_tup_fetch
> FROM pg_stat_user_tables;
>
> If you have a big value in the seq_scan column compared to the idx_scan column
> (small value), this indicates that you probably need to create an index on
> some column, but you need to discover which column needs the index (the log
> file is a good indication).
>
There is also an online tool that can help you determine the slow parts
of a query plan:
http://explain.depesz.com/

This is perhaps simpler than reading raw EXPLAIN ANALYZE output, and it
will help you catch which tables need indexing, or perhaps partial
indexing.
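
(Typically you run something like the following -- the query is illustrative --
and paste the complete output into the site:)

    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;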
-- 
Michael


Re: [GENERAL] how to create materialized view in postgresql 8.3

2013-04-10 Thread Michael Paquier
On Wed, Apr 10, 2013 at 10:33 PM, Vincent Veyron wrote:

> I find this rather surprising, considering the fact that a properly
> tuned database will return queries over millions of rows and multiple
> joins in milliseconds, given the proper hardware.
>
> I can see how a datawharehouse with terrabytes of data can make use of
> them, but that is hardly a common situation. It seems to me many of
> these people clamouring for materialized views would be better off
> simply using a proprer data structure (and taking a few SQL courses).
>
> Am I misguided?

One use case of materialized views is as a cache for web applications, where
you can refresh them with complicated join queries running in the background.
You cannot do that with a view, as it would be necessary to reprocess the
query each time, and it is difficult to do with plain tables, as this could
complicate your database schema considerably.
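
(For 8.3, which has no built-in materialized views, a common hand-rolled
equivalent is a snapshot table plus a refresh function run in the background.
A minimal sketch -- the table and column names here are purely illustrative:)

    CREATE TABLE mv_order_totals AS
        SELECT customer_id, sum(amount) AS total
        FROM orders
        GROUP BY customer_id;

    CREATE OR REPLACE FUNCTION refresh_mv_order_totals() RETURNS void AS $$
    BEGIN
        TRUNCATE mv_order_totals;
        INSERT INTO mv_order_totals
            SELECT customer_id, sum(amount) AS total
            FROM orders
            GROUP BY customer_id;
    END;
    $$ LANGUAGE plpgsql;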
-- 
Michael


Re: [GENERAL] how to create materialized view in postgresql 8.3

2013-04-10 Thread Vincent Veyron
On Monday, April 8, 2013 at 09:36 -0700, Mike Christensen wrote:
> This is the number one requested feature on Uservoice:
> 
> 
> http://postgresql.uservoice.com/forums/21853-general/suggestions/247548-materialized-views
> 
> 

I find this rather surprising, considering the fact that a properly
tuned database will return queries over millions of rows and multiple
joins in milliseconds, given the proper hardware.

I can see how a data warehouse with terabytes of data can make use of
them, but that is hardly a common situation. It seems to me many of
these people clamouring for materialized views would be better off
simply using a proper data structure (and taking a few SQL courses).

Am I misguided?

-- 
Salutations, Vincent Veyron
http://gdlc.fr/logiciels
Applications de gestion des sinistres assurance et des contentieux juridiques





Re: [GENERAL] how to find which tables required indexes in postgresql

2013-04-10 Thread Chris Curvey
On Wed, Apr 10, 2013 at 9:19 AM, JotaComm wrote:

> Hello,
>
>
> 2013/4/10 Thomas Kellerer 
>
>> Zahid Quadri, 10.04.2013 13:31:
>>
>>  hi,
>>>
>>> please suggest if there is any way in which I can find which tables need
>>> indexes in PostgreSQL.
>>>
>>
> You have some possibilities:
>
> - the log file (slow queries)
>
> - statistics with old information (see the ANALYZE command)
>
> - statistics tables, for example: pg_stat_user_tables
>
> You can run this SQL:
>
> SELECT pg_stat_user_tables.schemaname,
> pg_stat_user_tables.relname,
> pg_stat_user_tables.seq_scan,
> pg_stat_user_tables.seq_tup_read,
> pg_stat_user_tables.idx_scan,
> pg_stat_user_tables.idx_tup_fetch
> FROM pg_stat_user_tables;
>
> If you have a big value in the seq_scan column compared to the idx_scan column
> (small value), this indicates that you probably need to create an index on
> some column, but you need to discover which column needs the index (the log
> file is a good indication).
>
>
I'll also give a shout-out for pgBadger.  It parses your slow query logs
and creates a nice summary of queries that could use some attention.


Re: [GENERAL] Characters

2013-04-10 Thread JotaComm
Hello,


2013/4/10 P. Broennimann 

> Hi there
>
> I am using PG 9.2 and 'pg_database' shows 'en_US.UTF-8'.
>
> I have a text 'Piqué' but this is shown in my application as 'Piqu?' so I
> was wondering where the problem is?
>

What is the result for these commands?

SHOW client_encoding;

and

SHOW server_encoding;

>
> I am using Devart's 'PostgreSQL Data Access Components' to access PG from
> FreePascal.
>
> In my FreePascal code I do use AnsiStrings and I cast the database
> results/text 'UTF8ToAnsi(...)'
>
> Thanks for a hint!
>
> Cheers,
> P.
>
>
>
Regards

-- 
JotaComm
http://jotacomm.wordpress.com


Re: [GENERAL] how to find which tables required indexes in postgresql

2013-04-10 Thread JotaComm
Hello,


2013/4/10 Thomas Kellerer 

> Zahid Quadri, 10.04.2013 13:31:
>
>  hi,
>>
>> please suggest if there is any way in which I can find which tables need
>> indexes in PostgreSQL.
>>
>
You have some possibilities:

- the log file (slow queries)

- statistics with old information (see the ANALYZE command)

- statistics tables, for example: pg_stat_user_tables

You can run this SQL:

SELECT pg_stat_user_tables.schemaname,
pg_stat_user_tables.relname,
pg_stat_user_tables.seq_scan,
pg_stat_user_tables.seq_tup_read,
pg_stat_user_tables.idx_scan,
pg_stat_user_tables.idx_tup_fetch
FROM pg_stat_user_tables;

If you have a big value in the seq_scan column compared to the idx_scan column
(small value), this indicates that you probably need to create an index on
some column, but you need to discover which column needs the index (the log
file is a good indication).






>>
> Tables don't need indexes. Queries do. You will need to show us the
> queries in question (e.g. those that are slow) in order to decide which
> index is helpful.
>
> Thomas
>


Regards

-- 
JotaComm
http://jotacomm.wordpress.com


Re: [GENERAL] How to convert US date format to European date format ?

2013-04-10 Thread Adrian Klaver

On 04/10/2013 06:03 AM, Condor wrote:

Hello ppl,

I have a database where the previous owner use US date format in date
fields:

2009-02-18

Is there a way to convert the fields to the European format 18-02-2009?
I mean existing dates in records. What will happen if I change the format
in postgresql.conf?


The dates are stored as a non formatted value. What you are seeing is 
the output formatting:


http://www.postgresql.org/docs/9.2/interactive/datatype-datetime.html#DATATYPE-DATETIME-OUTPUT

In other words the data in the fields will not be converted.

If you want a European style formatting then you can set the 
DateStyle='SQL,DMY':


test=> set datestyle='SQL,DMY';
SET
test=> SELECT now()::date;
    now
------------
 10/04/2013
(1 row)


This can be done as needed or by setting it in postgresql.conf




Cheers,
Hristo S.





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] How to convert US date format to European date format ?

2013-04-10 Thread Thomas Kellerer

Condor, 10.04.2013 15:03:

Hello ppl,

I have a database where the previous owner use US date format in date fields:

2009-02-18

Is there a way to convert the fields to the European format 18-02-2009?
I mean existing dates in records. What will happen if I change the format
in postgresql.conf?


A date column does NOT have a format.

The format is only applied by the client application when _displaying_ the date.
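
(You can also format a date explicitly in the query itself, independent of any
datestyle setting:)

    SELECT to_char(date '2009-02-18', 'DD-MM-YYYY');   -- yields 18-02-2009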

Btw, 2009-02-18 is not the US format, it's the ISO format; in the US the format
02/18/2012 is used.

psql (one of the possible client applications) uses the "datestyle" parameter 
to decide on how to format a date column when displaying it.

If you change the "datestyle" parameter in postgresql.conf, it will influence 
the way psql displays the date values. Probably pgAdmin will also check that setting (as 
I don't use pgAdmin I can't really tell).

Thomas






[GENERAL] How to convert US date format to European date format ?

2013-04-10 Thread Condor

Hello ppl,

I have a database where the previous owner used the US date format in date
fields:

2009-02-18

Is there a way to convert the fields to the European format 18-02-2009?
I mean existing dates in records. What will happen if I change the format
in postgresql.conf?


Cheers,
Hristo S.




Re: [GENERAL] how to find which tables required indexes in postgresql

2013-04-10 Thread Thomas Kellerer

Zahid Quadri, 10.04.2013 13:31:

hi,

please suggest if there is any way in which I can find which tables need indexes
in PostgreSQL.



Tables don't need indexes. Queries do. You will need to show us the queries in 
question (e.g. those that are slow) in order to decide which index is helpful.

Thomas







Re: [GENERAL] how to find which tables required indexes in postgresql

2013-04-10 Thread Atri Sharma


Sent from my iPad

On 10-Apr-2013, at 17:01, Zahid Quadri wrote:

> hi,
> 
> please suggest if there is any way in which I can find which tables need indexes
> in PostgreSQL.
> 
> 
> 
> 



If the table under consideration has lots of data, and queries on it are very
slow, you could consider adding an index to column(s) of that table.

Note, however, that making excessive indexes may be harmful as well, so please
be judicious while making them.
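
(A minimal sketch -- the table and column names are made up:)

    CREATE INDEX orders_customer_id_idx ON orders (customer_id);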

Regards,

Atri


[GENERAL] how to find which tables required indexes in postgresql

2013-04-10 Thread Zahid Quadri
hi,

please suggest if there is any way in which I can find which tables need indexes
in PostgreSQL.











Re: [GENERAL] optimizer's cost formulas

2013-04-10 Thread Daniel Bausch
Hi Sebastien,

> The tool to tweak the query planner parameters mentioned in the article
> sounds very useful. Can we download it somewhere, either as binary or
> source code ?

It is currently not publicly available, because it contains some
specifics (no secrets) of the experiments I did, the most prominent
being that it depends on a modified version of dbt3.  The changes to
dbt3 in turn are even harder to share, because the master of dbt3 got
heavily modified in the meantime, making a rebase hard.

If you are still interested, please contact me off list.

Regards,
Daniel Bausch

-- 
Daniel Bausch
Wissenschaftlicher Mitarbeiter
Technische Universität Darmstadt
Fachbereich Informatik
Fachgebiet Datenbanken und Verteilte Systeme

Hochschulstraße 10
64289 Darmstadt
Germany

Tel.: +49 6151 16 6706
Fax:  +49 6151 16 6229




[GENERAL] Characters

2013-04-10 Thread P. Broennimann
Hi there

I am using PG 9.2 and 'pg_database' shows 'en_US.UTF-8'.

I have the text 'Piqué', but it is shown in my application as 'Piqu?', so I am
wondering where the problem is.

I am using Devart's 'PostgreSQL Data Access Components' to access PG from
FreePascal.

In my FreePascal code I use AnsiStrings and convert the database results/text
with UTF8ToAnsi(...).
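
For reference, the encodings involved can be checked on the server side (a quick
sketch, assuming psql access):

    SHOW server_encoding;         -- encoding of the database itself
    SHOW client_encoding;         -- encoding PG assumes for this connection
    SELECT pg_client_encoding();  -- the same information, as a function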

Thanks for a hint!

Cheers,
P.


Re: [GENERAL] What is pg backend using memory for?

2013-04-10 Thread hubert depesz lubaczewski
On Wed, Apr 10, 2013 at 07:36:59AM +, Albe Laurenz wrote:
> What libraries are loaded in this backend (lsof)?
> Maybe it's something non-PostgreSQL that's hogging the memory.

I don't have this particular backend anymore, and I don't have lsof. But
in smaps there are libraries listed, so:

Still, there is a 51 MB non-shared block:
2ba63c797000-2ba63fa68000 rw-p 2ba63c797000 00:00 0
Size: 52036 kB
Rss:  51340 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean:0 kB
Private_Dirty:51340 kB
Swap: 0 kB
Pss:  51340 kB

As for libraries:
=> grep / smaps  | awk '{print $NF}' | sort | uniq
(deleted)
/lib64/ld-2.5.so
/lib64/libc-2.5.so
/lib64/libcom_err.so.2.1
/lib64/libcrypt-2.5.so
/lib64/libcrypto.so.0.9.8e
/lib64/libdl-2.5.so
/lib64/libkeyutils-1.2.so
/lib64/libm-2.5.so
/lib64/libnss_files-2.5.so
/lib64/libresolv-2.5.so
/lib64/libselinux.so.1
/lib64/libsepol.so.1
/lib64/libssl.so.0.9.8e
/opt/pgbrew/9.1.6/bin/postgres
/opt/pgbrew/9.1.6/lib/postgresql/auto_explain.so
/opt/pgbrew/9.1.6/lib/postgresql/plpgsql.so
/usr/lib64/gconv/gconv-modules.cache
/usr/lib64/libgssapi_krb5.so.2.2
/usr/lib64/libk5crypto.so.3.1
/usr/lib64/libkrb5.so.3.3
/usr/lib64/libkrb5support.so.0.1
/usr/lib64/libxml2.so.2.6.26
/usr/lib64/libz.so.1.2.3
/usr/lib/locale/locale-archive

the "(deleted)" is shared memory file.

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup advice

2013-04-10 Thread Gabriele Bartolini

Hi Johann,

On Wed, 10 Apr 2013 09:58:05 +0200, Johann Spies 
 wrote:

I can specify how many versions of the files should be kept on
Tivoli.


Another option you can evaluate is Barman (www.pgbarman.org), with its backup
catalogues, retention policies and archiving. We use it in some contexts together
with Tivoli (currently only through file system backup, but I guess that's what
you do anyway).


However, the approach is quite different: it is based on physical backups and
continuous archiving, which also allows you to perform point-in-time recovery.


Maybe it is worth evaluating.

Cheers,
Gabriele
--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it - www.2ndQuadrant.it


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup advice

2013-04-10 Thread Johann Spies
Thanks everybody for your valuable comments.

I can specify how many versions of the files should be kept on Tivoli.

The database will regularly get new data and there is a continuous process
of data cleaning. It is a database mainly for research purposes and a few
researchers are using it.

I will explore the options mentioned. After a first read it looks like continuing
with pg_dump is not a bad idea - maybe with some optimization (like using diffs).

Regards
Johann

On 9 April 2013 12:05, Eduardo Morras  wrote:

> On Mon, 8 Apr 2013 10:40:16 -0500
> Shaun Thomas  wrote:
>
> >
> > Anyone else?
> >
>
> If his db has low inserts/updates/deletes he can use diff between pg_dumps
> (with default -Fp) before compressing.
>
> ---   ---
> Eduardo Morras 
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


Re: [GENERAL] What is pg backend using memory for?

2013-04-10 Thread Albe Laurenz
hubert depesz lubaczewski wrote:
> So, I checked a backend on Linux, and found such thing:
> 2ba63c797000-2ba63fa68000 rw-p 2ba63c797000 00:00 0
> Size: 52036 kB
> Rss:  51336 kB
> Shared_Clean: 0 kB
> Shared_Dirty: 0 kB
> Private_Clean:0 kB
> Private_Dirty:51336 kB
> Swap: 0 kB
> Pss:  51336 kB
> 
> (this is part of /proc//smaps).
> 
> This is not shared memory, so it's local. It's not related to any files (in that
> case the first line would have a path to the file).
> 
> What's more - this backend was idle while the smaps copy was taken, and it's not
> the stats manager or anything like that.
> anything like this.
> 
> How can this be diagnosed, to find out why there is so much private
> memory?
> 
> In case it matters: it's pg 9.1.6 on linux 2.6.18-164.2.1.el5

What libraries are loaded in this backend (lsof)?
Maybe it's something non-PostgreSQL that's hogging the memory.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] After dump/restoring from 32bit 8.4-windows to 64bit 9.2.4-linux experiencing 10x slowdown on queries

2013-04-10 Thread Albe Laurenz
Rob Sargent wrote:
> On 04/09/2013 02:29 PM, Giovanni Martina wrote:
>> I'm trying to upgrade our database server from postgresql 32-bit 8.2.4
>> running on Windows Server 2008 to postgresql 64-bit 9.2.4 on ubuntu
>> server 12.04.02 LTS.
>>
>> I have dumped one of our databases from our windows server and restored
>> it on the postgres server running on ubuntu in order to test for
>> incompatibilities. But the thing I am noticing playing with pgAdmin is
>> that queries are being performed much more slowly on the linux server
>> compared to the old windows 2k8 server, even with linux running on a ssd
>> with more ram, faster cpu etc.
>>
>> I've tried running ANALYZE, VACUUM and combinations of these via pgAdmin
>> on the linux database, but queries consistently take 10x the amount of
>> time that they take on the windows server, so I'm obviously
>> missing something here.
>>
>> The dump is a simple pg_dump -F c -f data.backup, then using pg_restore
>> to restore in an empty database created with template0. What could be
>> causing the new database to perform so abysmal?

> connect with psql to your new database and run "REINDEX DATABASE <db name>;"

That would be pretty useless since the indexes have been
created recently.

I would try to identify the bottleneck: is it disk I/O, CPU
or something else? Also, turn on log_duration to see whether the
query takes long on the server or whether network or client
processing is part of the problem.
Is postgresql.conf identical on both machines?
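
For example (the query is a hypothetical stand-in for one of your slow ones):

    -- log server-side statement durations (SET requires superuser):
    SET log_duration = on;

    -- compare plan and timing on both servers:
    EXPLAIN ANALYZE SELECT count(*) FROM some_table;

If the EXPLAIN ANALYZE timings match on both servers but the client still feels
slow, the problem lies outside the server.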

For more specific help, you'll have to share more details
about your setup.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general