Re: [GENERAL] why autocommit mode is slow?

2011-04-07 Thread Scott Marlowe
On Thu, Apr 7, 2011 at 2:59 PM, Szymon Guz  wrote:
> Hi,
> this is maybe a stupid question, but I don't know how to explain to my
> coworkers why performing many inserts in autocommit mode is so much slower
> than making all of them in one transaction.
> Why is that so slow?

The real issue is that transactions have fairly high overhead.  In
autocommit mode, the cost of the transaction is much higher than that
of the individual insert, so each insert is relatively slow.  OTOH,
when inserting a dozen or a hundred or a thousand rows in one
transaction, the overhead of building up and tearing down the
transaction becomes smaller and smaller in comparison to the inserts.
The inserts themselves cost the same / take just as long either way,
but the transactional wrapping is paid for only once in the large
transaction scenario, while it's paid for every statement in
autocommit mode.

The good news is PostgreSQL has no real practical limit on transaction
size, and the theoretical limit is VERY large (around 2 billion
statements, I believe), so there are no errors about running out of
rollback space, etc.
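
A minimal psql sketch of the difference (the table name and row values are made up for illustration; timings will vary with hardware and settings):

```sql
CREATE TABLE autocommit_demo (n integer);

\timing on

-- Autocommit: each INSERT is its own transaction, so the transaction
-- build-up/tear-down (and commit flush) is paid on every statement.
INSERT INTO autocommit_demo VALUES (1);
INSERT INTO autocommit_demo VALUES (2);
-- ... repeated for as many rows as you care to test ...

-- One transaction: the same INSERTs, but the transactional wrapping
-- is paid for only once, at COMMIT.
BEGIN;
INSERT INTO autocommit_demo VALUES (1);
INSERT INTO autocommit_demo VALUES (2);
-- ... same statements as above ...
COMMIT;
```

For bulk loading, a single multi-row INSERT or COPY cuts the per-statement overhead even further.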

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] why autocommit mode is slow?

2011-04-07 Thread Craig Ringer

On 04/08/2011 04:59 AM, Szymon Guz wrote:

Hi,
this is maybe a stupid question, but I don't know how to explain to my
coworkers why performing many inserts in autocommit mode is so much
slower than making all of them in one transaction.

Why is that so slow?


Unless you have synchronous_commit disabled and/or a commit_delay set, 
each commit requires synchronization of all outstanding writes to the 
disk before the commit can return success. That stops the OS from 
intelligently reordering and batching the writes of multiple statements 
for better disk I/O efficiency. It also adds a pause before each 
statement can return, effectively adding latency between statements.


The trade-off you get is that after each statement, you know for certain 
that the results of that statement are on-disk and safe. That's not the 
case when you do them all in one transaction, or with synchronous_commit 
off.


BTW, the other area to look at with this is query batching. If the 
client isn't on the same host as the server or at least on a very low 
latency network, it can be much more efficient to batch queries together 
to minimize the number of network round trips required.
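
A hedged sketch of the knobs mentioned above (values are illustrative; turning synchronous_commit off trades a few seconds of durability for speed, but never risks data corruption):

```sql
-- Per session: COMMIT returns before the WAL is flushed to disk.
SET synchronous_commit TO off;

-- Or server-wide in postgresql.conf:
-- synchronous_commit = off
-- commit_delay = 1000      # microseconds; lets concurrent commits share a flush
```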


--
Craig Ringer



Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-07 Thread Rob Sargent



On 04/07/2011 03:46 PM, John R Pierce wrote:

On 04/07/11 1:42 PM, Shianmiin wrote:

Since the connection pool will be used by all tenants, eventually each
connection will hit all the tables/views.



don't all connections in a given pool have to use the same user
credentials? won't that be problematic for this architecture?



Typically one munges the connection to set an effective user inside the 
connection established by the "common" credentials.
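
One common way to do that in PostgreSQL is SET ROLE / RESET ROLE around each pool checkout (role names here are hypothetical; the pooled login role must be granted membership in each tenant role for this to work):

```sql
-- Connection was opened with the shared "common" credentials.
SET ROLE tenant_42;   -- subsequent statements run with tenant_42's privileges

-- ... run the tenant's queries ...

RESET ROLE;           -- return the session to the pool's own identity
```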




Re: [GENERAL] why autocommit mode is slow?

2011-04-07 Thread Adrian Klaver
On Thursday, April 07, 2011 1:59:50 pm Szymon Guz wrote:
> Hi,
> this is maybe a stupid question, but I don't know how to explain to my
> coworkers why performing many inserts in autocommit mode is so much slower
> than making all of them in one transaction.
> 
> Why is that so slow?

Because autocommit wraps each statement in its own transaction, versus 
wrapping all the statements in one transaction. Transactions have 
overhead; the more of them you have, the more time it is going to take.

> 
> regards
> Szymon

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-07 Thread John R Pierce

On 04/07/11 1:42 PM, Shianmiin wrote:

Since the connection pool will be used by all tenants, eventually each
connection will hit all the tables/views.



don't all connections in a given pool have to use the same user 
credentials?  won't that be problematic for this architecture?








Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-07 Thread Merlin Moncure
On Thu, Apr 7, 2011 at 3:42 PM, Shianmiin  wrote:
> Hi there,
>
> We are evaluating using PostgreSQL to implement a multitenant database.
> Currently we are running some tests on a single-database-multiple-schema model
> (basically, all tenants have the same set of database objects under their own
> schema within the same database).
> The application will maintain a connection pool that will be shared among
> all tenants/schemas.
>
> e.g. If the database has 500 tenants/schemas and each tenant has 200
> tables/views,
> the total number of tables/views will be 500 * 200 = 100,000.
>
> Since the connection pool will be used by all tenants, eventually each
> connection will hit all the tables/views.
>
> In our tests, when a connection hits more views, we found the memory usage
> of the backend process increases quite fast, and most of it is private
> memory.
> That memory is held until the connection is closed.
>
> We have a test case where one backend process uses more than 30 GB of memory
> and eventually gets an out-of-memory error.
>
> To help understand the issue, I wrote code to create simplified test cases:
>  - MTDB_destroy: used to clear tenant schemas
>  - MTDB_Initialize: used to create a multitenant DB
>  - MTDB_RunTests: simplified test case, basically select from all tenant
> views one by one.
>
> The tests I've done were on PostgreSQL 9.0.3 on CentOS 5.4.
> To make sure I had a clean environment, I re-created the database cluster and
> left the majority of configurations at their defaults
> (the only thing I HAD to change was to increase "max_locks_per_transaction",
> since MTDB_destroy needs to drop many objects).
>
> This is what I do to reproduce the issue:
> 1. create a new database
> 2. create the three functions using the code attached
> 3. connect to the newly created db and run the initialize scripts
>
> -- Initialize
> select MTDB_Initialize('tenant', 100, 100, true);
> -- not sure if vacuum analyze is useful here, I just run it
> vacuum analyze;
> -- check the tables/views created
> select table_schema, table_type, count(*) from information_schema.tables
> where table_schema like 'tenant%' group by table_schema, table_type order by
> table_schema, table_type;
>
> 4. open another connection to the newly created db and run the test scripts
>
> -- get backend process id for current connection
> SELECT pg_backend_pid();
>
> -- open a linux console and run ps -p <pid> and watch VIRT, RES and SHR
>
> -- run tests
> select MTDB_RunTests('tenant', 1);

Don't write your test environment this way.  PostgreSQL functions !=
stored procedures. Your entire suite of tests runs in a single
transaction context, which is very problematic and not a real test of
any reasonably written application.  Convert your test suite into a
sql/perl/bash/etc script that is run against the database through
libpq or psql.
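
As a sketch, the driver could become a plain SQL file fed through psql (the file and object names below are hypothetical). Run this way, each statement gets its own transaction, unlike statements inside a single plpgsql function call:

```sql
-- mtdb_runtests.sql — run with: psql -d testdb -f mtdb_runtests.sql
\timing on
SELECT count(*) FROM tenant1.view1;
SELECT count(*) FROM tenant1.view2;
-- ... one SELECT per tenant view, generated by a small script ...
```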

merlin



[GENERAL] PostgreSQL backend process high memory usage issue

2011-04-07 Thread Shianmiin
Hi there,

We are evaluating using PostgreSQL to implement a multitenant database.
Currently we are running some tests on a single-database-multiple-schema model
(basically, all tenants have the same set of database objects under their own
schema within the same database).
The application will maintain a connection pool that will be shared among
all tenants/schemas.

e.g. If the database has 500 tenants/schemas and each tenant has 200
tables/views,
the total number of tables/views will be 500 * 200 = 100,000.

Since the connection pool will be used by all tenants, eventually each
connection will hit all the tables/views.

In our tests, when a connection hits more views, we found the memory usage
of the backend process increases quite fast, and most of it is private
memory.
That memory is held until the connection is closed.

We have a test case where one backend process uses more than 30 GB of memory
and eventually gets an out-of-memory error.

To help understand the issue, I wrote code to create simplified test cases:
  - MTDB_destroy: used to clear tenant schemas
  - MTDB_Initialize: used to create a multitenant DB
  - MTDB_RunTests: simplified test case, basically select from all tenant
views one by one.

The tests I've done were on PostgreSQL 9.0.3 on CentOS 5.4.
To make sure I had a clean environment, I re-created the database cluster and
left the majority of configurations at their defaults
(the only thing I HAD to change was to increase "max_locks_per_transaction",
since MTDB_destroy needs to drop many objects).

This is what I do to reproduce the issue:
1. create a new database
2. create the three functions using the code attached
3. connect to the newly created db and run the initialize scripts

-- Initialize
select MTDB_Initialize('tenant', 100, 100, true);
-- not sure if vacuum analyze is useful here, I just run it
vacuum analyze;
-- check the tables/views created
select table_schema, table_type, count(*) from information_schema.tables
where table_schema like 'tenant%' group by table_schema, table_type order by
table_schema, table_type;

4. open another connection to the newly created db and run the test scripts

-- get backend process id for current connection
SELECT pg_backend_pid();

-- open a linux console and run ps -p <pid> and watch VIRT, RES and SHR

-- run tests
select MTDB_RunTests('tenant', 1);

Observations:
1. when the connection for running tests was first created,
VIRT = 182MB, RES = 6240K, SHR=4648K
2. after run the tests once, (took 175 seconds)
VIRT = 1661MB  RES = 1.5GB SHR = 55MB
3. re-run the test again (took 167 seconds)
VIRT = 1661MB  RES = 1.5GB SHR = 55MB
4. re-run the test again (took 165 seconds)
VIRT = 1661MB  RES = 1.5GB SHR = 55MB

As we scale up the number of tables, the memory usage goes up as the
tests run, too.

Can anyone help explain what's happening here?
Is there a way we can control memory usage of PostgreSQL backend process?

Thanks.
Samuel

=

-- MTDB_destroy
create or replace function MTDB_destroy (schemaNamePrefix varchar(100))
returns int as $$
declare
   curs1 cursor(prefix varchar) is select schema_name from
information_schema.schemata where schema_name like prefix || '%';
   schemaName varchar(100);
   count integer;
begin
   count := 0;
   open curs1(schemaNamePrefix);
   loop
  fetch curs1 into schemaName;
  if not found then exit; end if;   
  count := count + 1;
  execute 'drop schema ' || schemaName || ' cascade;';
   end loop;  
   close curs1;
   return count;
end $$ language plpgsql;

-- MTDB_Initialize
create or replace function MTDB_Initialize (schemaNamePrefix varchar(100),
numberOfSchemas integer, numberOfTablesPerSchema integer,
createViewForEachTable boolean)
returns integer as $$
declare   
   currentSchemaId integer;
   currentTableId integer;
   currentSchemaName varchar(100);
   currentTableName varchar(100);
   currentViewName varchar(100);
   count integer;
begin
   -- clear
   perform MTDB_Destroy(schemaNamePrefix);

   count := 0;
   currentSchemaId := 1;
   loop
  currentSchemaName := schemaNamePrefix ||
ltrim(currentSchemaId::varchar(10));
  execute 'create schema ' || currentSchemaName;
 
  currentTableId := 1;
  loop
 currentTableName := currentSchemaName || '.' || 'table' ||
ltrim(currentTableId::varchar(10));
 execute 'create table ' || currentTableName || ' (f1 integer, f2
integer, f3 varchar(100), f4 varchar(100), f5 varchar(100), f6 varchar(100),
f7 boolean, f8 boolean, f9 integer, f10 integer)';
 if (createViewForEachTable = true) then
currentViewName := currentSchemaName || '.' || 'view' ||
ltrim(currentTableId::varchar(10));
execute 'create view ' || currentViewName || ' as ' ||
 'select t1.* from ' || currentTableName || ' t1 ' ||
 ' inner join ' || currentTableName || ' t2 on (t1.f1 = t2.f1) '
||
 '

Re: [GENERAL] why autocommit mode is slow?

2011-04-07 Thread Dann Corbit
It has nothing to do with autocommit and everything to do with batching them 
together.

For instance, if you run ten update queries in autocommit mode I would expect 
it to take exactly the same time as:

Begin
Exec Query1
Commit
…
Begin
Exec Query10
Commit



From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Szymon Guz
Sent: Thursday, April 07, 2011 2:00 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] why autocommit mode is slow?

Hi,
this is maybe a stupid question, but I don't know how to explain to my 
coworkers why performing many inserts in autocommit mode is so much slower than 
making all of them in one transaction.

Why is that so slow?

regards
Szymon


[GENERAL] why autocommit mode is slow?

2011-04-07 Thread Szymon Guz
Hi,
this is maybe a stupid question, but I don't know how to explain to my
coworkers why performing many inserts in autocommit mode is so much slower
than making all of them in one transaction.

Why is that so slow?

regards
Szymon


Re: [GENERAL] What happened to ALTER SCHEMA ... SET TABLESPACE?

2011-04-07 Thread Yang Zhang
Also, in PG8.4+, is there any way to set the default tablespace on a
per-schema basis?

On Thu, Apr 7, 2011 at 12:27 PM, Yang Zhang  wrote:
> Last I could find on this, it was slated for 8.1 inclusion:
>
> http://archives.postgresql.org/pgsql-patches/2004-08/msg00425.php
>
> But it doesn't seem to be there in my PG8.4
>



[GENERAL] What happened to ALTER SCHEMA ... SET TABLESPACE?

2011-04-07 Thread Yang Zhang
Last I could find on this, it was slated for 8.1 inclusion:

http://archives.postgresql.org/pgsql-patches/2004-08/msg00425.php

But it doesn't seem to be there in my PG8.4



Re: [GENERAL] Understanding "unknown" data type

2011-04-07 Thread Tom Lane
Steve Crawford  writes:
> On 04/07/2011 11:29 AM, Tom Lane wrote:
>> Steve Crawford  writes:
>>> 1. Where can I read up on the purpose and properties of a data-type of
>>> unknown?

>> It's the type initially imputed to unadorned string literals and NULL...

> Is there a place in the documentation that explains this? I haven't 
> found it so far.

Probably not --- it's normally just an implementation detail.

>> ...It can be cast to anything, but that's a hardwired behavior not
>> something listed in pg_cast.

> Hmmm. Not *anything* - at least not directly:

Unadorned string literals and NULL constants can be cast to anything.
Once it's not a simple constant anymore, you can't do anything at all
with it.

regards, tom lane



Re: [GENERAL] Understanding "unknown" data type

2011-04-07 Thread Steve Crawford

Thanks Tom.

On 04/07/2011 11:29 AM, Tom Lane wrote:

Steve Crawford  writes:

1. Where can I read up on the purpose and properties of a data-type of
unknown?

It's the type initially imputed to unadorned string literals and NULL...
Is there a place in the documentation that explains this? I haven't 
found it so far.



...It can be cast to anything, but that's a hardwired behavior not
something listed in pg_cast.

Hmmm. Not *anything* - at least not directly:

steve=# create table foobar as select '1' as somedata;
WARNING:  column "somedata" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
SELECT 1
steve=# select somedata::int from foobar;
ERROR:  failed to find conversion function from unknown to integer

...
steve=# create table foobar as select '2011-04-07' as somedata;
WARNING:  column "somedata" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
SELECT 1
steve=# select somedata::date from foobar;
ERROR:  failed to find conversion function from unknown to date



Throwing an error has been discussed, but it seems likely to break
things for some people.


A GUC? (Like we need more of those :))

...but there aren't terribly bad side-effects
(other than not being able to operate on that table column).  The
cases that are of practical interest tend to involve label columns in
views.
Not sure how the variety of front-ends might behave when fed an 
"unknown". It's pretty easy to accidentally create such a column and 
have it lurking waiting to cause downstream trouble.


For now, I have a daily alert script that warns me if a dev has 
accidentally done this.
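
A catalog query such an alert might be built on (a sketch, not Steve's actual script):

```sql
-- List every user-table column whose declared type is still "unknown".
SELECT n.nspname AS schema, c.relname AS relation, a.attname AS column
FROM pg_attribute a
JOIN pg_class c      ON c.oid = a.attrelid
JOIN pg_namespace n  ON n.oid = c.relnamespace
WHERE a.atttypid = 'unknown'::regtype
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```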


Cheers,
Steve




Re: [GENERAL] Understanding "unknown" data type

2011-04-07 Thread Tom Lane
Steve Crawford  writes:
> 1. Where can I read up on the purpose and properties of a data-type of 
> unknown?

It's the type initially imputed to unadorned string literals and NULL
constants.  The parser normally converts these to constants of some
other type, as soon as it can figure out what type they ought to be.
In your example case, there is no context whatever that leads to
resolving a type for them, so they're still "unknown" when it comes
time to create the table.

> It apparently can be cast to some other types though "unknown" 
> is not shown in the output of \dC.

It can be cast to anything, but that's a hardwired behavior not
something listed in pg_cast.

> 2. Is there any setting/option that would allow me to force an error 
> instead of allowing creation of tables with type unknown? (And why would 
> such a type be allowed in a table at all - it seems like a foot-gun 
> waiting to happen?)

Throwing an error has been discussed, but it seems likely to break
things for some people.  If it were really a serious issue, we might
force the point anyway, but there aren't terribly bad side-effects
(other than not being able to operate on that table column).  The
cases that are of practical interest tend to involve label columns in
views.

There's also been some discussion of forcing the declared column type to
text in such cases.  Don't remember at the moment what the downsides of
that might be.
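
On the user's side, the usual workaround is to give such literals an explicit type so they never reach CREATE TABLE as unknown (a sketch reusing the example from earlier in the thread):

```sql
-- Explicit casts give the columns concrete types:
CREATE TABLE foobar AS
SELECT 'a string'::text AS astring,
       NULL::integer    AS anull;

-- No "unknown" warning, and later casts on the columns work normally.
```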

regards, tom lane



Re: [GENERAL] postgres segfaulting on pg_restore

2011-04-07 Thread Chris Curvey
>
>
>>
and, FWIW, here's another trace, which is NEARLY the same as the first one I
posted, with the difference being a slightly different line number at #3.  I
will be quiet now and leave the brain trust to ponder.  Let me know if there
is anything else I can get for you guys.

#0  0x006ce317 in GetMemoryChunkSpace (pointer=0x292a0e0) at
mcxt.c:264
#1  0x006d3d56 in writetup_index (state=0x2281670, tapenum=, stup=) at tuplesort.c:2924
#2  0x006d2af7 in dumptuples (state=0x2281670, alltuples=0 '\000')
at tuplesort.c:2068
#3  0x006d38b7 in puttuple_common (state=0x2281670, tuple=) at tuplesort.c:1046
#4  0x006d3c4c in tuplesort_putindextuple (state=0x2281670,
tuple=) at tuplesort.c:943
#5  0x00472cac in btbuildCallback (index=,
htup=0x22eade0, values=, isnull=,
tupleIsAlive=1 '\001', state=0x7fffc180c8f0) at nbtree.c:194
#6  0x004ab1ec in IndexBuildHeapScan (heapRelation=, indexRelation=, indexInfo=,
allow_sync=, callback=,
callback_state=) at index.c:1866
#7  0x00472b35 in btbuild (fcinfo=) at
nbtree.c:123
#8  0x006b8ba1 in OidFunctionCall3 (functionId=, arg1=140165586614672, arg2=140165586792240, arg3=36148832) at
fmgr.c:1610
#9  0x004ab804 in index_build (heapRelation=0x7f7ad7fff990,
indexRelation=0x7f7ad802af30, indexInfo=0x2279660, isprimary=0 '\000') at
index.c:1427
#10 0x004ad43e in index_create (heapRelationId=, indexRelationName=, indexRelationId=, indexInfo=0x2279660, indexColNames=,
accessMethodObjectId=, tableSpaceId=0,
classObjectId=0x228ee20, coloptions=0x228ee40, reloptions=0, isprimary=0
'\000', isconstraint=0 '\000', deferrable=0 '\000', initdeferred=0 '\000',
allow_system_table_mods=, skip_build=0 '\000',
concurrent=0 '\000') at index.c:959
#11 0x00514ec2 in DefineIndex (heapRelation=,
indexRelationName=, indexRelationId=, accessMethodName=, tableSpaceName=, attributeList=0x2, predicate=0x0, options=0x0,
exclusionOpNames=0x0, unique=0 '\000', primary=0 '\000', isconstraint=0
'\000', deferrable=, initdeferred=, is_alter_table=0 '\000', check_rights=1 '\001', skip_build=0 '\000',
quiet=0 '\000', concurrent=) at indexcmds.c:484
#12 0x00603b69 in standard_ProcessUtility (parsetree=0x21a4218,
queryString=0x21a3570 "CREATE INDEX ieqansb ON eqanswer_base USING btree
(entityrole, fieldnum, memotext);", params=0x0, isTopLevel=1 '\001',
dest=0x21a45b8, completionTag=0x7fffc180d440 "") at utility.c:876
#13 0x006000a7 in PortalRunUtility (portal=0x225ea10,
utilityStmt=0x21a4218, isTopLevel=0 '\000', dest=0x21a45b8,
completionTag=0x7fffc180d440 "") at pquery.c:1191
#14 0x006010ec in PortalRunMulti (portal=0x225ea10, isTopLevel=1
'\001', dest=0x21a45b8, altdest=0x21a45b8, completionTag=0x7fffc180d440 "")
at pquery.c:1296
#15 0x00601852 in PortalRun (portal=,
count=, isTopLevel=-32 '\340', dest=, altdest=, completionTag=) at
pquery.c:822
#16 0x005fde0b in exec_simple_query (query_string=) at postgres.c:1058
#17 0x005fee47 in PostgresMain (argc=,
argv=, username=) at
postgres.c:3931
#18 0x005cc3b9 in BackendRun () at postmaster.c:3555
#19 BackendStartup () at postmaster.c:3242
#20 ServerLoop () at postmaster.c:1431
#21 0x005cea1c in PostmasterMain (argc=35397488, argv=0x219d8f0) at
postmaster.c:1092
#22 0x00575be0 in main (argc=3, argv=0x219d8e0) at main.c:188

-- 
Ignoring that little voice in my head since 1966!


[GENERAL] Understanding "unknown" data type

2011-04-07 Thread Steve Crawford

I was working on a test restore and got:
WARNING:  column "astring" has type "unknown"

The restore worked correctly - it was a warning not an error - and we 
located the cause and corrected it. But I did some playing leading to 
questions for which I have been unable to locate answers:


steve=# create table foobar as select 'a string' as astring, null as anull;
WARNING:  column "astring" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "anull" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
SELECT 1

steve=# \d foobar
     Table "public.foobar"
 Column  |  Type   | Modifiers
---------+---------+-----------
 astring | unknown |
 anull   | unknown |

steve=# select * from foobar;
 astring  | anull
----------+-------
 a string |

steve=# insert into foobar values ('1','2');
INSERT 0 1

steve=# select * from foobar;
 astring  | anull
----------+-------
 a string |
 1        | 2

steve=# select text (astring), text (anull) from foobar;
   text   | text
----------+------
 a string |
 1        | 2

So for starters...

1. Where can I read up on the purpose and properties of a data-type of 
unknown? It apparently can be cast to some other types though "unknown" 
is not shown in the output of \dC.


2. Is there any setting/option that would allow me to force an error 
instead of allowing creation of tables with type unknown? (And why would 
such a type be allowed in a table at all - it seems like a foot-gun 
waiting to happen?)


Cheers,
Steve




Re: [GENERAL] postgres segfaulting on pg_restore

2011-04-07 Thread Chris Curvey
On Thu, Apr 7, 2011 at 10:45 AM, Tom Lane  wrote:

> Chris Curvey  writes:
> > And voila!  Here is the backtrace:
>
> > #0  0x006ce317 in GetMemoryChunkSpace (pointer=0x347cc70) at
> > mcxt.c:264
> > #1  0x006d3d56 in writetup_index (state=0x26fc530, tapenum= > optimized out>, stup=) at tuplesort.c:2924
> > #2  0x006d2af7 in dumptuples (state=0x26fc530, alltuples=0
> '\000')
> > at tuplesort.c:2068
> > #3  0x006d392f in puttuple_common (state=0x26fc530,
> > tuple=0x7fff1e21d3b0) at tuplesort.c:1097
> > #4  0x006d3c4c in tuplesort_putindextuple (state=0x26fc530,
> > tuple=) at tuplesort.c:943
> > #5  0x00472cac in btbuildCallback (index=,
> > htup=0x26f4460, values=, isnull= out>,
> > tupleIsAlive=1 '\001', state=0x7fff1e21d870) at nbtree.c:194
>
> That is damn peculiar.


You clearly haven't met me yet :)


>  The tuple handed to writetup_index would have
> been copied just moments before in tuplesort_putindextuple, so there is
> no way that GetMemoryChunkSpace ought to fail.  If you do the run
> several times over, do you get the exact same stack trace every time?
>
>regards, tom lane
>

I don't get the same trace every time.   I ran this again to see if I could
duplicate it, and everything worked.  Then I ran it again, and got this
trace.  (Segfault in same location, but a different path to it.)

#0  0x006ce31b in GetMemoryChunkSpace (pointer=0x246eae0) at
mcxt.c:264
#1  0x006d3d56 in writetup_index (state=0x22cd460, tapenum=, stup=) at tuplesort.c:2924
#2  0x006d2af7 in dumptuples (state=0x22cd460, alltuples=1 '\001')
at tuplesort.c:2068
#3  0x006d304f in tuplesort_performsort (state=0x22cd460) at
tuplesort.c:1164
#4  0x00476023 in _bt_leafbuild (btspool=,
btspool2=0x246eae0) at nbtsort.c:206
#5  0x00472b5a in btbuild (fcinfo=) at
nbtree.c:139
#6  0x006b8ba1 in OidFunctionCall3 (functionId=, arg1=14016558590, arg2=140156762410992, arg3=36255616) at
fmgr.c:1610
#7  0x004ab804 in index_build (heapRelation=0x7f7ad7f66d10,
indexRelation=0x7f78ca094ff0, indexInfo=0x2293780, isprimary=1 '\001') at
index.c:1427
#8  0x004ad43e in index_create (heapRelationId=, indexRelationName=, indexRelationId=, indexInfo=0x2293780, indexColNames=,
accessMethodObjectId=, tableSpaceId=0,
classObjectId=0x22939d0, coloptions=0x22939e8, reloptions=0, isprimary=1
'\001', isconstraint=1 '\001', deferrable=0 '\000', initdeferred=0 '\000',
allow_system_table_mods=, skip_build=0 '\000',
concurrent=0 '\000') at index.c:959
#9  0x00514ec2 in DefineIndex (heapRelation=,
indexRelationName=, indexRelationId=, accessMethodName=, tableSpaceName=, attributeList=0x0, predicate=0x0, options=0x0,
exclusionOpNames=0x0, unique=1 '\001', primary=1 '\001', isconstraint=1
'\001', deferrable=, initdeferred=, is_alter_table=1 '\001', check_rights=1 '\001', skip_build=0 '\000',
quiet=0 '\000', concurrent=) at indexcmds.c:484
#10 0x00522b7b in ATExecAddIndex (tab=,
rel=, stmt=, is_rebuild=0 '\000')
at tablecmds.c:4576
#11 0x0052b422 in ATExecCmd (wqueue=) at
tablecmds.c:2744
#12 ATRewriteCatalogs (wqueue=) at tablecmds.c:2670
#13 0x0052c0d7 in ATController (rel=0x7f7ad7f66d10, cmds=, recurse=) at tablecmds.c:2421
#14 0x00604415 in standard_ProcessUtility (parsetree=, queryString=0x2246b80 "ALTER TABLE ONLY erelated_base\n
 ADD CONSTRAINT erelated_base_pkey PRIMARY KEY (id);", params=0x0,
isTopLevel=1 '\001', dest=0x2247b40, completionTag=0x7fffc180d440 "") at
utility.c:717
#15 0x006000a7 in PortalRunUtility (portal=0x21bdf90,
utilityStmt=0x2247800, isTopLevel=0 '\000', dest=0x2247b40,
completionTag=0x7fffc180d440 "") at pquery.c:1191
#16 0x006010ec in PortalRunMulti (portal=0x21bdf90, isTopLevel=1
'\001', dest=0x2247b40, altdest=0x2247b40, completionTag=0x7fffc180d440 "")
at pquery.c:1296
#17 0x00601852 in PortalRun (portal=,
count=, isTopLevel=-32 '\340', dest=, altdest=, completionTag=) at
pquery.c:822
#18 0x005fde0b in exec_simple_query (query_string=) at postgres.c:1058
#19 0x005fee47 in PostgresMain (argc=,
argv=, username=) at
postgres.c:3931
#20 0x005cc3b9 in BackendRun () at postmaster.c:3555
#21 BackendStartup () at postmaster.c:3242
#22 ServerLoop () at postmaster.c:1431
#23 0x005cea1c in PostmasterMain (argc=35397488, argv=0x219d8f0) at
postmaster.c:1092
#24 0x00575be0 in main (argc=3, argv=0x219d8e0) at main.c:188



-- 
Ignoring that little voice in my head since 1966!


Re: [GENERAL] Protecting stored procedures

2011-04-07 Thread Scott Marlowe
On Thu, Apr 7, 2011 at 6:46 AM, Michael Gould
 wrote:
> We want to protect the intellectual property of several of our stored
> procedures.  If we put them in a separate schema in the database and only
> allow execute on anything in that schema would that protect our stored
> procedures?

Just want to point out that the problem you're trying to solve is one
of licensing and legal contracts, not software.  No system is
unhackable.  Also, locking out a skilled user who could otherwise help
you with troubleshooting does you no good in the long run.  The very
people you seek to protect your IP from are the ones who can help you
the most with bug fixes and reporting.



Re: [GENERAL] Protecting stored procedures

2011-04-07 Thread Joshua D. Drake
On Thu, 2011-04-07 at 12:45 -0400, Andrew Sullivan wrote:
> On Thu, Apr 07, 2011 at 09:31:20AM -0500, Michael Gould wrote:
> > We wouldn't make any of the system users a superuser in Postgres and in my
> > 20+ years experience in the industry we provide software for, the
> > possibility of having any users of the system that are able to hack or even
> > understand what they have if they were able to is slim.  
> 
> So you aren't afraid your users are going to take this code, but you
> want to put (relatively meaningless) protection in place anyway?  
> 
> I guess maybe the security definer functions might help you.

As someone mentioned previously, there is also pl/secure. It certainly
isn't perfect but it will deal with the low hanging fruit.

Sincerely,

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [GENERAL] Protecting stored procedures

2011-04-07 Thread Andrew Sullivan
On Thu, Apr 07, 2011 at 09:31:20AM -0500, Michael Gould wrote:
> We wouldn't make any of the system users a superuser in Postgres and in my
> 20+ years experience in the industry we provide software for, the
> possibility of having any users of the system that are able to hack or even
> understand what they have if they were able to is slim.  

So you aren't afraid your users are going to take this code, but you
want to put (relatively meaningless) protection in place anyway?  

I guess maybe the security definer functions might help you.

A


-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: [GENERAL] Protecting stored procedures

2011-04-07 Thread John R Pierce

On 04/07/11 7:31 AM, Michael Gould wrote:

We wouldn't make any of the system users a superuser in Postgres



are you providing software run on a customer's box, or a turnkey managed 
system (hardware + software)?


Anyone who has root access can easily gain database access.  Anyone with 
physical access to the box and its console can gain root access.   
Anyone with access to pg_dumpall backup files has access to your stored 
procedures, too.






Re: [GENERAL] Protecting stored procedures

2011-04-07 Thread Sim Zacks

On 04/07/2011 03:46 PM, Michael Gould wrote:

We want to protect the intellectual property of several of our stored 
procedures.  If we put them in a separate schema in the database and 
only allow execute on anything in that schema would that protect our 
stored procedures?


Best Regards

It depends on what level you want to protect it and what rights the 
clients have on the machine.



The short answer is that with physical access to a machine, a system 
administrator should be able to gain super user access to anything on 
the machine, including the databases.





Re: [GENERAL] postgres segfaulting on pg_restore

2011-04-07 Thread Tom Lane
Chris Curvey  writes:
> And voila!  Here is the backtrace:

> #0  0x006ce317 in GetMemoryChunkSpace (pointer=0x347cc70) at
> mcxt.c:264
> #1  0x006d3d56 in writetup_index (state=0x26fc530, tapenum=<value
> optimized out>, stup=<value optimized out>) at tuplesort.c:2924
> #2  0x006d2af7 in dumptuples (state=0x26fc530, alltuples=0 '\000')
> at tuplesort.c:2068
> #3  0x006d392f in puttuple_common (state=0x26fc530,
> tuple=0x7fff1e21d3b0) at tuplesort.c:1097
> #4  0x006d3c4c in tuplesort_putindextuple (state=0x26fc530,
> tuple=<value optimized out>) at tuplesort.c:943
> #5  0x00472cac in btbuildCallback (index=<value optimized out>,
> htup=0x26f4460, values=<value optimized out>, isnull=<value optimized out>,
> tupleIsAlive=1 '\001', state=0x7fff1e21d870) at nbtree.c:194

That is damn peculiar.  The tuple handed to writetup_index would have
been copied just moments before in tuplesort_putindextuple, so there is
no way that GetMemoryChunkSpace ought to fail.  If you do the run
several times over, do you get the exact same stack trace every time?

regards, tom lane



Re: [GENERAL] Arrays of arrays

2011-04-07 Thread Radosław Smogura
Merlin Moncure  Thursday 07 April 2011 15:53:00
> On Thu, Apr 7, 2011 at 4:39 AM, rsmogura  wrote:
> > Hello,
> > 
> > May I ask if PostgreSQL supports arrays of arrays directly or indirectly,
> > or if such support is planned? I'm interested in pseudo-constructs
> > like: 1. Directly - (integer[4])[5] - this is equivalent to
> > multidimensional array, but may be differently represented on protocol
> > serialization (as array with array of elements).
> > 2. Indirectly - like create domain d as integer[6], d[11], or by any
> > other means.
> > 
> > Currently, I think both of these are unsupported and an array element must
> > be a base type. I'm asking about this in order to extend array support in JDBC.
> 
> if you want to do serialization of complex structures through the
> protocol, your best bet currently is to use composites.  for example,
> if you want to make array of arrays that are not all the same length
> (if they were the same length, why not use regular array?):
> 
> create type t as (i int[]);
> select array[row(array[1,2])::t, row(array[1,2,3,4])::t];
> 
> merlin

No, I'm asking because of driver development; I just don't want to expose 
functionality that may cause problems in the future, once arrays of arrays 
become available, so it's quite important.

Just one more topic for discussion: should zero-length arrays, like int[5][0]
[6], be allowed, or should an error be thrown?

Regards,
Radek.



Re: [GENERAL] database backup throws errors

2011-04-07 Thread Tom Lane
"Michael Molz"  writes:
> we're running PostgreSQL 8.4.7 for a CRM system on a Linux machine. 
> The database contains about 540 tables with roughly 2.5 million records in 
> total. Since this week the backup has stopped with an error message, always 
> at the same table. It's irrelevant which tool we use to back up (pgAdmin III, 
> pg_dump from a Windows client; backup on the Linux machine directly, ...); 
> the error stays the same.

> The log contains the following lines:
> 2011-04-05 17:57:35 CEST LOG:  SSL-error: unsafe legacy renegotiation disabled

That looks like your problem --- the SSL library is breaking the connection.

> Furthermore, I found a hint about the SSL connection problem. It was 
> recommended to add the parameter ssl_renegotiation_limit with value 0 to 
> postgresql.conf to turn off SSL renegotiation. When I do this, the database 
> system will not start again.

And that's the solution (at least till you can get hold of a non-broken
SSL library).  What do you mean by "database system will not start"?
An 8.4.7 server should definitely accept that parameter.
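
[For reference, the workaround being discussed is a single setting in 
postgresql.conf; the comment lines here are illustrative, not from the thread:]

```ini
# postgresql.conf
# Work around SSL libraries that refuse legacy renegotiation:
# 0 disables SSL renegotiation entirely.
ssl_renegotiation_limit = 0
```

The setting takes effect on a configuration reload; no restart should be required.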

regards, tom lane



Re: [GENERAL] Protecting stored procedures

2011-04-07 Thread Michael Gould
We wouldn't make any of the system users a superuser in Postgres, and in my
20+ years of experience in the industry we provide software for, the chance
that any users of the system would be able to hack the code, or even
understand what they had if they could, is slim.  I understand that
anything can be reverse engineered.  I had a relationship with a Russian
programmer several years ago who could take compiled C modules, reverse
engineer them to assembler, and then back to the original C code to find and
report bugs.  That being said, I don't worry too much about those types.

Best Regards

Michael Gould


"Andrew Sullivan"  wrote:
> On Thu, Apr 07, 2011 at 07:46:36AM -0500, Michael Gould wrote:
>> We want to protect the intellectual property of several of our stored
>> procedures.  If we put them in a separate schema in the database and only
>> allow execute on anything in that schema would that protect our stored
>> procedures?
> 
> If your customer can have superuser access on the machine (which
> effectively means, "If your customer has control of the box,") then
> no, it won't.
> 
> If you need to do this sort of thing, then you need to write the
> procedures in C and deliver only object code.  Even that probably
> won't solve everything.
> 
> A
> 
> -- 
> Andrew Sullivan
> a...@crankycanuck.ca
> 





[GENERAL] database backup throws errors

2011-04-07 Thread Michael Molz
Hi folks,

 

we're running PostgreSQL 8.4.7 for a CRM system on a Linux machine. The database 
contains about 540 tables with roughly 2.5 million records in total. Since this 
week the backup has stopped with an error message, always at the same table. It's 
irrelevant which tool we use to back up (pgAdmin III, pg_dump from a Windows 
client; backup on the Linux machine directly, ...); the error stays the same.

On Windows pg_dump reports:

pg_dump: Dumping the contents of table "gdvaus" failed: PQgetCopyData() failed.

pg_dump: Error message from server: server closed the connection unexpectedly

This probably means the server terminated abnormally

before or while processing the request.

pg_dump: The command was: COPY iwm.gdvaus (nvirtnr, ngesell, ngdvsatz, 
nimplauf, ngdvteilsatz, cagentur, cgesellnr, cpolnn, csatzart, cspartenr, 
iadresskz, ifolgenr, nfiliale, nfilaender, msatz) TO stdout;

pg_dump: *** aborted because of error

 

The log contains the following lines:

2011-04-05 17:57:35 CEST STATEMENT:  COPY iwm.gdvaus (nvirtnr, ngesell, 
ngdvsatz, nimplauf, ngdvteilsatz, cagentur, cgesellnr, cpolnn, csatzart, 
cspartenr, iadresskz, ifolgenr, nfiliale, nfilaender, msatz) TO stdout;

2011-04-05 17:57:35 CEST LOG:  SSL-error: unsafe legacy renegotiation disabled

 

While searching for a solution to the first notification "PQgetCopyData() 
failed" I found some notes about a possible OOM kill, so I monitored RAM usage 
during the backup and found out that this is not the reason; none of the 
PostgreSQL processes uses more than about 15 MB and there's enough free 
memory left.

Furthermore, I found a hint about the SSL connection problem. It was recommended 
to add the parameter ssl_renegotiation_limit with value 0 to postgresql.conf to 
turn off SSL renegotiation. When I do this, the database system will not start 
again.

 

Does anyone have some more hints for me?

 

Thanks a lot for all responses.

 

Greetings

 

Michael

 


-- 
This mail was checked for viruses by Astaro Mail Gateway. 
http://www.astaro.com


Re: [GENERAL] Attaching/detaching tablespaces (or, in general, parts of a DB)

2011-04-07 Thread Vibhor Kumar

On Apr 7, 2011, at 7:35 AM, Yang Zhang wrote:

> Is it possible to attach or detach parts of a DB (e.g. tablespaces),
> such that I can flexibly move the disks containing the DB tables
> around to different hosts?
> 
> The last discussion I could find on this topic is from 2007, and the
> answer was "No":
> 
> http://postgresql.1045698.n5.nabble.com/Possible-to-Attach-Detach-Tablespaces-td1885602.html
> 
> Wondering if there's been any change to the answer since then.  I can
> alternatively dump/load, which is slower and requires 2x the disk
> space.  Thanks in advance.


No. You can't do that.

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com




Re: [GENERAL] Attaching/detaching tablespaces (or, in general, parts of a DB)

2011-04-07 Thread Guillaume Lelarge
On 07/04/2011 04:05, Yang Zhang wrote:
> Is it possible to attach or detach parts of a DB (e.g. tablespaces),
> such that I can flexibly move the disks containing the DB tables
> around to different hosts?
> 

No, you can't. But you actually don't need to. Users won't be able to
access the objects you're moving, so no issues.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [GENERAL] Arrays of arrays

2011-04-07 Thread Merlin Moncure
On Thu, Apr 7, 2011 at 4:39 AM, rsmogura  wrote:
> Hello,
>
> May I ask if PostgreSQL supports arrays of arrays directly or indirectly, or
> if such support is planned? I'm interested in pseudo-constructs like:
> 1. Directly - (integer[4])[5] - this is equivalent to multidimensional
> array, but may be differently represented on protocol serialization (as
> array with array of elements).
> 2. Indirectly - like create domain d as integer[6], d[11], or by any other
> means.
>
> Currently, I think both of these are unsupported and an array element must be
> a base type. I'm asking about this in order to extend array support in JDBC.

if you want to do serialization of complex structures through the
protocol, your best bet currently is to use composites.  for example,
if you want to make array of arrays that are not all the same length
(if they were the same length, why not use regular array?):

create type t as (i int[]);
select array[row(array[1,2])::t, row(array[1,2,3,4])::t];
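
[To illustrate how the inner arrays come back out of this workaround, here is a 
sketch, assuming the composite type t from Merlin's example; the result comment 
has not been verified against a live server:]

```sql
-- Subscript the outer array, then pull the int[] field out of the composite.
select (a[2]).i as inner_array
from (select array[row(array[1,2])::t,
                   row(array[1,2,3,4])::t] as a) s;
-- inner_array should be {1,2,3,4}
```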

merlin



Re: [GENERAL] Protecting stored procedures

2011-04-07 Thread Vibhor Kumar

On Apr 7, 2011, at 6:16 PM, Michael Gould wrote:

> We want to protect the intellectual property of several of our stored 
> procedures.  If we put them in a separate schema in the database and only 
> allow execute on anything in that schema would that protect our stored 
> procedures?

If the user is a normal user, then you can revoke that user's access to the 
procedure. If you are thinking of keeping the code of a stored procedure hidden 
from any user, then you can use PL/Secure:
http://www.enterprisedb.com/docs/en/8.4/plsecure/Postgres_Plus_Standard_Server_PL_Secure_PLpgSQL-09.htm#P134_8474

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com




Re: [GENERAL] Re: how to insert multiple rows and get the ids back in a temp table (pgplsql)?

2011-04-07 Thread Merlin Moncure
On Wed, Apr 6, 2011 at 11:12 PM, abhishek.itbhu2004
 wrote:
> I am still new to postgres. Can you please tell the exact syntax for this. I
> tried different things but was not able to return the ids of the newly
> inserted rows.

in 9.1 with wCTE you will have a very direct way to do this. in 9.0
down, there is only one effective way to do this.  you have to wrap
your insert statement in a plain (sql, not plpgsql) function and have
it return results:

create function ins_foo() returns setof int as
$$
  insert into foo(v) select 1 from generate_series(1,3) v returning id;
$$ language sql;

That function can be called from plpgsql or from another query:
select array(select ins_foo());
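
[The 9.1 writable-CTE form Merlin alludes to might look like the following sketch; 
the table foo with a generated id column is an assumption carried over from his 
example:]

```sql
-- Writable CTE (PostgreSQL 9.1+): capture the RETURNING rows directly,
-- with no wrapper function needed.
-- Assumes: create table foo (id serial primary key, v int);
with ins as (
    insert into foo(v)
    select 1 from generate_series(1,3) v
    returning id
)
select array(select id from ins) as new_ids;
```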

merlin



Re: [GENERAL] Protecting stored procedures

2011-04-07 Thread Andrew Sullivan
On Thu, Apr 07, 2011 at 07:46:36AM -0500, Michael Gould wrote:
> We want to protect the intellectual property of several of our stored
> procedures.  If we put them in a separate schema in the database and only
> allow execute on anything in that schema would that protect our stored
> procedures?

If your customer can have superuser access on the machine (which
effectively means, "If your customer has control of the box,") then
no, it won't.

If you need to do this sort of thing, then you need to write the
procedures in C and deliver only object code.  Even that probably
won't solve everything.
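
[For what it's worth, the schema-permissions setup the original poster describes 
might be sketched as follows; role and object names are hypothetical. Note that 
it controls who may execute the functions, but by itself it does not hide their 
source: any user who can connect can typically read pg_proc.prosrc, and a 
superuser always can.]

```sql
create schema private_code;
revoke all on schema private_code from public;

create function private_code.secret_calc(x int) returns int
    language plpgsql
    security definer          -- runs with the function owner's privileges
as $$
begin
    return x * 2;             -- placeholder for the protected logic
end
$$;

-- Clients may call the function but cannot create or inspect
-- other objects in the schema.
grant usage on schema private_code to app_user;
grant execute on function private_code.secret_calc(int) to app_user;
```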

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: [GENERAL] postgres segfaulting on pg_restore

2011-04-07 Thread Craig Ringer

On 04/07/2011 08:49 PM, Chris Curvey wrote:


Let me see if I can load it somewhere else.   And since the only other
machine I have to try it on is a Windows box, I will be able to answer
my own question of "are pg_dump dumps OS-portable?"


The answer there is "mostly". Contrib modules can cause issues and 
locale names differ across OSes. I usually CREATE DATABASE then restore 
into the empty database, because pg_restore may try to create a database 
using a locale that doesn't exist on the target machine.
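
[Craig's approach might be sketched like this; the database name and locale are 
examples, and afterwards you would restore with pg_restore -d restored_db:]

```sql
-- Create the target database yourself, from template0, with a locale
-- that actually exists on this machine, instead of letting the dump
-- try to recreate the database with the source machine's locale.
create database restored_db
    template   template0
    encoding   'UTF8'
    lc_collate 'en_US.UTF-8'
    lc_ctype   'en_US.UTF-8';
```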


--
Craig Ringer



Re: [GENERAL] Protecting stored procedures

2011-04-07 Thread Szymon Guz
On 7 April 2011 14:46, Michael Gould  wrote:

> We want to protect the intellectual property of several of our stored
> procedures.  If we put them in a separate schema in the database and only
> allow execute on anything in that schema would that protect our stored
> procedures?
>
>
>
Protect them from who?

regards
Szymon


[GENERAL] Protecting stored procedures

2011-04-07 Thread Michael Gould
We want to protect the intellectual property of several of our stored
procedures.  If we put them in a separate schema in the database and only
allow execute on anything in that schema would that protect our stored
procedures?


 


Best Regards


 


 


--
Michael Gould, Managing Partner
Intermodal Software Solutions, LLC
904.226.0978
904.592.5250 fax



Re: [GENERAL] postgres segfaulting on pg_restore

2011-04-07 Thread Chris Curvey
>
>
> Are you able to distribute your dataset - if not to the general public,
> then to someone interested in identifying the fault?
>

probably not.  but let me ask...maybe I can come up with a creative
solution.


> Does the same dataset crash Pg when restored on another machine?


That is a good question. I'm running a full memtest on the server in
question at the moment, just to be sure that this is not a hardware issue.
 (And I'm assuming that if I was out of memory -- which I think is unlikely
-- that I would get a different error.)

Let me see if I can load it somewhere else.   And since the only other
machine I have to try it on is a Windows box, I will be able to answer my
own question of "are pg_dump dumps OS-portable?"


>
-- 
Ignoring that little voice in my head since 1966!


Re: [GENERAL] postgres segfaulting on pg_restore

2011-04-07 Thread Craig Ringer

On 04/07/2011 06:37 PM, Chris Curvey wrote:


2) install bison, flex and libreadline5-dev (sudo apt-get install
PACKAGE).  I already had the gcc package installed


The easy way to do this on Debian/ubuntu, if you're building something 
that packages exist for, is (eg):


sudo apt-get build-dep postgresql

This downloads and installs all the dependencies required to compile 
postgresql.



And voila!  Here is the backtrace:


Thank you VERY much for taking the time to collect this information.

It appears to be crashing while building an index:

CREATE INDEX itransf ON transactions USING btree
(loccode, startdtact, starttmact);

I don't know PostgreSQL's innards well enough to know much more than 
that, but others do and may well check this out.


Are you able to distribute your dataset - if not to the general public, 
then to someone interested in identifying the fault?


Does the same dataset crash Pg when restored on another machine?


#0  0x006ce317 in GetMemoryChunkSpace (pointer=0x347cc70) at mcxt.c:264
#1  0x006d3d56 in writetup_index (state=0x26fc530, tapenum=<value optimized out>,
    stup=<value optimized out>) at tuplesort.c:2924
#2  0x006d2af7 in dumptuples (state=0x26fc530, alltuples=0 '\000')
    at tuplesort.c:2068
#3  0x006d392f in puttuple_common (state=0x26fc530,
    tuple=0x7fff1e21d3b0) at tuplesort.c:1097
#4  0x006d3c4c in tuplesort_putindextuple (state=0x26fc530,
    tuple=<value optimized out>) at tuplesort.c:943
#5  0x00472cac in btbuildCallback (index=<value optimized out>,
    htup=0x26f4460, values=<value optimized out>, isnull=<value optimized out>,
    tupleIsAlive=1 '\001', state=0x7fff1e21d870) at nbtree.c:194
#6  0x004ab1ec in IndexBuildHeapScan (heapRelation=<value optimized out>,
    indexRelation=<value optimized out>, indexInfo=<value optimized out>,
    allow_sync=<value optimized out>, callback=<value optimized out>,
    callback_state=<value optimized out>) at index.c:1866
#7  0x00472b35 in btbuild (fcinfo=<value optimized out>) at nbtree.c:123
#8  0x006b8ba1 in OidFunctionCall3 (functionId=<value optimized out>,
    arg1=140128587519600, arg2=140128587659696, arg3=40470992) at fmgr.c:1610
#9  0x004ab804 in index_build (heapRelation=0x7f723aae9670,
    indexRelation=0x7f723ab0b9b0, indexInfo=0x26989d0, isprimary=0 '\000')
    at index.c:1427
#10 0x004ad43e in index_create (heapRelationId=<value optimized out>,
    indexRelationName=<value optimized out>, indexRelationId=<value optimized out>,
    indexInfo=0x26989d0, indexColNames=<value optimized out>,
    accessMethodObjectId=<value optimized out>, tableSpaceId=0,
    classObjectId=0x26f2e70, coloptions=0x26f2e90, reloptions=0, isprimary=0
    '\000', isconstraint=0 '\000', deferrable=0 '\000', initdeferred=0
    '\000', allow_system_table_mods=<value optimized out>, skip_build=0
    '\000', concurrent=0 '\000') at index.c:959
#11 0x00514ec2 in DefineIndex (heapRelation=<value optimized out>,
    indexRelationName=<value optimized out>, indexRelationId=<value optimized out>,
    accessMethodName=<value optimized out>, tableSpaceName=<value optimized out>,
    attributeList=0x2, predicate=0x0, options=0x0, exclusionOpNames=0x0,
    unique=0 '\000', primary=0 '\000', isconstraint=0 '\000',
    deferrable=<value optimized out>, initdeferred=<value optimized out>,
    is_alter_table=0 '\000', check_rights=1 '\001', skip_build=0 '\000',
    quiet=0 '\000', concurrent=<value optimized out>) at indexcmds.c:484
#12 0x00603b69 in standard_ProcessUtility (parsetree=0x2648880,
    queryString=0x2647be0 "CREATE INDEX itransf ON transactions USING btree
    (loccode, startdtact, starttmact);", params=0x0, isTopLevel=1 '\001',
    dest=0x2648c20, completionTag=0x7fff1e21e3c0 "") at utility.c:876
#13 0x006000a7 in PortalRunUtility (portal=0x25bf0d0,
    utilityStmt=0x2648880, isTopLevel=0 '\000', dest=0x2648c20,
    completionTag=0x7fff1e21e3c0 "") at pquery.c:1191
#14 0x006010ec in PortalRunMulti (portal=0x25bf0d0, isTopLevel=1
    '\001', dest=0x2648c20, altdest=0x2648c20, completionTag=0x7fff1e21e3c0
    "") at pquery.c:1296
#15 0x00601852 in PortalRun (portal=<value optimized out>,
    count=<value optimized out>, isTopLevel=112 'p', dest=<value optimized out>,
    altdest=<value optimized out>, completionTag=<value optimized out>) at pquery.c:822
#16 0x005fde0b in exec_simple_query (query_string=<value optimized out>)
    at postgres.c:1058
#17 0x005fee47 in PostgresMain (argc=<value optimized out>,
    argv=<value optimized out>, username=<value optimized out>) at postgres.c:3931
#18 0x005cc3b9 in BackendRun () at postmaster.c:3555
#19 BackendStartup () at postmaster.c:3242
#20 ServerLoop () at postmaster.c:1431
#21 0x005cea1c in PostmasterMain (argc=39596208, argv=0x259f8d0)
    at postmaster.c:1092
#22 0x00575be0 in main (argc=3, argv=0x259f8c0) at main.c:188

so, do I leave this here, or do I send it to pgsql-bugs?


--
Ignoring that little voice in my head since 1966!





Re: [GENERAL] postgres segfaulting on pg_restore

2011-04-07 Thread Chris Curvey
>
>
> Nor I. https://launchpad.net/~pitti/+archive/postgresql/+packages doesn't
> seem to list any.
>
> Martin: Are your packages stripped, or do they include minimal debug info
> in the main packages? Do you put your debuginfo debs anywhere accessible?
>
>
I decided to just try building it myself -- it has been a voyage of
discovery.  Here are instructions without going down all the false paths
that I did.

1) download postgres 9.0.3 from postgresql.org
2) install bison, flex and libreadline5-dev (sudo apt-get install PACKAGE).
 I already had the gcc package installed
3) ./configure --enable-debug  (I could not get --with-openssl to work; the
server kept complaining about SSL when it started, so I gave up)
4) make
5) sudo make install
6) stop the existing server (sudo /etc/init.d/postgresql stop)
7) start the debug server (sudo -u postgres /usr/local/pgsql/bin/pg_ctl start -D
/etc/postgresql/9.0/main)  took me a while to figure out that the -D
argument goes to where your postgresql.conf file lives, NOT to where the
database files actually are.  Probably obvious to experienced users.
8) Start my restore

then follow the instructions at
http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
to get gdb connected to the postgres backend.

And voila!  Here is the backtrace:

#0  0x006ce317 in GetMemoryChunkSpace (pointer=0x347cc70) at mcxt.c:264
#1  0x006d3d56 in writetup_index (state=0x26fc530, tapenum=<value optimized out>,
    stup=<value optimized out>) at tuplesort.c:2924
#2  0x006d2af7 in dumptuples (state=0x26fc530, alltuples=0 '\000')
    at tuplesort.c:2068
#3  0x006d392f in puttuple_common (state=0x26fc530,
    tuple=0x7fff1e21d3b0) at tuplesort.c:1097
#4  0x006d3c4c in tuplesort_putindextuple (state=0x26fc530,
    tuple=<value optimized out>) at tuplesort.c:943
#5  0x00472cac in btbuildCallback (index=<value optimized out>,
    htup=0x26f4460, values=<value optimized out>, isnull=<value optimized out>,
    tupleIsAlive=1 '\001', state=0x7fff1e21d870) at nbtree.c:194
#6  0x004ab1ec in IndexBuildHeapScan (heapRelation=<value optimized out>,
    indexRelation=<value optimized out>, indexInfo=<value optimized out>,
    allow_sync=<value optimized out>, callback=<value optimized out>,
    callback_state=<value optimized out>) at index.c:1866
#7  0x00472b35 in btbuild (fcinfo=<value optimized out>) at nbtree.c:123
#8  0x006b8ba1 in OidFunctionCall3 (functionId=<value optimized out>,
    arg1=140128587519600, arg2=140128587659696, arg3=40470992) at fmgr.c:1610
#9  0x004ab804 in index_build (heapRelation=0x7f723aae9670,
    indexRelation=0x7f723ab0b9b0, indexInfo=0x26989d0, isprimary=0 '\000') at
    index.c:1427
#10 0x004ad43e in index_create (heapRelationId=<value optimized out>,
    indexRelationName=<value optimized out>, indexRelationId=<value optimized out>,
    indexInfo=0x26989d0, indexColNames=<value optimized out>,
    accessMethodObjectId=<value optimized out>, tableSpaceId=0,
    classObjectId=0x26f2e70, coloptions=0x26f2e90, reloptions=0, isprimary=0
    '\000', isconstraint=0 '\000', deferrable=0 '\000', initdeferred=0 '\000',
    allow_system_table_mods=<value optimized out>, skip_build=0 '\000',
    concurrent=0 '\000') at index.c:959
#11 0x00514ec2 in DefineIndex (heapRelation=<value optimized out>,
    indexRelationName=<value optimized out>, indexRelationId=<value optimized out>,
    accessMethodName=<value optimized out>, tableSpaceName=<value optimized out>,
    attributeList=0x2, predicate=0x0, options=0x0,
    exclusionOpNames=0x0, unique=0 '\000', primary=0 '\000', isconstraint=0
    '\000', deferrable=<value optimized out>, initdeferred=<value optimized
    out>, is_alter_table=0 '\000', check_rights=1 '\001', skip_build=0 '\000',
    quiet=0 '\000', concurrent=<value optimized out>) at indexcmds.c:484
#12 0x00603b69 in standard_ProcessUtility (parsetree=0x2648880,
    queryString=0x2647be0 "CREATE INDEX itransf ON transactions USING btree
    (loccode, startdtact, starttmact);", params=0x0, isTopLevel=1 '\001',
    dest=0x2648c20, completionTag=0x7fff1e21e3c0 "") at utility.c:876
#13 0x006000a7 in PortalRunUtility (portal=0x25bf0d0,
    utilityStmt=0x2648880, isTopLevel=0 '\000', dest=0x2648c20,
    completionTag=0x7fff1e21e3c0 "") at pquery.c:1191
#14 0x006010ec in PortalRunMulti (portal=0x25bf0d0, isTopLevel=1
    '\001', dest=0x2648c20, altdest=0x2648c20, completionTag=0x7fff1e21e3c0 "")
    at pquery.c:1296
#15 0x00601852 in PortalRun (portal=<value optimized out>,
    count=<value optimized out>, isTopLevel=112 'p', dest=<value optimized out>,
    altdest=<value optimized out>, completionTag=<value optimized out>) at
    pquery.c:822
#16 0x005fde0b in exec_simple_query (query_string=<value optimized out>)
    at postgres.c:1058
#17 0x005fee47 in PostgresMain (argc=<value optimized out>,
    argv=<value optimized out>, username=<value optimized out>) at
    postgres.c:3931
#18 0x005cc3b9 in BackendRun () at postmaster.c:3555
#19 BackendStartup () at postmaster.c:3242
#20 ServerLoop () at postmaster.c:1431
#21 0x005cea1c in PostmasterMain (argc=39596208, argv=0x259f8d0) at
    postmaster.c:1092
#22 0x00575be0 in main (argc=3, argv=0x259f8c0) at main.c:188

so, do I leave this here, or do I send it to pgsql-bugs?


-- 
Ignoring that little voice in my head since 1966!


[GENERAL] Arrays of arrays

2011-04-07 Thread rsmogura

Hello,

May I ask if PostgreSQL supports arrays of arrays directly or 
indirectly, or if such support is planned? I'm interested in 
pseudo-constructs like:
1. Directly - (integer[4])[5] - this is equivalent to multidimensional 
array, but may be differently represented on protocol serialization (as 
array with array of elements).
2. Indirectly - like create domain d as integer[6], d[11], or by any 
other means.


Currently, I think both of these are unsupported and an array element must 
be a base type. I'm asking about this in order to extend array support in 
JDBC.


Regards,
Radek.
