Re: [GENERAL] Help! pg_dump: Error message from server: ERROR: cache lookup failed for type 19

2010-09-13 Thread Craig Ringer
On 14/09/2010 11:02 AM, 夏武 wrote:
> I recovered it with the \copy command.
> Thanks very much.

Glad to help.

In future, it might be a good idea to:

- Keep regular pg_dump backups; and
- Avoid trying to alter the system catalogs

With Slony you can never completely avoid needing to mess with the
catalogs, as it seems to be able to get things into a nasty state
sometimes. However, if you do have to do catalog work it's a good idea
to ask for help here *before* doing anything, because it'll be easier to
fix if you haven't deleted catalog entries etc.
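
For the first point, a minimal dump routine might look like the following (the database name, user, and paths are placeholders, not taken from the thread):

```shell
# nightly custom-format dump; -Fc output can be selectively restored with pg_restore
pg_dump -U postgres -Fc mydb > /var/backups/mydb-$(date +%Y%m%d).dump

# to restore into a freshly created database:
# pg_restore -U postgres -d mydb /var/backups/mydb-20100913.dump
```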

-- 
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Post Install / Secure PostgreSQL

2010-09-13 Thread Craig Ringer

On 14/09/2010 1:57 AM, Tom Lane wrote:

I'd suggest creating "carlos" as either a plain user or a CREATEROLE
user depending on whether you think you're likely to be adding/deleting
plain users regularly.


I'd second that.

When I install a new instance of PostgreSQL, I usually set up a "craig" 
user to match my Linux login ID. This user has CREATEDB and CREATEROLE 
rights, but is not a superuser.


This account will be used automatically by psql unless I override it, 
because psql defaults to local unix-socket connections authenticated 
as the PostgreSQL role with the same name as the unix user, and the 
default pg_hba.conf permits exactly that. So I can just run "psql 
databasename" to connect to any database I've granted "craig" access 
to.


I then create a "craig" database as a test area / playpen. This will 
be connected to by default if I run psql without any arguments.


So:

craig$ sudo -u postgres psql
postgres=> CREATE USER craig WITH PASSWORD 'somepassword'
   CREATEDB CREATEROLE;
postgres=> CREATE DATABASE craig WITH OWNER craig;
postgres=> \q

Now I can connect to my new default database with a simple "psql". For 
any real work I make new databases, but the "craig" database is handy 
for general testing and playing around. I generally revoke public 
connect rights on those databases, permitting only specific users to 
connect even if they're authenticated and allowed access to other databases.
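
Revoking public connect rights as described can be sketched like this (database name is illustrative):

```sql
-- disallow connections by default, then re-grant to specific roles
REVOKE CONNECT ON DATABASE realwork FROM PUBLIC;
GRANT CONNECT ON DATABASE realwork TO craig;
```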


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/



Re: [GENERAL] Any plans to expose publicly (via stored proc) relation_needs_vacanalyze, or some flavour of do_autovacuum?

2010-09-13 Thread Tom Lane
Vlad Romascanu  writes:
> The logical, amended solution would then be to have the "writer"
> session perform, after INSERTion but before COMMITTing, the same
> calculation that the autovacuum daemon currently performs inside
> relation_needs_vacanalyze, based on the same configuration parameters
> that the autovacuum daemon uses, before deciding whether to explicitly
> ANALYZE or not the affected application tables.  Ideally one would not
> want to duplicate the relation_needs_vacanalyze logic (not to mention
> having to guess the value of last_anl_tuples, which is not exposed via
> any storedproc -- one would have to assume that it is more or less
> equivalent to pg_class.reltuples.)

I don't know if you actually looked at that code, but it's driven off of
statistics counters that are only updated at commit; and furthermore are
maintained in a different process altogether.  So what you have in mind
isn't going to work ...

regards, tom lane



Re: [GENERAL] Query plan choice issue

2010-09-13 Thread Yaroslav Tykhiy

Hi Tom,

On 14/09/2010, at 12:41 AM, Tom Lane wrote:


Yaroslav Tykhiy  writes:

 ->  Bitmap Heap Scan on dbmail_headervalue v
(cost=1409.82..221813.70 rows=2805 width=16) (actual
time=28543.411..28623.623 rows=1 loops=1)
   Recheck Cond: (v.headername_id = n.id)
   Filter: ("substring"(v.headervalue, 0,
255) ~~* '%<@mail.gmail.com>%'::text)
   ->  Bitmap Index Scan on
dbmail_headervalue_testing  (cost=0.00..1409.82 rows=75940 width=0)
(actual time=17555.572..17555.572 rows=1877009 loops=1)
 Index Cond: (v.headername_id = n.id)


I think the major problem you're having is that the planner is
completely clueless about the selectivity of the condition
"substring"(v.headervalue, 0,  255) ~~* '%<@mail.gmail.com>%'
If it knew that that would match only one row, instead of several
thousand, it would likely pick a different plan.

In recent versions of PG you could probably make a noticeable
improvement in this if you just dropped the substring() restriction
... do you actually need that?  Alternatively, if you don't want to
change the query logic at all, I'd try making an index on
substring(v.headervalue, 0, 255).  I'm not expecting the query
to actually *use* the index, mind you.  But its existence will prompt
ANALYZE to collect stats on the expression's value, and that will
help the planner with estimating the ~~* condition.


Well, that substring() and ILIKE combo looked suspicious to me, too.
However, there already was an index on substring(v.headervalue, 0,
255), but the fast query plan didn't seem to use it; it used a
different index instead:


mail=# \d dbmail_headervalue
          Table "public.dbmail_headervalue"
     Column     |  Type  |                             Modifiers
----------------+--------+--------------------------------------------------------------------
 headername_id  | bigint | not null
 physmessage_id | bigint | not null
 id             | bigint | not null default nextval('dbmail_headervalue_idnr_seq'::regclass)
 headervalue    | text   | not null default ''::text
Indexes:
"dbmail_headervalue_pkey" PRIMARY KEY, btree (id)
"dbmail_headervalue_1" UNIQUE, btree (physmessage_id, id)
"dbmail_headervalue_2" btree (physmessage_id)
"dbmail_headervalue_3" btree ("substring"(headervalue, 0, 255))
...
EXPLAIN ANALYZE...
-> Index Scan using dbmail_headervalue_2 on  
dbmail_headervalue v (cost=0.00..1489.96 rows=1 width=16) (actual  
time=0.028..0.029 rows=0 loops=358)
  Index Cond: (v.physmessage_id =  
m.physmessage_id)
  Filter: ("substring"(v.headervalue, 0, 255)  
~~* '%<@mail.gmail.com>%'::text)

...

Meanwhile, a mate of mine lurking on this list pointed out that 
reducing random_page_cost might help here, and it did: a 
random_page_cost of 2 made the fast query plan the favoured one.
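
For anyone wanting to repeat the experiment, the setting can be changed per session, or persisted per database once the tuning proves out (the database name here is taken from the psql prompt above):

```sql
SET random_page_cost = 2;                      -- current session only
ALTER DATABASE mail SET random_page_cost = 2;  -- future sessions on this database
```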


Could this mean that the default planner configuration slightly 
overfavours seq scans?


Thank you all guys!

Yar



Re: [GENERAL] Schema search path

2010-09-13 Thread Yaroslav Tykhiy

On 14/09/2010, at 8:56 AM, Tom Lane wrote:


Bruce Momjian  writes:

Yaroslav Tykhiy wrote:

SELECT * FROM foo.bar WHERE bar.a=1;
   ^^^ this means foo.bar



Do you think it's a feature or a bug? :-)



Feature, and SQL-standard behavior.


It might be worth pointing out that this has nothing to do with
search_path; rather, the key is that the FROM clause establishes
a table alias "bar" for the query.


Sure, that makes sense because it just extends the well-known aliasing  
for unqualified column names, as in "SELECT a FROM foo", to table  
names as well.  But a remark on this feature in the schema-related  
documentation pages would be a good idea IMHO.


Thanks!

Yar



[GENERAL] Any plans to expose publicly (via stored proc) relation_needs_vacanalyze, or some flavour of do_autovacuum?

2010-09-13 Thread Vlad Romascanu
Imagine the following sequence of events:
1. a "writer" session begins a transaction, growing the number of live
tuples in several tables (e.g. via COPY) from mere tens (or hundreds)
to tens of thousands of tuples, then COMMITs
2. one or more "reader" sessions perform a SELECT ... JOIN on the very
tables grown by the "writer" session

Currently, unless the autovacuum daemon happens to run after the
"writer" session COMMITs and completes before the "reader" session(s)
SELECT, the "reader" session(s) may take minutes instead of
milliseconds to execute because of an incorrect execution plan based
on stale statistics.  I have seen this happen in reality.

In my specific case, all write operations are serialized via an
application-level mutex, i.e. there will only be one "writer" session
and multiple "reader" sessions at any given time.  In such a setup,
the most obvious workaround is to explicitly ANALYZE the affected
tables, prior to COMMITting, in the "writer" session.  New data would
thus be committed along with up-to-date statistics.
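
That workaround might be sketched as follows (the table name and COPY source are placeholders; note that ANALYZE, unlike VACUUM, is allowed inside a transaction block):

```sql
BEGIN;
COPY app_table FROM '/tmp/bulk_load.csv' WITH CSV;  -- grow the table substantially
ANALYZE app_table;  -- refresh statistics before readers see the new data
COMMIT;
```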

However, let's tweak the above scenario and assume that, later on, the
"writer" session only INSERTs one row into a table which now holds
several million tuples -- indiscriminately running ANALYZE after such
a measly INSERTion is overkill.

The logical, amended solution would then be to have the "writer"
session perform, after INSERTion but before COMMITTing, the same
calculation that the autovacuum daemon currently performs inside
relation_needs_vacanalyze, based on the same configuration parameters
that the autovacuum daemon uses, before deciding whether to explicitly
ANALYZE or not the affected application tables.  Ideally one would not
want to duplicate the relation_needs_vacanalyze logic (not to mention
having to guess the value of last_anl_tuples, which is not exposed via
any storedproc -- one would have to assume that it is more or less
equivalent to pg_class.reltuples.)

So my question is: does it sound reasonable to (and/or are there any
existing plans to) expose either relation_needs_vacanalyze, or a
per-table flavour of do_autoanalyze, to the public via a stored proc,
for those in my situation or who need more granular control over
autovacuuming than the autovacuum daemon provides?

Thank you,
Vlad.



Re: [GENERAL] Schema search path

2010-09-13 Thread Tom Lane
Bruce Momjian  writes:
> Yaroslav Tykhiy wrote:
>> SELECT * FROM foo.bar WHERE bar.a=1;
>> ^^^ this means foo.bar

>> Do you think it's a feature or a bug? :-)

> Feature, and SQL-standard behavior.

It might be worth pointing out that this has nothing to do with
search_path; rather, the key is that the FROM clause establishes
a table alias "bar" for the query.
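
A minimal illustration of that aliasing, using the names from the example:

```sql
-- the FROM clause makes "bar" an alias for foo.bar within this query,
-- so "bar.a" refers to foo.bar.a regardless of search_path:
SELECT * FROM foo.bar WHERE bar.a = 1;

-- equivalent, with the implicit alias written out:
SELECT * FROM foo.bar AS bar WHERE bar.a = 1;
```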

regards, tom lane



Re: [GENERAL] Schema search path

2010-09-13 Thread Bruce Momjian
Yaroslav Tykhiy wrote:
> Hi there,
> 
> Sorry but I've got yet another issue to discuss today, this time that  
> on schema search path.  In fact it may not be a bug, but it may be  
> worth a note in the documentation.
> 
> It seems that if the table in SELECT FROM has an explicit schema  
> specifier, further references to the same table name will implicitly  
> inherit it.  E.g., this query will be valid because the second  
> reference will be to foo.bar not public.bar:
> 
> SELECT * FROM foo.bar WHERE bar.a=1;
>  ^^^ this means foo.bar

No one has ever complained about this before.

> As just shown, this can be even more confusing with nested queries.
> 
> Do you think it's a feature or a bug? :-)

Feature, and SQL-standard behavior.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [GENERAL] Post Install / Secure PostgreSQL

2010-09-13 Thread Scott Marlowe
On Mon, Sep 13, 2010 at 12:24 PM, Carlos Mennens
 wrote:
> On Mon, Sep 13, 2010 at 1:57 PM, Tom Lane  wrote:
> I also noticed that I created a database called 'ide' in PostgreSQL as
> the 'postgres' super user and I am trying to change the owner of the
> database to me  and when I run the following command, I
> don't get an error but the owner doesn't appear to change for some
> reason. What am I doing wrong?
>
> ide=# \c ide
> psql (8.4.4)
> You are now connected to database "ide".
>
> ide=# ALTER DATABASE ide OWNER TO cmennens;
> ALTER DATABASE
>
> ide=# \dt
>         List of relations
>  Schema | Name  | Type  |  Owner
> --------+-------+-------+----------
>  public | users | table | postgres
> (1 row)
>
> Any ideas if I am missing something here?
>
> Thank you very much for all your support so far!

The table owner isn't the same as the db owner.  Whoever created the
table owns it.  Try \l to see a list of databases.

Also note that instead of reassigning all those table owners by name
you can grant membership of a user to that "role":

grant ide to myrole;
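
To actually change the ownership of the table from the original example, something like this should work (one ALTER per table):

```sql
ALTER TABLE public.users OWNER TO cmennens;
```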

-- 
To understand recursion, one must first understand recursion.



Re: [GENERAL] Post Install / Secure PostgreSQL

2010-09-13 Thread Carlos Mennens
On Mon, Sep 13, 2010 at 1:57 PM, Tom Lane  wrote:
> It's definitely a good idea not to use a superuser account when you
> don't have to; just like you don't use Unix root unless you have to.
> You should do your day-to-day database hacking in an ordinary
> unprivileged account.

When I am logged into my Linux DB server as the 'postgres' user, I can
run the shell command 'createuser ' and that shows me the
following:

# createuser cmennens
Shall the new role be a superuser? (y/n)  n
Shall the new role be allowed to create databases? (y/n)  y
Shall the new role be allowed to create more new roles? (y/n)  y

Does what I displayed above create an account that can do
administrative tasks like creating/deleting users, changing their
passwords, etc, but can't hose the system catalogs or do other serious
damage? If what I did doesn't, should I do this using the 'CREATEROLE'
option manually in PostgreSQL?
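
For reference, the createuser invocation above should be roughly equivalent to this SQL (a sketch; add a PASSWORD clause if password authentication is wanted):

```sql
CREATE ROLE cmennens LOGIN CREATEDB CREATEROLE;
```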

> There is also an intermediate level, which is an account with the
> CREATEROLE option (if you're on a PG version new enough to have that).
> That kind of account can do administrative things like creating/deleting
> users, changing their passwords, etc, but it can't directly munge system
> catalogs or do other things that can seriously screw up your database.
>
> I'd suggest creating "carlos" as either a plain user or a CREATEROLE
> user depending on whether you think you're likely to be adding/deleting
> plain users regularly.

I also noticed that I created a database called 'ide' in PostgreSQL as
the 'postgres' super user and I am trying to change the owner of the
database to me  and when I run the following command, I
don't get an error but the owner doesn't appear to change for some
reason. What am I doing wrong?

ide=# \c ide
psql (8.4.4)
You are now connected to database "ide".

ide=# ALTER DATABASE ide OWNER TO cmennens;
ALTER DATABASE

ide=# \dt
 List of relations
 Schema | Name  | Type  |  Owner
--------+-------+-------+----------
 public | users | table | postgres
(1 row)

Any ideas if I am missing something here?

Thank you very much for all your support so far!



Re: [GENERAL] Post Install / Secure PostgreSQL

2010-09-13 Thread Tom Lane
Carlos Mennens  writes:
> In MySQL, it was recommended that you create a power user account
> rather than manage the database with the 'root' account. Is this also
> the same thing for PostgreSQL? I know you  guys told me that there is
> no 'root' account but there is a 'postgres' account which appears to
> be the equivalent of MySQL's 'root' database user. My question is do I
> need to or is it recommended I create a 'carlos' account and grant
> privileges to that user rather than manage the database with the
> 'postgres' super user account?

It's definitely a good idea not to use a superuser account when you
don't have to; just like you don't use Unix root unless you have to.
You should do your day-to-day database hacking in an ordinary
unprivileged account.

There is also an intermediate level, which is an account with the
CREATEROLE option (if you're on a PG version new enough to have that).
That kind of account can do administrative things like creating/deleting
users, changing their passwords, etc, but it can't directly munge system
catalogs or do other things that can seriously screw up your database.

I'd suggest creating "carlos" as either a plain user or a CREATEROLE
user depending on whether you think you're likely to be adding/deleting
plain users regularly.

regards, tom lane



Re: [GENERAL] hi, how to let the inserted tuple visible to other backend when current backend hasn't finish?

2010-09-13 Thread Merlin Moncure
On Sun, Sep 12, 2010 at 3:02 AM, sunpeng  wrote:
> hi, These codes are in the postgresql engine, just assume they are in
> PortalRun() function:
> //1.create table structure
> char *relname = "test";
> ...
> relOid = heap_create_with_catalog(relname, );
> CommandCounterIncrement();
> ...
> //2.then i can use SPI_execute to create index on this created table
> SPI_connect();
> char *sqlCreate_index ="create index on test."
> int ret = SPI_execute(sqlCreate_index , false, 1);
> SPI_finish();
> .
> //3.until now it performs well,but after i insert a tuple in this table
> 
> Relation mcir_relation = relation_open(relOid); //the relation just created
> HeapTupleData htup;
> 
> simple_heap_insert(relation, &htup);
> CommandCounterIncrement();
> ...
> //4.then again want to invoke SPI_execute("select"), it seems the
> inserted tuple is not visible to SPI_execute()
> SPI_connect();
> int ret =  SPI_execute("select * from test;", true, 1);
> if (ret == SPI_OK_SELECT && SPI_processed == 1 ) {
> 
> }
>
> the ret is SPI_OK_SELECT ,but  SPI_processed == 0, the inserted tuple is not
> visible to SPI_execute() .
>  i've used these methods to try to let it visible to SPI_execute() :
> simple_heap_insert()
> CommandCounterIncrement();
> or:
> BeginInternalSubTransaction(NULL);
> simple_heap_insert()...
> ReleaseCurrentSubTransaction();
> 
> but they all don't work, how to resolve it?


This may or may not have anything to do with your problem, but why are
you bypassing the SPI interface to insert the tuple?  Have you tried
inserting the record via regular SPI query?

merlin



Re: [GENERAL] Post Install / Secure PostgreSQL

2010-09-13 Thread Carlos Mennens
Thanks for all the assistance and clarification with my new install of
PostgreSQL. I am able to switch users to 'postgres' and verify the
default home directory for 'postgres' shell user:

[r...@db1 ~]# su - postgres

[postg...@db1 ~]$ pwd
/var/lib/postgres

I am also now able, from the documentation, to understand how to
"create" a database and "drop" a database, but that's about all I can
figure out for now.

In MySQL, it was recommended that you create a power user account
rather than manage the database with the 'root' account. Is this also
the same thing for PostgreSQL? I know you  guys told me that there is
no 'root' account but there is a 'postgres' account which appears to
be the equivalent of MySQL's 'root' database user. My question is do I
need to or is it recommended I create a 'carlos' account and grant
privileges to that user rather than manage the database with the
'postgres' super user account?

test=# SELECT * FROM "pg_user";
 usename  | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil | useconfig
----------+----------+-------------+----------+-----------+--------+----------+-----------
 postgres |       10 | t           | t        | t         |        |          |
 cmennens |    16393 | f           | f        | f         |        |          |
(2 rows)



Re: [GENERAL] I keep getting "type does not exist" on compile of this SETOF function (list 2 table)

2010-09-13 Thread Merlin Moncure
On Mon, Sep 13, 2010 at 11:17 AM, Jonathan Brinkman
 wrote:
> [CODE]
>
> BEGIN;
>
> DROP TYPE structure.format_list2table_rs CASCADE;
>
> CREATE TYPE structure.format_list2table_rs AS (
>  "item" VARCHAR(4000)
> );
>
> END;
>
> CREATE OR REPLACE FUNCTION structure.format_list2table (
>  "v_list" varchar,
>  "v_delim" varchar
> )
> RETURNS SETOF structure.format_list2table_rs AS
> $body$
> /*
> select * from Format_List2Table('1', '1');
> SELECT item FROM Format_List2Table('first||2nd||III||1+1+1+1','||');
> SELECT CAST(item AS INT) AS Example2 FROM
> Format_List2Table('111,222,333,444,555',',');
> SELECT item FROM Format_List2Table('12/1/2009, 12/2/2009, 12/3/2009,
> 12/4/2009, 12/7/2009, 12/8/2009,, 12/9/2009, 12/10/2009, 12/11/2009,',',');
> SELECT * FROM Format_List2Table('1988,1390',',');
> SELECT * FROM Format_List2Table('1988',',');
> SELECT * FROM Format_List2Table('1988 1390 5151 5i7151 515545',' ');
> */
>
> DECLARE
>   v_item  VARCHAR(4000);
>   v_Pos  INTEGER;
>   v_RunLastTime  INTEGER;
>   SWV_List VARCHAR(4000);
>   SWV_Rs format_list2table_rs;
>
> BEGIN
>  --  SWV_List := v_List;
>   BEGIN
>      CREATE GLOBAL TEMPORARY TABLE tt_PARSEDLIST
>                                (item VARCHAR(4000)) WITH OIDS;
>      exception when others then truncate table tt_PARSEDLIST;
>   END;
>   SWV_List := v_list;
>   v_RunLastTime := 0;
>   SWV_List := CASE POSITION(v_delim IN SWV_List) WHEN 0 THEN
> coalesce(SWV_List,'') || coalesce(v_delim,'') ELSE SWV_List END; --fix lists
> with only 1 item
>   v_Pos := POSITION(v_delim IN SWV_List);
>   WHILE v_Pos > 0 LOOP
>      v_item := LTRIM(RTRIM(SUBSTR(SWV_List,1,v_Pos -1)));
>      IF v_item <> '' THEN
>                                        INSERT INTO tt_PARSEDLIST(item)
>                                                VALUES(CAST(v_item AS
> VARCHAR(4000)));
>      ELSE
>         INSERT INTO tt_PARSEDLIST(item)
>                                                VALUES(NULL);
>      END IF;
>      SWV_List := SUBSTR(SWV_List,length(SWV_List) -ABS(LENGTH(SWV_List)
> -v_Pos)+1);
>      v_Pos := POSITION(v_delim IN SWV_List);
>      IF SWV_List = '' THEN v_Pos = null;
>      END IF;
>      IF v_Pos = 0 AND v_RunLastTime <> 1 then
>         v_RunLastTime := 1;
>         v_Pos := LENGTH(SWV_List)+1;
>      END IF;
>   END LOOP;
>
>   FOR SWV_Rs IN(SELECT * FROM  tt_PARSEDLIST) LOOP
>      RETURN NEXT SWV_Rs;
>   END LOOP;
>   RETURN;
> END;
> $body$
> LANGUAGE 'plpgsql'
> VOLATILE
> CALLED ON NULL INPUT
> SECURITY INVOKER
> ;

is 'structure' in your search_path? in the declare section you didn't
prefix w/namespace:
>   SWV_Rs format_list2table_rs;

but you did everywhere else.

merlin



[GENERAL] I keep getting "type does not exist" on compile of this SETOF function (list 2 table)

2010-09-13 Thread Jonathan Brinkman
[CODE]

BEGIN;

DROP TYPE structure.format_list2table_rs CASCADE;

CREATE TYPE structure.format_list2table_rs AS (
  "item" VARCHAR(4000)
);

END;

CREATE OR REPLACE FUNCTION structure.format_list2table (
  "v_list" varchar,
  "v_delim" varchar
)
RETURNS SETOF structure.format_list2table_rs AS
$body$
/*
select * from Format_List2Table('1', '1');
SELECT item FROM Format_List2Table('first||2nd||III||1+1+1+1','||');
SELECT CAST(item AS INT) AS Example2 FROM
Format_List2Table('111,222,333,444,555',',');
SELECT item FROM Format_List2Table('12/1/2009, 12/2/2009, 12/3/2009,
12/4/2009, 12/7/2009, 12/8/2009,, 12/9/2009, 12/10/2009, 12/11/2009,',',');
SELECT * FROM Format_List2Table('1988,1390',',');
SELECT * FROM Format_List2Table('1988',',');
SELECT * FROM Format_List2Table('1988 1390 5151 5i7151 515545',' ');
*/

DECLARE
   v_item  VARCHAR(4000);
   v_Pos  INTEGER;
   v_RunLastTime  INTEGER;
   SWV_List VARCHAR(4000);
   SWV_Rs format_list2table_rs;

BEGIN
 --  SWV_List := v_List;
   BEGIN
  CREATE GLOBAL TEMPORARY TABLE tt_PARSEDLIST
(item VARCHAR(4000)) WITH OIDS;
  exception when others then truncate table tt_PARSEDLIST;
   END;
   SWV_List := v_list;
   v_RunLastTime := 0;
   SWV_List := CASE POSITION(v_delim IN SWV_List) WHEN 0 THEN
coalesce(SWV_List,'') || coalesce(v_delim,'') ELSE SWV_List END; --fix lists
with only 1 item
   v_Pos := POSITION(v_delim IN SWV_List);
   WHILE v_Pos > 0 LOOP
  v_item := LTRIM(RTRIM(SUBSTR(SWV_List,1,v_Pos -1)));
  IF v_item <> '' THEN 
INSERT INTO tt_PARSEDLIST(item)
VALUES(CAST(v_item AS
VARCHAR(4000)));
  ELSE
 INSERT INTO tt_PARSEDLIST(item)
VALUES(NULL);
  END IF;
  SWV_List := SUBSTR(SWV_List,length(SWV_List) -ABS(LENGTH(SWV_List)
-v_Pos)+1);
  v_Pos := POSITION(v_delim IN SWV_List);
  IF SWV_List = '' THEN v_Pos = null;
  END IF;
  IF v_Pos = 0 AND v_RunLastTime <> 1 then
 v_RunLastTime := 1;
 v_Pos := LENGTH(SWV_List)+1;
  END IF;
   END LOOP;

   FOR SWV_Rs IN(SELECT * FROM  tt_PARSEDLIST) LOOP
  RETURN NEXT SWV_Rs;
   END LOOP;
   RETURN;
END;
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
;
[/CODE]




Re: [GENERAL] Monitoring Object access

2010-09-13 Thread Greg Smith

adi hirschtein wrote:
Using the catalog tables, is there any way to correlate a session 
id/user id to which objects (i.e. tables, indexes etc.) it accesses, 
and how much disk read or I/O wait time has been incurred against 
those objects?  In general, I'd like to see which objects are being 
accessed by which user and the time/amount of I/O wait/reads.


On recent Linux systems, the iotop utility is handy to figure out which 
individual users are doing lots of I/O.  There are some cases where the 
user doing the I/O and the one who caused the I/O are different, which 
includes things from synchronized scans to background writer writes.  
But for the most part that utility gives a useful view into per-user I/O.


Mark Wong has done some good work toward integrating that same data 
source on Linux into something you can query and match against database 
activity in his pg_proctab project:  
http://www.slideshare.net/markwkm/pgproctab-accessing-system-stats-in-postgresql-3573304


And if you're on Solaris you can extract of a lot of this data with 
custom DTrace scripting.


I have a rough plan for directly instrumenting more of this information 
from within the database, more like what Oracle does here.  But that's 
going to take months of development time, and I'm not sure the 
PostgreSQL core will even accept the overhead it would add in all 
cases.  If we could get one Oracle user who's on the fence over a 
PostgreSQL conversion to throw a small portion of the money they'd save 
toward that project, I'm sure I could get it developed.  It's just that 
nobody has been interested enough in such a thing to sponsor it so far.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [GENERAL] Good candidate query for window syntax?

2010-09-13 Thread Dimitri Fontaine
Ketema Harris  writes:
> My goal is: To find the maximum number of concurrent rows over an
> arbitrary interval.

My guess is that the following would help you:

  http://wiki.postgresql.org/wiki/Range_aggregation

-- 
dim



Re: [GENERAL] User function canceling VACUUMDB utility

2010-09-13 Thread Tom Lane
Carlos Henrique Reimer  writes:
> Yes, once correct schema was included in the search_path, VACUUM and ANALYZE
> run fine again.

You'd be better advised to fix the function so it works regardless of
caller's search_path.  As-is, it's a loaded gun pointed at your foot.

regards, tom lane



Re: [GENERAL] Query plan choice issue

2010-09-13 Thread Tom Lane
Yaroslav Tykhiy  writes:
>   ->  Bitmap Heap Scan on dbmail_headervalue v   
> (cost=1409.82..221813.70 rows=2805 width=16) (actual  
> time=28543.411..28623.623 rows=1 loops=1)
> Recheck Cond: (v.headername_id = n.id)
> Filter: ("substring"(v.headervalue, 0,  
> 255) ~~* '%<@mail.gmail.com>%'::text)
> ->  Bitmap Index Scan on  
> dbmail_headervalue_testing  (cost=0.00..1409.82 rows=75940 width=0)  
> (actual time=17555.572..17555.572 rows=1877009 loops=1)
>   Index Cond: (v.headername_id = n.id)

I think the major problem you're having is that the planner is
completely clueless about the selectivity of the condition
"substring"(v.headervalue, 0,  255) ~~* '%<@mail.gmail.com>%'
If it knew that that would match only one row, instead of several
thousand, it would likely pick a different plan.

In recent versions of PG you could probably make a noticeable
improvement in this if you just dropped the substring() restriction
... do you actually need that?  Alternatively, if you don't want to
change the query logic at all, I'd try making an index on
substring(v.headervalue, 0, 255).  I'm not expecting the query
to actually *use* the index, mind you.  But its existence will prompt
ANALYZE to collect stats on the expression's value, and that will
help the planner with estimating the ~~* condition.
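
Tom's suggestion might look like this against the table from the thread (the index name is illustrative):

```sql
-- the index itself may never be chosen by the planner; its purpose is
-- to make ANALYZE gather statistics on the expression's values
CREATE INDEX dbmail_headervalue_substr
    ON dbmail_headervalue (substring(headervalue, 0, 255));
ANALYZE dbmail_headervalue;
```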

regards, tom lane



Re: [GENERAL] Incrementally Updated Backups

2010-09-13 Thread RB
On Sep 12, 2:39 pm, jo...@antarean.org ("J. Roeleveld") wrote:
> On Sunday 12 September 2010 13:32:00 Martijn van Oosterhout wrote:
>
> > On Sun, Sep 12, 2010 at 12:18:10PM +0200, J. Roeleveld wrote:
> > > How can you ensure the snapshot is in a consistent state if the server is
> > > running?
>
> > > If a snapshot is taken between 2 updates in a single transaction, only
> > > half of this transaction is included in the snapshot.
> > > I would never take an LVM (or similar) snapshot of an application that
> > > can't be paused in a way to provide a consistent filesystem.
>
> > That's the trick, the filesystem is always in a consistant state,
> > otherwise how could a database survive a power failure?
>
> This is something you want to try to avoid.
> Recovery situations are not always reliable.
>
> > The trick is WAL, which ensure that changes are logged consistantly and
> > replays them if the database crashes.
>
> > If you take a snapshot the database will simply startup and replay the
> > log as if the machine crashed at the point. All committed transactions
> > appears anything uncommitted vanishes.
>
> Nice in theory.
> Except backups can not be fully trusted if they rely on database recovery
> mechanics as part of the restore process.
>
> How certain can you be that the data you have in your backup will always
> allow you to recover 100%?
>
> --
> Joost
>
> --
> Sent via pgsql-general mailing list (pgsql-gene...@postgresql.org)
> To make changes to your 
> subscription:http://www.postgresql.org/mailpref/pgsql-general

Just to add that it is standard practice to use database recovery to
bring up a 'Hot Backup'. For example, if Oracle's default RMAN online
backup is used to restore from, it will put back the files it backed up
before recovering using the archived and current redo logs (i.e. the
postgres WAL). It is pretty standard practice to abort ('crash') the
database, which is the equivalent of a machine power outage. Recovering
from your 'inconsistent' backup is identical to recovering from your
user-generated 'abort'.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is there a bug in FOR i IN 1..10 LOOP (8.4.4)?

2010-09-13 Thread Rob Richardson
 
What makes you think there is a bug?  What does this function return for
you?  It always helps us to see everything you have seen.

Without taking the time to try it on my system, I'd recommend explicitly
converting your index to text:
num_var := num_var || ',' || i::text;
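
A minimal self-contained way to check this (hypothetical function name;
an untested sketch):

  CREATE OR REPLACE FUNCTION loop_concat_demo() RETURNS text AS $$
  DECLARE
      num_var text := '';
  BEGIN
      FOR i IN 1..10 LOOP
          -- explicit cast of the integer loop index before concatenating
          num_var := num_var || ',' || i::text;
      END LOOP;
      RETURN num_var;
  END;
  $$ LANGUAGE plpgsql;

  SELECT loop_concat_demo();  -- expect ',1,2,3,4,5,6,7,8,9,10'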

RobR

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to inherit search_path from template

2010-09-13 Thread Merlin Moncure
On Sun, Sep 12, 2010 at 8:34 AM, Scott Marlowe  wrote:
> On Thu, Sep 9, 2010 at 7:41 PM, Merlin Moncure  wrote:
>> On Thu, Sep 9, 2010 at 7:13 AM, Phui Hock  wrote:
>>> Hi,
>>> How can I create a database template with altered search_path to be
>>> inherited by child databases? Say, I created a template named
>>> template_a with the following commands:
>>
>> It doesn't really work that way -- GUC values are global,  not per
>> database.
>
> You can do "alter database xyz set search_path=..."

so you can...thanks for that!
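
For anyone searching the archives, the full incantation is along these
lines (database and schema names here are made up):

  ALTER DATABASE xyz SET search_path = myschema, public;
  -- takes effect for new sessions connecting to xyz;
  -- existing sessions keep their current setting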

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Post Install / Secure PostgreSQL

2010-09-13 Thread Sam Mason
On Fri, Sep 10, 2010 at 01:23:39PM -0700, Björn Lundin wrote:
> I usually do like this on a new box
> 
> sudo su -
> su - postgres
> createuser bnl
> exit
> exit

It would be somewhat easier to use sudo's "-u" switch, the following
should do the same as the above:

  sudo -u postgres createuser "$USER"
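
If you want the role's attributes spelled out at the same time,
something like this should work (flag choices are illustrative; check
createuser's man page):

  # non-superuser, may create databases and roles, prompt for a password
  sudo -u postgres createuser -S -d -r -P "$USER"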

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] User function canceling VACUUMDB utility

2010-09-13 Thread Carlos Henrique Reimer
Hi,

Yes, once correct schema was included in the search_path, VACUUM and ANALYZE
run fine again.

Thank you!

On Fri, Sep 10, 2010 at 11:38 AM, Tom Lane  wrote:

> Carlos Henrique Reimer  writes:
> > Yes, you're right! I found out a functional index using this function and
> > ANALYZE also cancels.
>
> > Is there a way to code this function in a way VACUUM/ANALYZE does not
> > cancel?
>
> I think your problem is probably a search_path issue, ie vacuumdb is not
> running with the "BRASIL" schema in its path so the column reference
> fails to resolve.  You should be able to add the schema name to the
> %TYPE reference.  Or, if you can't make that work, just don't use
> %TYPE...
>
>regards, tom lane
>
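
For the record, a schema-qualified %TYPE reference along the lines Tom
suggests would look like this (table and column names here are
invented):

  CREATE OR REPLACE FUNCTION check_value(v "BRASIL".clients.client_id%TYPE)
  RETURNS boolean AS $$
  BEGIN
      RETURN v IS NOT NULL;
  END;
  $$ LANGUAGE plpgsql;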



-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] new encoding (UTF8) is incompatible with the encoding of the template database (SQL_ASCII)

2010-09-13 Thread Craig Ringer
On 13/09/10 19:48, MailingLists wrote:
>  Dear all of you,
> 
> I'm currently working on a CentOS server that I manage with Webmin.
> As I try to create a DB with unicode encoding, the following message is
> returned to me:
> 
> 
>   new encoding (UTF8) is incompatible with the encoding of the
>   template database (SQL_ASCII)
> 
> After a little googling, I felt happy because the solution to create a
> new utf8 db seemed easy:
> createdb -E UTF8 -T template0 myDB
> 
> Unfortunately, I would like to create a default template database in
> UTF-8, or one accepting all other encodings.

You could drop template1 (the default template database) and recreate it
from template0 with a utf-8 encoding.
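
Roughly (an untested sketch -- run as a superuser, with no other
sessions connected to template1; depending on the cluster's locale you
may also need a compatible LC_COLLATE):

  -- template1 must lose its template flag before it can be dropped
  UPDATE pg_database SET datistemplate = false WHERE datname = 'template1';
  DROP DATABASE template1;
  CREATE DATABASE template1 WITH TEMPLATE template0 ENCODING 'UTF8';
  UPDATE pg_database SET datistemplate = true WHERE datname = 'template1';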

-- 
Craig Ringer

Tech-related writing: http://soapyfrogs.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] new encoding (UTF8) is incompatible with the encoding of the template database (SQL_ASCII)

2010-09-13 Thread MailingLists

 Dear all of you,

I'm currently working on a CentOS server that I manage with Webmin.
As I try to create a DB with unicode encoding, the following message is
returned to me:



 new encoding (UTF8) is incompatible with the encoding of the
 template database (SQL_ASCII)

After a little googling, I felt happy because the solution to create a
new utf8 db seemed easy:

createdb -E UTF8 -T template0 myDB

Unfortunately, I would like to create a default template database in
UTF-8, or one accepting all other encodings.


Your help would be appreciated,

Best regards,

Florent THOMAS




Re: [GENERAL] Incrementally Updated Backups

2010-09-13 Thread Mikko Partio
>
> >> That section has been removed from the current 9.0 docs because we are
> >> unsure it works.
> >
> > Is the feature (or the documentation) still being worked on, or is
> pg_dump
> > the only way to take a backup of a warm standby while the database is
> > running?
>
> I don't think you can take a pg_dump of a warm standby without making
> it recover.  But I can't see why you can't use a snapshot to recover a
> warm standby, since the file system will be just a base snapshot and a
> bunch of wal files.


Sorry, I got confused with the terms. What I meant was 'hot standby', the
new feature implemented in 9.0. I guess you can take a pg_dump out of a hot
standby, right?

Regards

Mikko


Re: [GENERAL] Query plan choice issue

2010-09-13 Thread Yaroslav Tykhiy

Hi Martin,

Thank you for your response!

On 13/09/2010, at 10:49 AM, Martin Gainty wrote:

a cursory look at the plan shows a full table scan (FTS) on
dbmail_headername invoked by the JOIN clause

JOIN dbmail_headername n ON v.headername_id=n.id

you would accelerate the seek appreciably by placing indexes on both
participating columns:

v.headername_id
n.id


Granted, there was no index on v.headername_id but creating one just  
slowed the query down, with a different plan:


CREATE INDEX dbmail_headervalue_testing ON dbmail_headervalue (headername_id);

EXPLAIN ANALYSE ...
                                        QUERY PLAN
--------------------------------------------------------------------------------------------
 Sort  (cost=222020.81..222020.81 rows=1 width=8) (actual time=28636.426..28636.426 rows=0 loops=1)
   Sort Key: m.message_idnr
   Sort Method:  quicksort  Memory: 25kB
   ->  Nested Loop  (cost=1409.82..222020.80 rows=1 width=8) (actual time=28636.409..28636.409 rows=0 loops=1)
         ->  Nested Loop  (cost=1409.82..222012.27 rows=1 width=24) (actual time=28636.405..28636.405 rows=0 loops=1)
               ->  Nested Loop  (cost=1409.82..221959.94 rows=6 width=8) (actual time=28543.441..28624.750 rows=1 loops=1)
                     ->  Seq Scan on dbmail_headername n  (cost=0.00..111.17 rows=1 width=8) (actual time=0.022..1.114 rows=1 loops=1)
                           Filter: ((headername)::text ~~* 'MESSAGE-ID'::text)
                     ->  Bitmap Heap Scan on dbmail_headervalue v  (cost=1409.82..221813.70 rows=2805 width=16) (actual time=28543.411..28623.623 rows=1 loops=1)
                           Recheck Cond: (v.headername_id = n.id)
                           Filter: ("substring"(v.headervalue, 0, 255) ~~* '%<@mail.gmail.com>%'::text)
                           ->  Bitmap Index Scan on dbmail_headervalue_testing  (cost=0.00..1409.82 rows=75940 width=0) (actual time=17555.572..17555.572 rows=1877009 loops=1)
                                 Index Cond: (v.headername_id = n.id)
               ->  Index Scan using dbmail_messages_physmessage_idx on dbmail_messages m  (cost=0.00..8.71 rows=1 width=16) (actual time=11.646..11.646 rows=0 loops=1)
                     Index Cond: (m.physmessage_id = v.physmessage_id)
                     Filter: ((m.status = ANY ('{0,1}'::integer[])) AND (m.mailbox_idnr = 12345))
         ->  Index Scan using dbmail_physmessage_pkey on dbmail_physmessage p  (cost=0.00..8.52 rows=1 width=8) (never executed)
               Index Cond: (p.id = m.physmessage_id)
 Total runtime: 28636.517 ms
(19 rows)


I also see a FTS on dbmail_headervalue invoked by the JOIN clause
JOIN dbmail_headervalue v ON v.physmessage_id=p.id
place indexes on both columns:
v.physmessage_id
p.id


Both columns already indexed here:

On public.dbmail_headervalue (alias v):
"dbmail_headervalue_2" btree (physmessage_id)

On public.dbmail_physmessage (alias p):
"dbmail_physmessage_pkey" PRIMARY KEY, btree (id)

Perhaps I should provide some data on the table sizes.

dbmail_headervalue is the largest table with respect to its record  
count: 36 million records.

dbmail_headername is small: 5640 records.
dbmail_physmessage and dbmail_messages are of an average size: ~2  
million records each.


Sorry for my cluelessness on this issue.  But Postgresql's ability to  
build a fast query plan for this query type at least occasionally is  
encouraging. :-)


Yar

the join clause JOIN dbmail_physmessage p ON m.physmessage_id=p.id
uses indexes for both participants


Martin
__
Disclaimer and confidentiality note

This message is confidential. If you are not the intended recipient,
we kindly ask to be notified. Any unauthorized forwarding or copying
is not permitted. This message serves only the exchange of information
and has no legally binding effect. Given how easily e-mail can be
manipulated, we accept no liability for its content.




From: y...@barnet.com.au
To: pgsql-general@postgresql.org
Subject: [GENERAL] Query plan choice issue
Date: Mon, 13 Sep 2010 09:36:35 +1000

Hi all,

I'm seeing a funny behaviour in Postgresql 8.4.4. Namely, a query can
be executed using either of two different query plans, one taking a
few milliseconds and the other, tens of seconds. The work_mem setting
doesn't seem to affect it -- tried to increase or decrease it by 2 or
4 times, but it didn't seem to favour the fast plan choice. Honestly,
I have no idea what affects the plan choice, but I saw Postgresql
change it at random.

The query in question looks like this -- sorry, it's rather complex:

SELECT message_idnr
FROM dbmail_messages m
JOIN dbmail_physmessage p ON m.physmessage_id=p.id
JOIN dbma

Re: [GENERAL] Incrementally Updated Backups

2010-09-13 Thread Scott Marlowe
On Mon, Sep 13, 2010 at 1:29 AM, Mikko Partio  wrote:
>> > I'm interested in the "incrementally updated backups" scenario
>> > described in section 25.6 of the Postgres 9 documentation. I've
>> > configured streaming replication for my warm standby server.
>> >
>> > Step 2 in this procedure is to note pg_last_xlog_replay_location at
>> > the end of the backup. However it seems like this requires hot standby
>> > to be configured; otherwise there is no way of connecting to the
>> > standby machine to make the required query. That does not seem clear
>> > from the documentation. Is there a way to get this without using hot
>> > standby?
>>
>> That section has been removed from the current 9.0 docs because we are
>> unsure it works.
>
> Is the feature (or the documentation) still being worked on, or is pg_dump
> the only way to take a backup of a warm standby while the database is
> running?

I don't think you can take a pg_dump of a warm standby without making
it recover.  But I can't see why you can't use a snapshot to recover a
warm standby, since the file system will be just a base snapshot and a
bunch of wal files.

Docs on continuous archiving are here:

http://www.postgresql.org/docs/8.4/interactive/continuous-archiving.html

-- 
To understand recursion, one must first understand recursion.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Monitoring Object access

2010-09-13 Thread adi hirschtein
Thanks!
I'll look into those system tools and probably come back with some more
questions...

Best,
Adi

On Mon, Sep 13, 2010 at 4:58 AM, Craig Ringer
wrote:

> On 09/12/2010 10:02 PM, adi hirschtein wrote:
>
>> Hi Craig,
>>
>> Thanks a lot for the quick response!
>> I'm coming from the Oracle side of the house, and in Oracle, for
>> instance, you use a shared buffer as well, but you are still able to
>> see which session is waiting for which blocks; if one session is doing
>> the "real" I/O, the other one waits on a 'wait for other session'
>> event, so you are able to know who did the actual I/O.
>
> There's nothing like that in PostgreSQL. There's some lock monitoring
> support for seeing what transactions hold locks and which other transactions
> are waiting on those locks, but AFAIK nothing like that for I/O. PostgreSQL
> does have DTrace hooks, so if you're on Solaris or some BSDs you might be
> able to use those to get the data you want.
>
> It'd be a pretty significant job to add a decent I/O monitoring system to
> PostgreSQL. Personally, if I needed something like that, I'd want to base it
> on an existing system-level tracing toolkit like Solaris's DTrace or Linux's
> "perf". I'd want to add some additional instrumentation hooks - some of
> which already exist in Pg for DTrace - to permit the tools to be aware of
> transactions, statements, the current database, which tables are which,
> which indexes are associated with which tables, etc. Then I'd use the data
> collected by the performance monitoring tools to report on load associated
> with particular users, indexes, tables, queries, etc. That way I'd be able
> to handle things like whether a request was satisfied with OS buffer cache
> or had to go to real disk, report on disk queue depth, etc as part of the
> whole system. It'd be a big job even with the use of existing trace tools to
> help.
>
> Currently there are some DTrace hooks, but I don't think there's any kind
> of integrated toolset like I've described to use the monitoring hooks plus
> the existing system hooks to do detailed reporting of load/user,
> load/tablespace, etc.
>
>
>> The reason behind it is that you want to check which objects are being
>> heavily hit by which business processes or users, and then tier your
>> storage accordingly.
>>
>
> At the moment, all you can really do is turn up the logging levels to log
> queries, logins, etc. Then watch pg_stat_activity and use system-level tools
> like iostat, vmstat, top, perf/dtrace, etc. If you see backends that're
> hogging resources you can look their pid up in pg_stat_activity or the logs,
> see what they were doing, and run controlled tests to see what can be
> improved.
>
> It's somewhat clumsy, but seems to work pretty well most of the time.
>
> Nobody has stepped up to build a comprehensive tracing and performance
> framework - and even if they did, they'd have to make it lightweight enough
> that it didn't slow PostgreSQL down when it wasn't in use, show that it
> wouldn't add an excessive maintenance burden for the developers, show that
> it wouldn't break or produce incorrect results the first time something
> changed, etc. The Linux kernel demonstrates just how hard getting this right
> can be. So does the amount of effort Sun put in to DTrace. Sure, PostgreSQL
> isn't an OS kernel, but it's far from simple.
>
> I guess that's why Oracle charges the big bucks - because of all the extras
> they include that round the database out into the kitchen-sink monster that
> it is.
>
>
>> Is there any place other than pg_stat_activity that you think I should
>> take a look at?
>>
>
> System-level tools and the postgresql logs, especially after proper
> configuration. There are some tools on pgfoundry that help a little with log
> analysis.
>
> --
> Craig Ringer
>


Re: [GENERAL] Incrementally Updated Backups

2010-09-13 Thread Mikko Partio
>
> > I'm interested in the "incrementally updated backups" scenario
> > described in section 25.6 of the Postgres 9 documentation. I've
> > configured streaming replication for my warm standby server.
> >
> > Step 2 in this procedure is to note pg_last_xlog_replay_location at
> > the end of the backup. However it seems like this requires hot standby
> > to be configured; otherwise there is no way of connecting to the
> > standby machine to make the required query. That does not seem clear
> > from the documentation. Is there a way to get this without using hot
> > standby?
>
> That section has been removed from the current 9.0 docs because we are
> unsure it works.



Is the feature (or the documentation) still being worked on, or is pg_dump
the only way to take a backup of a warm standby while the database is
running?

Regards

Mikko


[GENERAL] Server crash during simple c-language function

2010-09-13 Thread Tomáš Kovářík
Hi,

I am trying to create a simple c-language function for "PostgreSQL
8.4.4, compiled by Visual C++ build 1400, 32-bit" running on Windows 7
(32-bit). It works until I use SPI.

1) CRASH: I successfully execute a simple query using SPI_exec(), but
when getting the result, it crashes:
SPI_getvalue(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1);
I tried to look wherever I could, but I have no idea what could go wrong.

2) NOT COMPILE: When I want to read the value of the global SPI_result, I
cannot compile - unresolved external symbol "_SPI_result". I am
currently linking with 'postgres.lib" and I haven't found anything
else what to link with additionally. Compiling using Visual Studio
2008, C/C++ project set to compile for "C".

Can anybody point out what I am doing wrong?
The complete code and project are below.

Thanks, Tomas

 CODE =
#define BUILDING_DLL 1

#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

PG_FUNCTION_INFO_V1(crash_repro);


__declspec (dllexport)
Datum crash_repro(PG_FUNCTION_ARGS)
{
int exec_result;

SPI_connect();

exec_result = SPI_exec("select 'test'", 0);

elog(NOTICE, "exect_result_state = %i", exec_result);
elog(NOTICE, "SPI_processed = %u", SPI_processed);

if ((SPI_processed > 0) && (SPI_tuptable != NULL))
{
elog(NOTICE, "SPI_tuptable is not NULL");
SPI_getvalue(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1);
}

SPI_finish();

PG_RETURN_NULL();
}

= VISUAL STUDIO PROJECT ===
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general