[GENERAL] Restart Issue in Cluster environment

2011-02-22 Thread Itnal, Prakash (NSN - IN/Bangalore)
Hi,

We are using PostgreSQL 9.0.3 in a cluster environment, with the replication patch.


In a normal condition, if I restart the currently active PostgreSQL
instance in standby mode and the current standby instance in active mode,
we face a blocking issue.

On restarting the current standby in active mode, the flag maintained
internally by the engine (SELECT pg_is_in_recovery()) does not change.
Because of this, even though PostgreSQL starts properly, it remains in
read-only mode.

Restarting the current standby in active mode should be valid, because
both instances are in sync, and as long as they are in sync I should be
able to start either one in active mode.

So in such a scenario, is there any option to forcefully change the flag?
Or should PostgreSQL ideally change this flag itself?

Regards,
Prakash
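
For reference, the flag in question can be checked per node; a minimal sketch:

```sql
-- Returns true on a standby (read-only) node, false on an active primary
SELECT pg_is_in_recovery();
```

In stock 9.0 streaming replication, a standby leaves recovery only when it is promoted (e.g. via the trigger_file configured in recovery.conf), not merely by being restarted, which may explain the behavior described above.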




Re: [GENERAL] why is there no TRIGGER ON SELECT ?

2011-02-22 Thread Dmitriy Igrishin
Hey Melvin,

2011/2/22 Melvin Davidson 

>
> Other than "It's currently not available", can anyone provide a logical
> explanation of why triggers cannot be implemented for SELECT statements, or
> rules for SELECT must be DO INSTEAD SELECT?
>
> PostgreSQL was derived from Ingres, and Ingres had a nice auditing feature
> that also handled SELECT.  It would be simple enough to write a RULE or
> TRIGGER on a SELECT to just log access, but for some unexplainable reason
> (at least to my knowledge) this has been greatly restricted in PostgreSQL. I
> am sure many DBAs and developers would greatly appreciate the addition of a
> TRIGGER or RULE on SELECT, and it should be simple enough to change the
> code, so I am curious as to why this has never been done.
>
> Thanks in advance.
>
Why not use a function which returns a table and wrap the
logging (auditing) code in it?

>
>
>
> Melvin Davidson
>
>
>


-- 
// Dmitriy.


[GENERAL] Mysql to Postgresql

2011-02-22 Thread Adarsh Sharma

Dear all,

Today I need to back up a MySQL database and restore it into a PostgreSQL
database, but I don't know how to achieve this accurately.

Can anyone kindly describe the way to do this?


Thanks & best Regards,

Adarsh Sharma

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] multiple columns to one column

2011-02-22 Thread zab08


I have two table:




CREATE TABLE roles(role_name varchar(255) primary key);


CREATE TABLE roles_permissions(permission varchar(100), role_name varchar(100));


Here is the result of:
 SELECT * from roles;
 role_name 
---
 role1
 role2
(2 rows)


Here is the result of:
SELECT * from roles_permissions ;
 role_name | permission  
---+-
 role1 | permission1
 role1 | permission2
 role2 | permission1
(3 rows)


---


after the command:
SELECT * from roles, roles_permissions;
 role_name | role_name | permission  
---+---+-
 role1 | role1 | permission1
 role1 | role1 | permission2
 role1 | role2 | permission1
 role2 | role1 | permission1
 role2 | role1 | permission2
 role2 | role2 | permission1




the expected result is:
 role_name | permission  
---+-
 role1  | permission1,permission2,
 role2  | permission1




In short: after the join of the two tables there are repeated rows, and I
only want a single row for role1.
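
The expected one-row-per-role output is an aggregation, not a plain join; a sketch (string_agg requires PostgreSQL 9.0+; on older versions, array_to_string(array_agg(...), ',') is the usual substitute):

```sql
SELECT r.role_name,
       string_agg(p.permission, ',') AS permission
FROM roles r
JOIN roles_permissions p ON p.role_name = r.role_name
GROUP BY r.role_name
ORDER BY r.role_name;
```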











[GENERAL] System trigger

2011-02-22 Thread rsmogura

Hi,

Is there any solution (in code, internals-based), any API, that allows
creating a "system" trigger or handler on a table? I'm interested in
tracking changes and coercing values on row change/insert/remove - the
user may not disable such a "trigger". In addition, is it possible to
track changes to the cid, e.g. when VACUUM/CLUSTER moves a row?


Regards,
Radek



Re: [GENERAL] Mysql to Postgresql

2011-02-22 Thread Jens Wilke
On Tuesday 22 February 2011 10:21:01 Adarsh Sharma wrote:

Hi,
 
>  Today I need to back up a mysql database and restore in Postgresql
> database but I don't know how to achieve this accurately.
> 
> Can anyone kindly describe me the way to do this.

Have a look here:
http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL

HTH, Jens



Re: [GENERAL] disable triggers using psql

2011-02-22 Thread Geoffrey Myers

Greg Sabino Mullane wrote:



The saga continues.  I've reverted to a multi-step process to try and 
figure this out.  I create the initial database, then load it from the 
command line psql as follows:


pro-# \set session_replication_role replica;
pro-# \o db.out
pro-# \i dump.txt


This is a database SET, not a psql one, so you do not want the
backslash before the "set".


SET session_replication_role = replica;

I'd recommend adding a:

SHOW session_replication_role;

to the dump.txt as a sanity check.


For the sake of completeness, I've attempted the above, same result.  We 
have decided to take a different approach and attempt to clean up the 
data in the database, then convert.


I do appreciate all the time you've devoted to this.  There must be 
something in the dump that is causing these issues.


Thanks again Greg.  I'll certainly update the list once we have a 
working solution.




- -- 
Greg Sabino Mullane g...@turnstep.com

End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201102211529
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8






--
Until later, Geoffrey

"I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them."
- Thomas Jefferson



Re: [GENERAL] why is there no TRIGGER ON SELECT ?

2011-02-22 Thread Melvin Davidson
Dmitriy

>Why not use function which returns table and wrap the
>logging (auditing) code in it ?

Because to use a trigger function, you need a trigger, and as previously 
stated, you cannot have a trigger on select. The same applies for a rule.

Melvin Davidson 
 





  

Re: [GENERAL] why is there no TRIGGER ON SELECT ?

2011-02-22 Thread Dmitriy Igrishin
2011/2/22 Melvin Davidson 

> Dmitriy
>
>
> >Why not use function which returns table and wrap the
> >logging (auditing) code in it ?
>
> Because to use a trigger function, you need a trigger, and as previously
> stated, you cannot have a trigger on select. The same applies for a rule.
>
Yes, you can't. But why do you need a *trigger* function and a trigger?
Why not select via a regular function?

CREATE OR REPLACE FUNCTION public.test_select()
 RETURNS TABLE(id integer, name text)
 LANGUAGE sql
 SECURITY DEFINER -- note here!
AS $function$
SELECT 1, 'dima' UNION ALL
SELECT 2, 'melvin'; -- just for example I use simple union query
$function$;

You can revoke SELECT privileges on the table and
grant some users privileges on the function (to
prevent them from selecting from the table directly).
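
The revoke/grant step described above might look like this (role and table names are hypothetical):

```sql
REVOKE SELECT ON some_table FROM app_user;
GRANT EXECUTE ON FUNCTION public.test_select() TO app_user;
```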


> Melvin Davidson
>
>
>
>


-- 
// Dmitriy.


Re: [GENERAL] why is there no TRIGGER ON SELECT ?

2011-02-22 Thread David Johnston
Melvin:  The proposal is to do something of the form

 

SELECT * FROM selecting_function()

 

And have selecting_function() perform any necessary auditing.

 

I guess this works fairly well - as long as you remember to revoke "SELECT"
privileges on the wrapped table from everyone and set up SECURITY DEFINER on
the function itself.

 

For a normal "SELECT FROM table WHERE" construct the query planner is able
to use indexes on "table" based upon the where clause.  If you do "SELECT
FROM func() WHERE" does the entire table always get scanned/returned or are
indexes applied in this case as well?  These (performance concerns) are the
main reason that using a wrapping function is not intuitive.  It would also
require a slightly different paradigm for the end-user and would require
rewriting to make it work in an existing system that was directly accessing
tables before (I assume if access is being done via VIEWs that incorporating
the function calls into the views would work just fine).

 

David J

 

 

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Melvin Davidson
Sent: Tuesday, February 22, 2011 8:47 AM
To: Dmitriy Igrishin
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] why is there no TRIGGER ON SELECT ?

 


Dmitriy

>Why not use function which returns table and wrap the
>logging (auditing) code in it ?

Because to use a trigger function, you need a trigger, and as previously
stated, you cannot have a trigger on select. The same applies for a rule.

Melvin Davidson 

 



Re: [GENERAL] Mysql to Postgresql

2011-02-22 Thread Jaiswal Dhaval Sudhirkumar
Take a look at Navicat for PostgreSQL.
 
--
Thanks & Regards
 
Dhaval Jaiswal 
Cell: +91 80953.978.43



From: pgsql-general-ow...@postgresql.org on behalf of Adarsh Sharma
Sent: Tue 2/22/2011 2:51 PM
To: pgsql-general@postgresql.org
Cc: my...@lists.mysql.com
Subject: [GENERAL] Mysql to Postgresql



Dear all,

Today I need to back up a MySQL database and restore it into a PostgreSQL
database, but I don't know how to achieve this accurately.

Can anyone kindly describe the way to do this?


Thanks & best Regards,

Adarsh Sharma





Re: [GENERAL] why is there no TRIGGER ON SELECT ?

2011-02-22 Thread Melvin Davidson
Dave and Dmitriy,

I know a function can be used, but the point is to log access to a table
whenever "someone else" does a SELECT on it.

We cannot depend on a user including that (or any specific) function in a
SELECT. IOW, when any user does "SELECT ... FROM tablex;" then logging
should occur.

That cannot happen unless there is a trigger of the form:
CREATE TRIGGER tg_log_table AFTER SELECT
ON tablex FOR EACH STATEMENT
EXECUTE PROCEDURE log_table();

And yes, I know access to the table is restricted, but that is also not the
point. Suppose an application requires that a user/client be charged each
time they access certain data?

The point, and question, is not how to work around the problem, but rather
why this functionality cannot be implemented in triggers and rules.

Melvin Davidson 
Dmitriy

>Why not use function which returns table and wrap the
>logging (auditing) code in it ?

Because to use a trigger function, you need a trigger, and as previously 
stated, you cannot have a trigger on select. The same applies for a rule.

Melvin Davidson


  

Re: [GENERAL] why is there no TRIGGER ON SELECT ?

2011-02-22 Thread Thomas Kellerer

Melvin Davidson, 22.02.2011 15:42:

I know a function can be used, but the point is to log a table
whenever "someone else" does a SELECT on it.

It cannot be depended on that a user will include that (or any
specific function in a SELECT. iow, when any user does "SELECT ...
FROM tablex;" then logging should occur.


You can force users to use the function.

Remove the SELECT privilege on the table for the user, create a view that
uses the function, and then grant SELECT on the view to the users. Thus they
won't even notice they are going through a function and you can still audit
the SELECT. The function needs to be created with SECURITY DEFINER, though.

The downside is that this only works if the result set isn't too large,
because all rows returned by the function are first buffered on the server
before they are returned to the client.

Regards
Thomas
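
A sketch of the view-over-function setup described above (all names are hypothetical, and the audit table is assumed to exist):

```sql
CREATE OR REPLACE FUNCTION audited_tablex()
RETURNS SETOF tablex
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
BEGIN
  -- log who read the table and when
  INSERT INTO tablex_select_log (who, at) VALUES (current_user, now());
  RETURN QUERY SELECT * FROM tablex;
END;
$$;

REVOKE SELECT ON tablex FROM app_user;
CREATE VIEW tablex_v AS SELECT * FROM audited_tablex();
GRANT SELECT ON tablex_v TO app_user;
```

The users query tablex_v as if it were the table, and every read is logged, subject to the buffering caveat noted above.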




Re: [GENERAL] Notify rule

2011-02-22 Thread Tom Lane
ivan_14_32  writes:
> Q: How can I send tuple id (integer primary key) via notify using Rule?

You can't --- NOTIFY doesn't take variables in its parameters.
(That's a general property of utility commands, not just NOTIFY.)

Use the pg_notify() function, instead.

regards, tom lane



[GENERAL] work_mem = 900MB but Sort Method: external merge Disk: 304008kB

2011-02-22 Thread Reid Thompson
What am I missing that causes this to resort to sorting on disk?

obc=# select version();
 version
--
 PostgreSQL 8.3.7 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 
20071124 (Red Hat 4.1.2-42)
(1 row)

Time: 43.920 ms

> ---
>  Limit  (cost=1063641.92..1063643.47 rows=20 width=13) (actual 
> time=422710.147..422711.328 rows=20 loops=1)
>->  Unique  (cost=1063641.92..1064133.33 rows=6320 width=13) (actual 
> time=422710.022..422711.127 rows=20 loops=1)
>  ->  Sort  (cost=1063641.92..1063887.62 rows=98282 width=13) (actual 
> time=422710.014..422710.696 rows=172 loops=1)
>Sort Key: cpn.value
>Sort Method:  external merge  Disk: 304008kB
>->  Nested Loop  (cost=647.20..1061026.67 rows=98282 width=13) 
> (actual time=61.029..71867.921 rows=9627373 loops=1)
>  ->  HashAggregate  (cost=647.20..648.15 rows=95 width=4) 
> (actual time=60.950..64.350 rows=596 loops=1)
>->  Hash Join  (cost=4.59..646.96 rows=95 width=4) 
> (actual time=0.352..57.210 rows=596 loops=1)
>  Hash Cond: (cb.client_id = c.id)
>  ->  Seq Scan on contact_block cb  
> (cost=0.00..596.31 rows=12031 width=8) (actual time=0.015..26.757 rows=10323 
> loops=1)
>  ->  Hash  (cost=4.58..4.58 rows=1 width=4) 
> (actual time=0.064..0.064 rows=1 loops=1)
>->  Seq Scan on clients c  
> (cost=0.00..4.58 rows=1 width=4) (actual time=0.021..0.055 rows=1 loops=1)
>  Filter: ((name)::text = 'Kmart 
> Pharmacies, Inc.'::text)
>  ->  Index Scan using extra_import_param_blk_item_tag on 
> extra_import_param cpn  (cost=0.00..11039.67 rows=9777 width=17) (actual 
> time=0.057..61.769 rows=16153 loops=596)
>Index Cond: ((cpn.block_id = cb.id) AND 
> ((cpn.tag)::text = 'PATNAME'::text))
>  Total runtime: 422920.026 ms
> (16 rows)
> 
> Time: 422924.289 ms
> obc=# show work_mem;
>  work_mem 
> --
>  900MB
> (1 row)





Re: [GENERAL] multiple columns to one column

2011-02-22 Thread Andreas Kretschmer
zab08  wrote:

> 
> I have two table:
> 
> 
> CREATE TABLE roles(role_name varchar(255) primary key);
> 
> CREATE TABLE roles_permissions(permission varchar(100), role_name varchar
> (100));
> 
> here is result by :
>  SELECT * from roles;
>  role_name 
> ---
>  role1
>  role2
> (2 rows)
> 
> here is result by :
> SELECT * from roles_permissions ;
>  role_name | permission  
> ---+-
>  role1 | permission1
>  role1 | permission2
>  role2 | permission1
> (3 rows)
> 
>  ---
> 
> after the command:
> SELECT * from roles, roles_permissions;

That's a CROSS-JOIN



>  role_name | role_name | permission  
> ---+---+-
>  role1 | role1 | permission1
>  role1 | role1 | permission2
>  role1 | role2 | permission1
>  role2 | role1 | permission1
>  role2 | role1 | permission2
>  role2 | role2 | permission1
> 
> 
> the expected result is:
>  role_name | permission  
> ---+-
>  role1  | permission1,permission2,
>  role2  | permission1
> 
> 
> for short:
> after the join of the two tables there are some repeated rows,

yeah, it's a cross-join. Rewrite your query, adding a WHERE-condition:

where roles.role_name = roles_permissions.role_name
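
With that condition applied, the query from the original post becomes:

```sql
SELECT roles.role_name, roles_permissions.permission
FROM roles, roles_permissions
WHERE roles.role_name = roles_permissions.role_name;
```

Equivalently, this can be written with an explicit JOIN ... ON clause.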




Regards, Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



[GENERAL] Convert database from ASCII to UTF-8

2011-02-22 Thread Jeff Amiel
It's come time to bite the bullet and convert a half-terabyte database from
ASCII to UTF-8. I have gone through a bit of effort to track down the unclean
ASCII text and repair it, but would like to avoid the outage of a many-hour
dump/restore.

Using Postgres 8.4.X.

Are there any other magic options open to me?  
Any way to do an in-place conversion?
I assume slony replication is an option.
What about some sort of wal log shipping replication?

Any thoughts would be appreciated.
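
As a starting point, the current encoding of each database can be checked with a query like this (a sketch):

```sql
SELECT datname, pg_encoding_to_char(encoding) AS encoding
FROM pg_database;
```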



  



Re: [GENERAL] work_mem = 900MB but Sort Method: external merge Disk: 304008kB

2011-02-22 Thread Andreas Kretschmer
Reid Thompson  wrote:

> What am I missing that causes this to resort to sorting on disk?
> 
> obc=# select version();
>  version
> --
>  PostgreSQL 8.3.7 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 
> 20071124 (Red Hat 4.1.2-42)
> (1 row)
> 
> Time: 43.920 ms
> 
> > ---
> >  Limit  (cost=1063641.92..1063643.47 rows=20 width=13) (actual 
> > time=422710.147..422711.328 rows=20 loops=1)
> >->  Unique  (cost=1063641.92..1064133.33 rows=6320 width=13) (actual 
> > time=422710.022..422711.127 rows=20 loops=1)
> >  ->  Sort  (cost=1063641.92..1063887.62 rows=98282 width=13) 
> > (actual time=422710.014..422710.696 rows=172 loops=1)
> >Sort Key: cpn.value
> >Sort Method:  external merge  Disk: 304008kB

Bad estimation: rows=98282, actual rows=172



Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



[GENERAL] Reordering a table

2011-02-22 Thread Howard Cole

Hi,

a puzzle to solve...

I have a table with a primary key, and a timestamp, e.g.

id  stamp
1   2011-02-01 10:00
2   2011-02-01 09:00
3   2011-02-01 11:00

Now for reasons too painful to go into, I need to reorder the id
(sequence) so that they are in time order:

id  stamp
1   2011-02-01 09:00
2   2011-02-01 10:00
3   2011-02-01 11:00

I thought I could do it by adding a third column with the order in it,
but I cannot think of a way to do this short of writing some code:

id  stamp             order
1   2011-02-01 10:00  2
2   2011-02-01 09:00  1
3   2011-02-01 11:00  3

Any ideas?

Thanks.
Howard
www.selestial.com



Re: [GENERAL] work_mem = 900MB but Sort Method: external merge Disk: 304008kB

2011-02-22 Thread Tom Lane
Reid Thompson  writes:
> What am I missing that causes this to resort to sorting on disk?

The in-memory space required to sort N tuples can be significantly
larger than the on-disk space, because the latter representation is
optimized to be small and the in-memory representation not so much.
I haven't seen a 3X differential before, but it's not outside the realm
of reason, especially for narrow rows like these where it's all about
the overhead.  I suspect if you crank work_mem up still more, you'll see
it switch over.  It flips to on-disk sort when the in-memory
representation exceeds the limit ...

regards, tom lane
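
One way to test the explanation above is to raise work_mem for the session only and re-run the query (a sketch; the exact value needed is a guess):

```sql
SET work_mem = '1800MB';  -- session-local; does not touch postgresql.conf
-- then re-run: EXPLAIN ANALYZE <the query>;
```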



Re: [GENERAL] Convert database from ASCII to UTF-8

2011-02-22 Thread Vibhor Kumar

On Feb 22, 2011, at 10:23 PM, Jeff Amiel wrote:

> It's come time to bite the bullet and convert a half-terabyte database from
> ASCII to UTF-8. I have gone through a bit of effort to track down the unclean
> ASCII text and repair it, but would like to avoid the outage of a many-hour
> dump/restore.
> 
> Using Postgres 8.4.X.
> 
> I assume slony replication is an option.

Right! Replication would help. You can also try Bucardo.

> What about some sort of wal log shipping replication?


WAL Log shipping won't help.


Thanks & Regards,
Vibhor Kumar



Re: [GENERAL] Convert database from ASCII to UTF-8

2011-02-22 Thread Vick Khera
On Tue, Feb 22, 2011 at 11:53 AM, Jeff Amiel  wrote:
> I assume slony replication is an option.

this is my plan, once i finish cleaning up the code and the DB data.
you have to ensure that whatever the original DB emits (in the form of
COPY and individual updates later on) will import correctly into the
new DB without error, else you're hosed.

also, I'd do it one table at a time rather than all at once, to
minimize failure cases if there is a problem with one table.


> What about some sort of wal log shipping replication?
>
I don't think you can do that. WAL shipping is binary replication that
copies disk pages.



Re: [GENERAL] Reordering a table

2011-02-22 Thread David Kerr
On Tue, Feb 22, 2011 at 04:40:36PM +, Howard Cole wrote:
- Hi,
- 
- a puzzle to solve...
- 
- I have a table with a primary key, and a timestamp, e.g.
- 
- id  stamp
- 1   2011-02-01 10:00
- 2   2011-02-01 09:00
- 3   2011-02-01 11:00
- 
- Now for reasons too painful to go into, I need to reorder the id 
- (sequence) so that they are in time order:
- 
- id  stamp
- 1   2011-02-01 09:00
- 2   2011-02-01 10:00
- 3   2011-02-01 11:00
- 
- I thought I could do it by adding a third column with the order in it, 
- but I cannot think of a way to do this short of writing some code:
- 
- id  stamp             order
- 1   2011-02-01 10:00  2
- 2   2011-02-01 09:00  1
- 3   2011-02-01 11:00  3
- 
- Any ideas?
- 
- Thanks.
- Howard
- www.selestial.com

There is almost certainly a slicker way, but what comes to the top of my head is:

CREATE TABLE new_table (id serial, stamp timestamp);
INSERT INTO new_table (stamp) SELECT stamp FROM old_table ORDER BY stamp;
ALTER TABLE old_table RENAME TO old_table_old;  -- just to be safe
ALTER TABLE new_table RENAME TO old_table;

This, of course, doesn't stop stuff from getting out of order again.

Dave



Re: [GENERAL] work_mem = 900MB but Sort Method: external merge Disk: 304008kB

2011-02-22 Thread Andreas Kretschmer
Tom Lane  wrote:

> Reid Thompson  writes:
> > What am I missing that causes this to resort to sorting on disk?
> 
> The in-memory space required to sort N tuples can be significantly
> larger than the on-disk space, because the latter representation is
> optimized to be small and the in-memory representation not so much.
> I haven't seen a 3X differential before, but it's not outside the realm
> of reason, especially for narrow rows like these where it's all about
> the overhead.  I suspect if you crank work_mem up still more, you'll see
> it switch over.  It flips to on-disk sort when the in-memory
> representation exceeds the limit ...

Question: when is the planner making the decision between in-memory and
on-disk, at planning-time or at execution time with the knowledge about
the real amount of tuples?


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] Reordering a table

2011-02-22 Thread Vibhor Kumar

On Feb 22, 2011, at 10:10 PM, Howard Cole wrote:

> Hi,
> 
> a puzzle to solve...
> 
> I have a table with a primary key, and a timestamp, e.g.
> 
> id  stamp
> 1   2011-02-01 10:00
> 2   2011-02-01 09:00
> 3   2011-02-01 11:00
> 
> Now for reasons too painful to go into, I need to reorder the id (sequence) 
> so that they are in time order:
> 
> id  stamp
> 1   2011-02-01 09:00
> 2   2011-02-01 10:00
> 3   2011-02-01 11:00
> 
> I thought I could do it by adding a third column with the order in it, but I 
> cannot think of a way to do this short of writing some code:
> 
> id  stamp             order
> 1   2011-02-01 10:00  2
> 2   2011-02-01 09:00  1
> 3   2011-02-01 11:00  3
> 
> Any ideas?
> 

If you are using PG 8.4 then you can try something with row_number as given 
below:
 select id,stamp, row_number() over(order by stamp) from test;

Or 

Create table test1 as select row_number() over(order by stamp) as id, stamp 
from test;

Thanks & Regards,
Vibhor Kumar



Re: [GENERAL] work_mem = 900MB but Sort Method: external merge Disk: 304008kB

2011-02-22 Thread Reid Thompson
On 02/22/2011 12:06 PM, Tom Lane wrote:
> Reid Thompson  writes:
>> What am I missing that causes this to resort to sorting on disk?
> 
> The in-memory space required to sort N tuples can be significantly
> larger than the on-disk space, because the latter representation is
> optimized to be small and the in-memory representation not so much.
> I haven't seen a 3X differential before, but it's not outside the realm
> of reason, especially for narrow rows like these where it's all about
> the overhead.  I suspect if you crank work_mem up still more, you'll see
> it switch over.  It flips to on-disk sort when the in-memory
> representation exceeds the limit ...
> 
>   regards, tom lane

Ahh, OK; the underlying cpn.value table is 11 GB, so I understand how an
even slightly less optimized representation could be significantly larger
than ~300MB/900MB.

Thanks,
reid



Re: [GENERAL] work_mem = 900MB but Sort Method: external merge Disk: 304008kB

2011-02-22 Thread Tom Lane
Andreas Kretschmer  writes:
> Tom Lane  wrote:
>> Reid Thompson  writes:
>>> What am I missing that causes this to resort to sorting on disk?
>> 
>> The in-memory space required to sort N tuples can be significantly
>> larger than the on-disk space,

> Question: when is the planner making the decision between in-memory and
> on-disk, at planning-time or at execution time with the knowledge about
> the real amount of tuples?

The planner doesn't make that decision.  tuplesort.c always starts in
in-memory mode, and flips to on-disk when the actual amount of data in
its care exceeds work_mem.  The planner guesses whether that will happen
while making cost estimates, but it's only an estimate.

regards, tom lane



Re: [GENERAL] Notify rule

2011-02-22 Thread Merlin Moncure
On Tue, Feb 22, 2011 at 1:03 AM, ivan_14_32  wrote:
> Hi!
> I am trying to create a rule for update notifying:
>
> CREATE OR REPLACE RULE "ttt_NotifyU" AS
>     ON UPDATE TO "ttt" DO
>  NOTIFY "ttt", '88';
> this work.
>
> But this
> CREATE OR REPLACE RULE "ttt_NotifyU" AS
>     ON UPDATE TO "ttt" DO
>  NOTIFY "ttt", NEW."id";
> don't (syntax error),
> NOTIFY "ttt", CAST(NEW."id" as text)
> too.
>
> Q: How can I send tuple id (integer primary key) via notify using Rule?
>     Target is erasing deprecated tuples from application's cache.

why use a rule here and not a trigger? Triggers might be a bit more
verbose to write, but explicit rules have an endless list of gotchas
and should be avoided unless you have a really good reason.

merlin



Re: [GENERAL] Notify rule

2011-02-22 Thread Scott Ribe
On Tue, Feb 22, 2011 at 1:03 AM, ivan_14_32  wrote:
> Hi!
> I am trying to create a rule for update notifying:
> 
> CREATE OR REPLACE RULE "ttt_NotifyU" AS
> ON UPDATE TO "ttt" DO
>  NOTIFY "ttt", '88';
> this work.
> 
> But this
> CREATE OR REPLACE RULE "ttt_NotifyU" AS
> ON UPDATE TO "ttt" DO
>  NOTIFY "ttt", NEW."id";
> don't (syntax error),
> NOTIFY "ttt", CAST(NEW."id" as text)
> too.
> 
> Q: How can I send tuple id (integer primary key) via notify using Rule?
> Target is erasing deprecated tuples from application's cache.

Well, that second argument to NOTIFY must be a literal. If you need to notify
with a dynamic value, then you need to use the pg_notify() function
instead - regardless of whether you use a rule or a trigger.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
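
Putting the two answers together, a trigger-based version with a dynamic payload might look like this (a sketch; pg_notify() is available from 9.0, and the function/trigger names are hypothetical):

```sql
CREATE OR REPLACE FUNCTION ttt_notify_fn() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  -- the payload must be text, so cast the primary key
  PERFORM pg_notify('ttt', NEW.id::text);
  RETURN NEW;
END;
$$;

CREATE TRIGGER ttt_notify_u
AFTER UPDATE ON "ttt"
FOR EACH ROW EXECUTE PROCEDURE ttt_notify_fn();
```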







Re: [GENERAL] Reordering a table

2011-02-22 Thread David Johnston
Something like:

SELECT 
ordered.stamp, 
nextval('sequence') AS rownumber
FROM (SELECT stamp FROM table ORDER BY stamp ASC) ordered

Incorporate the ID field and UPDATE as necessary to get the result the way
you need it.

You are apparently aware that you likely have a design or understanding
issue, given that you need this ordering to be persistent.

You may be able to accomplish a similar result without the use of a sequence
by using WINDOW but for a one-off scenario the sequence should suffice.

David J.

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Howard Cole
Sent: Tuesday, February 22, 2011 11:41 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Reordering a table

Hi,

a puzzle to solve...

I have a table with a primary key, and a timestamp, e.g.

id  stamp
1   2011-02-01 10:00
2   2011-02-01 09:00
3   2011-02-01 11:00

Now for reasons too painful to go into, I need to reorder the id
(sequence) so that they are in time order:

id  stamp
1   2011-02-01 09:00
2   2011-02-01 10:00
3   2011-02-01 11:00

I thought I could do it by adding a third column with the order in it, but I
cannot think of a way to do this short of writing some code:

id  stamp             order
1   2011-02-01 10:00  2
2   2011-02-01 09:00  1
3   2011-02-01 11:00  3

Any ideas?

Thanks.
Howard
www.selestial.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reordering a table

2011-02-22 Thread Howard Cole

On 22/02/2011 5:18 PM, Vibhor Kumar wrote:
If you are using PG 8.4 then you can try something with row_number as 
given below:

  select id,stamp, row_number() over(order by stamp) from test;

Or

Create table test1 as select row_number() over(order by stamp) as id, stamp 
from test;

Thanks & Regards,
Vibhor Kumar


I have not come across that function before. I'll take a closer look.

Many thanks to all for the quick responses.

Howard.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] EXECUTE of a 'create table' string is not happening

2011-02-22 Thread Ralph Smith

Hi,

I'm passing a tablename and two columnnames into a function so that I 
can SELECT Records in a FOR LOOP using 'fixed' field names.
Using the passed params I construct the create table command and then 
alter the names of two columns.


When I 'build' the function and then run my query to use the function w/ 
a different offset it works the first time.
The first time ONLY.  But actually it doesn't work, it just doesn't 
error the first run.
Subsequent runs tell me that relation  doesn't exist, at the 
FOR...LOOP line, after ALTERing the table.


I build the string and it's good:
CREATE TABLE temp_gentokenstable AS SELECT businessid, publicname FROM 
businesscontact ORDER BY businessid OFFSET 20 LIMIT 1 ;


I'm forced to use v7.4.

If this is a known error, can I (with the version  ; ^)  ) get around it?

Ultimately I need to FOR..LOOP through records, and the table and 
cols will change.  Any suggestions???


THANKS!

--

Ralph
_



Re: [GENERAL] EXECUTE of a 'create table' string is not happening

2011-02-22 Thread David Johnston
A little lost but the first thing that stands out is that you are attempting
to create an actual table instead of a temporary table.  Not sure if that
difference is meaningful to the function but procedurally is there a reason
to create the permanent table instead of a temporary one?

 

If you do need a permanent table would you be able to generate the data as
part of routine maintenance and/or via triggers instead of building out the
entire (or portion) of the table each time?

 

I do not think you have provided enough code to get good feedback.  The
entire function would probably help - though maybe a simplified version but
one that still exhibits the behavior in question.

 

You also do not provide the minor release level which may be relevant.

 

David J.

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Ralph Smith
Sent: Tuesday, February 22, 2011 2:05 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] EXECUTE of a 'create table' string is not happening

 

Hi,

I'm passing a tablename and two columnnames into a function so that I can
SELECT Records in a FOR LOOP using 'fixed' field names.
Using the passed params I construct the create table command and then alter
the names of two columns.

When I 'build' the function and then run my query to use the function w/ a
different offset it works the first time.
The first time ONLY.  But actually it doesn't work, it just doesn't error
the first run.
Subsequent runs tell me that relation  doesn't exist, at the
FOR...LOOP line, after ALTERing the table.

I build the string and it's good:
CREATE TABLE temp_gentokenstable AS SELECT businessid, publicname FROM
businesscontact ORDER BY businessid OFFSET 20 LIMIT 1 ;

I'm forced to use v7.4.

If this is a known error, can I (with the version  ; ^)  ) get around it?

Ultimately I need to FOR..LOOP through records, and the table and
cols will change.  Any suggestions???

THANKS!



-- 
 
Ralph
_


Re: [GENERAL] EXECUTE of a 'create table' string is not happening

2011-02-22 Thread Pavel Stehule
2011/2/22 Ralph Smith :
> Hi,
>
> I'm passing a tablename and two columnnames into a function so that I can
> SELECT Records in a FOR LOOP using 'fixed' field names.
> Using the passed params I construct the create table command and then alter
> the names of two columns.
>
> When I 'build' the function and then run my query to use the function w/ a
> different offset it works the first time.
> The first time ONLY.  But actually it doesn't work, it just doesn't error
> the first run.
> Subsequent runs tell me that relation  doesn't exist, at the
> FOR...LOOP line, after ALTERing the table.
>
> I build the string and it's good:
> CREATE TABLE temp_gentokenstable AS SELECT businessid, publicname FROM
> businesscontact ORDER BY businessid OFFSET 20 LIMIT 1 ;
>
> I'm forced to use v7.4.
>
> If this is a known error, can I (with the version  ; ^)  ) get around it?


>
> Ultimately I need to FOR..LOOP through records and the table and
> cols will change.  Any suggestions???
>

use an EXECUTE statement and a FOR ... IN EXECUTE statement

regards

Pavel Stehule
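A minimal 7.4-style sketch of that combination (quote-doubled function body, as in the original; the function, table, and column names here are placeholders). Iterating with FOR ... IN EXECUTE avoids the cached plan that keeps pointing at the first incarnation of the re-created table:

```sql
CREATE OR REPLACE FUNCTION loop_demo(varchar, varchar) RETURNS void AS '
DECLARE vTableName ALIAS FOR $1 ;
        KeyColName ALIAS FOR $2 ;
        TableRec   RECORD ;
BEGIN
    -- Build the query dynamically and iterate over it directly,
    -- so no plan referencing a dropped table is cached.
    FOR TableRec IN EXECUTE ''SELECT '' || KeyColName || '' AS id FROM ''
                            || vTableName || '' ORDER BY '' || KeyColName LOOP
        RAISE NOTICE ''id = %'', TableRec.id ;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;
```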


> THANKS!
>
> --
>
> Ralph
> _
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re ==> EXECUTE of a 'create table' string is not happening

2011-02-22 Thread Ralph Smith

Here's what I'm doing.
It is to work on existing tables (not triggerable), but for subsequent 
updates to the table(s) that I'm tokenizing fields for, a trigger will 
be used to do the tokenizing of new data
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _


   CREATE OR REPLACE FUNCTION gen_tokens_n_map(character varying,
       character varying, varchar, bigint, bigint)
     RETURNS void AS
   '
   -- Sample invocation:
   --   SELECT gen_tokens_n_map(^businesscontact^, ^businessid^, ^publicname^, 0, 1) ;
   --   Where ^ are actually apostrophes.

   DECLARE vTableName    ALIAS FOR $1 ;
           KeyColName    ALIAS FOR $2 ;
           ValueColName  ALIAS FOR $3 ;
           offsetval     ALIAS FOR $4 ;
           limitval      ALIAS FOR $5 ;
           TableFieldVal INTEGER ;
           TableRec      RECORD ;
           BusID         BIGINT ;
   ...
     ExecuteString := ''CREATE TABLE temp_gentokenstable (ID, ValueCol) AS '' ;
     ExecuteString := ExecuteString || '' SELECT '' || KeyColName || '', '' || ValueColName ;
     ExecuteString := ExecuteString || '' FROM '' || vTableName || '' ORDER BY '' || KeyColName ;
     ExecuteString := ExecuteString || '' OFFSET '' || offsetval::text::varchar || '' LIMIT '' || limitval::text::varchar || '' ;'' ;

     EXECUTE ExecuteString ;

 FOR TableRec IN SELECT * FROM temp_gentokenstable order by id LOOP


_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _

I will now try Pavel's suggestion of

   use a EXECUTE statement and FOR IN EXECUTE statement

And to DJ's question, version 7.4.12.  Yes, OUCH!

I'll let you know if the execute in the FOR LOOP will help out.

Thanks all!
Ralph


--

Ralph
_



[GENERAL] CLOSURE: EXECUTE of a 'create table' string is not happening

2011-02-22 Thread Ralph Smith

This worked!!!
 FOR TableRec IN EXECUTE ExecuteString LOOP

THANKS ALL!!!
Ralph

p.s.  The reason we're still using 7.4 is that some system logs were 
trashed and we NEED that data.  All but the BLOBs have been recovered, 
and therein lies the problem.

=

Pavel Stehule wrote:

2011/2/22 Ralph Smith :
  

Hi,

I'm passing a tablename and two columnnames into a function so that I can
SELECT Records in a FOR LOOP using 'fixed' field names.
Using the passed params I construct the create table command and then alter
the names of two columns.

When I 'build' the function and then run my query to use the function w/ a
different offset it works the first time.
The first time ONLY.  But actually it doesn't work, it just doesn't error
the first run.
Subsequent runs tell me that relation  doesn't exist, at the
FOR...LOOP line, after ALTERing the table.

I build the string and it's good:
CREATE TABLE temp_gentokenstable AS SELECT businessid, publicname FROM
businesscontact ORDER BY businessid OFFSET 20 LIMIT 1 ;

I'm forced to use v7.4.

If this is a known error, can I (with the version  ; ^)  ) get around it?




  

Ultimately I need to FOR..LOOP through records and the table and
cols will change.  Any suggestions???




use an EXECUTE statement and a FOR ... IN EXECUTE statement

regards

Pavel Stehule


  

THANKS!

--

Ralph
_




--

Ralph
_



Re: [GENERAL] why is there no TRIGGER ON SELECT ?

2011-02-22 Thread Igor Neyman
> -Original Message-
> From: Melvin Davidson [mailto:melvin6...@yahoo.com] 
> Sent: Tuesday, February 22, 2011 9:43 AM
> To: 'Dmitriy Igrishin'; David Johnston
> Cc: pgsql-general@postgresql.org
> Subject: Re: why is there no TRIGGER ON SELECT ?
> 
> Dave and Dmitriy,
> 
> I know a function can be used, but the point is to log a 
> table whenever "someone else" does a SELECT on it.
> 
> It cannot be depended on that a user will include that (or 
> any specific function in a SELECT.  iow, when any user does 
> "SELECT ... FROM tablex;" then logging should occur.
> 
> That cannot happen unless there a trigger of the form:
> CREATE TRIGGER tg_log_table AFTER SELECT ON tablex FOR EACH 
> STATEMENT EXECUTE PROCEDURE log_table();
> 
> And yes, I know access the to table is restricted, but that 
> is also not the point. Suppose an application requires that a 
> user/client be charged for each time they access certain data?
> 
> The point, and question, is not how to work around the 
> problem, but rather, why this functionality cannot be 
> implemented in triggers and rules.
> 
> Melvin Davidson 
> 
> 
> Dmitriy
> 
> >Why not use function which returns table and wrap the logging 
> >(auditing) code in it ?
> 
> Because to use a trigger function, you need a trigger, and as 
> previously stated, you cannot have a trigger on select. The 
> same applies for a rule.
> 
> Melvin Davidson 
> 
>

Somewhat OT (but, m.b. related?):

With every view there is:

 RULE "_RETURN" AS ON SELECT TO ... DO INSTEAD ...

Which PG creates automatically (behind the scenes) when a view is created.

Is there a way to modify this Rule, or has anyone tried? 
It is, after all, a "DO INSTEAD" Rule.
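For comparison, a hedged sketch of the function-wrapper workaround suggested earlier in the thread (all names here are hypothetical):

```sql
-- Audit-on-read: clients query the function rather than the table,
-- so each read is logged before the rows are returned.
CREATE TABLE tablex_audit (who text, at timestamptz);

CREATE OR REPLACE FUNCTION read_tablex() RETURNS SETOF tablex AS $$
BEGIN
    INSERT INTO tablex_audit VALUES (current_user, now());
    RETURN QUERY SELECT * FROM tablex;
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT * FROM read_tablex();
```

This only audits callers who go through the function, which is exactly the limitation being discussed.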

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-22 Thread Aleksey Tsalolikhin
Hi.  Last week our 60 GB database (per psql \l+) was (re-)replicated to
the DR site using SlonyI, and arrived 109 GB in size which caused a
problem as it filled up the filesystem on the DR server - we expected the
DR database to be the same size.  Mystery.

Now just past weekend we upgraded our production server by pg_dump
and pg_restore, and again the database is 109 GB in size!

Most of our data is in a single table, which on the old server is 50 GB in
size and on the new server is 100 GB in size.

Could you please help us understand why a COPY of the data into a new
database (whether DR or the new server) results in different disk usage?

Somebody mentioned on the Slony users list that there is a kind of padding
that goes in that actually helps performance.

Is there a way to track disk usage MINUS the padding?

Thanks,
Aleksey
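One way to compare heap size against total on-disk footprint is the standard size functions (a sketch; available in the versions discussed here):

```sql
-- Heap-only size vs. size including indexes and TOAST,
-- for the ten largest tables:
SELECT relname,
       pg_size_pretty(pg_relation_size(oid))       AS heap,
       pg_size_pretty(pg_total_relation_size(oid)) AS total
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```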

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to extract a value from a record using attnum or attname?

2011-02-22 Thread Dimitri Fontaine
"Kevin Grittner"  writes:

> PL/pgSQL seems tantalizingly close to being useful for developing a
> generalized trigger function for notifying the client of changes.  I
> don't know whether I'm missing something or whether we're missing a
> potentially useful feature here.  Does anyone see how to fill in
> where the commented question is, or do I need to write this function
> in C?

See those:

  http://tapoueh.org/articles/blog/_Dynamic_Triggers_in_PLpgSQL.html
  http://www.pgsql.cz/index.php/PL_toolbox_%28en%29#Record.27s_functions

>   for i in array_lower(keycols, 1)..array_upper(keycols, 1) loop
> select quote_ident(attname) from pg_catalog.pg_attribute
>   where attrelid = tg_relid and attnum = keycols[i]::oid

Beware of attisdropped, which I've not fixed in the published URL
before (the tapoueh.org one).
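The quoted lookup with that caveat folded in (a sketch using the same names as above):

```sql
SELECT quote_ident(attname)
FROM pg_catalog.pg_attribute
WHERE attrelid = tg_relid
  AND attnum = keycols[i]::oid
  AND NOT attisdropped;   -- exclude dropped columns
```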

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PgEast 2011: Talks and trainings up

2011-02-22 Thread Joshua D. Drake
Hello,

Per the customary URL:

https://www.postgresqlconference.org/

JD
-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to extract a value from a record using attnum or attname?

2011-02-22 Thread Kevin Grittner
[moving to -hackers with BC to -general]
 
Dimitri Fontaine  wrote:
> "Kevin Grittner"  writes:
> 
>> PL/pgSQL seems tantalizingly close to being useful for developing
>> a generalized trigger function for notifying the client of
>> changes. I don't know whether I'm missing something or whether
>> we're missing a potentially useful feature here.  Does anyone see
>> how to fill in where the commented question is, or do I need to
>> write this function in C?
> 
> See those:
> 
> http://tapoueh.org/articles/blog/_Dynamic_Triggers_in_PLpgSQL.html
>
http://www.pgsql.cz/index.php/PL_toolbox_%28en%29#Record.27s_functions
> 
>>   for i in array_lower(keycols, 1)..array_upper(keycols, 1) loop
>> select quote_ident(attname) from pg_catalog.pg_attribute
>>   where attrelid = tg_relid and attnum = keycols[i]::oid
> 
> Beware of attisdropped, which I've not fixed in the published URL
> before (the tapoueh.org one).
 
Thanks.
 
In the absence of an earlier response, though, I went ahead and
wrote the attached, which has passed some initial programmer testing
and is scheduled to start business analyst testing tomorrow with the
application software for production deployment in a couple months.
We probably won't go back to PL/pgSQL for this now.
 
I'm assuming that while I have an AccessShareLock on the index
relation for the primary key, any attributes it tells me are used by
that relation will not have the attisdropped flag set?
 
What this trigger function does is to issue a NOTIFY to the channel
specified as a parameter to the function in CREATE TRIGGER (with
'tcn' as the default), and a payload consisting of the table name, a
code for the operation (Insert, Update, or Delete), and the primary
key values.  So, an update to a Party record for us might generate
this NOTIFY payload:
 
"Party",U,"countyNo"='71',"caseNo"='2011CF001234',"partyNo"='1'
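A hedged sketch of wiring this up, assuming the function is installed as proposed (trigger timing and level inferred from the code; the trigger name is an assumption):

```sql
-- Row-level AFTER trigger; 'tcn' would also be the default channel.
CREATE TRIGGER party_tcn
    AFTER INSERT OR UPDATE OR DELETE ON "Party"
    FOR EACH ROW EXECUTE PROCEDURE triggered_change_notification('tcn');

-- A client subscribes with:
LISTEN tcn;
```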
 
This is one of those things which our shop needs, but I was planning
to post it for the first 9.2 CF fest to see if anyone else was
interested.  It struck me while typing this post that for general
use the schema would probably need to be in there, but I'll worry
about that later, if anyone else *is* interested.  If anyone wants
it I can provide Java code to tear apart the NOTIFY payloads using
the Pattern and Matches classes.
 
I'll add to the first 9.2 CF referencing this post.
 
-Kevin

*** a/src/backend/utils/adt/trigfuncs.c
--- b/src/backend/utils/adt/trigfuncs.c
***
*** 13,21 
   */
  #include "postgres.h"
  
! #include "access/htup.h"
  #include "commands/trigger.h"
  #include "utils/builtins.h"
  
  
  /*
--- 13,25 
   */
  #include "postgres.h"
  
! #include "executor/spi.h"
! #include "catalog/indexing.h"
! #include "commands/async.h"
  #include "commands/trigger.h"
  #include "utils/builtins.h"
+ #include "utils/fmgroids.h"
+ #include "utils/tqual.h"
  
  
  /*
***
*** 93,95  suppress_redundant_updates_trigger(PG_FUNCTION_ARGS)
--- 97,261 
  
return PointerGetDatum(rettuple);
  }
+ 
+ 
+ /*
+  * Copy from s (for source) to r (for result), wrapping with q (quote)
+  * characters and doubling any quote characters found.
+  */
+ static char *
+ strcpy_quoted(char *r, const char *s, const char q)
+ {
+   *r++ = q;
+   while (*s)
+   {
+   if (*s == q)
+   *r++ = q;
+   *r++ = *s;
+   s++;
+   }
+   *r++ = q;
+   return r;
+ }
+ 
+ /*
+  * triggered_change_notification
+  *
+  * This trigger function will send a notification of data modification with
+  * primary key values.  The channel will be "tcn" unless the trigger is
+  * created with a parameter, in which case that parameter will be used.
+  */
+ Datum
+ triggered_change_notification(PG_FUNCTION_ARGS)
+ {
+   TriggerData *trigdata = (TriggerData *) fcinfo->context;
+   Trigger*trigger;
+   int nargs;
+   HeapTuple   trigtuple,
+   newtuple;
+   HeapTupleHeader trigheader,
+   newheader;
+   Relationrel;
+   TupleDesc   tupdesc;
+   RelationindexRelation;
+   ScanKeyData skey;
+   SysScanDesc scan;
+   HeapTuple   indexTuple;
+   char   *channel;
+   charoperation;
+   charpayload[200];
+   char   *p;
+   boolfoundPK;
+ 
+   /* make sure it's called as a trigger */
+   if (!CALLED_AS_TRIGGER(fcinfo))
+   ereport(ERROR,
+   
(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),
+   errmsg("triggered_change_notification: must be called as 
trigger")));
+ 
+   /* and that it's called after the change */
+   if (!TRIGGER_FIRED_AFTER(trigdata->tg_event))
+   ereport(ERROR,
+   
(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),
+e

Re: [GENERAL] How to extract a value from a record using attnum or attname?

2011-02-22 Thread Scott Ribe
I don't know if you can quite write the generalized notification function you 
want in plpgsql or not, but you can certainly write the meta-function that 
creates the function for any table ;-)

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] regexp match in plpgsql

2011-02-22 Thread Gauthier, Dave
V8.3.4 on linux

How does one do a regexp match/test in PL/pgSQL given a text variable containing 
a regexp and another text variable containing the string to test?  Example that 
shows what I'm trying to do...

declare
 rgxp text;
 str1 text;
 str2 text;

begin

  rgxp := '[a-zA-Z0-9]';
  str1 := 'ShouldBeOK99';
  str2 := 'Should_Fail_match77';

  if(str1 =~ E\rgxp\) then raise notice '% is a match',str1; else
raise notice '% is not a match',str1;end if;
  if(str2 =~ E\rgxp\) then raise notice '% is a match',str2; else
raise notice '% is not a match',str2;end if;

end;
$$ language plpgsql


I would expect to see...
"ShouldBeOK99 is a match"
"Should_Fail_match77 is not a match"

Thanks in Advance.


Re: [GENERAL] regexp match in plpgsql

2011-02-22 Thread Andrej
On 23 February 2011 11:55, Gauthier, Dave  wrote:

> I would expect to see...
>
> "ShouldBeOK99 is a match"
>
> "Should_Fail_match77 is not a match"
Why would you expect that?  Both strings match at least one
character from the character class.


Cheers,
Andrej

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] regexp match in plpgsql

2011-02-22 Thread David Johnston
You are trying to check the entire string to ensure that only the specified
character class matches at each position.  What you are doing instead is testing
whether at least one character in the string matches the class.

 

Since you want to check the entire string you should:

Anchor it using "^" and "$"

Repeat the test: [a-zA-Z0-9]+ ; "+" so we do not match the empty-string

 

Thus: ^[a-zA-Z0-9]+$

 

As for the actual question of how, I am unsure, but if the issue (and you
never did say what output or errors you are getting) was that you were getting
"true" for both tests, then this is the reason and the solution.

 

David J.

 

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Gauthier, Dave
Sent: Tuesday, February 22, 2011 5:56 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] regexp match in plpgsql

 

V8.3.4 on linux

 

How does one do a regexp match/test in PL/pgSQL given a text variable
containing a regexp and another text variable containing the string to test?
Example that shows what I'm trying to do...

 

declare 

 rgxp text;

 str1 text;

 str2 text;

 

begin

 

  rgxp := '[a-zA-Z0-9]';

  str1 := 'ShouldBeOK99';

  str2 := 'Should_Fail_match77';

 

  if(str1 =~ E\rgxp\) then raise notice '% is a match',str1; else
raise notice '% is not a match',str1;end if;

  if(str2 =~ E\rgxp\) then raise notice '% is a match',str2; else
raise notice '% is not a match',str2;end if;

 

end;

$$ language plpgsql 

 

 

I would expect to see... 

"ShouldBeOK99 is a match"

"Should_Fail_match77 is not a match"

 

Thanks in Advance.
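Putting the anchored pattern into a runnable form: in plpgsql the `~` operator takes the variables directly, with no `E''` escaping needed (a sketch, valid on 8.3):

```sql
CREATE OR REPLACE FUNCTION regexp_demo() RETURNS void AS $$
DECLARE
  rgxp text := '^[a-zA-Z0-9]+$';  -- anchored, one-or-more
  str1 text := 'ShouldBeOK99';
  str2 text := 'Should_Fail_match77';
BEGIN
  IF str1 ~ rgxp THEN RAISE NOTICE '% is a match', str1;
  ELSE RAISE NOTICE '% is not a match', str1; END IF;
  IF str2 ~ rgxp THEN RAISE NOTICE '% is a match', str2;
  ELSE RAISE NOTICE '% is not a match', str2; END IF;
END;
$$ LANGUAGE plpgsql;

-- SELECT regexp_demo();
-- NOTICE:  ShouldBeOK99 is a match
-- NOTICE:  Should_Fail_match77 is not a match
```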



[GENERAL] Finding Errors in .csv Input Data

2011-02-22 Thread Rich Shepard

  I'm sure many of you have solved this problem in the past and can offer
solutions that will work for me. The context is a 73-column postgres table
of data that was originally in an Access .mdb file. A colleague loaded the
file into Access and wrote a .csv file for me to use since we have nothing
Microsoft here. There are 110,752 rows in the file/table. After a lot of
cleaning with emacs and sed, the copy command accepted all but 80 rows of
data. Now I need to figure out why postgres reports them as having too many
columns.

  Starting to work with a single row, I first compared my cleaned row to the
raw .csv from the Access output. They match column-by-column. Then I copied
the schema to a text file and started comparing the .csv data
column-by-column. While this looks OK to me, postgres doesn't like it. For
example, I get this error message:

nevada=# \copy water_well from 'one.csv' with delimiter '|' null '' CSV;
ERROR:  value too long for type character(1)
CONTEXT:  COPY water_well, line 1, column gravel_packed: "106"

  Yet, the column comparison for gravel_packed and surrounding attributes
does not show this:

lot_no TEXT,|
block_no TEXT,  |
well_finish_date DATE,  11/15/1948|
date_cmplt_acc CHAR(1), D|
gravel_packed CHAR(1),  |
depth_seal INTEGER, |
depth_drilled INTEGER,  106|
depth_bedrock INTEGER,  |
aquifer_desc TEXT,  |

  Notice the NULL value for gravel_packed, while the "106" is for
depth_drilled, two columns later.

  I'm stymied and wonder if there's a tool I can use to fix these 80 rows so
the copy command will accept them.

Rich

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Finding Errors in .csv Input Data

2011-02-22 Thread Andy Colson

On 02/22/2011 07:10 PM, Rich Shepard wrote:

I'm sure many of you have solved this problem in the past and can offer
solutions that will work for me. The context is a 73-column postgres table
of data that was originally in an Access .mdb file. A colleague loaded the
file into Access and wrote a .csv file for me to use since we have nothing
Microsoft here. There are 110,752 rows in the file/table. After a lot of
cleaning with emacs and sed, the copy command accepted all but 80 rows of
data. Now I need to figure out why postgres reports them as having too many
columns.

Starting to work with a single row, I first compared my cleaned row to the
raw .csv from the Access output. They match column-by-column. Then I copied
the schema to a text file and started comparing the .csv data
column-by-column. While this looks OK to me, postgres doesn't like it. For
example, I get this error message:

nevada=# \copy water_well from 'one.csv' with delimiter '|' null '' CSV;
ERROR: value too long for type character(1)
CONTEXT: COPY water_well, line 1, column gravel_packed: "106"

Yet, the column comparison for gravel_packed and surrounding attributes
does not show this:

lot_no TEXT, |
block_no TEXT, |
well_finish_date DATE, 11/15/1948|
date_cmplt_acc CHAR(1), D|
gravel_packed CHAR(1), |
depth_seal INTEGER, |
depth_drilled INTEGER, 106|
depth_bedrock INTEGER, |
aquifer_desc TEXT, |

Notice the NULL value for gravel_packed, while the "106" is for
depth_drilled, two columns later.

I'm stymied and wonder if there's a tool I can use to fix these 80 rows so
the copy command will accept them.

Rich



Can we see a few lines of one.csv?  If we ignore the CONTEXT line, and just look at the 
error "too long for char(1)", it look like only two columns to think about 
date_cmplt_acc and gravel_packed.  Sure there is no extra spaces or tabs or weirdness in 
the file for those two columns?

You might also consider dumping out insert statements.  Might be a little 
slower, but simpler to debug.

Did Access dump out comma-separated with quoted fields?  I've done this on 
several occasions and never had to do any cleaning.  But... I also use perl to 
split the csv and fire off inserts/copys/updates/etc.

-Andy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Finding Errors in .csv Input Data

2011-02-22 Thread Andy Colson

On 02/22/2011 07:25 PM, Andy Colson wrote:

On 02/22/2011 07:10 PM, Rich Shepard wrote:

I'm sure many of you have solved this problem in the past and can offer
solutions that will work for me. The context is a 73-column postgres table
of data that was originally in an Access .mdb file. A colleague loaded the
file into Access and wrote a .csv file for me to use since we have nothing
Microsoft here. There are 110,752 rows in the file/table. After a lot of
cleaning with emacs and sed, the copy command accepted all but 80 rows of
data. Now I need to figure out why postgres reports them as having too many
columns.

Starting to work with a single row, I first compared my cleaned row to the
raw .csv from the Access output. They match column-by-column. Then I copied
the schema to a text file and started comparing the .csv data
column-by-column. While this looks OK to me, postgres doesn't like it. For
example, I get this error message:

nevada=# \copy water_well from 'one.csv' with delimiter '|' null '' CSV;
ERROR: value too long for type character(1)
CONTEXT: COPY water_well, line 1, column gravel_packed: "106"

Yet, the column comparison for gravel_packed and surrounding attributes
does not show this:

lot_no TEXT, |
block_no TEXT, |
well_finish_date DATE, 11/15/1948|
date_cmplt_acc CHAR(1), D|
gravel_packed CHAR(1), |
depth_seal INTEGER, |
depth_drilled INTEGER, 106|
depth_bedrock INTEGER, |
aquifer_desc TEXT, |

Notice the NULL value for gravel_packed, while the "106" is for
depth_drilled, two columns later.

I'm stymied and wonder if there's a tool I can use to fix these 80 rows so
the copy command will accept them.

Rich



Can we see a few lines of one.csv? If we ignore the CONTEXT line, and just look at the 
error "too long for char(1)", it look like only two columns to think about 
date_cmplt_acc and gravel_packed. Sure there is no extra spaces or tabs or weirdness in 
the file for those two columns?

You might also consider dumping out insert statements. Might be a little 
slower, but simpler to debug.

Did Access dump out comma-separated with quoted fields? I've done this on 
several occasions and never had to do any cleaning. But... I also use perl to 
split the csv and fire off inserts/copys/updates/etc.

-Andy



Hum... and another also: you could change your fields to all be 'text', then do 
the import.  Then clean up the data with a few:

select * from water_well where length(date_cmplt_acc) > 1


And I see that you have 73 columns, not just the few you posted.  Well, I 
suppose... one of these days, I'll start paying more attention :-)

-Andy
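A hedged sketch of that all-text staging approach, abridged to a few of the 73 columns (the staging-table name is an assumption; column names are from the thread):

```sql
-- Load everything as text first, so no row is rejected on type grounds:
CREATE TABLE water_well_stage (
  well_finish_date  text,
  date_cmplt_acc    text,
  gravel_packed     text,
  depth_drilled     text
  -- ... remaining columns, all text ...
);
-- \copy water_well_stage from 'one.csv' with delimiter '|' null '' CSV

-- Then hunt for values that will not fit the real CHAR(1) columns:
SELECT * FROM water_well_stage WHERE length(gravel_packed) > 1;
```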

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] multiple column to onec column

2011-02-22 Thread zab08
Thanks, this is an example of my application.


The answer is here: http://www.postgresql.org/docs/9.0/static/xaggr.html


sql command:


CREATE AGGREGATE array_accum (anyelement)
(
sfunc = array_append,
stype = anyarray,
initcond = '{}'
);

 

SELECT r.role_name, array_accum(permission) from roles r, roles_permissions rp 
group by r.role_name;




and the result:

role_name |  array_accum  

---+---

 role2 | {permission1,permission2,permission1}

 role1 | {permission1,permission2,permission1}



Re: [GENERAL] Finding Errors in .csv Input Data

2011-02-22 Thread Adrian Klaver
On Tuesday, February 22, 2011 5:10:34 pm Rich Shepard wrote:
>I'm sure many of you have solved this problem in the past and can offer
> solutions that will work for me. The context is a 73-column postgres table
> of data that was originally in an Access .mdb file. A colleague loaded the
> file into Access and wrote a .csv file for me to use since we have nothing
> Microsoft here. There are 110,752 rows in the file/table. After a lot of
> cleaning with emacs and sed, the copy command accepted all but 80 rows of
> data. Now I need to figure out why postgres reports them as having too many
> columns.
> 
>Starting to work with a single row, I first compared my cleaned row to
> the raw .csv from the Access output. They match column-by-column. Then I
> copied the schema to a text file and started comparing the .csv data
> column-by-column. While this looks OK to me, postgres doesn't like it. For
> example, I get this error message:
> 
> nevada=# \copy water_well from 'one.csv' with delimiter '|' null '' CSV;
> ERROR:  value too long for type character(1)
> CONTEXT:  COPY water_well, line 1, column gravel_packed: "106"
> 
>Yet, the column comparison for gravel_packed and surrounding attributes
> does not show this:
> 
> lot_no TEXT,|
> block_no TEXT,  |
> well_finish_date DATE,  11/15/1948|
> date_cmplt_acc CHAR(1), D|
> gravel_packed CHAR(1),  |
> depth_seal INTEGER, |
> depth_drilled INTEGER,  106|
> depth_bedrock INTEGER,  |
> aquifer_desc TEXT,  |
> 
>Notice the NULL value for gravel_packed, while the "106" is for
> depth_drilled, two columns later.
> 
>I'm stymied and wonder if there's a tool I can use to fix these 80 rows
> so the copy command will accept them.
> 
> Rich

We are going to need to see at least a sample of the actual data in one.csv 
that 
is causing the problem. You have an off by two error as you suggest, but that 
could actually have happened earlier in the row. For instance the 
well_finish_date would insert into lot_no because lot_no is TEXT and the date 
value at this point is just text. Same with date_cmplt_acc and block_no.
-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Finding Errors in .csv Input Data

2011-02-22 Thread Rich Shepard

On Tue, 22 Feb 2011, Adrian Klaver wrote:


We are going to need to see at least a sample of the actual data in
one.csv that is causing the problem. You have an off by two error as you
suggest, but that could actually have happened earlier in the row. For
instance the well_finish_date would insert into lot_no because lot_no is
TEXT and the date value at this point is just text. Same with
date_cmplt_acc and block_no.


Adrian/Andy,

  The data came out of Access as comma-and-quote csv. I massaged it in emacs
and sed to change the delimiter to a pipe rather than a comma and removed
the double quotes.

  I cannot assume that each of the 80 problem rows suffers from the defect in
the same place, so if there's a generic process I can apply it row-by-row.
After all, 80 problem rows out of 110,752 is not bad.

  Here are the schema and the first row, in one.csv. Because I don't know if
attachments are stripped off before the message is allowed to be distributed
to the list subscribers, I'll just include both here.

DDL:

  The column names are original (except for 'ref', which seems to be a
reserved word), but I modified the data types.

CREATE TABLE water_well (
  sequence_no TEXT PRIMARY KEY,
  well_log TEXT,
  app VARCHAR(20),
  notice_of_intent VARCHAR(6),
  waiver_no VARCHAR(30),
  date_log_rcvd DATE,
  date_log_rcvd_acc CHAR(1),
  site_type CHAR(1),
  work_type CHAR(1),
  work_type_rmks TEXT,
  proposed_use CHAR(1),
  drilling_method CHAR(1),
  sc TEXT,
  ha TEXT,
  twn VARCHAR(3),
  legal_twn VARCHAR(3),
  rng VARCHAR(3),
  legal_rng VARCHAR(3),
  sec TEXT,
  sec_quarters TEXT,
  legal_quarters TEXT,
  quarters_seq TEXT,
  ref TEXT,
  latitude NUMERIC(9,6),
  longitude NUMERIC(9,6),
  lat_long_src VARCHAR(5),
  lat_long_acc CHAR(1),
  owner_current TEXT,
  owner_address TEXT,
  owner_no TEXT,
  parcel_no TEXT,
  subdivision_name TEXT,
  lot_no TEXT,
  block_no TEXT,
  well_finish_date DATE,
  date_cmplt_acc CHAR(1),
  gravel_packed CHAR(1),
  depth_seal INTEGER,
  depth_drilled INTEGER,
  depth_bedrock INTEGER,
  aquifer_desc TEXT,
  depth_cased INTEGER,
  csng_diameter FLOAT,
  csng_reductions INTEGER,
  top_perf INTEGER,
  bottom_perf INTEGER,
  perf_intervals INTEGER,
  static_wl FLOAT,
  temperature FLOAT,
  yield FLOAT,
  drawdown FLOAT,
  hours_pumped FLOAT,
  test_method CHAR(1),
  qual_const_data CHAR(1),
  qual_lith_data CHAR(1),
  remarks TEXT,
  remarks_additional TEXT,
  contractor_lic_no VARCHAR(8),
  contractor_name TEXT,
  contractor_address TEXT,
  contractor_drlr_no VARCHAR(6),
  driller_lic_no VARCHAR(6),
  source_agency TEXT,
  user_id TEXT,
  date_entry DATE,
  update_user_id VARCHAR(16),
  date_update DATE,
  edit_status VARCHAR(16),
  well_start_date DATE,
  gravel_pack_top INTEGER,
  gravel_pack_bot INTEGER,
  utm_x NUMERIC(13,6),
  utm_y NUMERIC(13,6)
);

  Here's one.csv:

68670|724||0||11/27/1948|D|N|N||H|C|32031|087|N18|18N|E20|20E|07MD|39.44|119.77|NV003|M|KAIPER,
 R L|||SIERRA 
MANOR||11/15/1948|D|||106|||106|6.62|0|60|102|1|12.00||30.00|||B|G|G|AIR 
COMPRESSOR TESTED 30 GPM ALSO||3|MEL MEYER|RT 1 BOX 10 
RENO|||3|NV003|JSWINGHOLM|1/16/2003|||F|11/11/1948|||261013.36|4369139.23

  I hope you're able to see what I keep missing as the source of the
problem.

Rich



Re: [GENERAL] Mysql to Postgresql

2011-02-22 Thread Jaime Crespo Rincón
2011/2/22 Adarsh Sharma :
> Dear all,
>
> Today I need to back up a mysql database and restore in Postgresql database
> but I don't know how to achieve this accurately.

Have a look at the "mysqldump --compatible=postgresql" option.


Anyway, most of the time you will need a more manual migration, with
human intervention (custom scripts), migrating the data through
something like CSV (SELECT ... INTO OUTFILE).


-- 
Jaime Crespo
MySQL & Java Instructor
Software Developer
Warp Networks




Re: [GENERAL] Mysql to Postgresql

2011-02-22 Thread John R Pierce

On 02/22/11 1:25 AM, Jaime Crespo Rincón wrote:

2011/2/22 Adarsh Sharma:

Dear all,

Today I need to back up a mysql database and restore in Postgresql database
but I don't know how to achieve this accurately.

Have a look at the "mysqldump --compatible=postgresql" option.


Anyway, most of the times you will need a more manual migration, with
human intervention (custom scripts) and migrating the data through
something like CSV (SELECT... INTO OUTFILE).



if your tables aren't too huge, one method is via a perl script that
uses DBI to connect to both mysql and pgsql, and fetches a table from
one and loads it into the other.   the catch-22 is, it's fairly hard to
do this efficiently if the tables won't fit in memory.
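A minimal sketch of that batched fetch-and-load pattern, using Python's
sqlite3 module to stand in for both ends (with real servers you would
substitute the MySQL and PostgreSQL drivers, whose parameter placeholder
syntax differs from '?'; the table and column names here are illustrative):

```python
import sqlite3

def copy_table(src_conn, dst_conn, table, batch_size=1000):
    """Stream rows from src_conn's table into dst_conn in batches,
    so the whole table never has to fit in memory at once."""
    cur = src_conn.execute(f"SELECT * FROM {table}")
    ncols = len(cur.description)
    insert_sql = f"INSERT INTO {table} VALUES ({','.join('?' * ncols)})"
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        dst_conn.executemany(insert_sql, rows)
    dst_conn.commit()

# Demo with two in-memory databases standing in for the two servers.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (id INTEGER, name TEXT)")
src.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, f"row{i}") for i in range(2500)])
dst.execute("CREATE TABLE t (id INTEGER, name TEXT)")
copy_table(src, dst, "t")
print(dst.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2500
```

The key point is fetchmany(): only batch_size rows are held in memory at a
time, regardless of the table's total size.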



there are also various ETL (Extract, Transform, Load) tools that do
this sort of thing with varying levels of performance and automation,
some free, some commercial.









Re: [GENERAL] Mysql to Postgresql

2011-02-22 Thread mezgani ali
Please take a look at this article:
http://securfox.wordpress.com/2010/12/12/converting-mysql-to-postgresql/

I also think there are tools that can do this easily.
Regards,

On Tue, Feb 22, 2011 at 9:21 AM, Adarsh Sharma wrote:

> Dear all,
>
> Today I need to back up a mysql database and restore in Postgresql database
> but I don't know how to achieve this accurately.
>
> Can anyone kindly describe me the way to do this.
>
>
> Thanks & best Regards,
>
> Adarsh Sharma
>
>



-- 
Ali MEZGANI
Network Engineering/Security
http://securfox.wordpress.com/


Re: [GENERAL] Finding Errors in .csv Input Data

2011-02-22 Thread David Johnston
Rich,

The data and table structure provided do not seem to correlate.

Regardless, if you changed the delimiter to "|" from "," it is possible that
you converted an embedded "," in one of the textual fields into a "|" when
you should not have.

For Instance:

Value1,value2,"value, with comma",value4

Thus became

Value1|value2|value|with comma|value4

Giving you additional fields that should not be present.
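That kind of corruption is exactly what a quote-aware parser avoids. Here is
a sketch of the comma-to-pipe conversion done with Python's csv module
instead of emacs/sed (the function name is illustrative); embedded commas
inside quoted fields survive intact:

```python
import csv
import io

def commas_to_pipes(text):
    """Re-delimit quoted CSV with '|', leaving commas that are
    embedded inside quoted fields untouched."""
    out = io.StringIO()
    writer = csv.writer(out, delimiter="|", lineterminator="\n")
    for row in csv.reader(io.StringIO(text)):
        writer.writerow(row)
    return out.getvalue()

src = 'Value1,value2,"value, with comma",value4\n'
print(commas_to_pipes(src))  # Value1|value2|value, with comma|value4
```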

I suggest opening up a testing file (one with the 80 malformed records and
10 to 20 good/control records) in an Excel or equivalent spreadsheet and
import/text-to-columns using the "|" delimiter.  You will be able to quickly
see rows with extra columns and specifically where those extras are
originating.  Then you can decide on how to fix the problem.  I would
suggest manually changing each incorrect "|" into a "," as a form of
self-punishment for improper data conversion - but whatever works for you.
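A quick scripted alternative to the spreadsheet check is to count the
pipe-delimited fields in each row and flag the ones that are off (a sketch;
the sample data and expected count of 3 are illustrative, and for the real
file the expected count would be the table's column count):

```python
import csv
import io

def bad_rows(text, expected, delimiter="|"):
    """Return (line_number, field_count) for each row whose field
    count differs from the table's expected column count."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    return [(n, len(row)) for n, row in enumerate(reader, start=1)
            if len(row) != expected]

sample = "a|b|c\na|b|c|oops|x\na|b|c\n"
print(bad_rows(sample, expected=3))  # [(2, 5)]
```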

On the off chance you do not see extra columns in the spreadsheet, you can
also scan down the columns and see if you recognize any pattern of
differences between the 80 failing records and the 10 successful records
that might point you further in the correct direction.

If THAT fails, you might want to see if anyone will take the testing file
and the "CREATE TABLE" statement and try to independently diagnose the cause
of the problem. That, or redo the Access export, skip your conversion
routine, and directly import the CSV into Postgres.

David J.

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Rich Shepard
Sent: Tuesday, February 22, 2011 9:59 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Finding Errors in .csv Input Data

On Tue, 22 Feb 2011, Adrian Klaver wrote:

> We are going to need to see at least a sample of the actual data in 
> one.csv that is causing the problem. You have an off by two error as 
> you suggest, but that could actually have happened earlier in the row. 
> For instance the well_finish_date would insert into lot_no because 
> lot_no is TEXT and the date value at this point is just text. Same 
> with date_cmplt_acc and block_no.

Adrian/Andy,

   The data came out of Access as comma-and-quote csv. I massaged it in
emacs and sed to change the delimiter to a pipe rather than a comma and
removed the double quotes.
Rich





[GENERAL] Backup Fails

2011-02-22 Thread Adarsh Sharma

Dear all,

I perform backups of the databases on our database servers, but today, while
backing up one of the databases, the following error occurred:


[root@s8-mysd-2 8.4SS]# bin/pg_dump -Upostgres -i -o pdc_uima_olap | 
gzip -c > /hdd4-1/pdc_uima_olap108feb18.sql.gz

Password:
pg_dump: [archiver (db)] connection to database "pdc_uima_olap" failed: 
FATAL:  could not write init file

[root@s8-mysd-2 8.4SS]#


I don't know the actual reason for this, as the remaining databases are
backed up correctly.


Please suggest how I can overcome it.


Thanks & best  Regards,
Adarsh Sharma



Re: [GENERAL] Backup Fails

2011-02-22 Thread Tom Lane
Adarsh Sharma  writes:
> Dear all,
> I performed backup of databases in Database Servers but Today when I'm 
> going to backup of one of databases , the following error occurs :

> [root@s8-mysd-2 8.4SS]# bin/pg_dump -Upostgres -i -o pdc_uima_olap | 
> gzip -c > /hdd4-1/pdc_uima_olap108feb18.sql.gz
> Password:
> pg_dump: [archiver (db)] connection to database "pdc_uima_olap" failed: 
> FATAL:  could not write init file
> [root@s8-mysd-2 8.4SS]#

Server out of disk space or the data directory somehow became
write-protected, I'd guess.

regards, tom lane



[GENERAL] FATAL: no pg_hba.conf entry for host “::1***"

2011-02-22 Thread itishree sukla
Hi  All,

I am using a System DSN that connects to PostgreSQL to fetch data from the
database and put it into an xls sheet. It works fine on most machines, but on
one machine I am getting this error: FATAL: no pg_hba.conf
entry for host “::1**”, user “postgres”, database “myDatabase", SSL off.
Any idea why this is so? Please suggest.

Thanks in  advance ...


Regards,
Itishree


[GENERAL] Re: [GENERAL] FATAL: no pg_hba.conf entry for host “::1***"

2011-02-22 Thread John R Pierce

On 02/22/11 10:38 PM, itishree sukla wrote:

Hi  All,
I am using System DSN,  that connects to postgreSQL, to fetch data 
from the database, and put into xls sheet .Its working fine with most 
of the machines and connects fine but on 1 machine i am getting  this 
FATAL: no pg_hba.conf entry for host “::1**”, user “postgres”, 
database “myDatabase", SSL off error. Any idea why it is  so, please 
suggest.

Thanks in  advance ...



::1 is the IPv6 localhost.  sounds like this system has ipv6 configured, 
so you probably want a line in pg_hba.conf like...



host    all    all    ::1/128    md5

(replace the md5 with whatever you normally use on the localhost line that
looks like


host    all    all    127.0.0.0/8    md5

the other common value is 'trust'.   md5 requires a password, trust doesn't.)








[GENERAL] Partitions and indexes

2011-02-22 Thread Amitabh Kant
Hello

If I have partitioned a table based on a foreign key, in a manner where every
child table will only have data for a single value of the foreign key, do I
need to create an index on the foreign key in the parent and/or child tables?
I am using version 8.4.

With regards

Amitabh