Re: [GENERAL] checkpoint and recovering process use too much memory

2017-11-05 Thread tao tony
Thank you,  Justin Pryzby.
I reset shared_buffers to 16GB, and the memory usage of the checkpointer and 
recovery processes stayed at about 16GB.

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
192956 postgres  20   0 18.5g  16g  16g S  1.3 25.9  19:44.69 postgres: 
startup process   recovering 000408A30035
192960 postgres  20   0 18.5g  16g  16g S  0.7 25.8  11:13.79 postgres: 
checkpointer process
192951 postgres  20   0 18.5g 1.9g 1.9g S  0.0  3.1   0:01.75 
/usr/pgsql-9.6/bin/postmaster -D /data/pgdata

Thank you again for your help.
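For reference, how much of shared_buffers is actually populated (which is what the RES column above largely reflects) can be cross-checked from SQL, assuming the contrib pg_buffercache extension is available:

```sql
-- Requires: CREATE EXTENSION pg_buffercache; (superuser by default)
SELECT count(*) FILTER (WHERE relfilenode IS NOT NULL) AS buffers_used,
       count(*) FILTER (WHERE isdirty)                 AS buffers_dirty,
       pg_size_pretty(count(*) FILTER (WHERE relfilenode IS NOT NULL)
                      * current_setting('block_size')::bigint) AS bytes_used
FROM pg_buffercache;
```

With shared_buffers = 16GB, bytes_used approaching 16GB simply means the buffer pool is fully warmed, not that memory is leaking.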

On 11/03/2017 10:21 AM, Justin Pryzby wrote:
> On Fri, Nov 03, 2017 at 01:43:32AM +, tao tony wrote:
>> I have an asynchronous streaming replication HA cluster. Each node has 64GB of
>> memory. PostgreSQL is 9.6.2, deployed on CentOS 6.
>>
>> Last month the database was killed by the OS kernel's OOM killer; the
>> checkpoint process was the one killed.
> If you still have logs, was it killed during a large query?  Perhaps one using
> a hash aggregate?
>
>> I noticed the checkpoint process occupied more than 20GB of memory, and it was
>> growing every day. On the hot-standby node, the recovery process occupied as
>> much memory as the checkpoint process.
> "Resident" RAM of a postgres subprocess is often just the fraction of
> shared_buffers it has read or written.  The checkpointer must necessarily read
> all dirty pages from s-b and write them out to disk (by way of the page cache),
> so that's why its RSS is nearly 32GB.  And the recovery process is continuously
> writing into s-b.
>
>> Now on the standby node, the checkpoint and recovery processes use more than
>> 50GB of memory, as shown below, and I worry that someday the cluster will be
>> killed by the OS again.
>>
>> PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>> 167158 postgres  20   0 34.9g  25g  25g S  0.0 40.4  46:36.86 postgres: 
>> startup process   recovering 00040855004B
>> 167162 postgres  20   0 34.9g  25g  25g S  0.0 40.2  17:58.38 postgres: 
>> checkpointer process
>>
>> shared_buffers = 32GB
> Also, what is work_mem ?
>
> Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] checkpoint and recovering process use too much memory

2017-11-02 Thread tao tony
hi dears,


I have an asynchronous streaming replication HA cluster. Each node has 64GB of 
memory. PostgreSQL is 9.6.2, deployed on CentOS 6.


Last month the database was killed by the OS kernel's OOM killer; the checkpoint 
process was the one killed.


I noticed the checkpoint process occupied more than 20GB of memory, and it was 
growing every day. On the hot-standby node, the recovery process occupied as much 
memory as the checkpoint process.


I tuned the checkpoint parameters, but it didn't help.

Now on the standby node, the checkpoint and recovery processes use more than 50GB 
of memory, as shown below, and I worry that someday the cluster will be killed by 
the OS again.

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
167158 postgres  20   0 34.9g  25g  25g S  0.0 40.4  46:36.86 postgres: startup 
process   recovering 00040855004B
167162 postgres  20   0 34.9g  25g  25g S  0.0 40.2  17:58.38 postgres: 
checkpointer process


shared_buffers = 32GB

and this is my checkpoint configuration:

checkpoint_timeout = 5min   # range 30s-1d
max_wal_size = 3GB
min_wal_size = 1GB
checkpoint_completion_target = 0.6  # checkpoint target duration, 0.0 - 1.0
checkpoint_flush_after = 1MB# 0 disables,
# default is 256kB on linux, 0 otherwise
checkpoint_warning = 300s   # 0 disables


In the log file, a checkpoint starts every 5 minutes.


2017-11-02 16:55:14.063 CST,,,758087,,59e5b9c2.b9147,2498,,2017-10-17 16:05:22 
CST,,0,LOG,0,"checkpoint complete: wrote 29416 buffers (0.7%); 0 
transaction log file(s) added, 0 removed, 0 recycled; write=269.904 s, 
sync=0.017 s, total=269.930 s; sync files=86, longest=0.006 s, average=0.000 s; 
distance=235633 kB, estimate=816034 kB""LogCheckpointEnd, 
xlog.c:8121",""
2017-11-02 16:55:44.093 CST,,,758087,,59e5b9c2.b9147,2499,,2017-10-17 16:05:22 
CST,,0,LOG,0,"checkpoint starting: time""LogCheckpointStart, 
xlog.c:8039",""
2017-11-02 16:59:13.824 CST,,,758087,,59e5b9c2.b9147,2500,,2017-10-17 16:05:22 
CST,,0,LOG,0,"checkpoint complete: wrote 2091 buffers (0.0%); 0 transaction 
log file(s) added, 0 removed, 29 recycled; write=209.585 s, sync=0.072 s, 
total=209.731 s; sync files=46, longest=0.024 s, average=0.001 s; 
distance=11700 kB, estimate=735601 kB""LogCheckpointEnd, xlog.c:8121",""
2017-11-02 17:00:44.903 CST,,,758087,,59e5b9c2.b9147,2501,,2017-10-17 16:05:22 
CST,,0,LOG,0,"checkpoint starting: time""LogCheckpointStart, 
xlog.c:8039",""
2017-11-02 17:05:14.249 CST,,,758087,,59e5b9c2.b9147,2502,,2017-10-17 16:05:22 
CST,,0,LOG,0,"checkpoint complete: wrote 93078 buffers (2.2%); 0 
transaction log file(s) added, 23 removed, 44 recycled; write=269.104 s, 
sync=0.023 s, total=269.346 s; sync files=109, longest=0.010 s, average=0.000 
s; distance=480607 kB, estimate=710102 kB""LogCheckpointEnd, 
xlog.c:8121",""
2017-11-02 17:05:44.269 CST,,,758087,,59e5b9c2.b9147,2503,,2017-10-17 16:05:22 
CST,,0,LOG,0,"checkpoint starting: time""LogCheckpointStart, 
xlog.c:8039",""
2017-11-02 17:10:14.215 CST,,,758087,,59e5b9c2.b9147,2504,,2017-10-17 16:05:22 
CST,,0,LOG,0,"checkpoint complete: wrote 177599 buffers (4.2%); 0 
transaction log file(s) added, 0 removed, 53 recycled; write=269.728 s, 
sync=0.010 s, total=269.945 s; sync files=88, longest=0.004 s, average=0.000 s; 
distance=1140361 kB, estimate=1140361 kB""LogCheckpointEnd, 
xlog.c:8121",""
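The same checkpoint activity can be watched from SQL instead of the log — a sketch using the pg_stat_bgwriter view (counters are cumulative since stats_reset):

```sql
-- checkpoints_timed: triggered by checkpoint_timeout (every 5min here);
-- checkpoints_req: triggered early, e.g. by reaching max_wal_size.
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       stats_reset
FROM pg_stat_bgwriter;
```

A high checkpoints_req relative to checkpoints_timed would suggest max_wal_size (3GB here) is too small for the write volume shown in the log (distance over 1GB per cycle).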




Please kindly let me know how I can reduce the memory usage of these two 
processes. Many thanks.


[GENERAL] slow query on multiple table join

2017-05-08 Thread tao tony
hi guys,

I hit a query performance issue in PostgreSQL 9.6.2 with multiple tables 
joined.

There were two slow queries, and the cause was the same: the optimizer 
generated a bad plan using a nested loop.

Attached are the queries and their plans. All tables are small, and indexes 
exist only on the primary keys.

In query 1, I noticed the plan estimates that the nested loop anti join 
returns 1 row, as below; it is the result of (f join p) join pt:

  ->  Nested Loop  (cost=1.95..14838.66 rows=1 width=163)
Join Filter: ((f.shop)::text = (s.uuid)::text)
->  Nested Loop Anti Join  (cost=1.95..14743.60 rows=1 
width=111)
  ->  Hash Join  (cost=1.53..12067.46 rows=4751 
width=115)
Hash Cond: ((p.shop)::text = (f.shop)::text)
->  Seq Scan on shopsku p 
(cost=0.00..11483.96 rows=106892 width=106)
  Filter: ((state)::text = 'normal'::text)
->  Hash  (cost=1.29..1.29 rows=19 width=9)
  ->  Seq Scan on shopfranchise f  
(cost=0.00..1.29 rows=19 width=9)
Filter: (enabled = 1)
  ->  Index Only Scan using platformsku_pkey on 
platformsku pt  (cost=0.42..0.55 rows=1 width=36)
Index Cond: (uuid = (p.platformsku)::text)
->  Seq Scan on shop s  (cost=0.00..75.58 rows=1558 
width=61)
  ->  Hash  (cost=2823.76..2823.76 rows=43376 width=46)
->  Seq Scan on merchantsku m (cost=0.00..2823.76 
rows=43376 width=46)

But EXPLAIN ANALYZE shows it actually returns 57458 rows, so the higher-level 
nested loop processes 57458*1558 rows; this is why the query runs for more 
than 40 seconds.

  ->  Nested Loop  (cost=1.95..14838.66 rows=1 width=163) 
(actual time=0.817..43150.583 rows=57458 loops=1)
Join Filter: ((f.shop)::text = (s.uuid)::text)
Rows Removed by Join Filter: 89462106
->  Nested Loop Anti Join  (cost=1.95..14743.60 rows=1 
width=111) (actual time=0.060..408.092 rows=57458 loops=1)
  ->  Hash Join  (cost=1.53..12067.46 rows=4751 
width=115) (actual time=0.046..174.523 rows=57485 loops=1)
Hash Cond: ((p.shop)::text = (f.shop)::text)
->  Seq Scan on shopsku p 
(cost=0.00..11483.96 rows=106892 width=106) (actual time=0.008..107.416 
rows=106580 loops=1)
  Filter: ((state)::text = 'normal'::text)
  Rows Removed by Filter: 429
->  Hash  (cost=1.29..1.29 rows=19 width=9) 
(actual time=0.026..0.026 rows=20 loops=1)
  Buckets: 1024  Batches: 1  Memory 
Usage: 9kB
  ->  Seq Scan on shopfranchise f  
(cost=0.00..1.29 rows=19 width=9) (actual time=0.006..0.017 rows=20 loops=1)
Filter: (enabled = 1)
Rows Removed by Filter: 4
  ->  Index Only Scan using platformsku_pkey on 
platformsku pt  (cost=0.42..0.55 rows=1 width=36) (actual 
time=0.003..0.003 rows=0 loops=57485)
Index Cond: (uuid = (p.platformsku)::text)
Heap Fetches: 0
->  Seq Scan on shop s  (cost=0.00..75.58 rows=1558 
width=61) (actual time=0.001..0.332 rows=1558 loops=57458)

If I disable nested loops, it takes only 519 ms.

In query 2, the nested loop join also processes more than 200 million rows and 
runs for almost 2 minutes. After disabling nested loops it uses a hash join and 
finishes in 1.5 seconds.

 purchase join (shopfranchise f_4 join inventory k) 
join gdname
  ->  Nested Loop Left Join (cost=3972.43..4192.40 
rows=1 width=1074) (actual time=268.989..106066.932 rows=45615 loops=1)
Join Filter: (((k.shop)::text = 
(purchase.shop)::text) AND ((k.shopsku)::text = (purchase.shopsku)::text))
Rows Removed by Join Filter: 208410367

 (shopfranchise f_4 join inventory k) join 
gdname
->  Hash Join  (cost=3972.43..4165.52 rows=1 
width=1112) (actual time=247.088..1754.448 rows=45615 loops=1)
  Hash Cond: (((gdname.shop)::text = 
(k.shop)::text) AND ((gdname.shopsku)::text = (k.shopsku)::text))
  ->  CTE Scan on gdname (cost=0.00..140.42 
rows=7021 width=1246) (actual time=156.543..1563.121 rows=51153 loops=1)
  ->  Hash  (cost=3925.81..3925.81 rows=3108 
width=63) (actual time=90.520..90.520 rows=45622 loops=1)
Buckets: 65536 (originally 4096)  
Batches: 1 (originally 1)  Memory Usage: 4745kB
 shopfranchise f_4 join inventory k
 

Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-05 Thread Tony Finch
Bill Moran  wrote:
>
> There's a well-written article I saw recently that directly addresses
> your question ... I'm too lazy to find it, but google will probably
> turn it up for you.

This? http://renesd.blogspot.co.uk/2017/02/is-postgresql-good-enough.html

Tony.
-- 
f.anthony.n.finch    http://dotat.at/  -  I xn--zr8h punycode
Irish Sea: East or northeast 5 or 6. Slight or moderate. Fair. Good.




Re: [GENERAL] Constraint exclusion-like behavior for UNION ALL views

2017-03-29 Thread Tony Cebzanov

On 3/29/17 3:39 PM, David G. Johnston wrote:


> That said, I'm not sure what using materialized views instead of normal
> tables buys you in the first place.  I could see possibly using a
> materialized view as the current month's table but the historical tables
> usually don't require refreshing.


My example was simplified for brevity.  The actual materialized views in 
question do a significant amount of work, pulling from several other 
tables, grouping/aggregating, etc.  It would be possible to have that 
same query populate a normal table instead of being stored as a 
materialized view, but there's a reason materialized views were created 
in the first place -- to avoid the overhead of manually creating 
triggers and so forth -- and I was hoping to find a way to retain those 
advantages while also being able to partition the views by date.


My thought was that since check constraints already exist for regular 
tables, and since materialized views are implemented as tables (or 
table-like substances) it seems reasonable that materialized views 
should support check constraints and the query optimization that comes 
with them.


-Tony





[GENERAL] Constraint exclusion-like behavior for UNION ALL views

2017-03-29 Thread Tony Cebzanov
PostgreSQL's materialized view functionality is very useful, but one 
problem is that when the view gets large, there is no way to refresh 
part of it.  I know that table partitioning is coming in Postgres 10, 
but I haven't heard anything about ideas for partitioning / sharding of 
materialized views, so I'm trying to work out a strategy for doing it 
manually until that's supported in the future.


Because there's no table inheritance for materialized views, the only 
way I can think of to be able to get this behavior is to create 
materialized views manually for each partition (e.g. each month) and 
then use a UNION ALL view to act as the "parent" table.  It looks 
something like this:


CREATE MATERIALIZED VIEW prices_2016_04 AS (
SELECT sale_date, price FROM tbl
WHERE sale_date >= '2016/04/01' AND sale_date < '2016/05/01'
);

CREATE MATERIALIZED VIEW prices_2016_05 AS (
SELECT sale_date, price FROM tbl
WHERE sale_date >= '2016/05/01' AND sale_date < '2016/06/01'
);

CREATE MATERIALIZED VIEW prices_2016_06 AS (
SELECT sale_date, price FROM tbl
WHERE sale_date >= '2016/06/01' AND sale_date < '2016/07/01'
);

CREATE VIEW prices_2016 AS (
SELECT * FROM prices_2016_04
UNION ALL
SELECT * FROM prices_2016_05
UNION ALL
SELECT * FROM prices_2016_06
);

The problem comes when these views get large.  Unlike when the 
underlying objects are tables, I see no way to create CHECK constraints 
on these materialized views, which means that queries like:


 SELECT * FROM prices_2016 WHERE sale_date = '2016/04/15'

end up searching through all of the underlying materialized views, even 
though the date in the query will only ever match one of them.


As a workaround, I have added logic to my queries to only search tables 
for months where the time filters could match, but it would be ideal if 
we could create CHECK constraints on materialized views, or if the query 
planner could analyze the underlying query that creates the materialized 
view to infer that certain rows will never match based on the conditions 
that created the view.
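In the meantime, one workaround can be sketched (assuming sale_date is a date column): repeat each partition's range as a WHERE clause inside the UNION ALL view. Constraint exclusion, at its default setting of 'partition', also examines UNION ALL subqueries, so the planner should be able to prove the non-matching branches empty:

```sql
CREATE VIEW prices_2016 AS (
    SELECT * FROM prices_2016_04
    WHERE sale_date >= '2016/04/01' AND sale_date < '2016/05/01'
    UNION ALL
    SELECT * FROM prices_2016_05
    WHERE sale_date >= '2016/05/01' AND sale_date < '2016/06/01'
    UNION ALL
    SELECT * FROM prices_2016_06
    WHERE sale_date >= '2016/06/01' AND sale_date < '2016/07/01'
);

-- EXPLAIN SELECT * FROM prices_2016 WHERE sale_date = '2016/04/15'
-- should then scan only the prices_2016_04 branch.
```

The ranges duplicate what each materialized view's defining query already says, but since the planner cannot see through to that query, restating them in the view is what makes the pruning possible.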


Are either of these things that could be supported in the future?  If 
not, is there a better way to get this kind of behavior so that 
materialized views are more useful when the amount of data increases and 
it's not feasible to update them in their entirety?


Thanks.
-Tony




[GENERAL] range partitioning of materialized views

2017-03-18 Thread Tony Cebzanov
I find PostgreSQL's materialized view functionality very useful, but one 
problem is that when the view gets large, there is no way to refresh 
part of it.  I know that table partitioning is coming in Postgres 10, 
but I haven't heard anything about ideas for partitioning / sharding of 
materialized views, so I'm trying to work out a strategy for doing it 
manually until that's supported in the future.


Because there's no table inheritance for materialized views, the only 
way I can think of to do it is to create materialized views manually for 
each partition (e.g. each month) and then use a UNION ALL view to act as 
the "parent" table.  It looks something like this:


CREATE MATERIALIZED VIEW prices_2016_04 AS (
   SELECT sale_date, price FROM tbl
   WHERE sale_date >= '2016/04/01' AND sale_date < '2016/05/01'
);

CREATE MATERIALIZED VIEW test_2016_05 AS (
   SELECT sale_date, price FROM tbl
   WHERE sale_date >= '2016/05/01' AND sale_date < '2016/06/01'
);

CREATE MATERIALIZED VIEW test_2016_06 AS (
   SELECT sale_date, price FROM tbl
   WHERE sale_date >= '2016/06/01' AND sale_date < '2016/07/01'
);

CREATE VIEW test_2016 AS (
   SELECT * FROM prices_2016_04
   UNION ALL
   SELECT * FROM test_2016_05
   UNION ALL
   SELECT * FROM test_2016_06
);

The problem comes when these views get large.  Unlike when the 
underlying objects are tables, there is no way for me to create CHECK 
constraints on these materialized views, which means when I do something 
like:


SELECT * FROM test_2016 WHERE sale_date = '2016/04/15'

The query planner is forced to look at all of the underlying 
materialized views, even though the date in the query will only ever 
match one of them.


As a workaround, I can have my application restrict the query to a 
single month's materialized view, but it would be ideal if we could 
create CHECK constraints on materialized views, or even better, if the 
query planner could know about the underlying query that created the 
materialized view to infer that certain rows will never match based on 
the conditions that created the view.


Are either of these things that could be supported in the future?  If 
not, is there a better way to get the behavior I want, where I can 
partition materialized views based on a date range?


Thanks.
-Tony




[GENERAL] Streaming replication - slave server

2016-05-02 Thread Tony Nelson
I have a nicely working 3 server, 1 master, 2 slave setup.  All servers are 
running on Ubuntu 12.04.  I was considering building a new slave server on 
16.04.

The master is currently running 9.1.13, the slave I'm going to replace is 
running 9.1.20.

Does the new slave have to be running 9.1?  Or can I use the latest, which 
appears to be 9.5 on 16.04?

Thanks in advance,
Tony

Since 1982, Starpoint Solutions has been a trusted source of human capital and 
solutions. We are committed to our clients, employees, environment, community 
and social concerns.  We foster an inclusive culture based on trust, respect, 
honesty and solid performance. Learn more about Starpoint and our social 
responsibility at http://www.starpoint.com/social_responsibility

This email message from Starpoint Solutions LLC is for the sole use of  the 
intended recipient(s) and may contain confidential and privileged  information. 
 Any unauthorized review, use, disclosure or distribution is prohibited.  If 
you are not the intended recipient, please contact the sender by reply email 
and destroy all copies of the original message.  Opinions, conclusions and 
other information in this message that do not relate to the official business 
of Starpoint Solutions shall be understood as neither given nor endorsed by it.




Re: [GENERAL] Code of Conduct: Is it time?

2016-01-05 Thread Tony Theodore

> On 6 Jan 2016, at 03:47, Joshua D. Drake  wrote:
> 
> In reflection, the only thing a CoC does is put in writing what behaviour we 
> as a project already require, so why not document it and use it as a tool to 
> encourage more contribution to our project?

I fully agree with you. No one would question documenting (or advertising) any 
particular feature - indeed, the quality of documentation is a feature in 
itself. I'm reminded of this 2006 quote from Joss Whedon [1]:

Q: So, why do you write these strong female characters?
A: Because you’re still asking me that question.

The Postgres community is also a great "feature"; maybe the question we should 
be asking is "why isn't it documented yet?" I don't see a CoC as an end in 
itself; it's merely an artefact of a community that is as proud of its 
workings as its output.

Regards,

Tony

[1] 
http://www.goodreads.com/quotes/1018998-why-aren-t-you-asking-a-hundred-other-guys-why-they





Re: [GENERAL] Interrupted streaming replication

2015-12-07 Thread Tony Nelson
Thank you very much, this is working perfectly.  I copied a couple over and the 
server applied them, and is waiting for the next set.

Tony

From: Melvin Davidson [mailto:melvin6...@gmail.com]
Sent: Monday, December 07, 2015 8:59 AM
To: Tony Nelson
Cc: Andreas Kretschmer; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Interrupted streaming replication

As long as you have 00010089002C and the subsequent WALs in your 
archive directory, then you should
be able to simply scp them to your second slave's pg_xlog directory.

On Mon, Dec 7, 2015 at 8:50 AM, Tony Nelson <tnel...@starpoint.com> wrote:


> -Original Message-
> Tony Nelson <tnel...@starpoint.com> wrote:
>
> > 2015-12-07 08:24:50 EST FATAL:  requested WAL segment
> > 00010089002C has already been removed
> >
> >
> >
> > Can I simply copy the file from my archive directory back to the WAL
> directory?
>
> I'm afraid that won't work, because the master knows nothing about this
> copy and can't stream its content.
>
> Maybe you can create/update your recovery.conf, see restore_command.
>
>
> for the future: increase wal_keep_segments, or better: use replication slots
> (since 9.4 a really good feature to prevent such errors).
>
>

Thank you.  I'm still running 9.1, but I'll look into 9.4.  Fortunately my 
database is small so doing a new base backup is not the end of the world.  It's 
been a long time since I've set this up, so I'm going to have to go through a 
refresher course on exactly what I need to do.

Thanks for your help.

Tony





--
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.




Re: [GENERAL] Interrupted streaming replication

2015-12-07 Thread Tony Nelson


> -Original Message-
> Tony Nelson  wrote:
>
> > 2015-12-07 08:24:50 EST FATAL:  requested WAL segment
> > 00010089002C has already been removed
> >
> >
> >
> > Can I simply copy the file from my archive directory back to the WAL
> directory?
>
> I'm afraid that won't work, because the master knows nothing about this
> copy and can't stream its content.
>
> Maybe you can create/update your recovery.conf, see restore_command.
>
>
> for the future: increase wal_keep_segments, or better: use replication slots
> (since 9.4 a really good feature to prevent such errors).
>
>

Thank you.  I'm still running 9.1, but I'll look into 9.4.  Fortunately my 
database is small so doing a new base backup is not the end of the world.  It's 
been a long time since I've set this up, so I'm going to have to go through a 
refresher course on exactly what I need to do.

Thanks for your help.

Tony
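For reference, the replication-slot approach suggested above can be sketched as follows (requires 9.4+; the slot name is a placeholder):

```sql
-- On the master: create a physical replication slot. The master then
-- retains WAL until this standby has consumed it, so "requested WAL
-- segment ... has already been removed" cannot occur for that standby.
SELECT pg_create_physical_replication_slot('standby1');

-- The standby's recovery.conf would then reference the slot:
--   primary_slot_name = 'standby1'

-- Caveat: a slot whose standby disappears retains WAL indefinitely and
-- can fill the master's disk, so drop abandoned slots:
-- SELECT pg_drop_replication_slot('standby1');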





[GENERAL] Interrupted streaming replication

2015-12-07 Thread Tony Nelson
Hi all,

I have a small problem of my own making.  I have a simple streaming replication 
setup, one master and two slaves.  One slave is in the same rack as the master, 
the other is across a "slow" link in another server room.

Last night I executed some updates that caused a lot of changes.  The slave in 
the same room worked just fine, and the slave across the slow link was also 
working fine, applying logs.

Then my backups ran and moved all of the WAL segments to an archive 
directory.

The good news is, I *think* I have them all.

The slave is waiting:

postgres 12562 12561  0 Oct16 ?00:11:07 postgres: startup process   
waiting for 00010089002C

And I definitely have the file in my archive directory on the master.

The master is logging this error:

2015-12-07 08:24:50 EST FATAL:  requested WAL segment 00010089002C 
has already been removed

Can I simply copy the file from my archive directory back to the WAL directory?

Thanks in advance
Tony Nelson




Re: [GENERAL] Heartbleed Impact

2014-04-16 Thread Tony Theodore
On 16 April 2014 21:27, Boszormenyi Zoltan  wrote:
> On 2014-04-16 12:40, Tony Theodore wrote:

>> 1.0.0 isn't affected.
>
>
> The package version and the soversion are only loosely related.
> E.g. the upstream OpenSSL 1.0.0 and 1.0.1 series both ship soversion 1.0.0.

Good point - thanks!

Tony




Re: [GENERAL] Heartbleed Impact

2014-04-16 Thread Tony Theodore
On 16 April 2014 18:48, Dev Kumkar  wrote:

> We embed certain binaries and libssl.so.1.0.0 gets shipped along with
> pre-build in-house database with product.

1.0.0 isn't affected.

Cheers,

Tony




Re: [GENERAL] Database Design: Maintain Audit Trail of Changes

2014-04-11 Thread Tony Theodore
On 12 April 2014 07:02, Moshe Jacobson  wrote:
>
> I know this is a terribly old thread, but if you are still looking for 
> software to provide an audit trail of changes in the database, please see 
> Cyan Audit at http://pgxn.org/dist/cyanaudit. I think it will do just what 
> you're looking for.
> (Full disclosure: I am the author of this software)

Nice! I really like the idea of "undo".

Do you plan to support primary keys other than single column integers?

Cheers,

Tony




Re: [GENERAL] Composite types or composite keys?

2013-11-17 Thread Tony Theodore

On 18 Nov 2013, at 2:24 pm, Chris Travers  wrote:
> 
> I haven't done work with this so I am not 100% sure but it seems to me based 
> on other uses I have for table inheritance that it might work well for 
> enforcing interfaces for natural joins.  The one caveat I can imagine is that 
> there are two issues that occur to me there.
> 
> 1.  If you have two child tables which add a column of the same name, then 
> your centralized enforcement gets messed up and you have a magic join which 
> could take a while to debug
> 
> 2.  The same goes if you have two child tables which also inherit a different 
> parent table for a different natural join
> 
> To be honest I think being explicit about joins is usually a very good thing.

I can see how debugging a magic join would quickly outweigh any benefits and 
the “USING()” clause nicely reflects the foreign key definition, so I’ll stick 
with explicit joins.

Thanks,

Tony





Re: [GENERAL] Composite types or composite keys?

2013-11-17 Thread Tony Theodore

On 16 Nov 2013, at 3:01 am, Merlin Moncure  wrote:
> 
> Well, here are the downsides.  Composite types:
> *) are more than the sum of their parts performance-wise.  So there is
> a storage penalty in both the heap and the index
> *) can't leverage indexes that are querying only part of the key
> *) will defeat the implicit 'per column NOT NULL constraint' of the primary 
> keys

Thanks, I didn’t see any of those - I was thinking that they were like pseudo 
tables or column templates.

> *) are not very well supported in certain clients -- for example JAVA.
> you can always deal with them as text, but that can be a headache.
> 
> ...plus some other things I didn't think about.  If you can deal with
> those constraints, it might be interesting to try a limited
> experiment.   The big upside of composite types is that you can add
> attributes on the fly without rebuilding the index.  Test carefully.

I’ll give it a try - I might stick to using plain or inherited tables for the 
main storage and then experiment with composite types for functions and other 
aggregate tables that are used internally.

Cheers,

Tony





Re: [GENERAL] Composite types or composite keys?

2013-11-17 Thread Tony Theodore

On 15 Nov 2013, at 8:04 pm, Chris Travers  wrote:
> 
> In general, if you don't know you need composite types, you don't want them.  
> You have basically three options and the way you are doing it is the most 
> typical solution to the problem

The current way is much easier since I discovered the “JOIN ... USING(..)” 
syntax and I’m tempted to try natural joins.
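For anyone following along, a sketch of the two forms against the tables from my original post (assuming the same column names):

```sql
-- USING names the shared columns once and collapses them in the output:
select p.description, i.qty
from inventory i
join products p using (source_system, product_id);

-- the equivalent explicit ON form:
select p.description, i.qty
from inventory i
join products p
  on p.source_system = i.source_system
 and p.product_id    = i.product_id;
```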

> Having experience with table inheritance and composite types in tuples, I 
> will say the former has fewer sharp corners than the latter. 
> 
> Where composite types really work well is where you want to add functions 
> which take the type as input.  In essence you can develop some very 
> sophisticated models with them, but you probably would not use them for 
> storage unless you have other considerations in mind.

Thanks for that, I’ve done some reading on inheritance and it looks like I can 
create an empty parent table that acts like a column definition template. This 
also automatically creates a type that can be used in functions which sounds 
like what I’m after. There are also scenarios where “product” is a combination 
of “level” and “id” (where “level” can be things like brand/category/sku) and 
I’d like to use the same calculations regardless of where it sits in the 
hierarchy.
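A minimal sketch of that idea (names invented; note the parent's row type doubling as a function argument type, which needs 9.2+ for the `p.column` references in a SQL function):

```sql
-- an empty parent acts as a column template
create table product_base (
    source_system text not null,
    product_id    text not null
);

-- children inherit the key columns
create table products (description text) inherits (product_base);

-- the parent's row type can be used in function signatures
create function product_label(p product_base) returns text
language sql as $$
    select p.source_system || '/' || p.product_id
$$;
```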

Cheers,

Tony





[GENERAL] Composite types or composite keys?

2013-11-15 Thread Tony Theodore
Hi,

I was reading about composite types and wondering if I should use them instead 
of composite keys. I currently have tables like this:

create table products (
source_system text,
product_id text,
description text,
...
primary key (source_system, product_id)
);
create table inventory (
source_system text,
product_id text,
qty int,
...
foreign key (source_system, product_id) references products
);


and it means having to add the “source_system" column to many queries. Would 
something like:

create type product as (
source_system text,
product_id text
);
create table products (
product product,
description text,
...
primary key(product)
);
create table inventory (
product product,
qty numeric,
...
foreign key (product) references products
);

be a correct use of composite types? I rarely need to see the columns 
separately, so having to write “(product).product_id” won’t happen much in 
practice.
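For reference, accessing such a column would look something like this (untested sketch, reusing the type above):

```sql
-- whole-value comparison in filters and joins:
select qty
from inventory
where product = row('sap', '1001')::product;

-- field access needs the parentheses:
select (product).source_system, (product).product_id, qty
from inventory;
```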

Thanks,

Tony





Re: [GENERAL] Forms for entering data into postgresql

2013-10-12 Thread Tony Theodore

On 13/10/2013, at 9:15 AM, Chuck Davis  wrote:

> the only appropriate way to get from one input field to the next is hitting 
> the enter key. 

Ha, I remember how blazing fast entry could be on old terminals with a field 
exit key on the numeric keypad - particularly when standardised on 4-6 digit 
numeric identifiers. I do feel sorry for people who have to do data entry in 
poorly designed apps, but it's also easier these days to pre-populate data and 
use an ajax hybrid search/select box to seriously minimise the number of 
keystrokes.

Cheers,

Tony





Re: [GENERAL] Hi, Friends, are there any ETL tools (free or commercial) available for PostgreSQL?

2013-10-09 Thread Tony Theodore

On 09/10/2013, at 11:03 PM, Tony Theodore  wrote:

> On 09/10/2013, at 8:39 PM, raghu ram  wrote:
>> 
>> ETL Tools for PostgreSQL::
>> 
>> Definition: An ETL process loads data into the database from a flat file
>> 
>> A. Extract
>> B. Transform
>> C. Load
> 
> Not exactly an ETL tool, but I've recently felt a sense of liberation since 
> switching to make/python/psql/wget... for my ETL needs. Make is probably the 
> cornerstone as it handles dependencies (think foreign key ordering with 
> schema introspection) and parallel loading in a very intuitive way.

Forgot to mention that I do something more like ELT. Postgres itself is the 
best Transform tool.

Cheers,

Tony

Re: [GENERAL] Hi, Friends, are there any ETL tools (free or commercial) available for PostgreSQL?

2013-10-09 Thread Tony Theodore

On 09/10/2013, at 8:39 PM, raghu ram  wrote:

> 
> On Mon, Oct 7, 2013 at 8:32 PM, sunpeng  wrote:
> Hi, Friends, are there any ETL tools (free or commercial) available for 
> PostgreSQL?
> 
> 
> ETL Tools for PostgreSQL::
> 
> Definition: An ETL process loads data into the database from a flat file
> 
> A. Extract
> B. Transform
> C. Load
> 
> 1. PGLoader -  Load .csv file
> 2. Benetl 1.8 - Load .txt or .csv or .xls file
> 3. Talend Open Studio => http://www.talend.com/index.php
> 4. Pentaho Kettle => http://kettle.pentaho.com/
> 5. Visual Importer => Data can be imported from flat files, Excel, MS 
> Access, Oracle, MySQL, Interbase, PostgreSQL, OLE DB, ODBC and DBF files.
> 6. CloverETL => http://en.wikipedia.org/wiki/CloverETL
> 
> Both [3 &4] of these ETL solutions have a lot of the same types of features:
> 
> Both are Java based and run on Linux and Windows (Talend is Eclipse-based)
> Visual designers for creating the transformations
> Connectivity for a myriad of databases, including all the big DBs, text 
> formats, etc.
> Supports distributing jobs across multiple servers if you are doing serious 
> lifting
> Excellent error handling and error notification systems
> Active and helpful forums (Kettle is older and seems to have a larger 
> community however)
> Free and open source
> They are complex enough to handle a lot of tasks, but not so much as to kill 
> you
> There are versions of both (GeoKettle and Spatial Data Integrator) that are 
> tailored for GIS 

Not exactly an ETL tool, but I've recently felt a sense of liberation since 
switching to make/python/psql/wget... for my ETL needs. Make is probably the 
cornerstone as it handles dependencies (think foreign key ordering with schema 
introspection) and parallel loading in a very intuitive way.

Cheers,

Tony



Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tony Theodore

On 02/10/2013, at 6:49 PM, Tim Uckun  wrote:

> >The reason for that is that in PostgreSQL there is no time zone
> information stored along with a "timestamp with time zone",
> it is stored in UTC.
> 
> That seems unintuitive. What is the difference between timestamp without time 
> zone and timestamp with time zone? I was expecting to have the time zone 
> stored in the field. For example one row might be in UTC  but the other row 
> might be in my local time.
> 
> Maybe the question I need to ask is "how can I store the time zone along with 
> the timestamp"   
> 
> >That is because AT TIME ZONE returns a "timestamp without time zone"
> 
> Also seems counterintutive but I guess I can aways convert it. I am just not 
> getting the right offset when I convert. That's what's puzzling.


Here's a handy blog post from Josh Berkus about timestamps:

http://it.toolbox.com/blogs/database-soup/zone-of-misunderstanding-48608
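The short version, as I understand it (outputs assume the session time zone is set to UTC):

```sql
set timezone = 'UTC';

-- timestamptz normalises the input to UTC on storage; no zone is kept:
select '2013-10-02 10:00+02'::timestamptz;
-- 2013-10-02 08:00:00+00

-- AT TIME ZONE converts and returns a plain timestamp (without time zone):
select '2013-10-02 10:00+02'::timestamptz at time zone 'America/New_York';
-- 2013-10-02 04:00:00
```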

Cheers,

Tony



Re: [GENERAL] PERFORM statement

2013-07-08 Thread Tony Theodore

On 09/07/2013, at 2:20 PM, Mike Christensen  wrote:
> 
> PERFORM MyInsert(1,101,'2013-04-04','2013-04-04',2,'f' );
> 
> I get the error:
> 
> ERROR: syntax error at or near "PERFORM"
> SQL state: 42601
> Character: 1
> 
> Is the FAQ out of date or was this feature removed?  I'm using 9.2.1.  Thanks!

I believe PERFORM is a PL/pgSQL construct, not an SQL one. You'd need to 
execute it inside a function.
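Something like this should work, assuming the MyInsert function from the FAQ exists:

```sql
do $$
begin
    -- PERFORM evaluates a query and discards the result;
    -- it is only valid inside PL/pgSQL
    perform MyInsert(1,101,'2013-04-04','2013-04-04',2,'f');
end
$$;
```

From plain SQL, `SELECT MyInsert(...);` does the same job (the result is simply returned instead of discarded).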

Cheers,

Tony




Re: [GENERAL] Running out of memory at vacuum

2013-05-16 Thread Tony Dare

On 05/16/2013 07:13 AM, Ioana Danes wrote:

Hi Jeff,

Yes stop/start of the application server does close all the 
connections to the database.
Lately I did restart postgres too every time that happened. It did 
happen in the past, sometime last year, when I tried just closing the 
app and it was not enough. I might be mixing up different scenarios though, 
because I did have another issue where, by mistake, max_connections 
was set to 1000 and we ran out of memory for good reason. So 
it might have happened in that case, not now.


I will keep you updated.

Thank you,
Ioana

*From:* Jeff Janes 
*To:* Ioana Danes 
*Cc:* PostgreSQL General 
*Sent:* Thursday, May 16, 2013 9:56:07 AM
*Subject:* Re: [GENERAL] Running out of memory at vacuum

On Thu, May 16, 2013 at 6:35 AM, Ioana Danes wrote:


Hi Jeff,

On Tuesday, May 14, 2013, Ioana Danes wrote:


The fix is to restart postgres ... If I only close the
connections the problem is still these so I need to restart
postgres.


How are you closing the connections?

I restart the application server. The problem is that the max_idle
connections was set to 1000 on jdbc connection so once the spike
happened the app would run with 300 connections and 250 of them or
so IDLE for most of the time. I am fixing that



Hi Ioana,  thanks for the responses.  Does restarting the app server 
successfully cause all of those connections to terminate?  If so, and 
yet you still have memory problems, then there is still the mystery of 
where the memory is going.


Cheers,

Jeff




From http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server



   work_mem, maintenance_work_mem


If you do a lot of complex sorts, and have a lot of memory, then 
increasing the work_mem parameter allows PostgreSQL to do larger 
in-memory sorts which, unsurprisingly, will be faster than disk-based 
equivalents.


This size is applied to each and every sort done by each user, and 
complex queries can use multiple working memory sort buffers. Set it to 
50MB, and have 30 users submitting queries, and you are soon using 1.5GB 
of real memory. Furthermore, if a query involves doing merge sorts of 8 
tables, that requires 8 times work_mem. You need to consider what you 
set max_connections to in order to size this parameter correctly. This 
is a setting where data warehouse systems, where users are submitting 
very large queries, can readily make use of many gigabytes of memory.


maintenance_work_mem is used for operations like vacuum. Using extremely 
large values here doesn't help very much, and *because you essentially 
need to reserve that memory* for when vacuum kicks in, takes it away 
from more useful purposes. Something in the 256MB range has anecdotally 
been a reasonably large setting here.
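For what it's worth, both settings can be adjusted without touching postgresql.conf; the values and the role name here are only illustrative:

```sql
-- per session:
set work_mem = '50MB';
set maintenance_work_mem = '256MB';

-- or persistently for a particular role (hypothetical name):
alter role batch_user set maintenance_work_mem = '256MB';
```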





Re: [GENERAL] round returns -0

2013-03-08 Thread Tony Dare

On 03/06/2013 07:16 PM, François Beausoleil wrote:

Le 2013-03-06 à 21:42, Tony Dare a écrit :


I'm taking a standard deviation of a population and subtracting it from the 
average of the same population and rounding the result. Sometimes that result 
is negative and rounding it returns (or shows up as) a negative zero (-0) in a 
SELECT.

basically:
SELECT
  client_name, avg(rpt_cnt),
  stddev_pop(rpt_cnt),
  round(avg(rpt_cnt) - stddev_pop(rpt_cnt))
from client_counts
group by client_name

and what I sometimes get is :
  client_name | a dp number | a dp number | -0

In postgresql-world, is -0 = 0?  Can I use that negative 0 in further 
calculations without fear?  Is this a bug?

This is related to the recent discussion of floating point values on this 
mailing list. You can read more about IEEE 754 and whether 0 == -0 on 
Wikipedia: https://en.wikipedia.org/wiki/Signed_zero#Comparisons

According to that article, IEEE 754 specifies that 0 == -0 in Java/C/etc.

Hope that helps!
François Beausoleil
This is happening in a plpgsql function, so I guess that makes it C, 
under the hood.  That does help, thank you.
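A quick check of the behaviour: round() on double precision returns double precision, which is where the signed zero comes from, and the comparison is still safe:

```sql
select round(-0.2::float8);      -- displays -0
select round(-0.2::float8) = 0;  -- true: IEEE 754 treats -0 as equal to 0
```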





[GENERAL] round returns -0

2013-03-06 Thread Tony Dare
I'm taking a standard deviation of a population and subtracting it from 
the average of the same population and rounding the result. Sometimes 
that result is negative and rounding it returns (or shows up as) a 
negative zero (-0) in a SELECT.


basically:
SELECT
  client_name, avg(rpt_cnt),
  stddev_pop(rpt_cnt),
  round(avg(rpt_cnt) - stddev_pop(rpt_cnt))
from client_counts
group by client_name

and what I sometimes get is :
  client_name | a dp number | a dp number | -0

In postgresql-world, is -0 = 0?  Can I use that negative 0 in further 
calculations without fear?  Is this a bug?


pg version is 9.2
OS is Windows 2003.

Thanks,

Wade Dare
"Committed to striving for an effort to try..."




Re: [GENERAL] Why is this a cross join?

2013-02-17 Thread Tony Theodore

On 18/02/2013, at 9:09 AM, Tim Uckun  wrote:

>> 
>> In some way, every join is a cross join, with the results filtered according 
>> to the specificity of the join conditions. In this case:
>> 
>> inner join model_configurations mc on left(crm.customer_class, 6) = 
>> left(mc.sap_code,6)
>> 
>> "customer_class" sounds like a fairly generic sort of field, so you'd expect 
>> many matches. Truncating the fields is likely to make this even less 
>> specific, returning more results.
>> 
> 
> I guess I am not explaining it properly..
> 
> Say I created new columns on both tables called "first_6" and
> populated them with the substrings.  If I did a inner join or a left
> join on those fields would I still get a cross join?

No, it would be no different if you created new columns - the join condition is 
the same.

You're not actually getting a cross join, that would be many more records than 
9million - you're just not setting a specific enough filter.
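If the six-character prefix really is the intended key, the only fix is a more selective join condition; something like the sketch below (the crm_records table name and the extra region column are invented for illustration):

```sql
select count(*)
from crm_records crm
join model_configurations mc
  on left(crm.customer_class, 6) = left(mc.sap_code, 6)
 and crm.region = mc.region;  -- hypothetical extra predicate
```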

Cheers,

Tony





Re: [GENERAL] Why is this a cross join?

2013-02-17 Thread Tony Theodore

On 18/02/2013, at 7:58 AM, Tim Uckun  wrote:

>> Apparently the first 6 characters of those fields are quite common, which
>> gives you a result for every possible combination of the same 6-character
>> value.
> 
> 
> Hmm. That seems kind of weird.   Is there any way to NOT have this
> be a cross join?  For example if I extracted the first six characters
> into a field and then joined on them it would not be a cross join
> right?

In some way, every join is a cross join, with the results filtered according to 
the specificity of the join conditions. In this case:

inner join model_configurations mc on left(crm.customer_class, 6) = 
left(mc.sap_code,6)

"customer_class" sounds like a fairly generic sort of field, so you'd expect 
many matches. Truncating the fields is likely to make this even less specific, 
returning more results.

Cheers,

Tony
  



Re: [GENERAL] reducing number of ANDs speeds up query

2013-01-12 Thread Tony Theodore

On 12/01/2013, at 12:47 PM, T. E. Lawrence  wrote:

> Hello,
> 
> I have a pretty standard query with two tables:
> 
> SELECT table_a.id FROM table_a a, table_b b WHERE ... AND ... AND b.value=...;
> 
> With the last "AND b.value=..." the query is extremely slow (did not wait for 
> it to end, but more than a minute), because the value column is not indexed 
> (contains items longer than 8K).
> 
> However the previous conditions "WHERE ... AND ... AND" should have already 
> reduced the candidate rows to just a few (table_b contains over 50m rows). 
> And indeed, removing the last "AND b.value=..." speeds the query to just a 
> millisecond.
> 
> Is there a way to instruct PostgreSQL to do first the initial "WHERE ... AND 
> ... AND" and then the last "AND b.value=..." on the (very small) result?

Have you looked at the WITH clause [1,2]:

WITH filtered as (SELECT table_a.id, b.value as val FROM table_a a, table_b b 
WHERE … AND …)
SELECT * FROM filtered WHERE filtered.val=…

It evaluates the first SELECT once, then applies the second SELECT to the 
first in memory (at least that's the way I think about them).

Cheers,

Tony


[1] http://www.postgresql.org/docs/9.2/static/queries-with.html
[2] http://www.postgresql.org/docs/9.2/static/sql-select.html#SQL-WITH





Re: [GENERAL] large database

2012-12-11 Thread Tony CL Chan
Hi, 

If you have big tables you could also think about Hadoop/HBase or Cassandra, but 
do not put a large data set in MySQL. I agree with Bill that "despite the fact 
that lots of people have been able to make it (MySQL) work" (me too, another 
example), there are issues with it.  I have been using MySQL for a number of 
years to handle large DBs with a large number of users, and MySQL is the 
bottleneck, especially when running table joins over large data sets; CPU and 
I/O load go up.

If switching to PostgreSQL, PostgreSQL 9.1.x is very good choice for production 
deployment. 

Thanks
Tony


P.S.  Today I did some stress tests on my PostgreSQL staging server: a) insert 
2 billion records into the test table, b) full-scan the table. Here are some 
test results:

Facts:
Number of records: 2 billion records inserted today
Full table scan: about 16.76 minutes to scan 2 billion rows, really 
AMAZING!
Database size: 109GB
PostgreSQL: 9.2.1
Physical RAM: 8GB
CPU: i5




EXPLAIN ANALYZE SELECT COUNT(*) FROM test;
QUERY PLAN 
---
Aggregate  (cost=33849559.60..33849559.61 rows=1 width=0) (actual 
time=1006476.308..1006476.309 rows=1 loops=1)
->  Seq Scan on test  (cost=0.00..28849559.28 rows=200128 width=0) (actual 
time=47.147..903264.427 rows=20 loops=1)
Total runtime: 1006507.963 ms





On 11 Dec 2012, at 8:27 PM, Bill Moran wrote:

> On Mon, 10 Dec 2012 15:26:02 -0500 (EST) "Mihai Popa"  
> wrote:
> 
>> Hi,
>> 
>> I've recently inherited a project that involves importing a large set of
>> Access mdb files into a Postgres or MySQL database.
>> The process is to export the mdb's to comma separated files than import
>> those into the final database.
>> We are now at the point where the csv files are all created and amount
>> to some 300 GB of data.
>> 
>> I would like to get some advice on the best deployment option.
>> 
>> First, the project has been started using MySQL. Is it worth switching
>> to Postgres and if so, which version should I use?
> 
> I've been managing a few large databases this year, on both PostgreSQL and
> MySQL.
> 
> Don't put your data in MySQL.  Ever.  If you feel like you need to use
> something like MySQL, just go straight to a system that was designed with
> no constraints right off the bat, like Mongo or something.
> 
> Don't put large amounts of data in MySQL.  There are lots of issuse with it.
> Despite the fact that lots of people have been able to make it work (me,
> for example) it's a LOT harder to keep running well than it is on
> PostgreSQL.  MySQL just isn't designed to deal with large data.  As some
> examples: lack of CREATE INDEX CONCURRENTLY, the fact that the default
> configuration stores everything in a single file, the fact that any table
> changes (including simple things like adding a comment, or seemingly
> unrelated things like adding an index) require a complete table rebuild,
> and the fact that if you use anything other than INT AUTO_INCREMENT for
> your primary key you're liable to hit on awful inefficiencies.
> 
> PostgreSQL has none of these problems.
> 
> -- 
> Bill Moran 
> 
> 



Re: [GENERAL] null tablespace?

2011-12-15 Thread Tony Capobianco
Awesome!  We had about 200G worth of tables on our mount point that held
all our WAL data... whoops!  Thanks for the below query.

One last question, can I define a default tablespace per schema?  I
haven't found anything by searching around.
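As far as I know there is no per-schema default tablespace; the nearest alternatives are per-database, per-role, or per-table (the tablespace name here is invented):

```sql
-- database-wide default for newly created objects (affects new sessions):
alter database pg_dw set default_tablespace = 'fast_disk';

-- default for one role's sessions:
alter role jspool set default_tablespace = 'fast_disk';

-- or move an existing table explicitly:
alter table support.source_prev set tablespace fast_disk;
```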

Thanks.
Tony


On Fri, 2011-12-16 at 00:23 +0530, Raghavendra wrote:
> Its going in default tablespace i.e. base/global.
> 
> 
> Try this query, which give information on the default tablespaces.
> 
> 
> select spcname, case spcname when 'pg_default' then (select setting
> from pg_settings where name = 'data_directory')||'/base' when
> 'pg_global' then (select setting from pg_settings where name =
> 'data_directory')||'/global' else spclocation end from pg_tablespace;
> 
> 
> 
> 
> ---
> Regards,
> Raghavendra
> EnterpriseDB Corporation
> Blog: http://raghavt.blogspot.com/
> 
> 
> 
> On Fri, Dec 16, 2011 at 12:11 AM, Tony Capobianco
>  wrote:
> I have not defined my default tablespace:
> 
> show default_tablespace ;
>  default_tablespace
> 
> 
> 
> However, I have tables with null tablespaces values in
> pg_tables:
> 
> pg_dw=# select * from pg_tables where tablename = 'source_prev';
>  schemaname |  tablename  | tableowner | tablespace | hasindexes | hasrules | hastriggers
> ------------+-------------+------------+------------+------------+----------+-------------
>  support    | source_prev | jspool     |            | t          | f        | f
> 
> 
> We have defined out search_path correctly, so what tablespace
> is this table stored in?
> 
> Thanks!
> 
> 
> 
> 





[GENERAL] null tablespace?

2011-12-15 Thread Tony Capobianco
I have not defined my default tablespace:

show default_tablespace ;
 default_tablespace 



However, I have tables with null tablespaces values in pg_tables:

pg_dw=# select * from pg_tables where tablename = 'source_prev';
 schemaname |  tablename  | tableowner | tablespace | hasindexes | hasrules | hastriggers
------------+-------------+------------+------------+------------+----------+-------------
 support    | source_prev | jspool     |            | t          | f        | f


We have defined out search_path correctly, so what tablespace is this table 
stored in?

Thanks!




[GENERAL] pg_dump -n switch lock schema from dml/ddl?

2011-11-10 Thread Tony Capobianco
I'm testing out various pg_dump scenarios using the -n switch and I have
a few questions:

- When using the -n switch, is the whole schema locked from all non-read
DML/DDL operations?

- If the whole schema is locked, once each table is dumped, is it then
released for non-read DML/DDL operations?

- Once the final table is dumped 
(i.e. pg_dump: dumping contents of table zip_data), 
are there additional background processes that are still performing
maintenance tasks?  Or is the entire process complete and all objects
are released?


I'm asking because I have a schema with a large table with many indexes
that is consuming the majority of the dump.  This version of the dump
takes about 4 hours.  
As a solution, we run 2 separate dumps in parallel, one with the schema
excluding the large table and one including only the large table.  
The option with just the large table takes 2.5 hours.  However, the
option with the schema excluding the large table still takes 4 hours.
If pg_dump locks each table individually, then releases when the dump is
completed, I must be encountering lock contention.
Also, I use the -v switch, however I'm not getting any information on
how long the dump of each object takes, is there an option that exists
where I can collect this information in the log file?

Thanks.
Tony




Re: [GENERAL] pg_dump schma while excluding specific table

2011-11-03 Thread Tony Capobianco
BINGO!

Thanks everyone.  That did the trick!

On Thu, 2011-11-03 at 12:56 -0700, Ioana Danes wrote:
> 
> > 
> > pg_dump newdb > /DUMPDIR/newdb.dmp -n dev -T corgi -w -v -F c 2>
> > /DUMPDIR/newdb.log
> > 
> 
> Try: -T dev.corgi  instead of -T corgi 
> 
> 





Re: [GENERAL] pg_dump schma while excluding specific table

2011-11-03 Thread Tony Capobianco
I'm using 9.0.3.  I've tried several permutations of this script and
still I get a dump of the entire schema.  The corgi table is still
included when I need it excluded.

On Thu, 2011-11-03 at 12:02 -0700, Adrian Klaver wrote:
> On Thursday, November 03, 2011 8:16:42 am Tony Capobianco wrote:
> > When I issue:
> > 
> > pg_dump newdb > /DUMPDIR/newdb.dmp -n dev -T corgi -w -v -F c 2>
> > /DUMPDIR/newdb.log
> > 
> > I get a dump of the entire dev schema.  My goal is to dump the dev
> > schema minus the corgi table.  How can I adjust my script to perform
> > this function?
> 
>  A test here worked:
> pg_dump -n public -T csv_null -U postgres -Fc -f test.out  test 
> 
> This is for Postgres version 9.0.4, what version are you using?
> 
> > 
> > Thanks.
> 





[GENERAL] pg_dump schma while excluding specific table

2011-11-03 Thread Tony Capobianco
When I issue:

pg_dump newdb > /DUMPDIR/newdb.dmp -n dev -T corgi -w -v -F c 2> 
/DUMPDIR/newdb.log

I get a dump of the entire dev schema.  My goal is to dump the dev
schema minus the corgi table.  How can I adjust my script to perform
this function?

Thanks.




Re: [GENERAL] pgloader hangs with an invalid filename

2011-08-15 Thread Tony Capobianco
I'm running pgloader 2.3.2 and Postgres 9.  Is there an upgraded version
of pgloader on the PgFoundry site?


On Mon, 2011-08-15 at 20:34 +0200, Dimitri Fontaine wrote:
> Tony Capobianco  writes:
> > Has anyone experienced the behavior of pgloader hanging when the .conf
> > file has an incorrect filename?
> > When I execute with the correct filename, everything works just fine.
> > However, when I use an incorrect filename, I get the below:
> 
> That's certainly due to current threading implementation in pgloader, I
> guess the control thread is still waiting for the load to terminate,
> which will never happen.
> 
> It seems to be fixed in current version though.  See if you can upgrade.
> 
> Regards,





Re: [GENERAL] Weird problem that enormous locks

2011-07-15 Thread Tony Wang
On Fri, Jul 15, 2011 at 18:44, Scott Marlowe wrote:

> On Fri, Jul 15, 2011 at 4:36 AM, Tony Wang  wrote:
> > Weird that I receive your each message twice.
> > On Fri, Jul 15, 2011 at 15:33, Radoslaw Smogura <
> rsmog...@softperience.eu>
> > wrote:
> >>
> >> Simple and obvious question right now do You call commit after
> >> transaction? If yes do you use any query or connection pooler?
> >
> > Yes. connection pool is used at application level, not db level.
> > no commit after transaction is possible (I'm trying to check the logic), I
> > just cannot imagine it happened for so many users at the same time, and then
> > calmed down for long time, and came again.
> > I found the query I used to log locks would miss locks where relname is null.
> > will add that, though no idea why it's null
>
> They're likely exclusive locks on a transaction, which are normal.
>

Thanks for the info. I might be missing something without those rows.


Re: [GENERAL] Weird problem that enormous locks

2011-07-15 Thread Tony Wang
On Fri, Jul 15, 2011 at 19:47, Radosław Smogura wrote:

> On Fri, 15 Jul 2011 19:07:45 +0800, Tony Wang wrote:
>
>> On Fri, Jul 15, 2011 at 18:50, Radosław Smogura  wrote:
>>
>>  On Fri, 15 Jul 2011 18:36:19 +0800, Tony Wang wrote:
>>>
>>>  Weird that I receive your each message twice.
>>>>
>>>> On Fri, Jul 15, 2011 at 15:33, Radoslaw Smogura  wrote:
>>>>
>>>>  Simple and obvious question right now do You call commit after
>>>>> transaction? If yes do you use any query or connection pooler?
>>>>>
>>>>
>>>> Yes. connection pool is used at application level, not db level.
>>>> no commit after transaction is possible (I'm trying to check the
>>>> logic), I just cannot imagine it happened for so many users at the
>>>> same time, and then calmed down for long time, and came again.
>>>>
>>>> I found the query I used to log locks would miss locks where
>>>> relname is null. will add that, though no idea why it's null
>>>>
>>>>
>>>>> Regards,
>>>>> Radoslaw Smogura
>>>>> (mobile)
>>>>> -
>>>>> From: Tony Wang
>>>>> Sent: 15 lipca 2011 03:51
>>>>> To: Scott Marlowe
>>>>> Cc: PostgreSQL
>>>>> Subject: Re: [GENERAL] Weird problem that enormous locks
>>>>>
>>>>> On Fri, Jul 15, 2011 at 08:22, Scott Marlowe wrote:
>>>>>
>>>>>> On Thu, Jul 14, 2011 at 6:01 PM, Tony Wang wrote:
>>>>>>> On Fri, Jul 15, 2011 at 01:13, Scott Marlowe wrote:
>>>>>>>> On Wed, Jul 13, 2011 at 9:47 PM, Tony Wang wrote:
>>>>>>>>> It's a game server, and the queries are updating users' money, as
>>>>>>>>> normal.
>>>>>>>>> The sql is like "UPDATE player SET money = money + 100 where id =
>>>>>>>>> 12345".
>>>>>>>>> The locks were RowExclusiveLock for the table "player" and the
>>>>>>>>> indexes. The weird thing is there was another ExclusiveLock for the
>>>>>>>>> table "player", i.e. "player" got two locks, one RowExclusiveLock
>>>>>>>>> and one ExclusiveLock.
>>>>>>>> You need to figure out what part of your app, or maybe a rogue
>>>>>>>> developer etc is throwing an exclusive lock.
>>>>>>> Yeah, that's what I'm trying to do
>>>>>> Cool.  In your first post you said:
>>>>>> select pg_class.relname, pg_locks.mode, pg_locks.granted,
>>>>>> pg_stat_activity.current_query, pg_stat_activity.query_start,
>>>>>> pg_stat_activity.xact_start as transaction_start,
>>>>>> age(now(),pg_stat_activity.query_start) as query_age,
>>>>>> age(now(),pg_st

Re: [GENERAL] Weird problem that enormous locks

2011-07-15 Thread Tony Wang
On Fri, Jul 15, 2011 at 18:52, Radosław Smogura wrote:

> On Fri, 15 Jul 2011 18:36:19 +0800, Tony Wang wrote:
>
>> Weird that I receive your each message twice.
>>
> Once message You get from mailing list, one because You are (B)CC.
>

gmail should be clever enough handling that; at least I didn't receive it twice 
this time.


Re: [GENERAL] Weird problem that enormous locks

2011-07-15 Thread Tony Wang
On Fri, Jul 15, 2011 at 18:50, Radosław Smogura wrote:

> On Fri, 15 Jul 2011 18:36:19 +0800, Tony Wang wrote:
>
>> Weird that I receive your each message twice.
>>
>> On Fri, Jul 15, 2011 at 15:33, Radoslaw Smogura  wrote:
>>
>>
>>  Simple and obvious question right now do You call commit after
>>> transaction? If yes do you use any query or connection pooler?
>>>
>>
>> Yes. connection pool is used at application level, not db level.
>> no commit after transaction is possible (I'm trying to check the
>> logic), I just cannot imagine it happened for so many users at the
>> same time, and then calmed down for long time, and came again.
>>
>> I found the query I used to log locks would miss locks where relname is
>> null. will add that, though no idea why it's null
>>
>>
>>  
>>> Regards,
>>> Radoslaw Smogura
>>> (mobile)
>>> -
>>> From: Tony Wang
>>> Sent: 15 lipca 2011 03:51
>>> To: Scott Marlowe
>>> Cc: PostgreSQL
>>>
>>> Subject: Re: [GENERAL] Weird problem that enormous locks
>>>
>>> On Fri, Jul 15, 2011 at 08:22, Scott Marlowe wrote:
>>>
>>>  On Thu, Jul 14, 2011 at 6:01 PM, Tony Wang wrote:
>>>>
>>>>> On Fri, Jul 15, 2011 at 01:13, Scott Marlowe
>>>>>
>>>> > wrote:
>>>>
>>>>
>>>>>> On Wed, Jul 13, 2011 at 9:47 PM, Tony Wang wrote:
>>>>>>
>>>>>
>>>>  > On Thu, Jul 14, 2011 at 10:35, John R Pierce
>>>>>>
>>>>> >> > wrote:
>>>>
>>>>> > Its a game server, and the queries are updating users money,
>>>>>>
>>>>> as
>>>>
>>>>> > normal.
>>>>>> > The sql is like "UPDATE player SET money = money + 100 where
>>>>>>
>>>>> id =
>>>> >> > 12345".
>>>>
>>>>> > The locks were RowExclusiveLock for the table "player" and
>>>>>>
>>>>> the indexes.
>>>>
>>>>> > The
>>>>>> > weird thing is there was another ExclusiveLock for the table
>>>>>>
>>>>> "player",
>>>> >> > i.e.
>>>>
>>>>> > "player" got two locks, one RowExclusiveLock and one
>>>>>>
>>>>> ExclusiveLock.
>>>>
>>>>> > In the postgresql documentation
>>>>>> >
>>>>>>
>>>> (http://www.postgresql.org/docs/8.4/static/explicit-locking.html [5]), its
>>>>
>>>> >> > said
>>>>
>>>>> > about the  Exclusive "This lock mode is not automatically
>>>>>>
>>>>> acquired on
>>>>
>>>>> > user
>>>>>> > tables by any PostgreSQL command."
>>>>>>
>>>>>> You need to figure out what part of your app, or maybe a rogue
>>>>>>
>>>>> >> developer etc is throwing an exclusive lock.
>>>>
>>>>>
>>>>> Yeah, thats what Im trying to do
>>>>>
>>>>
>>>> Cool.  In your first post you said:
>>>>
>>>> > select pg_class.relname, pg_locks.mode, pg_locks.granted,
>>>> > pg_stat_activity.current_query, pg_stat_activity.query_start,
>>>> > pg_stat_activity.xact_start as transaction_start,
>>>> > age(now(),pg_stat_activity.query_start) as query_age,
>>>> > age(now(),pg_stat_activity.xact_start) as transaction_age,
>>>> > pg_stat_activity.procpid from pg_stat_activity,pg_locks left
>>>> > outer join pg_class on (pg_locks.relation = pg_class.oid) where
>>>> > pg_locks.pid=pg_stat_activity.procpid and
>>>> > substr(pg_class.relname,1,3) != 'pg_' order by query_start;
>>>>
>>>>  The only special thing I can find is that there were a lot
>>>>>
>>>> ExclusiveLock, while its normal the locks are
>>>>
>>>>> only AccessShareLock and RowExclusiveLock.
>>>>>
>>>>
>>>> So what did / does current_query say when its happening?  If it
>>>> says
>>>> you dont have access permission then run that query as root when
>>>> it
>>>> happens again.
>>>>
>>>
>>> As I said, its normal update like "UPDATE player SET money = money +
>>>
>>> 100 WHERE id=12345", but there are quite many
>>>
>>
>>
>>
>> Links:
>> --
>> [1] mailto:www...@gmail.com
>> [2] mailto:scott.marl...@gmail.com
>> [3] mailto:www...@gmail.com
>> [4] mailto:pie...@hogranch.com
>> [5] http://www.postgresql.org/docs/8.4/static/explicit-locking.html
>> [6] mailto:scott.marl...@gmail.com
>> [7] mailto:rsmogura@softperience.eu
>>
> Actually I don't know what pool You use (I think PHP - I don't know much
> about this), but I imagine following, If You don't use auto commit or
> commit:
> 1. User A updates moneys, gets connections C1, locks his row, no commit
> 2. User A updates moneys again, gets connection C2, but C1 still holds
> lock.
> Regards,
> Radosław Smogura
>

Any connection pool behaves similarly. Connection C1 will surely be committed
and returned after the operation finishes. Having said that, the ONLY possible
reason is that some transactions hung while holding their locks, preventing
others from working, and the "ExclusiveLock" itself is not the problem, right?
The interesting thing is that I didn't find any timeout/exception in the
postgresql log after the "lock" period ended, only long query times.
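Radosław's two-connection scenario is easy to reproduce in two psql sessions; a minimal sketch, assuming the "player" table discussed in this thread:

```sql
-- Session 1: update without committing; the row lock (and the table's
-- RowExclusiveLock) is held until the transaction ends.
BEGIN;
UPDATE player SET money = money + 100 WHERE id = 12345;
-- (no COMMIT yet)

-- Session 2: blocks here until session 1 commits or rolls back.
UPDATE player SET money = money + 100 WHERE id = 12345;
```

Note this produces a blocked row update, not the table-level ExclusiveLock discussed above.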


Re: [GENERAL] Weird problem that enormous locks

2011-07-15 Thread Tony Wang
Weird that I receive your each message twice.

On Fri, Jul 15, 2011 at 15:33, Radoslaw Smogura wrote:

>  Simple and obvious question right now do You call commit after
> transaction? If yes do you use any query or connection pooler?
>

Yes. The connection pool is used at the application level, not the db level.
A missing commit after a transaction is possible (I'm trying to check the
logic); I just cannot imagine it happening for so many users at the same
time, then calming down for a long time, and coming back again.

I found that the query I used to log locks would miss locks whose relname is
null. I will add that, though I have no idea why it's null.
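One way to close that gap is to left-join pg_class from pg_locks, so rows whose relation has no pg_class entry survive the filter; a sketch against the 8.4-era catalogs used elsewhere in this thread (procpid, current_query):

```sql
-- Keep locks even when pg_locks.relation has no pg_class row
-- (e.g. transactionid or virtualxid locks, where relname is null).
SELECT pg_class.relname, pg_locks.locktype, pg_locks.mode, pg_locks.granted,
       pg_stat_activity.current_query, pg_stat_activity.query_start,
       pg_stat_activity.procpid
FROM pg_locks
LEFT JOIN pg_class ON pg_locks.relation = pg_class.oid
JOIN pg_stat_activity ON pg_locks.pid = pg_stat_activity.procpid
WHERE pg_class.relname IS NULL
   OR substr(pg_class.relname, 1, 3) != 'pg_'
ORDER BY pg_stat_activity.query_start;
```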


>
> 
> Regards,
> Radoslaw Smogura
> (mobile)
> --
> From: Tony Wang
> Sent: 15 lipca 2011 03:51
> To: Scott Marlowe
> Cc: PostgreSQL
>
> Subject: Re: [GENERAL] Weird problem that enormous locks
>
> On Fri, Jul 15, 2011 at 08:22, Scott Marlowe wrote:
>
>>  On Thu, Jul 14, 2011 at 6:01 PM, Tony Wang  wrote:
>> > On Fri, Jul 15, 2011 at 01:13, Scott Marlowe 
>> > wrote:
>> >>
>> >> On Wed, Jul 13, 2011 at 9:47 PM, Tony Wang  wrote:
>> >> > On Thu, Jul 14, 2011 at 10:35, John R Pierce 
>> >> > wrote:
>> >> > It's a game server, and the queries are updating users' money, as
>> >> > normal.
>> >> > The sql is like "UPDATE player SET money = money + 100 where id =
>> >> > 12345".
>> >> > The locks were RowExclusiveLock for the table "player" and the
>> indexes.
>> >> > The
>> >> > weird thing is there was another ExclusiveLock for the table
>> "player",
>> >> > i.e.
>> >> > "player" got two locks, one RowExclusiveLock and one ExclusiveLock.
>> >> > In the postgresql documentation
>> >> > (http://www.postgresql.org/docs/8.4/static/explicit-locking.html),
>> it's
>> >> > said
>> >> > about the  Exclusive "This lock mode is not automatically acquired on
>> >> > user
>> >> > tables by any PostgreSQL command."
>> >>
>> >> You need to figure out what part of your app, or maybe a rogue
>> >> developer etc is throwing an exclusive lock.
>> >
>> > Yeah, that's what I'm trying to do
>>
>> Cool.  In your first post you said:
>>
>> > select pg_class.relname, pg_locks.mode, pg_locks.granted,
>> pg_stat_activity.current_query, pg_stat_activity.query_start,
>> > pg_stat_activity.xact_start as transaction_start,
>> age(now(),pg_stat_activity.query_start) as query_age,
>> > age(now(),pg_stat_activity.xact_start) as transaction_age,
>> pg_stat_activity.procpid from pg_stat_activity,pg_locks left
>> > outer join pg_class on (pg_locks.relation = pg_class.oid) where
>> pg_locks.pid=pg_stat_activity.procpid and
>> > substr(pg_class.relname,1,3) != 'pg_' order by query_start;
>>
>> > The only special thing I can find is that there were a lot
>> ExclusiveLock, while it's normal the locks are
>> > only AccessShareLock and RowExclusiveLock.
>>
>> So what did / does current_query say when it's happening?  If it says
>> you don't have access permission then run that query as root when it
>> happens again.
>>
>
> As I said, it's normal update like "UPDATE player SET money = money + 100
> WHERE id=12345", but there are quite many
>


Re: [GENERAL] Weird problem that enormous locks

2011-07-14 Thread Tony Wang
On Fri, Jul 15, 2011 at 12:38, John R Pierce  wrote:

> On 07/14/11 7:58 PM, Tony Wang wrote:
>
>>
>> There were 2519 RowExclusiveLock and 85 ExclusiveLock
>>
>
> how could 800 max_connections have 2519 row locks ?  do you update multiple
> different rows in the same transaction?
>
>
The row lock count includes index locks, and some updates touch two tables.
Roughly 700+ unique RowExclusiveLocks there.


>
>
>
> --
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Weird problem that enormous locks

2011-07-14 Thread Tony Wang
On Fri, Jul 15, 2011 at 10:42, Scott Marlowe wrote:

> On Thu, Jul 14, 2011 at 8:28 PM, Tony Wang  wrote:
> > On Fri, Jul 15, 2011 at 10:05, Scott Marlowe 
> > wrote:
> >>
> >> On Thu, Jul 14, 2011 at 7:51 PM, Tony Wang  wrote:
> >> > On Fri, Jul 15, 2011 at 08:22, Scott Marlowe  >
> >> > wrote:
> >> >>
> >> >> On Thu, Jul 14, 2011 at 6:01 PM, Tony Wang  wrote:
> >> >> > On Fri, Jul 15, 2011 at 01:13, Scott Marlowe
> >> >> > 
> >> >> > wrote:
> >> >> >>
> >> >> >> On Wed, Jul 13, 2011 at 9:47 PM, Tony Wang 
> wrote:
> >> >> >> > On Thu, Jul 14, 2011 at 10:35, John R Pierce <
> pie...@hogranch.com>
> >> >> >> > wrote:
> >> >> >> > It's a game server, and the queries are updating users' money,
> as
> >> >> >> > normal.
> >> >> >> > The sql is like "UPDATE player SET money = money + 100 where id
> =
> >> >> >> > 12345".
> >> >> >> > The locks were RowExclusiveLock for the table "player" and the
> >> >> >> > indexes.
> >> >> >> > The
> >> >> >> > weird thing is there was another ExclusiveLock for the table
> >> >> >> > "player",
> >> >> >> > i.e.
> >> >> >> > "player" got two locks, one RowExclusiveLock and one
> >> >> >> > ExclusiveLock.
> >> >> >> > In the postgresql documentation
> >> >> >> > (
> http://www.postgresql.org/docs/8.4/static/explicit-locking.html),
> >> >> >> > it's
> >> >> >> > said
> >> >> >> > about the  Exclusive "This lock mode is not automatically
> acquired
> >> >> >> > on
> >> >> >> > user
> >> >> >> > tables by any PostgreSQL command."
> >> >> >>
> >> >> >> You need to figure out what part of your app, or maybe a rogue
> >> >> >> developer etc is throwing an exclusive lock.
> >> >> >
> >> >> > Yeah, that's what I'm trying to do
> >> >>
> >> >> Cool.  In your first post you said:
> >> >>
> >> >> > select pg_class.relname, pg_locks.mode, pg_locks.granted,
> >> >> > pg_stat_activity.current_query, pg_stat_activity.query_start,
> >> >> > pg_stat_activity.xact_start as transaction_start,
> >> >> > age(now(),pg_stat_activity.query_start) as query_age,
> >> >> > age(now(),pg_stat_activity.xact_start) as transaction_age,
> >> >> > pg_stat_activity.procpid from pg_stat_activity,pg_locks left
> >> >> > outer join pg_class on (pg_locks.relation = pg_class.oid) where
> >> >> > pg_locks.pid=pg_stat_activity.procpid and
> >> >> > substr(pg_class.relname,1,3) != 'pg_' order by query_start;
> >> >>
> >> >> > The only special thing I can find is that there were a lot
> >> >> > ExclusiveLock, while it's normal the locks are
> >> >> > only AccessShareLock and RowExclusiveLock.
> >> >>
> >> >> So what did / does current_query say when it's happening?  If it says
> >> >> you don't have access permission then run that query as root when it
> >> >> happens again.
> >> >
> >> > As I said, it's normal update like "UPDATE player SET money = money +
> >> > 100
> >> > WHERE id=12345", but there are quite many
> >>
> >> A regular update like that can't get a full exclusive lock by itself,
> >> there'd have to be a previous query in the same transaction that took
> >> out an explicit lock.  Is it possible for you to set up query logging
> >> such that you can track connections to see which one does that in the
> >> future?
> >
> > Yeah, and I also wonder when will an ExclusiveLock acquired.
> > I set up query logging after that, that'll be really big file.
> >
> >>
> >> Were there more than 1 exclusive lock (now row exclusive, but just
> >> plain exclusive)?
> >
> > There were many such locks (not row exclusive) updating different player
> id.
>
> How many just plain exclusive locks were there?
>

There were 2519 RowExclusiveLock and 85 ExclusiveLock


Re: [GENERAL] Weird problem that enormous locks

2011-07-14 Thread Tony Wang
On Fri, Jul 15, 2011 at 10:05, Scott Marlowe wrote:

> On Thu, Jul 14, 2011 at 7:51 PM, Tony Wang  wrote:
> > On Fri, Jul 15, 2011 at 08:22, Scott Marlowe 
> > wrote:
> >>
> >> On Thu, Jul 14, 2011 at 6:01 PM, Tony Wang  wrote:
> >> > On Fri, Jul 15, 2011 at 01:13, Scott Marlowe  >
> >> > wrote:
> >> >>
> >> >> On Wed, Jul 13, 2011 at 9:47 PM, Tony Wang  wrote:
> >> >> > On Thu, Jul 14, 2011 at 10:35, John R Pierce 
> >> >> > wrote:
> >> >> > It's a game server, and the queries are updating users' money, as
> >> >> > normal.
> >> >> > The sql is like "UPDATE player SET money = money + 100 where id =
> >> >> > 12345".
> >> >> > The locks were RowExclusiveLock for the table "player" and the
> >> >> > indexes.
> >> >> > The
> >> >> > weird thing is there was another ExclusiveLock for the table
> >> >> > "player",
> >> >> > i.e.
> >> >> > "player" got two locks, one RowExclusiveLock and one ExclusiveLock.
> >> >> > In the postgresql documentation
> >> >> > (http://www.postgresql.org/docs/8.4/static/explicit-locking.html),
> >> >> > it's
> >> >> > said
> >> >> > about the  Exclusive "This lock mode is not automatically acquired
> on
> >> >> > user
> >> >> > tables by any PostgreSQL command."
> >> >>
> >> >> You need to figure out what part of your app, or maybe a rogue
> >> >> developer etc is throwing an exclusive lock.
> >> >
> >> > Yeah, that's what I'm trying to do
> >>
> >> Cool.  In your first post you said:
> >>
> >> > select pg_class.relname, pg_locks.mode, pg_locks.granted,
> >> > pg_stat_activity.current_query, pg_stat_activity.query_start,
> >> > pg_stat_activity.xact_start as transaction_start,
> >> > age(now(),pg_stat_activity.query_start) as query_age,
> >> > age(now(),pg_stat_activity.xact_start) as transaction_age,
> >> > pg_stat_activity.procpid from pg_stat_activity,pg_locks left
> >> > outer join pg_class on (pg_locks.relation = pg_class.oid) where
> >> > pg_locks.pid=pg_stat_activity.procpid and
> >> > substr(pg_class.relname,1,3) != 'pg_' order by query_start;
> >>
> >> > The only special thing I can find is that there were a lot
> >> > ExclusiveLock, while it's normal the locks are
> >> > only AccessShareLock and RowExclusiveLock.
> >>
> >> So what did / does current_query say when it's happening?  If it says
> >> you don't have access permission then run that query as root when it
> >> happens again.
> >
> > As I said, it's normal update like "UPDATE player SET money = money + 100
> > WHERE id=12345", but there are quite many
>
> A regular update like that can't get a full exclusive lock by itself,
> there'd have to be a previous query in the same transaction that took
> out an explicit lock.  Is it possible for you to set up query logging
> such that you can track connections to see which one does that in the
> future?
>

Yeah, and I also wonder when an ExclusiveLock would be acquired.
I set up query logging after that; that'll be a really big file.
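For reference, the relevant postgresql.conf lines might look like this (a sketch; the earlier 50 ms threshold was presumably set via log_min_duration_statement):

```
log_statement = 'all'             # log every statement, not only slow ones
log_line_prefix = '%m [%p] '      # timestamp + backend pid, to trace sessions
#log_min_duration_statement = 50  # the previous slow-query-only setting (ms)
```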


>
> Were there more than 1 exclusive lock (now row exclusive, but just
> plain exclusive)?
>

There were many such locks (not row exclusive) updating different player ids.


Re: [GENERAL] Weird problem that enormous locks

2011-07-14 Thread Tony Wang
On Fri, Jul 15, 2011 at 08:22, Scott Marlowe wrote:

> On Thu, Jul 14, 2011 at 6:01 PM, Tony Wang  wrote:
> > On Fri, Jul 15, 2011 at 01:13, Scott Marlowe 
> > wrote:
> >>
> >> On Wed, Jul 13, 2011 at 9:47 PM, Tony Wang  wrote:
> >> > On Thu, Jul 14, 2011 at 10:35, John R Pierce 
> >> > wrote:
> >> > It's a game server, and the queries are updating users' money, as
> >> > normal.
> >> > The sql is like "UPDATE player SET money = money + 100 where id =
> >> > 12345".
> >> > The locks were RowExclusiveLock for the table "player" and the
> indexes.
> >> > The
> >> > weird thing is there was another ExclusiveLock for the table "player",
> >> > i.e.
> >> > "player" got two locks, one RowExclusiveLock and one ExclusiveLock.
> >> > In the postgresql documentation
> >> > (http://www.postgresql.org/docs/8.4/static/explicit-locking.html),
> it's
> >> > said
> >> > about the  Exclusive "This lock mode is not automatically acquired on
> >> > user
> >> > tables by any PostgreSQL command."
> >>
> >> You need to figure out what part of your app, or maybe a rogue
> >> developer etc is throwing an exclusive lock.
> >
> > Yeah, that's what I'm trying to do
>
> Cool.  In your first post you said:
>
> > select pg_class.relname, pg_locks.mode, pg_locks.granted,
> pg_stat_activity.current_query, pg_stat_activity.query_start,
> > pg_stat_activity.xact_start as transaction_start,
> age(now(),pg_stat_activity.query_start) as query_age,
> > age(now(),pg_stat_activity.xact_start) as transaction_age,
> pg_stat_activity.procpid from pg_stat_activity,pg_locks left
> > outer join pg_class on (pg_locks.relation = pg_class.oid) where
> pg_locks.pid=pg_stat_activity.procpid and
> > substr(pg_class.relname,1,3) != 'pg_' order by query_start;
>
> > The only special thing I can find is that there were a lot ExclusiveLock,
> while it's normal the locks are
> > only AccessShareLock and RowExclusiveLock.
>
> So what did / does current_query say when it's happening?  If it says
> you don't have access permission then run that query as root when it
> happens again.
>

As I said, it's normal update like "UPDATE player SET money = money + 100
WHERE id=12345", but there are quite many


Re: [GENERAL] Weird problem that enormous locks

2011-07-14 Thread Tony Wang
On Fri, Jul 15, 2011 at 01:13, Scott Marlowe wrote:

> On Wed, Jul 13, 2011 at 9:47 PM, Tony Wang  wrote:
> > On Thu, Jul 14, 2011 at 10:35, John R Pierce 
> wrote:
> > It's a game server, and the queries are updating users' money, as normal.
> > The sql is like "UPDATE player SET money = money + 100 where id = 12345".
> > The locks were RowExclusiveLock for the table "player" and the indexes.
> The
> > weird thing is there was another ExclusiveLock for the table "player",
> i.e.
> > "player" got two locks, one RowExclusiveLock and one ExclusiveLock.
> > In the postgresql documentation
> > (http://www.postgresql.org/docs/8.4/static/explicit-locking.html), it's
> said
> > about the  Exclusive "This lock mode is not automatically acquired on
> user
> > tables by any PostgreSQL command."
>
> You need to figure out what part of your app, or maybe a rogue
> developer etc is throwing an exclusive lock.
>

Yeah, that's what I'm trying to do


Re: [GENERAL] Weird problem that enormous locks

2011-07-14 Thread Tony Wang
Thanks, I've checked for "for update" queries. There are none.

On Thu, Jul 14, 2011 at 15:36, Radoslaw Smogura wrote:

>  Once time I've read 9.x PostgreSQL locks everything before offset, if You
> execute select for update offset. Do you call such query at least once? It's
> the way why we think about having 9.x server.
>
> 
> Regards,
> Radoslaw Smogura
> (mobile)
> --
> From: Tony Wang
> Sent: 14 lipca 2011 07:00
> To: John R Pierce
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Weird problem that enormous locks
>
>
> On Thu, Jul 14, 2011 at 12:35, John R Pierce  wrote:
>
>> On 07/13/11 8:47 PM, Tony Wang wrote:
>>
>>> It's a game server, and the queries are updating users' money, as normal.
>>> The sql is like "UPDATE player SET money = money + 100 where id = 12345".
>>> The locks were RowExclusiveLock for the table "player" and the indexes. The
>>> weird thing is there was another ExclusiveLock for the table "player", i.e.
>>> "player" got two locks, one RowExclusiveLock and one ExclusiveLock.
>>>
>>
>> that query should be quite fast. is it part of a larger transaction?  is
>> there any possibility of multiple sessions/connections accessing the same
>> player.id?
>>
>>
> That's possible, but I think only one row will be locked for a while, but
> not thousands of locks for an hour. It's rare that thousands of users update
> the value at once.
>
>
>>
>> it would be interesting to identify the process that issued the exclusive
>> lock and determine what query/queries its made.  if its not apparent in
>> pg_stat_activity, perhaps enable logging of all DDL commands, and check the
>> logs.
>>
>
> yeah, I've made the log_statement to "all" now. Previously, it only logged
> slow queries more than 50ms. I could know something from logs if it happens
> again (hope not).
>
>
>>
>> if there's a lot of active queries (you have 800 connections)
>>
>>select count(*),current_query from pg_stat_activity group by
>> current_query order by count(*) desc;
>>
>
> that's helpful, thanks.
>
>
>>
>> can help you make sense of them.
>>
>>
>>
>> --
>> john r pierceN 37, W 122
>> santa cruz ca mid-left coast
>>
>>
>>
>
>


Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread Tony Wang
On Thu, Jul 14, 2011 at 12:35, John R Pierce  wrote:

> On 07/13/11 8:47 PM, Tony Wang wrote:
>
>> It's a game server, and the queries are updating users' money, as normal.
>> The sql is like "UPDATE player SET money = money + 100 where id = 12345".
>> The locks were RowExclusiveLock for the table "player" and the indexes. The
>> weird thing is there was another ExclusiveLock for the table "player", i.e.
>> "player" got two locks, one RowExclusiveLock and one ExclusiveLock.
>>
>
> that query should be quite fast. is it part of a larger transaction?  is
> there any possibility of multiple sessions/connections accessing the same
> player.id?
>
>
That's possible, but then I'd expect only one row to be locked for a while,
not thousands of locks held for an hour. It's rare for thousands of users to
update the value at once.


>
> it would be interesting to identify the process that issued the exclusive
> lock and determine what query/queries its made.  if its not apparent in
> pg_stat_activity, perhaps enable logging of all DDL commands, and check the
> logs.
>

Yeah, I've set log_statement to "all" now. Previously it only logged slow
queries taking more than 50 ms. I should be able to learn something from the
logs if it happens again (hope not).


>
> if there's a lot of active queries (you have 800 connections)
>
>select count(*),current_query from pg_stat_activity group by
> current_query order by count(*) desc;
>

that's helpful, thanks.


>
> can help you make sense of them.
>
>
>
> --
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
>
>
>


Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread Tony Wang
On Thu, Jul 14, 2011 at 10:35, John R Pierce  wrote:

> On 07/13/11 7:16 PM, Tony Wang wrote:
>
>  On Thu, Jul 14, 2011 at 10:01, John R Pierce > pie...@hogranch.com>> wrote:
>>
>>On 07/13/11 6:55 PM, Tony Wang wrote:
>>
>>Could I consider it a hardware problem, or postgresql running
>>too long which causes problems? (It ran about half a month,
>>however, it ran much longer than that without problems)
>>
>>
>>i have postgres servers that run for months and even years without
>>problems.
>>
>>
>> Yeah, same for me.
>>
>>
>>based on what I see in your original posting, there's no way
>>anyone on this list could possibly guess what is happening on your
>>server.
>>
>>
>> Sorry but is there anything I'm missing? I just want to know any possible
>> situation can cause high locks. The server runs for more than a year, and I
>> didn't do any related update recently and it just happened.
>>
>
> If I run into locking problems, the first thing *I* do is look at
> pg_stat_activity to see what sort of queries are active, and relate the
> transaction OIDs to the pg_locks and the queries to figure out whats locking
> on what, which it appears your join is doingIf you had that many
> exclusive_locks,  just what were the queries making these locks doing?
>

It's a game server, and the queries are updating users' money, as normal.
The sql is like "UPDATE player SET money = money + 100 where id = 12345".
The locks were RowExclusiveLock for the table "player" and the indexes. The
weird thing is there was another ExclusiveLock for the table "player", i.e.
"player" got two locks, one RowExclusiveLock and one ExclusiveLock.

In the PostgreSQL documentation
(http://www.postgresql.org/docs/8.4/static/explicit-locking.html), it says of
the Exclusive mode: "This lock mode is not automatically acquired on user
tables by any PostgreSQL command."
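On 8.4 there is no built-in helper to map waiters to blockers, but the two sides can be paired by hand; a sketch using the 8.4-era columns (procpid, current_query):

```sql
-- Pair each ungranted lock with a session holding a granted lock on the
-- same relation, and show what the holder is currently running.
SELECT w.pid  AS waiting_pid,
       h.pid  AS holding_pid,
       h.mode AS held_mode,
       a.current_query AS holding_query
FROM pg_locks w
JOIN pg_locks h
  ON h.granted
 AND NOT w.granted
 AND w.relation = h.relation
 AND w.pid <> h.pid
JOIN pg_stat_activity a ON a.procpid = h.pid;
```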


>
> We don't know what sort of schema you have, what kind of queries your
> applications make, etc etc etc.   were there any hardware events related to
> storage in the kernel message buffer (displayed by dmesg (1) on most unix
> and linux systems) ?   If linux, has the oomkiller run amok? (this also
> should be logged in dmesg)
>

Mostly updates to players' info, plus another table called "items" that tracks
item ownership for users.
As I listed, I'm using Ubuntu 10.04; I didn't find any useful messages there.
Does oomkiller mean the out-of-memory killer? From the munin graph, memory
usage looks quite normal.


>
>
> 800 concurrent connections is a very large number for a server that has at
> most a dozen cores. (you say you have x5650, thats a 6 core processor, which
> supports at most 2 sockets, for 12 cores total.  these 12 cores support
> hyperthreading, which allows 24 total threads).  With 24 hardware threads
> and 800 queries running, you'd have 33 queries contending for each CPU,
> which likely will result in LOWER total performance than if you tried to
> execute fewer queries at once.If most of those connections are idle at a
> given time, you likely should consider using a connection pooler with a lot
> fewer max_connections, say, no more than 100 or so.


Yeah, that's what I planned to do next.

Thanks for your concerns! :)
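A pooler in transaction mode would cap the number of backend connections as John suggests; a hypothetical pgbouncer fragment (database name and sizes are illustrative, not from this thread):

```
[databases]
game = host=127.0.0.1 dbname=game

[pgbouncer]
pool_mode = transaction
max_client_conn = 800
default_pool_size = 50
```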


>
>
>
>
> --
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
>
>
>


Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread Tony Wang
On Thu, Jul 14, 2011 at 10:01, John R Pierce  wrote:

> On 07/13/11 6:55 PM, Tony Wang wrote:
>
>> Could I consider it a hardware problem, or postgresql running too long
>> which causes problems? (It ran about half a month, however, it ran much
>> longer than that without problems)
>>
>
> i have postgres servers that run for months and even years without
> problems.
>

Yeah, same for me.


>
> based on what I see in your original posting, there's no way anyone on this
> list could possibly guess what is happening on your server.
>

Sorry, but is there anything I'm missing? I just want to know about any
possible situation that can cause this many locks. The server has run for more
than a year, I didn't make any related changes recently, and it just happened.


>
> --
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
>
>
>


Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread Tony Wang
Could I consider it a hardware problem, or could PostgreSQL running too long
cause problems? (It had run for about half a month this time; however, it has
run much longer than that before without problems.)

On Wed, Jul 13, 2011 at 00:52, Tony Wang  wrote:

> Hi,
>
> The configuration information is listed at the end.
> I met this problem last weekend. The presentation was that, the db locks
> became enormous, up to 8.3k, and the db hanged there. About half an hour to
> one hour later, it recovered: the locks became 1 or 2 hundreds, which was
> its average level. It happened every 5-8 hours.
>
> I checked the log, but nothing interesting. The log about dead lock
> happened several times a day, and not when hanging. I had a cron job running
> every minute to record the locks using the command below:
>
> select pg_class.relname, pg_locks.mode, pg_locks.granted,
> pg_stat_activity.current_query, pg_stat_activity.query_start,
> pg_stat_activity.xact_start as transaction_start,
> age(now(),pg_stat_activity.query_start) as query_age,
> age(now(),pg_stat_activity.xact_start) as transaction_age,
> pg_stat_activity.procpid from pg_stat_activity,pg_locks left outer join
> pg_class on (pg_locks.relation = pg_class.oid) where
> pg_locks.pid=pg_stat_activity.procpid and substr(pg_class.relname,1,3) !=
> 'pg_' order by query_start;
>
> The only special thing I can find is that there were a lot ExclusiveLock,
> while it's normal the locks are only AccessShareLock and RowExclusiveLock.
>
> After suffering from that for whole weekend, I restarted postgresql, and my
> service, and reduced a bit db pressure by disabling some service, and it
> didn't happen again till now.
>
> The possible reason I think of is that someone was reindexing index, which
> is almost impossible; or the hardware problem, which is also little
> possible.
>
> Have any one experienced that, or any suggestion on researching/debugging?
>
> The configuration information:
> System: Ubuntu server 10.04.2
> Postgresql version: 8.4.8-0ubuntu0.10.04
> CPU: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz (24 cores)
> Disk: Fusion IO drive
> Memory: 32G
> Postgresql configuration:
>   max_connection = 800
>   shared_buffers = 2000MB
>   effective_cache_size = 14000MB
>   autovacuum = off
>
> --
> BR,
> Tony Wang
>


Re: [GENERAL] Server stops responding randomly for 5 minutes

2011-07-13 Thread Tony Wang
I think the logs may help. Have you checked them?

2011/7/13 Andrus 

> A few times per week the server randomly stops responding for approx. 5
> minutes. The affected user has to wait 5 minutes before the server responds,
> while other users can work normally at the same time.
>
> On Monday this happened at 12:16.
> I haven't noticed anything special in the PostgreSQL and Windows logs at
> that time.
>
> How can I fix this or find the reason for it?
>
> Andrus.
>
> Config:
>
> Win 2008 server  2 GB  RAM
> PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit
> 10-25 users manually entering sales order, e.q. low server usage
>
> Server is installed using setup program from postgresql.org directed site,
> postresql.conf file is not modified after installation.
>
> Database size: 2165 MB
>
> Biggest files in decreasing order of size:
>
> 1  firma1.bilkaib880 MB table
> 2  public.attachme   454 MB table
> 3  firma1.rid401 MB table
> 4  firma1.omrid  88 MB  table
> 5  firma1.omdok  79 MB  table
> 6  firma1.bilkaib_cr_idx 75 MB  index
> 7  firma1.bilkaib_cr_pattern_idx 74 MB  index
> 8  firma1.bilkaib_db_pattern_idx 74 MB  index
> 9  firma1.bilkaib_db_idx 74 MB  index
>10  firma1.klient 69 MB  table
>11  firma1.dok64 MB  table
>12  firma1.bilkaib_kuupaev_idx60 MB  index
>13  firma1.bilkaib_dokumnr_idx53 MB  index
>14  firma1.bilkaib_pkey   43 MB  index
>15  firma1.rid_rtellimus_idx  38 MB  index
>16  firma1.rid_toode_idx  25 MB  index
>17  firma1.rid_toode_pattern_idx  24 MB  index
>18  public.strings23 MB  table
>19  firma1.rid_inpdokumnr_idx 19 MB  index
>20  firma1.toode  17 MB  table
>21  firma1.rid_dokumnr_idx16 MB  index
>22  firma1.rid_pkey   16 MB  index
>23  firma1.summav 15 MB  table
>24  public.report 13 MB  table
>25  public.desktop9784 kBtable
>26  public.mailbox7128 kBtable
>27  public.localfil   5584 kBtable
>28  pg_toast.pg_toast_36145_index 5392 kBindex
> ...
>
>
>


Re: [GENERAL] Weird problem that enormous locks

2011-07-12 Thread Tony Wang
On Wed, Jul 13, 2011 at 08:40, Craig Ringer wrote:

> On 13/07/2011 12:52 AM, Tony Wang wrote:
>
>> Has anyone experienced this, or does anyone have suggestions for
>> researching/debugging it?
>
> Capture the contents of pg_catalog.pg_stat_activity whenever your cron
> job notices high lock counts. That'll give you some more information to work
> with.
>

Thanks, but the cron job query already joins the pg_stat_activity table.


>
> POST Newspapers
> 276 Onslow Rd, Shenton Park
> Ph: 08 9381 3088 Fax: 08 9388 2258
> ABN: 50 008 917 717
> http://www.postnewspapers.com.au/
>


[GENERAL] Weird problem that enormous locks

2011-07-12 Thread Tony Wang
Hi,

The configuration information is listed at the end.
I ran into this problem last weekend. The symptom was that the number of
database locks became enormous, up to 8.3k, and the database hung. About half
an hour to an hour later it recovered: the lock count dropped back to one or
two hundred, its average level. It happened every 5-8 hours.

I checked the logs but found nothing interesting. Deadlock messages appeared
several times a day, but not while the database was hanging. I had a cron job
running every minute to record the locks using the query below:

select pg_class.relname, pg_locks.mode, pg_locks.granted,
       pg_stat_activity.current_query, pg_stat_activity.query_start,
       pg_stat_activity.xact_start as transaction_start,
       age(now(), pg_stat_activity.query_start) as query_age,
       age(now(), pg_stat_activity.xact_start) as transaction_age,
       pg_stat_activity.procpid
  from pg_stat_activity, pg_locks
  left outer join pg_class on (pg_locks.relation = pg_class.oid)
 where pg_locks.pid = pg_stat_activity.procpid
   and substr(pg_class.relname, 1, 3) != 'pg_'
 order by query_start;

The only unusual thing I can find is that there were a lot of ExclusiveLocks,
whereas normally the locks are only AccessShareLock and RowExclusiveLock.
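During an incident, a narrower query can help separate waiters from holders; a sketch against the 8.4-era catalogs (procpid and current_query were renamed in later releases):

```sql
-- Show only lock requests that are still waiting, with the blocked query.
SELECT l.pid, l.mode, c.relname, a.current_query, a.query_start
  FROM pg_locks l
  LEFT JOIN pg_class c ON c.oid = l.relation
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE NOT l.granted
 ORDER BY a.query_start;
```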

After suffering from that for the whole weekend, I restarted PostgreSQL and my
service, and reduced the database load a bit by disabling some services; it
hasn't happened again so far.

The only causes I can think of are that someone was reindexing an index, which
is almost impossible, or a hardware problem, which is also unlikely.

Has anyone experienced this, or does anyone have suggestions for
researching/debugging it?

The configuration information:
System: Ubuntu server 10.04.2
Postgresql version: 8.4.8-0ubuntu0.10.04
CPU: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz (24 cores)
Disk: Fusion IO drive
Memory: 32G
Postgresql configuration:
  max_connections = 800
  shared_buffers = 2000MB
  effective_cache_size = 14000MB
  autovacuum = off

--
BR,
Tony Wang


[GENERAL] pgloader hangs with an invalid filename

2011-05-06 Thread Tony Capobianco
Has anyone experienced the behavior of pgloader hanging when the .conf
file has an incorrect filename?
When I execute with the correct filename, everything works just fine.
However, when I use an incorrect filename, I get the below:

PG_DB> pgloader -c hangs.pgload.conf -v
pgloader INFO Logger initialized
pgloader INFO Reformat path is
['/usr/share/python-support/pgloader/reformat']
pgloader INFO Will consider following sections:
pgloader INFO   emaillistuploaddata
emaillistuploaddata INFO emaillistuploaddata processing
pgloader INFO All threads are started, wait for them to
terminate
emaillistuploaddata ERROR[Errno 2] No such file or directory:
'mb2.up.txt.complete'
Exception in thread emaillistuploaddata:
Traceback (most recent call last):
  File "/usr/lib64/python2.4/threading.py", line 442, in __bootstrap
self.run()
  File "/usr/share/python-support/pgloader/pgloader/pgloader.py", line
840, in run
self.process()
  File "/usr/share/python-support/pgloader/pgloader/pgloader.py", line
1201, in process
self.data_import()
  File "/usr/share/python-support/pgloader/pgloader/pgloader.py", line
1220, in data_import
for offsets, line, columns in self.readlines():
  File "/usr/share/python-support/pgloader/pgloader/pgloader.py", line
1138, in readlines
for offsets, line, columns in self.reader.readlines():
  File "/usr/share/python-support/pgloader/pgloader/textreader.py", line
86, in readlines
check_count = False)
  File "/usr/share/python-support/pgloader/pgloader/reader.py", line
192, in __init__
raise PGLoader_Error, error
PGLoader_Error: [Errno 2] No such file or directory:
'mb2.up.txt.complete'

Here is my hangs.pgload.conf file (I named it 'hangs' because I
intentionally inserted an invalid filename):

[pgsql]
host = localhost
port = 5432
base = PG_DB
user = pg_user
pass = None

log_file= /tmp/.pgload.log
log_min_messages= DEBUG
client_min_messages = WARNING
lc_messages = C

;client_encoding = 'utf-8'
client_encoding = 'latin1'
copy_every  = 10
commit_every= 10

null = ""
empty_string = "\ "


max_parallel_sections = 10

[emaillistuploaddata]
table   = emaillistuploaddata
format  = text
filename= mb2.up.txt.complete
field_sep   = |
skip_head_lines = 0
columns = listid,data

We would expect pgloader to report an error and exit if the source
file could not be found.

Thanks.




Re: [GENERAL] The first dedicated PostgreSQL forum

2010-11-17 Thread Tony Caduto

On 11/15/2010 5:53 PM, Lee Hachadoorian wrote:

If anyone's interested, I've started accessing the postgres list through
gmane.org (along with several other mailing lists I subscribe to). It's
gives you the choice of reading the list as a threaded archive, a blog,
or through an NNTP newsreader or an RSS feed. Everyone chooses their
preferred interface, the community is not fractured by interface preference.



Honestly those options suck.
The mailing lists etc. are fine, but they are kind of old school; people
coming from other databases expect a web-based forum, plain and simple.  To
attract more users the forums are a GREAT idea.

Stop this ridiculous complaining about interface fracture etc.

I can tell you for SURE that many people who are not OLD SCHOOL hate
mailing lists.  A web-based forum gives everyday users more of a chance of
interacting with the community.

It would be a good idea to stop the bickering and just implement the forums.

Sorry if people don't like my honest answer :-)



Re: [GENERAL] Advice needed on application/database authentication/authorization/auditing model

2010-10-26 Thread Tony Cebzanov
On 10/23/10 11:01 AM, Craig Ringer wrote:
> Yep. As for not explicitly mentioning "lower" roles when granting a 
> higher role (ie "admin" isn't also a "user") - role inheritance.

I knew about role inheritance, I just didn't know about the
pg_has_role() function for determining if a user has a role.  That's
helpful, but I really don't want to be hitting the database with a
pg_has_role() call for every time I want to check if a user should have
access to a certain page or function in my application.

Normally, when the user logs in, I'd cache their user info, and any
roles they have, either directly or indirectly.  But how can I do this
if I'm not directly making administrators members of the other groups
they inherit the rights of?  In other words, is there a convenience
function or view I can use to get a list of all roles the user has
access to, both directly or indirectly?
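For what it's worth, the catalogs can answer this without walking the membership tables by hand; a sketch using pg_has_role (the role name 'alice' is a placeholder), which could be run once at login and cached:

```sql
-- Every role that 'alice' belongs to, directly or through inheritance
SELECT r.rolname
  FROM pg_roles r
 WHERE pg_has_role('alice', r.oid, 'member');
```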




Re: [GENERAL] Advice needed on application/database authentication/authorization/auditing model

2010-10-22 Thread Tony Cebzanov
Hi Dmitriy,

On 10/21/10 4:21 PM, Dmitriy Igrishin wrote:
> IMO, you are trying to reinvent the wheel. Although, you may do it just for
> fun. :-)

Surely I am, but I think it made sense at the time.  It doesn't make as
much sense now that I need to audit every insert/update/delete in the
database.

> Why not just create "groups" via CREATE ROLE ... and grant these
> roles to the "users" (created via CREATE USER or CREATE ROLE ... LOGIN)?

The reason I shied away from this initially was the overhead of having
to maintain user info in two places (the pg_catalog schema for postgres
users, and in my application schema, with the user's real name,
application preferences, etc.)  It also seemed like the role information
wasn't very accessible in the system catalogs -- I had noticed that the
pg_group view was deprecated, and the query to get group information out
of the pg_auth_members and pg_roles tables started to look very ugly,
when I could just do a quick "is the user an administrator" check via a
boolean flag in my app user's table.

With my new requirements for auditing, using the database's roles makes
more sense, but I still see some problems with it, even if I can solve
the connection pooling problem by using persistent connections as you
suggest.

For one thing, in this app, all higher permissions include the lower
permissions -- all administrators are auditors and regular users, and
all auditors are regular users.  So, my normal instinct would be to set
it up like this:

GRANT g_user TO g_auditor WITH ADMIN OPTION;
GRANT g_auditor TO g_admin WITH ADMIN OPTION;

Then, in theory, I could grant administrators the g_admin group,
auditors the g_auditor group, etc. and they could do all the things the
lower groups can.  BUT, in my app, to check for access to audit
functions, I can't do a simple query to see if the user is in the
"g_auditor" group, because administrators aren't explicitly granted this
group -- they get those permissions implicitly, but how do I know this
from my application?  Is there some kind of query I can do to get back
all the groups a role is a member of?
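On 8.4 or later, one way to expand implicit memberships from the application side is a recursive walk over pg_auth_members; a sketch (the starting role name is a placeholder):

```sql
-- All roles reachable from 'some_admin', including itself
WITH RECURSIVE membership AS (
    SELECT oid FROM pg_roles WHERE rolname = 'some_admin'
    UNION
    SELECT m.roleid
      FROM pg_auth_members m
      JOIN membership ms ON m.member = ms.oid
)
SELECT r.rolname
  FROM pg_roles r
  JOIN membership ms ON r.oid = ms.oid;
```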



[GENERAL] Advice needed on application/database authentication/authorization/auditing model

2010-10-21 Thread Tony Cebzanov
I have a web application with a Postgres backend.  In my initial
prototype, I decided not to have a Postgres database user created for
each application user, opting instead to use my own users table.
Authentication of application users is done via PAM, so no password is
necessary in my users table -- I just let PAM do its thing, and if the
user is authenticated, I check for a record in my application's users
table to see if they're authorized to use the app, along with what
privileges they have, e.g.:

CREATE TYPE USER_ROLE AS ENUM ('User', 'Auditor', 'Administrator');

CREATE TABLE users (
id SERIAL PRIMARY KEY,
username TEXT UNIQUE NOT NULL,
displayname TEXT NOT NULL,
role USER_ROLE NOT NULL DEFAULT 'User'
);

Now that this is moving beyond a prototype stage, I need to tighten up
the authentication/authorization/access control model.  In particular, I
need to add some basic audit trail functionality.  I found a couple of
projects that help make auditing easy (tablelog and EMaj being the most
promising) but they both rely on the database users mapping 1:1 to
application users, which is currently not the case -- right now I've
only got one database user that's used for all of the pooled
connections, so the audit logs are showing that user instead of my
application user.

So, I'm wondering what others have done in similar situations.  It seems
to me like the database connection pooling means I probably need the
connections to be made with a privileged "database superuser" account
that has permission to "SET ROLE" to each and every application user,
then "RESET ROLE" when it's done.  That's a bit of a pain, but doable.
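Per request on a pooled connection, that dance might look like the following; app_alice is a hypothetical role, and the pool account needs the role granted to it (GRANT app_alice TO pool_user) for SET ROLE to succeed:

```sql
SET ROLE app_alice;   -- audit triggers now record current_user = 'app_alice'
-- ... run the request's statements ...
RESET ROLE;           -- restore the pool account before releasing the connection
```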

Then there's the issue of application roles vs. database roles.  I
wanted to have three roles in this application:  regular users, auditors
(who can do everything regular users can, plus access audit tables to
view audit log tables and potentially restore data from them) and
administrators (who can do everything in the application, but shouldn't
be Postgres superusers.)  Unfortunately, I can't figure out a clever way
to do this mapping, especially because Postgres doesn't allow users to
refer to system tables like pg_authid to do an explicit mapping of app
roles to database roles.

So, does anyone have any suggestions here?  I feel like there's got to
be a way to do this, but I can't find anything relevant in the list
archives.

Thanks.
-Tony



[GENERAL] 9.0 pg_database datconfig ?

2010-09-21 Thread Tony Caduto

 Hi,
Just looking around  9.0 and noticed pg_database is missing the 
datconfig field which stored default session info for the database.
Where is this stored now?  I looked in the release notes, but no mention 
of datconfig.
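As far as I can tell, 9.0 moved per-database (and per-role) settings out of pg_database into the shared catalog pg_db_role_setting; a sketch for inspecting it:

```sql
-- Rows with setrole = 0 are plain per-database settings (the old datconfig)
SELECT d.datname, s.setrole, s.setconfig
  FROM pg_db_role_setting s
  LEFT JOIN pg_database d ON d.oid = s.setdatabase;
```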


Thanks,

Tony



[GENERAL] SQL Programming Question

2010-09-10 Thread tony
Coming from 25 years of programming applications based on dBASE and FoxPro
tables, I have a question about how to deal with a programming task.

I have a situation where I receive a file with transactions that have a
unique key from a vendor. These transactions should only be imported into
my system once, but the vendor system will occasionally resend a
transaction by mistake.

The way I am currently handling this with Microsoft ADO and FoxPro files
is to open a table with an index on the vendor key and seek on the key. If
there is no match I add it, if there is a match I put it in an exception
file to be manually checked.

Using PostgreSQL I can't open a table and do seeks against an index. I
could do a select against the database and see if 0 records are returned,
but that seems to take more time than doing a seek on an index. Is there a
more SQL friendly way of handling this task?
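One common SQL idiom, assuming a UNIQUE constraint on the vendor key (table and column names here are hypothetical): attempt the insert only when the key is absent, and route the row to the exception table when the statement reports zero rows inserted.

```sql
-- Relies on a unique index: CREATE UNIQUE INDEX ON transactions (vendor_key);
INSERT INTO transactions (vendor_key, amount)
SELECT '12345', 100.00
 WHERE NOT EXISTS (SELECT 1 FROM transactions WHERE vendor_key = '12345');
-- If the command tag reports 0 rows inserted, the key already existed:
-- write that row to the exceptions table instead.
```

Much later PostgreSQL versions (9.5+) reduce this to a single INSERT ... ON CONFLICT DO NOTHING.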




[GENERAL] Need help doing a CSV import

2010-07-14 Thread tony
I am in the process of moving a FoxPro based system to PostgreSQL.

We have several tables that have memo fields which contain carriage
returns and line feeds that I need to preserve. I thought if I converted
these into the appropriate \r and \n codes that they would be imported as
carriage returns and line feeds, but instead they are stored in the
database as \r and \n.

The command I am using is:

copy main.company
  from E'f:\\company.csv'
  DELIMITER '|'
  CSV HEADER QUOTE '"' ESCAPE E'\\';

The command imports all of the data, but I want the escaped codes to be
expanded to their actual values.

Can anyone assist me with this?
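For context: CSV mode deliberately does not interpret backslash escapes (in CSV, ESCAPE only protects the QUOTE character), which is why the literal \r and \n survive into the table. One workaround, assuming the memo column is named memo (a hypothetical name), is to expand them after the load:

```sql
-- Expand literal backslash escapes left behind by a CSV import
UPDATE main.company
   SET memo = replace(replace(memo, E'\\r', E'\r'), E'\\n', E'\n');
```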




Re: [GENERAL] Revoking CREATE TABLE

2010-03-24 Thread Tony Webb
Hi Tom,

Thanks for your help and the hint (off-line) to use the \dn+ command.
You've hit the nail on the head sir!

\dn+
WARNING:  nonstandard use of \\ in a string literal at character 281
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
   List of
schemas
Name|  Owner   |Access
privileges |   Description
+--+
--+--
 information_schema | postgres | {postgres=UC/postgres,=U/postgres}
|
 pg_catalog | postgres | {postgres=UC/postgres,=U/postgres}
| system catalog schema
 pg_toast   | postgres |
| reserved schema for TOAST tables
 pg_toast_temp_1| postgres |
|
 public | postgres |
{postgres=UC/postgres,=U/postgres,gb_ro=UC/postgres,gb_owner=C/postgres}
| standard public schema
(5 rows)

I'd not used that before.

It shows that the gb_ro user also had extra privs granted to it for the
public schema too (unless I'm reading it wrong). I've revoked them
('all' and 'create') and it works fine now!

There should be 2 additional nologin roles on the db - gb_role and
gb_role_ro - all object privs should be via them, gb_ro should not have
its own privileges.

Thanks again, you're a star!

Pif



--
 The Wellcome Trust Sanger Institute is operated by Genome Research
 Limited, a charity registered in England with number 1021457 and a
 company registered in England with number 2742969, whose registered
 office is 215 Euston Road, London, NW1 2BE.



Re: [GENERAL] Revoking CREATE TABLE

2010-03-24 Thread Tony Webb
Thanks Tom,

I think I'm still doing something wrong.

As a superuser I run:
#revoke create on schema public from public;
REVOKE

As the read only user straight after running the above:

create table barney2(col1 integer);
CREATE TABLE
\d barney2
Table "public.barney2"
 Column |  Type   | Modifiers
+-+---
 col1   | integer |

What should I try next? Presumably the privilege is being inherited from
another role?
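One way to see where the privilege is coming from, without psql's \dn+ command, is to read the schema's ACL directly; a sketch:

```sql
-- Raw ACL on the public schema; an entry like "=UC/postgres" means
-- PUBLIC holds USAGE and CREATE, and "gb_ro=UC/postgres" means the
-- gb_ro role holds them in its own right.
SELECT nspname, nspacl FROM pg_namespace WHERE nspname = 'public';
```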

Cheers

Pif



--
 The Wellcome Trust Sanger Institute is operated by Genome Research
 Limited, a charity registered in England with number 1021457 and a
 company registered in England with number 2742969, whose registered
 office is 215 Euston Road, London, NW1 2BE.



[GENERAL] Revoking CREATE TABLE

2010-03-24 Thread Tony Webb
Hi,
 
I would like a setup with the following:
 
Three users - one, called OWNER, that owns the tables and can drop,
alter and change data in the tables; another called USER that can edit
data in the tables created by the owner but cannot create new tables or
drop any tables and a third user called RO which has read only access to
OWNER's tables but cannot change any data or create new tables etc.
 
I can grant table privileges to the USER and RO but how do I stop these
two users from creating  new tables etc?
 
I'm creating the tables from a script so I don't think I can easily
create a schema so am relying on the public schema.
 
I suspect I'm missing something fundamental here :-\
 
All help gratefully received.
 
Thanks
 
Pif



-- 
 The Wellcome Trust Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 



Re: [GENERAL] How to dump JUST procedures/funnctions?

2010-03-22 Thread Tony Wasson
On Sat, Mar 20, 2010 at 11:09 PM, Carlo Stonebanks <
stonec.regis...@sympatico.ca> wrote:

> Is pg_get_functiondef an 8.4 function? It appears to be.
>
> I don't see it in the 8.3 documentation and the servers in question are all
> 8.3.
>
> Any alternatives for 8.3? pg_proc has the code body, but not the function
> declaration, etc.
>
>
I've attached a simple perl script to dump all functions per schema from a
database. I won't claim this is perfect, but it does the trick for me.

Works with 8.3. If you have 2 functions called public.foo, it will put them
in public/foo.sql with CREATE OR REPLACE FUNCTION formatting around it.

Hope this is useful to others.
Tony Wasson


dump-all-functions.pl
Description: Binary data



Re: [GENERAL] Visual DATA editor for PostgreSQL?

2009-12-31 Thread Tony Caduto

Dmitry Koterov wrote:

Hello.

Is there a GUI utility to visually edit Postgres DATA (not a database 
schema!), which allows at least:
- insert/update rows using screen windowed forms (possibly ugly 
auto-generated forms, but - still forms)
- insert foreign key references by selecting them from a list (not by 
typing the keys manually)

- work with multi-line text fields (textarea) for TEXT columns

There are a lot of DB development tools around (e.g. I like EMS
PostgreSQL Manager). But a developer tool is handy for editing a database's
STRUCTURE; when we need to modify its DATA quickly, these
tools are not very useful.
Lightning Admin has a form view when editing data and shows text fields 
as multi line in the form view and in the data grid itself.


Later,


Tony



pgsql-general@postgresql.org

2009-12-02 Thread Tony Cebzanov
I have a PL/Perl stored procedure that's worked fine on my development
boxes but is suddenly failing when installed on a test machine.  Both my
development boxes and the new machine are RHEL AS5.  The differences: the
test machine is 64-bit and runs Postgres 8.1.18, while my development
machines are 32-bit and run Postgres 8.2.7.

The error in the postgresql log is:

ERROR:  error from Perl function: Undefined subroutine
&main::spi_prepare called at line 2.
LOG:  unexpected EOF on client connection

My first thought was that the postgresql-pl package wasn't installed,
but it appears to be there.  Does anyone else know what sort of
installation/configuration error could cause a failure like this?

Thanks.
-Tony




[GENERAL] Big Delete Consistently Causes a Crash

2009-07-01 Thread Tony Webb

Hi all,

I have an 8.3 cluster running under a VM. It seems fine for most activities
(a bit slow but error free), but if the developer issues a delete
statement it consistently kills the database.


If the database is running in archive mode then it kills the archiver, 
else it kills the client session. Sample info is below.


The delete is a nasty one, and will potentially delete almost half of 
the data in the database(!) but it crashes consistently after about 1 
minute. Small deletes work fine.


I can understand it taking ages and giving me various warnings as it is 
an unfriendly statement but it shouldn't die.


Any ideas? Is this a bug perhaps? I've tried restarting the cluster with
various memory parameters set, from defaults to fairly greedy settings,
and the behaviour is consistent.


All suggestions gratefully received.


Log entry below:

> 2009-06-25 03:33:34 BST LOCATION: exec_execute_message, postgres.c:1947
> 2009-06-25 03:34:26 BST LOG: 0: server process (PID 8379) was
> terminated by signal 9: Killed
> 2009-06-25 03:34:26 BST LOCATION: LogChildExit, postmaster.c:2529
> 2009-06-25 03:34:26 BST LOG: 0: terminating any other active
> server processes
> 2009-06-25 03:34:26 BST LOCATION: HandleChildCrash, postmaster.c:2374
> 2009-06-25 03:34:26 BST WARNING: 57P02: terminating connection
> because of crash of another server process
> 2009-06-25 03:34:26 BST DETAIL: The postmaster has commanded this
> server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared memory.
> 2009-06-25 03:34:26 BST HINT: In a moment you should be able to
> reconnect to the database and repeat your command.
> 2009-06-25 03:34:26 BST LOCATION: quickdie, postgres.c:2454
> 2009-06-25 03:34:26 BST FATAL: 57P03: the database system is in
> recovery mode
> 2009-06-25 03:34:26 BST LOCATION: ProcessStartupPacket, postmaster.c:1648
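A note on the log: termination by signal 9 usually points at the Linux OOM killer rather than a PostgreSQL bug; a huge single-statement delete can balloon memory, for example when AFTER triggers or foreign keys queue a per-row event for every deleted row. A common workaround is to delete in bounded batches (table, column, and cutoff below are hypothetical):

```sql
-- Run repeatedly until it reports DELETE 0
DELETE FROM big_table
 WHERE ctid IN (
     SELECT ctid FROM big_table
      WHERE created < DATE '2008-01-01'
      LIMIT 50000);
```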

Cheers

Ton



--
The Wellcome Trust Sanger Institute is operated by Genome Research 
Limited, a charity registered in England with number 1021457 and a 
company registered in England with number 2742969, whose registered 
office is 215 Euston Road, London, NW1 2BE. 




Re: [GENERAL] COPY questions

2009-02-18 Thread Tony Caduto

Bill Todd wrote:
Thanks for the suggestion but pgloader appears to be a Linux only 
solution and my environment is Windows. The other problem is that 
there is no documentation that I could find (other than a PDF made 
from slides).


Bill

Bill,
pgloader is a Python app; it should work on win32 as well.


Later,

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of Lightning Admin for PostgreSQL




Re: [GENERAL] PGSQL or other DB?

2009-01-30 Thread Tony Caduto

durumdara wrote:


 - quick (re)connect - because mod_python basically does not store
database connections persistently
 


You should consider using a Python connection pool with something like 
Pylons (http://pylonshq.com), I use dbutils:

http://www.webwareforpython.org/DBUtils/Docs/UsersGuide.html

You can scale PostgreSQL and Python web apps really well with that as 
long as you are
not using CGI.  I use Pylons combined with mod_wsgi, but you can use it 
with mod_python as well.
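The pooling idea is independent of any particular library; a minimal sketch of the pattern DBUtils implements, using only the standard library (the zero-argument connection factory is an assumption — with a real driver it would be something like a psycopg connect call):

```python
import queue


class SimpleConnectionPool:
    """Block-on-exhaustion pool: create `size` connections up front
    and hand them out in FIFO order.  `connect` is any zero-argument
    factory returning a new connection object."""

    def __init__(self, connect, size=5):
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):
            self._idle.put(connect())

    def acquire(self, timeout=None):
        # Blocks until a connection is free (raises queue.Empty on timeout).
        return self._idle.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for reuse instead of closing it.
        self._idle.put(conn)
```

Each web request acquires, uses, and releases; the expensive connect happens only once at startup.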



Tony Caduto
AM Software Design
Home of Lightning Admin for PostgreSQL
http://www.amsoftwaredesign.com




Re: [GENERAL] PGSQL or other DB?

2009-01-30 Thread Tony Caduto

durumdara wrote:


Another important thing that I don't understand (from what I saw) is that
PostgreSQL stores the databases in one inseparable file set,
in a directory named data.
In other databases, like DBISAM, Firebird, MySQL, the databases are
separated into different directories/files.


This "one datadir" seems not too good for us. We used DBISAM
for our clients, and many times when we got some filesystem error, we
could simply recover the tables - from the files.
When we want to back up or restore one database, we can do it in two
ways: a.) archive all files b.) make an SQL dump of the database.




PostgreSQL uses a database cluster.  In the data directory each database
goes into a directory identified by the database OID, and in this
directory the database resides in 2 GB chunks (as far as I remember).  You
can easily back up the cluster at the file system level; you just have to
make sure you stop the postmaster, and then you can back up the entire
data dir or an individual database dir.  You have to restore this to the
same version of PG though.
It's the same kind of deal with Firebird: you can't back up a running
database with the file system, and even Firebird must be stopped in order
to do a file system copy of the database file.


Firebird also stores all its transaction data in the same file, so you
end up with HUGE files unless you do a backup and restore to shrink them
down.  You don't have this problem with PostgreSQL, as it stores that
info in the WAL.


PostgreSQL will give you the best solution for your project hands down;
just give it a chance and don't worry about what you did with Firebird
or MySQL.


I have never had a PostgreSQL database or dump file become corrupt, but 
on Firebird it happens fairly often and it's part of the reason why I 
eventually dumped it and moved to PostgreSQL.


Hope that helps you out some.

Tony Caduto
AM Software Design
Home of Lightning Admin for PostgreSQL
http://www.amsoftwaredesign.com



Re: [GENERAL] System table documentation

2009-01-28 Thread Tony Caduto

Bill Todd wrote:
Where can I find documentation for the system tables? I have not found 
anything in the 8.3.1 documentation. Thanks.


Bill


Hi Bill,

Good to see a Delphi guy here :-)

http://www.postgresql.org/docs/8.3/interactive/catalogs.html


Later,

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for PostgreSQL



[GENERAL] pg_restore question (-T and -t)

2008-12-31 Thread Tony Caduto

Hi,
does anyone know if you can pass multiple
-T or -t   (restore named trigger, restore named table) switches?

In the docs for pg_restore it does not specify if it will accept more 
than one, but in the pg_dump docs the -n and -t switches allow multiples.
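Regardless of whether repeated -T/-t is accepted on a given version, a list file sidesteps the question entirely; a sketch (archive and database names are placeholders):

```shell
# Emit the archive's table of contents, one object per line
pg_restore -l backup.dump > objects.list
# Edit objects.list: delete or comment out (';') the entries you don't want
pg_restore -L objects.list -d mydb backup.dump
```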


Thanks,

tony



[GENERAL] q query about a bug.

2008-12-09 Thread Tony Linden
Dear all,

We have a developer building an application for us using a Postgres
database. One of its features is to export data into an XML file, which can
then be emailed or otherwise transferred, and later imported back into the
database. The contents of certain tables are exported into this XML file,
including all of the column data for each table. We have checked the XML
file and can verify that all the columns of each table have been exported.

The problem is that when the import takes place, not all of the column data
appears to be imported. When we asked the developer about this, his reply
was:

"There is a known bug in PostgreSQL regarding the number of columns that
can be imported."

Is this a fact, and where can I find more information on it? Please would
you comment?

Many thanks

Regards

Tony Linden

Database Administrator

Catholic Relief Services

Email: [EMAIL PROTECTED]

Cell   : 0765419357

Fax   :  0865007123

 



[GENERAL] Libpq.dll lite is here :-)

2008-11-17 Thread Tony Caduto
Finally got everything to compile with MinGW.  Thanks to everyone who
helped me get it going.


Here are the download links:

Libpq.dll 8.3.5.8322 with SSL No Zlib (includes the latest openssl dlls 
0.9.8i also built with MinGW)


http://www.milwaukeesoft.com/libpq_mingw_ssl.zip

If you need a really lite deployment there is also:

http://www.milwaukeesoft.com/libpq_mingw_noSSL.zip  (just libpq.dll 
8.3.5.8322 with no other dependencies)


These also have no special MS VC++ runtime requirements, just the
standard MSVCRT.DLL, which means they all work
perfectly on Wine.  The current libpq.dll won't work on Wine because of
the VC++ runtime.


I would like to maintain these for future PostgreSQL releases and if 
anyone is interested in helping or has a suggestion on where to host 
them permanently please let me know.


Later,

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com



Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Tony Caduto

Hiroshi Saito wrote:

Hi.

That is very strange.

I checked again that it builds with VC9:
http://winpg.jp/~saito/pg_work/WIN32_BUILD_INF/LOG_LIBPQ_WIN32MAK.txt
Furthermore, even psql was checked:
http://winpg.jp/~saito/pg_work/WIN32_BUILD_INF/psql_win32_pg83.mak

Please copy psql_win32_pg83 to win32.mak under src\bin\psql:
http://winpg.jp/~saito/pg_work/WIN32_BUILD_INF/LOG_PSQL_WIN32MAK.txt

However, it will be necessary to investigate what is going wrong in
your environment.

One suggestion:

1. nmake -f win32.mak clean

Regards,
Hiroshi Saito

-



Hi,

I will give it another try tomorrow.  It could be something related to
the fact that I am running on XP 64-bit.  I will try to compile it on a
VM with 32-bit XP, and also try to build it with MinGW.  I did try the
clean and it didn't make any difference.  It compiles fine with the
/MD option; I even got it to compile against the latest OpenSSL, though
I did have to comment out one little line.


Thanks,

Tony

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Tony Caduto

Dann Corbit wrote:

Change from /MD to /MT and it will not use the runtime DLL.
Look under: Properties -> Configuration Properties -> C/C++ -> Code
Generation -> Runtime Library


  


Hi Dann,

I changed that in the win32.mak file and now it gives this error:

   link.exe @C:\DOCUME~1\20659\LOCALS~1\Temp\nmA6.tmp
  Creating library .\Release\libpqdll.lib and object .\Release\libpqdll.exp
LIBCMT.lib(dosmap.obj) : error LNK2005: __dosmaperr already defined in 
libpq.lib(win32error.obj)
.\Release\libpq.dll : fatal error LNK1169: one or more multiply defined 
symbols found
NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual 
Studio 9.0\VC\BIN\link.exe"' : return code '0x491'

Stop.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Tony Caduto

Dann Corbit wrote:



How much disk space do you have?

I guess that you are running out of space.

During the link, watch the available disk space.

  


Hi Dann,
Thanks for the reply :-)

I actually got it working.  Now the only problem is it still has a
dependency on

MSVCR90.DLL

Does anyone know if this can be done with MinGW so I can eliminate the
MSVCR90.DLL dependency?
I have MinGW installed, but it is even more foreign to me than VS :-)
(I mostly use Borland CodeGear tools)


Thanks,

Tony

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Tony Caduto

Hi,
Trying to get a libpq.dll that only has SSL deps using these instructions:

http://www.postgresql.org/docs/8.3/interactive/install-win32-libpq.html

It compiles a bit then dies at:

C:\postgresql-8.3.5\src\interfaces\libpq>nmake /f win32.mak

Microsoft (R) Program Maintenance Utility Version 9.00.30729.01
Copyright (C) Microsoft Corporation.  All rights reserved.

Building the Win32 static library...

Using default OpenSSL Include directory: C:\OpenSSL\include
Using default OpenSSL Library directory: C:\OpenSSL\lib\VC
Using default Kerberos Include directory: C:\kfw-2.6.5\inc
Using default Kerberos Library directory: C:\kfw-2.6.5\lib\i386
   link.exe -lib @C:\DOCUME~1\20659\LOCALS~1\Temp\nm1B.tmp
.\Release\win32.obj : fatal error LNK1107: invalid or corrupt file: 
cannot read at 0xB39
NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual 
Studio 9.0\VC\BIN\link.exe"' : return code '0x453'

Stop.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] compiling libpq.dll with Borland C++, is it possible?

2008-11-17 Thread Tony Caduto

Hi,
I am trying to compile my own copy of libpq.dll using bcc32.exe; the
docs say it is possible, but I get an error when it tries to compile
dirent.c.
Has anyone been able to do this?



C:\postgresql-8.3.5\src\interfaces\libpq>make -N -DCFG=Release /f bcc32.mak
MAKE Version 5.3  Copyright (c) 1987, 2008 CodeGear
Building the Win32 DLL and Static Library...

Configuration "Release"

   bcc32.exe @MAKE0014.@@@
Borland C++ 5.5.1 for Win32 Copyright (c) 1993, 2000 Borland
..\..\port\dirent.c:
Error E2451 ..\..\port\dirent.c 35: Undefined symbol 
'INVALID_FILE_ATTRIBUTES' in function opendir
Error E2451 ..\..\port\dirent.c 52: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 53: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 59: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 60: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 60: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 61: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 61: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 62: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 64: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 65: Undefined symbol 'handle' in 
function opendir
Error E2451 ..\..\port\dirent.c 66: Undefined symbol 'ret' in function 
opendir
Error E2451 ..\..\port\dirent.c 67: Undefined symbol 'ret' in function 
opendir
Error E2451 ..\..\port\dirent.c 77: Undefined symbol 'handle' in 
function readdir
Error E2451 ..\..\port\dirent.c 79: Undefined symbol 'handle' in 
function readdir
Error E2451 ..\..\port\dirent.c 79: Undefined symbol 'dirname' in 
function readdir
Error E2451 ..\..\port\dirent.c 80: Undefined symbol 'handle' in 
function readdir
Error E2451 ..\..\port\dirent.c 88: Undefined symbol 'handle' in 
function readdir
Error E2451 ..\..\port\dirent.c 100: Undefined symbol 'ret' in function 
readdir
Error E2451 ..\..\port\dirent.c 102: Undefined symbol 'ret' in function 
readdir
Error E2451 ..\..\port\dirent.c 102: Undefined symbol 'ret' in function 
readdir
Error E2451 ..\..\port\dirent.c 103: Undefined symbol 'ret' in function 
readdir
Error E2451 ..\..\port\dirent.c 109: Undefined symbol 'handle' in 
function closedir
Error E2451 ..\..\port\dirent.c 110: Undefined symbol 'handle' in 
function closedir
Error E2451 ..\..\port\dirent.c 111: Undefined symbol 'dirname' in 
function closedir
Error E2228 ..\..\port\dirent.c 111: Too many error or warning messages 
in function closedir

*** 26 errors in Compile ***

** error 1 ** deleting ".\Release\dirent.obj"

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.3 libpq.dll not working on some versions of windows

2008-11-16 Thread Tony Caduto

Glyn Astill wrote:


Pretty sure I've used most 8.3.x versions here on both sp2 and 3.

How have you checked you have all the dependencies? (I like depends.exe)

http://www.dependencywalker.com/




  
Hmm, the problem seems to be that MSVCR71.DLL has a problem with some
versions of Kernel32.DLL.

Dependency Walker reports this:
Error: At least one module has an unresolved import due to a missing
export function in an implicitly dependent module.

I have all the required files in the same dir as my exe.

I did this test on the latest version of WINE (libpq.dll always worked
in WINE before), and I was getting the exact same behavior as on the
PCs at work.  WINE is not a good test for this, I know, but it's all I
have at the moment.


I will have to try this on the PCs at work(on Monday) that are having 
the issue.  I know one of them is a really old XP (6 years) that was 
just recently updated to SP3. 


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] 8.3 libpq.dll not working on some versions of windows

2008-11-15 Thread Tony Caduto

Hi,
We have been running into issues where the 8.3.x versions of libpq.dll
will not load on certain versions of Windows and WINE (it does not load
at all on WINE).

It seems to be hit and miss on Windows XP; it mostly seems to affect SP3
and some SP2 installs of XP.


I have only been able to get around this by installing a much older
version of libpq.dll.

I did have all the dependencies installed along with the DLL; it just
plain refuses to load.  I also checked that there were no rogue copies
of the files in system32.

Thanks,

Tony

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrading Postgres question

2008-11-10 Thread Tony Fernandez
Thanks Joao,

That is what I have done, but I wanted to see if there were any other
known potential risks.
I had underestimated the value of keeping an extra backup to fall back
on if ever needed, so I will consider it, though not on my live servers.

Regards,

Tony Fernandez

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joao Ferreira
gmail
Sent: Monday, November 10, 2008 10:36 AM
To: pgsql-general
Subject: Re: [GENERAL] Upgrading Postgres question

On Wed, 2008-11-05 at 15:08 -0600, Tony Fernandez wrote:
> Hello all,
> 
>  
> 
> I am in the process of updating my DB on Postgres 8.1.11 to 8.3.4.  I
> also use Slony 1.2.14 for replication.
> 
>  
> 
> Is there a safe path on how to accomplish this? Please advise on what
> steps I will need to consider.  Bear in mind that I am planning to
> skip from Postgres 8.1.x to 8.3.x and I use Slony to replicate my
> production DB into two more boxes simultaneously.
> 

I don't know about Slony,


but one way to do this is (roughly):

a) dump all contents of your pg with pg_dumpall

b) install pg8.3 on a test server

c) restore the dump into the test server (su postgres; psql -f my_dump);

d) if all went well you can purge the 8.1 database from the disk, or you
can skip to the next step and install the new db in some alternative
directory, in case you need to revert to pg8.1 later.

e) install 8.3 on the main server (maybe you will have to use initdb)

f) restore the dump into the new 8.3 on the production server.

This is how I do it :) Hope it fits your needs
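
The steps above, as a sketch (paths, user names, and the data directory
are placeholders; a reachable server is assumed at each step):

```shell
# a) on the 8.1 server: dump every database plus roles into one script
pg_dumpall -U postgres > /tmp/pg81_dump.sql

# b)-c) on the 8.3 test server: create a fresh cluster, start it,
# then restore the dump
initdb -D /var/lib/pgsql/8.3/data
pg_ctl -D /var/lib/pgsql/8.3/data start
psql -U postgres -f /tmp/pg81_dump.sql postgres
```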

I also noticed some SQL parsing changes (like the need to explicitly
cast from text to numeric)...

You should try all your apps against the test server before purging the
old db.

The main issue here is that, from 8.1 to 8.3, the underlying database
file format has changed... so you need the dump/restore.


You should read this:

http://www.postgresql.org/docs/current/static/install-upgrading.html


Joao

>  
> 
> Thanks,
> 
>  
> 
> Tony Fernandez
> 
>  
> 
> 

> 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Upgrading Postgres question

2008-11-10 Thread Tony Fernandez
Hello all,

 

I am in the process of updating my DB on Postgres 8.1.11 to 8.3.4.  I
also use Slony 1.2.14 for replication.

 

Is there a safe path on how to accomplish this? Please advise on what
steps I will need to consider.  Bear in mind that I am planning to skip
from Postgres 8.1.x to 8.3.x and I use Slony to replicate my production
DB into two more boxes simultaneously.

 

Thanks,

 

Tony Fernandez

 





Re: [GENERAL] How to know the password for the user 'postgres'

2008-10-28 Thread Tony Caduto

Shashank Sahni wrote:


when I posted the question on the dspace mailing list, one of the guys
replied with this solution:

createuser -h localhost -U postgres -d -A -P dspace

but he said that to execute the above command I must know the password
for the user postgres..

Edit the pg_hba.conf file, add an entry for the PC you are doing your
admin from, and set its method to trust.
When set to trust you won't need a password; then use the admin tool of
your choice to change the postgres password to whatever you want.
Afterwards, remember to set it back to md5 (or whatever it was prior to
the change).
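
As a sketch, such a temporary entry might look like this (the address
below is a placeholder for the PC you administer from):

```
# TYPE  DATABASE  USER  CIDR-ADDRESS      METHOD
host    all       all   192.168.1.50/32   trust
```

Remember to reload the server (e.g. pg_ctl reload) after editing
pg_hba.conf for the change to take effect.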


Later,

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] a LEFT JOIN problem

2008-10-25 Thread Tony Wasson
On Sat, Oct 25, 2008 at 2:11 PM, Thomas <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have the following tables:
>
> Product(id, title, price)
> Item(id, product_id, order_id, quantity)
> Order(id, user_id, amount, paid)
> User(id, name)
>
> What I want to achieve is a query on a specific Product based on its
> title. If the product has at least 1 order in which it appears, then
> return the Product and Order details; if the product has no paid order
> associated, then only return the Product fields.
>
> I have tried the following query:
> --
> SELECT products.*, paid FROM "products"
> LEFT OUTER JOIN items ON products.id = items.product_id
> LEFT OUTER JOIN orders ON items.order_id = orders.id
> LEFT OUTER JOIN users ON orders.user_id = users.id
> WHERE (title = E'some-product' AND paid = 1 AND name = 'thomas')
> --
>
> The problem with my query, is that if there are no paid associated
> orders, then the WHERE will drop every returned line that has paid =
> 0, therefore I don't get anything at all, but I would like to have at
> least the products field.

Put the filtering in your ON clauses. The WHERE clause is processed
after the JOINs are done, and it is eliminating your results. Without
knowing your schema, this is just a guess:

SELECT products.*, paid FROM "products"
LEFT OUTER JOIN items ON (products.id = items.product_id AND title =
'some-product')
LEFT OUTER JOIN orders ON (items.order_id = orders.id AND paid = 1)
LEFT OUTER JOIN users ON (orders.user_id = users.id AND name = 'thomas')

> Moreover, the "name" argument in the WHERE comes from the user logged
> in data. So if the user is not logged in, no fields are returned.

I suspect your app will need to run the query and compare the result
against the user(s) currently logged in.

Hope this helps!
Tony Wasson

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] GROUP BY does not follow SQL standard

2008-10-10 Thread Tony Marston
The PostgreSQL implementation of GROUP BY does not conform to either the 
1999 or 2003 SQL standard. The documentation states that every field in the 
SELECT list which is not aggregated must be specified in the GROUP BY 
clause. While this was true in the 1992 standard, in 1999 this was changed 
to "any non-aggregated column appearing in the SELECT list is functionally 
dependent upon the GROUP BY clause". In the example, both p.name and p.price 
are functionally dependent on product_id, therefore there is no need for 
them to be included in the GROUP BY clause.
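
To illustrate (the table and column names below are guesses, since the
cited example is not quoted here): under the SQL:1999 rule, this query
would be legal because name and price are determined by the key
product_id.

```sql
-- SQL:1999: grouping by the key alone is sufficient;
-- PostgreSQL (as of 8.x) requires p.name and p.price in GROUP BY too.
SELECT p.product_id, p.name, p.price, sum(s.quantity) AS units_sold
  FROM product p
  JOIN sales s ON s.product_id = p.product_id
 GROUP BY p.product_id;
```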

In this respect PostgreSQL is wrong and MySQL is right.


-- 
Tony Marston
http://www.tonymarston.net
http://www.radicore.org 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Oracle and Postgresql

2008-09-03 Thread Tony Caduto

David Fetter wrote:

What they want to have is a huge entity they can blame when everything
goes wrong.  They're not interested in the actual response times or
even in the much more important time-to-fix because once they've
blamed Oracle, they know the responsibility is no longer on their
shoulders.

  
That is only a perceived sense of risk avoidance: if you read the EULAs
that ship with Oracle, MS SQL Server, etc., they are not responsible
for anything that may happen to your data.  Sure, management could blame
them, but that's about it.  They would get the same amount of
satisfaction from blaming the FOSS community.  No matter what management
says, when something goes wrong any blame rests squarely on their
shoulders and on the people they have entrusted to create their
corporate projects/products.






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SELECT INTO returns incorrect values

2008-09-03 Thread Tony Caduto

Bill,
Did you try it like this:

parent_id := 0;
category_name := '';
select category, parent_category_id
  into category_name, parent_id
  from note.category
 where category_id = 477;
raise notice 'curr cat, name, parent id: % %', category_name, parent_id;


I have found in the past that it's a good idea to initialize your vars 
before you use them in PL/pgsql.



Also, as a FYI, you don't need to upper-case all your text in a function 
(I know you have to do that in Firebird); just use standard case with 
normal capitalization, because PostgreSQL will lowercase everything you 
send to the server that is not in quotes.  It's a lot easier to read 
without the uppercase.


Later,

Tony Caduto
AM Software Design
Home of Lightning Admin for PostgreSQL
http://www.amsoftwaredesign.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] RAISE NOTICE format in pgAdmin

2008-09-02 Thread Tony Caduto

Bill Todd wrote:
If I have a series of RAISE NOTICE 'xxx' statements in a plpgsql 
function and I call the function from pgAdmin the notice messages are 
concatenated on a single line on the Messages tab. Is there any way to 
get each message to appear on a separate line?


Is there a better way than using RAISE NOTICE to debug functions?

Bill


Bill,
Make sure you are on at least version 8.2 (8.3 is preferred) and use a 
plpgsql debugger.  Later versions of Pgadmin have one built in and a 
stand alone version is available from:
http://www.amsoftwaredesign.com/debugger_client_announce  (built with 
Delphi)


If you are using the win32 version, there is an option at the end of the 
installer script to install the debugger part.  On 8.2 or 8.3 you will 
need to install it yourself.

Please see: http://pgfoundry.org/projects/edb-debugger/

Hope that helps.

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of Lightning Admin for PostgreSQL


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Number or parameters for functions - limited to 32 ?

2008-04-08 Thread Tony Caduto
Just a FYI: you can get around the 32-parameter limit in older versions
of PostgreSQL by passing the function a single delimited varchar/text
value, then using the built-in array functions to pull the "params" out
of the passed string.


Something like this (not tested code, just a sample):

CREATE OR REPLACE FUNCTION test_func(varchar)
RETURNS void AS
$BODY$
DECLARE
  IN_ARRAY text[];
  ACCOUNTNUMBER_IN varchar;
BEGIN

  IN_ARRAY := string_to_array($1, '~^~');  -- use a unique delimiter

  ACCOUNTNUMBER_IN := IN_ARRAY[1];

  RETURN;
END
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
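
A hypothetical call then packs the would-be parameters into one
delimited string (the values here are made up):

```sql
-- the '~^~' delimiter must match the one used inside the function
SELECT test_func('ACCT-1001' || '~^~' || 'second value' || '~^~' || 'third value');
```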


Later,

Tony Caduto
AM Software Design
Home of Lightning Admin for PostgreSQL and MySQL
http://www.amsoftwaredesign.com



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Tony Caduto

Magnus Hagander wrote:

You know, kinda like PostgreSQL vs Oracle Express ;)


Well, not quite the same, since the LA Debugger Client is not crippled
in some way like Oracle or MS SQL Express :-)


It's just plain old freeware.

Later,

Tony

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Tony Caduto

josep porres wrote:


2008/4/7, Dave Page <[EMAIL PROTECTED]>:

On Mon, Apr 7, 2008 at 10:21 AM, josep porres <[EMAIL PROTECTED]> wrote:
> well, when you asked me about where I downloaded it, I downloaded it
> from  I installed it again.
> It seems I can debug with no problems till now.
> However, when I begin to debug and the function parameters window
> appears, if I click cancel it freezes.





Josep,

The stand-alone Lightning Debugger does not have this issue, and if it 
does crash or hang (not likely) it won't take down whatever admin tool 
you are using.  Oh, and it's FREE.

http://www.amsoftwaredesign.com/debugger_client_announce

Check it out works great on win32 and built with a native compiler with 
a high performance memory manager.



Tony Caduto
AM Software
http://www.amsoftwaredesign.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Tony Caduto

josep porres wrote:


Hi,

I don't know if this is the right place to post, but anyway ...
Does anybody know if there is a binary dist of the win32 edb-debugger
for pgsql 8.3?

If not, do you know which compiler I have to use?
I don't need to install anything on the client side where my pgAdmin
is, right?




You can use the Lightning Admin win32 debugger, very stable and FREE.
From my testing it's more stable on win32 than the others, but that 
will probably be debated since I am biased.


Anyway I encourage you to check it out at:


http://www.amsoftwaredesign.com/debugger_client_announce


Tony Caduto
AM Software Design





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Getting weird pg_tblspc error, has anyone seen this before?

2008-04-01 Thread Tony Caduto

Tom Lane wrote:

Bizarre.  I hope you'll tell them to fix that.


  

Agreed :-)

I bet they just took a snapshot of an install dir with the postmaster 
stopped and used that in their setup.  It probably does not do an initdb 
during the setup.  Why it's missing that directory is a mystery :-)


I did report the issue on the PostBooks sourceforge project forum.


Later,

Tony Caduto

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

