Re: [GENERAL] Need some help in postgres locking mechanism

2014-04-16 Thread santhosh kumar
Hi
I am locking some rows using SELECT ... FOR UPDATE NOWAIT inside a
transaction. If the server the code runs on crashes, the locks stay on those
rows and have to be removed manually. I do not want that; I want to set a
timeout for that transaction. Could you please help me out with a solution for
this issue? I did not find a proper solution on the internet.
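
A hedged sketch of the usual knobs for this (idle_in_transaction_session_timeout
exists only in 9.6 and later; on older releases such as 9.2 the TCP keepalive
settings are the main server-side way to notice a dead client, and lock_timeout
appears in 9.3; the values below are examples only):

-- per-session settings; they can also go into postgresql.conf
SET statement_timeout = '30s';       -- abort any single statement after 30 seconds
SET tcp_keepalives_idle = 60;        -- start probing an idle client after 60 seconds
SET tcp_keepalives_interval = 10;
SET tcp_keepalives_count = 3;        -- dead peer detected roughly 90 seconds later
-- 9.6+ only: terminate sessions that sit idle inside an open transaction
-- SET idle_in_transaction_session_timeout = '5min';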


On Tue, Apr 8, 2014 at 8:58 PM, Hannes Erven han...@erven.at wrote:

 Hi,


 On 2014-04-08 15:27, Vick Khera wrote:
 [...]

  Problem 2: you cannot have a second process skip over locked rows.



 In fact, you can: use FOR UPDATE NOWAIT and catch any errors.

 e.g. SELECT * FROM jobqueue WHERE id=? FOR UPDATE NOWAIT



 -hannes




-- 
K.Santhosh Kumar


Re: [GENERAL] Need some help in postgres locking mechanism

2014-04-08 Thread Vick Khera
Problem 1: how (and why) are you locking rows in table Y?

Problem 2: you cannot have a second process skip over locked rows. It
sounds to me like a job queue... what you want to do is make it work
*really* fast to mark the job as taken and commit that work so the
locks are released. Then go do the work on it. You do not want to hold
the lock the entire time the worker is running. You will likely want
another process somewhere that looks for abandoned jobs that are
started but not completed and have no worker actively working on them.
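
A minimal sketch of that claim-it-quickly pattern (table and column names are
hypothetical):

BEGIN;
UPDATE jobqueue
   SET status = 'taken', taken_at = now()
 WHERE id = (SELECT id FROM jobqueue
              WHERE status = 'pending'
              ORDER BY id
              LIMIT 1
              FOR UPDATE NOWAIT)
RETURNING id;
COMMIT;  -- the row lock is released here; do the actual work after this point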

On Thu, Apr 3, 2014 at 3:19 PM, santhosh kumar
kurasanthoshku...@gmail.com wrote:
 Hi
 I have two problems to discuss.an you please guide me how to proceed on
 this.
 problem 1:
 I have table X('a' is prmiary key) and table Y('b' is primary key).
 Table Y has ''a''(X primary key) as foreign key.I have one job which runs
 once in one hour.I want to lock the rows in table Y.but this will also lock
 table X rows to avoid concurrent actions.My table X is used by other jobs as
 well for updation.
 The solution what I thought is implement  KEY SHARE on 'a' column in table
 Y.That will help others jobs to update the corresponding rows in table X
 without any issue.

 My doubt is how to keep  key share lock on a column in table Y.by default
 lock is on column 'b' which is primary key of table Y. My table X is so
 huge.I dont want any other locks on it.
 we are using postgres 9.2.

 one more doubt:If I implement key share lock on 'a' column,will the default
 locks on table X and table Y removed or not?

 PLEASE PROVIDE SYNTAX AS WELL

 problem2:
 This is a different issue.I have a table Q('w' is primary key).When a job
 runs ,i want to lock some rows so that the other parallel job wont be
 considering this row.

 what is the simple and best lock I can implement on these rows?I want with
 NOWAIT option.

 kindly give solutions to above issues.I would be greatful for that.



 Thanks
 K.Santhosh


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help in postgres locking mechanism

2014-04-08 Thread Hannes Erven

Hi,


On 2014-04-08 15:27, Vick Khera wrote:
[...]

Problem 2: you cannot have a second process skip over locked rows.



In fact, you can: use FOR UPDATE NOWAIT and catch any errors.

e.g. SELECT * FROM jobqueue WHERE id=? FOR UPDATE NOWAIT
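
One hedged way to catch that error server-side (the NOWAIT failure is SQLSTATE
55P03, condition name lock_not_available; the table and id are illustrative):

DO $$
BEGIN
    PERFORM * FROM jobqueue WHERE id = 1 FOR UPDATE NOWAIT;
    -- lock acquired: mark the job as taken here
EXCEPTION
    WHEN lock_not_available THEN
        RAISE NOTICE 'row is locked by another worker, skipping';
END
$$;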



-hannes


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help in postgres locking mechanism

2014-04-08 Thread Vick Khera
On Tue, Apr 8, 2014 at 11:28 AM, Hannes Erven han...@erven.at wrote:
 On 2014-04-08 15:27, Vick Khera wrote:
 [...]

 Problem 2: you cannot have a second process skip over locked rows.



 In fact, you can: use FOR UPDATE NOWAIT and catch any errors.

 e.g. SELECT * FROM jobqueue WHERE id=? FOR UPDATE NOWAIT

That's not really skipping over the locked rows and returning the next
row, that's bailing out and retrying.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help in postgres locking mechanism

2014-04-08 Thread santhosh kumar
Thanks a lot. I have implemented it in the same format. You people are awesome.
Thank you once again.


On Tue, Apr 8, 2014 at 8:58 PM, Hannes Erven han...@erven.at wrote:

 Hi,


 On 2014-04-08 15:27, Vick Khera wrote:
 [...]

  Problem 2: you cannot have a second process skip over locked rows.



 In fact, you can: use FOR UPDATE NOWAIT and catch any errors.

 e.g. SELECT * FROM jobqueue WHERE id=? FOR UPDATE NOWAIT



 -hannes




-- 
K.Santhosh Kumar
Member of Technical Staff
Groupon,Chennai


[GENERAL] Need some help in postgres locking mechanism

2014-04-03 Thread santhosh kumar
Hi
I have two problems to discuss. Can you please guide me on how to proceed with
this?
*Problem 1:*
I have table X ('a' is the primary key) and table Y ('b' is the primary key).
Table Y has 'a' (X's primary key) as a foreign key. I have one job that runs
once an hour, and I want to lock the rows it touches in table Y. But this also
locks the referenced rows in table X to avoid concurrent actions, and table X
is updated by other jobs as well.
The solution I thought of is to take a KEY SHARE lock on the 'a' column in
table Y. That should let the other jobs update the corresponding rows in table
X without any issue.

My doubt is *how to take a key share lock on the 'a' column in table Y*; by
default the lock is on column 'b', the primary key of table Y. Table X is very
large and I do not want any additional locks on it.
We are using PostgreSQL 9.2.

*One more doubt:* *if I take a key share lock on the 'a' column, will the
default locks on table X and table Y be removed or not?*

*PLEASE PROVIDE SYNTAX AS WELL*
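
For reference, a hedged syntax sketch (FOR NO KEY UPDATE and FOR KEY SHARE were
only added in 9.3, so they are not available on the 9.2 release mentioned above;
the predicate a = 42 is just a placeholder):

-- 9.3 or later: weaker row locks that leave key columns available to others
SELECT * FROM y WHERE a = 42 FOR NO KEY UPDATE;  -- lock for update without blocking key shares
SELECT * FROM y WHERE a = 42 FOR KEY SHARE;      -- only blocks deletes/key changes of these rows

-- on 9.2 the only row-level lock modes are FOR SHARE and FOR UPDATE
SELECT * FROM y WHERE a = 42 FOR UPDATE NOWAIT;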

*Problem 2:*
This is a different issue. I have a table Q ('w' is the primary key). When a job
runs, I want to lock some rows so that other parallel jobs will not consider
those rows.

What is the simplest and best lock I can take on these rows? I want the NOWAIT
option.

Kindly give solutions to the above issues; I would be grateful.



Thanks
K.Santhosh


Re: [GENERAL] Need some help on Performance 9.0.4

2013-10-18 Thread Tomas Vondra
Hi,

On 18 Říjen 2013, 17:06, akp geek wrote:
 when I issue the top command on Solaris, whatever I have posted is exactly
 what I am getting.

 top  -n
 load averages:  11.4,  10.8,  10.2;  up 680+21:31:46;  15:05:21
 137 processes: 123 sleeping, 14 on cpu
 CPU states: 82.0% idle, 17.5% user,  0.5% kernel,  0.0% iowait,  0.0% swap
 Memory: 64G phys mem, 20G free mem, 63G total swap, 63G free swap

PID USERNAME LWP PRI NICE  SIZE   RES STATETIMECPU COMMAND
  12185 postgres   1  500 3172M 3165M cpu/09:56  1.80% postgres
  23338 postgres   1  500 3180M 3173M cpu/8   79:08  1.77% postgres
   2446 postgres   1   00 3174M 3167M cpu/24   0:58  1.76% postgres
  18359 postgres   1   00 3172M 3164M cpu/46   4:00  1.60% postgres
  28428 postgres   1   00 3178M 3171M cpu/17  58:15  1.60% postgres
  10698 postgres   1   00 3179M 3173M cpu/62  68:33  1.55% postgres
  28374 postgres   1   00 3178M 3172M cpu/14  47:01  1.55% postgres
   6145 postgres   1   00 3180M 3173M cpu/39 115:05  1.55% postgres
   2380 postgres   1   00 3179M 3172M cpu/34  53:38  1.54% postgres
   2088 postgres   1   00 3179M 3172M cpu/31  37:43  1.51% postgres
  19360 postgres   1  500 3172M 3165M cpu/36:55  0.84% postgres
  28426 postgres   1  190 3179M 3172M cpu/21  70:34  0.61% postgres
  21196 postgres   1  520 3403M 3392M sleep0:05  0.37% postgres
  21237 root   1  490 5632K 4000K cpu/51   0:00  0.11% sshd
415 daemon 6  590 6312K 3208K sleep  107:52  0.02% kcfd

1) I'm not sure what options top has on Solaris, but on Linux you can
hit 'c' which will give you extended info about the processes.

2) I seriously doubt there are just ssh+kscd+postgres processes running on
your box.

3) Are you sure the top output was collected while the slow queries were
running? Because to me it seems the top output was collected while the
system was relatively idle - there's 82% free CPU and no I/O activity at all.
Are you sure you have no zones/containers defined? I'm not familiar with
those, so maybe this is how it looks when you hit zone limits?



 this is the plan that was generated again.. Thanks a lot.

 Hash Join  (cost=343.96..41796.15 rows=751 width=33) (actual time=635.192..5799.477 rows=709 loops=1)
   Hash Cond: ((te.statuscode)::text = (tt.statuscode)::text)
   ->  Hash Join  (cost=342.58..41778.81 rows=751 width=19) (actual time=635.135..5797.003 rows=709 loops=1)
         Hash Cond: (te.scenarioid = ts.scenarioid)
         ->  Hash Join  (cost=243.38..41667.95 rows=1947 width=26) (actual time=634.564..5794.768 rows=711 loops=1)
               Hash Cond: (te.userid = tu.userid)
               ->  Nested Loop  (cost=0.00..41311.26 rows=25026 width=33) (actual time=0.060..5782.882 rows=11223 loops=1)
                     ->  Nested Loop  (cost=0.00..1639.76 rows=2751 width=15) (actual time=0.023..20.904 rows=3702 loops=1)
                           Join Filter: (td.occserviceareaid = tz.occserviceareaid)
                           ->  Seq Scan on t_occ tz  (cost=0.00..1.04 rows=1 width=14) (actual time=0.006..0.012 rows=1 loops=1)
                                 Filter: (occid = 10::numeric)
                           ->  Seq Scan on t_domesticnotamlocation td  (cost=0.00..1471.10 rows=13410 width=15) (actual time=0.012..13.932 rows=13410 loops=1)
                     ->  Index Scan using x_event_ix1 on t_event te  (cost=0.00..14.26 rows=13 width=34) (actual time=0.339..1.553 rows=3 loops=3702)
                           Index Cond: (te.affectedlocationid = td.locid)
                           Filter: (((te.statuscode)::text = ANY ('{CON,AERR,SERR,CERR,ACON}'::text[])) OR (((te.statuscode)::text = 'ACT'::text) AND (COALESCE((te.enddate)::timestamp with time zone, now()) >= (now() - '2 days'::interval))) OR (((te.statuscode)::text = 'CAN'::text) AND (te.canceldate >= (now() - '2 days'::interval))) OR (((te.statuscode)::text = ANY ('{APEN,CINP,AINP}'::text[])) AND (COALESCE((te.enddate)::timestamp with time zone, now()) >= now(
               ->  Hash  (cost=240.34..240.34 rows=243 width=7) (actual time=4.040..4.040 rows=243 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 8kB
                     ->  Seq Scan on t_user tu  (cost=0.00..240.34 rows=243 width=7) (actual time=0.040..3.868 rows=243 loops=1)
                           Filter: ((usertype)::text = 'OCC'::text)
         ->  Hash  (cost=98.64..98.64 rows=45 width=29) (actual time=0.549..0.549 rows=45 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 3kB
               ->  Nested Loop  (cost=0.00..98.64 rows=45 width=29) (actual time=0.044..0.496 rows=45 loops=1)
                     ->  Index Scan using x_locscenario_pk on t_locscenariomap tc  (cost=0.00..43.04 rows=45 width=22) (actual time=0.028..0.098 rows=45 loops=1)
                           Index Cond: (locid = 10::numeric)
                     ->  Index Scan using 

Re: [GENERAL] Need some help on Performance 9.0.4

2013-10-17 Thread Tomas Vondra
On 17.10.2013 20:56, akp geek wrote:
 got the output from depesz and this the top on the system.   thanks for
 the help ..

From depesz? That site works differently - you enter the explain plan
into the form, it does some formatting/highlighting and you're supposed
to submit the link to that page. E.g. http://explain.depesz.com/s/JwTB
is for your query.

However it seems you've done some anonymization on the query, e.g.
replacing the table/index/function names and string literals with some
random words.

Anyway, the plan shows the estimates are reasonably accurate, but the
index scan on romeo_four consumes ~99% of the query duration. Not sure
if it can be improved, because we don't know the original query and the
anonymization made it rather annoying to deal with.

 
 Regards
 
 load averages:  30.3,  28.8,  27.8;  up 680+00:51:09;  18:24:44
 156 processes: 123 sleeping, 33 on cpu
 CPU states: 49.5% idle, 50.0% user,  0.4% kernel,  0.0% iowait,  0.0% swap
 Memory: 64G phys mem, 19G free mem, 63G total swap, 63G free swap
 
PID USERNAME LWP PRI NICE  SIZE   RES STATETIMECPU COMMAND
   7965 postgres   1  200 3178M 3171M cpu/7   78:21  1.58% postgres
  20638 postgres   1  200 3181M 3175M cpu/28 187:01  1.58% postgres
  22819 postgres   1  300 3181M 3175M cpu/33 264:06  1.57% postgres
   3789 postgres   1  200 3183M 3176M cpu/9  626:11  1.57% postgres
  10375 postgres   1  300 3182M 3175M cpu/50 293:55  1.57% postgres
  27335 postgres   1  300 3175M 3169M cpu/29  27:27  1.57% postgres
   7959 postgres   1  300 3178M 3171M cpu/41 104:07  1.57% postgres
   8156 postgres   1  300 3180M 3173M cpu/43 124:18  1.56% postgres
  20640 postgres   1   00 3182M 3176M cpu/58 207:49  1.56% postgres
  10584 postgres   1  300 3179M 3173M cpu/35  76:32  1.56% postgres
  13984 postgres   1  300 3181M 3174M cpu/30 207:04  1.56% postgres
   3575 postgres   1  200 3283M 3261M cpu/19   1:48  1.56% postgres
   7964 postgres   1  400 3177M 3170M cpu/62  82:56  1.56% postgres
   1299 postgres   1  400 3166M 3161M cpu/52   5:11  1.56% postgres
  27692 postgres   1   00 3181M 3174M cpu/46 260:58  1.56% postgres


Once again, this top output is incomplete - all it lists are postgres
processes (and it's unclear what they're doing), but there are certainly
some other processes running. Moreover, those 15 processes account for
only ~25% of CPU, but there's almost 50% CPU utilized. So where's the rest?

Tomas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help on Performance 9.0.4

2013-10-13 Thread akp geek
Sorry, all, for not being clear. I posted our postgresql.conf file. What we are
seeing is very slow response from the database.

I would like to get some feedback on whether the postgresql.conf parameters
that I posted are causing the performance issue for the DB.

Our database is about 60GB and performance is very bad.


pci, instance #5
pci, instance #5
load averages:  17.5,  18.1,  18.6;  up 673+23:00:23;  16:33:58
156 processes: 140 sleeping, 16 on cpu
CPU states: 76.5% idle, 22.8% user,  0.7% kernel,  0.0% iowait,  0.0% swap
Memory: 64G phys mem, 19G free mem, 63G total swap, 63G free swap


Regards


On Sat, Oct 12, 2013 at 7:43 AM, Michael Paquier
michael.paqu...@gmail.comwrote:

 On Sat, Oct 12, 2013 at 12:56 AM, akp geek akpg...@gmail.com wrote:
  We have been running 4 of our applications on 9.0.4, which we are
 planning
  to update the database 9.2.2 by the year end. we have the streaming
  replication also running
 9.2.4 has fixed a severe security problem. If you do not want to
 expose your cluster to that, you should at least upgrade to this
 version.
 And posting a question would be better...
 --
 Michael



Re: [GENERAL] Need some help on Performance 9.0.4

2013-10-13 Thread bricklen
On Sun, Oct 13, 2013 at 8:43 AM, akp geek akpg...@gmail.com wrote:

 Sorry all not posting clear.  I posted our postresql.conf file. What we
 are having is very slow response from the database.

 would like to get some feedback about postgresql.conf file parameters that
 I posted are causing the issue for performance of the DB.

 Our database is about 60GB and performance is very bad.


 pci, instance #5
 pci, instance #5
 load averages:  17.5,  18.1,  18.6;up
 673+23:00:23

 16:33:58
 156 processes: 140 sleeping, 16 on cpu
 CPU states: 76.5% idle, 22.8% user,  0.7% kernel,  0.0% iowait,  0.0% swap
 Memory: 64G phys mem, 19G free mem, 63G total swap, 63G free swap


You aren't going to get much help without some specific details about what
is slow.
https://wiki.postgresql.org/wiki/Slow_Query_Questions


Re: [GENERAL] Need some help on Performance 9.0.4

2013-10-13 Thread Tomas Vondra
Hi,

On 13.10.2013 17:43, akp geek wrote:
 Sorry all not posting clear.  I posted our postresql.conf file. What
 we are having is very slow response from the database.
 
 would like to get some feedback about postgresql.conf file
 parameters that I posted are causing the issue for performance of the
 DB.
 
 Our database is about 60GB and performance is very bad.

The postgresql.conf seems fine - at least I haven't noticed anything
obviously wrong there. But we know almost nothing about your HW (not
even what kind of CPU / how many cores, etc.) or  how you're using it.

For example you haven't even posted a complete top output, so we have no
idea what else is running on the box. OTOH the box seems to be rather
idle - there's plenty of CPU, RAM and the disks are rather idle.

Why exactly do you think it's the database that's slow?

We can't really help you unless you show us what exactly you consider
slow. The best thing you can probably do is to show us a slow query
along with its EXPLAIN ANALYZE output (use explain.depesz.com to post it).
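
For reference, a hedged example of what to collect (the table and predicate are
placeholders; the BUFFERS option is available from 9.0 on):

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM my_slow_table WHERE some_column = 42;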

And it'd be nice to know the history around this - was it fast before,
did it get slow suddenly or over a longer period of time?

regards
Tomas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help on Performance 9.0.4

2013-10-12 Thread Michael Paquier
On Sat, Oct 12, 2013 at 12:56 AM, akp geek akpg...@gmail.com wrote:
 We have been running 4 of our applications on 9.0.4, which we are planning
 to update the database 9.2.2 by the year end. we have the streaming
 replication also running
9.2.4 has fixed a severe security problem. If you do not want to
expose your cluster to that, you should at least upgrade to this
version.
And posting a question would be better...
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help on Performance 9.0.4

2013-10-11 Thread Steve Crawford

On 10/11/2013 08:56 AM, akp geek wrote:
We have been running 4 of our applications on 9.0.4, which we are 
planning to update the database 9.2.2 by the year end


Sorry if I missed it but I couldn't find a question or description of a 
problem you are encountering.


Cheers,
Steve



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help on Performance 9.0.4

2013-10-11 Thread John R Pierce

On 10/11/2013 9:34 AM, akp geek wrote:


thanks for the help


What's the question? You list what you're doing, and give a snapshot
of your system state while doing some unspecified tasks.



--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help on Performance 9.0.4

2013-10-11 Thread John R Pierce

On 10/11/2013 8:56 AM, akp geek wrote:
We have been running 4 of our applications on 9.0.4, which we are 
planning to update the database 9.2.2


9.0 is currently at 9.0.14, and 9.2 is currently at 9.2.5.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need regexp_replace help

2013-02-14 Thread Gauthier, Dave
Trying (and failing) to get this to run...

select regexp_replace(pg_get_functiondef(proname::regproc),E'[ ]+\\+\n','\n') 
from pg_proc where proname = 'maketime';

Goal is to remove the space padding and the + at the end of each line in the 
dump of pg_get_functiondef, something that could actually be run if sent out to 
a file (\o) and then read into another DB (\i).  Of course the ; would have 
to be appended at the end, but I can manage that :-)

Thanks in Advance !


Re: [GENERAL] Need regexp_replace help

2013-02-14 Thread Merlin Moncure
On Thu, Feb 14, 2013 at 3:37 PM, Gauthier, Dave dave.gauth...@intel.com wrote:
 Trying ( and failing) to get this to run...



 select regexp_replace(pg_get_functiondef(proname::regproc),E'[
 ]+\\+\n','\n') from pg_proc where proname = 'maketime';



 Goal is to remove the space padding and the + at the end of each line in
 the dump of pg_get_functiondef, something that could actually be run if sent
 out to a file (\o) and then read into another DB (\i).  Of course the ;
 would have to be appended at the end, but I can manage that :-)

this isn't a regex issue, but a psql one. Just disable alignment
and set tuples-only mode, etc. Whenever I do this type of thing (intending
psql output for some consumer), it's:

psql -XtAqc query > file

there are similar options to do this via various \set commands.
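
For the interactive route, a hedged sketch of the equivalent backslash commands:
\t on turns on tuples-only output, \a switches to unaligned mode (which drops
the padding and the + continuation marks), and \o redirects results to a file.
The path and the regproc cast are just examples:

\t on
\a
\o /tmp/maketime.sql
select pg_get_functiondef('maketime'::regproc);
\o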

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Dave Page
On Sun, Oct 17, 2010 at 2:32 PM, Mike Christensen m...@kitchenpc.com wrote:
 Okay my required n00b question of the week, hopefully this'll be an easy one..

 I decided to give pgAgent a shot, because there's this stored sproc
 (sorry, function) I need to run nightly and I think spending hours
 figuring out pgAgent would somehow be better than the 3 minutes it
 would take to add a cron job that calls psql..

 I'm following the directions at:
 http://www.pgadmin.org/docs/1.6/pgagent-install.html

Why such an old version? Try
http://www.pgadmin.org/docs/1.12/pgagent-install.html

 However, the heading Daemon installation on *nix needs me to believe
 that running pgagent from the command line should install the daemon,
 and then things should be happily running in the background.  However,
 when I run the command line as instructed, it just sits there and does
 nothing.  I never get returned to the bash prompt, it's as if the
 daemon is just running in interactive mode until I stop the process..

Did you use the -f option?

 Maybe I'm missing something, or I have to add that command line to
 some config file.  I'm not a unix guru but from my past experiences, I
 can usually run daemons with /etc/init.d/blah start - pgAgent
 doesn't seem to be installed in that manner.

If you built from source, it's up to you to create the startup script.
The pgAgent source tree doesn't contain OS-specific startup scripts.

 Second, assuming the daemon is running in interactive mode, like..
 now what?  The docs have no next steps.  How do I create a new job to
 run my SQL function every night?  From some documentation from
 Postgres Plus, I was led to believe that there should be a Jobs tree
 in pgAdmin created..  But I see nothing.  Thanks!

Did you set up the database per the part of the docs prior to that section?

The next steps are most certainly there - see the index page at
http://www.pgadmin.org/docs/1.12/pgagent.html

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 2:07 AM, Dave Page dp...@pgadmin.org wrote:
 On Sun, Oct 17, 2010 at 2:32 PM, Mike Christensen m...@kitchenpc.com wrote:
 Okay my required n00b question of the week, hopefully this'll be an easy 
 one..

 I decided to give pgAgent a shot, because there's this stored sproc
 (sorry, function) I need to run nightly and I think spending hours
 figuring out pgAgent would somehow be better than the 3 minutes it
 would take to add a cron job that calls psql..

 I'm following the directions at:
 http://www.pgadmin.org/docs/1.6/pgagent-install.html

 Why such an old version? Try
 http://www.pgadmin.org/docs/1.12/pgagent-install.html

 However, the heading Daemon installation on *nix needs me to believe
 that running pgagent from the command line should install the daemon,
 and then things should be happily running in the background.  However,
 when I run the command line as instructed, it just sits there and does
 nothing.  I never get returned to the bash prompt, it's as if the
 daemon is just running in interactive mode until I stop the process..

 Did you use the -f option?

 Maybe I'm missing something, or I have to add that command line to
 some config file.  I'm not a unix guru but from my past experiences, I
 can usually run daemons with /etc/init.d/blah start - pgAgent
 doesn't seem to be installed in that manner.

 If you built from source, it's up to you to create the startup script.
 The pgAgent source tree doesn't contain OS-specific startup scripts.

 Second, assuming the daemon is running in interactive mode, like..
 now what?  The docs have no next steps.  How do I create a new job to
 run my SQL function every night?  From some documentation from
 Postgres Plus, I was led to believe that there should be a Jobs tree
 in pgAdmin created..  But I see nothing.  Thanks!

 Did you setup the database per the part of the docs prior to that section?

 The next steps are most certainly there - see the index page at
 http://www.pgadmin.org/docs/1.12/pgagent.html

 --
 Dave Page
 Blog: http://pgsnake.blogspot.com
 Twitter: @pgsnake

 EnterpriseDB UK: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


Thanks for the info!  First, I have no idea what version of pgAgent it
is.  Where does it say?  I installed it using apt-get..

apt-cache search pgagent
pgagent - job scheduler for PostgreSQL

It doesn't say the version, but that's the only match..  Maybe there's
a bin file or something better?

I ran it using:

/usr/bin/pgagent hostaddr=127.0.0.1 dbname=KitchenPC user=root

and it immediately returned to the bash prompt, and it appears to be running:

ps | grep pgagent
 4134 pts/000:00:00 pgagent

I ran the .sql file that it came with, and it created a new catalog
called pgAgent.

However, I'm still not sure what to do next.  According to the
instructions, pgAdmin should show a node called Jobs.  I don't see
that.  I'm using pgAdmin for OS/X and connecting to the server from
remote over SSL.  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 2:21 AM, Mike Christensen m...@kitchenpc.com wrote:
 On Mon, Oct 18, 2010 at 2:07 AM, Dave Page dp...@pgadmin.org wrote:
 On Sun, Oct 17, 2010 at 2:32 PM, Mike Christensen m...@kitchenpc.com wrote:
 Okay my required n00b question of the week, hopefully this'll be an easy 
 one..

 I decided to give pgAgent a shot, because there's this stored sproc
 (sorry, function) I need to run nightly and I think spending hours
 figuring out pgAgent would somehow be better than the 3 minutes it
 would take to add a cron job that calls psql..

 I'm following the directions at:
 http://www.pgadmin.org/docs/1.6/pgagent-install.html

 Why such an old version? Try
 http://www.pgadmin.org/docs/1.12/pgagent-install.html

 However, the heading Daemon installation on *nix needs me to believe
 that running pgagent from the command line should install the daemon,
 and then things should be happily running in the background.  However,
 when I run the command line as instructed, it just sits there and does
 nothing.  I never get returned to the bash prompt, it's as if the
 daemon is just running in interactive mode until I stop the process..

 Did you use the -f option?

 Maybe I'm missing something, or I have to add that command line to
 some config file.  I'm not a unix guru but from my past experiences, I
 can usually run daemons with /etc/init.d/blah start - pgAgent
 doesn't seem to be installed in that manner.

 If you built from source, it's up to you to create the startup script.
 The pgAgent source tree doesn't contain OS-specific startup scripts.

 Second, assuming the daemon is running in interactive mode, like..
 now what?  The docs have no next steps.  How do I create a new job to
 run my SQL function every night?  From some documentation from
 Postgres Plus, I was led to believe that there should be a Jobs tree
 in pgAdmin created..  But I see nothing.  Thanks!

 Did you setup the database per the part of the docs prior to that section?

 The next steps are most certainly there - see the index page at
 http://www.pgadmin.org/docs/1.12/pgagent.html

 --
 Dave Page
 Blog: http://pgsnake.blogspot.com
 Twitter: @pgsnake

 EnterpriseDB UK: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


 Thanks for the info!  First, I have no idea what version of pgAgent it
 is.  Where does it say?  I installed it using apt-get..

 apt-cache search pgagent
 pgagent - job scheduler for PostgreSQL

 It doesn't say the version, but that's the only match..  Maybe there's
 a bin file or something better?

 I ran it using:

 /usr/bin/pgagent hostaddr=127.0.0.1 dbname=KitchenPC user=root

 and it immediately returned to the bash prompt, and it appears to be running:

 ps | grep pgagent
  4134 pts/0    00:00:00 pgagent

 I ran the .sql file that it came with, and it created a new catalog
 called pgAgent.

 However, I'm still not sure what to do next.  According to the
 instructions, pgAdmin should show a node called Jobs.  I don't see
 that.  I'm using pgAdmin for OS/X and connecting to the server from
 remote over SSL.  Thanks!

 Mike


Sorry, it looks like it defaulted to the wrong DB.  I created the
schema in the postgres database and now I see a Jobs node..

One last question - How do I configure pgAgent to start automatically?

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Dave Page
On Mon, Oct 18, 2010 at 10:31 AM, Mike Christensen m...@kitchenpc.com wrote:
 Sorry, it looks like it defaulted to the wrong DB.  I created the
 schema in the postgres database and now I see a Jobs node..

:-)

 One last question - How do I configure pgAgent to start automatically?

You'll need to put together a startup script for your OS, if the
debian/ubuntu packager hasn't done so already. I'd suggest copying one
from /etc/init.d and tweaking it as required.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 2:34 AM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Oct 18, 2010 at 10:31 AM, Mike Christensen m...@kitchenpc.com wrote:
 Sorry, it looks like it defaulted to the wrong DB.  I created the
 schema in the postgres database and now I see a Jobs node..

 :-)

 One last question - How do I configure pgAgent to start automatically?

 You'll need to put together a startup script for your OS, if the
 debian/ubuntu packager hasn't done so already. I'd suggest copying one
 from /etc/init.d and tweaking it as required.

This is a bit outside my comfort zone, do you have a startup script
you can share with me?  I can copy it into /etc/init.d - Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Dave Page
On Mon, Oct 18, 2010 at 10:41 AM, Mike Christensen m...@kitchenpc.com wrote:
 On Mon, Oct 18, 2010 at 2:34 AM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Oct 18, 2010 at 10:31 AM, Mike Christensen m...@kitchenpc.com 
 wrote:
 Sorry, it looks like it defaulted to the wrong DB.  I created the
 schema in the postgres database and now I see a Jobs node..

 :-)

 One last question - How do I configure pgAgent to start automatically?

 You'll need to put together a startup script for your OS, if the
 debian/ubuntu packager hasn't done so already. I'd suggest copying one
 from /etc/init.d and tweaking it as required.

 This is a bit outside my comfort zone, do you have a startup script
 you can share with me?  I can copy it into /etc/init.d - Thanks!

No I don't (I use a Mac - and never start pgagent automatically
anyway). Can you copy the PostgreSQL script and make appropriate
changes?

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 2:43 AM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Oct 18, 2010 at 10:41 AM, Mike Christensen m...@kitchenpc.com wrote:
 On Mon, Oct 18, 2010 at 2:34 AM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Oct 18, 2010 at 10:31 AM, Mike Christensen m...@kitchenpc.com 
 wrote:
 Sorry, it looks like it defaulted to the wrong DB.  I created the
 schema in the postgres database and now I see a Jobs node..

 :-)

 One last question - How do I configure pgAgent to start automatically?

 You'll need to put together a startup script for your OS, if the
 debian/ubuntu packager hasn't done so already. I'd suggest copying one
 from /etc/init.d and tweaking it as required.

 This is a bit outside my comfort zone, do you have a startup script
 you can share with me?  I can copy it into /etc/init.d - Thanks!

 No I don't (I use a Mac - and never start pgagent automatically
 anyway). Can you copy the PostgreSQL script and make appropriate
 changes?

Okay I found one that I can use..

One question..  Should the connection string in the script have the
password for root hard coded in it?  Or will it use a password from
~/.pgpass automatically?  If so, what user account will it find the
.pgpass file under?  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Dave Page
On Mon, Oct 18, 2010 at 10:46 AM, Mike Christensen m...@kitchenpc.com wrote:
 Okay I found one that I can use..

 One question..  Should the connection string in the script have the
 password for root hard coded in it?  Or will it use a password from
 ~/.pgpass automatically?  If so, what user account will it find the
 .pgpass file under?  Thanks!

Have the script start pgagent under the postgres account, e.g.:

su - postgres -c '/path/to/pgagent'

Then it should be able to use postgres' pgpass file. Don't put the
password in the connection string!
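
For reference, a hedged sketch of the .pgpass format that would be read from the
postgres user's home directory (the file must be mode 0600; host, database and
password below are placeholders):

# hostname:port:database:username:password
127.0.0.1:5432:KitchenPC:postgres:secret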


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Dave Page
On Mon, Oct 18, 2010 at 11:34 AM, Mike Christensen m...@kitchenpc.com wrote:
 On Mon, Oct 18, 2010 at 3:07 AM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Oct 18, 2010 at 10:46 AM, Mike Christensen m...@kitchenpc.com 
 wrote:
 Okay I found one that I can use..

 One question..  Should the connection string in the script have the
 password for root hard coded in it?  Or will it use a password from
 ~/.pgpass automatically?  If so, what user account will it find the
 .pgpass file under?  Thanks!

 Have the script start pgagent under the postgres account eg;

 su - postgres -c 'p/path/to/pgadmin'

 Then it should be able to use postgres' pgpass file. Don't put the
 password in the connection string!

 Ok, that worked..  I can at least start and stop it now, and it
 remains running when I'm logged off..

 So does anything in /etc/init.d get automatically run when the server boots?

No, you have to enable it. On redhat based distros, you'd do something
like chkconfig servicename on. On Debian based distros, I believe
you use the update-rc.d command.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 3:07 AM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Oct 18, 2010 at 10:46 AM, Mike Christensen m...@kitchenpc.com wrote:
 Okay I found one that I can use..

 One question..  Should the connection string in the script have the
 password for root hard coded in it?  Or will it use a password from
 ~/.pgpass automatically?  If so, what user account will it find the
 .pgpass file under?  Thanks!

 Have the script start pgagent under the postgres account eg;

 su - postgres -c 'p/path/to/pgadmin'

 Then it should be able to use postgres' pgpass file. Don't put the
 password in the connection string!

Ok, that worked..  I can at least start and stop it now, and it
remains running when I'm logged off..

So does anything in /etc/init.d get automatically run when the server boots?

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 3:37 AM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Oct 18, 2010 at 11:34 AM, Mike Christensen m...@kitchenpc.com wrote:
 On Mon, Oct 18, 2010 at 3:07 AM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Oct 18, 2010 at 10:46 AM, Mike Christensen m...@kitchenpc.com 
 wrote:
 Okay I found one that I can use..

 One question..  Should the connection string in the script have the
 password for root hard coded in it?  Or will it use a password from
 ~/.pgpass automatically?  If so, what user account will it find the
 .pgpass file under?  Thanks!

 Have the script start pgagent under the postgres account eg;

 su - postgres -c 'p/path/to/pgadmin'

 Then it should be able to use postgres' pgpass file. Don't put the
 password in the connection string!

 Ok, that worked..  I can at least start and stop it now, and it
 remains running when I'm logged off..

 So does anything in /etc/init.d get automatically run when the server boots?

 No, you have to enable it. On redhat based distros, you'd do something
 like chkconfig servicename on. On Debian based distros, I believe
 you use the update-rc.d command.

Well, I guess that worked:

etc/init.d# update-rc.d pgagent defaults
 Adding system startup for /etc/init.d/pgagent ...
   /etc/rc0.d/K20pgagent -> ../init.d/pgagent
   /etc/rc1.d/K20pgagent -> ../init.d/pgagent
   /etc/rc6.d/K20pgagent -> ../init.d/pgagent
   /etc/rc2.d/S20pgagent -> ../init.d/pgagent
   /etc/rc3.d/S20pgagent -> ../init.d/pgagent
   /etc/rc4.d/S20pgagent -> ../init.d/pgagent
   /etc/rc5.d/S20pgagent -> ../init.d/pgagent

Thanks!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need some help setting up pgAgent

2010-10-17 Thread Mike Christensen
Okay my required n00b question of the week, hopefully this'll be an easy one..

I decided to give pgAgent a shot, because there's this stored sproc
(sorry, function) I need to run nightly and I think spending hours
figuring out pgAgent would somehow be better than the 3 minutes it
would take to add a cron job that calls psql..

I'm following the directions at:
http://www.pgadmin.org/docs/1.6/pgagent-install.html

However, the heading Daemon installation on *nix leads me to believe
that running pgagent from the command line should install the daemon,
and then things should be happily running in the background.  However,
when I run the command line as instructed, it just sits there and does
nothing.  I never get returned to the bash prompt, it's as if the
daemon is just running in interactive mode until I stop the process..
Maybe I'm missing something, or I have to add that command line to
some config file.  I'm not a unix guru but from my past experiences, I
can usually run daemons with /etc/init.d/blah start - pgAgent
doesn't seem to be installed in that manner.

Second, assuming the daemon is running in interactive mode, like..
now what?  The docs have no next steps.  How do I create a new job to
run my SQL function every night?  From some documentation from
Postgres Plus, I was led to believe that there should be a Jobs tree
in pgAdmin created..  But I see nothing.  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need some help on PG database transfer

2010-07-22 Thread Steeles
Hi all,

I am doing some tests on file-level copying for one of our databases on the
Windows platform. Here is what I want to achieve:

copy the files within the PG tablespace folder where the database resides to a
target machine, and attach the copied data to the target PG database.


What I have done is:

1. Database: ABCD, OID: 12345, tablespace: e:\pg_data\; obviously,
e:\pg_data has a subfolder named 12345.
2. create a database on the target, named , and it will create its own OID,
for example 16333; the folder 16333 will reside under the BASE folder.
3. then I stop the PG service and delete all files under ..\16333\*.*,
4. copy all files from e:\pg_data\12345\*.* to the folder on the target server,
..\16333\.
5. start the PG service on the target machine. It looks like it picked up most
of the tables from its source database, ABCD.

But it is missing some tables and functions compared to the source.

Is it doable to replicate data like that?

Does a PG tablespace contain all the data/tables in its tablespace
folder?

Please comment.

Thanks.


Re: [GENERAL] Need some help on PG database transfer

2010-07-22 Thread Scott Marlowe
On Thu, Jul 22, 2010 at 2:08 PM, Steeles stee...@gmail.com wrote:
 Hi all,

 I am doing some tests on the file level copying for one of database in
 windows platform. Here is what I want to achieve.

 copy files within PG tablespace folder where PG database resides to target
 machine. I want to attache the copied data to target PG database.

That won't work.  You have to pg_dump out the tables / tablespace and
then psql or pg_restore it to the other db.
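
A hedged sketch of that route (host names, database names and the file path are
placeholders):

pg_dump -Fc -h source_host -U postgres ABCD > /tmp/abcd.dump
createdb -h target_host -U postgres ABCD
pg_restore -h target_host -U postgres -d ABCD /tmp/abcd.dump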

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help with a query (uniq -c)

2010-04-13 Thread Kenichiro Tanaka

Hello.

I tried it with a WITH query:
http://www.postgresql.org/docs/8.4/static/queries-with.html

#We can use WITH queries in v8.4 and later
#That'll only work if the time values are contiguous, but there's probably a
#similar trick for non-contiguous values, too.

--create data
drop table foo;
create table foo( time int,message text);
insert into foo values(1,'a');
insert into foo values(2,'b');
insert into foo values(3,'b');
insert into foo values(4,'c');
insert into foo values(5,'a');
insert into foo values(6,'c');
insert into foo values(7,'c');
insert into foo values(8,'a');
insert into foo values(9,'a');
insert into foo values(10,'a');

--begin Answer
with recursive r as (
select foo.time,foo.message,1  as dummy from foo
union all
select foo.time,foo.message,r.dummy+1 from foo , r
where foo.time=r.time-1 and foo.message=r.message
)
,rr as (
select foo.time,foo.message,'OLID' as flag  from foo
union all
select foo.time,foo.message,'DUP' as flag from foo , rr
where foo.time-1=rr.time-2 and foo.message=rr.message
)
select time min,time+max(dummy)-1 max,message,max(dummy) counts
from r where time not in (select distinct (time+1) times from rr
where flag='DUP') group by time,message order by time;

--result
postgres(# where flag='DUP') group by time,message order by time;
 min | max | message | counts
-+-+-+
   1 |   1 | a   |  1
   2 |   3 | b   |  2
   4 |   4 | c   |  1
   5 |   5 | a   |  1
   6 |   7 | c   |  2
   8 |  10 | a   |  3
(6 rows)

--end

But I think someone can provide simpler SQL.

Thank you.
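
A hedged, arguably simpler alternative using window functions (also 8.4+), with
the same foo(time, message) table as above:

SELECT min(time) AS first, message, max(time) AS last, count(*) AS count
FROM (
    SELECT time, message,
           row_number() OVER (ORDER BY time)
         - row_number() OVER (PARTITION BY message ORDER BY time) AS grp
    FROM foo
) s
GROUP BY message, grp
ORDER BY first;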

On Apr 12, 2010, at 11:31 AM, Scott Marlowe wrote:

   

On Mon, Apr 12, 2010 at 12:22 PM, A Bgentosa...@gmail.com  wrote:
 

Hello!

I have a table (think of it as a table of log messages)

time |  message
---
1  |   a
2  |   b
3  |   b
4  |  b
5  |  a

the three 'b' are the same message, so I would like to write a query
that would give me a result that is similar to what the unix command
uniq -c would give:

first |  message | last | count
--
1 | a  |   1   | 1
2 | b  |   4   | 3--- here it squeezes
similar consecutive messages into a single row
5 | a  |   5   | 1

How do I write such a command?
   

Pretty straight ahead:

select min(t), message, max(t), count(*) from table group by message.
 

That was my first though too, but it combines everything not just adjacent 
messages.

Something like this, maybe

select t1.message, t1.time as first, t2.time as last, t2.time-t1.time+1 as count
 from foo as t1, foo as t2
 where t1.time= t2.time and t1.message = t2.message
 and not exists
 (select * from foo as t3
  where (t3.time between t1.time and t2.time and t3.message  
t1.message)
  or (t3.time = t2.time + 1 and t3.message = t1.message)
  or (t3.time = t1.time - 1 and t3.message = t1.message));

  message | first | last | count
-+---+--+---
  a   | 1 |1 | 1
  b   | 2 |4 | 3
  a   | 5 |5 | 1

That'll only work if the time values are contiguous, but there's probably a
similar trick for non-contiguous.

Cheers,
   Steve


   



--

Kenichiro Tanaka
K.K.Ashisuto
http://www.ashisuto.co.jp/english/index.html



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help with a query (uniq -c)

2010-04-13 Thread A B
Thank you to all who have replied. I will study your suggestions and see
what will work best in my case.

2010/4/13 Kenichiro Tanaka ketan...@ashisuto.co.jp:
 Hello.

 I try with With Query.
 http://www.postgresql.org/docs/8.4/static/queries-with.html

 #We can use With Queries   v8.4
 #That'll only work if the time values are contiguous, but there's probably a
 #similar trick for non-contiguous ,too.

 --create data
 drop table foo;
 create table foo( time int,message text);
 insert into foo values(1,'a');
 insert into foo values(2,'b');
 insert into foo values(3,'b');
 insert into foo values(4,'c');
 insert into foo values(5,'a');
 insert into foo values(6,'c');
 insert into foo values(7,'c');
 insert into foo values(8,'a');
 insert into foo values(9,'a');
 insert into foo values(10,'a');

 --begin Answer
 with recursive r as (
 select foo.time,foo.message,1  as dummy from foo
 union all
 select foo.time,foo.message,r.dummy+1 from foo , r
 where foo.time=r.time-1 and foo.message=r.message
 )
 ,rr as (
 select foo.time,foo.message,'OLID' as flag  from foo
 union all
 select foo.time,foo.message,'DUP' as flag from foo , rr
 where foo.time-1=rr.time-2 and foo.message=rr.message
 )
 select time min,time+max(dummy)-1 max,message,max(dummy) counts
 from r where time not in (select distinct (time+1) times from rr
 where flag='DUP') group by time,message order by time;

 --result
 postgres(# where flag='DUP') group by time,message order by time;
  min | max | message | counts
 -+-+-+
   1 |   1 | a       |      1
   2 |   3 | b       |      2
   4 |   4 | c       |      1
   5 |   5 | a       |      1
   6 |   7 | c       |      2
   8 |  10 | a       |      3
 (6 rows)

 --end

 But I think some one can provide more simple SQL.

 Thank you.

 On Apr 12, 2010, at 11:31 AM, Scott Marlowe wrote:



 On Mon, Apr 12, 2010 at 12:22 PM, A Bgentosa...@gmail.com  wrote:


 Hello!

 I have a table (think of it as a table of log messages)

 time |  message
 ---
 1      |   a
 2      |   b
 3      |   b
 4      |  b
 5      |  a

 the three 'b' are the same message, so I would like to write a query
 that would give me a result that is similar to what the unix command
 uniq -c would give:

 first |  message | last | count
 --
 1     |     a              |   1   |     1
 2     |     b              |   4   |     3--- here it squeezes
 similar consecutive messages into a single row
 5     |     a              |   5   |     1

 How do I write such a command?


 Pretty straight ahead:

 select min(t), message, max(t), count(*) from table group by message.


 That was my first though too, but it combines everything not just adjacent
 messages.

 Something like this, maybe

 select t1.message, t1.time as first, t2.time as last, t2.time-t1.time+1 as
 count
         from foo as t1, foo as t2
     where t1.time= t2.time and t1.message = t2.message
         and not exists
             (select * from foo as t3
              where (t3.time between t1.time and t2.time and t3.message
  t1.message)
              or (t3.time = t2.time + 1 and t3.message = t1.message)
              or (t3.time = t1.time - 1 and t3.message = t1.message));

  message | first | last | count
 -+---+--+---
  a       |     1 |    1 |     1
  b       |     2 |    4 |     3
  a       |     5 |    5 |     1

 That'll only work if the time values are contiguous, but there's probably
 a
 similar trick for non-contiguous.

 Cheers,
   Steve





 --
 
 Kenichiro Tanaka
 K.K.Ashisuto
 http://www.ashisuto.co.jp/english/index.html
 


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need some help with a query (uniq -c)

2010-04-12 Thread A B
Hello!

I have a table (think of it as a table of log messages)

time |  message
---
1  |   a
2  |   b
3  |   b
4  |  b
5  |  a

the three 'b' are the same message, so I would like to write a query
that would give me a result that is similar to what the unix command
uniq -c would give:

first |  message | last | count
--
1 | a  |   1   | 1
2 | b  |   4   | 3 --- here it squeezes
similar consecutive messages into a single row
5 | a  |   5   | 1

How do I write such a command?

I could of course write a plpgsql function that loops but that is not
as interesting as finding out if this can be done in a single simple
command.

Perhaps it would be best to regularly delete neighbouring similar rows while
keeping a count value; that would reduce the number of rows and make the query
more efficient if it is run many times and the number of duplicate messages
is large.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help with a query (uniq -c)

2010-04-12 Thread Scott Marlowe
On Mon, Apr 12, 2010 at 12:22 PM, A B gentosa...@gmail.com wrote:
 Hello!

 I have a table (think of it as a table of log messages)

 time |  message
 ---
 1      |   a
 2      |   b
 3      |   b
 4      |  b
 5      |  a

 the three 'b' are the same message, so I would like to write a query
 that would give me a result that is similar to what the unix command
 uniq -c would give:

 first |  message | last | count
 --
 1     |     a              |   1   |     1
 2     |     b              |   4   |     3     --- here it squeezes
 similar consecutive messages into a single row
 5     |     a              |   5   |     1

 How do I write such a command?

Pretty straight ahead:

select min(t), message, max(t), count(*) from table group by message.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help with a query (uniq -c)

2010-04-12 Thread Sergey Konoplev
On 12 April 2010 22:22, A B gentosa...@gmail.com wrote:

 first |  message | last | count
 --
 1     |     a              |   1   |     1
 2     |     b              |   4   |     3     --- here it squeezes
 similar consecutive messages into a single row
 5     |     a              |   5   |     1

 How do I write such a command?

select min(time) as first, max(time) as last, message, count(*) from
log_table group by message order by 1;

-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com /
Linkedin: http://ru.linkedin.com/in/grayhemp /
JID/GTalk: gray...@gmail.com / Skype: gray-hemp / ICQ: 29353802

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help with a query (uniq -c)

2010-04-12 Thread Steve Atkins

On Apr 12, 2010, at 11:31 AM, Scott Marlowe wrote:

 On Mon, Apr 12, 2010 at 12:22 PM, A B gentosa...@gmail.com wrote:
 Hello!
 
 I have a table (think of it as a table of log messages)
 
 time |  message
 ---
 1  |   a
 2  |   b
 3  |   b
 4  |  b
 5  |  a
 
 the three 'b' are the same message, so I would like to write a query
 that would give me a result that is similar to what the unix command
 uniq -c would give:
 
 first |  message | last | count
 --
 1 | a  |   1   | 1
 2 | b  |   4   | 3 --- here it squeezes
 similar consecutive messages into a single row
 5 | a  |   5   | 1
 
 How do I write such a command?
 
 Pretty straight ahead:
 
 select min(t), message, max(t), count(*) from table group by message.

That was my first thought too, but it combines everything, not just adjacent
messages.

Something like this, maybe

select t1.message, t1.time as first, t2.time as last, t2.time-t1.time+1 as
count
from foo as t1, foo as t2
where t1.time <= t2.time and t1.message = t2.message
and not exists
(select * from foo as t3
 where (t3.time between t1.time and t2.time and t3.message <> t1.message)
 or (t3.time = t2.time + 1 and t3.message = t1.message)
 or (t3.time = t1.time - 1 and t3.message = t1.message));

 message | first | last | count 
-+---+--+---
 a   | 1 |1 | 1
 b   | 2 |4 | 3
 a   | 5 |5 | 1

That'll only work if the time values are contiguous, but there's probably a
similar trick for non-contiguous.

Cheers,
  Steve


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help converting MS SQL stored proc to postgres function

2009-02-01 Thread Ivan Sergio Borgonovo
On Sun, 01 Feb 2009 00:10:52 -0800
Mike Christensen ima...@comcast.net wrote:

 Figured out one way to do it, perhaps I can get some feedback on
 if this is the best way..  Thanks!
 
 CREATE TEMP TABLE temp_ratings
 (
   RecipeId uuid,
   Rating smallint,
   CONSTRAINT id_pk PRIMARY KEY (RecipeId)
 );
 
 INSERT INTO temp_ratings(RecipeId, Rating)
   SELECT RecipeId, Avg(Rating) as Rating FROM RecipeRatings GROUP
 BY RecipeId;
 
 UPDATE Recipes
   SET Rating = tr.Rating
   FROM temp_ratings as tr
   WHERE Recipes.RecipeId = tr.RecipeId AND Recipes.Rating 
 tr.Rating

You can get a similarly concise form using

SELECT ... INTO TEMP table

http://www.postgresql.org/docs/8.3/interactive/sql-selectinto.html

check what temporary table really means regarding transactions,
functions and connections.

[1]
http://www.postgresql.org/docs/8.3/interactive/sql-createtable.html

If specified, the table is created as a temporary table. Temporary
tables are automatically dropped at the end of a session, or
optionally at the end of the current transaction (see ON COMMIT
below). Existing permanent tables with the same name are not visible
to the current session while the temporary table exists, unless they
are referenced with schema-qualified names. Any indexes created on a
temporary table are automatically temporary as well. Optionally,
GLOBAL or LOCAL can be written before TEMPORARY or TEMP. This makes
no difference in PostgreSQL, but see Compatibility.

Of course, depending on the context it may be useful to use ON COMMIT, which
seems to be supported only by the more verbose CREATE path.
Still, the CREATE path offers some shortcuts to avoid specifying the
schema of the temp table.

CREATE TABLE ... (LIKE ...) [1]
and CREATE TABLE AS, which seems the most promising for your needs:
http://www.postgresql.org/docs/8.3/interactive/sql-createtableas.html

not everything is yet as we dream it, but there is still a lot of
syntactic sugar available to exploit.
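
For example, an untested sketch using the table and column names from your
post; this single statement replaces both the CREATE TEMP TABLE and the
INSERT:

    CREATE TEMP TABLE temp_ratings AS
        SELECT RecipeId, avg(Rating) AS Rating
          FROM RecipeRatings
         GROUP BY RecipeId;

The UPDATE ... FROM temp_ratings that follows can stay exactly as you wrote it.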

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need some help converting MS SQL stored proc to postgres function

2009-02-01 Thread Mike Christensen
Hi guys, I'm in the process of migrating my database from MS SQL 2005 to 
PostgreSQL and there's one final stored proc that's giving me some 
problems..  Perhaps someone can give me some help?  Here's the sproc:


 SELECT
   RecipeId, Avg(Rating) as Rating
 INTO #ratings
 FROM RecipeRatings GROUP BY RecipeId

 UPDATE Recipes
   SET Rating = #ratings.Rating FROM Recipes INNER JOIN #ratings ON 
(#ratings.RecipeId = Recipes.RecipeId AND #ratings.Rating <> Recipes.Rating)


 DROP TABLE #ratings

The error is:

ERROR:  syntax error at or near #
LINE 3:   INTO #ratings
  ^

** Error **

ERROR: syntax error at or near #
SQL state: 42601
Character: 53

Perhaps there's a different way to create temp tables?  Even better is 
if someone can re-write the query to not use the temp table, I'm far 
from a SQL expert.  Thanks!!


Mike

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help converting MS SQL stored proc to postgres function

2009-02-01 Thread Tino Wildenhain

Hi,

Mike Christensen wrote:
Hi guys, I'm in the process of migrating my database from MS SQL 2005 to 
PostgreSQL and there's one final stored proc that's giving me some 
problems..  Perhaps someone can give me some help?  Here's the sproc:


 SELECT
   RecipeId, Avg(Rating) as Rating
 INTO #ratings
 FROM RecipeRatings GROUP BY RecipeId

 UPDATE Recipes
   SET Rating = #ratings.Rating FROM Recipes INNER JOIN #ratings ON 
(#ratings.RecipeId = Recipes.RecipeId AND #ratings.Rating <> Recipes.Rating)


would not

UPDATE recipes
   SET rating = r.rating
  FROM (SELECT recipeid, avg(rating) AS rating
          FROM reciperatings
         GROUP BY recipeid) r
 WHERE recipes.recipeid = r.recipeid
   AND recipes.rating <> r.rating

work too w/o temp table?
(untested, can contain errors)

Tino




Re: [GENERAL] Need some help converting MS SQL stored proc to postgres function

2009-02-01 Thread Mike Christensen
Figured out one way to do it, perhaps I can get some feedback on if this 
is the best way..  Thanks!


CREATE TEMP TABLE temp_ratings
(
 RecipeId uuid,
 Rating smallint,
 CONSTRAINT id_pk PRIMARY KEY (RecipeId)
);

INSERT INTO temp_ratings(RecipeId, Rating)
 SELECT RecipeId, Avg(Rating) as Rating FROM RecipeRatings GROUP BY 
RecipeId;


UPDATE Recipes
 SET Rating = tr.Rating
 FROM temp_ratings as tr
 WHERE Recipes.RecipeId = tr.RecipeId AND Recipes.Rating <> tr.Rating

Mike Christensen wrote:
Hi guys, I'm in the process of migrating my database from MS SQL 2005 
to PostgreSQL and there's one final stored proc that's giving me some 
problems..  Perhaps someone can give me some help?  Here's the sproc:


 SELECT
   RecipeId, Avg(Rating) as Rating
 INTO #ratings
 FROM RecipeRatings GROUP BY RecipeId

 UPDATE Recipes
   SET Rating = #ratings.Rating FROM Recipes INNER JOIN #ratings ON 
(#ratings.RecipeId = Recipes.RecipeId AND #ratings.Rating  
Recipes.Rating)


 DROP TABLE #ratings

The error is:

ERROR:  syntax error at or near #
LINE 3:   INTO #ratings
  ^

** Error **

ERROR: syntax error at or near #
SQL state: 42601
Character: 53

Perhaps there's a different way to create temp tables?  Even better is 
if someone can re-write the query to not use the temp table, I'm far 
from a SQL expert.  Thanks!!


Mike



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help converting MS SQL stored proc to postgres function

2009-02-01 Thread Mike Christensen

Thanks!  You might be onto something, I see two potential problems though:

1) If the nested select returns no rows (no one has rated the recipe 
before), it would try to set the value to null.  The Rating column is 
non-nullable which is the way I want it.


2) I'm not exactly 100% sure on this, but I think this query will end up 
locking every row in the recipes table which could be tens of thousands, 
and create some perf issues or deadlocks.  Even though I run this query 
once per day to update ratings, I'd like to keep it as streamlined as 
possible..
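
One way to keep it to a single statement and sidestep both points is to keep
Tino's FROM-subquery shape and let the join conditions do the filtering, so
unrated recipes are never touched and only rows whose average actually changed
get written (and therefore locked). Untested sketch with the column names
above:

    UPDATE Recipes
       SET Rating = r.Rating
      FROM (SELECT RecipeId, avg(Rating) AS Rating
              FROM RecipeRatings
             GROUP BY RecipeId) AS r
     WHERE Recipes.RecipeId = r.RecipeId
       AND Recipes.Rating <> r.Rating;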


Mike

Tino Wildenhain wrote:

Hi,

Mike Christensen wrote:
Hi guys, I'm in the process of migrating my database from MS SQL 2005 
to PostgreSQL and there's one final stored proc that's giving me some 
problems..  Perhaps someone can give me some help?  Here's the sproc:


 SELECT
   RecipeId, Avg(Rating) as Rating
 INTO #ratings
 FROM RecipeRatings GROUP BY RecipeId

 UPDATE Recipes
   SET Rating = #ratings.Rating FROM Recipes INNER JOIN #ratings ON 
(#ratings.RecipeId = Recipes.RecipeId AND #ratings.Rating <> Recipes.Rating)


would not

UPDATE recipes
   SET rating = r.rating
  FROM (SELECT recipeid, avg(rating) AS rating
          FROM reciperatings
         GROUP BY recipeid) r
 WHERE recipes.recipeid = r.recipeid
   AND recipes.rating <> r.rating

work too w/o temp table?
(untested, can contain errors)

Tino


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need some help with pl-pgsql

2008-12-24 Thread Adrian Klaver
On Tuesday 23 December 2008 9:57:01 pm justin wrote:


 I have taken your idea and made sure all the variables all appear the
 same and add raise notice for each portion of the command that is failing.

 -
 NOTICE:  _wipvalue: NULL
 CONTEXT:  SQL statement SELECT postProduction(  $1 ,  $2 , FALSE,
 false, 0, Current_User, Current_User,  $3  )
 PL/pgSQL function forceclosewo line 66 at SQL statement

 NOTICE:  _wipqty: 1.
 CONTEXT:  SQL statement SELECT postProduction(  $1 ,  $2 , FALSE,
 false, 0, Current_User, Current_User,  $3  )
 PL/pgSQL function forceclosewo line 66 at SQL statement

 NOTICE:  _wipvalue/_wipqty= NULL
 CONTEXT:  SQL statement SELECT postProduction(  $1 ,  $2 , FALSE,
 false, 0, Current_User, Current_User,  $3  )
 PL/pgSQL function forceclosewo line 66 at SQL statement

 NOTICE:  pwoid: 6916
 CONTEXT:  SQL statement SELECT postProduction(  $1 ,  $2 , FALSE,
 false, 0, Current_User, Current_User,  $3  )
 PL/pgSQL function forceclosewo line 66 at SQL statement

 NOTICE:  TotalCost: NULL
 CONTEXT:  SQL statement SELECT postProduction(  $1 ,  $2 , FALSE,
 false, 0, Current_User, Current_User,  $3  )
 PL/pgSQL function forceclosewo line 66 at SQL statement

 

 So it appears that something is causing _wipvalue to get set to NULL
 somewhere else in the code.  Now when running the failing select
 statement manually  works is because the modified record is still not
 committed yet and its reading the unmodified record.

 the ForceCloseWo() calls several other functions before PostProduction()
 that beat on the header record a few times.  ForceCloseWo does extactly
 what it sounds like closes Work Orders forcible  based on default values
 set in the database.  I'm betting this record has some flaky setting
 somewhere thats either in the posting of raw material or  labor
 operations.


 its late i'm going to bed fight this again later this morning.

 thanks

Looks like you are narrowing it down. Good luck on that and if you need 
another set of eyes don't hesitate to post.

-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need some help with pl-pgsql

2008-12-24 Thread justin



justin wrote:

Adrian Klaver wrote:

- justin jus...@emproshunts.com wrote:




So it appears that something is causing _wipvalue to get set to NULL 
somewhere else in the code.  Now when running the failing select 
statement manually  works is because the modified record is still not 
committed yet and its reading the unmodified record.  

the ForceCloseWo() calls several other functions before 
PostProduction() that beat on the header record a few times.  
ForceCloseWo does extactly what it sounds like closes Work Orders 
forcible  based on default values set in the database.  I'm betting 
this record has some flaky setting somewhere thats either in the 
posting of raw material or  labor operations. 



its late i'm going to bed fight this again later this morning. 


thanks


Found the problem at last.  It was what I thought early this morning: the 
database had some flaky data set to null.  So I modified the tables in 
question, set defaults to zero and updated all the records that had null 
to zero.  Hopefully that fixes the problem going forward.

Thanks to all for helping and pointing me in the right direction to figure 
out what is going on, as I was completely lost on what to do next.







Re: [GENERAL] need some help with pl-pgsql

2008-12-24 Thread Scott Marlowe
On Wed, Dec 24, 2008 at 9:32 AM, justin jus...@emproshunts.com wrote:


 Found the problem at last.  it was what i thought early this morning, the
 database had some flaky data set to null.  so modified the the tables in
 question set defaults to zero and updated all the records  that has null to
 zero.  hopefully that fixes the problem going forward.

  thanks to all for helping, pointing me in the right direction to figure out
 what is going on, as i was completely lost of what next to do.

Would it make sense to set that field to not null?
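
For example (untested sketch, using wo_wipvalue as one of the columns that was
coming up NULL; repeat for any other affected columns):

    -- backfill the existing NULLs first, otherwise SET NOT NULL will fail
    UPDATE wo SET wo_wipvalue = 0 WHERE wo_wipvalue IS NULL;

    ALTER TABLE wo ALTER COLUMN wo_wipvalue SET DEFAULT 0;
    ALTER TABLE wo ALTER COLUMN wo_wipvalue SET NOT NULL;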

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] need some help with pl-pgsql

2008-12-23 Thread justin

I have a function written in pl/pgsql.  It runs just fine, except it's doing
something really stupid.

The function runs just fine till this select statement

Select (wo_wipvalue/wo_qtyord)
into _TotalCost
from wo
where wo_id = pWoid ;

This goes to the work order header table to get the current value
divided by the current qty that's been completed so far, then sticks the
value into the _TotalCost variable, based on the parameter passed in
through pWoid.

The problem is it returns NULL, which is impossible, as I can manually run
the select statement and get $1.589445.

I have no idea how to fix this problem going forward???


and the database is 8.3.5 running Ubuntu 8.0 lts


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need some help with pl-pgsql

2008-12-23 Thread Adrian Klaver
On Tuesday 23 December 2008 1:04:49 pm justin wrote:
 have a function written in pgsql   it runs just fine, except its doing
 something really stupid.

 The function runs just fine till this select statement

 Select (wo_wipvalue/wo_qtyord)
  into _TotalCost
  from wo
  where wo_id = pWoid ;

 This goes to the work order header table to gets the current value
 divided by current qty thats been completed so far, then sticks the
 value into _TotalCost variable based on the parameter passed into
 variable pWoid.

 problem is it returns NULL  which is impossible as i can manually run
 the select statement and get the $1.589445

 I have no idea how to fix this problem going forward???


 and the database is 8.3.5 running Ubuntu 8.0 lts

Would help to see the whole function. Also make sure you did not name one of 
the variables the same as a column name, this will confuse plpgsql.  Are you 
using the same value for wo_id in the function as in the manual select 
statement?

-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need some help with pl-pgsql

2008-12-23 Thread justin

Adrian Klaver wrote:

On Tuesday 23 December 2008 1:04:49 pm justin wrote:
  

have a function written in pgsql   it runs just fine, except its doing
something really stupid.

The function runs just fine till this select statement

Select (wo_wipvalue/wo_qtyord)
 into _TotalCost
 from wo
 where wo_id = pWoid ;

This goes to the work order header table to gets the current value
divided by current qty thats been completed so far, then sticks the
value into _TotalCost variable based on the parameter passed into
variable pWoid.

problem is it returns NULL  which is impossible as i can manually run
the select statement and get the $1.589445

I have no idea how to fix this problem going forward???


and the database is 8.3.5 running Ubuntu 8.0 lts



Would help to see the whole function. Also make sure you did not name one of 
the variables the same as a column name, this will confuse plpgsql.  Are you 
using the same value for wo_id in the function as in the manual select 
statement?
  



First, the function has been running for months and has never had a 
problem.  No changes to the database schema.  Second, I use a variable 
naming scheme completely different from column names: _ is always the 
first character in variables, and p is always the first character in 
passed parameters.


Take a look at the screenshot and be in awe as I am



postproduction(pwoid integer, pqty numeric, pbackflush boolean, 
pbackflushoperations boolean, pitemlocseries integer, psuuser text, 
prnuser text, pdate date)

 RETURNS integer AS
$BODY$ DECLARE
   _woNumber TEXT;
   _itemlocSeries INTEGER;
  
   _parentQty NUMERIC;

   _qty NUMERIC;
   _TotalCost numeric;

BEGIN

   IF (pQty = 0) THEN
   RETURN 0;
   END IF;

   IF ( ( SELECT wo_status
   FROM wo
   WHERE (wo_id=pWoid) ) NOT IN  ('R','E','I') ) THEN
   RETURN -1;
   END IF;

   --If this is item type Job then we are using the wrong function
   SELECT item_type INTO _check
   FROM wo, itemsite, item
   WHERE ((wo_id=pWoid)
   AND (wo_itemsite_id=itemsite_id)
   AND (itemsite_item_id=item_id)
   AND (item_type = 'J'));
  
   IF (FOUND) THEN
   RAISE EXCEPTION 'Work orders for job items are posted when 
quantities are shipped on the associated sales order';

   END IF;

   SELECT formatWoNumber(pWoid) INTO _woNumber;

   SELECT roundQty(item_fractional, pQty) INTO _parentQty
   FROM wo,
   itemsite,
   item
   WHERE ((wo_itemsite_id=itemsite_id)
   AND (itemsite_item_id=item_id)
   AND (wo_id=pWoid));
  
   --  Create the material receipt transaction

   IF (pItemlocSeries = 0) THEN
   SELECT NEXTVAL('itemloc_series_seq') INTO _itemlocSeries;
   ELSE
   _itemlocSeries = pItemlocSeries;
   END IF;

  

   --Lets get Wips Current total cost

   Select (wo_wipvalue/wo_qtyord)
   into _TotalCost 
   from wo 
   where wo_id = pWoid ;

   --Moves WIP into Inventory.
   SELECT postInvTrans( itemsite_id,
   'RM',
   _parentQty,
   'W/O',
   'WO',
   _woNumber,
   '',
   'Receive Inventory from Manufacturing',
   costcat_asset_accnt_id,
   costcat_wip_accnt_id,
   _itemlocSeries,
   true,
   _TotalCost,
   pDate::timestamp ) INTO _invhistid
   FROM wo,
   itemsite,
   costcat
   WHERE ( (wo_itemsite_id=itemsite_id)
   AND (itemsite_costcat_id=costcat_id)
   AND (wo_id=pWoid) );

   --  Increase this W/O's received qty decrease its WIP value
   UPDATE wo SET
   wo_qtyrcv = (wo_qtyrcv + _parentQty),
   wo_wipvalue = (wo_wipvalue - (_TotalCost*pQty))
   FROM itemsite,
   item
   WHERE ((wo_itemsite_id=itemsite_id)
   AND (itemsite_item_id=item_id)
   AND (wo_id=pWoid));

   --  Make sure the W/O is at issue status
   UPDATE wo SET
   wo_status='I'
   WHERE (wo_id=pWoid);


  
   RETURN _itemlocSeries;


END;
$BODY$
 LANGUAGE 'plpgsql' VOLATILE
 COST 100;
ALTER FUNCTION postproduction(integer, numeric, boolean, boolean, 
integer, text, text, date) OWNER TO justin;
GRANT EXECUTE ON FUNCTION postproduction(integer, numeric, boolean, 
boolean, integer, text, text, date) TO justin;
GRANT EXECUTE ON FUNCTION postproduction(integer, numeric, boolean, 
boolean, integer, text, text, date) TO public;




Re: [GENERAL] need some help with pl-pgsql

2008-12-23 Thread Adrian Klaver

- justin jus...@emproshunts.com wrote:

 Adrian Klaver wrote:
 
 Would help to see the
 whole function. Also make sure you did not name one of
 the variables the same as a column name, this will confuse plpgsql. 
 Are you
 using the same value for wo_id in the function as in the manual select
 statement?
 
 First the funciton has been running for months and never has had a
 problem. No changes to the database scheme. Second use variable naming
 scheme completely different from column names. _ always is the first
 character in variables. p is always the first character in passed
 parameters.
 
 Take a look at the screen shot and be in aw as i am
 
 
 
 postproduction(pwoid integer, pqty numeric, pbackflush boolean,
 pbackflushoperations boolean, pitemlocseries integer, psuuser text,
 prnuser text, pdate date)
 RETURNS integer AS
 $BODY$ DECLARE
 _woNumber TEXT;
 _itemlocSeries INTEGER;
 
 _parentQty NUMERIC;
 _qty NUMERIC;
 _TotalCost numeric;
 
 BEGIN
 
 IF (pQty = 0) THEN
 RETURN 0;
 END IF;
 
 IF ( ( SELECT wo_status
 FROM wo
 WHERE (wo_id=pWoid) ) NOT IN ('R','E','I') ) THEN
 RETURN -1;
 END IF;
 
 --If this is item type Job then we are using the wrong function
 SELECT item_type INTO _check
 FROM wo, itemsite, item
 WHERE ((wo_id=pWoid)
 AND (wo_itemsite_id=itemsite_id)
 AND (itemsite_item_id=item_id)
 AND (item_type = 'J'));
 
 IF (FOUND) THEN
 RAISE EXCEPTION 'Work orders for job items are posted when quantities
 are shipped on the associated sales order';
 END IF;
 
 SELECT formatWoNumber(pWoid) INTO _woNumber;
 
 SELECT roundQty(item_fractional, pQty) INTO _parentQty
 FROM wo,
 itemsite,
 item
 WHERE ((wo_itemsite_id=itemsite_id)
 AND (itemsite_item_id=item_id)
 AND (wo_id=pWoid));
 
 -- Create the material receipt transaction
 IF (pItemlocSeries = 0) THEN
 SELECT NEXTVAL('itemloc_series_seq') INTO _itemlocSeries;
 ELSE
 _itemlocSeries = pItemlocSeries;
 END IF;
 
 
 
 --Lets get Wips Current total cost
 Select (wo_wipvalue/wo_qtyord)
 into _TotalCost
 from wo
 where wo_id = pWoid ;
 --Moves WIP into Inventory.
 SELECT postInvTrans( itemsite_id,
 'RM',
 _parentQty,
 'W/O',
 'WO',
 _woNumber,
 '',
 'Receive Inventory from Manufacturing',
 costcat_asset_accnt_id,
 costcat_wip_accnt_id,
 _itemlocSeries,
 true,
 _TotalCost,
 pDate::timestamp ) INTO _invhistid
 FROM wo,
 itemsite,
 costcat
 WHERE ( (wo_itemsite_id=itemsite_id)
 AND (itemsite_costcat_id=costcat_id)
 AND (wo_id=pWoid) );
 
 -- Increase this W/O's received qty decrease its WIP value
 UPDATE wo SET
 wo_qtyrcv = (wo_qtyrcv + _parentQty),
 wo_wipvalue = (wo_wipvalue - (_TotalCost*pQty))
 FROM itemsite,
 item
 WHERE ((wo_itemsite_id=itemsite_id)
 AND (itemsite_item_id=item_id)
 AND (wo_id=pWoid));
 
 -- Make sure the W/O is at issue status
 UPDATE wo SET
 wo_status='I'
 WHERE (wo_id=pWoid);
 
 
 
 RETURN _itemlocSeries;
 
 END;
 $BODY$
 LANGUAGE 'plpgsql' VOLATILE
 COST 100;
 ALTER FUNCTION postproduction(integer, numeric, boolean, boolean,
 integer, text, text, date) OWNER TO justin;
 GRANT EXECUTE ON FUNCTION postproduction(integer, numeric, boolean,
 boolean, integer, text, text, date) TO justin;
 GRANT EXECUTE ON FUNCTION postproduction(integer, numeric, boolean,
 boolean, integer, text, text, date) TO public;

Hmmm. A couple of suggestions while I ponder this more. For clarity sake decide 
on a case scheme. I start getting twitchy when I see pwoid and pWoid refer to 
the same thing. This depends on case folding being consistent, not something I 
like to count on. Second you might want to put in a few RAISE NOTICE statements 
to see what values are being pulled from the db for wo_wipvalue and wo_qtyord 
and what is being supplied to the WHERE clause for pWoid.
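
For example, something like this just before the failing SELECT (untested
sketch; _wipvalue and _wipqty would have to be added as NUMERIC variables in
the DECLARE section):

    RAISE NOTICE 'pWoid: %', pWoid;

    SELECT wo_wipvalue, wo_qtyord
      INTO _wipvalue, _wipqty
      FROM wo
     WHERE wo_id = pWoid;

    RAISE NOTICE 'wo_wipvalue: %, wo_qtyord: %', _wipvalue, _wipqty;

That should show whether the row is found at all and which of the two columns
is coming back NULL.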

Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need some help with pl-pgsql

2008-12-23 Thread Adrian Klaver
On Tuesday 23 December 2008 3:36:46 pm Adrian Klaver wrote:


 Hmmm. A couple of suggestions while I ponder this more. For clarity sake
 decide on a case scheme. I start getting twitchy when I see pwoid and pWoid
 refer to the same thing. This depends on case folding being consistent, not
 something I like to count on. Second you might want to put in a few RAISE
 NOTICE statements to see what values are being pulled from the db for
 wo_wipvalue and wo_qtyord and what is being supplied to the WHERE clause
 for pWoid.

 Adrian Klaver
 akla...@comcast.net

SELECT postInvTrans( itemsite_id,
            'RM',
            _parentQty,
            'W/O',
            'WO',
            _woNumber,
            '',
            'Receive Inventory from Manufacturing',
            costcat_asset_accnt_id,
            costcat_wip_accnt_id,
            _itemlocSeries,
            true,
            _TotalCost,
            pDate::timestamp ) INTO _invhistid

Potential issue. I don't see where _invhistid is declared.

-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need some help with pl-pgsql

2008-12-23 Thread justin

Adrian Klaver wrote:

On Tuesday 23 December 2008 3:36:46 pm Adrian Klaver wrote:

  

Hmmm. A couple of suggestions while I ponder this more. For clarity sake
decide on a case scheme. I start getting twitchy when I see pwoid and pWoid
refer to the same thing. This depends on case folding being consistent, not
something I like to count on. Second you might want to put in a few RAISE
NOTICE statements to see what values are being pulled from the db for
wo_wipvalue and wo_qtyord and what is being supplied to the WHERE clause
for pWoid.

Adrian Klaver
akla...@comcast.net



SELECT postInvTrans( itemsite_id,
'RM',
_parentQty,
'W/O',
'WO',
_woNumber,
'',
'Receive Inventory from Manufacturing',
costcat_asset_accnt_id,
costcat_wip_accnt_id,
_itemlocSeries,
true,
_TotalCost,
pDate::timestamp ) INTO _invhistid

Potential issue. I don't see where _invhistid is declared.

  
It's declared.  I noticed this function has more variables declared than 
it ever used, so I removed them when posting to the list.  I did not 
want any more confusion, so I removed one too many, sorry.


This function was completely rewritten several months ago along with a 
lot of other functions, so there is still a lot of clean-up to do, making 
notes and removing stuff that does not need to be in the function.

I just can't figure out why this function can't find the record.

This malfunctioning function is called by ForceCloseWo(), which goes 
through the work order completing any outstanding operations and 
issuing material, so it has to find the record first or it will fail 
outright.  Once all material and operations are marked completed it then 
calls PostProduction(), which moves the items out of the WIP tables, puts 
the items into Inventory and does all the accounting stuff.  So I really 
can't figure out what it is doing???




Now, on the mixed case: I come from the Xbase languages, specifically 
FoxPro, which is case insensitive and a typeless language, so I have a few 
really bad habits.  I try to get everything to match but sometimes 
screw up.



Re: [GENERAL] need some help with pl-pgsql

2008-12-23 Thread justin

Adrian Klaver wrote:

- justin jus...@emproshunts.com wrote:

  

Adrian Klaver wrote:

Would help to see the
whole function. Also make sure you did not name one of
the variables the same as a column name, this will confuse plpgsql. 
Are you

using the same value for wo_id in the function as in the manual select
statement?

First the funciton has been running for months and never has had a
problem. No changes to the database scheme. Second use variable naming
scheme completely different from column names. _ always is the first
character in variables. p is always the first character in passed
parameters.

Take a look at the screen shot and be in aw as i am



postproduction(pwoid integer, pqty numeric, pbackflush boolean,
pbackflushoperations boolean, pitemlocseries integer, psuuser text,
prnuser text, pdate date)
RETURNS integer AS
$BODY$ DECLARE
_woNumber TEXT;
_itemlocSeries INTEGER;

_parentQty NUMERIC;
_qty NUMERIC;
_TotalCost numeric;

BEGIN

IF (pQty = 0) THEN
RETURN 0;
END IF;

IF ( ( SELECT wo_status
FROM wo
WHERE (wo_id=pWoid) ) NOT IN ('R','E','I') ) THEN
RETURN -1;
END IF;

--If this is item type Job then we are using the wrong function
SELECT item_type INTO _check
FROM wo, itemsite, item
WHERE ((wo_id=pWoid)
AND (wo_itemsite_id=itemsite_id)
AND (itemsite_item_id=item_id)
AND (item_type = 'J'));

IF (FOUND) THEN
RAISE EXCEPTION 'Work orders for job items are posted when quantities
are shipped on the associated sales order';
END IF;

SELECT formatWoNumber(pWoid) INTO _woNumber;

SELECT roundQty(item_fractional, pQty) INTO _parentQty
FROM wo,
itemsite,
item
WHERE ((wo_itemsite_id=itemsite_id)
AND (itemsite_item_id=item_id)
AND (wo_id=pWoid));

-- Create the material receipt transaction
IF (pItemlocSeries = 0) THEN
SELECT NEXTVAL('itemloc_series_seq') INTO _itemlocSeries;
ELSE
_itemlocSeries = pItemlocSeries;
END IF;



--Lets get Wips Current total cost
Select (wo_wipvalue/wo_qtyord)
into _TotalCost
from wo
where wo_id = pWoid ;
--Moves WIP into Inventory.
SELECT postInvTrans( itemsite_id,
'RM',
_parentQty,
'W/O',
'WO',
_woNumber,
'',
'Receive Inventory from Manufacturing',
costcat_asset_accnt_id,
costcat_wip_accnt_id,
_itemlocSeries,
true,
_TotalCost,
pDate::timestamp ) INTO _invhistid
FROM wo,
itemsite,
costcat
WHERE ( (wo_itemsite_id=itemsite_id)
AND (itemsite_costcat_id=costcat_id)
AND (wo_id=pWoid) );

-- Increase this W/O's received qty decrease its WIP value
UPDATE wo SET
wo_qtyrcv = (wo_qtyrcv + _parentQty),
wo_wipvalue = (wo_wipvalue - (_TotalCost*pQty))
FROM itemsite,
item
WHERE ((wo_itemsite_id=itemsite_id)
AND (itemsite_item_id=item_id)
AND (wo_id=pWoid));

-- Make sure the W/O is at issue status
UPDATE wo SET
wo_status='I'
WHERE (wo_id=pWoid);



RETURN _itemlocSeries;

END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE
COST 100;
ALTER FUNCTION postproduction(integer, numeric, boolean, boolean,
integer, text, text, date) OWNER TO justin;
GRANT EXECUTE ON FUNCTION postproduction(integer, numeric, boolean,
boolean, integer, text, text, date) TO justin;
GRANT EXECUTE ON FUNCTION postproduction(integer, numeric, boolean,
boolean, integer, text, text, date) TO public;



Hmmm. A couple of suggestions while I ponder this more. For clarity sake decide 
on a case scheme. I start getting twitchy when I see pwoid and pWoid refer to 
the same thing. This depends on case folding being consistent, not something I 
like to count on. Second you might want to put in a few RAISE NOTICE statements 
to see what values are being pulled from the db for wo_wipvalue and wo_qtyord 
and what is being supplied to the WHERE clause for pWoid.

Adrian Klaver
akla...@comcast.ne


I have taken your idea and made sure all the variables appear the 
same, and added a RAISE NOTICE for each portion of the command that is failing.


-
NOTICE:  _wipvalue: NULL
CONTEXT:  SQL statement SELECT postProduction(  $1 ,  $2 , FALSE, 
false, 0, Current_User, Current_User,  $3  )

PL/pgSQL function forceclosewo line 66 at SQL statement

NOTICE:  _wipqty: 1.
CONTEXT:  SQL statement SELECT postProduction(  $1 ,  $2 , FALSE, 
false, 0, Current_User, Current_User,  $3  )

PL/pgSQL function forceclosewo line 66 at SQL statement

NOTICE:  _wipvalue/_wipqty= NULL
CONTEXT:  SQL statement SELECT postProduction(  $1 ,  $2 , FALSE, 
false, 0, Current_User, Current_User,  $3  )

PL/pgSQL function forceclosewo line 66 at SQL statement

NOTICE:  pwoid: 6916
CONTEXT:  SQL statement SELECT postProduction(  $1 ,  $2 , FALSE, 
false, 0, Current_User, Current_User,  $3  )

PL/pgSQL function forceclosewo line 66 at SQL statement

NOTICE:  TotalCost: NULL
CONTEXT:  SQL statement SELECT postProduction(  $1 ,  $2 , FALSE, 
false, 0, Current_User, Current_User,  $3  )

PL/pgSQL function forceclosewo line 66 at SQL statement



So it appears that something is causing _wipvalue to get set to NULL 
somewhere else in the code.  Now, the reason running the failing select 
statement manually works is because the modified record is still not 
committed yet and it is reading the unmodified record.

The ForceCloseWo() calls several other functions before PostProduction() 
that beat on the header record a few times.  ForceCloseWo does exactly 
what it sounds like: closes Work Orders forcibly based on default values 
set in the database.  I'm betting this record has some flaky setting 
somewhere that's either in the posting of raw material or labor 
operations.

It's late, I'm going to bed; I'll fight this again later this morning.

thanks

[GENERAL] Need some help

2008-07-01 Thread Jamie Deppeler

Trying to install PostgreSQL 8.3 and I keep getting these errors:

libodbc.so is needed
libodbcinst.so is needed

Hopefully someone can help me

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help

2008-07-01 Thread Chandra ASGI Tech
Hi,

I may not have the answer immediately, but I think more details, like which
OS you are trying to install on and whether you are using a binary or are
building it yourself, would help others to give a solution.

Regards,
ChandraASGI

On Tue, Jul 1, 2008 at 4:37 AM, Jamie Deppeler [EMAIL PROTECTED]
wrote:

 trying to install Postgresql 8.3  and i keep getting these errors

 libodbc.so is needed
 libodbcinst.so is needed

 Hopefully someone can help me

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Need some help

2008-07-01 Thread Andrej Ricnik-Bay
On 01/07/2008, Jamie Deppeler [EMAIL PROTECTED] wrote:
 trying to install Postgresql 8.3  and i keep getting these errors

  libodbc.so is needed
  libodbcinst.so is needed

  Hopefully someone can help me
I'll hazard a guess and assume you're using some sort
of Linux :} ...which distro are you using, how are you
installing postgres?


Cheers,
Tink

-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help

2008-07-01 Thread Allan Kamau
Indeed, no one on the mailing list can look over your shoulder, so the 
additional details Chandra has asked for are much needed.


My guess is that you are trying a binary install, perhaps an rpm. The 
install then complains that it cannot find libraries that it requires to 
be installed on the system; in your case it seems to be ODBC-related 
libraries. If you prefer to proceed with installation from a binary, 
you may want to google for RPM libodbc.so (if your system makes use of 
RPM packages).



Allan.

Chandra ASGI Tech wrote:

Hi,
 
I may not have the answer immediately but I think more details like on 
which OS you are trying to install  and whether you are using the 
binary or are building it yourself, would be helpful for others to 
give a solution.
 
Regards,

ChandraASGI

On Tue, Jul 1, 2008 at 4:37 AM, Jamie Deppeler [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


trying to install Postgresql 8.3  and i keep getting these errors

libodbc.so is needed
libodbcinst.so is needed

Hopefully someone can help me

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org

mailto:pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help

2008-07-01 Thread Chandra ASGI Tech
Probably your installation is missing those ODBC drivers. You may either use the
yum package manager (refer to
http://www.centos.org/modules/newbb/viewtopic.php?topic_id=12749&forum=44&post_id=42050#forumpost42050)
or download the drivers from
http://rpm.pbone.net/index.php3/stat/4/idpl/5594193/com/unixODBC-2.2.12-1.el4s1.1.x86_64.rpm.html

- regards,
ChandraASGI




On Tue, Jul 1, 2008 at 7:30 AM, [EMAIL PROTECTED] wrote:

 OS: Centos 4.6 (eq REL 4)

  Hi,
 
  I may not have the answer immediately but I think more details like on
  which
  OS you are trying to install  and whether you are using the binary or are
  building it yourself, would be helpful for others to give a solution.
 
  Regards,
  ChandraASGI
 
  On Tue, Jul 1, 2008 at 4:37 AM, Jamie Deppeler [EMAIL PROTECTED]
  wrote:
 
  trying to install Postgresql 8.3  and i keep getting these errors
 
  libodbc.so is needed
  libodbcinst.so is needed
 
  Hopefully someone can help me
 
  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general
 
 





Re: [GENERAL] Need for help!

2008-05-16 Thread Martijn van Oosterhout
On Thu, May 15, 2008 at 04:57:03PM -0400, Justin wrote:
 Isn't  that the rub point in HT processor.  A process running in HT 
 virtual processor can lock up a specific chunk of the real processor up 
 that would not normally be locked.  So the process running in the Real 
 processor gets blocked and put to sleep till the process running in HT 
 is cleared out. 

There is no real or virtual processor, HT is symmetric. Both look
like real CPU's but internally they share certain resources. The
advantage is that if a program gets a cache miss it will stall for
dozens of cycles waiting for memory. On a real multicore CPU that's
wasted time, on an HT CPU there is another thread which can (hopefully)
keep running and use the resources the other isn't using.

For programs like GCC, which are waiting for memory 50% of the time or
so, HT can provide a measurable increase in performance. For
computationally expensive programs it may be worse.

 As time has progress the kernel schedulers i imagine have improved to 
 better  understand that this virtual processor locks parts of the real 
 processor up so it needs to schedule things a little better so it don't 
 keep dumping  things into the HT processor that it will need a second 
 latter for another process it queried for the real processor.

The thing is that HT processors share an L1 cache, so switching between
two HT processors on the same die is much less expensive than switching
to another core. But if you have only two running processes it is better
to split them across two cores. Schedulers know this now; they didn't
at first.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Please line up in a tree and maintain the heap invariant while 
 boarding. Thank you for flying nlogn airlines.




[GENERAL] Need for help!

2008-05-15 Thread Semi Noob
Dear all,
I have a PostgreSQL 8.2.5 server and a ~6 GB database with lots of simple
selects using indexes. I see that they use the shared memory a lot.
Before, my server had 4GB of RAM, shmmax 1GB, shared_buffers set to 256
MB, effective_cache_size 300MB; when I tested its performance with options
-c 40 -t 1000, the result was about 54.542 tps, but when I upped the number
of clients to over 64 it refused to run. Now my server has 8GB, shmmax 3 GB,
shared_buffers is 2GB and it uses ~7GB cache; after the benchmark (-c 40 -t
1000) the result is 57.658 (???). But after the upgrade the maximum number of
clients is also 64 (?!?). Is this the maximum number of clients supported by
the pgbench program (my server is on Linux, version 8.2.5; pgbench is on
Windows, PostgreSQL version 8.3.1)? And is 57 tps fast?

Another question: I heard that PostgreSQL does not support HT Technology,
is that right?

Last question: I don't understand shmmax and shared_buffers very well. After
upgrading my server from 4 GB RAM to 8 GB RAM, first I configured shmmax to
2GB and shared_buffers to 1GB and started the server; it ran. After that I
set shmmax to 4GB and restarted; it failed (?!?). The error logs said there
was not enough shared memory! Finally I set shmmax to 3GB and shared_buffers
to 2GB, and it runs. I don't know why, can you explain?
Thanks so much!
Regards,


Re: [GENERAL] Need for help!

2008-05-15 Thread Pavan Deolasee
On Tue, May 13, 2008 at 2:43 PM, Semi Noob [EMAIL PROTECTED] wrote:
  But after upgrade the max clients is
 also 64 (?!?) Is this the maximum clients support by program pgbench (my
 server on Linux ver8.2.5, pgbench on Windows - version postgresql is 8.3.1)?
 And the number 57 tps is fast?


You did not give CPU and disk info. But still 57 seems a small number.
What I guess is you're running pgbench with scale factor 1 (since you
haven't mentioned scale factor) and that causes extreme contention for
smaller tables with large number of clients.

Regarding maximum number of clients, check your max_connections setting.

 Another questions, i heard that PostgreSQL does not support HT Technology,
 is it right?


I'm not sure what do you mean by HT, but if it's hyper threading, then
IMO that statement is not completely true. Postgres is not
multi-threaded, so a single process (or connection) may not be able to
use all the CPUs, but as long as there are multiple connections (each
connection corresponds to one backend process), as many CPUs will be
used.

 Last question, i don't understand so much the shmmax, shared_buffers, after
 upgrading my server from 4 GB RAM to 8 GB RAM, first i configure shmmax to
 2GB, share_buffers to 1GB and start server, it runs, after that i set shmmax
 to 4GB and restart, it fails (?!?). The error logs said that not enough
 share memory!  and final i set shmmax to 3GB and share buffer to 2GB, it
 runs. Don't know why, can you explain?

That doesn't make sense. I am guessing that you are running a 32 bit
OS. 4GB of shmmax won't work on a 32 bit OS.

Thanks,
Pavan

Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need for help!

2008-05-15 Thread Pavan Deolasee
On Thu, May 15, 2008 at 3:48 PM, Semi Noob [EMAIL PROTECTED] wrote:


 I set max_connections is 200.

What error message do you get when you try with more than 64 clients?

 57 seems a small number, according to you, how much tps is normal or fast?

Its difficult to say how much is good. On my laptop for s = 10, c =
40, t = 1000, I get 51 tps. But on a larger 2 CPU, 2 GB, 3 RAID 0
disks for data and a separate disk for xlog, I get  232 tps.

 and what is the different of shared_buffers and effective_cache_size.


shared_buffers is the size of the buffer pool which Postgres uses to
cache the data blocks.
effective_cache_size is usually size of the shared buffer plus
estimate of whatever data OS can cache. Planner uses this
approximation to choose right plan for execution.

http://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html
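
A quick way to see what the server is actually running with, from any SQL
session (pg_settings is a standard system view):

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'effective_cache_size', 'max_connections');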

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need for help!

2008-05-15 Thread Semi Noob
Thank you for your answer!
*You did not give CPU and disk info. But still 57 seems a small number.
What I guess is you're running pgbench with scale factor 1 (since you
haven't mentioned scale factor) and that causes extreme contention for
smaller tables with large number of clients.*

I have 2 CPUs: Intel(R) Xeon(TM) CPU 3.20GHz. Disk: the disk system is RAID-5;
OS CentOS. The scale factor in the pgbench initialization is 100, so it
generates 10,000,000 rows in the accounts table. The fill factor is the default.
By the way, I heard that PostgreSQL works better with RAID-10 than with
RAID-5, is that right?

*   Regarding maximum number of clients, check your max_connections
setting.
*
I set max_connections to 200.

57 seems a small number; according to you, how much tps is normal or fast?
And what is the difference between shared_buffers and effective_cache_size?

Thank you once more!
Regards,
Semi Noob

2008/5/15 Pavan Deolasee [EMAIL PROTECTED]:

 On Tue, May 13, 2008 at 2:43 PM, Semi Noob [EMAIL PROTECTED] wrote:
   But after upgrade the max clients is
  also 64 (?!?) Is this the maximum clients support by program pgbench (my
  server on Linux ver8.2.5, pgbench on Windows - version postgresql is
 8.3.1)?
  And the number 57 tps is fast?
 

 You did not give CPU and disk info. But still 57 seems a small number.
 What I guess is you're running pgbench with scale factor 1 (since you
 haven't mentioned scale factor) and that causes extreme contention for
 smaller tables with large number of clients.

 Regarding maximum number of clients, check your max_connections setting.

  Another questions, i heard that PostgreSQL does not support HT
 Technology,
  is it right?
 

 I'm not sure what do you mean by HT, but if it's hyper threading, then
 IMO that statement is not completely true. Postgres is not
 multi-threaded, so a single process (or connection) may not be able to
 use all the CPUs, but as long as there are multiple connections (each
 connection corresponds to one backend process), as many CPUs will be
 used.

  Last question, i don't understand so much the shmmax, shared_buffers,
 after
  upgrading my server from 4 GB RAM to 8 GB RAM, first i configure shmmax
 to
  2GB, share_buffers to 1GB and start server, it runs, after that i set
 shmmax
  to 4GB and restart, it fails (?!?). The error logs said that not enough
  share memory!  and final i set shmmax to 3GB and share buffer to 2GB, it
  runs. Don't know why, can you explain?

 That doesn't make sense. I am guessing that you are running a 32 bit
 OS. 4GB of shmmax won't work on a 32 bit OS.

 Thanks,
 Pavan

 Pavan Deolasee
 EnterpriseDB http://www.enterprisedb.com



Re: [GENERAL] Need for help!

2008-05-15 Thread Justin



Pavan Deolasee wrote:

On Thu, May 15, 2008 at 3:48 PM, Semi Noob [EMAIL PROTECTED] wrote:
  

I set max_connections is 200.



What error message you get when you try with more than 64 clients ?
  
I have max_connections set to 50.  You want to be careful with this 
setting: if there are a lot of active users and the processes/users get 
hung open, this will be eating up memory on the server.  This number needs 
to be set to the maximum number of users you ever want on the server at 
any one time.  If you get too many hung-open processes, performance gets hurt.
  

57 seems a small number, according to you, how much tps is normal or fast?



Its difficult to say how much is good. On my laptop for s = 10, c =
40, t = 1000, I get 51 tps. But on a larger 2 CPU, 2 GB, 3 RAID 0
disks for data and a separate disk for xlog, I get  232 tps.
  
As Pavan has stated, the TPS number is directly related to how fast your 
hardware is.  51 tps is not very good given the hardware specs you have 
stated.


Now the server I have gets 1500 to 2000 tps depending on the test.
We had a pretty detailed discussion about performance numbers back in March:

http://archives.postgresql.org/pgsql-performance/2008-03/thrd3.php#00370
The thread is called "Benchmark: Dell/Perc 6, 8 disk RAID 10".



RAID 5 is a terrible setup for performance.  RAID 10 seems to be what 
everybody goes with, unless you get into SAN storage or other more 
complicated setups.
  

and what is the different of shared_buffers and effective_cache_size.




shared_buffers is the size of the buffer pool which Postgres uses to
cache the data blocks.
effective_cache_size is usually size of the shared buffer plus
estimate of whatever data OS can cache. Planner uses this
approximation to choose right plan for execution.

http://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html

Thanks,
Pavan

  


Re: [GENERAL] Need for help!

2008-05-15 Thread Shane Ambler

Semi Noob wrote:


My CPU is 2CPU: Intel(R) Xeon(TM) CPU 3.20GHz. Disk: disk system is RAID-5;


Early versions of postgresql had issues with P4 HT CPU's but I believe 
they have been resolved.


I am quite certain that it only related to the early P4's not the Xeon.




--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need for help!

2008-05-15 Thread Scott Marlowe
On Thu, May 15, 2008 at 11:08 AM, Shane Ambler [EMAIL PROTECTED] wrote:
 Semi Noob wrote:

 My CPU is 2CPU: Intel(R) Xeon(TM) CPU 3.20GHz. Disk: disk system is
 RAID-5;

 Early versions of postgresql had issues with P4 HT CPU's but I believe they
 have been resolved.

 I am quite certain that it only related to the early P4's not the Xeon.

The real problem was with the various OS kernels not knowing how to treat
a HT core versus a real core.  Linux in particular was a bad
performer with HT turned on, and pgsql made it suffer more than many
other apps for not knowing the difference.  The linux kernel has known
for some time now how to treat a HT core properly.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need for help!

2008-05-15 Thread Justin



Scott Marlowe wrote:

On Thu, May 15, 2008 at 11:08 AM, Shane Ambler [EMAIL PROTECTED] wrote:
  

Semi Noob wrote:



My CPU is 2CPU: Intel(R) Xeon(TM) CPU 3.20GHz. Disk: disk system is
RAID-5;
  

Early versions of postgresql had issues with P4 HT CPU's but I believe they
have been resolved.

I am quite certain that it only related to the early P4's not the Xeon.



The real problem was with the various OS kernels not know how to treat
a HT core versus a real core.  Linux in particular was a bad
performer with HT turned on, and pgsql made it suffer more than many
other apps for not knowing the difference.  The linux kernel has known
for some time now how to treat a HT core properly.

  
From everything I have read about Hyper-Threading, it should just be 
turned off. There is so much overhead to process that it killed its own 
performance if the application was not designed to take advantage of 
it.  A really cool idea that proved unfeasible at the time.


Intel says it's bringing back hyper-threading for Nehalem: 
http://en.wikipedia.org/wiki/Nehalem_(CPU_architecture)


If you can, I would turn it off and see what the results are.


Re: [GENERAL] Need for help!

2008-05-15 Thread Tom Lane
Justin [EMAIL PROTECTED] writes:
 From every thing i have read about Hyper Threading, it should be just
 turned off.

Depends entirely on what you're doing.  I usually leave it turned on
because compiling Postgres from source is measurably faster with it
than without it on my dual-Xeon box.  I'd recommend experimenting
with your own workload before making any decisions.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need for help!

2008-05-15 Thread Justin



Tom Lane wrote:

Justin [EMAIL PROTECTED] writes:
  

From every thing i have read about Hyper Threading, it should be just
turned off.



Depends entirely on what you're doing.  I usually leave it turned on
because compiling Postgres from source is measurably faster with it
than without it on my dual-Xeon box.  I'd recommend experimenting
with your own workload before making any decisions.

regards, tom lane
  
Since PostgreSQL is not multi-threaded but a single-threaded 
application which spawns little exe's, how is hyper-threading helping 
PostgreSQL at all??




To be perfectly honest, my programming skills with multi-threading apps are 
non-existent, as is my knowledge of the Linux world, but in the Windows world 
single-threaded apps saw no measurable performance boost; quite the opposite, 
it killed the apps' performance, and a lot of multi-threaded apps also got 
their performance smashed.  And if you really want to kill performance, turn 
on HT running W2K.






Re: [GENERAL] Need for help!

2008-05-15 Thread Scott Marlowe
On Thu, May 15, 2008 at 12:53 PM, Justin [EMAIL PROTECTED] wrote:

 Sense PostgreSql  is not a multi-threaded but a single thread application
 which spawns  little exe's  how is hyper threading helping Postgresql at all
 ??

Two ways:

* The stats collector / autovacuum / bgwriter can operate on one CPU
while you the user are using another (whether they're physically
separate cores on different sockets, dual or quad cores on the same
socket, or the sort-of extra CPU provided by hyperthreading P4s).
* You can use > 1 connection, and each new connection spawns another process.

Note that this actually makes postgresql scale to a large number of
CPUs better than many multi-threaded apps, which can have a hard time
using > 1 CPU at a time without a lot of extra help.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need for help!

2008-05-15 Thread Craig Ringer

Justin wrote:

[Since] PostgreSql  is not multi-threaded but a single thread 
application which spawns  little exe's  how is hyper threading helping 
Postgresql at all ??


Multiple threads and multiple processes are two ways to tackle a similar 
problem - that of how to do more than one thing on the CPU(s) at once.


Applications that use multiple cooperating processes benefit from more 
CPUs, CPU cores, and CPU hardware execution threads (HT) just like 
applications that use multiple threads do, so long as there is enough 
work to keep multiple CPU cores busy.


There's really not *that* much difference between a multi-threaded 
executable and an executable with multiple processes cooperating using 
shared memory (like PostgreSQL). Nor is there much difference in how 
they use multiple logical CPUs.


The main difference between the two models is that multiple processes 
with shared memory don't share address space except where they have 
specifically mapped it. This means that it's relatively hard for one 
process to mangle other processes' state, especially if it's properly 
careful with its shared memory. By contrast, it's depressingly easy for 
one thread to corrupt the shared heap or even to corrupt other threads' 
stacks in a multi-threaded executable.


On Windows, threads are usually preferred because Windows has such a 
horrible per-process overhead, but it's very good at creating threads 
quickly and cheaply. On UNIX, which has historically been bad at 
threading and very good at creating and destroying processes, the use of 
multiple processes is preferred.


It's also worth noting that you can combine multi-process and 
multi-threaded operation. For example, if PostgreSQL was ever to support 
evaluating a single query on multiple CPU cores one way it could do that 
would be to spawn multiple threads within a single backend. (Note: I 
know it's not even remotely close to that easy - I've been doing way too 
much threaded coding lately).


So ... honestly, whether PostgreSQL is multi-threaded or multi-process 
just doesn't matter. Even if it was muti-threaded instead of 
multi-process, so long as it can only execute each query on a maximum of 
one core then single queries will not benefit (much) from having 
multiple CPU cores, multiple physical CPUs, or CPUs with hyperthreading. 
However, multiple CPU bound queries running in parallel will benefit 
massively from extra cores or physical CPUs, and might also benefit from 
hyperthreading.



To perfectly honest my programming skills with milti-threading apps is 
non-existent along with Linux world but in the Windows world single 
threaded apps saw no measurable performance boost


Of course. They cannot use the extra logical core for anything, so it's 
just overhead. You will find, though, that hyperthreading may improve 
system responsiveness under load even when using only single threaded 
apps, because two different single threaded apps can run (kind of) at 
the same time.


It's pretty useless compared to real multiple cores, though.

and allot of multi-threaded apps also got 
there performance smashed?


That will depend a lot on details of CPU cache use, exactly what they 
were doing on their various threads, how their thread priorities were 
set up, etc. Some apps benefit, some lose.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need for help!

2008-05-15 Thread Justin



Craig Ringer wrote:

Justin wrote:

[Since] PostgreSql  is not multi-threaded but a single thread 
application which spawns  little exe's  how is hyper threading 
helping Postgresql at all ??


Multiple threads and multiple processes are two ways to tackle a 
similar problem - that of how to do more than one thing on the CPU(s) 
at once.


Applications that use multiple cooperating processes benefit from more 
CPUs, CPU cores, and CPU hardware execution threads (HT) just like 
applications that use multiple threads do, so long as there is enough 
work to keep multiple CPU cores busy.


There's really not *that* much difference between a multi-threaded 
executable and an executable with multiple processes cooperating using 
shared memory (like PostgreSQL). Nor is there much difference in how 
they use multiple logical CPUs.


The main difference between the two models is that multiple processes 
with shared memory don't share address space except where they have 
specifically mapped it. This means that it's relatively hard for one 
process to mangle other processes' state, especially if it's properly 
careful with its shared memory. By contrast, it's depressingly easy 
for one thread to corrupt the shared heap or even to corrupt other 
threads' stacks in a multi-threaded executable.


On Windows, threads are usually preferred because Windows has such a 
horrible per-process overhead, but it's very good at creating threads 
quickly and cheaply. On UNIX, which has historically been bad at 
threading and very good at creating and destroying processes, the use 
of multiple processes is preferred.


It's also worth noting that you can combine multi-process and 
multi-threaded operation. For example, if PostgreSQL was ever to 
support evaluating a single query on multiple CPU cores one way it 
could do that would be to spawn multiple threads within a single 
backend. (Note: I know it's not even remotely close to that easy - 
I've been doing way too much threaded coding lately).


So ... honestly, whether PostgreSQL is multi-threaded or multi-process 
just doesn't matter. Even if it was muti-threaded instead of 
multi-process, so long as it can only execute each query on a maximum 
of one core then single queries will not benefit (much) from having 
multiple CPU cores, multiple physical CPUs, or CPUs with 
hyperthreading. However, multiple CPU bound queries running in 
parallel will benefit massively from extra cores or physical CPUs, and 
might also benefit from hyperthreading.



To perfectly honest my programming skills with milti-threading apps 
is non-existent along with Linux world but in the Windows world 
single threaded apps saw no measurable performance boost


Of course. They cannot use the extra logical core for anything, so 
it's just overhead. You will find, though, that hyperthreading may 
improve system responsiveness under load even when using only single 
threaded apps, because two different single threaded apps can run 
(kind of) at the same time.


Isn't that the rub point in an HT processor?  A process running in the HT 
virtual processor can lock up a specific chunk of the real processor 
that would not normally be locked.  So the process running in the real 
processor gets blocked and put to sleep till the process running in HT 
is cleared out.  Now the problem on Windows, to my understanding, is that 
the kernel scheduler did not understand that HT was a virtual processor, 
so it screwed up scheduling orders and whatnot. This added a lot of 
overhead to the processor to sort out what was going on, sending sleep 
commands to keep the processor from rolling over and dying.


As time has progressed, I imagine the kernel schedulers have improved to 
better understand that this virtual processor locks parts of the real 
processor up, so they need to schedule things a little better and not 
keep dumping things into the HT processor that will be needed a second 
later for another process queued for the real processor.




It's pretty useless compared to real multiple cores, though.


and allot of multi-threaded apps also got there performance smashed?


That will depend a lot on details of CPU cache use, exactly what they 
were doing on their various threads, how their thread priorities were 
set up, etc. Some apps benefit, some lose.
I understand these things in theory; I just never had to do any of the 
programming.  Life has taught me that theory and books differ a lot 
from real life and facts.


--
Craig Ringer



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need some help on figuring out how to write a query

2008-02-23 Thread Jorge Godoy
On Friday 22 February 2008 13:10:20 Justin wrote:
 I'm not after small code, I'm after maintainable code, where the date
 range for this report would always be changing.  So if I can get it to
 one select statement I would only have to pass in some variables and it
 would create the moving average.  Plus, what if the sales people decide
 they want to change the moving average from 10 weeks to 5 weeks, or change it
 to 15 weeks.  People drive me nuts with I want it to do this or that;
 of course they have no idea how complicated it sometimes is to get what
 they want.

Then just add more parameters to your function.  Instead of just a start date, 
also include an optional period - in weeks, days, hours, whatever you think is 
the best granularity for this.

So, in your interface, add the period and make 10 the default value. 



-- 
Jorge Godoy  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] need some help on figuring out how to write a query

2008-02-22 Thread Jorge Godoy
On Thursday 21 February 2008 18:37:47 Justin wrote:

 Now I could write a function to do this, or do it in a C++ program that
 creates a query with all kinds of unions.  I'm wondering if there is a way
 to create this in a single select statement?
 I can't think of a way to do it.

Why do you need it in one query?  Think of maintainability, not code size.

Solve the problem in parts, calculating it for one week -- or ten, you can use 
the interval type -- and then moving on...

The function would look like: 

WHILE start_date + '10 weeks'::interval < today:
    SELECT INTO weeks_avg avg(value) FROM table
        WHERE date BETWEEN start_date AND start_date + '10 weeks'::interval;
    start_date := start_date + '1 week'::interval;
END WHILE;


Start from that and you'll have it done.  (Of course, above is pseudo code and 
untested...)
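
For what it's worth, here is a rough, untested sketch of doing the whole
thing in one statement, in case that route is preferred.  The names are only
placeholders: it assumes a table sales_lines(order_date date, qty numeric,
price numeric) and a hard-coded starting week and range length, and it
counts weeks with no sales as zero rather than skipping them:

SELECT wk.week_start,
       (SELECT coalesce(sum(s.qty * s.price), 0) / 10.0
          FROM sales_lines s
         WHERE s.order_date >= wk.week_start - interval '9 weeks'
           AND s.order_date <  wk.week_start + interval '1 week'
       ) AS moving_avg_10wk
  FROM (SELECT date_trunc('week', timestamp '2007-09-03')
               + g.i * interval '1 week' AS week_start
          FROM generate_series(0, 25) AS g(i)) AS wk
 ORDER BY wk.week_start;

The correlated subquery sums each trailing 10-week window and divides by 10,
which is the same as averaging the ten weekly totals when empty weeks count
as zero.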

-- 
Jorge Godoy  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] need some help on figuring out how to write a query

2008-02-22 Thread Justin
I'm not after small code, I'm after maintainable code, where the date 
range for this report would always be changing.  So if I can get it to 
one select statement I would only have to pass in some variables and it 
would create the moving average.  Plus, what if the sales people decide 
they want to change the moving average from 10 weeks to 5 weeks, or change it 
to 15 weeks.  People drive me nuts with I want it to do this or that; 
of course they have no idea how complicated it sometimes is to get what 
they want.


Thanks you for your ideas Dean and Jorge gives me some ideas to play with.

Jorge Godoy wrote:

On Thursday 21 February 2008 18:37:47 Justin wrote:

  

Now I could write a function to do this, or do it in a C++ program that
creates a query with all kinds of unions.  I'm wondering if there is a way
to create this in a single select statement?
I can't think of a way to do it.



Why do you need it in one query?  Think of maintainability, not code size.

Solve the problem in parts, calculating it for one week -- or ten, you can use 
the interval type -- and then moving on...


The function would look like: 


WHILE start_date + '10 weeks'::interval < today:
    SELECT INTO weeks_avg avg(value) FROM table
        WHERE date BETWEEN start_date AND start_date + '10 weeks'::interval;
    start_date := start_date + '1 week'::interval;
END WHILE;


Start from that and you'll have it done.  (Of course, above is pseudo code and 
untested...)


  


[GENERAL] need some help on figuring out how to write a query

2008-02-21 Thread Justin
I need to write a query against the sales order table where we get qty ordered * 
price for all line items, grouped by week, then average that to create a moving 
average over the past 10 weeks of entered sales order line items.

So what will happen is someone will enter a date range going back 6 
months from the present date; the query will need to get all the sales 
order line items that fall in that date range, by week, then average the 
first 10 weeks, drop off the first week and move from the 2nd week to the 
11th week to create another average, then drop the 2nd week off and move from the 
3rd week to the 12th week in the query date range to create another average, 
and so on till the end of the date range.  This will create a moving 
average for the sales orders entered.


Now i could write a function to do this or do it in C++ program that 
creates query with all kinds of unions.  I'm wondering if there is a way 
to create this in a single select statement??

I can't think of a way to do it???

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] need some help on figuring out how to write a query

2008-02-21 Thread Dean Gibson (DB Administrator)

On 2008-02-21 13:37, Justin wrote:
...  I'm wondering if there is a way to create this in a single select 
statement?

I can't think of a way to do it.

Break down your problem using VIEWs.  Create a VIEW that gets just ONE 
of the averages, based on a starting date.  Then create a SELECT that 
gets data from the VIEW as though it were an actual table.


Once you get it working, you can replace the references to the VIEWs in 
the SELECT statement with the definitions of the VIEWs, but I would not 
do that unless the result is relatively simple and easy to understand.  
Maintainability should be your goal.
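
For example, a rough, untested sketch along those lines, assuming a
hypothetical sales_lines(order_date date, qty numeric, price numeric) table
(the date below is just a placeholder for the chosen starting week):

CREATE VIEW weekly_sales AS
SELECT date_trunc('week', order_date) AS week_start,
       sum(qty * price)               AS weekly_total
  FROM sales_lines
 GROUP BY date_trunc('week', order_date);

-- the moving average for the 10-week window ending at a given week:
SELECT avg(weekly_total)
  FROM weekly_sales
 WHERE week_start >  timestamp '2008-01-07' - interval '10 weeks'
   AND week_start <= timestamp '2008-01-07';

Note that this version averages only the weeks that actually had sales.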


-- Dean

--
Mail to my list address MUST be sent via the mailing list.
All other mail to my list address will bounce.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] need some help on figuring out how to write a query

2008-02-21 Thread Scott Marlowe
On Thu, Feb 21, 2008 at 8:09 PM, Dean Gibson (DB Administrator)

  Mail to my list address MUST be sent via the mailing list.
  All other mail to my list address will bounce.

Totally uncool.  There are settings in the mail server software that
will change your outgoing messages to not be from you but from the
list, so that reply and reply all both go to [EMAIL PROTECTED]
and not you.

Not that I'll see the bounce messages.  Most go in the spam bin.  But
still it creates load on everyone else when there's a setting to fix
it.  Since I like to receive both direct and from the list, and my
client knows that they're the same message so it doesn't show it
twice, I use the default settings and don't know how to change the
mailing list manager stuff.  But it's been posted here dozens of
times.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Need quick help with standalone mode

2007-07-31 Thread RW
Hi!

I'm really in big trouble with a production database. It doesn't accept
connections anymore:

2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
vacuumed within 999832 transactions
2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
a full-database VACUUM in userbase.
2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
vacuumed within 999832 transactions
2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
a full-database VACUUM in userbase...

So I tried this script (after shutting down the postmaster):

#!/bin/bash

/usr/local/pgsql81/bin/postgres -D /data/pgsql/data/lindau userbase <<SQL
VACUUM FULL VERBOSE ANALYZE
SQL

But all I get is:

backend 2007-07-31 19:27:33 CEST   WARNING:  database userbase must
be vacuumed within 999831 transactions
2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
a full-database VACUUM in userbase.
2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
vacuumed within 999830 transactions
...
2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
vacuumed within 999809 transactions
2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
a full-database VACUUM in userbase.
2007-07-31 19:27:33 CEST   ERROR:  could not access status of
transaction 539227074
2007-07-31 19:27:33 CEST   DETAIL:  could not open file pg_clog/0202:
No such file or directory

I've provided the data path, so I don't understand the "could not
open file" entry.  The database version is 8.1.5
and it runs on Red Hat Linux 4 AS x86_64.

Any hints?

Thanks!
Robert

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Need quick help with standalone mode

2007-07-31 Thread Merlin Moncure
On 7/31/07, RW [EMAIL PROTECTED] wrote:
 Hi!

 I'm really in big trouble with a production database. It doesn't accept
 connections anymore:

 2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
 vacuumed within 999832 transactions
 2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
 a full-database VACUUM in userbase.
 2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
 vacuumed within 999832 transactions
 2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
 a full-database VACUUM in userbase...

 So I tried this script (after shutting down the postmaster):

 #!/bin/bash

 /usr/local/pgsql81/bin/postgres -D /data/pgsql/data/lindau userbase  SQL
 VACUUM FULL VERBOSE ANALYZE
 SQL

 But all I get is:

 backend 2007-07-31 19:27:33 CEST   WARNING:  database userbase must
 be vacuumed within 999831 transactions
 2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
 a full-database VACUUM in userbase.
 2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
 vacuumed within 999830 transactions
 ...
 2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
 vacuumed within 999809 transactions
 2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
 a full-database VACUUM in userbase.
 2007-07-31 19:27:33 CEST   ERROR:  could not access status of
 transaction 539227074
 2007-07-31 19:27:33 CEST   DETAIL:  could not open file pg_clog/0202:
 No such file or directory

 I've provided the data path. So I don't understand the entry could not
 open file The database version is 8.1.5
 and it runs on Redhat Linux 4 AS x86_64.


First of all, update the postmaster.  This was a bug that was fixed in
the 8.1 series.

There is a resolution to this problem.  Here are Alvaro's notes on the subject:

On 2/6/07, Alvaro Herrera [EMAIL PROTECTED] wrote:
 Merlin Moncure wrote:

  ya, it doesn't seem to match, as this seems to be repeating quite
  regularly.  interesting that my 'clog' files start at 06B6 and count
  up. 0207 is way off the charts.
 
  a lot of applications are hitting this database, and so far everything
  seems to be running ok (i found this log msg by accident), but I am
  now officially very nervous.

 I don't think there's much cause for concern here.  If my theory is
 correct, this is an autovacuum bug which was fixed in 8.1.7.

 What I'd do is create a 0207 clog file, fill it with 0x55 (which is
 transactions committed for all transactions in that interval), and do
 a VACUUM FREEZE on that database.  You'll need to set
 pg_database.datallowconn=true beforehand.

 Of course, I'd copy the files somewhere else and experiment on a scratch
 postmaster, running on a different port, just to be sure ...


Good news is you haven't lost any data.  Update the binaries and
schedule a maintenance window if you have to.
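
For the archives, the SQL side of Alvaro's recipe boils down to something
like the following (a sketch only: the missing pg_clog segment itself has to
be created at the filesystem level first, and, as he says, practice on a copy
of the data directory running on a spare port):

-- only needed if connections to the affected database are disabled
UPDATE pg_database SET datallowconn = true WHERE datname = 'userbase';

-- then, connected to that database:
VACUUM FREEZE;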

merlin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Need quick help with standalone mode

2007-07-31 Thread RW
Thanks a lot! That was a life saver :-)

Greetings
Robert

Merlin Moncure wrote:
 On 7/31/07, RW [EMAIL PROTECTED] wrote:
   
 Hi!

 I'm really in big trouble with a production database. It doesn't accept
 connections anymore:

 2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
 vacuumed within 999832 transactions
 2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
 a full-database VACUUM in userbase.
 2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
 vacuumed within 999832 transactions
 2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
 a full-database VACUUM in userbase...

 So I tried this script (after shutting down the postmaster):

 #!/bin/bash

 /usr/local/pgsql81/bin/postgres -D /data/pgsql/data/lindau userbase  SQL
 VACUUM FULL VERBOSE ANALYZE
 SQL

 But all I get is:

 backend 2007-07-31 19:27:33 CEST   WARNING:  database userbase must
 be vacuumed within 999831 transactions
 2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
 a full-database VACUUM in userbase.
 2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
 vacuumed within 999830 transactions
 ...
 2007-07-31 19:27:33 CEST   WARNING:  database userbase must be
 vacuumed within 999809 transactions
 2007-07-31 19:27:33 CEST   HINT:  To avoid a database shutdown, execute
 a full-database VACUUM in userbase.
 2007-07-31 19:27:33 CEST   ERROR:  could not access status of
 transaction 539227074
 2007-07-31 19:27:33 CEST   DETAIL:  could not open file pg_clog/0202:
 No such file or directory

 I've provided the data path. So I don't understand the entry could not
 open file The database version is 8.1.5
 and it runs on Redhat Linux 4 AS x86_64.
 


 First of all, update the postmater.  This was a bug that was fixed in
 the 8.1 series.

 There is a resolution to this problem.  Here is Alvaro's notes on the subject:

 On 2/6/07, Alvaro Herrera [EMAIL PROTECTED] wrote:
   
 Merlin Moncure wrote:

 
 ya, it doesn't seem to match, as this seems to be repeating quite
 regularly.  interesting that my 'clog' files start at 06B6 and count
 up. 0207 is way off the charts.

 a lot of applications are hitting this database, and so far everything
 seems to be running ok (i found this log msg by accident), but I am
 now officially very nervous.
   
 I don't think there's much cause for concern here.  If my theory is
 correct, this is an autovacuum bug which was fixed in 8.1.7.

 What I'd do is create a 0207 clog file, fill it with 0x55 (which is
 transactions committed for all transactions in that interval), and do
 a VACUUM FREEZE on that database.  You'll need to set
 pg_database.datallowconn=true beforehand.

 Of course, I'd copy the files somewhere else and experiment on a scratch
 postmaster, running on a different port, just to be sure ...
 


 Good news is you haven't lost any data. update the binaries and
 schedule a maintenance window if you have to.

 merlin
   


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Need your help on using partion

2007-04-04 Thread Huynh Ngoc Doan


/*I'm learning how to use partitioning in Postgres 8.2.3, so I want to 
do some tests to check my understanding.

I create 2 tables:
The parent table is cities:*/
CREATE TABLE xxx.cities
(
  id serial NOT NULL,
  name text,
  population real,
  altitude integer,
  CONSTRAINT pk_cities PRIMARY KEY (id)
)
/*and a partition of cities is capital:*/
CREATE TABLE xxx.capital
(
  CONSTRAINT pk_capital PRIMARY KEY (id),
  CONSTRAINT capital_id_check CHECK (id > 3)
) INHERITS (xxx.cities)

/*My purpose is that the cities table will contain the rows having id <= 3
and the capital table will contain the rows having id > 3.
So I create a rule cities_insert_capital to redirect data to the 
table capital when the new row's id > 3.

Here it is:
*/
CREATE OR REPLACE RULE cities_insert_capital AS
ON INSERT TO xxx.cities
   WHERE new.id > 3 DO INSTEAD INSERT INTO xxx.capital(id, name, 
population, altitude) VALUES (new.id,new.name, new.population, new.altitude)


After completing the preparation stage, I insert some rows into the cities table:
--restart the serial value to 1
alter sequence xxx.cities_id_seq restart with 1;
--serial is 1 at this time
--1.
INSERT INTO xxx.cities(name, population, altitude)
VALUES('HCM',10,10);
--data is inserted into the cities table as the result; serial is 3, and 
--the expected value is 2 => what's wrong here?

--2.Try more insert
INSERT INTO xxx.cities(name, population, altitude)
VALUES('Ha Noi',10,10);
--data is inserted into capital while it was expected to be inserted into cities.
--serial is 5 at this time, and the expected value is 3 => what's wrong here?

/*Conclusion:
The serial increases by 2 when executing one insert command.  I have tried many 
ways to find out the reason why, but I can't.
Can you help me explain this unexpected behavior and give me some 
advice on how to deal with this problem?
I want the serial to increase by 1 after each insert command. 
Thank you very much.

Note:
I want the id to be added automatically when a row is inserted.
*/
Best regard,
Doan. From VietNam


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Need your help on using partion

2007-04-04 Thread Postgres User

My guess is that when you insert into Cities, the sequence field is
incremented one time (=2), and when the Rule is executed to insert
into Capital, the sequence is incremented a second time (=3).  As a
result of these 2 operations, the sequence is incremented 2 times.

I don't know anything about your application or business rules, but
your solution seems unnecessarily complicated.  Remember, the best and
smartest approach is usually the simplest one.  You can do this in a
much simpler way.
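
For example, a rough, untested sketch of a trigger-based version of the same
thing, keeping the xxx.cities / xxx.capital tables and the id > 3 cutoff from
the original post.  Because the serial default is evaluated once, before the
trigger fires, the sequence advances only once per insert:

CREATE OR REPLACE FUNCTION xxx.cities_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.id > 3 THEN
        -- route the already-assigned id into the child table
        INSERT INTO xxx.capital (id, name, population, altitude)
        VALUES (NEW.id, NEW.name, NEW.population, NEW.altitude);
        RETURN NULL;  -- suppress the insert into xxx.cities
    END IF;
    RETURN NEW;       -- keep the row in xxx.cities
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cities_insert_router
    BEFORE INSERT ON xxx.cities
    FOR EACH ROW EXECUTE PROCEDURE xxx.cities_insert_router();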

On 4/4/07, Huynh Ngoc Doan [EMAIL PROTECTED] wrote:


/*I'm learning how to use the partion in Postgres 8.2.3 so I want to
do some test for my awareness.
I create 2 tables:
The parent table is cities:*/
CREATE TABLE xxx.cities
(
  id serial NOT NULL,
  name text,
  population real,
  altitude integer,
  CONSTRAINT pk_cities PRIMARY KEY (id)
)
/*and a partion of cities is capital:*/
CREATE TABLE xxx.capital
(
  CONSTRAINT pk_capital PRIMARY KEY (id),
  CONSTRAINT capital_id_check CHECK (id  3)
) INHERITS (xxx.cities)

/*My purpose is that the cities will contain the rows having id = 3
and the capital will contain the rows having id =3.
So I create a rule cities_insert_capital to redirect data to the
table capital when new row's id 3
Here it is:
*/
CREATE OR REPLACE RULE cities_insert_capital AS
ON INSERT TO xxx.cities
   WHERE new.id  3 DO INSTEAD INSERT INTO xxx.capital(id,name,
population, altitude) VALUES (new.id,new.name, new.population, new.altitude)

After completing the reparation stage, I insert some rows into cities table:
--restart the serial value to 1
alter sequence xxx.cities_id_seq restart with 1;
--serial is 1 at this time
--1.
INSERT INTO xxx.cities(name, population, altitude)
VALUES('HCM',10,10);
--data is inserted into cities table as the result, serial is 3,and
the expected value is 2 =what wrong here?
--2.Try more insert
INSERT INTO xxx.cities(name, population, altitude)
VALUES('Ha Noi',10,10);
--data is inserted into capital while it's is expected to insert to cities.
--serial is 5 at this time,and expected value is 3 = what wrong here?

/*Conclusion:
The serial increases by 2 when excuting one insert command.I try many
ways to find out the reason why, but I can't.
Can you help me to explain this unexpected behavior and give me some
advices to deal with this problem.
I want that the serial will increase by 1 unit after a insert comand.
Thank you very much.
Note:
I want id will be add automatically when a row is inserted.
*/
Best regard,
Doan. From VietNam


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Need setup help for Postgresql 8.1.3 on Solaris 10

2007-01-16 Thread Gellert, Andre
Hello Richard,

  I decreased the import time from 40 minutes to 32 minutes with changing 
  the parameters to:
  
  shared_buffers = 45  # kept same
  temp_buffers = 10# kept same
  
  work_mem =  32768   # now 32mb , before: 102400 = 100mb
  maintenance_work_mem =  2048000 # now 2.048mb, before: 32768 = 32mb
  
  Should I encrease these parameters?
 
 Nothing that should cause these problems.
 
 Anything unusual about your installation? Any oddities from ./configure 
 or similar?

I've got no news regarding my installation problems, but I had to reduce 
maintenance_work_mem to 256MB, as vacuumdb gave this error:

vacuumdb: vacuuming of database adp_trtr failed: ERROR: invalid memory alloc 
request size 2097151998
(2 GB was a bit too large, I guess ;-) )

But this shouldn't cause my problems, as you say.
The installation is a standard Solaris PostgreSQL package - so my admins say; I've 
got no privileges to check this, and of course I have no way to 
recompile it, but at the moment it seems worth thinking about compiling 
the latest PG from source ( 
http://www.postgresql.org/docs/8.2/interactive/install-procedure.html ).

My system admins contacted the company that lent us the system, to get 
more information, so I am waiting on that.

Maybe the other alternative, RHEL + Postgres 8.2.1 (official RPMs for 64-bit), is a 
good way to go.

Thanks so long, 
Andre

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Need setup help for Postgresql 8.1.3 on Solaris 10

2007-01-12 Thread Gellert, Andre
Hello all, 

I need some hints on how to set up PostgreSQL on a brand new test system. 

Due to heavy load on the current database servers, my boss decided to test a 
big server as a replacement for 5 other servers. ;-) The system is used in an 
extranet environment - over 50 percent of the load is produced by an online 
catalog.
I doubt that one system can handle the queries of 5 heavily used 
3GHz dual-processor systems, so I would choose a different DB scenario, but it is 
worth a try.

We have such a test system now: 4 x AMD Opteron 885 (dual core) 2.6GHz, 
running with 32 GB RAM and fast 10,000rpm SAS discs, built into a nice Sun 
case ;-) Sounds nice, but it doesn't perform like I thought it should. 
Maybe this is a misconfiguration of PostgreSQL on Solaris 10 - it's my first 
time on this platform - or maybe it is a problem with the hardware.

Reading, e.g. dumping a database, seems to run at the expected performance, so I 
am going to test this system with read queries from the live system on Monday, 
to see how it performs with hundreds of parallel queries per minute.
But here is my concern: restoring an 800MB database dump, produced with 
pg_dump from this system, really takes long. On the old Linux RHEL 3.2GHz 
systems the restore takes 10 minutes (while additionally serving the extranets), but 
on the new system this takes nearly 40 minutes.
What happens while restoring?
- Creating the tables is fast.
- While loading the data into the DB I do not see significant load: a postgres 
process is running with 2-3% usage in the background, and most of the time the 
top command claims that the postgres processes are sleeping. System load is 
0.1.
- While creating indexes the postgres daemon behaves as expected, nearly 
12.5% usage, system load nearly 1.
Storing the SQL dump on a different partition than the database did not 
help. Copying locally on the hard disc is fast (serial read/write, okay). 

Why is the loading process so slow, and what could block the write process? 
Creating indexes is writing, too, so why does that run at normal speed?

I have not configured autovacuum (it would slow things down), and I have not 
moved databases or tables to different partitions (which could be a speed 
improvement).  Even with the current configuration, it should perform much 
better.

Details to postgresql.conf , these are the values I changed (against defaults):
lc_messages = 'de.UTF-8'   
lc_monetary = 'de_DE.UTF-8'
lc_numeric = 'de_DE.UTF-8'  
lc_time = 'C'
listen_addresses = '*'
maintenance_work_mem = 32768 # 32 MB for vacuumdb etc...
max_connections = 2000
max_fsm_pages = 30
max_fsm_relations = 5000
shared_buffers = 45  # min 16 or max_connections*2, 8KB each #// default 
1000
stats_block_level = off
stats_command_string = off
stats_reset_on_server_start = off
stats_row_level = off
stats_start_collector = on
temp_buffers = 10
work_mem = 102400 # min 64, size in KB #// 1024

Details to Semaphores and shared memory:

prctl -n project.max-sem-ids -i task 330
task: 330
NAMEPRIVILEGE   VALUEFLAG   ACTION   RECIPIENT
project.max-sem-ids
privileged512   -   deny -

prctl -n project.max-shm-memory -i task 330
task: 330
NAMEPRIVILEGE   VALUEFLAG   ACTION   RECIPIENT
project.max-shm-memory
privileged  12.0GB  -   deny -

Maybe somebody has an idea, 
thanx in advance, 
Andre Gellert




Re: [GENERAL] Need setup help for Postgresql 8.1.3 on Solaris 10

2007-01-12 Thread Richard Huxton

Gellert, Andre wrote:

Hello all,

I need some hints how to setup Postgresql on a brand new testsystem.


Due to heavy load on the current database servers, my boss decided to
test a big server as a replacement for  5 other servers. ;-) The
system is used in a extranet environment - over 50 percent of the
load is produced from an online catalog. I doubt, that one system
could handle the queries of 5 vehement used 3ghz-double-processor
systems, so I would select another db scenario, but it worth to try.

We have such a testsystem now, 4 x AMD Opteron (double core techn.)
885 2.6ghz , running with 32 GB Ram and fast 10.000rpm SAS discs,


How many disks? What RAID?


build-in in a nice sun case ;-) Sounds nice, but it doesn't perform
like a thought it should. Maybe this is a misconfiguration of
PostgreSQL on Solaris 10, it's my first time on this platform, maybe
it is a problem with the hardware.

Reading , e.g. dumping a database, seems to run at expected
performance, so I am going to test this system with read-querys from
the live system on monday, to see how it performs with hundrets of
parallel queries in a minute. But my concerns are here: Restoring a
800mb database dump, produced with pg_dump from this system, really
takes long. On the old linux RHEL 3.2ghz systems the restore takes
10 minutes (while serving extranets additional), but on the new
system this takes nearly 40 minutes. What happens while restoring ? :
 - Creating the tables is fast. - While loading the data into the DB
i do not see significant load, a postgres process is running with 2-3
% usage in the background, main of the time the top command claims
that the postgres processes are sleeping. System load is 0.1.


Is your disk I/O saturating? Try vmstat/iostat (or whatever the Solaris 
equivalent is).



- While creating indexes the postgres daemon behaves like expected,
nearly 12.5% usage, system load nearly 1 . Trying to store the
sqldump on another partition than the harddisc did not help. Copying
local on the harddisc is fast (serial read/write, okay).

Why is the loading process so slow, what could block the write
process ? Creating indizes is writing, too, so why is this normal
fast ?

I do not have configured autovacuum (would slow things down), I do
not have moved databases or tables to different partitions (could be
a speed improvement).  Even with the current configuration, it should
perform much better.

Details to postgresql.conf , these are the values I changed (against
defaults):



maintenance_work_mem = 32768 # 32 MB for vacuumdb etc... work_mem =
102400 # min 64, size in KB #// 1024


I think your work_mem is too large (100MB) and maintenance_work_mem too
small, for a restore with 32GB of RAM anyway.
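
For a one-off restore these can also be set per session instead of editing
postgresql.conf; the values below are only illustrative, not a recommendation
specific to this machine:

SET maintenance_work_mem = 524288;  -- 512MB (value in kB), for the index builds
SET work_mem = 32768;               -- 32MB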

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Need setup help for Postgresql 8.1.3 on Solaris 10

2007-01-12 Thread Gellert, Andre
Hello Richard,

  We have such a testsystem now, 4 x AMD Opteron (double core techn.)
  885 2.6ghz , running with 32 GB Ram and fast 10.000rpm SAS discs,
 
 How many disks? What RAID?
 
4 Disks, RAID1.
 
 Is your disk I/O saturating? Try vmstat/iostat (or whatever the Solaris 
 equivalent is).

Hard to say, due to account restrictions I cannot control more than this vmstat 
output gives me:

This is during loading Data, 98% idle:

-bash-3.00$ vmstat 5
 kthr  memorypagedisk  faults  cpu
 r b w   swap  free  re  mf pi po fr de sr m1 m1 m1 m2   in   sy   cs us sy id
 0 0 0 31865080 27723492 248 43 138 118 79 0 5 1 1 1 0 6873 4235 1481  2  1 98
 0 0 0 31675384 27518448 603 14 0 0 0 0  0  0  0  0  0 15808 1695 1243 1  1 98
 0 0 0 31675384 27518436 609 0 0 0 0  0  0  0  0  0  0 16027 1703 1266 1  1 98
 0 0 0 31675372 27518408 602 0 0 0 0  0  0  0  0  0  0 15808 1682 1231 1  1 98
 0 0 0 31675356 27513660 1280 0 0 0 0 0  0  0  0  0  0 44234 1908 861  1  1 98
 0 0 0 31675356 27511444 369 0 0 0 0  0  0  0  0  0  0 13969 1270 952  1  1 99
 0 0 0 31675356 27510328 343 0 0 0 0  0  0  4  4  4  0 13053 1192 908  1  1 99
 0 0 0 31675344 27509688 426 1 0 0 0  0  0  0  0  0  0 14353 1371 1028 1  1 98
 0 0 0 31675332 27513672 552 0 0 0 0  0  0  0  0  0  0 17503 1675 1243 1  1 98
 0 0 0 31675332 27518468 570 0 0 0 0  0  0  0  0  0  0 17269 1737 1276 1  1 98
 0 0 0 31675320 27519528 561 0 0 0 0  0  0  0  0  0  0 15942 1709 1276 1  1 98
 0 0 0 31675320 27519440 598 169 0 0 0 0 0  0  0  0  0 15553 1738 1248 1  1 98
 0 0 0 31674428 27527404 3311 222 0 120 120 0 0 0 0 0 0 44244 3384 1259 1 2 97
 0 0 0 31675320 27539800 1665 0 0 0 0 0  0  0  0  0  0 37718 2151 1022 1  1 98
 0 0 0 31674644 27520952 408 148 0 80 80 0 0 0 0  0  0 13916 1459 1066 1  1 99
 0 0 0 31674196 27512960 403 146 0 40 40 0 0 0 0  0  0 13958 1425 1031 1  1 99
 0 0 0 31675320 27510932 437 222 0 80 80 0 0 0 0  0  0 17203 1550 1097 1  1 98
 0 0 0 31675320 27513352 402 75 0 0 0 0  0  0  0  0  0 16961 1425 1039 1  1 99
 kthr  memorypagedisk  faults  cpu
 r b w   swap  free  re  mf pi po fr de sr m1 m1 m1 m2   in   sy   cs us sy id
 0 0 0 31674644 27516248 354 148 0 40 40 0 0 0 0  0  0 15408 1283 953  1  1 99
 0 0 0 31674644 27519036 345 296 0 80 80 0 0 0 0  0  0 14854 1297 915  1  1 99
 0 0 0 31675320 27519880 357 0 0 0 0  0  0  0  0  0  0 14000 1254 957  1  1 99
 0 0 0 31675320 27518796 335 0 0 0 0  0  0  0  0  0  0 28471 1575 886  1  1 98
 0 0 0 31675288 27515356 252 2 0 6 6  0  0  0  0  0  0 15763 1650 1245 1  1 98
 0 0 0 31675288 27517308 265 0 0 0 0  0  0  0  0  0  0 16911 1741 1285 1  1 98
 0 0 0 31675288 27519112 281 0 0 0 0  0  0  0  0  0  0 28401 1858 1167 1  1 98
 0 0 0 31674396 27518676 391 217 0 0 0 0 0  0  0  0  0 14281 1334 950  1  1 99
 0 0 0 31675236 27519480 386 3 0 80 80 0 0  0  0  0  0 14389 1305 971  1  1 99
 0 0 0 31675160 27517528 522 0 0 0 0  0  0  0  0  0  0 14958 1559 1142 1  1 99
 0 0 0 31674968 27515712 522 0 0 0 0  0  0  0  0  0  0 15553 1568 1159 1  1 98
 0 0 0 31674924 27518104 533 0 0 0 0  0  0  0  0  0  0 16687 1595 1173 1  1 99
 0 0 0 31674924 27517816 529 0 0 0 0  0  0  0  0  0  0 15607 1581 1174 1  1 99
 0 0 0 31674520 27516436 570 0 0 0 0  0  0  0  0  0  0 15876 1612 1172 1  1 99
 0 0 0 31674520 27516280 588 0 0 0 0  0  0  0  0  0  0 15998 1612 1192 1  1 98
 0 0 0 31674412 27515228 1312 0 0 0 0 0  0  0  0  0  0 46397 2165 1031 1  1 98
 0 0 0 31673996 27510360 318 0 0 0 0  0  0  0  0  0  0 12950 1082 849  0  1 99
 0 0 0 31673996 27508960 408 203 0 40 40 0 0 0 0  0  0 14094 1318 883  0  1 99
 0 0 0 31673996 27507568 443 0 0 0 0  0  0  0  0  0  0 14330 1373 1052 1  1 98
and so on. During the whole import of data it keeps mainly 99% idle.

This is during working on indexes creation:

-bash-3.00$ vmstat 5
 kthr  memorypagedisk  faults  cpu
 r b w   swap  free  re  mf pi po fr de sr m1 m1 m1 m2   in   sy   cs us sy id
 0 0 0 31851400 27708172 252 44 129 111 74 0 5 1 0 0 0 7169 4006 1419  2  1 98
 0 0 0 31598656 27425288 50 14 0 0 0  0  0  0  0  0  0 1608   72  156 13  0 87
 0 0 0 31598656 27425224 6973 0 0 0 0 0  0  0  0  0  0 147690 3516 161 12 4 84
 0 0 0 31598656 27424696 13194 0 0 3 3 0 0  0  0  0  0 279048 6620 147 12 7 81
 0 0 0 31598656 27424116 14528 0 0 0 0 0 0  0  0  0  0 307360 7285 153 12 8 80
 0 0 0 31598656 27423852 13503 0 0 0 0 0 0  0  0  0  0 285909 6767 149 12 7 81
 0 0 0 31598656 27424584 3258 0 0 326 326 0 0 0 0 0  0 69522 1682 197 12  2 85
 0 0 0 31598652 27425244 0 0 0  0  0  0  0  0  0  0  0  480   37  184 13  0 87
 0 0 0 31598636 27425228 0 0 0  0  0  0  0  0  0  0  0  420   33  156 13  0 87
 0 0 0 31598636 27425228 0 0 0  0  0  0  0  0  0  0  0  418   32  152 13  0 87
 0 0 0 31598636 27425228 0 0 0  0  0  0  0  0  0  0  0  433   34  166 13  0 87
 0 0 0 31598636 27425228 0 0 0  0  0  0  0  0  0  0  0  423   35  156 13  0 87
 0 0 0 31598616 27425208 0 0 0  0  0  0  0  0  0  0  0  413   33  150 13  

Re: [GENERAL] Need setup help for Postgresql 8.1.3 on Solaris

2007-01-12 Thread Richard Huxton

Gellert, Andre wrote:

Hello Richard,


We have such a testsystem now, 4 x AMD Opteron (double core techn.)
885 2.6ghz , running with 32 GB Ram and fast 10.000rpm SAS discs,

How many disks? What RAID?
 
4 Disks, RAID1.
 
Is your disk I/O saturating? Try vmstat/iostat (or whatever the Solaris 
equivalent is).


Hard to say, due to account restrictions I cannot control more than this vmstat 
output gives me:

This is during loading Data, 98% idle:

-bash-3.00$ vmstat 5
 kthr  memorypagedisk  faults  cpu
 r b w   swap  free  re  mf pi  po  fr de sr m1 m1 m1 m2   in   sy   cs us 
sy id
 0 0 0 31865080 27723492 248 43 138 118 79 0  5  1  1  1  0  6873 4235 1481  2  
1 98
 0 0 0 31675384 27518448 603 14 0 0 0 0  0  0  0  0  0 15808 1695 1243 1  1 98
 0 0 0 31675384 27518436 609 0 0 0 0  0  0  0  0  0  0 16027 1703 1266 1  1 98
 0 0 0 31675372 27518408 602 0 0 0 0  0  0  0  0  0  0 15808 1682 1231 1  1 98
 0 0 0 31675356 27513660 1280 0 0 0 0 0  0  0  0  0  0 44234 1908 861  1  1 98


More interestingly, your disks appear to be doing nothing, which can't be 
right.  They're not all zeroes though, so I'm not sure why you're seeing 
no disk activity.


Nothing's blocking on disk I/O though...

That also seems like a large number of interrupts for an idle machine.


 0 0 0 31673996 27510360 318 0 0 0 0  0  0  0  0  0  0 12950 1082 849  0  1 99
 0 0 0 31673996 27508960 408 203 0 40 40 0 0 0 0  0  0 14094 1318 883  0  1 99
 0 0 0 31673996 27507568 443 0 0 0 0  0  0  0  0  0  0 14330 1373 1052 1  1 98
and so on. During the whole import of data it keeps mainly 99% idle.

This is during working on indexes creation:

-bash-3.00$ vmstat 5
 kthr  memorypagedisk  faults  cpu
 r b w   swap  free  re  mf pi po fr de sr m1 m1 m1 m2   in   sy   cs us sy id
 0 0 0 31851400 27708172 252 44 129 111 74 0 5 1 0 0 0 7169 4006 1419  2  1 98
 0 0 0 31598656 27425288 50 14 0 0 0  0  0  0  0  0  0 1608   72  156 13  0 87
 0 0 0 31598656 27425224 6973 0 0 0 0 0  0  0  0  0  0 147690 3516 161 12 4 84
 0 0 0 31598656 27424696 13194 0 0 3 3 0 0  0  0  0  0 279048 6620 147 12 7 81
 0 0 0 31598656 27424116 14528 0 0 0 0 0 0  0  0  0  0 307360 7285 153 12 8 80
 0 0 0 31598656 27423852 13503 0 0 0 0 0 0  0  0  0  0 285909 6767 149 12 7 81


Now that's a *huge* number of interrupts, unless I'm mis-reading this. 
I'm afraid I don't know enough about Solaris to say for sure, but 
there's something strange going on here.



Looks fine, doesn't it?


I think your work_mem is too large (100MB) and maintenance_work_mem too
small, for a restore with 32GB of RAM anyway.


I decreased the import time from 40 minutes to 32 minutes by changing the 
parameters to:

shared_buffers = 45  # kept same
temp_buffers = 10# kept same

work_mem =  32768   # now 32mb , before: 102400 = 100mb
maintenance_work_mem =  2048000 # now 2.048mb, before: 32768 = 32mb

Should I increase these parameters?


Nothing that should cause these problems.

Anything unusual about your installation? Any oddities from ./configure 
or similar?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] NEED URGENT HELP....

2006-12-28 Thread Sandip G
  Great!!!  It's working... Thanks to all for the great help.  I am new
  to Postgres and like its performance...  I would like to learn it in
  depth; please point me to any good resource for learning Postgres with lots
  of sample code/queries.  Cheers!!!  Best Regards, Sandip.

  - Original Message -
  From: Gurjeet Singh
  To: Martijn van Oosterhout , Henrique P Machado , Guy Rouillier
  , pgsql-general@postgresql.org, Sandip G
  Subject: Re: [GENERAL] NEED URGENT HELP
  Date: Tue, 26 Dec 2006 20:39:01 +0530

  It works Martijn...

  but a few changes will be required in your function, Sandip; you
  will have to pass an ARRAY constructor and return a SETOF record.
  Here's a sample:

  postgres= create table tab ( a int, b int );
  CREATE TABLE
  postgres= insert into tab values ( 1, 9 );
  INSERT 0 1
  postgres= insert into tab values (2,8);
  INSERT 0 1
  postgres= insert into tab values (3,7);
  INSERT 0 1
  postgres= insert into tab values (4,6);
  INSERT 0 1
  postgres= insert into tab values (5,5);
  INSERT 0 1
  postgres= create or replace function fun ( character varying [] )
  returns setof
   tab as
  postgres- $$
  postgres$ select * from tab where a = any ($1)
  postgres$ $$ language 'sql' volatile;
  CREATE FUNCTION
  postgres= select fun('{1}');
  fun
  ---
   (1,9)
  (1 row)

  postgres= select fun('{2,3}');
  fun
  ---
   (2,8)
   (3,7)
  (2 rows)

  postgres=

  Hope it helps


  On 12/26/06, Martijn van Oosterhout kleptog@svana.org wrote:

On Mon, Dec 25, 2006 at 08:52:52PM -0300, Henrique P Machado
wrote:
 WHERE   (USER_ID = $1) AND (COMPANY_ID = $2) AND
BOOK_NO IN ($3)

 Could'nt he use an array in this 3rd parameter?

I think so, if it's written:

AND BOOK_NO = ANY($3)

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org  
http://svana.org/kleptog/
 From each according to his ability. To each according to his
ability to litigate.


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.1 (GNU/Linux)

iD8DBQFFkQ1LIB7bNG8LQkwRApbCAJsH26IcDusO5Vi5kNC1UQ185usbnACeOxdC
xQo+z5Z7+Xofks/h3MmeF7w=
=Rq6g
-END PGP SIGNATURE-





  --
  [EMAIL PROTECTED]
  [EMAIL PROTECTED] gmail | hotmail | yahoo }.com

-- 
___
Search for products and services at:
http://search.mail.com



Re: [GENERAL] NEED URGENT HELP....

2006-12-26 Thread Martijn van Oosterhout
On Mon, Dec 25, 2006 at 08:52:52PM -0300, Henrique P Machado wrote:
 WHERE   (USER_ID = $1) AND (COMPANY_ID = $2) AND BOOK_NO IN ($3)
 
 Could'nt he use an array in this 3rd parameter?

I think so, if it's written:

AND BOOK_NO = ANY($3)

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] NEED URGENT HELP....

2006-12-26 Thread Gurjeet Singh

It works Martijn...

but a few changes will be required in your function, Sandip; you will
have to pass an ARRAY constructor and return a SETOF record. Here's a
sample:

postgres= create table tab ( a int, b int );
CREATE TABLE
postgres= insert into tab values ( 1, 9 );
INSERT 0 1
postgres= insert into tab values (2,8);
INSERT 0 1
postgres= insert into tab values (3,7);
INSERT 0 1
postgres= insert into tab values (4,6);
INSERT 0 1
postgres= insert into tab values (5,5);
INSERT 0 1
postgres= create or replace function fun ( character varying [] ) returns
setof
tab as
postgres- $$
postgres$ select * from tab where a = any ($1)
postgres$ $$ language 'sql' volatile;
CREATE FUNCTION
postgres= select fun('{1}');
 fun
---
(1,9)
(1 row)

postgres= select fun('{2,3}');
 fun
---
(2,8)
(3,7)
(2 rows)

postgres=

Hope it helps


On 12/26/06, Martijn van Oosterhout kleptog@svana.org wrote:


On Mon, Dec 25, 2006 at 08:52:52PM -0300, Henrique P Machado wrote:
 WHERE   (USER_ID = $1) AND (COMPANY_ID = $2) AND BOOK_NO IN ($3)

 Could'nt he use an array in this 3rd parameter?

I think so, if it's written:

AND BOOK_NO = ANY($3)

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to
litigate.


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.1 (GNU/Linux)

iD8DBQFFkQ1LIB7bNG8LQkwRApbCAJsH26IcDusO5Vi5kNC1UQ185usbnACeOxdC
xQo+z5Z7+Xofks/h3MmeF7w=
=Rq6g
-END PGP SIGNATURE-






--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | yahoo }.com


Re: [GENERAL] NEED URGENT HELP....

2006-12-26 Thread Iannsp

Martijn van Oosterhout wrote:

On Mon, Dec 25, 2006 at 08:52:52PM -0300, Henrique P Machado wrote:
  

WHERE   (USER_ID = $1) AND (COMPANY_ID = $2) AND BOOK_NO IN ($3)

Could'nt he use an array in this 3rd parameter?



I think so, if it's written:

AND BOOK_NO = ANY($3)

Have a nice day,
  
I believe not, because an array has a different syntax from a 
simple value; but if you create a stored procedure you can 
transform the data...
If by array you mean data like '1,2,3', that is another case, and I think 
you won't have problems with that.

OK.


--
Ivo Nascimento
Iann tech - Desenvolvendo soluções com performance e segurança
http://www.ianntech.com.br


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] NEED URGENT HELP....

2006-12-25 Thread Sandip G
  I am using PostgreSql 8.1 with pgAdmin III. OS is XP. this is my
  function:

  CREATE OR REPLACE FUNCTION sp_get_phase(character varying, character
  varying, character varying)
  RETURNS ret_dv_sp_get_phase AS
  $BODY$
  SELECT  BOOK_NO, USER_ID, COMPANY_ID, PHASE, UPDATE_USER_ID,
  UPDATE_DATE,
  AddInfo1, AddInfo2
  FROM  T_PHASE
  WHERE (USER_ID = $1) AND (COMPANY_ID = $2) AND BOOK_NO IN ($3)
  $BODY$
  LANGUAGE 'sql' VOLATILE;


  When I run
  select * from sp_get_phase ('sandip', 'oms', '4') it returns 1
  record... this works fine

  select * from sp_get_phase ('sandip', 'oms', '1') it returns 1
  record... this also works fine... BUT

  select * from sp_get_phase ('sandip', 'oms', '1,4') returns a
  blank record.

  I tried to execute the SQL statement from the function

  SELECT  BOOK_NO, USER_ID, COMPANY_ID, PHASE, UPDATE_USER_ID,
  UPDATE_DATE,
  AddInfo1, AddInfo2
  FROM  T_PHASE
  WHERE (USER_ID = 'testUser') AND (COMPANY_ID = 'testCompany') AND
  BOOK_NO IN (1,4)

  - This Works fine... returns 2 records. What may be the problem?

  Thanks in advance.
  Regards,
  Sandip.

-- 
___
Search for products and services at:
http://search.mail.com



Re: [GENERAL] NEED URGENT HELP....

2006-12-25 Thread Harpreet Dhaliwal

select * from sp_get_phase ('sandip', 'oms', '1,4') this return a Blank
record

It would match your query against '1,4' for the corresponding field in the
table.
Do you really have such a value for that field in your table, i.e. '1,4'?
It won't search for 1 and 4 separately, if that is what you want your query
to do.

~Harpreet


On 12/21/06, Sandip G [EMAIL PROTECTED] wrote:


 I am using PostgreSql 8.1 with pgAdmin III. OS is XP.

 this is my function:

CREATE OR REPLACE FUNCTION sp_get_phase(character varying, character
varying, character varying)
  RETURNS ret_dv_sp_get_phase AS
$BODY$
SELECT  BOOK_NO, USER_ID, COMPANY_ID, PHASE, UPDATE_USER_ID,
UPDATE_DATE,
  AddInfo1, AddInfo2
FROMT_PHASE
WHERE   (USER_ID = $1) AND (COMPANY_ID = $2) AND BOOK_NO IN ($3)
$BODY$
  LANGUAGE 'sql' VOLATILE;


When I run
select * from sp_get_phase ('sandip', 'oms', '4')   returns 1
record.this works fine

select * from sp_get_phase ('sandip', 'oms', '1')  returns 1
record.this also works fine... BUT

select * from sp_get_phase ('sandip', 'oms', '1,4') this return a Blank
record.

I tried to execute the SQL statement from the function

SELECT  BOOK_NO, USER_ID, COMPANY_ID, PHASE, UPDATE_USER_ID,
UPDATE_DATE,
  AddInfo1, AddInfo2
FROMT_PHASE
WHERE   (USER_ID = 'testUser') AND (COMPANY_ID = 'testCompany')
AND BOOK_NO IN (1,4)

- This Works fine... returns 2 records.   What may be the problem?

Thanks in advance.
Regards,
Sandip.



-- http://a8-asy.a8ww.net/a8-ads/adftrclick?redirectid=en-mail_a_01


Re: [GENERAL] NEED URGENT HELP....

2006-12-25 Thread Guy Rouillier
Sandip G wrote:
 WHERE   (USER_ID = $1) AND (COMPANY_ID = $2) AND BOOK_NO IN ($3)

 select * from sp_get_phase ('sandip', 'oms', '1,4') this return a
 Blank record.

$3 is a parameter marker for a single value.  You cannot supply a
comma-separated list of values and expect it to operate on them.  As
Harpreet points out, it is interpreting your comma-separated list as a
single value.  If you know you will always have (e.g.) two values you
want to pass, you could code your function with in ($3, $4), but if
you want a generalized, variable-length list of values in your IN
clause, you'd need to use dynamic SQL.
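
If a variable-length list is what's wanted, passing a real array avoids
dynamic SQL.  A rough, untested sketch, assuming BOOK_NO is an integer column
and keeping the rest of the function as posted (note the changed type of the
third parameter and the SETOF result):

CREATE OR REPLACE FUNCTION sp_get_phase(character varying, character varying,
                                        integer[])
  RETURNS SETOF ret_dv_sp_get_phase AS
$BODY$
SELECT  BOOK_NO, USER_ID, COMPANY_ID, PHASE, UPDATE_USER_ID, UPDATE_DATE,
        AddInfo1, AddInfo2
FROM    T_PHASE
WHERE   (USER_ID = $1) AND (COMPANY_ID = $2) AND BOOK_NO = ANY ($3)
$BODY$
  LANGUAGE 'sql' STABLE;

-- called as:
SELECT * FROM sp_get_phase('sandip', 'oms', ARRAY[1,4]);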

-- 
Guy Rouillier

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] NEED URGENT HELP....

2006-12-25 Thread Henrique P Machado

WHERE   (USER_ID = $1) AND (COMPANY_ID = $2) AND BOOK_NO IN ($3)

Couldn't he use an array for this 3rd parameter?

--
ZehRique


Re: [GENERAL] Need some help creating a database sandbox...

2005-08-03 Thread Magnus Hagander
 I'm trying to setup a safe testing database environment for 
 some unit testing of a web application.  I would like to have 
 the unit tests restore the database to a known state before 
 every test.  The simplest way I thought would be to have the 
 testing tools drop/create the testing database on every test 
 case, and then populate the database from a specified file.  
 However I don't want to give the test user superuser 
 privileges.  Thus I don't think I can restrict it to only 
 drop/create a single named DB.

No, AFAIK there is no way to do that.


 My next thought was to code up a DELETE ALL script that 
 would delete all entities in the database.  However it seems 
 easy to miss something and its not robust against schema 
 changes, even though it can be locked down to the test_db_owner.

If you're going to drop *everything* in the db, you can drive something
off the system tables or information schema. Like:

SELECT 'DROP TABLE ' || table_schema || '.' || table_name FROM
information_schema.tables WHERE table_type='BASE TABLE' AND table_schema
NOT IN ('pg_catalog','information_schema')

And then feed the generated script back through a different psql prompt.

Similar scripts for other object types of course (views, functions etc).
It might be easier to drive it off the system tables directly instead of
information schema, if you can live with possible backend version
dependencies.
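
If everything the tests touch lives in a single schema, a blunter variant of
the same idea is to drop and recreate the schema itself (a sketch - it
assumes the public schema and that the test role is allowed to recreate it):

DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
GRANT ALL ON SCHEMA public TO test_db_owner;  -- role name taken from the earlier mail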


 A third thought would be to run a second cluster containing 
 only the test database(s).  Then the users wouldn't be 
 shared, so even if it someone connected to the wrong DB it 
 would lack any permissions.  I don't have much experience 
 running multiple clusters, however.  So I don't know if thats 
 opening up another whole can of worms.

Just make them run in completely different directories, and use
different accounts to start each cluster (each only having permissions on
its own data directory, of course). It's going to mean two sets of
shared buffer caches etc, so you may need to trim the memory values in
your postgresql.conf, and of course run them on different ports, but it
should work just fine.

//Magnus

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Need some help creating a database sandbox...

2005-08-02 Thread Eric D. Nielsen
I'm trying to setup a safe testing database environment for some  
unit testing of a web application.  I would like to have the unit  
tests restore the database to a known state before every test.  The  
simplest way I thought would be to have the testing tools drop/create  
the testing database on every test case, and then populate the  
database from a specified file.  However I don't want to give the  
test user superuser privileges.  Thus I don't think I can restrict it  
to only drop/create a single named DB.


My next thought was to code up a DELETE ALL script that would  
delete all entities in the database.  However it seems easy to miss  
something and it's not robust against schema changes, even though it  
can be locked down to the test_db_owner.


A third thought would be to run a second cluster containing only the  
test database(s).  Then the users wouldn't be shared, so even if  
someone connected to the wrong DB they would lack any permissions.  I  
don't have much experience running multiple clusters, however, so I  
don't know if that's opening up another whole can of worms.


Any suggestions?

Thanks
Eric

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Need some help creating a database sandbox...

2005-08-02 Thread Tom Lane
Eric D. Nielsen [EMAIL PROTECTED] writes:
 ... simplest way I thought would be to have the testing tools drop/create  
 the testing database on every test case, and then populate the  
 database from a specified file.  However I don't want to give the  
 test user superuser privileges.  Thus I don't think I can restrict it  
 to only drop/create a single named DB.

CREATEDB is a pretty restricted privilege --- I don't actually see
the problem here?

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Need some help creating a database sandbox...

2005-08-02 Thread Eric D. Nielsen

On Aug 2, 2005, at 11:33 PM, Tom Lane wrote:


Eric D. Nielsen [EMAIL PROTECTED] writes:

... simplest way I thought would be to have the testing tools drop/ 
create

the testing database on every test case, and then populate the
database from a specified file.  However I don't want to give the
test user superuser privileges.  Thus I don't think I can restrict it
to only drop/create a single named DB.



CREATEDB is a pretty restricted privilege --- I don't actually see
the problem here?

regards, tom lane



I was equating CREATEDB with superuser.  It's not, I see.  So if a user  
has CREATEDB, but not CREATEUSER, I should be safe.
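
(In other words, something like the following, with a placeholder role name:

CREATE USER test_runner WITH CREATEDB PASSWORD 'secret';

and let that role own the throwaway test database.)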


Thanks!

Eric

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] need trigger help

2005-05-09 Thread Abdul-Wahid Paterson
Hi,

I have created a simple plpgsql function, included below.
The function works fine, but I need to have this done automatically on
updates and inserts. So I have a table:

CREATE TABLE addresses (
    address_id    serial PRIMARY KEY,
    company       varchar(250),
    fname         varchar(100),
    lname         varchar(100),
    ...etc...
    hash_company  varchar(250),
    hash_fname    varchar(100),
    hash_lname    varchar(100)
);

The idea is that the hashify_text function below is called for each
(required) field, e.g. company, fname, lname etc., and a hash version
(using the word hash in a loose sense here) is created. The hash
version is then used for quick db selects, so that searching for...

'A B C Ltd.' would find the strings 'ABC ltd', 'A.B.C. ltd.', 'A B C LTD' etc.

So how can I create a trigger to automatically update the hash fields
on updates and inserts?


CREATE FUNCTION hashify_text(TEXT) RETURNS TEXT AS '
  DECLARE
    out_text    TEXT := '''';
in_text TEXT;
index   INTEGER := 0;
max INTEGER;
tmp CHAR;

  BEGIN
in_text := $1;
max = char_length(in_text);

FOR i IN 1 .. max LOOP
  tmp = upper(substring(in_text from i for 1));
  IF ( strpos(''01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ'', tmp) > 0 ) THEN
out_text := out_text || tmp;
  END IF;
END LOOP;

RETURN out_text;

  END;

' LANGUAGE 'plpgsql';
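
For the record, a minimal untested sketch of such a trigger, reusing the
function and table above (the trigger and function names are just
placeholders):

CREATE OR REPLACE FUNCTION addresses_hashify() RETURNS trigger AS '
  BEGIN
    -- keep the hash columns in sync with the source columns
    NEW.hash_company := hashify_text(NEW.company);
    NEW.hash_fname   := hashify_text(NEW.fname);
    NEW.hash_lname   := hashify_text(NEW.lname);
    RETURN NEW;
  END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER addresses_hashify
  BEFORE INSERT OR UPDATE ON addresses
  FOR EACH ROW EXECUTE PROCEDURE addresses_hashify();

-- searches then compare against the hashed form, e.g.:
-- SELECT * FROM addresses WHERE hash_company = hashify_text('A B C Ltd.');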


Regards,


Abdul-Wahid

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

