Re: [GENERAL] Avoiding deadlocks on mass delete / update

2010-03-26 Thread Роман Маширов




Craig Ringer wrote:

  Роман Маширов wrote:
  
  
I've got a simple 'spool' table: one 'worker' process reads and updates
this table, and another process, 'stat', performs 'delete ... where ... returning *'.
Sometimes I get deadlocks on the delete operation in 'stat'; it seems that at
the moment data expires by timeout, some state changes arrive
from the worker. So the question: is it possible to somehow set the order of
row deletion in such a bulk delete operation, to avoid deadlocks?

  
  
OK, so for the sake of example, WORKER is UPDATEing rows that stat is
trying to DELETE at the same time, such that worker holds a lock on row
A and wants a lock on row B, but stat holds B and wants A?

In other words, the deadlock is an _interaction_ between 'stat' and
'worker'?
  

yes

  
Can you post the queries?
  

as dumb as possible :) 

The worker parses several thousand events and does

update queue set state=$1 where queue_id in () and state in ()

for each target state, so it performs 1-4 update queries.

stat does

delete from queue where queue_stamp < now()-'1day'::interval or state in ()
returning *

The main reason for doing it this way is to reduce the number of queries
performed, since this queue can run at about 50 objects per second
with 4 state changes.

  One option is to SELECT ... FOR UPDATE NOWAIT before your UPDATE or DELETE.
  

Yep, thank you very much!

But it would be a nice feature to be able to explicitly set the order
of a multi-row update / delete, or to have a 'delete unless locked',
meaning delete all rows matching the given query parameters, except
the locked ones. 
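
For reference, a minimal sketch of the suggested SELECT ... FOR UPDATE NOWAIT
approach (assuming queue_id is the primary key; the filter condition is a
placeholder, not the poster's exact query):

begin;
-- lock the target rows in a consistent order; fail fast instead of waiting
select queue_id from queue
 where queue_stamp < now() - '1 day'::interval
 order by queue_id
 for update nowait;
delete from queue
 where queue_stamp < now() - '1 day'::interval
 returning *;
commit;

If both 'worker' and 'stat' take row locks in the same order (here, by
queue_id), the lock-ordering deadlock cannot occur; NOWAIT additionally makes
the statement error out immediately when a row is already locked, instead of
blocking.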

--
MRJ




Re: [GENERAL] Large index operation crashes postgres

2010-03-26 Thread Frans Hals
The operation has now been running for around 13 hrs.
Two postmaster processes above 1% memory usage are running.

One of them constantly uses 26.5% of memory.
The other one is growing:
After 1 hr    25%
After 9 hrs   59%
After 13 hrs  64%

Thanks & regards
Frans


2010/3/25 Frans Hals :
> Paul,
>
> I have started the operation right now after a fresh reboot of the machine.
> 
>  Please give me some time to collect the results...
>
> Thanks & regards
> Frans
>
> 2010/3/25 Paul Ramsey :
>> If you build an index, or try to update a column, using just the
>>
>> ST_X(ST_Centroid())
>>
>> without all the extra pl/pgsql wrapping, do you still see the memory
>> footprint grow substantially?
>>
>> P.
>



[GENERAL] How to write Rules on a View to allow all actions as in the physical table?

2010-03-26 Thread Andre Lopes
Hi,

I have a problem with a view and its rules.

Ok, I have a table to store Session data, the structure is this:
[code]
CREATE TABLE "cti_sessions" (
"session_id" varchar(40) NOT NULL DEFAULT 0,
"ip_address" varchar(16) NOT NULL DEFAULT 0,
"user_agent" varchar(50) NOT NULL,
"last_activity" int4 NOT NULL DEFAULT 0,
"user_data" text,
"coment" varchar(2000),
"id_utiliz_ins" varchar(45),
"id_utiliz_upd" varchar(45),
"data_ult_actual" timestamp,
  PRIMARY KEY("session_id"),
  CONSTRAINT "ckeck_last_activity" CHECK(last_activity >= 0)
);
[/code]

And I have a view with this structure:
[code]
CREATE OR REPLACE VIEW "ci_sessions" AS
select session_id, ip_address, user_agent, last_activity, user_data from
cti_sessions;

CREATE OR REPLACE RULE "ins_ci_sessions" AS
ON INSERT TO "ci_sessions"
DO INSTEAD
(insert into cti_sessions (session_id, ip_address, user_agent,
last_activity, user_data) values (new.session_id, new.ip_address,
new.user_agent, new.last_activity, new.user_data));

CREATE OR REPLACE RULE "del_ci_sessions" AS
ON DELETE TO "ci_sessions"
DO INSTEAD
(delete from cti_sessions where session_id = old.session_id);

CREATE OR REPLACE RULE "upd_ci_sessions" AS
ON UPDATE TO "ci_sessions"
DO INSTEAD
(update cti_sessions set ip_address = new.ip_address, user_agent =
new.user_agent, last_activity = new.last_activity, user_data = new.user_data
where session_id = old.session_id);
[/code]

If I use the physical table to do the session operations, it works OK.
If I use the view, it doesn't work.

How can I write the rules to allow all the same actions as on the physical table?


Best Regards,


Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread Alan McKay
On Thu, Mar 25, 2010 at 4:04 PM, Merlin Moncure  wrote:
> There is very little reason to do this.  both postgres and the
> operating system cache frequently used pages in memory already and
> they are pretty smart about it -- this leaves more memory for
> temporary demands like sorts, indexes, large result sets, etc.  It's
> usually just as good to simply set fsync=off on the database in
> scenarios where you are ok with data loss following a crash and the
> system is performance critical.

(I work with the OP)

We've found that writes to the RAM-based DB are about 3x faster than
disk-based (with fsync turned ON), but we were expecting them to be a
LOT faster than that and are wondering what we might be doing wrong.


-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of "In Defense of Food"



[GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Rajan, Pavithra
Hello - I have this table with 90 rows, which contains 2 columns: column
A (type 'numeric') and column B (type text). Column 'A' is filled with a
constant number and column 'B' has a unique entry for each row.
E.g.      A          B
      (numeric)   (text)

      15968.0     002
      15968.0     003
      15968.0     004
      15968.0     011
      15968.0     012
      15968.0     057
      15968.0     006
      15968.0     009
      15968.0     010
       ..
       ..
I would now want to update the entire column A with a list of values
that I have (06959.0, 15308.0, 15968.0, 18916.2, 19961.0, 26528.0,
29553.0, 29872.0, 30631.0 ...). How do I accomplish this? Thank you.
  



Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread Alan McKay
On Thu, Mar 25, 2010 at 4:15 PM, Scott Marlowe  wrote:
> These questions always get the first question back, what are you
> trying to accomplish?  Different objectives will have different
> answers.

We have a real-time application that processes data as it comes in.
Doing some simple math tells us that a disk-based DB cannot possibly
perform fast enough to allow us to process the data.

> Now, if your pg_xlog directory is a problem, then you either need
> bigger faster hard drives, or your data is more transient in nature
> and you can recreate it and you put the whole db into RAM.

When we only saw a 3x improvement in speed with the RAM based DB, we
were still seeing a fair bit of disk activity but were not sure what
was going on.  Then we thought about pg_xlog and moved it to RAM as
well, but as I recall still not a great improvement.

We are trying a test right now where "initdb" was run against
/ramdisk/data so that absolutely everything should be in there.  Will
report back with results.

We are also about to try another test with a regular disk-based DB and
fsync turned OFF

> Note that the query planner wasn't designed with RAM as the storage
> space for pg, so it might make some bad decisions until you adjust
> postgresql.conf to stop that.  and then it still might make some bad
> decisions.

What things might need adjusting?

thanks,
-Alan


-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of "In Defense of Food"



Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread Ozz Nixon

On 3/26/10 10:06 AM, Alan McKay wrote:
> On Thu, Mar 25, 2010 at 4:15 PM, Scott Marlowe wrote:
>> These questions always get the first question back, what are you
>> trying to accomplish?  Different objectives will have different
>> answers.
>
> We have a real-time application that processes data as it comes in.
> Doing some simple math tells us that a disk-based DB cannot possibly
> perform fast enough to allow us to process the data.


I have to ask the obvious question... as we develop solutions which must 
process 100,000 queries a second. In those cases, we use a combination 
of hash tables and linked lists. There are times when SQL is not the right 
choice; it is great for simplifying indexing and locking - but prior to 
SQL *we* had to write code, guys... and it sounds like you too need to go 
back to old-school programming techniques.


O.



Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Szymon Guz
2010/3/26 Rajan, Pavithra 

>  Hello - I have this table with 90 rows, which contains 2 columns: column
> A (type 'numeric') and column B (type text). Column 'A' is filled with a
> constant number and column 'B' has a unique entry for each row.
>
> E.g.      A          B
>       (numeric)   (text)
>
>       15968.0     002
>       15968.0     003
>       15968.0     004
>       15968.0     011
>       15968.0     012
>       15968.0     057
>       15968.0     006
>       15968.0     009
>       15968.0     010
>        ..
>        ..
> I would now want to update the entire column A with a list of values that
> I have (06959.0, 15308.0, 15968.0, 18916.2, 19961.0, 26528.0, 29553.0,
> 29872.0, 30631.0 ...). How do I accomplish this? Thank you.
>
>
>
>

Hi,
what is the relation between the current data and the data that you have in
the list? How do you want to update those values? Where should the value
06959.0 go, and where 29872.0?

regards
Szymon Guz


Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Rajan, Pavithra
Hello, 
 Yes - I need to get exactly the same result as you listed. Thanks.



From: Timo Klecker [mailto:klec...@decoit.de] 
Sent: Friday, March 26, 2010 10:12 AM
To: Rajan, Pavithra ; pgsql-general@postgresql.org
Subject: AW: [GENERAL] Need help on updating an entire column with a list of 
values, I have.



Hello,

 

What do you expect as a result? Something like this?

 

E.g.      A          B
      (numeric)   (text)

      06959.0     002
      15308.0     003
      15968.0     004
      18916.0     011
      19961.0     012
      26528.0     057
      29553.0     006
      29872.0     009
      30631.0     010



 

Kind regards,

Timo Klecker 

 

 





Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Timo Klecker
Hello,

 

What do you expect as a result? Something like this?

 

E.g.      A          B
      (numeric)   (text)

      06959.0     002
      15308.0     003
      15968.0     004
      18916.0     011
      19961.0     012
      26528.0     057
      29553.0     006
      29872.0     009
      30631.0     010



 

Kind regards,

Timo Klecker 

 

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On behalf of Rajan, Pavithra 
Sent: Friday, 26 March 2010 14:48
To: pgsql-general@postgresql.org
Subject: [GENERAL] Need help on updating an entire column with a list of
values, I have.

 

Hello - I have this table with 90 rows, which contains 2 columns: column A
(type 'numeric') and column B (type text). Column 'A' is filled with a
constant number and column 'B' has a unique entry for each row.

E.g.      A          B
      (numeric)   (text)

      15968.0     002
      15968.0     003
      15968.0     004
      15968.0     011
      15968.0     012
      15968.0     057
      15968.0     006
      15968.0     009
      15968.0     010
       ..
       ..
I would now want to update the entire column A with a list of values that I
have (06959.0, 15308.0, 15968.0, 18916.2, 19961.0, 26528.0, 29553.0,
29872.0, 30631.0 ...). How do I accomplish this? Thank you.

  




Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Thom Brown
On 26 March 2010 13:47, Rajan, Pavithra  wrote:
> Hello - I have this table with 90 rows, which contains 2 columns: column A
> (type 'numeric') and column B (type text). Column 'A' is filled with a
> constant number and column 'B' has a unique entry for each row.
>
> E.g.      A          B
>       (numeric)   (text)
>
>     15968.0   002
>     15968.0   003
>     15968.0   004
>     15968.0   011
>     15968.0   012
>     15968.0   057
>     15968.0   006
>     15968.0   009
>     15968.0   010
>  ..
>  ..
> I would now want to update the entire column A with a list of values that I
> have (06959.0, 15308.0, 15968.0, 18916.2, 19961.0, 26528.0, 29553.0,
> 29872.0, 30631.0 ...). How do I accomplish this? Thank you.
>
>

It looks like you'd need to do something like the following:

UPDATE my_table SET a = 06959.0 WHERE b = '002';
UPDATE my_table SET a = 15308.0 WHERE b = '003';
UPDATE my_table SET a = 15968.0 WHERE b = '004';

etc

Not sure about your column layout though.  The primary key (in this
case your column B) would usually appear as the first column, and does
it always contain numbers?  If so, wouldn't an int be better?

Thom



Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread Alan McKay
On Fri, Mar 26, 2010 at 10:14 AM, Ozz Nixon  wrote:
> I have to ask the obvious question... as we develop solutions which must
> process 100,000 queries a second. In those cases, we use a combination of
> hash tables and linked lists. There are times when SQL is not the right
> choice; it is great for simplifying indexing and locking - but prior to SQL
> *we* had to write code, guys... and it sounds like you too need to go back
> to old-school programming techniques.

Oh I hear ya!   I'm in the IT team, and I told the SW designers some 6
or 7 weeks ago now that I don't think they should be doing this with a
DB.


-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of "In Defense of Food"



Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Grzegorz Jaśkiewicz
Create a temporary table, insert your data, and then run an update with a
join against the table you wish to modify. And then drop your temp table.
Simple.
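
A minimal sketch of that approach (table and column names are assumed):

create temp table new_vals (b text, a numeric);
-- load the list of new values alongside the key each one belongs to
insert into new_vals values ('002', 06959.0), ('003', 15308.0);  -- ... and so on
-- join the temp table against the target and copy the values across
update my_table set a = new_vals.a from new_vals where my_table.b = new_vals.b;
drop table new_vals;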


Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread Gordan Bobic

Alan McKay wrote:
> On Thu, Mar 25, 2010 at 4:15 PM, Scott Marlowe wrote:
>> These questions always get the first question back, what are you
>> trying to accomplish?  Different objectives will have different
>> answers.
>
> We have a real-time application that processes data as it comes in.
> Doing some simple math tells us that a disk-based DB cannot possibly
> perform fast enough to allow us to process the data.
>
>> Now, if your pg_xlog directory is a problem, then you either need
>> bigger faster hard drives, or your data is more transient in nature
>> and you can recreate it and you put the whole db into RAM.
>
> When we only saw a 3x improvement in speed with the RAM-based DB, we
> were still seeing a fair bit of disk activity but were not sure what
> was going on.  Then we thought about pg_xlog and moved it to RAM as
> well, but as I recall still not a great improvement.


Have you considered using one of these:
http://www.acard.com/english/fb01-product.jsp?idno_no=270&prod_no=ANS-9010&type1_title=Solid State Drive&type1_idno=13


Gordan



Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Timo Klecker
Hi again,

 

Are there OIDs in your table, or do you have any way to guarantee the
mentioned order of your data rows when you do a select?

 

If you can assure the order, you could use the temp table solution mentioned
by Grzegorz Jaśkiewicz.

 

If you cannot assure the order this could get tricky.

 

Kind regards,

Timo Klecker 

 

 

 

From: Rajan, Pavithra [mailto:raj...@coned.com] 
Sent: Friday, 26 March 2010 15:17
To: Timo Klecker; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Need help on updating an entire column with a list of
values, I have.

 

Hello, 

 Yes - I need to get exactly the same result as you listed. Thanks.

 





[GENERAL] Solid State Drives with PG (was: in RAM DB)

2010-03-26 Thread Alan McKay
> Have you considered using one of these:
> http://www.acard.com/english/fb01-product.jsp?idno_no=270&prod_no=ANS-9010&type1_title=Solid State Drive&type1_idno=13

We did some research which suggested that performance may not be so
great with them because the PG engine is not optimized to utilize
those drives.

So, I'll change the subject line to see if anyone has experience using these.


-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of "In Defense of Food"



[GENERAL] Using readline for frequently used queries

2010-03-26 Thread Tim Landscheidt
Hi,

depending on the database, I use some "dashboard queries"
rather frequently. To ease executing them, I've put:

| $include /etc/inputrc

| $if psql
| "\e[24~": "\fSELECT * FROM DashboardQuery;\n"
| $endif

in my ~/.inputrc ("\e[24~" is [F12]).

  Obviously, this only works if a) the current line and
b) the query buffer are empty. Before I try and err: Has
anyone put some thought into how to reliably do that? My first
impulse would be C-a, C-k, "\r", C-j, but this doesn't solve
the problem if psql is in "quote mode" (e.g. the previous
line contained an opening "'" or '"').

TIA,
Tim




Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Szymon Guz
On 26 March 2010 15:21, Grzegorz Jaśkiewicz wrote:

> Create a temporary table, insert your data, and then run an update with a
> join against the table you wish to modify. And then drop your temp table.
> Simple.
>
>
It would be a nice solution if we knew anything about the correlation
between the new values list and the B column values. In fact, we have some
values in a random order in the database and a list of values that isn't
correlated with them.

regards
Szymon


Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Grzegorz Jaśkiewicz
You can't really do any updates sensibly unless you know what the relation
is. So I kind of silently assumed that you know that.


Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Timo Klecker
Hello,

 

you could use a plpgsql function:

 

CREATE OR REPLACE FUNCTION update(numeric[])
  RETURNS void AS
$BODY$
declare
  data alias for $1;
  table_obj record;
  i integer;
begin
  i := 1;  -- PostgreSQL array subscripts start at 1
  -- walk the rows in a fixed order and assign the array values one by one
  for table_obj in select * from TABLENAME order by THE_ORDER loop
    update TABLENAME set A = data[i] where B = table_obj.B;
    i := i + 1;
  end loop;
end;
$BODY$
  LANGUAGE 'plpgsql';

And call it with:

Select
update('{06959.0,15308.0,15968.0,18916.2,19961.0,26528.0,29553.0,29872.0,30631.0}')

 

PS: written code was not tested!

 

Kind regards,

Timo Klecker 

 

 

 

From: Rajan, Pavithra [mailto:raj...@coned.com] 
Sent: Friday, 26 March 2010 15:17
To: Timo Klecker; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Need help on updating an entire column with a list of
values, I have.

 

Hello, 

 Yes - I need to get exactly the same result as you listed. Thanks.

 





Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Rajan, Pavithra
Yes, thanks - I am trying to write a script that will do the update rather 
than doing individual inserts or updates. I'll try this idea.



From: Timo Klecker [mailto:klec...@decoit.de] 
Sent: Friday, March 26, 2010 10:51 AM
To: Rajan, Pavithra ; pgsql-general@postgresql.org
Subject: AW: [GENERAL] Need help on updating an entire column with a list of 
values, I have.



Hello,

 

you could use a plpgsql function:

 

CREATE OR REPLACE FUNCTION update(numeric[])
  RETURNS void AS
$BODY$
declare
  data alias for $1;
  table_obj record;
  i integer;
begin
  i := 1;  -- PostgreSQL array subscripts start at 1
  -- walk the rows in a fixed order and assign the array values one by one
  for table_obj in select * from TABLENAME order by THE_ORDER loop
    update TABLENAME set A = data[i] where B = table_obj.B;
    i := i + 1;
  end loop;
end;
$BODY$
  LANGUAGE 'plpgsql';

And call it with:

Select
update('{06959.0,15308.0,15968.0,18916.2,19961.0,26528.0,29553.0,29872.0,30631.0}')

 

PS: written code was not tested!

 

Kind regards,

Timo Klecker 

 

 

 





Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread John Gage
As a kind of [very?] dumb question, is this where SQLite has been
used?  I am just curious.



On Mar 26, 2010, at 3:14 PM, Ozz Nixon wrote:


> On 3/26/10 10:06 AM, Alan McKay wrote:
>> On Thu, Mar 25, 2010 at 4:15 PM, Scott Marlowe wrote:
>>> These questions always get the first question back, what are you
>>> trying to accomplish?  Different objectives will have different
>>> answers.
>>
>> We have a real-time application that processes data as it comes in.
>> Doing some simple math tells us that a disk-based DB cannot possibly
>> perform fast enough to allow us to process the data.
>
> I have to ask the obvious question... as we develop solutions which
> must process 100,000 queries a second. In those cases, we use a
> combination of hash tables and linked lists. There are times when SQL
> is not the right choice; it is great for simplifying indexing and
> locking - but prior to SQL *we* had to write code, guys... and it
> sounds like you too need to go back to old-school programming
> techniques.
>
> O.



[GENERAL] Error 42501 permission denied for schema

2010-03-26 Thread akp geek
Dear all

I have 2 schemas, schema1 and schema2.

1. GRANT USAGE ON SCHEMA schema1 TO schema2;

I am trying to create a function in schema2; in that function I need to
access some tables from schema1.

I am getting the following error when I compile the function.

The search path is set to schema2,schema1,public.

Can you please help? Appreciate your time


Regards


Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread Merlin Moncure
On Fri, Mar 26, 2010 at 10:06 AM, Alan McKay  wrote:
> We are trying a test right now where "initdb" was run against
> /ramdisk/data so that absolutely everything should be in there.  Will
> report back with results.
>
> We are also about to try another test with a regular disk-based DB and
> fsync turned OFF
>
>> Note that the query planner wasn't designed with RAM as the storage
>> space for pg, so it might make some bad decisions until you adjust
>> postgresql.conf to stop that.  and then it still might make some bad
>> decisions.
>
> What things might need adjusting?

if you are chasing TPS in volatile, cpu-bound workloads, you can try:
*) disable fsync
*) full_page_writes = off
*) disable ALL statistics (track_activities, etc)

application side:
*) using libpq? consider moving to binary style queries
and
*) prepare queries
or
*) if possible, stack queries together, perhaps stacking data into arrays
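
A sketch of the server-side settings mentioned above, as they would appear
in postgresql.conf (assuming, as stated, that losing data after a crash is
acceptable):

fsync = off
full_page_writes = off
# statistics collection:
track_activities = off
track_counts = off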

Well, if you are looking at extreme TPS rates beyond what postgres can
handle on your hardware (there are some hard cpu limits besides what
the disks are doing), you may have to start looking at an 'in process'
cache management library.  Of course, you have to balance this against
the loss of backend flexibility that postgres provides.  Also beware of
this route if you have any type of concurrency requirements.

At one point, out of curiosity, I hacked a test into the backend by
inserting queries directly into the protocol-handling switch in tcop.c
(bypassing the protocol completely) and was able to observe TPS rates
that are simply impossible in the current architecture (with the
backend processing humming along at 100% cpu from a single client).
Until postgres supports this type of mode of operation (which is not
likely to happen anytime soon), it is going to remain relatively
unsuited for super-high-TPS-rate, low-concurrency 'cache' type
applications where the real strengths of the database don't come into
play (all that said, you may still get it to work for your own stuff).

merlin



Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread Ozz Nixon

On 3/26/10 11:12 AM, John Gage wrote:
> As a kind of [very?] dumb question, is this where SQLite has been
> used?  I am just curious.

All questions are good ones, as that is how we all learn. ;-)

SQLite is useful for small-footprint environments, along with simpler 
solutions like XBase (DBase) files. They tend to be quick and easy to 
implement and use, but not robust enough for enterprise multi-user systems. 
(Not trying to start a flame war, just the facts.)


Enterprise engines are great for day-to-day transactional data flow: a 
few thousand reads with fewer writes. When your writes start to exceed 
your reads, you need to decide -- are those writes for audit and archive, 
or are they compounding the results of the reads?


If they are archive/history and audit as needed, this is where 
partitionable databases come to mind, or even simplistic text files 
(encrypted if needed).


If they are compounding your reads, then the fork in the road 
appears... there are questions you have to ask yourself about the 'now' 
and '3 years from now' of your data. For example, the original statement 
was that running the SQL engine in RAM mode only handled 3 times more 
data requests, and that is not enough (I assume). There are probably 
database designs and query techniques that could improve your 
performance -- but does that answer the 'now' or the '3 years from now' 
need? We spend hours on each of our database designs, and our queries - 
and sometimes the queries force us to redesign the schema so we can milk 
out a few hundred more queries in our time of measurement (minutes, 
seconds, or hours).


We had an existing solution in place which was capable of 
processing 10,000 queries a minute. At the point of design, that was 
more than our customer thought of doing. 8 months later, they were 
starting to see waits on their processes for our solution. I spent the 
next 2 days redesigning a simple socket listener with the data in RAM, 
using linked lists and hashes and returning results as XML. I introduced 
5 additional queries to improve the quality of the results, and the 
solution delivered to them now handles over 100,000 queries a second.


So with that said, the questions become:

What does your schema look like now?

What are you writing into the database?

How often are you writing?

What are you searching for?

How often are you searching?

How large is the result set that is flowing across the ether?

Sometimes, in answering these questions, it is easier to see that the 
problem is not the technology you are trying to leverage but how you 
are using the technology. Then there are times where you are trying to 
use the wrong technology. Answering the questions above will allow me and 
the PostgreSQL gurus to help you out.


* I use a wide range of SQL engines, depending upon budget, needs, etc., 
along with developing custom solutions when the DB way is not tailored 
enough for a need. Hope that helps, and shows you that your needs for 
now and 36 months from now play a big role in designs and re-designs.


O.



Re: [GENERAL] Error 42501 permission denied for schema

2010-03-26 Thread Tom Lane
akp geek  writes:
> I have 2 schemas , schema1 and schema 2.

> 1. GRANT USAGE ON SCHEMA schema1 TO schema2;

You seem to be confusing schemas and users --- they are not the same
thing at all.  The above grants the right to lookup objects in schema1
to the user (a/k/a role) named schema2, who doesn't necessarily have
anything to do with objects in schema2.

Also, although you failed to show the specific command that was failing,
I suspect what you are actually running into is lack of CREATE
permission not USAGE permission.

regards, tom lane



Re: [GENERAL] Error 42501 permission denied for schema

2010-03-26 Thread akp geek
Sorry for the confusion that I have caused.

   - roles: role1, role2
   - schemas: schema1, schema2
   - GRANT USAGE ON SCHEMA schema1 TO role2;
   - create function fnc_name(IN i_id numeric)
   - the function is created using role2

I ended up getting the error


ERROR: permission denied for schema schema1
SQL state: 42501


Appreciate your help.



On Fri, Mar 26, 2010 at 11:45 AM, Tom Lane  wrote:

> akp geek  writes:
> > I have 2 schemas , schema1 and schema 2.
>
> > 1. GRANT USAGE ON SCHEMA schema1 TO schema2;
>
> You seem to be confusing schemas and users --- they are not the same
> thing at all.  The above grants the right to lookup objects in schema1
> to the user (a/k/a role) named schema2; who doesn't necessarily have
> anything to do with objects in schema2.
>
> Also, although you failed to show the specific command that was failing,
> I suspect what you are actually running into is lack of CREATE
> permission not USAGE permission.
>
>regards, tom lane
>


Re: [GENERAL] Large index operation crashes postgres

2010-03-26 Thread Tom Lane
Frans Hals  writes:
> Operation is now running for around 13 hrs.
> Two postmaster processes above 1% memory usage are running.

> One of them constantly uses 26.5% of memory.
> The other one is growing:
> After 1 hr    25%
> After 9 hrs   59%
> After 13 hrs  64%

Well, it's pretty clear that you've found a memory leak, but that was
what we thought before; this data doesn't move us any closer to a fix.
In particular it's not possible to guess whether the leak should be
blamed on Postgres or Postgis code.  Even if we knew that, I'm not
sure we could fix the leak without tracing through actual execution.

Can you generate a self-contained test case that exhibits similar bloat?
I would think it's probably not very dependent on the specific data in
the column, so a simple script that constructs a lot of random data
similar to yours might be enough, if you would rather not show us your
real data.
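
For instance, a self-contained generator along these lines might be enough (a
sketch only; the random-point expression and table layout are assumptions,
not taken from the actual schema):

CREATE TABLE leak_test AS
  SELECT ST_SetSRID(ST_MakePoint(random()*360 - 180, random()*180 - 90), 4326) AS geom
  FROM generate_series(1, 1000000);
-- the kind of expression index that reportedly triggers the bloat
CREATE INDEX leak_test_idx ON leak_test (ST_X(ST_Centroid(geom)));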

regards, tom lane



Re: [GENERAL] Large index operation crashes postgres

2010-03-26 Thread Paul Ramsey
Occam's razor says it's PostGIS. However, I'm concerned about how old
the code being run is. In particular, the library underneath PostGIS,
GEOS, had a *lot* of memory work done on it over the last year. I'd
like to see if things improve if you upgrade to GEOS 3.2.

On Fri, Mar 26, 2010 at 9:04 AM, Tom Lane  wrote:
> Frans Hals  writes:
>> Operation is now running for around 13 hrs.
>> Two postmaster processes above 1% memory usage are running.
>
>> One of them constantly uses 26.5% of memory.
>> The other one is growing:
>> After 1 hr    25%
>> After 9 hrs   59%
>> After 13 hrs  64%
>
> Well, it's pretty clear that you've found a memory leak, but that was
> what we thought before; this data doesn't move us any closer to a fix.
> In particular it's not possible to guess whether the leak should be
> blamed on Postgres or Postgis code.  Even if we knew that, I'm not
> sure we could fix the leak without tracing through actual execution.
>
> Can you generate a self-contained test case that exhibits similar bloat?
> I would think it's probably not very dependent on the specific data in
> the column, so a simple script that constructs a lot of random data
> similar to yours might be enough, if you would rather not show us your
> real data.
>
>                        regards, tom lane
>



Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread John Gage

Thanks very, very much for this reply.  It is extremely useful.

So far, I have not run into anything remotely resembling a performance
barrier in Postgres.  I'm still looking :-)




On Mar 26, 2010, at 4:43 PM, Ozz Nixon wrote:


> [snip - full message quoted above]


Re: [GENERAL] Need help on updating an entire column with a list of values, I have.

2010-03-26 Thread Rajan, Pavithra
Hello all - Thanks for all your inputs and Klecker's script. I slightly 
tweaked the script, with Bryan's help, to implement it.
 
CREATE OR REPLACE FUNCTION update(numeric[])
  RETURNS void AS
$BODY$
declare
  data alias for $1;
  table_obj record;
  i integer;
begin
  i := 1;  -- array index started from 1
  for table_obj in select * from TABLENAME order by THE_ORDER loop
    -- had to pad with a leading zero, as the datatype was cutting out
    -- leading zeros in entries
    update TABLENAME
       set A = substring('0' || data[i]::varchar from length(data[i]::varchar) - 5)
     where B = table_obj.B;
    i := i + 1;
  end loop;
end;
$BODY$
  LANGUAGE 'plpgsql';

Select
update('{06959.0,15308.0,15968.0,18916.2,19961.0,26528.0,29553.0,29872.0,30631.0}')
 

 

Rgds

Pavithra

 

 
 
 


From: Timo Klecker [mailto:klec...@decoit.de] 
Sent: Friday, March 26, 2010 10:51 AM
To: Rajan, Pavithra ; pgsql-general@postgresql.org
Subject: AW: [GENERAL] Need help on updating an entire column with a list of 
values, I have.



Hello,

 

you could use a plpgsql function:

 

CREATE OR REPLACE FUNCTION update(numeric[])
  RETURNS void AS
$BODY$
declare
  data alias for $1;
  table_obj record;
  i integer;
begin
  i := 1;  -- PostgreSQL array subscripts start at 1
  -- walk the rows in a fixed order and assign the array values one by one
  for table_obj in select * from TABLENAME order by THE_ORDER loop
    update TABLENAME set A = data[i] where B = table_obj.B;
    i := i + 1;
  end loop;
end;
$BODY$
  LANGUAGE 'plpgsql';

And call it with:

Select
update('{06959.0,15308.0,15968.0,18916.2,19961.0,26528.0,29553.0,29872.0,30631.0}')

 

PS: written code was not tested!

 

Kind regards,

Timo Klecker 

 

 

 





Re: [GENERAL] Solid State Drives with PG (was: in RAM DB)

2010-03-26 Thread Merlin Moncure
On Fri, Mar 26, 2010 at 10:32 AM, Alan McKay  wrote:
>> Have you considered using one of these:
>> http://www.acard.com/english/fb01-product.jsp?idno_no=270&prod_no=ANS-9010&type1_title=
>> Solid State Drive&type1_idno=13
>
> We did some research which suggested that performance may not be so
> great with them because the PG engine is not optimized to utilize
> those drives.
>
> So, I'll change the subject line to see if anyone has experience using these.

postgres works fine with flash SSD, understanding that:
*) the postgres disk block is 8k and the ssd erase block is much larger
(newer ssd controllers minimize this penalty though)
*) many flash drives cheat and buffer writes to delay full sync, for
performance reasons and to extend the life of the drive
*) if you have a relatively small database, the big 'win' of SSD,
fast random reads, is of little/no use because the o/s will buffer the
database in ram anyway.

The ideal candidate for flash SSD, from a database point of view, is one
who is having I/O problems coming from OLTP-type activity forcing the
disks to constantly seek all over the place to write and (especially)
read data.  This happens when your database grows to the point where
its OPERATIONAL (that is, frequently used) data size exceeds ram to a
certain extent and o/s buffering of reads starts to become less
effective.  This can crush database performance.

flash SSD 'fixes' this problem because, relative to a disk head seek,
the cost of random read i/o on flash is basically zero.  however, flash
has some problems writing, such that you get to choose between
volatility of data (irrespective of fsync) and lousy performance.  So
flash isn't yet a general purpose database solution, and won't be until
the write performance problem is fixed in a way that doesn't
compromise on volatility.  If/when that happens, and there isn't a
huge price premium to pay vs flash prices today, all my new servers
will be spec'd with flash :-).

merlin



[GENERAL] round(x) function

2010-03-26 Thread Gaietti, Mauro (SELEX GALILEO Guest, Italy)
This query:

select round(0.5), round(0.5::integer), round(0.5::bigint),
       round(0.5::float), round(0.5::double precision),
       round(cast(0.5 as double precision)),
       round(cast(0.5::double precision as numeric));

has a strange result:

1 1 1 0 0 0 1

Is this correct? 

My expected result is

1 1 1 1 1 1 1

Mauro 


 



Re: [GENERAL] round(x) function

2010-03-26 Thread Tom Lane
"Gaietti, Mauro \(SELEX GALILEO Guest, Italy\)" 
 writes:
> This query: 
> select round(0.5), round(0.5::integer), round(0.5::bigint), round(
> 0.5::float ), round( 0.5::double precision ),round(cast(0.5 as double
> precision )),round(cast(0.5::double precision as numeric )); has strange
> result:

> 1 1 1 0 0 0 1

> Is this correct? 

On most machines the float4/float8 operations follow the IEEE-754 spec's
"round to nearest even" rule for such cases.  I think all the other ones
are "add 0.5 and truncate" implementations.  It's unlikely we'll mess
with either; and *highly* unlikely that we would change the float
behavior since that's not even under our direct control (the hardware or
libm is what's doing that).  There's some case to be made for making
numeric and integer math do it the IEEE way, but I think that would
displease at least as many people as it pleased ...
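
For illustration, a quick sketch of the two behaviors (the float result is
what an IEEE-754 platform is expected to produce):

select round(2.5::numeric);  -- 3: numeric adds 0.5 and truncates
                             --    (rounds halves away from zero)
select round(2.5::float8);   -- 2: float8 rounds halves to the nearest even value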

regards, tom lane



Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread John R Pierce

Chris Barnes wrote:
> We are testing an in-memory postgres database and have questions about
> configuring the ram mount point and whether there are great gains in
> setting it up this way? Are there any considerations for postgres?
>
> If you have experience, can you please give us some ideas on how you
> have accomplished this?
 


you might look into TimesTen... Oracle bought them a couple years ago; 
they have an SQL database that's heavily optimized for memory rather than 
block-oriented disk.  It optionally uses a disk as a persistence backing 
store.  Of course, the entire database has to fit in ram, and they 
charge proportionally to database size.  It's extremely fast.








Re: [GENERAL] round(x) function

2010-03-26 Thread Justin Graf
On 3/26/2010 12:12 PM, Tom Lane wrote:
> "Gaietti, Mauro \(SELEX GALILEO Guest, 
> Italy\)"  writes:
>
>> This query:
>> select round(0.5), round(0.5::integer), round(0.5::bigint), round(
>> 0.5::float ), round( 0.5::double precision ),round(cast(0.5 as double
>> precision )),round(cast(0.5::double precision as numeric )); has strange
>> result:
>> 1 1 1 0 0 0 1
>>  
>> Is this correct?
>>  
> On most machines the float4/float8 operations follow the IEEE-754 spec's
> "round to nearest even" rule for such cases.  I think all the other ones
> are "add 0.5 and truncate" implementations.  It's unlikely we'll mess
> with either; and *highly* unlikely that we would change the float
> behavior since that's not even under our direct control (the hardware or
> libm is what's doing that).  There's some case to be made for making
> numeric and integer math do it the IEEE way, but I think that would
> displease at least as many people as it pleased ...
>
>   regards, tom lane
>
>
This topic of rounding, and which way is correct, keeps coming up every 
few months.  I would be in favor of adding an option to the round() 
function to specify the method.

Leave the defaults as they are for backward compatibility, meaning if no 
option is passed to Round() it follows current methods.  I agree 
changing how it works now would annoy many.

The option would be very simple, something like this:
 Round( 0.5, RoundToEven) = 0
 Round(-0.5, RoundToEven) = 0
 Round( 0.5, RoundUp)     = 1
 Round(-0.5, RoundUp)     = 0
 Round( 0.5, RoundDown)   = 0
 Round(-0.5, RoundDown)   = -1
 Round( 0.5, RoundToZero) = 0
 Round(-0.5, RoundToZero) = 0

There are so many methods of rounding to choose from
http://en.wikipedia.org/wiki/Rounding




Re: [GENERAL] Does anyone use in ram postgres database?

2010-03-26 Thread Chris Barnes

 

I just looked into TimesTen: 46K for a perpetual license, or 10K yearly 
plus support.

 

Is there anything else available? LOL

 

Chris
 
> Date: Fri, 26 Mar 2010 10:39:37 -0700
> From: pie...@hogranch.com
> To: compuguruchrisbar...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Does anyone use in ram postgres database?
> 
> Chris Barnes wrote:
> > 
> > 
> > We are testing in memory postgres database and have questions about 
> > configuring the ram mount point and whether there is great gains in 
> > setting it up this way? Are there any considerations for postgres?
> > 
> > If you have experience, can you please give us some ideas on how you 
> > have accomplished this?
> > 
> 
> you might look into TimesTen... Oracle bought them a couple years ago, 
> they have an SQL database thats heavily optimized for memory rather than 
> block oriented disk. it optionally uses a disk as a persistence backing 
> store. Of course, the entire database has to fit in ram, and they 
> charge proportional to database size. its extremely fast.
> 
> 
> 
> 
> 
  

Re: [GENERAL] Error 42501 permission denied for schema

2010-03-26 Thread Tom Lane
akp geek  writes:
> Sorry for the confusion  that I have caused
>- roles > role1 , role2
>- schemas > schema1, schema2
>- GRANT USAGE ON SCHEMA schema1 TO role2;
>- create function fnc_name(IN i_id numeric)
>- function is created using role2

> I ended up getting the error

> ERROR: permission denied for schema schema1

Well, keep in mind that what normally counts for a function's queries
is the permissions of the *caller*, not the owner.  If you want the
function to be able to do things regardless of who's calling it, mark
it SECURITY DEFINER, and then it runs with the owner's permissions.
Beware that malicious users might be able to subvert a SECURITY DEFINER
function to make it do something unintended ...
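
For instance, a minimal sketch (table and column names are illustrative; 
role2, which has USAGE on schema1, creates and owns the function):

create or replace function fnc_name(i_id numeric) returns numeric as $$
    select some_col from schema1.some_table where id = $1;
$$ language sql
   security definer
   set search_path = schema1, pg_temp;  -- pin search_path so callers can't subvert it

Callers such as role1 then only need EXECUTE on the function, not USAGE on
schema1.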

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Moving data directory from one server to another

2010-03-26 Thread Ehsan Haq
Hi,
   I have two Linux servers, both with the same hardware architecture; one runs ES5 
and the other ES4. Both servers have the same version of PostgreSQL installed 
(8.3). I want to move all my DBs from the ES5 server to the ES4 server. I 
have tried pg_dump, but there are a lot of encoding problems, so can I copy 
the whole data directory from the ES5 server to the ES4 server when the PostgreSQL server 
is not running? Will it be safe?
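
Something like this is what I have in mind (paths are illustrative, and
PostgreSQL would be stopped on both machines first):

# on the ES5 source:
pg_ctl -D /var/lib/pgsql/data stop -m fast
rsync -a /var/lib/pgsql/data/ es4:/var/lib/pgsql/data/
# on the ES4 target, after checking file ownership and permissions:
pg_ctl -D /var/lib/pgsql/data start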

Thanks
Ehsan



  

Re: [GENERAL] Warm Standby Setup Documentation

2010-03-26 Thread Greg Smith

Bryan Murphy wrote:
The one thing you should be aware of is that when you fail over, your 
spare has no spares.  I have not found a way around this problem yet.  
So, when you fail over, there is a window where you have no backups 
while you're building the new spares.  This can be pretty nerve 
wracking if your database is like ours and it takes 3-6 hours to bring 
a new spare online from scratch.


If there's another server around, you can have your archive_command on 
the master ship to two systems, then use the second one as a way to 
jump-start this whole process.  After fail-over, just start shipping 
from the new primary to that 3rd server, now the replacement standby, 
and sync any files it doesn't have.  Then switch it into recovery.  Much 
faster than doing a new base backup from the standby on larger systems.
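
For example (hostnames and paths illustrative), something like this does 
the trick, because archive_command only reports success when the whole 
command exits zero, so a WAL file isn't recycled until both copies are safe:

archive_command = 'rsync -a %p standby1:/walarchive/%f && rsync -a %p standby2:/walarchive/%f'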


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Solid State Drives with PG

2010-03-26 Thread Greg Smith

Merlin Moncure wrote:

So flash isn't yet a general purpose database solution, and won't be until
the write performance problem is fixed in a way that doesn't
compromise on volatility.


Flash drives that ship with a supercapacitor large enough to ensure 
orderly write cache flushing in the event of power loss seem to be the 
only solution anyone is making progress on for this right now.  That 
would turn them into something even better than the traditional 
approach of using regular disk with a battery-backed write caching 
controller.  Given the relatively small write cache involved and the 
fast write speed, it's certainly feasible to just flush at power loss 
every time rather than what the BBWC products do--recover once power 
comes back. 


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Warm Standby Setup Documentation

2010-03-26 Thread Ogden

On Mar 26, 2010, at 1:32 PM, Greg Smith wrote:

> Bryan Murphy wrote:
>> The one thing you should be aware of is that when you fail over, your spare 
>> has no spares.  I have not found a way around this problem yet.  So, when 
>> you fail over, there is a window where you have no backups while you're 
>> building the new spares.  This can be pretty nerve wracking if your database 
>> is like ours and it takes 3-6 hours to bring a new spare online from scratch.
> 
> If there's another server around, you can have your archive_command on the 
> master ship to two systems, then use the second one as a way to jump-start 
> this whole process.  After fail-over, just start shipping from the new 
> primary to that 3rd server, now the replacement standby, and sync any files 
> it doesn't have.  Then switch it into recovery.  Much faster than doing a new 
> base backup from the standby on larger systems.

How is it possible to use the archive_command to ship to two different servers?

archive_command = 'rsync -a %p postg...@192.168.x.x:/usr/local/pgsql/walfiles/%f'

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Storing data on a regular lat/lon grid

2010-03-26 Thread Mike Charles
Hi all,

I have temperature data that has been interpolated to a regular lat/lon
grid. I have one grid per day.

I want to be able to select points within a certain region, and within a
certain time period.

Now, I could store each grid point as a separate record (a new row for every
single point/time, with columns lat, lon, time, temperature), but since my
grid is constant, there's a lot of redundancy there, no? Disk space is
somewhat of a constraint, since there are a lot of files.

Basically, has anyone done any work with storing gridded spatial data? I see
lots of info on geospatial data, but it's usually cities, stations, etc.,
not a regular grid that doesn't change...

I also noticed that PostGIS does not support raster data...

Thanks a lot,

--
Mike


Re: [GENERAL] Solid State Drives with PG

2010-03-26 Thread Merlin Moncure
On Fri, Mar 26, 2010 at 2:32 PM, Greg Smith  wrote:
> Merlin Moncure wrote:
>>
>> So flash isn't yet a general purpose database solution, and won't be until
>> the write performance problem is fixed in a way that doesn't
>> compromise on volatility.
>
> Flash drives that ship with a supercapacitor large enough to ensure orderly
> write cache flushing in the event of power loss seem to be the only solution
> anyone is making progress on for this right now.  That would turn them into
> something even better than the traditional approach of using regular
> disk with a battery-backed write caching controller.  Given the relatively
> small write cache involved and the fast write speed, it's certainly feasible
> to just flush at power loss every time rather than what the BBWC products
> do--recover once power comes back.

right -- unfortunately there is likely going to be a fairly high cost
premium on these devices for a good while yet.  right now afaik you
only see this stuff on boutique type devices...yeech.  I have to admit
that until your running exposé on this stuff I was led to believe by a few
companies (especially Intel) that flash storage technology was a few
years ahead of where it really was -- it's going to take me a long
time to forgive them for that!

put another way (are you listening intel?): _NO_ drive should be
positioned for the server/enterprise market that does not honor fsync
by default unless it is very clearly documented!  This is forgivable
for a company geared towards the consumer market...but Intel...ugh!

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing data on a regular lat/lon grid

2010-03-26 Thread Merlin Moncure
On Fri, Mar 26, 2010 at 3:25 PM, Mike Charles  wrote:
> Hi all,
>
> I have temperature data that has been interpolated to a regular lat/lon
> grid. I have one grid per day.
>
> I want to be able to select points within a certain region, and within a
> certain time period.
>
> Now, I could store each grid point as a separate record (a new row for every
> single point/time, with columns lat, lon, time, temperature), but since my
> grid is constant, there's a lot of redundancy there, no? Disk space is
> somewhat of a constraint, since there are a lot of files.
>
> Basically, has anyone done any work with storing gridded spatial data? I see
> lot's of info on Geospatial data, but it's usually cities, stations, etc.,
> not a regular grid that doesn't change...

well, you could play around with storing the information in arrays.
storing a record for each point gives you the most flexibility for
querying and indexing but is extremely inefficient from a storage
perspective.  arrays are better from that point of view and can work
pretty well as long as you read/write the data in blocks in a fairly
regular way (and hopefully the layout of your grid doesn't change that
often).
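
a sketch of one such layout (names are illustrative, and it assumes a
fixed 0.5-degree global grid with origin at -90/-180, so 720 longitudes
per row of latitude):

-- one row per day; the whole grid in one flat array, with cell (i,j)
-- stored at subscript [i * 720 + j + 1]
create table temp_grid (
    grid_date date primary key,
    temps     real[]
);

-- point lookup for 49.25N 16.75E over a date range:
select grid_date,
       temps[ floor((49.25 + 90) / 0.5)::int * 720
            + floor((16.75 + 180) / 0.5)::int + 1 ] as temperature
from temp_grid
where grid_date between '2010-03-01' and '2010-03-26';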

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Solid State Drives with PG

2010-03-26 Thread Brad Nicholson
On Fri, 2010-03-26 at 15:27 -0400, Merlin Moncure wrote:
> On Fri, Mar 26, 2010 at 2:32 PM, Greg Smith  wrote:
> > Merlin Moncure wrote:
> >>
> >> So flash isn't yet a general purpose database solution, and won't be until
> >> the write performance problem is fixed in a way that doesn't
> >> compromise on volatility.
> >
> > Flash drives that ship with a supercapacitor large enough to ensure orderly
> > write cache flushing in the event of power loss seem to be the only solution
> > anyone is making progress on for this right now.  That would turn them into
> > something even better even than the traditional approach of using regular
> > disk with a battery-backed write caching controller.  Given the relatively
> > small write cache involved and the fast write speed, it's certainly feasible
> > to just flush at power loss every time rather than what the BBWC products
> > do--recover once power comes back.
> 
> right -- unfortunately there is likely going to be a fairly high cost
> premium on these devices for a good while yet.  right now afaik you
> only see this stuff on boutique type devices...yeech.  

TMS RamSan products have more than adequate capacitor power to handle
failure cases.  They look like a very solid product.  In addition to
this, they have internal RAID across the chips to protect against chip
failure. Wear-leveling is controlled on the board instead of offloaded
to the host.  I haven't gotten my hands on one yet, but should at some
point in the not too distant future.

I'm not sure what the price point is though.  But when you factor in the
cost of the products they are competing against from a performance
perspective, I'd be surprised if they aren't a lot cheaper.  Especially
when figuring in all the other costs that go along with disk arrays -
power, cooling, rack space costs.  

Depends on your vantage point, I guess.  I'm looking at these as
potential alternatives to some high end, expensive storage products, not
a cheap way to get really fast disk. 
-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Solid State Drives with PG

2010-03-26 Thread Merlin Moncure
On Fri, Mar 26, 2010 at 3:43 PM, Brad Nicholson
 wrote:
> I'm not sure what the price point is though.

here is a _used_ 320gb ramsan for 15k :-).  dram storage is pricey.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


R: [GENERAL] round(x) function

2010-03-26 Thread Gaietti, Mauro (SELEX GALILEO Guest, Italy)
I think this is not consistent with the documentation, which says there is just 
one round function with one argument of numeric type. It seems there are at least 
two different round functions with two different behaviours: one for float4/float8 
that "rounds to nearest even" and another one for the numeric type that "adds 0.5 
and truncates".
 
From the manual:
 Example 10-4. Rounding Function Argument Type Resolution

There is only one round function that takes two arguments; it takes a first 
argument of type numeric and a second argument of type integer. So the 
following query automatically converts the first argument of type integer to 
numeric:

SELECT round(4, 4);

 round
--------
 4.0000
(1 row)
That query is actually transformed by the parser to:

SELECT round(CAST (4 AS numeric), 4);



 

-----Original Message-----
From: Justin Graf [mailto:jus...@magwerks.com]
Sent: Friday, 26 March 2010 18:44
To: Tom Lane
Cc: Gaietti, Mauro (SELEX GALILEO Guest, Italy); pgsql-general@postgresql.org
Subject: Re: [GENERAL] round(x) function

On 3/26/2010 12:12 PM, Tom Lane wrote:
> "Gaietti, Mauro \(SELEX GALILEO Guest, 
> Italy\)"  writes:
>
>> This query:
>> select round(0.5), round(0.5::integer), round(0.5::bigint),
>> round(0.5::float), round(0.5::double precision),
>> round(cast(0.5 as double precision)),
>> round(cast(0.5::double precision as numeric));
>> has strange result:
>> 1 1 1 0 0 0 1
>>  
>> Is this correct?
>>  
> On most machines the float4/float8 operations follow the IEEE-754 spec's
> "round to nearest even" rule for such cases.  I think all the other ones
> are "add 0.5 and truncate" implementations.  It's unlikely we'll mess
> with either; and *highly* unlikely that we would change the float
> behavior since that's not even under our direct control (the hardware or
> libm is what's doing that).  There's some case to be made for making
> numeric and integer math do it the IEEE way, but I think that would
> displease at least as many people as it pleased ...
>
>   regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: R: [GENERAL] round(x) function

2010-03-26 Thread Tom Lane
"Gaietti, Mauro \(SELEX GALILEO Guest, Italy\)" 
 writes:
> I think this is not consistent with documentation that says there is
> just one round function, with one argument of numeric type.

The documentation you're quoting says there is just one round function
that takes *two* arguments.  Which is true:

regression=# \df round
                              List of functions
   Schema   | Name  | Result data type | Argument data types |  Type  
------------+-------+------------------+---------------------+--------
 pg_catalog | round | double precision | double precision    | normal
 pg_catalog | round | numeric          | numeric             | normal
 pg_catalog | round | numeric          | numeric, integer    | normal
(3 rows)

I don't think there's a claim anywhere that the numeric and double
precision versions of round(x) act identically.  They hardly could be
exactly identical anyway given the different properties of the two
datatypes.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Warm Standby Setup Documentation

2010-03-26 Thread Bryan Murphy
On Fri, Mar 26, 2010 at 1:32 PM, Greg Smith  wrote:

> If there's another server around, you can have your archive_command on the
> master ship to two systems, then use the second one as a way to jump-start
> this whole process.  After fail-over, just start shipping from the new
> primary to that 3rd server, now the replacement standby, and sync any files
> it doesn't have.  Then switch it into recovery.  Much faster than doing a
> new base backup from the standby on larger systems.
>
>
Every time I've tried to do this it's failed because the third server was
looking for log files starting with 0006... but the secondary server
(new master) is now shipping files starting with 0007...  How do I get
the third server to switch over to the higher numbered files?  That's the
part I was never able to overcome.

I'd really like to fix this, because this has literally given me nightmares.
:)

Bryan


[GENERAL] Connection Pooling

2010-03-26 Thread David Kerr
Howdy all,

I have some apps that are connecting to my DB via direct JDBC and I'd like to 
pool their connections.

I've been looking at poolers for a while, and pgbouncer and pgpool-II seem to 
be among the most popular, so I've started with those.


I'm setting up pgbouncer, and I've hit a bit of a snag. Hopefully someone can 
tell me if pgbouncer or pgpool is capable of this (and if so, how to do it), 
or alternately suggest a pooler that can...

What I'd like to be able to do is this (not using pooler syntax, this is just a 
high-level view of what I want to achieve):

Say I set max pool size = 10 connections, and max # of pools = 5.

That means that I should have 5 pools to my database, covering 50 
connections total.

I can't really seem to make that work with pgbouncer without naming the pools 
separately (pool1 = dbname = a, pool2 = dbname = a), 
which means my app is tied to a pool (or has to specifically code to rotate 
pools...), which is not really desirable.

Is what I'm looking to do possible in pgbouncer or pgpool? Or anything else?

Thanks

Dave

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Large index operation crashes postgres

2010-03-26 Thread Frans Hals
Tom,

I'm pretty new to memory debugging, so please be patient if I'm not as
precise as you expect me to be.
For the record, I have run a valgrind postmaster session, starting my
initial indexing routine until it crashes postgres.
If you think this might be enlightening for you, I'll send you the
transcript. It's too long for the list.

I'm not sure what you mean by generating a self-contained
test that exhibits similar bloat.
I have started an index creation using my data without calling postgis
functions, just to make it busy:
create index ... on placex (substring(geometry,1,100), rank_address, osm_type, osm_id);
This is now running against the 50.000.000 rows in placex. I will
update you about the memory usage it takes.

The data itself isn't a secret. I need your experience to find and fix
the problem. For my part, I'll try all the necessary steps you suggest.

Kind regards
Frans

2010/3/26 Tom Lane :
> Well, it's pretty clear that you've found a memory leak, but that was
> what we thought before; this data doesn't move us any closer to a fix.
> In particular it's not possible to guess whether the leak should be
> blamed on Postgres or Postgis code.  Even if we knew that, I'm not
> sure we could fix the leak without tracing through actual execution.
>
> Can you generate a self-contained test case that exhibits similar bloat?
> I would think it's probably not very dependent on the specific data in
> the column, so a simple script that constructs a lot of random data
> similar to yours might be enough, if you would rather not show us your
> real data.
>
>                        regards, tom lane
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Large index operation crashes postgres

2010-03-26 Thread Frans Hals
The index mentioned below was created in a few minutes without problems.
Dropped it and created it again. It uses around 36% of memory while
creating; after completion postmaster stays at 26%.


> I'm not sure what you mean by generating a self-contained
> test that exhibits similar bloat.
> I have started an index creation using my data without calling postgis
> functions, just to make it busy:
> create index ... on placex (substring(geometry,1,100), rank_address, osm_type, osm_id);
> This is now running against the 50.000.000 rows in placex. I will
> update you about the memory usage it takes.
>
>> Can you generate a self-contained test case that exhibits similar bloat?
>> I would think it's probably not very dependent on the specific data in
>> the column, so a simple script that constructs a lot of random data
>> similar to yours might be enough, if you would rather not show us your
>> real data.
>>
>>                        regards, tom lane
>>
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection Pooling

2010-03-26 Thread Allan Kamau
On Fri, Mar 26, 2010 at 11:17 PM, David Kerr  wrote:
> Howdy all,
>
> I have some apps that are connecting to my DB via direct JDBC and I'd like to
> pool their connections.
>
> [...]
>
> Is what I'm looking to do possible in pgbouncer or pgpool? Or anything else?


You may also have a look at Commons DBCP from the Apache Software
Foundation, http://commons.apache.org/dbcp/. I have used it for a
few projects and have had no problems.
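
On the pgbouncer side of the original question, my understanding is that a
single pool with transaction pooling already multiplexes many clients over
a few server connections, so the app never has to name a pool. A minimal
pgbouncer.ini sketch (addresses and auth settings are illustrative):

[databases]
a = host=127.0.0.1 port=5432 dbname=a pool_size=10

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = trust
pool_mode = transaction   ; server connections are handed out per transaction
max_client_conn = 50      ; up to 50 clients share the 10 server connections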


Allan.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general