Re: [GENERAL] simultaneously reducing both memory usage and runtime for a query

2010-03-29 Thread Faheem Mitha



On Sun, 28 Mar 2010, Andy Colson wrote:

Wait... You're saying your question is so complex it needs 41 pages 
(including graphs) to ask?  I didn't bother before, but now I'm curious; 
I'll have to go take a look.


No, only the 25-page document (which has the graphs) is directly related 
to the question. It consists of different variants of a query (well, two 
queries, but they are very similar), along with EXPLAIN ANALYZE VERBOSE 
output and time-memory graphs.


The 41-page document is just background, but relevant background. It 
contains information about the schema, tables, hardware, and pg config. 
It also has transcripts of a couple of old IRC sessions, which are 
less and less relevant, so I may remove them.


Since I've been told by a couple of different people that the way I asked 
my question was not ideal, I'll try reposting again with a modified 
version (but still containing the same information) in a bit.


  Regards, Faheem.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] simultaneously reducing both memory usage and runtime for a query

2010-03-29 Thread Faheem Mitha
Hi again Andy,

On Sun, 28 Mar 2010 20:59:24 -0500, Andy Colson a...@squeakycode.net wrote:
 On 03/28/2010 07:43 PM, Andy Colson wrote:
 On 03/28/2010 03:05 PM, Faheem Mitha wrote:



 Wait... You're saying your question is so complex it needs 41 pages
 (including graphs) to ask? I didn't bother before, but now I'm curious,
 I'll have to go take a look.

 -Andy

 Faheem, you seem to be incredibly detail oriented.  We probably only
 need 10% of what's in diag.pdf:

Most of this is stuff I've been asked on #postgresql. Admittedly, in
different contexts than this query.

[snippage]

 The rest is irc chat about getting the data imported into PG, and
  other than slowness problems, does not seem relevant to the sql in
  opt.pdf.

True, the IRC sessions should go. I'll take them out.

 As for opt.pdf, I don't think, again, we need all that detail.  And
  the important parts got cut off.  The explain analyze output is
  needed, but it's cut off.

 I'd recommend you paste the output here:

 http://explain.depesz.com/

 And give us links.  The explain analyze will have counts and info
 that we (ok not me, but Tom and others) can use to help you.

That's one way to go. I was going to paste the entirety of opt.tex
into an email. That would include all the EXPLAIN ANALYZE stuff, but
not the graphs, and thus would be relatively self-contained. For the
graphs you'd have to look at a PDF (unless a ps.gz is preferred).

 You also seem to have gone through several revisions of the sql (I
 admit, I just skimmed the pdf's), it would be great if you could
 drop the ones you are sure are not useful, and we concentrate on
 just one or two.

Ok, I'll trim it down a bit. At least the initial queries in both
sections are not relevant. Thanks for the feedback.

   Regards,Faheem.




Re: [GENERAL] Connection Pooling

2010-03-29 Thread Wappler, Robert
On 2010-03-29, David Kerr wrote:
 
 On 3/27/2010 12:46 AM, John R Pierce wrote:
 Allan Kamau wrote:
 You may also have a look at Commons DBCP from the Apache Software
 Foundation, http://commons.apache.org/dbcp/. I have used it for a
 few projects and have had no problems.
 
 for that matter, JDBC has its own connection pooling in java.
 
 
 
 
 It looks like both of those solutions require a coding change. I'm
 hoping for a middleware solution similar to pgpool/pgbouncer.
 

I'm using Proxool for JDBC connection pooling. It behaves as a normal
JDBC Driver or DataSource should. The configuration can be loaded
statically when starting the application.

-- 
Robert...
 




Re: [GENERAL] Large index operation crashes postgres

2010-03-29 Thread Frans Hals
Paul,

I took your advice and installed GEOS 3.2.0.
The index build has now been running for 14 hrs, and postmaster is taking all the RAM.
Sadly, it looks like the GEOS update didn't save me.

Regards
Frans

2010/3/28 Paul Ramsey pram...@cleverelephant.ca:
 GEOS 3.2 is backwards compatible, so you can install it overtop of 3.1
 and things will still work.

 P


 2010/3/26 Paul Ramsey pram...@cleverelephant.ca:
 Occam's razor says it's PostGIS. However, I'm concerned about how old
 the code being run is. In particular, the library underneath PostGIS,
 GEOS, had a *lot* of memory work done on it over the last year. I'd
 like to see if things improve if you upgrade to GEOS 3.2.





Re: [GENERAL] Why index occupy less amount of space than the table with same structure.

2010-03-29 Thread Alban Hertroys
On 29 Mar 2010, at 6:03, Tadipathri Raghu wrote:

 Hi Alban,
  
 Thank you for the update.
  
 For one thing: the table holds information about which transactions 
 each row is visible to (the xid), whereas the index does not.
  
 What would each value of the xid be, like 8 bytes or 32 bytes? What is it 
 that causes the table to hold what the index does not, such that the space 
 occupied by the indexes is exactly half of the table? Can you explain a bit 
 about this.

I'm pretty sure the documentation explains this better than I can.

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.







[GENERAL] looking for a powerful frontend/report generator

2010-03-29 Thread Clemens Eisserer
Hi,

Sorry for being a bit off-topic.
Recently I've done some database frontends, for which I used
java+swingset+netbeans-gui-builder.
Compared to plain Java, development is quite fast - however, I wonder
whether you could recommend DB frontend generators like the infamous
Access.

What I've found so far was either:
- extremely expensive
- not compatible with free DBs (like Postgres)
- not powerful
- not cross-platform

Does anybody know of tools that don't have the drawbacks listed above?

Thank you in advance, Clemens



[GENERAL] need a query

2010-03-29 Thread AI Rumman
I need a query to get the initial letter of the words:
Like:

Ispahani Public School -> IPS
Ahmed Iftekhar -> AI
Any help please.


Re: [GENERAL] need a query

2010-03-29 Thread Florent THOMAS
And what about this:
http://www.postgresql.org/docs/8.4/interactive/functions-string.html


Le lundi 29 mars 2010 à 14:44 +0600, AI Rumman a écrit :
 I need a query to get the initial letter of the words: 
 Like: 
 
 Ispahani Public School -> IPS 
 Ahmed Iftekhar -> AI 
 
 Any help please.


Re: [GENERAL] need a query

2010-03-29 Thread Timo Klecker
Try this:

SELECT regexp_matches('foobar beque bazil barf bonk', '(\\m.)', 'g')

Now you have the first letters; you can use array_to_string to get a string,
and use a string-concatenation aggregate after this:

http://stackoverflow.com/questions/43870/how-to-concatenate-strings-of-a-string-field-in-a-postgresql-group-by-query
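Taken together, the two steps might look like this (a sketch only, not from the thread; `\m` is PostgreSQL's word-start regex escape, written with a doubled backslash assuming the era's default backslash-escaping string literals):

```sql
-- collect the first letter of each word, then glue the letters together
SELECT array_to_string(
         ARRAY(
           SELECT (regexp_matches('Ispahani Public School', '(\\m.)', 'g'))[1]
         ),
         '') AS initials;   -- IPS
```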

 

 

 

 

Kind regards,

Timo Klecker

 

 

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of AI Rumman
Sent: Monday, 29 March 2010 10:45
To: pgsql-general General
Subject: [GENERAL] need a query

 

I need a query to get the initial letter of the words: 
Like: 

Ispahani Public School -> IPS 
Ahmed Iftekhar -> AI 

Any help please.



Re: [GENERAL] need a query

2010-03-29 Thread Ognjen Blagojevic

If you are sure that your words are space delimited, you may use this:

select regexp_replace('Ispahani Public School'||' ', '([^ ])([^ ]* )', 
'\\1', 'g');


Regards,
Ognjen


Timo Klecker wrote:

Try this:

SELECT regexp_matches('foobar beque bazil barf bonk', '(\\m.)', 'g')

Now you have the first letters; you can use array_to_string to get a 
string, and use a string-concatenation aggregate after this:


http://stackoverflow.com/questions/43870/how-to-concatenate-strings-of-a-string-field-in-a-postgresql-group-by-query

 

 

 

 


Kind regards,

Timo Klecker

 

*From:* pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] *On Behalf Of* AI Rumman

*Sent:* Monday, 29 March 2010 10:45
*To:* pgsql-general General
*Subject:* [GENERAL] need a query

 


I need a query to get the initial letter of the words:
Like:

Ispahani Public School -> IPS
Ahmed Iftekhar -> AI

Any help please.






[GENERAL] one null value in array isnt allowed???

2010-03-29 Thread Armand Turpel

Hi,

Updating an array field with one NULL value isn't possible:

UPDATE table SET integer_array = ARRAY[NULL]

But those queries are working:

UPDATE table SET integer_array = ARRAY[NULL,1]
UPDATE table SET integer_array = ARRAY[1,NULL]

This doesn't seem logical to me.
Is it a bug?

Thanks for helping,
atu





Re: [GENERAL] one null value in array isnt allowed???

2010-03-29 Thread Wappler, Robert
On 2010-03-29, Armand Turpel wrote:
 
 Hi,
 
 Updating an array field with one NULL value isn't possible:
 
 UPDATE table SET integer_array = ARRAY[NULL]
 

Try specifying an explicit element type, e.g. ARRAY[NULL]::int[]

 But those queries are working:
 
 UPDATE table SET integer_array = ARRAY[NULL,1]
 UPDATE table SET integer_array = ARRAY[1,NULL]
 
 This doesn't seem logical to me.
 Is it a bug?
 

No: for those, the array's element type can be inferred from the non-NULL
element, whereas ARRAY[NULL] alone carries no type information.
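A minimal sketch of the behavior, assuming a table t with an integer[] column (the table name is an assumption; the exact error text may differ by server version):

```sql
UPDATE t SET integer_array = ARRAY[NULL];         -- fails: element type cannot be determined
UPDATE t SET integer_array = ARRAY[NULL]::int[];  -- works: stores {NULL}
UPDATE t SET integer_array = ARRAY[NULL, 1];      -- works: int[] inferred from the literal 1
```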

 Thanks for helping,
 atu
 

HTH.


-- 
Robert...
 




Re: [GENERAL] hstore equality-index performance question

2010-03-29 Thread Sergey Konoplev
On 29 March 2010 02:57, Stefan Keller sfkel...@gmail.com wrote:
 Documentation at F.13.3. Indexes says that hstore has index support
 for @> and ? operators...
 => Therefore no index supports equality?

 If so, then I suppose that the following (potentially slow) query,
 which contains an equality test for all keys 'a' and returns all values...

  SELECT id, (kvp->'a') FROM mytable;

 ... can be accelerated nevertheless by adding the following where clause:

  SELECT id, (kvp->'a') FROM mytable WHERE kvp ? 'a';

 => Is this correct?


Maybe you are looking for something like this?

postg...@localhost test=#
CREATE TABLE hstore_partial_index_table (id serial PRIMARY KEY, h hstore);
NOTICE:  CREATE TABLE will create implicit sequence
hstore_partial_index_table_id_seq for serial column
hstore_partial_index_table.id
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
hstore_partial_index_table_pkey for table
hstore_partial_index_table
CREATE TABLE

postg...@localhost test=#
CREATE INDEX i_hstore_partial_index_table__h_a ON
hstore_partial_index_table (id) WHERE h ? 'a';
CREATE INDEX

postg...@localhost test=#
EXPLAIN SELECT * FROM hstore_partial_index_table WHERE h ? 'a';
 QUERY PLAN
------------------------------------------------------------------
 Index Scan using i_hstore_partial_index_table__h_a on
hstore_partial_index_table  (cost=0.00..8.27 rows=1 width=36)
(1 row)
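Since the original question was about plain equality on one key's value, another hedged option is a btree expression index over that key (a sketch; the index name is an assumption, kvp/mytable are from the thread):

```sql
-- a btree index over the extracted value supports ordinary equality tests
CREATE INDEX i_mytable_kvp_a_value ON mytable ((kvp -> 'a'));

-- which can then serve queries such as:
SELECT id FROM mytable WHERE kvp -> 'a' = 'some-value';
```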


-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com /
Linkedin: http://ru.linkedin.com/in/grayhemp /
JID/GTalk: gray...@gmail.com / Skype: gray-hemp / ICQ: 29353802



Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Andrus

No, it is in the same order. generate_series generates indexes
1,2,3, so the result has to be in exactly the same order. You are
doing something wrong.

In my sample I used a join and a projection; this changes the order.
How can I add an order number 1,2,... to the created table?

Andrus.



[GENERAL] How to give security to pg_catalogs

2010-03-29 Thread raghavendra t
Hi All,

How can we secure the pg_catalog tables? They are freely alterable, which
causes a security problem. What I mean is that, as a superuser, we can
delete rows from the catalogs or alter the catalogs. Is there any way to
put a restriction, or any prompting, in place before anything is done to
the catalogs?
Any suggestions for this?

Regards
Raghavendra


Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Pavel Stehule
2010/3/29 Andrus kobrule...@hot.ee:
 No, it is in the same order. generate_series generates indexes
 1,2,3, so the result has to be in exactly the same order. You are
 doing something wrong.

 In my sample I used a join and a projection; this changes the order.
 How can I add an order number 1,2,... to the created table?


you cannot use join for this task

you can use some trick - using a sequences

http://www.postgresql.org/files/documentation/books/aw_pgsql/node75.html
postgres=# create temp SEQUENCE xx;
CREATE SEQUENCE
Time: 3,496 ms
postgres=# select nextval('xx'), * from gg;
 nextval | a  | b
---------+----+----
       1 | 10 | 33
       2 | 55 | 22
(2 rows)

Time: 0,926 ms
postgres=# select * from gg;
 a  | b
----+----
 10 | 33
 55 | 22
(2 rows)

regards
Pavel Stehule


 Andrus.




[GENERAL] best practice in archiving CDR data

2010-03-29 Thread Juan Backson
Hi,

I am using Postgres to store CDR data for VoIP switches.  The data size
quickly grows to a few TB.

What I would like to do is to be able to regularly archive the oldest data
so only the most recent 6 months of data is available.

All those old data will be stored in a format that can be retrieved back
either into DB table or flat files.

Does anyone know how should I go about doing that?  Is there any existing
tool that can already do that?

thanks,
jb


Re: [GENERAL] best practice in archiving CDR data

2010-03-29 Thread A. Kretschmer
In response to Juan Backson :
 Hi,
 
 I am using Postgres to store CDR data for voip switches.  The data size 
 quickly
 goes about a few TBs.  
 
 What I would like to do is to be able to regularly archive the oldest data so
 only the most recent 6 months of data is available.  
 
 All those old data will be stored in a format that can be retrieved back 
 either
 into DB table or flat files.
 
 Does anyone know how should I go about doing that?  Is there any existing tool
 that can already do that?

Sounds like table partitioning: create, for instance, a table for each
month and DROP old tables after 6 months or so.

http://www.postgresql.org/docs/current/static/ddl-partitioning.html
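An inheritance-based sketch of that suggestion (all table and column names here are assumptions, not from the thread; this is the 8.x CHECK-constraint style described in the linked docs):

```sql
-- one child table per month; the CHECK constraint enables constraint exclusion
CREATE TABLE cdr_2010_03 (
    CHECK (call_start >= DATE '2010-03-01' AND call_start < DATE '2010-04-01')
) INHERITS (cdr);

-- once a partition ages out of the 6-month window, dropping it is instant:
DROP TABLE cdr_2009_09;
```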


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] best practice in archiving CDR data

2010-03-29 Thread Juan Backson
Hi

Instead of dropping the table, I would like to archive the old table in a
format that can be read and retrieved.

Can I pg_dump each child table?

What is the best way to do it? pg_dump and turn the data into CSV and then
tar.gz it, or back it up into pg_dump's archive format?

thanks,
jb

On Mon, Mar 29, 2010 at 9:33 PM, A. Kretschmer 
andreas.kretsch...@schollglas.com wrote:

 In response to Juan Backson :
  Hi,
 
  I am using Postgres to store CDR data for voip switches.  The data size
 quickly
  goes about a few TBs.
 
  What I would like to do is to be able to regularly archive the oldest
 data so
  only the most recent 6 months of data is available.
 
  All those old data will be stored in a format that can be retrieved back
 either
  into DB table or flat files.
 
  Does anyone know how should I go about doing that?  Is there any existing
 tool
  that can already do that?

 Sounds like table partitioning: create, for instance, a table for each
 month and DROP old tables after 6 month or so.

 http://www.postgresql.org/docs/current/static/ddl-partitioning.html


 Regards, Andreas
 --
 Andreas Kretschmer
 Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
 GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99




Re: [GENERAL] [pgsql-general] looking for a powerful frontend/report generator

2010-03-29 Thread Oliver Kohll - Mailing Lists
On 29 Mar 2010, at 14:33, Clemens Eisserer linuxhi...@gmail.com wrote:

 Hi,
 
 Sorry for being a bit off-topic.
 Recently I've done some database frontends, for which I used
 java+swingset+netbeans-gui-builder.
 Compared to plain Java, development is quite fast - however, I wonder
 whether you could recommend DB frontend generators like the infamous
 Access.
 
 What I've found so far was either:
 - extremely expensive
 - not compatible with free DBs (like Postgres)
 - not powerful
 - not cross-platform
 
 Does anybody know of tools that don't have the drawbacks listed above?
 
 Thank you in advance, Clemens

Hello, there are one or two (including one I developed) under 'GUI builders' at 
pgFoundry:
http://pgfoundry.org/softwaremap/trove_list.php?form_cat=323

Regards
Oliver Kohll


oli...@agilebase.co.uk / +44(0)845 456 1810
www.agilebase.co.uk - software
www.gtwm.co.uk - company




Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Andrus

Pavel,


CREATE OR REPLACE FUNCTION unnest(anyarray)
RETURNS SETOF anyelement as $$
 SELECT $1[i] FROM generate_series(1,4) g(i)
$$ LANGUAGE sql;

pa...@postgres:5481=# select unnest(string_to_array('23,2,3,4,5',','));
 unnest
--------
 23
 2
 3
 4
(4 rows)


Result is wrong: it must contain 5 rows.
How can I make this work with any array size?

Some lines are long.
How can I wrap text to a new row at the 80th column, but only between
words?


Andrus. 





Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Pavel Stehule
2010/3/29 Andrus kobrule...@hot.ee:
 Pavel,

 CREATE OR REPLACE FUNCTION unnest(anyarray)
 RETURNS SETOF anyelement as $$
  SELECT $1[i] FROM generate_series(1,4) g(i)
 $$ LANGUAGE sql;

 pa...@postgres:5481=# select unnest(string_to_array('23,2,3,4,5',','));
 unnest
 --------
 23
 2
 3
 4
 (4 rows)

 Result is wrong: it must contain 5 rows.
 How to make this work with with any array size ?

CREATE OR REPLACE FUNCTION unnest(anyarray)
RETURNS SETOF anyelement as $$
  SELECT $1[i]
  FROM generate_series(array_lower($1,1), array_upper($1,1)) g(i)
  -- the hard-coded generate_series(1,4) was the error
$$ LANGUAGE sql;

regards
Pavel

 Some lines are long.
 How to implement word wrap to new row in 80th position but between words
 only ?

 Andrus.




Re: [GENERAL] optimizing import of large CSV file into partitioned table?

2010-03-29 Thread Dimitri Fontaine
Rick Casey caseyr...@gmail.com writes:

 So, I am wondering if there is any way to optimize this process? I have been 
 using Postgres for several years, but have never had to partition or optimize 
 it for files of this size until now. 
 Any comments or suggestions would be most welcomed from this excellent forum.

The pgloader tool will import your data as batches of N lines, you get
to say how many lines you want to consider in each transaction. Plus,
you can have more than one python thread importing your big file, either
sharing one writer and having the other threads doing the parsing and
COPY, or having N independent threads doing the reading/parsing/COPY.

  http://pgloader.projects.postgresql.org/

Hope this helps,
-- 
dim



[GENERAL] insert into test_b (select * from test_a) with different column order

2010-03-29 Thread Ole Tange
I have 2 tables that have the same column names but in different
order. Similar to this:

  create table test_a (col_a text, col_b int);
  create table test_b (col_b int, col_a text);
  insert into test_a values ('abc', 2),( 'def', 3);

I would like to do this:

  insert into test_b (select * from test_a);

This fails because the columns in test_b are not in the same order as
in test_a. For my use case, the tables may gain or lose columns over
time, or be recreated in a different order; the only thing that is
given is that the column names in test_a and test_b are always the
same, and that the datatypes of the same-named columns are the same.

Is there a general solution I can use to do the insert?

Regards,

Ole Tange



Re: [GENERAL] How to give security to pg_catalogs

2010-03-29 Thread Tom Lane
raghavendra t raagavendra@gmail.com writes:
 How can we secure the pg_catalog tables? They are freely alterable, which
 causes a security problem. What I mean is that, as a superuser, we can
 delete rows from the catalogs or alter the catalogs. Is there any way to
 put a restriction, or any prompting, in place before anything is done to
 the catalogs? Any suggestions for this?

Don't give superuser privileges to anyone who's dumb enough to try such
things on a production database.

This is much like the fact that, say, root can trivially destroy any
Unix filesystem.  You could imagine trying to put enough training wheels
on superuserdom to prevent such things, but it's not really practical
and any attempt would get in the way of many legitimate uses.

regards, tom lane



Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Tom Lane
Andrus kobrule...@hot.ee writes:
 Pavel,
 pa...@postgres:5481=# select unnest(string_to_array('23,2,3,4,5',','));
 unnest
 --------
 23
 2
 3
 4
 (4 rows)

 Result is wrong: it must contain 5 rows.

Surely that's a copy-and-paste mistake?  I get 5 rows from this example.

regards, tom lane



Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Andrus

Pavel

thank you.
How can I add word wrap to this at some column, breaking only between words?
For example, the string

'    '

with word wrap at column 12 should produce a table with two rows:

 
  

Andrus.



Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Andrus

I changed the function name and tried:

CREATE OR REPLACE FUNCTION unnest21(anyarray)
RETURNS SETOF anyelement as $$
 SELECT $1[i] FROM generate_series(1,4) g(i)
$$ LANGUAGE sql;

select unnest21(string_to_array('23,2,3,4,5',','));

In this case it returns only 4 rows.
No idea what is happening.

Andrus.

- Original Message - 
From: Tom Lane t...@sss.pgh.pa.us

To: Andrus kobrule...@hot.ee
Cc: Pavel Stehule pavel.steh...@gmail.com; 
pgsql-general@postgresql.org

Sent: Monday, March 29, 2010 6:00 PM
Subject: Re: [GENERAL] Splitting text column to multiple rows



Andrus kobrule...@hot.ee writes:

Pavel,

pa...@postgres:5481=# select unnest(string_to_array('23,2,3,4,5',','));
unnest

23
2
3
4
(4 rows)



Result is wrong: it must contain 5 rows.


Surely that's a copy-and-paste mistake?  I get 5 rows from this example.

regards, tom lane 





Re: [GENERAL] insert into test_b (select * from test_a) with different column order

2010-03-29 Thread Leif Biberg Kristensen
On Monday 29. March 2010 16.51.35 Ole Tange wrote:
 I have 2 tables that have the same column names but in different
 order. Similar to this:
 
   create table test_a (col_a text, col_b int);
   create table test_b (col_b int, col_a text);
   insert into test_a values ('abc', 2),( 'def', 3);
 
 I would like to do this:
 
   insert into test_b (select * from test_a);
 
 This fails because the columns in test_b are not in the same order as
 test_a. For my use case the tables may get more columns or have
 columns removed over time og be recreated in a different order, the
 only thing that is given is that the column names in test_a and test_b
 always are the same and that the datatype of the named columns are the
 same.
 
 Is there a general solution I can use to do the insert?

Per the SQL standard, there's no inherent order between columns. That said, 
you'll usually get the columns in the order that they were created, but 
there's no guarantee for it. Actually, when you do a SELECT * FROM ... you 
make a totally unwarranted assumption that the columns will come out in any 
specific order. So, the answer to your question is to specify the columns 
explicitly in your query, as

insert into test_b (select col_b, col_a from test_a);

regards,
-- 
Leif Biberg Kristensen
http://solumslekt.org/
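As for the "general solution" part of the question, one hedged sketch (8.4-era SQL; everything beyond the thread's table names is an assumption) is to generate the column list from the catalog, so that both sides name their columns explicitly and the physical order stops mattering:

```sql
-- build the INSERT text from test_a's catalog entry; run the generated
-- statement by hand, or via EXECUTE in a PL/pgSQL function
SELECT 'INSERT INTO test_b ('
       || array_to_string(array_agg(column_name::text), ', ')
       || ') SELECT '
       || array_to_string(array_agg(column_name::text), ', ')
       || ' FROM test_a;'
FROM (SELECT column_name
      FROM information_schema.columns
      WHERE table_name = 'test_a'
      ORDER BY ordinal_position) AS cols;
```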



Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Andrus

This returns 5 rows:

CREATE OR REPLACE FUNCTION unnest(anyarray)
RETURNS SETOF anyelement as $$
 SELECT $1[i] FROM generate_series(1,4) g(i)
$$ LANGUAGE sql;

select unnest(string_to_array('23,2,3,4,5',','));

simply changing the name returns 4 rows:

CREATE OR REPLACE FUNCTION unnest21(anyarray)
RETURNS SETOF anyelement as $$
 SELECT $1[i] FROM generate_series(1,4) g(i)
$$ LANGUAGE sql;

select unnest21(string_to_array('23,2,3,4,5',','));

Andrus.

- Original Message - 
From: Tom Lane t...@sss.pgh.pa.us

To: Andrus kobrule...@hot.ee
Cc: Pavel Stehule pavel.steh...@gmail.com; 
pgsql-general@postgresql.org

Sent: Monday, March 29, 2010 6:00 PM
Subject: Re: [GENERAL] Splitting text column to multiple rows



Andrus kobrule...@hot.ee writes:

Pavel,

pa...@postgres:5481=# select unnest(string_to_array('23,2,3,4,5',','));
unnest

23
2
3
4
(4 rows)



Result is wrong: it must contain 5 rows.


Surely that's a copy-and-paste mistake?  I get 5 rows from this example.

regards, tom lane 





Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Pavel Stehule
2010/3/29 Andrus kobrule...@hot.ee:
 Pavel

 thank you.
 How to add word wrap to this at some column between words ?
 For example string
 '    '

 if word wrap is at column 12 should produce table with two rows:

  
   


You can't do it that way; this works only for one column.

regards
Pavel

 Andrus.




Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Tom Lane
Andrus kobrule...@hot.ee writes:
 I changed unction name and tried:
 CREATE OR REPLACE FUNCTION unnest21(anyarray)
 RETURNS SETOF anyelement as $$
   SELECT $1[i] FROM generate_series(1,4) g(i)
 $$ LANGUAGE sql;

 select unnest21(string_to_array('23,2,3,4,5',','));

 In this case it returns only 4 rows.
 No idea what is happening.

Well, the generate_series call is wrong for this use ...

I think if it appeared to work before it was because the built-in
unnest() function was capturing the call.

regards, tom lane
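Tom's point about the built-in capturing the call can be checked by schema-qualifying the name (a sketch; pg_catalog.unnest exists from 8.4 on, and public is assumed as the schema of the user-defined copy):

```sql
-- resolves unambiguously to the built-in, regardless of search_path
SELECT pg_catalog.unnest(string_to_array('23,2,3,4,5', ','));  -- 5 rows

-- and to the user-defined copy with the hard-coded generate_series(1,4)
SELECT public.unnest21(string_to_array('23,2,3,4,5', ','));    -- 4 rows, the bug
```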



Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Pavel Stehule
2010/3/29 Tom Lane t...@sss.pgh.pa.us:
 Andrus kobrule...@hot.ee writes:
 I changed unction name and tried:
 CREATE OR REPLACE FUNCTION unnest21(anyarray)
 RETURNS SETOF anyelement as $$
   SELECT $1[i] FROM generate_series(1,4) g(i)
 $$ LANGUAGE sql;

 select unnest21(string_to_array('23,2,3,4,5',','));

 In this case it returns only 4 rows.
 No idea what is happening.

 Well, the generate_series call is wrong for this use ...

 I think if it appeared to work before it was because the built-in
 unnest() function was capturing the call.

He uses 8.1; the bug is the hard-coded generate_series(1,4).

Pavel

                        regards, tom lane




Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Pavel Stehule
2010/3/29 Andrus kobrule...@hot.ee:
 This returns 5 rows:

 CREATE OR REPLACE FUNCTION unnest(anyarray)
 RETURNS SETOF anyelement as $$
  SELECT $1[i] FROM generate_series(1,4) g(i) --- 4 is constant 
 $$ LANGUAGE sql;

 select unnest(string_to_array('23,2,3,4,5',','));

 simply changing name returns 4 rows:

Sure: the original buggy function is still there.

Pavel


 CREATE OR REPLACE FUNCTION unnest21(anyarray)
 RETURNS SETOF anyelement as $$
  SELECT $1[i] FROM generate_series(1,4) g(i)
 $$ LANGUAGE sql;

 select unnest21(string_to_array('23,2,3,4,5',','));

 Andrus.

 - Original Message - From: Tom Lane t...@sss.pgh.pa.us
 To: Andrus kobrule...@hot.ee
 Cc: Pavel Stehule pavel.steh...@gmail.com;
 pgsql-general@postgresql.org
 Sent: Monday, March 29, 2010 6:00 PM
 Subject: Re: [GENERAL] Splitting text column to multiple rows


 Andrus kobrule...@hot.ee writes:

 Pavel,

 pa...@postgres:5481=# select unnest(string_to_array('23,2,3,4,5',','));
 unnest
 
 23
 2
 3
 4
 (4 rows)

 Result is wrong: it must contain 5 rows.

 Surely that's a copy-and-paste mistake?  I get 5 rows from this example.

 regards, tom lane





Re: [GENERAL] best practice in archiving CDR data

2010-03-29 Thread Edgardo Portal
On 2010-03-29, Juan Backson juanback...@gmail.com wrote:

 Hi,

 I am using Postgres to store CDR data for voip switches.  The data size
 quickly goes about a few TBs.

 What I would like to do is to be able to regularly archive the oldest data
 so only the most recent 6 months of data is available.

 All those old data will be stored in a format that can be retrieved back
 either into DB table or flat files.

 Does anyone know how should I go about doing that?  Is there any existing
 tool that can already do that?

 thanks,
 jb

FWIW, I partition by ISO week, use INSERT RULEs to route CDRs to the correct
partition (keeping about 3 partitions open to new CDRs at any one time),
use pg_dump to archive partition tables to off-line storage, and
DROP TABLE to keep the main DBs to about 40 weeks of data. I used
to use monthly partitioning, but the file sizes got a bit awkward
to deal with.

When I need to restore old CDRs (e.g. to service a subpoena) I
use pg_restore to load the needed CDRs to a throwaway database
and process as necessary.
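
For readers wanting a starting point, here is a rough sketch of the weekly-partition approach described above (table and column names are assumptions, and the INSERT routing is omitted):

```sql
-- Parent table plus one child partition per ISO week.
CREATE TABLE cdr (
    call_id    bigint,
    call_start timestamptz NOT NULL,
    payload    text
);

CREATE TABLE cdr_2010w13 (
    CHECK (call_start >= DATE '2010-03-29'
       AND call_start <  DATE '2010-04-05')
) INHERITS (cdr);

-- Once a partition ages out, archive it to off-line storage, e.g.:
--   pg_dump -t cdr_2009w40 mydb > cdr_2009w40.dump
-- and then drop it inside the database:
-- DROP TABLE cdr_2009w40;
```

Restoring old CDRs is then a restore of that single dump into a throwaway database, as described above.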



Re: [GENERAL] optimizing import of large CSV file into partitioned table?

2010-03-29 Thread Rick Casey
Thanks Dim; I was not aware of pgloader. This, and the other suggestions,
have helped a lot; thanks everyone.

--rick

On Mon, Mar 29, 2010 at 7:41 AM, Dimitri Fontaine dfonta...@hi-media.comwrote:

 Rick Casey caseyr...@gmail.com writes:

  So, I am wondering if there is any to optimize this process? I have been
 using Postgres for several years, but have never had to partition or
 optimize it for files
  of this size until now.
  Any comments or suggestions would be most welcomed from this excellent
 forum.

 The pgloader tool will import your data as batches of N lines, you get
 to say how many lines you want to consider in each transaction. Plus,
 you can have more than one python thread importing your big file, either
 sharing one writer and having the other threads doing the parsing and
 COPY, or having N independent threads doing the reading/parsing/COPY.

  http://pgloader.projects.postgresql.org/

 Hope this helps,
 --
 dim




-- 

Rick Casey :: caseyr...@gmail.com :: 303.345.8893


Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 2010/3/29 Tom Lane t...@sss.pgh.pa.us:
 I think if it appeared to work before it was because the built-in
 unnest() function was capturing the call.

 he uses 8.1. the bug is in generate_series(1,4)

If renaming the function makes it appear to work differently,
then there is another function of similar name in there somewhere.
I'm betting the server is not 8.1 after all.

regards, tom lane



[GENERAL] How long will the query take

2010-03-29 Thread John Gage
I ran a query out of pgAdmin, and (as I expected) it took a long  
time.  In fact, I did not let it finish.  I stopped it after a little  
over an hour.


I'm using 8.4.2 on a Mac with a 2.4GHz processor and 2GB of RAM.

My question is: is there a way to tell how close the query is to being  
finished.  It would be a great pity if the query would have finished  
in the 10 seconds after I quit it, but I had no way of telling.


As a postscript, I would add that the query was undoubtedly too  
ambitious.  I have a reduced set version which I will run shortly.   
But I am still curious to know if there is a way to tell how much time  
is left.


Thanks,

John



[GENERAL] help

2010-03-29 Thread 赤松 建司
help



Re: ***SPAM*** Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Andrus

If renaming the function makes it appear to work differently,
then there is another function of similar name in there somewhere.
I'm betting the server is not 8.1 after all.


I'm using

PostgreSQL 8.4.1, compiled by Visual C++ build 1400, 32-bit

Andrus.



[GENERAL] Dblink vs calling a function that returns void

2010-03-29 Thread Boszormenyi Zoltan
Hi,

I need to call a function via dblink that returns a void, i.e.
technically nothing.

=# select public.dblink_exec('import', 'SELECT
import.add_one_word(''word'', true)');
ERROR:  statement returning results not allowed

=# select * from public.dblink('import', 'SELECT
import.add_one_word(''word'', true)') as x();
ERROR:  syntax error at or near ")"
LINE 1: ...ort', 'SELECT import.add_one_word(''iphon'', true)') as x();

=# select public.dblink('import', 'SELECT import.add_one_word(''word'',
true)') as x(x void);
ERROR:  syntax error at or near "("
LINE 1: ...'SELECT import.add_one_word(''iphon'', true)') as x(x void);

And, although RETURNS VOID is indistinguishable from returning a NULL:

=# select * from public.dblink('import', 'SELECT
import.add_one_word(''word'', true)') as x(x int);
ERROR:  invalid input syntax for integer: ""

So, how can I do it? Besides modifying the interface of the function,
say RETURNS int4 and using PG_RETURN_NULL()?

Best regards,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil. (Matthew 5:37) - basics of digital technology.
May your kingdom come - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
http://www.postgresql.at/




[GENERAL]

2010-03-29 Thread 赤松 建司
bye
end



Re: ***SPAM*** Re: [GENERAL] Splitting text column to multiple rows

2010-03-29 Thread Pavel Stehule
2010/3/29 Andrus kobrule...@hot.ee:
 If renaming the function makes it appear to work differently,
 then there is another function of similar name in there somewhere.
 I'm betting the server is not 8.1 after all.

 I'm using

 PostgreSQL 8.4.1, compiled by Visual C++ build 1400, 32-bit


oh sorry, you asked about 8.1 yesterday

then you don't need a custom unnest function.

regards
Pavel Stehule

 Andrus.




Re: [GENERAL] best practice in archiving CDR data

2010-03-29 Thread David Fetter
On Mon, Mar 29, 2010 at 02:08:23PM +, Edgardo Portal wrote:
 On 2010-03-29, Juan Backson juanback...@gmail.com wrote:
 
  Hi,
 
  I am using Postgres to store CDR data for voip switches.  The data
  size quickly goes about a few TBs.
 
  What I would like to do is to be able to regularly archive the
  oldest data so only the most recent 6 months of data is available.
 
  All those old data will be stored in a format that can be
  retrieved back either into DB table or flat files.
 
  Does anyone know how should I go about doing that?  Is there any
  existing tool that can already do that?
 
 
 FWIW, I partition by ISO week, use INSERT RULEs to route CDRs

Just generally, triggers are much better than RULEs for this kind of
thing.  The underlying functions can be made quite efficient.
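
A sketch of what the trigger-based alternative might look like (partition and column names follow a hypothetical weekly scheme, not anything from the original posts):

```sql
-- Route inserts on the parent table to the current week's partition.
CREATE OR REPLACE FUNCTION cdr_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.call_start >= DATE '2010-03-29'
       AND NEW.call_start < DATE '2010-04-05' THEN
        INSERT INTO cdr_2010w13 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for call_start %', NEW.call_start;
    END IF;
    RETURN NULL;  -- row already routed; suppress insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cdr_route
    BEFORE INSERT ON cdr
    FOR EACH ROW EXECUTE PROCEDURE cdr_insert_router();
```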

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] Splitting text column tomultiple rows

2010-03-29 Thread Andrus

 oh sorry, you asked about 8.1 yesterday


I'm developing in 8.4 but customers have servers starting at 8.1,
so I asked for a solution that works from 8.1 onwards.
Hopefully renaming unnest to something else will work on all servers.


Andrus.



Re: [GENERAL] How long will the query take

2010-03-29 Thread A. Kretschmer
In response to John Gage :
 I ran a query out of pgAdmin, and (as I expected) it took a long  
 time.  In fact, I did not let it finish.  I stopped it after a little  
 over an hour.
 
 I'm using 8.4.2 on a Mac with a 2.4GHz processor and 2GB of RAM.
 
 My question is: is there a way to tell how close the query is to being  
 finished.  It would be a great pity if the query would have finished  
 in the 10 seconds after I quit it, but I had no way of telling.
 
 As a postscript, I would add that the query was undoubtedly too  
 ambitious.  I have a reduced set version which I will run shortly.   
 But I am still curious to know if there is a way to tell how much time  
 is left.

No, not really. But you can (and should) run EXPLAIN your query to
obtain the execution plan for that query, und you can show us this plan
(and the table-definition for all included tables). Maybe someone is able
to tell you what you can do to speed up your query.

And yes, have you tuned your postgresql.conf?


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] How long will the query take

2010-03-29 Thread Bill Moran
In response to A. Kretschmer andreas.kretsch...@schollglas.com:

 In response to John Gage :
  I ran a query out of pgAdmin, and (as I expected) it took a long  
  time.  In fact, I did not let it finish.  I stopped it after a little  
  over an hour.
  
  I'm using 8.4.2 on a Mac with a 2.4GHz processor and 2GB of RAM.
  
  My question is: is there a way to tell how close the query is to being  
  finished.  It would be a great pity if the query would have finished  
  in the 10 seconds after I quit it, but I had no way of telling.
  
  As a postscript, I would add that the query was undoubtedly too  
  ambitious.  I have a reduced set version which I will run shortly.   
  But I am still curious to know if there is a way to tell how much time  
  is left.
 
 No, not really. But you can (and should) run EXPLAIN your query to
 obtain the execution plan for that query, und you can show us this plan
 (and the table-definition for all included tables). Maybe someone is able
 to tell you what you can do to speed up your query.

To piggyback on this ... EXPLAIN _is_ the way to know how long your
query will take, but keep in mind it's only an _estimate_.

Given that, in my experience EXPLAIN is pretty accurate 90% of the
time, as long as you analyze frequently enough.
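
To make the distinction concrete: plain EXPLAIN only prints the planner's estimate in abstract cost units, while EXPLAIN ANALYZE actually executes the query and reports measured times per plan node, so it takes as long as the query itself (the table name is illustrative):

```sql
-- Estimate only; returns immediately without running the query:
EXPLAIN SELECT * FROM foo;

-- Runs the query and reports actual startup/total time per node:
EXPLAIN ANALYZE SELECT * FROM foo;
```

Comparing the reported cost against the measured time on a few representative queries is one way to calibrate a rough cost-to-time ratio for a given machine.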

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] help

2010-03-29 Thread Raymond O'Donnell
On 29/03/2010 15:43, 赤松 建司 wrote:
 help

Surely. What with? :-)

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Splitting text column tomultiple rows

2010-03-29 Thread Pavel Stehule
2010/3/29 Andrus kobrule...@hot.ee:
 oh sorry, you asked about 8.1 yesterday

 I'm developing in 8.4 but customers have servers starting at 8.1,
 so I asked for a solution that works from 8.1 onwards. Hopefully renaming
 unnest to something else will work on all servers.


ok. It is better to describe your environment in more detail.

Regards
Pavel

 Andrus.




Re: [GENERAL] Dblink vs calling a function that returns void

2010-03-29 Thread Tom Lane
Boszormenyi Zoltan z...@cybertec.at writes:
 I need to call a function via dblink that returns a void, i.e.
 technically nothing.

You're overthinking the problem.  Imagine void is just a datatype
(which it is...)  This should work:

select * from public.dblink('import', 'SELECT import.add_one_word(''word'', 
true)') as x(x void);

regards, tom lane



Re: [GENERAL] How long will the query take

2010-03-29 Thread Andreas Kretschmer
Bill Moran wmo...@potentialtech.com wrote:

  No, not really. But you can (and should) run EXPLAIN your query to
  obtain the execution plan for that query, und you can show us this plan
  (and the table-definition for all included tables). Maybe someone is able
  to tell you what you can do to speed up your query.
 
 To piggyback on this ... EXPLAIN _is_ the way to know how long your
 query will take, but keep in mind it's only an _estimate_.
 
 Given that, in my experience EXPLAIN is pretty accurate 90% of the
 time, as long as you analyze frequently enough.

As far as i know, EXPLAIN _can't_ say how long a query will take, it
returns only a COST, not a TIME.

Or can you tell me how long this query will be take?

test=# explain select * from foo;
  QUERY PLAN
---
 Seq Scan on foo  (cost=0.00..34.00 rows=2400 width=4)
(1 Zeile)


Okay, it's a really little table and a really simple plan ... but
imagine, i have a table with 100 millions rows and a) a slow disk and b)
a fast SSD.

You can't say how long the query will runs, even an estimate, okay?


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



[GENERAL] How to implement word wrap

2010-03-29 Thread Andrus

A database column contains a large string without line feeds.
How can I split it with word wrap between words?
I tried to implement word wrap using

create temp table test (line char(7));
insert into test select repeat('aa ',10);
select * from test;

Expected result is that table test contains multiple rows and every row 
contains two words:


aa aa

Instead I got a "string too long" exception.

How can word wrap be implemented in PostgreSQL if the string contains words
of any size separated by spaces?


Andrus. 





[GENERAL] Processor speed relative to postgres transactions per second

2010-03-29 Thread Chris Barnes

 

We have two camps that disagree on whether the speed of the CPUs is
relevant to the number of transactions per second that Postgres can perform.

I am of the opinion that if we throw faster processors at the database
machine, there will be better performance.

Just as with faster drives and controllers, there must be some improvement
from the faster processor.

Is there anything to support this, a document or someone's personal experience?

Chris Barnes


Re: [GENERAL] Warm Standby Setup Documentation

2010-03-29 Thread Ogden

On Mar 28, 2010, at 7:45 PM, Yar Tykhiy wrote:

 On Fri, Mar 26, 2010 at 01:35:43PM -0500, Ogden wrote:
 On Mar 26, 2010, at 1:32 PM, Greg Smith wrote:
 
 Bryan Murphy wrote:
 The one thing you should be aware of is that when you fail over, your 
 spare has no spares.  I have not found a way around this problem yet.  So, 
 when you fail over, there is a window where you have no backups while 
 you're building the new spares.  This can be pretty nerve wracking if your 
 database is like ours and it takes 3-6 hours to bring a new spare online 
 from scratch.
 
 If there's another server around, you can have your archive_command on the 
 master ship to two systems, then use the second one as a way to jump-start 
 this whole process.  After fail-over, just start shipping from the new 
 primary to that 3rd server, now the replacement standby, and sync any files 
 it doesn't have.  Then switch it into recovery.  Much faster than doing a 
 new base backup from the standby on larger systems.
 
 How is it possible to use the archive_command to ship to different ones?
 
 archive_command = 'rsync -a %p postg...@192.168.x.x:/usr/local/pgsql/walfiles/%f </dev/null'
 archive_timeout = 120   # force a logfile segment switch after this
  
 I suppose you can put multiple commands there then?
 
 You can always wrap as many commands as you like in a script.
 However, there is a pitfall to watch out for when shipping WALs to
 multiple standby servers.  Namely your script has to handle failures
 of individual WAL shipping targets so that a single target going down
 doesn't disrupt operation of the whole cluster.  Please see
 http://archives.postgresql.org/pgsql-general/2009-10/msg00590.php
 for discussion.


Is it as simple as doing this:

archive_command = '/var/lib/pgsql/data/warm_standby.sh %p %f </dev/null'

Where /var/lib/pgsql/data/warm_standby.sh  is:

#!/bin/sh

rsync -a $1 postg...@192.168.1.26:/usr/local/pgsql/walfiles/$2
rsync -a $1 postg...@192.168.1.27:/usr/local/pgsql/walfiles/$2
...

For each warm standby slave?

Is it safe to do it this way? I wish there were some scripts out there that I 
can see as examples.

Thank you

Ogden









Re: [GENERAL] How to implement word wrap

2010-03-29 Thread Thom Brown
2010/3/29 Andrus kobrule...@hot.ee

 Database column contains large string without line feeds.
 How to split it with word wrap between words ?
 I tried to implement word wrap using

 create temp table test (line char(7));
 insert into test select repeat('aa ',10);
 select * from test;

 Expected result is that table test contains multiple rows and every row
 contains two words:

 aa aa

 Instead I got string too long exception.

 How to implement word wrap in PostgreSql if  string contains words of any
 size separated by spaces?

 Andrus.

 --


No, what you've done is asked it to insert 'aa aa aa aa aa aa aa aa aa aa'.

I suspect you're looking for:

INSERT INTO test SELECT 'aa' FROM generate_series(1,10);

Regards

Thom


Re: [GENERAL] How to implement word wrap

2010-03-29 Thread Andreas Kretschmer
Andrus kobrule...@hot.ee wrote:

 Database column contains large string without line feeds.
 How to split it with word wrap between words ?
 I tried to implement word wrap using

 create temp table test (line char(7));
 insert into test select repeat('aa ',10);
 select * from test;

 Expected result is that table test contains multiple rows and every row  
 contains two words:

 aa aa

 Instead I got string too long exception.

 How to implement word wrap in PostgreSql if  string contains words of any 
 size separated by spaces?

I think you have to write a function (plpgsql, plperl, ...), counting
chars per line and change space to newline if no more space in the line.
That's not really a SQL-problem ...

Maybe there are some perl-modules for that available, i don't know.
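
For what it's worth, a rough sketch of such a function in PL/pgSQL (it assumes words are separated by single spaces, no word exceeds the target width, and the built-in unnest() of 8.4 is available):

```sql
CREATE OR REPLACE FUNCTION word_wrap(src text, width int)
RETURNS SETOF text AS $$
DECLARE
    w    text;
    line text := '';
BEGIN
    FOR w IN SELECT unnest(string_to_array(src, ' ')) LOOP
        CONTINUE WHEN w = '';              -- skip empty tokens
        IF line = '' THEN
            line := w;
        ELSIF length(line) + 1 + length(w) <= width THEN
            line := line || ' ' || w;      -- word still fits on this line
        ELSE
            RETURN NEXT line;              -- line full; start a new one
            line := w;
        END IF;
    END LOOP;
    IF line <> '' THEN
        RETURN NEXT line;
    END IF;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- SELECT word_wrap(repeat('aa ', 10), 7);  -- rows of 'aa aa'
```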



Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] How to implement word wrap

2010-03-29 Thread Thom Brown
On 29 March 2010 17:42, Thom Brown thombr...@gmail.com wrote:

 2010/3/29 Andrus kobrule...@hot.ee

 Database column contains large string without line feeds.
 How to split it with word wrap between words ?
 I tried to implement word wrap using

 create temp table test (line char(7));
 insert into test select repeat('aa ',10);
 select * from test;

 Expected result is that table test contains multiple rows and every row
 contains two words:

 aa aa

 Instead I got string too long exception.

 How to implement word wrap in PostgreSql if  string contains words of any
 size separated by spaces?

 Andrus.

 --


 No, what you've done is asked it to insert 'aa aa aa aa aa aa aa aa aa aa'.

 I suspect you're looking for:

 INSERT INTO test SELECT 'aa' FROM generate_series(1,10);

 Regards

 Thom


Just realised that's not what you're after, but my first point still stands.

Thom


Re: [GENERAL] Processor speed relative to postgres transactions per second

2010-03-29 Thread Steve Atkins

On Mar 29, 2010, at 9:42 AM, Chris Barnes wrote:

  
 We have two camps that think that the speed of cpu processors is/aren't 
 relative to the number of transactions that postgres that can performed per 
 second.
  
 I am of the opinion that is we throw the faster processors at the database 
 machine, there will be better performance.
  
 Just like faster drives and controllers, there must be some improvement over 
 the other processor.
  
 Is there anything to support this, a document or someone's personal 
 experience?
  

There will always be a bottleneck. If your query speed is limited by the time 
it takes for the drives to seek, then you can throw as much CPU at the problem 
as you like and nothing will change. If your query speed is limited by the time 
it takes to read data from memory, a faster CPU will only help if it has a 
faster memory bus. If you're limited by complex or slow functions in the 
database then a faster CPU is what you need.

For larger databases, IO speed is the bottleneck more often than not. In those 
cases throwing memory, better disk controllers and faster / more drives at them 
will improve things. More CPU will not.

Also, the price/speed curve for CPUs is not pretty at the higher end. You can 
get a lot of RAM or disk for the price difference between the fastest and next 
fastest CPU for any given system.

Cheers,
  Steve




Re: [GENERAL] How long will the query take

2010-03-29 Thread Bill Moran
In response to Andreas Kretschmer akretsch...@spamfence.net:

 Bill Moran wmo...@potentialtech.com wrote:
 
   No, not really. But you can (and should) run EXPLAIN your query to
   obtain the execution plan for that query, und you can show us this plan
   (and the table-definition for all included tables). Maybe someone is able
   to tell you what you can do to speed up your query.
  
  To piggyback on this ... EXPLAIN _is_ the way to know how long your
  query will take, but keep in mind it's only an _estimate_.
  
  Given that, in my experience EXPLAIN is pretty accurate 90% of the
  time, as long as you analyze frequently enough.
 
 As far as i know, EXPLAIN _can't_ say how long a query will take, it
 returns only a COST, not a TIME.

Correct.

 Or can you tell me how long this query will be take?
 
 test=# explain select * from foo;
   QUERY PLAN
 ---
  Seq Scan on foo  (cost=0.00..34.00 rows=2400 width=4)
 (1 Zeile)

EXPLAIN ANALYZE a few other queries of various complexity, and I'll be
able to translate that estimate to a time.

No, it's not 100% accurate, but (as I stated earlier) in my experience,
it gives you a pretty good idea.

 Okay, it's a really little table and a really simple plan ... but
 imagine, i have a table with 100 millions rows and a) a slow disk and b)
 a fast SSD.

You're absolutely correct, and that's something that I should not have
omitted from my previous response.  Translating the cost into a time
estimate is highly hardware-dependent, and not 100% accurate, so run
some tests to get an idea of what your cost-to-time ratio is, and take
those cost estimates with a grain of salt.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Large index operation crashes postgres

2010-03-29 Thread Frans Hals
Paul,

I have checked the different kinds of data in the table for their memory usage.
ST_LineString is the one that's leaking; the other types complete
indexing without leakage.
Update to Geos 3.2.0 didn't improve the operation.

Kind regards
Frans

2010/3/28 Paul Ramsey pram...@cleverelephant.ca:
 MIght be random, might be a clue, we'll see. So it looks like much of
 the table is two-point lines and points.

 P

 On Sat, Mar 27, 2010 at 1:16 PM, Frans Hals fha...@googlemail.com wrote:

  ST_Point        |              | 20648939
  ST_MultiPolygon |              |     6188
  ST_Polygon      |              |  8054680
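
For reference, a per-type count like the one quoted above can be produced with something along these lines (the table and geometry column names are assumptions):

```sql
SELECT ST_GeometryType(the_geom) AS gtype, count(*)
FROM some_table
GROUP BY gtype
ORDER BY count(*) DESC;
```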



Re: [GENERAL] How to perform text merge

2010-03-29 Thread Harald Fuchs
In article 609bf3ce079445569fc0d047a5c81...@andrusnotebook,
Andrus kobrule...@hot.ee writes:

 Database column contains merge data in text column.
 Expressions are between << and >> separators.
 How to replace them with database values ?

 For example, code below should return:

 Hello Tom Lane!

 How to implement textmerge procedure or other idea ?

 Andrus.

 create temp table person ( firstname text, lastname text ) on commit drop;
 insert into person values ('Tom', 'Lane');
 create temp table mergedata ( template text ) on commit drop;
 insert into mergedata values ('Hello <<firstname||'' ''||lastname>>!');

 select textmerge(template,'select * from person') from mergedata;

Here's a quick shot:

CREATE FUNCTION textmerge(tpl text, query text) RETURNS text AS $$
DECLARE
  pref text = substring(tpl FROM '(.*)<<');
  expr text = substring(tpl FROM '<<(.+)>>');
  post text = substring(tpl FROM '>>(.*)');
  tmp1 text = regexp_replace(query, E'\\*', expr);
  tmp2 text;
BEGIN
  EXECUTE tmp1 INTO tmp2;
  RETURN pref || tmp2 || post;
END;
$$ LANGUAGE plpgsql IMMUTABLE;




[GENERAL] PostgreSQL on Windows

2010-03-29 Thread Mark Vantzelfde
I am running 8.4 on a Windows Vista system. The software was installed using
the auto-install process. PostgreSQL starts successfully as a service when
the system boots. The running postgres* processes can be verified via Task
Manager. I can run psql from the command prompt. I have the PGDATA env
variable set to the folder where the PostgreSQL data sits. Running the
pg_ctl status command from the command prompt returns "pg_ctl: no server
running". Any ideas why pg_ctl doesn't know about the running service?

Mark


Re: [GENERAL] Processor speed relative to postgres transactions per second

2010-03-29 Thread Scott Marlowe
On Mon, Mar 29, 2010 at 11:00 AM, Steve Atkins st...@blighty.com wrote:
 For larger databases, IO speed is the bottleneck more often than not. In 
 those cases throwing memory, better disk controllers and faster / more drives 
 at them will improve things. More CPU will not.

We're in the situation where we are CPU bound on a dual 4 core 2.1GHz
opteron, and IO wait is never more than one CPU's worth (12%).  That's
on the slony source server.  The destination servers are even more CPU
bound, with little or no IO wait.

The RAID array is a RAID-10 with 12 drives, and a RAID-1 with two for
pg_xlog.  The RAID-1 pair is running at about 30 megabytes per second
written to it continuously.  It can handle sequential throughput to
about 60 megabytes per second.

Of course, if we put more CPU horsepower on that machine, (mobo
replacement considered) then I'm sure we'd start getting IO bound, and
so forth.

 Also, the price/speed curve for CPUs is not pretty at the higher end. You can 
 get a lot of RAM or disk for the price difference between the fastest and 
 next fastest CPU for any given system.

Agreed.  The curve really starts to get ugly when you need more than 2
sockets.  Dual socket 6 and 8 core cpus are now out, and not that
expensive.  CPUs that can handle being in a 4 to 8 socket machine are
two to three times as much for the same basic speed.  At that point
it's a good idea to consider partitioning your data out into some
logical manner across multiple machines.



Re: [GENERAL] PostgreSQL on Windows

2010-03-29 Thread Raymond O'Donnell
On 29/03/2010 18:38, Mark Vantzelfde wrote:
 I am running 8.4 on a Windows Vista system. The software was installed using
 the auto-install process. PostgreSQL starts successfully as a service when
 the system boots. The running postgres* processes can be verified via Task
 Manager. I can run psql from the command prompt. I have the PGDATA env
 variable set to the folder where the PostgreSQL data sits. Running the
 pg_ctl status command from the command prompt returns pg_ctl: no server
 running. Ideas why pg_ctl doesn't know about the running services?

Have you tried running it with the -D option instead of the env
variable? No idea if it will make any difference...just a wild guess.

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] How to give security to pg_catalogs

2010-03-29 Thread raghavendra t
Hi Tom,

Thank you for the update


 This is much like the fact that, say, root can trivially destroy any
 Unix filesystem.  You could imagine trying to put enough training wheels
 on superuserdom to prevent such things, but it's not really practical
 and any attempt would get in the way of many legitimate uses.


Can we add any prompt on the pg_catalog tables when an operation like
altering or deleting them is done manually?

Regards
Raghavendra

On Mon, Mar 29, 2010 at 8:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:

  raghavendra t raagavendra@gmail.com writes:
  How to give security to the pg_catalogs, as these are freely alterable
 and
  cause some security problem. Here i mean to say, as a superuser we can
  delete the rows from a catalogs are alter the catalogs, is there anyway
 to
  put restriction or any promting before doing anything to catalogs.
  Any suggestions for this ?

 Don't give superuser privileges to anyone who's dumb enough to try such
 things on a production database.

 This is much like the fact that, say, root can trivially destroy any
 Unix filesystem.  You could imagine trying to put enough training wheels
 on superuserdom to prevent such things, but it's not really practical
 and any attempt would get in the way of many legitimate uses.

regards, tom lane



Re: [GENERAL] PostgreSQL on Windows

2010-03-29 Thread Mark Vantzelfde
Same result.

On Mon, Mar 29, 2010 at 1:51 PM, Raymond O'Donnell r...@iol.ie wrote:

 On 29/03/2010 18:38, Mark Vantzelfde wrote:
  I am running 8.4 on a Windows Vista system. The software was installed
 using
  the auto-install process. PostgreSQL starts successfully as a service
 when
  the system boots. The running postgres* processes can be verified via
 Task
  Manager. I can run psql from the command prompt. I have the PGDATA env
  variable set to the folder where the PostgreSQL data sits. Running the
  pg_ctl status command from the command prompt returns pg_ctl: no server
  running. Ideas why pg_ctl doesn't know about the running services?

 Have you tried running it with the -D option instead of the env
 variable? No idea if it will make any difference...just a wild guess.

 Ray.

 --
 Raymond O'Donnell :: Galway :: Ireland
 r...@iol.ie




-- 
Mark Vantzelfde
NetMasters, Inc.


Re: [GENERAL] How long will the query take

2010-03-29 Thread John Gage
I will report back on this and attempt to give the particulars.  It  
will take 24 hours due to other time commitments.


Thank you very much for explaining :) this to me.

When I used only the first  10,000 rows of the 100+ thousand rows in  
the original table (of two tables) I was working with, I got the  
result I wanted in 10 minutes, which was really probably 80% of what I  
wanted.  Nevertheless, I do not want to fly blind in the future.


John


On Mar 29, 2010, at 7:10 PM, Bill Moran wrote:


In response to Andreas Kretschmer akretsch...@spamfence.net:

> Bill Moran wmo...@potentialtech.com wrote:
>
> > > No, not really. But you can (and should) run EXPLAIN <your query> to
> > > obtain the execution plan for that query, and you can show us this plan
> > > (and the table definition for all included tables). Maybe someone is able
> > > to tell you what you can do to speed up your query.
> >
> > To piggyback on this ... EXPLAIN _is_ the way to know how long your
> > query will take, but keep in mind it's only an _estimate_.
> >
> > Given that, in my experience EXPLAIN is pretty accurate 90% of the
> > time, as long as you analyze frequently enough.
>
> As far as i know, EXPLAIN _can't_ say how long a query will take, it
> returns only a COST, not a TIME.

Correct.

> Or can you tell me how long this query will take?
>
> test=# explain select * from foo;
>  QUERY PLAN
> ---
>  Seq Scan on foo  (cost=0.00..34.00 rows=2400 width=4)
> (1 Zeile)

EXPLAIN ANALYZE a few other queries of various complexity, and I'll be
able to translate that estimate to a time.

No, it's not 100% accurate, but (as I stated earlier) in my experience,
it gives you a pretty good idea.

> Okay, it's a really little table and a really simple plan ... but
> imagine, i have a table with 100 millions rows and a) a slow disk and b)
> a fast SSD.


You're absolutely correct, and that's something that I should not have
omitted from my previous response.  Translating the cost into a time
estimate is highly hardware-dependent, and not 100% accurate, so run
some tests to get an idea of what your cost - time ratio is, and take
those cost estimates with a grain of salt.
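The calibration Bill describes can be sketched in a few lines: collect (cost, actual-time) pairs from EXPLAIN ANALYZE output and fit a single ms-per-cost-unit ratio. The sample numbers below are invented for illustration, not measurements:

```python
# Fit a rough cost-to-time ratio from EXPLAIN ANALYZE samples.
# Each pair is (total plan cost, actual runtime in ms); the values
# here are made up -- read real ones off the "cost=..." and
# "actual time=..." fields in your own plans.
samples = [
    (34.0, 1.2),        # tiny seq scan
    (1850.0, 55.0),     # mid-size index scan
    (92000.0, 2700.0),  # large join
]

# Least-squares line through the origin: ms ~= ratio * cost
ratio = sum(c * t for c, t in samples) / sum(c * c for c, _ in samples)

def estimate_ms(cost):
    """Rough, hardware-specific runtime estimate for a plan cost."""
    return ratio * cost

print(estimate_ms(34.0))
```

As noted above, the ratio is hardware-dependent and only roughly linear, so recalibrate it per machine and treat the result as a ballpark figure.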

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/






Re: [GENERAL] Processor speed relative to postgres transactions per second

2010-03-29 Thread Greg Smith
Recently I ran a set of tests on two systems:  a 4-core server with 5 
disks (OS + WAL + 3 for DB) on a battery backed disk controller, and a 
newer Hyper-threaded design with 4 physical cores turning into 8 virtual 
ones--but only a single disk and no RAID controller, so I had to turn 
off its write cache to get reliable database operation.  (See 
http://www.postgresql.org/docs/current/interactive/wal-reliability.html )


When running pgbench with its simple built-in SELECT-only test, on a 
tiny data set that fits in RAM, I went from a peak of 28336 TPS on the 
4-core system to a peak of 58164 TPS on the 8-core one.


On the default write-heavy test, the 4-core server peaked at 4047 TPS.  
The 8-core one peaked at 94 TPS because that's as fast as its single 
disk could commit data.


The moral is that a faster processor or more cores only buys you 
additional speed if enough of your data fits in RAM that the processor 
speed is the bottleneck.  If you're waiting on disks, a faster processor 
will just spin without any work to do.  You can't answer "will I get
more transactions per second?" without specifying what your transaction
is, and knowing what the current limiter is.
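For reference, the two workloads Greg mentions map onto pgbench roughly as follows (a sketch; the scale factor, client count, and duration are illustrative, and the flags assume an 8.4-era pgbench):

```shell
pgbench -i -s 10 pgbench         # initialize the test tables
pgbench -S -c 8 -T 60 pgbench    # built-in SELECT-only test (read peak)
pgbench -c 8 -T 60 pgbench       # default TPC-B-like write-heavy test
```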


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [GENERAL] playr (or similar tool)?

2010-03-29 Thread Greg Smith

Kevin Kempter wrote:
I need a tool that will duplicate ALL messages from one db to another 
(including begin, commit, etc).  I think Playr does (did) this but the 
myyearbook links from the past pg conference talks (the one from PG East 2009 
in particular) no longer work.
  


After poking at the myYearbook guys at this year's PG East last week, I
discovered that it (along with their other tools such as staplr and golconde)
is now at http://github.com/myYearbook/ instead of the
area51.myyearbook.com site where things used to be hosted.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [GENERAL] Side effect of synchronous_commit = off

2010-03-29 Thread Arthur Turrini
Being asynchronous means that writes to disk will be queued in the kernel for
later execution, i.e., PostgreSQL won't wait for the write confirmation, but
the writes will still occur in a serial manner.
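To address the visibility part of the question directly: synchronous_commit = off only delays the WAL flush; once Process A's COMMIT returns, Process B sees the new value. What is at risk is durability — the commit can be lost if the server crashes before the flush. The setting can also be applied per transaction (a sketch; the table and column names are illustrative):

```sql
BEGIN;
SET LOCAL synchronous_commit = off;   -- applies to this transaction only
UPDATE mytable SET flag = true WHERE id = 1;
COMMIT;  -- may return before the WAL record reaches disk
```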


On Thu, Mar 25, 2010 at 4:49 AM, Yan Cheng CHEOK ycch...@yahoo.com wrote:

 I was wondering whether setting synchronous_commit = off will have the
 following side effect.

 (1) Process A issues UPDATE command on row x, from false to true.
 (2) After that, Process B READ from row x.

 Is it possible that when Process B starts to read row x, the true value
 has not yet been flushed to the table, and hence Process B will read row
 x as false?

 If this situation can happen, is it possible for Process B to issue a
 command to flush all pending data to disk?

 Thanks and Regards
 Yan Cheng CHEOK








[GENERAL] COPY ERROR

2010-03-29 Thread paulo matadr
Hi all, 
When I try to import a big file, base.txt (~700MB), I get this:

x=# create table arquivo_serasa_marco( varchar(3000));
x=# COPY arquivo_serasa_marco from '/usr/local/pgsql/data/base.txt';
ERROR:  literal newline found in data
HINT:  Use "\n" to represent newline.
CONTEXT:  COPY arquivo_serasa_marco, line 2: 

How can I find a solution for this?
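One common cause (an assumption here, since the file itself isn't shown) is mixed or Mac/Windows-style line endings, so COPY sees a bare carriage return or newline where it doesn't expect one. Since the target table is a single varchar column, a small preprocessing pass that normalizes line endings and escapes backslashes (COPY's escape character in text format) before loading is one workaround:

```python
# Normalize a raw text dump so each input line becomes one clean COPY row:
# unify CRLF/CR line endings to LF and escape literal backslashes
# (the backslash is COPY's escape character in text format).
def normalize_for_copy(raw: bytes) -> bytes:
    text = raw.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
    lines = text.rstrip(b"\n").split(b"\n")
    return b"".join(line.replace(b"\\", b"\\\\") + b"\n" for line in lines)

# Mixed CR/LF endings and a literal backslash in the data:
sample = b"linha 1\r\nlinha \\ 2\rlinha 3\n"
print(normalize_for_copy(sample))
```

Run the cleaned file through COPY afterwards (e.g. read base.txt, write base_clean.txt, then COPY from that). This assumes one logical record per physical line; if records genuinely contain embedded newlines, CSV format with proper quoting is the better route.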

Regards

Paul


  


Re: [GENERAL] hstore equality-index performance question

2010-03-29 Thread Stefan Keller
Thank you Sergey for your reply.

I'm not sure how your partial index makes a difference. Obviously the
? operator gets indexed:

# EXPLAIN SELECT id, (kvp->'a') FROM mytable WHERE kvp ? 'a';
Index Scan using mytable_kvp_idx on mytable  (cost=0.00..8.27 rows=1 width=36)
  Index Cond: (kvp ? 'a'::text)

My question is: can one also get index support for the '->' operator?

-S.
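One answer to the '->' question, for the record: the GiST/GIN hstore opclasses only cover containment and key existence, but equality against a specific key can be served by an ordinary btree expression index (a sketch; assumes queries compare kvp->'a' to a constant):

```sql
CREATE INDEX mytable_kvp_a_idx ON mytable ((kvp -> 'a'));

-- Equality tests on that key can now use the btree index:
SELECT id FROM mytable WHERE kvp -> 'a' = 'some_value';
```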

2010/3/29 Sergey Konoplev gray...@gmail.com:
 On 29 March 2010 02:57, Stefan Keller sfkel...@gmail.com wrote:
 Documentation at F.13.3. Indexes says that hstore has index support
 for the @> and ? operators...
 => Therefore no index supports equality indexes?

 If so, then I suppose that the following (potentially slow) query
 which contains an equality test for all keys 'a' and returns all values...

  SELECT id, (kvp->'a') FROM mytable;

 ... can be accelerated nevertheless by adding the following where clause:

  SELECT id, (kvp->'a') FROM mytable WHERE kvp ? 'a';

 => Is this correct?


 May be you are looking for something like this?

 postg...@localhost test=#
 CREATE TABLE hstore_partial_index_table (id serial PRIMARY KEY, h hstore);
 NOTICE:  CREATE TABLE will create implicit sequence
 hstore_partial_index_table_id_seq for serial column
 hstore_partial_index_table.id
 NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
 hstore_partial_index_table_pkey for table
 hstore_partial_index_table
 CREATE TABLE

 postg...@localhost test=#
 CREATE INDEX i_hstore_partial_index_table__h_a ON
 hstore_partial_index_table (id) WHERE h ? 'a';
 CREATE INDEX

 postg...@localhost test=#
 EXPLAIN SELECT * FROM hstore_partial_index_table WHERE h ? 'a';
                                                     QUERY PLAN
 -
  Index Scan using i_hstore_partial_index_table__h_a on
 hstore_partial_index_table  (cost=0.00..8.27 rows=1 width=36)
 (1 row)


 --
 Sergey Konoplev

 Blog: http://gray-hemp.blogspot.com /
 Linkedin: http://ru.linkedin.com/in/grayhemp /
 JID/GTalk: gray...@gmail.com / Skype: gray-hemp / ICQ: 29353802




Re: [GENERAL] hstore equality-index performance question

2010-03-29 Thread Sergey Konoplev
 My question is: can one also get index support for the '->' operator?

I am not sure what you mean.

  SELECT id, (kvp->'a') FROM mytable;

 ... can be accelerated nevertheless by adding the following where clause:

  SELECT id, (kvp->'a') FROM mytable WHERE kvp ? 'a';

 => Is this correct?

These queries can return completely different result sets. The first
query returns all the records, with the value of kvp->'a' where kvp has
the 'a' key and NULL otherwise. The second one returns only those records
where kvp has the 'a' key.
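A two-row example makes the difference concrete (hypothetical data):

```sql
INSERT INTO mytable (kvp) VALUES ('a=>1'), ('b=>2');

SELECT id, (kvp -> 'a') FROM mytable;                  -- two rows, one NULL
SELECT id, (kvp -> 'a') FROM mytable WHERE kvp ? 'a';  -- one row
```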

-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com /
Linkedin: http://ru.linkedin.com/in/grayhemp /
JID/GTalk: gray...@gmail.com / Skype: gray-hemp / ICQ: 29353802



[GENERAL] Floating point exception in initdb

2010-03-29 Thread Vikram Patil
Hello Guys,

 

I am getting a floating-point exception while running initdb from a custom-built
PostgreSQL. I am using SUSE Linux for installing PostgreSQL.

initDB.sh: line 14: 20285 Floating point exception$pgsql_home/bin/initdb
-A password -U postgres --pwfile=pwFile -E UTF8 -D postgresql/data
/log/initDB.log

 

 --Warning. PostgreSQL database can't be initialized successfully,
please manually create schema or migrate encyclopedia DB after
initialized the 

 

The same thing works on a Red Hat machine without any issues. The following
link says:

 

Work around gcc bug that causes floating-point exception instead of
division by zero on some platforms (Tom)

 

http://www.postgresql.org/docs/8.4/static/release-8-4-1.html

 

I am not able to locate a description of this bug. Please also let me
know where I can find the bug description.

 

Thanks & Regards,

Vikram



Re: [GENERAL] Floating point exception in initdb

2010-03-29 Thread Tom Lane
Vikram Patil vpa...@actuate.com writes:
 I am getting an floating exception while running initdb from custom
 built postgresql . I am using suse linux for installing postgresql.

 initDB.sh: line 14: 20285 Floating point exception$pgsql_home/bin/initdb
 -A password -U postgres --pwfile=pwFile -E UTF8 -D postgresql/data
 /log/initDB.log

Hm, dunno what's causing that ...

 Work around gcc bug that causes floating-point exception instead of
 division by zero on some platforms (Tom)

... but I'm quite sure that patch won't fix it for you, because it was
in code that wouldn't get executed during initdb.

What compiler are you using, for what hardware?  Did you use any
nondefault configure or compiler switches?  Have you modified the
Postgres sources at all?  Which step of initdb gets the failure?
Try running that step under gdb so you can get a stack trace pointing
at the failure location.
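A sketch of that debugging step (paths and flags are assumptions; the core file may come from initdb itself or from the postgres bootstrap child it launches):

```shell
ulimit -c unlimited                       # allow core dumps
$pgsql_home/bin/initdb -A password -U postgres --pwfile=pwFile \
    -E UTF8 -D /tmp/pgtest                # reproduce the crash
gdb $pgsql_home/bin/postgres core         # then type: bt
```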

regards, tom lane



Re: [GENERAL] Floating point exception in initdb

2010-03-29 Thread Vikram Patil
Thanks for the reply, Tom. I have the source code for version 8.4.1, which
was released on 09-09-2009.

I was able to install it on Redhat machine without any issues.  But I am
facing this issue while running initdb on Suse Ent. 9 machine.( Linux
2.6.5-7.97-smp #1 SMP Fri Jul 2 14:21:59 UTC 2004 i686 i686 i386
GNU/Linux )

Source code was compiled in following environment:

Gcc : version 4.1.2 20071124 (Red Hat 4.1.2-42)
Kernel Version: 2.6.18-8.el5 #1 SMP

With Configure string:
./configure --prefix=$BIN_DIR --without-zlib --without-readline

I built debug build using CFLAGS=-g for configuring and now trying to
step through for finding failure location.

Thanks & Regards,
Vikram

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Monday, March 29, 2010 4:45 PM
To: Vikram Patil
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Floating point exception in initdb 

Vikram Patil vpa...@actuate.com writes:
 I am getting an floating exception while running initdb from
custom
 built postgresql . I am using suse linux for installing postgresql.

 initDB.sh: line 14: 20285 Floating point
exception$pgsql_home/bin/initdb
 -A password -U postgres --pwfile=pwFile -E UTF8 -D postgresql/data
 /log/initDB.log

Hm, dunno what's causing that ...

 Work around gcc bug that causes floating-point exception instead of
 division by zero on some platforms (Tom)

... but I'm quite sure that patch won't fix it for you, because it was
in code that wouldn't get executed during initdb.

What compiler are you using, for what hardware?  Did you use any
nondefault configure or compiler switches?  Have you modified the
Postgres sources at all?  Which step of initdb gets the failure?
Try running that step under gdb so you can get a stack trace pointing
at the failure location.

regards, tom lane



Re: [GENERAL] Floating point exception in initdb

2010-03-29 Thread Tom Lane
Vikram Patil vpa...@actuate.com writes:
 Thanks for reply Tom. I am having source code form 8.4.1 version which
 is released on 09-09-2009. 

 I was able to install it on Redhat machine without any issues.  But I am
 facing this issue while running initdb on Suse Ent. 9 machine.( Linux
 2.6.5-7.97-smp #1 SMP Fri Jul 2 14:21:59 UTC 2004 i686 i686 i386
 GNU/Linux )

 Source code was compiled in following environment:
 Gcc : version 4.1.2 20071124 (Red Hat 4.1.2-42)
 Kernel Version: 2.6.18-8.el5 #1 SMP

[ squint... ]  This isn't totally clear, but are you saying you compiled
on some semi-recent Red Hat platform and are trying to run the resulting
executables on an old SUSE platform?  I wouldn't really expect that to
work.  glibc's API changes from time to time.  Would be better to
compile on the same release you're planning to run on.

regards, tom lane



Re: [GENERAL] Floating point exception in initdb

2010-03-29 Thread Vikram Patil
Tom,

   Thanks for the guidelines. I checked the glibc versions. I am currently
using glibc library 2.3.3 on the SUSE platform, while the build machine I
used has glibc library 2.5. I will try to install it on a SUSE machine with
glibc library 2.5.

Thanks & Regards,
Vikram

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Monday, March 29, 2010 5:29 PM
To: Vikram Patil
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Floating point exception in initdb 

Vikram Patil vpa...@actuate.com writes:
 Thanks for reply Tom. I am having source code form 8.4.1 version which
 is released on 09-09-2009. 

 I was able to install it on Redhat machine without any issues.  But I
am
 facing this issue while running initdb on Suse Ent. 9 machine.( Linux
 2.6.5-7.97-smp #1 SMP Fri Jul 2 14:21:59 UTC 2004 i686 i686 i386
 GNU/Linux )

 Source code was compiled in following environment:
 Gcc : version 4.1.2 20071124 (Red Hat 4.1.2-42)
 Kernel Version: 2.6.18-8.el5 #1 SMP

[ squint... ]  This isn't totally clear, but are you saying you compiled
on some semi-recent Red Hat platform and are trying to run the resulting
executables on an old SUSE platform?  I wouldn't really expect that to
work.  glibc's API changes from time to time.  Would be better to
compile on the same release you're planning to run on.

regards, tom lane



[GENERAL] set statement_timeout does not work

2010-03-29 Thread Jun Wang
Hi,

I try to set the statement_timeout so that select pg_stop_backup();
will not hang if archive command failed. Below are the command and
errors.

psql.exe -d mydb -h myhost -p 5432 -U postgres -w -c "set
statement_timeout = 1000; select pg_stop_backup();"
WARNING:  pg_stop_backup still waiting for archive to complete (60
seconds elapsed)
WARNING:  pg_stop_backup still waiting for archive to complete (120
seconds elapsed)
WARNING:  pg_stop_backup still waiting for archive to complete (240
seconds elapsed)
WARNING:  pg_stop_backup still waiting for archive to complete (480
seconds elapsed)


I also tried to run the two commands separately, as below. It also does not work.

psql.exe -d mydb -h myhost -p 5432 -U postgres -w -c "set
statement_timeout = 1000;"
psql.exe -d mydb -h myhost -p 5432 -U postgres -w -c "select pg_stop_backup();"

If I change the statement_timeout setting in postgresql.conf, it works,
but it will affect all queries.

How can I do this with psql?
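Two separate psql -c calls each open their own session, so the SET from the first session is gone before the second command runs; and in a single multi-statement -c string, the timeout in effect when the query string arrives appears to be what counts, so a SET inside the same string comes too late as well. Setting the value at connection time, for example via the PGOPTIONS environment variable, sidesteps both (a sketch, using Windows cmd.exe syntax to match the psql.exe invocation):

```shell
set PGOPTIONS=-c statement_timeout=1000
psql.exe -d mydb -h myhost -p 5432 -U postgres -w -c "select pg_stop_backup();"
```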

Thanks.

Jack



Re: [GENERAL] simultaneously reducing both memory usage and runtime for a query

2010-03-29 Thread Faheem Mitha



On Sun, 28 Mar 2010, Tom Lane wrote:


Faheem Mitha fah...@email.unc.edu writes:

... In any case, feedback would be helpful. Details of my attempts
at optimization are at
http://bulldog.duhs.duke.edu/~faheem/snppy/opt.pdf


By and large, this is not the way to ask for help on the Postgres lists.
If you're supplying extremely large test data or something, it's fair to
provide a link instead of putting the information in-line, but otherwise
you should try to make your email self-contained.  Those of us who are
willing to help are not here just to help you --- we'd like other people
to learn from it too, both at the time and later from the archived
discussion.  So the information needs to be in the email thread, not
only on some transient web page.


I submitted a modified self-contained email as requested, but it does not 
appear to have made it to the list, and I never got any kind of reject 
message. It was around 1000 lines with no attachments. Is there a size 
limit for posts, and if so, what is it? If it was rejected due to some 
filter, it would be desirable (and polite) if the recipient was told what 
happened.


  Regards, Faheem.
