Re: [GENERAL] 10GbE / iSCSI storage for postgresql.

2011-09-23 Thread Craig Ringer

On 22/09/2011 5:47 PM, Rajesh Kumar Mallah wrote:

Dear Craig ,

The other end of the iSCSI shall have all the goodies like the RAID controller
with a WBC with BBU. There can even be multiple RAID cards for multiple
servers and disk sets. I am even planning for NICs with TOE features.

  The doubt is whether it will work within an acceptable performance range
compared to the situation of DAS (Direct Attached Storage). Has anyone tried
this before?


Sure, people use iSCSI and similar relatively frequently, and as I said 
it depends a lot on the controller (client- and server-side), the 
workload, and the details of the implementation.


If the iSCSI storage is fast, PostgreSQL will be fast. If the iSCSI 
storage has slow writes, PostgreSQL will have slow writes. And so on.


--
Craig Ringer



Re: [GENERAL] Materialized views in Oracle

2011-09-23 Thread Mike Christensen
On Wed, Sep 21, 2011 at 11:34 PM, Craig Ringer ring...@ringerc.id.au wrote:
[snip]
 This can get complicated when you have triggers acting recursively on a
 table and it isn't always that easy to understand exactly what a trigger
 will see.

I do agree with most all your points.  The value I got out of this
experience was the perspective I gained dealing with materialized
views and being forced to think about the logic behind their
implementation.  As you said, this feature is friggen tough to do
right.  I read up a bit about Microsoft SQL’s “indexed views” and they
too have a long list of things you can’t do, and from what I can tell,
they only support the idea of keeping them up to date on every commit.
So, since we’re not on this list to discuss how we can improve Oracle,
I’d like to not make that the focus of my response.  I’d rather have a
discussion on what materialized views could mean in the Postgres
world.  I think my expectations, now slightly more validated through
my realization of what’s possible with Oracle, would go something like
this:

1) Though I might have given the impression that a “manual complete
refresh” is not useful, I definitely see value in this especially for
data warehousing scenarios.  However, I’d almost want to call this a
“snapshot” and not a “materialized view” – the two things are so
different, I think it warrants different syntax.  I think the ability
to create a snapshot in time would be quite useful:

CREATE SNAPSHOT Foo AS
   SELECT * FROM Bar;

Now, I have “Foo” as a record in time, and can refer to it as I could
any other table. I’d also like the ability to refresh it (via a
schedule or a trigger):

REFRESH SNAPSHOT Foo;

Snapshots would remember their underlying query and could thus easily
update at any time.

2) The huge feature here (which MS SQL implements as indexed views)
is views that automatically update as the data underneath them
changes.  I’ve come to the conclusion that, while it’s impressive that
Oracle can do anything close to this and have it actually work pretty
well in a lot of scenarios, Oracle simply can’t have an
all-encompassing knowledge of exactly how my database works and where
all the data comes from.  You know who does have that knowledge?  Me.
I believe I should have the power to instruct Postgres exactly when
and how to update my view in situations where it cannot be
automatically ascertained through the engine, rather than the DB
saying “Sorry I can’t be perfect thus you can’t do that.”  For me, I
see this “chore” as a lot more appealing than trying to figure out why
I can’t make the view that I want.

I expect to be able to create two kinds of materialized views: Ones
that COMPLETE refresh any time any referred column changes, and ones
that do a FAST refresh.  If I specify FAST but the engine can’t infer
what it needs, I should get a NOTICE and it should fall back to a
COMPLETE.  If I specify nothing, it should create FAST if it can, and
if not fall back to COMPLETE without notice.

When creating a materialized view, I believe warnings should be issued
when a column’s source cannot be inferred by the parser, but at the
risk of being controversial, I think I should still be able to create
the view anyway.  I’ve always been partial to systems that allow you
to shoot yourself in the foot.  I could see doing something like:

CREATE MATERIALIZED VIEW Foo AS
   SELECT ID, Name FROM Users;

Postgres knows that ID is a primary key, and can thus update the view
when Users changes.  Had I not put in a primary key, I think one
should be generated for me automatically based on every primary key in
the referred tables.  If tables do not have primary keys, you’d get a
warning that the view has to be re-created on any change to the
referred keyless tables.

CREATE MATERIALIZED VIEW Foo AS
   SELECT State, AVG(Age) FROM Users GROUP BY State;

Ok this is a tough one to figure out.  Since the Age column is
referred to, we could force a refresh every time Age in any row
changes.  In theory, the engine could be smart enough to realize the
aggregate age came from a group, and thus update the appropriate
“State” row when any Age within that state changed.  Wow, this is
getting tough; I see why Oracle just said no way on aggregate
functions.
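A minimal sketch of what a hand-maintained version of that could look like, assuming a Users(ID, State, Age) table and ignoring UPDATEs for brevity (this is only an illustration, not an existing feature): keep a running count and sum per state so the average never needs a full recompute.

CREATE TABLE users_age_by_state (
    state text PRIMARY KEY,
    n     bigint NOT NULL,
    total bigint NOT NULL
);

CREATE OR REPLACE FUNCTION users_age_by_state_maint() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        -- bump the running aggregate for the new row's state
        UPDATE users_age_by_state
           SET n = n + 1, total = total + NEW.age
         WHERE state = NEW.state;
        IF NOT FOUND THEN
            INSERT INTO users_age_by_state VALUES (NEW.state, 1, NEW.age);
        END IF;
        RETURN NEW;
    ELSE
        -- DELETE: back the row's contribution out again
        UPDATE users_age_by_state
           SET n = n - 1, total = total - OLD.age
         WHERE state = OLD.state;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_age_by_state_trg
AFTER INSERT OR DELETE ON users
FOR EACH ROW EXECUTE PROCEDURE users_age_by_state_maint();

The per-state AVG(Age) is then simply total::numeric / n in a view over users_age_by_state, always current without a full refresh.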

CREATE MATERIALIZED VIEW Foo AS
   SELECT ID, GetStateFromZip(ZipCode) as State FROM Users;

Ouch, a function.  Well, the engine could look at the volatility of my
GetStateFromZip function and make an assumption that passing in value
X would always result in Y for nonvolatile functions.  Then, update
row ID when ZipCode changes.  However, if the function is volatile or
the data that the function itself uses changes, then we’d definitely
run into issues.  Two options: One, you’d issue a NOTICE and say
something like “The source of column ‘State’ cannot be inferred.” and
create the view anyway.  Maybe there could be some STRICT option or
pragma to simply not allow creating the dangerous views that could get
stale.  

Re: [GENERAL] looking for a faster way to do that

2011-09-23 Thread hamann . w
Alban Hertroys haram...@gmail.com wrote:

 What is the output of explain?

 You say 'the other table', so presumably we're dealing with a foreign key
 here. Is there an index on that column?

Albe Laurenz wrote:

 Is the index used for where code ~ '^ABC3563'?
 
 If not, then the result is fast only because the table is scanned only once,
 and it's just the factor of 3000 that's killing you.
  
 The second query (where code ~ wantcode) can never use an index because
 the pattern wantcode is unknown at query planning time.
  
 Yours,
 Laurenz Albe


Here I created a subset (just number and code matching a certain prefix)

\d items
      Table "pg_temp_1.items"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 num    | integer               |
 code   | character varying(40) |
create index itemsc on items (code);

select count(*) from items;
 count
-------
  9614

A single anchored query
select * from items where code ~ '^ABC';
does indeed use the index to retrieve data.
 
Next I copied a file of wanted codes

create temp table n (wantcode text);
\copy n from /tmp/rmartin.tmp

the file contains plain names, i.e. unanchored matches

explain analyze select num, n.wantcode from items, n where items.code ~ n.wantcode;
 Nested Loop  (cost=20.00..216502.14 rows=48070 width=36) (actual time=148.479..336280.488 rows=2871 loops=1)
   Join Filter: ((outer.code)::text ~ inner.wantcode)
   ->  Seq Scan on items  (cost=0.00..167.14 rows=9614 width=42) (actual time=0.048..38.666 rows=9614 loops=1)
   ->  Materialize  (cost=20.00..30.00 rows=1000 width=32) (actual time=0.001..1.049 rows=815 loops=9614)
         ->  Seq Scan on n  (cost=0.00..20.00 rows=1000 width=32) (actual time=0.003..1.839 rows=815 loops=1)
 Total runtime: 336286.692 ms

An exact match (where items.code = n.wantcode) on the same data completes in 40 ms.

BTW: indexing the second table does not affect the query plan or the runtime;
it just shows the actual row count rather than an estimate.

This is, of course, bad; an anchored match could be faster and is also more
appropriate to the scenario. So I change the contents of the second table

update n set wantcode = textcat('^', wantcode);

and try again, with similar results
 Nested Loop  (cost=14.15..176478.01 rows=39178 width=36) (actual time=125.114..308831.697 rows=2871 loops=1)
   Join Filter: ((outer.code)::text ~ inner.wantcode)
   ->  Seq Scan on items  (cost=0.00..167.14 rows=9614 width=42) (actual time=0.061..2034.572 rows=9614 loops=1)
   ->  Materialize  (cost=14.15..22.30 rows=815 width=32) (actual time=0.001..1.095 rows=815 loops=9614)
         ->  Seq Scan on n  (cost=0.00..14.15 rows=815 width=32) (actual time=0.114..1.893 rows=815 loops=1)
 Total runtime: 308837.746 ms


I am aware that this is unlikely to work fast (the planner would perhaps need a
hint in the query, rather than in the data column, to choose an anchored match
algorithm, in case there is such an algorithm at all).

So I wonder whether there might be a different approach to this problem rather
than pattern matching.
I recall I had a similar problem before with a contacts column possibly
containing one or more email addresses. There, search cost would also have been
number of people times number of requests. I finally ended up with a @@ match
(contrib/tsquery) and a supporting GiST index, but that only supports exact
match, not prefix.

Regards
Wolfgang Hamann








[GENERAL] Relative performance of prefix and suffix string matching

2011-09-23 Thread Andrew Rose
Basic Question: In text fields, is prefix matching significantly faster than 
suffix matching?

Background:

I'm designing a database schema where a common operation will be searching for
substring x either at the beginning or the end of column 'str'.

1. I could have the client issue...

SELECT * FROM tbl WHERE str LIKE 'x%' OR str LIKE '%x'

2. Alternatively, I could store column 'rev_str' as a reversed version of 
column 'str' and have the client produce a reversed version of x on each query 
(call it r).  Then the client would issue...

SELECT * FROM tbl WHERE str LIKE 'x%' OR rev_str LIKE 'r%'

...which would use prefix matches only instead of requiring suffix matches.  
Since I've seen this form used by others, I was wondering if it's necessary - 
i.e. if databases really do perform prefix matching faster?

3. Is there a solution I'm unaware of with even better performance?

Thanks,

Andrew



Re: [GENERAL] Date time value error in Ms Access using pass through queries

2011-09-23 Thread c k
When I connected to Postgres from MS Access, it threw the error.
I have now updated the ODBC driver, and the same problem still occurs for 9.1
but not for 9.0. There must be something that changed in the 9.1 release which
prevents the automatic conversion of the date format between client and server.
I installed Postgres 9.1 from a package, using the EnterpriseDB one-click
installer.
The ODBC driver is version 9.0.3.10.
Should I file a bug for this?

Regards,
C P Kulkarni

On Thu, Sep 22, 2011 at 7:42 PM, Adrian Klaver adrian.kla...@gmail.comwrote:

 On Thursday, September 22, 2011 2:25:40 am c k wrote:
  It shows error as 'Date/time value out of range' and gives the actual
 part
  of the sql statement which contains date as the details.

 What program threw the error, Access,ODBC or Postgres?

 
  I have installed both databases at different locations on same drive on
  Fedora 15 and accessing it from Windows Xp virtual machine.
  And both databases from 9.0. and 9.1 have same structure and and it is
  found that this problem comes for all date columns for 9.1. There is no
  such problem for 9.0 at all.

 Define install, from source or package?

 
  When same query is executed from PgAdmin it executed successfully for
 9.1.
  So the error may be in ODBC driver and some things related to date data
  type may be changed in 9.1.

 What version of the ODBC driver are using?

 FYI, there is a Postgres ODBC list:
 http://archives.postgresql.org/pgsql-odbc/

 
  C P Kulkarni
 


 --
 Adrian Klaver
 adrian.kla...@gmail.com



Re: [GENERAL] Relative performance of prefix and suffix string matching

2011-09-23 Thread Tore Halvorsen
On Fri, Sep 23, 2011 at 11:47 AM, Andrew Rose andrew.r...@metaswitch.comwrote:

 Basic Question: In text fields, is prefix matching significantly faster
 than suffix matching?


If you are using text_pattern_ops, then yes.



 2. Alternatively, I could store column 'rev_str' as a reversed version of
 column 'str' and have the client produce a reversed version of x on each
 query (call it r).  Then the client would issue...


... or use an index on the reversed string.

create table foo (text text not null);
insert into foo select md5(generate_series(1, 100, 1)::text);
create index on foo(text text_pattern_ops);
create index on foo(reverse(text) text_pattern_ops);
explain select * from foo where text like 'f000' || '%' or reverse(text)
like reverse('f000') || '%'

Bitmap Heap Scan on foo  (cost=9.20..13.22 rows=200 width=33)
  Recheck Cond: ((text ~~ 'f000%'::text) OR (reverse(text) ~~ '000f%'::text))
  Filter: ((text ~~ 'f000%'::text) OR (reverse(text) ~~ '000f%'::text))
  ->  BitmapOr  (cost=9.20..9.20 rows=1 width=0)
        ->  Bitmap Index Scan on foo_text_idx  (cost=0.00..4.55 rows=1 width=0)
              Index Cond: ((text ~>=~ 'f000'::text) AND (text ~<~ 'f001'::text))
        ->  Bitmap Index Scan on foo_reverse_idx  (cost=0.00..4.55 rows=1 width=0)
              Index Cond: ((reverse(text) ~>=~ '000f'::text) AND (reverse(text) ~<~ '000g'::text))

... at least this works for me :)

-- 
Eld på åren og sol på eng gjer mannen fegen og fjåg. [Jøtul]
demo 2011 Tore Halvorsen || +052 0553034554


Re: [GENERAL] looking for a faster way to do that

2011-09-23 Thread Eduardo Morras

At 09:45 23/09/2011, haman...@t-online.de wrote:


A single anchored query
select * from items where code ~ '^ABC';
does indeed use the index to retrieve data.


So I wonder whether there might be a different approach to this 
problem rather than

pattern matching.
I recall I had a similar problem before with a contacts column 
possibly containing one or more
email addresses. Here searches would also be number of people times 
number of requests
performance. I finally ended up with a @@ match (contrib/tsquery) 
and a supporting GIST index,

but that only supports exact match, not prefix


You can try these; I doubt they will use any index, but it's a
different approach:

select * from items where length(items.code) > length(rtrim(items.code,'ABC'));

select * from items where strpos(items.code,'ABC')=0 or
strpos(items.code,'any_substring')=0;


HTH 






Re: [GENERAL] Relative performance of prefix and suffix string matching

2011-09-23 Thread Alban Hertroys
On 23 September 2011 11:47, Andrew Rose andrew.r...@metaswitch.com wrote:

 Basic Question: In text fields, is prefix matching significantly faster
 than suffix matching?


It does depend on what type of index you use. BTrees split text strings
from left to right, halving the number of records you need to scan at every
branch. For a suffix match, that's exactly the wrong way around.
Hash indexes probably don't fare any better.
I don't know enough about GiST or GIN indexes to comment on their
suitability for suffix matches, but presumably they're better at those.

I recall doing suffix matches used to be a problem in at least earlier
versions of Postgres, but it's quite possible that the query planner is
smart enough to do the reverse match by itself nowadays (I doubt it, seeing
that it would also need to reverse the way the index is organised).


 2. Alternatively, I could store column 'rev_str' as a reversed version of
 column 'str' and have the client produce a reversed version of x on each
 query (call it r).  Then the client would issue...

 SELECT * FROM tbl WHERE str LIKE 'x%' OR rev_str LIKE 'r%'

 ...which would use prefix matches only instead of requiring suffix matches.
  Since I've seen this form used by others, I was wondering if it's necessary
 - i.e. if databases really do perform prefix matching faster?

 3. Is there a solution I'm unaware of with even better performance?


You can create a functional index on the reverse of the string, thereby
avoiding the need for an extra column (which would need updating as well).

CREATE INDEX tbl_str_rev ON tbl (reverse(str));
SELECT * FROM tbl WHERE str LIKE 'x%' OR reverse(str) LIKE 'x%';

See: http://www.postgresql.org/docs/9.0/static/indexes-expressional.html

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


Re: [GENERAL] Query performs badly with materialize node

2011-09-23 Thread Ingmar Brouns

 In short: if you want to get useful commentary on your problem, you
 need to provide a more complete picture.

                        regards, tom lane


I've posted the query plans to this list several times now, but they
do not show up, almost as if they are being caught by a spam filter or
something. I wrote to pgsql-general-owner about this a few days ago
but haven't received a reply. Am I missing something?

Kind regards,

Ingmar



[GENERAL] get number and names of processes connected to postgresql

2011-09-23 Thread Gregg Jaskiewicz
Basically, I've got a bunch of local processes connecting to PostgreSQL,
and I need to produce a report on the number of connections and their
origin every so often.
The pg version is 8.3.

Any ideas if there are tools to gather that info on Linux?
Netstat is the only one I know, but I have to parse/awk its output to
get something meaningful out of it.
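A rough per-database summary is available on the server side from pg_stat_activity (a sketch, using the 8.3 column names; client_addr is NULL for local unix-socket connections):

select datname, usename, client_addr, count(*) as connections
from pg_stat_activity
group by datname, usename, client_addr
order by connections desc;

That still won't show which local program owns each connection, which is the part netstat would have to cover.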


Ideas are welcomed.

-- 
GJ



Re: [GENERAL] looking for a faster way to do that

2011-09-23 Thread Alban Hertroys
On 23 September 2011 09:45, haman...@t-online.de wrote:

 Alban Hertroys haram...@gmail.com wrote:

  What is the output of explain?
 
  You say 'the other table', so presumably we're dealing with a foreign
 key
  here. Is there an index on that column?

 Albe Laurenz wrote:

  Is the index used for where code ~ '^ABC3563'?
 
  If not, then the result is fast only because the table is scanned only
 once,
  and it's just the factor of 3000 that's killing you.
 
  The second query (where code ~ wantcode) can never use an index because
  the pattern wantcode is unknown at query planning time.
 
  Yours,
  Laurenz Albe


 Here I created a subset (just number and code matching a certain prefix)

 \d items
  Table pg_temp_1.items
  Column | Type  | Modifiers
 +---+---
  num| integer   |
  code   | character varying(40) |
 create index itemsc on items (code);

 select count(*) from items;
  count
 ---
  9614

 A single anchored query
 select * from items where code ~ '^ABC';
 does indeed use the index to retrieve data.

 Next I copied a file of wanted codes

 create temp table n (wantcode text);
 \copy n from /tmp/rmartin.tmp

 the file contains plain names, i.e. unanchored matches

 explain analyze select num, n.wantcode from items, n where items.code ~
 n.wantcode;
  Nested Loop  (cost=20.00..216502.14 rows=48070 width=36) (actual time=148.479..336280.488 rows=2871 loops=1)
    Join Filter: ((outer.code)::text ~ inner.wantcode)
    ->  Seq Scan on items  (cost=0.00..167.14 rows=9614 width=42) (actual time=0.048..38.666 rows=9614 loops=1)
    ->  Materialize  (cost=20.00..30.00 rows=1000 width=32) (actual time=0.001..1.049 rows=815 loops=9614)
          ->  Seq Scan on n  (cost=0.00..20.00 rows=1000 width=32) (actual time=0.003..1.839 rows=815 loops=1)
  Total runtime: 336286.692 ms


So you're comparing a variable field value to a variable pattern - yeah,
that's going to hurt. There's no way you could index exactly that.

Perhaps there's some way you can transform the problem so that you get
something indexable?
For example, if your match patterns follow a certain pattern by themselves,
you could add a column with the longest match pattern that would match the
string. Then you could just do a query for which records have the match
pattern (in that new column) that you're looking for and voila!

If something like that is possible strongly depends on what kind of match
patterns you're using, of course.


 An exact match  where items.code = n.wantcode on the same data completes
 in 40 ms


That's an exact string match, of course that will be fast ;)

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


Re: [GENERAL] Query performs badly with materialize node

2011-09-23 Thread Alban Hertroys
On 23 September 2011 12:31, Ingmar Brouns swi...@gmail.com wrote:

 
  In short: if you want to get useful commentary on your problem, you
  need to provide a more complete picture.
 
 regards, tom lane
 

 I've posted the query plans now several times to this list, but they
 do not show up, almost as if being caught by a spam filter or
 something. I've wrote to pgsql-general-owner about this a few days ago
 but haven't received a reply. Am I missing something?

 Kind regards,

 Ingmar


GMail says you may not be who you say you are, so there is something going
on with your mails that's suspicious to some systems.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


Re: [GENERAL] Speed of lo_unlink vs. DELETE on BYTEA

2011-09-23 Thread Albe Laurenz
Reuven M. Lerner wrote:
 When a record in the main table is deleted, there is a rule (yes a rule -- 
 not a trigger) in the
 referencing table that performs a lo_unlink on the associated object.
 
 I just want to check that my intuition is correct: Wouldn't it be way faster 
 and more efficient
 for us to use BYTEA columns to store the data (which can get into the 20-50 
 MB range),
 and for us to just depend on ON DELETE CASCADE, rather than a rule?
[followed by dramatic performance numbers]
 
Could you try with a trigger instead of a rule and see if the performance is 
better?
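A minimal sketch of what such a trigger could look like (table and column names are placeholders, assuming the referencing table has a large-object column called obj):

CREATE OR REPLACE FUNCTION unlink_lo() RETURNS trigger AS $$
BEGIN
    PERFORM lo_unlink(OLD.obj);  -- remove the associated large object
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER unlink_lo_trg
AFTER DELETE ON referencing_table
FOR EACH ROW EXECUTE PROCEDURE unlink_lo();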
 
Yours,
Laurenz Albe



Re: [GENERAL] Relative performance of prefix and suffix string matching

2011-09-23 Thread Stéphane A. Schildknecht
Le 23/09/2011 12:30, Alban Hertroys a écrit :
 On 23 September 2011 11:47, Andrew Rose andrew.r...@metaswitch.com
 mailto:andrew.r...@metaswitch.com wrote:
 
 Basic Question: In text fields, is prefix matching significantly faster
 than suffix matching?
 
 
 It does depend on what type of index you use. BTrees split off text strings,
 from left to right, halving the number of records you need to scan at every
 branch. For a suffix match, that's exactly the wrong way around.
 Hash indexes probably don't fare any better.
 I don't know enough about GIST or GIN indexes to comment on their suitability
 for suffix matches, but presumably they're better at those.
 
 I recall doing suffix matches used to be a problem in at least earlier 
 versions
 of Postgres, but it's quite possible that the query planner is smart enough to
 do the reverse match by itself nowadays (I doubt it, seeing that it would also
 need to reverse the way the index is organised).
  
 
 2. Alternatively, I could store column 'rev_str' as a reversed version of
 column 'str' and have the client produce a reversed version of x on each
 query (call it r).  Then the client would issue...
 
 SELECT * FROM tbl WHERE str LIKE 'x%' OR rev_str LIKE 'r%'
 
 ...which would use prefix matches only instead of requiring suffix 
 matches.
  Since I've seen this form used by others, I was wondering if it's
 necessary - i.e. if databases really do perform prefix matching faster?
 
 3. Is there a solution I'm unaware of with even better performance?
 
  
 You can create a functional index on the reverse of the string, that way
 omitting the need for an extra column (that needs updating as well).
 
 CREATE INDEX tbl_str_rev ON tbl (reverse(str));
 SELECT * FROM tbl WHERE str LIKE 'x%' OR reverse(str) LIKE 'x%';
 
 See: http://www.postgresql.org/docs/9.0/static/indexes-expressional.html

You can use the pg_trgm extension, which lets you create GIN or GiST indexes on
your text field.

There is also wildspeed (see http://www.sai.msu.su/~megera/wiki/wildspeed).

I didn't try the latter solution, but the first one gives really great results
for searching partial strings.
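A rough sketch of the pg_trgm route (CREATE EXTENSION is the 9.1 spelling; earlier releases load the contrib SQL script instead, and tbl/str are just the names from the original question):

CREATE EXTENSION pg_trgm;
CREATE INDEX tbl_str_trgm_idx ON tbl USING gin (str gin_trgm_ops);

-- similarity search that can use the trigram index
SELECT * FROM tbl WHERE str % 'searchterm';

-- per-session similarity threshold
SELECT set_limit(0.2);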

Apropos, there's one thing I'd like to know: how to set the similarity
limit for pg_trgm on the server side (I'd like to have it set to 0.2 for
every new session, for instance).

Regards,
-- 
Stéphane Schildknecht
http://www.loxodata.com
Contact régional PostgreSQL




Re: [GENERAL] Problem with pg_upgrade 9.0 - 9.1

2011-09-23 Thread Thomas Kellerer

Thomas Kellerer, 17.09.2011 12:32:

I was trying to upgrade my Postgres 9.0 installation using pg_upgrade. Running 
it first with --check revealed no problems.

The when I did the actual migration, the following happened:


Mismatch of relation id: database dellstore, old relid 83613, new relid 16530
Failure, exiting



I have now got the same error (albeit with a different relation id) while migrating a
completely different data directory.

Is there anything I can do to help find the reason for this problem (or bug)?

Unfortunately the data contains some confidential information, so I cannot make
it available.

Regards
Thomas





Re: [GENERAL] Date time value error in Ms Access using pass through queries

2011-09-23 Thread c k
I found the solution. Earlier, for the 9.0 database, I had changed the line
datestyle = 'ISO, mdy' to datestyle = 'ISO, dmy', so it was working fine. For
9.1 I forgot to make this change in postgresql.conf. After making the change
everything is working fine.
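For reference, the same setting can also be applied without editing postgresql.conf, for example per database or per session (the database name below is just a placeholder):

ALTER DATABASE mydb SET datestyle = 'ISO, DMY';   -- takes effect for new sessions
SET datestyle = 'ISO, DMY';                       -- current session only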

Regards,
C P Kulkarni

On Fri, Sep 23, 2011 at 3:43 PM, c k shreeseva.learn...@gmail.com wrote:

 As I have connected to postgres from MS Access, it thrown the error.
 Now I have updated ODBC driver, still same problems comes for 9.1 and not
 for 9.0. There should be some thing that has changed in 9.1 release which
 prevents from auto-conversion of format of date from client to server.
 From package I have installed postgres 9.1 using EnterpriseDB one click
 installer.
 ODBC driver has version 9.0.3.10
 Should I file a bug for this?

 Regards,
 C P Kulkarni


 On Thu, Sep 22, 2011 at 7:42 PM, Adrian Klaver adrian.kla...@gmail.comwrote:

 On Thursday, September 22, 2011 2:25:40 am c k wrote:
  It shows error as 'Date/time value out of range' and gives the actual
 part
  of the sql statement which contains date as the details.

 What program threw the error, Access,ODBC or Postgres?

 
  I have installed both databases at different locations on same drive on
  Fedora 15 and accessing it from Windows Xp virtual machine.
  And both databases from 9.0. and 9.1 have same structure and and it is
  found that this problem comes for all date columns for 9.1. There is no
  such problem for 9.0 at all.

 Define install, from source or package?

 
  When same query is executed from PgAdmin it executed successfully for
 9.1.
  So the error may be in ODBC driver and some things related to date data
  type may be changed in 9.1.

 What version of the ODBC driver are using?

 FYI, there is a Postgres ODBC list:
 http://archives.postgresql.org/pgsql-odbc/

 
  C P Kulkarni
 


 --
 Adrian Klaver
 adrian.kla...@gmail.com





Re: [GENERAL] looking for a faster way to do that

2011-09-23 Thread hamann . w
Eduardo Morras wrote:

 You can try these, i doubt they will use any index but its a 
 different approach:
 
 select * from items where 
 length(items.code)length(rtrim(items.code,'ABC'));
 
 select * from items where strpos(items.code,'ABC')=0 or 
 strpos(items.code,'any_substring')=0;

Hi,

if I understand this right, it does not mean "check if the string appears at
position 0" (which could translate into an index query), but rather "check if
the string appears anywhere, and then check if that is position 0", so the
entire table is checked.

explain analyze select items.num, wantcode from items, n where strpos(code, wantcode) = 0;
 Nested Loop  (cost=167.14..196066.54 rows=39178 width=36) (actual time=0.074..36639.312 rows=7832539 loops=1)
   Join Filter: (strpos((inner.code)::text, outer.wantcode) = 0)
   ->  Seq Scan on n  (cost=0.00..14.15 rows=815 width=32) (actual time=0.005..2.212 rows=815 loops=1)
   ->  Materialize  (cost=167.14..263.28 rows=9614 width=42) (actual time=0.007..13.970 rows=9614 loops=815)
         ->  Seq Scan on items  (cost=0.00..167.14 rows=9614 width=42) (actual time=0.044..14.855 rows=9614 loops=1)
 Total runtime: 46229.836 ms


The query ran much faster than the pattern query, however. This seems to be the
difference between just searching for a plain string vs. initializing the regex
engine every time (for the 815 patterns in the test set).

Regards
Wolfgang Hamann



Re: [GENERAL] looking for a faster way to do that

2011-09-23 Thread hamann . w
Alban Hertroys wrote:

 So you're comparing a variable field value to a variable pattern - yeah,
 that's going to hurt. There's no way you could index exactly that.
 
 Perhaps there's some way you can transform the problem so that you get
 something indexable?
 For example, if your match patterns follow a certain pattern by themselves,
 you could add a column with the longest match pattern that would match the
 string. Then you could just do a query for which records have the match
 pattern (in that new column) that you're looking for and voila!
 
 If something like that is possible strongly depends on what kind of match
 patterns you're using, of course.

Hi Alban,

I already did that - the test set is just all records from the real table 
(about a million
entries) that match the common 'ABC' prefix

 An exact match  where items.code = n.wantcode on the same data completes
 in 40 ms


 That's an exact string match, of course that will be fast ;)

The main difference is: the fast query looks like

explain select items.num, wantcode from items, n where code = wantcode;
 Merge Join  (cost=53.56..1104.02 rows=39178 width=36)
   Merge Cond: ((outer.code)::text = inner.wantcode)
   ->  Index Scan using itemsc on items  (cost=0.00..438.75 rows=9614 width=42)
   ->  Sort  (cost=53.56..55.60 rows=815 width=32)
         Sort Key: n.wantcode
         ->  Seq Scan on n  (cost=0.00..14.15 rows=815 width=32)

and the slow one looks like this:

 Nested Loop  (cost=14.15..176478.01 rows=39178 width=36)
   Join Filter: ((outer.code)::text ~ inner.wantcode)

So the database takes an entirely different approach to retrieving the entries.

Regards
Wolfgang Hamann




[GENERAL] : Checksum ERROR when restoring Online Backup

2011-09-23 Thread Venkat Balaji
Hello Everyone,

I am testing the Online Backups of our production databases (this is part
of our disaster recovery plan).

After restoring the Online Backup, we tried to bring up the cluster and
ended up with the following error -

2011-09-23 07:29:04 CDT [24092]: [1-1] FATAL:  incorrect checksum in control
file.

Does this message mean the Online Backup is corrupted or invalid?

Normally, we get recovery related error messages. This is the first time we
are facing a problem like this.

Please help as this is critical for us.

Thanks
Venkat


Re: [GENERAL] : Checksum ERROR when restoring Online Backup

2011-09-23 Thread Venkat Balaji
Apologies -

I did not mention complete details of PG and OS -

Postgres 9.0.1
Production Operating System version where Postgres is running is as follows
-
Linux *prod-server* 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20 07:12:06 EDT 2010
x86_64 x86_64 x86_64 GNU/Linux

The server version where I restored the production backup is as follows -
Linux *backup-server* 2.6.18-194.3.1.el5PAE #1 SMP Sun May 2 04:42:25 EDT
2010 i686 i686 i386 GNU/Linux

I read somewhere that Postgres data files are not architecture
independent.

Please help !

Thanks
Venkat


On Fri, Sep 23, 2011 at 6:11 PM, Venkat Balaji venkat.bal...@verse.inwrote:

 Hello Everyone,

 I am testing the Online Backups of our production databases ( this is part
 of our disaster recovery plan ).

 After restoring the Online Backup, we tried to bring up the cluster and
 ended up with the following error -

 2011-09-23 07:29:04 CDT [24092]: [1-1] FATAL:  incorrect checksum in
 control file.

 Does this message mean, the Online Backup is corrupted or invalid ?

 Normally, we get recovery related error messages. This is the first time we
 are facing a problem like this.

 Please help as this is critical for us.

 Thanks
 Venkat



Re: [GENERAL] looking for a faster way to do that

2011-09-23 Thread Alban Hertroys
On 23 September 2011 14:29, haman...@t-online.de wrote:

 Alban Hertroys wrote:

  So you're comparing a variable field value to a variable pattern - yeah,
  that's going to hurt. There's no way you could index exactly that.
 
  Perhaps there's some way you can transform the problem so that you get
  something indexable?
  For example, if your match patterns follow a certain pattern by themselves,
  you could add a column with the longest match pattern that would match the
  string. Then you could just do a query for which records have the match
  pattern (in that new column) that you're looking for and voila!
 
  If something like that is possible strongly depends on what kind of match
  patterns you're using, of course.

 Hi Alban,

 I already did that - the test set is just all records from the real table 
 (about a million
 entries) that match the common 'ABC' prefix

I think you misunderstood what I wrote. Notice the difference between
"which strings match the pattern" and "which records have the match
pattern" (in that new column) - the first is a regular expression
match (unindexable), while the second is a string equality match
(indexable).

What I'm suggesting is to add a column, which for the string 'ABCDEFG'
would contain 'ABC%'.
Data would look like:

SELECT str, pattern FROM tbl;
   str   | pattern
---------+---------
 ABCDEFG | ABC%
 ABCDEF  | ABC%
 BCDEFGH | BCD%
 etc.

(can't format this properly in webmail, sorry)

When you look for records that match the pattern 'ABC%', you would
normally perform a query like:

SELECT str FROM tbl WHERE str LIKE 'ABC%';

But with this new column, you would query:

SELECT str FROM tbl WHERE pattern = 'ABC%';

As I said, it depends a lot on your pattern needs whether this
solution would work at all for you. If you only ever use a few
patterns, it will work. If you use many different patterns or don't
know before-hand which patterns will be used, it won't work well at
all.
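A sketch of what that extra column could look like in practice (the fixed three-character prefix here is just an example, matching the 'ABC' case from this thread):

ALTER TABLE tbl ADD COLUMN pattern text;
UPDATE tbl SET pattern = substr(str, 1, 3) || '%';
CREATE INDEX tbl_pattern_idx ON tbl (pattern);

-- the lookup is then a plain equality, which the index can serve
SELECT str FROM tbl WHERE pattern = 'ABC%';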

 The main difference is: the fast query looks like

 explain select items.num, wantcode from items, n where code = wantcode;
  Merge Join  (cost=53.56..1104.02 rows=39178 width=36)
   Merge Cond: ((outer.code)::text = inner.wantcode)
    ->  Index Scan using itemsc on items  (cost=0.00..438.75 rows=9614 width=42)
    ->  Sort  (cost=53.56..55.60 rows=815 width=32)
          Sort Key: n.wantcode
          ->  Seq Scan on n  (cost=0.00..14.15 rows=815 width=32)

Is there an index on wantcode? If you have a million or more records,
I would expect an index scan for a measly 815 matches...

 and the slow ones looks like that one:

  Nested Loop  (cost=14.15..176478.01 rows=39178 width=36)
   Join Filter: ((outer.code)::text ~ inner.wantcode)

 So the database takes an entirely differnet approach at retrieving the 
 entries.

Yes, because you're still using ~ there, with a pattern that's unknown
at query planning time. That will only be fast under some fairly rare
circumstances.

--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] : Checksum ERROR when restoring Online Backup

2011-09-23 Thread Richard Huxton

On 23/09/11 13:53, Venkat Balaji wrote:

Linux *prod-server* 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20 07:12:06 EDT
2010 x86_64 x86_64 x86_64 GNU/Linux

The server version where I restored the production backup is as follows -
Linux *backup-server* 2.6.18-194.3.1.el5PAE #1 SMP Sun May 2 04:42:25
EDT 2010 i686 i686 i386 GNU/Linux

I read some where that, Postgres datafiles are not architecture
independent.


They (the WAL files) are not, and it looks like you're trying to restore 
a 64-bit version onto a 32-bit server. That's not going to work. A 
pg_dump/restore works of course, and if you need replication then Slony 
can handle this.
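A rough sketch of that route (host names taken from the uname output above, database name is a placeholder):

pg_dump -h prod-server -Fc -f mydb.dump mydb
pg_restore -h backup-server -d mydb mydb.dump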


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Speed of lo_unlink vs. DELETE on BYTEA

2011-09-23 Thread Reuven M. Lerner

  
  
Hi, everyone. Albe wrote:

  Could you try with a trigger instead of a rule and see if the
  performance is better? Yours,
  Laurenz Albe

Great idea.  I did that, and here are the results for 10,000
records:

  |                           | Delete    | Dump      |
  |---------------------------+-----------+-----------|
  | Empty content             | 8.162s    | 0.064s    |
  | bytea                     | 1m0.417s  | 0.157s    |
  | large object with rule    | 4m44.501s | 1m38.454s |
  | large object with trigger | 7m42.208s | 1m48.369s |

Ideas, anyone? 

Reuven
-- 
Reuven M. Lerner -- Web development, consulting, and training
Mobile: +972-54-496-8405 * US phone: 847-230-9795
Skype/AIM: reuvenlerner

  



[GENERAL] pg_dump compress

2011-09-23 Thread Roger Niederland
Using pg_dump from the command line with the exe included in windows 32 
bit install in postgresql 9.1.

pg_dump  -U username -f somefile.sql.gz -Z 9 -i dbname

outputs a file that is in plain text.  In previous versions of 
postgresql, this output a gzipped file.

Also tried:  pg_dump -U username -Z 9 -i dbname > somefile.sql.gz
got the same results.

Tried to revert to a previous version of pg_dump from pgadmin 1.12.
The dump is aborted because of server version mismatch (server version 
9.1.0; pg_dump version: 9.0.1)


Looked at the pg_dump documentation and it was not noted that the -Z 
option changed.


Roger



Re: [GENERAL] pg_dump compress

2011-09-23 Thread hubert depesz lubaczewski
On Thu, Sep 22, 2011 at 11:00:10PM -0700, Roger Niederland wrote:
 Using pg_dump from the command line with the exe included in windows
 32 bit install in postgresql 9.1.
 pg_dump  -U username -f somefile.sql.gz -Z 9 -i dbname

please note that pg_dump --help shows:
  -Z, --compress=0-9  compression level for compressed formats

hint: the part for compressed formats is critical.

plain dump is not compressed, you'd have to use -Fc to get compression,
but it's good to use anyway.
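For example (a sketch with placeholder names), a custom-format dump honours -Z and can later be restored, or turned back into plain SQL, with pg_restore:

pg_dump -U username -Fc -Z 9 -f somefile.dump dbname
pg_restore -U username -d dbname somefile.dump

Without -d, pg_restore writes the plain SQL script to standard output instead of restoring directly.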

Best regards,

depesz




Re: [GENERAL] pg_dump compress

2011-09-23 Thread Adrian Klaver
On Thursday, September 22, 2011 11:00:10 pm Roger Niederland wrote:
 Using pg_dump from the command line with the exe included in windows 32
 bit install in postgresql 9.1.
 pg_dump  -U username -f somefile.sql.gz -Z 9 -i dbname
 
 outputs a file that is in plain text.  In previous versions of
 postgresql, this output a gzipped file.
 Also tried:  pg_dump -U username -Z 9 -i dbname  somefile.sql.gz
 got the same results.
 
 Tried to revert to a previous version of pg_dump from pgadmin 1.12.
 The dump is aborted because of server version mismatch (server version
 9.1.0; pg_dump version: 9.0.1)
 
 Looked at the pg_dump documentation and it was not noted that the -Z
 option changed.

Works here on a Linux machine. Maybe the gzip code is not installed in your 
Windows install?  By the way as concerns the -i option, from the docs:

http://www.postgresql.org/docs/9.1/interactive/app-pgdump.html

-i
--ignore-version

A deprecated option that is now ignored. 

So ignore is ignored:)

 
 Roger

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] pg_dump compress

2011-09-23 Thread Adrian Klaver
On Friday, September 23, 2011 6:46:49 am hubert depesz lubaczewski wrote:
 On Thu, Sep 22, 2011 at 11:00:10PM -0700, Roger Niederland wrote:
  Using pg_dump from the command line with the exe included in windows
  32 bit install in postgresql 9.1.
  pg_dump  -U username -f somefile.sql.gz -Z 9 -i dbname
 
 please note that pg_dump --help shows:
   -Z, --compress=0-9  compression level for compressed formats
 
 hint: the part for compressed formats is critical.

The docs show:

http://www.postgresql.org/docs/9.1/interactive/app-pgdump.html

-Z 0..9
--compress=0..9

Specify the compression level to use. Zero means no compression. For the 
custom archive format, this specifies compression of individual table-data 
segments, and the default is to compress at a moderate level. For plain text 
output, setting a nonzero compression level causes the entire output file to be 
compressed, as though it had been fed through gzip; but the default is not to 
compress. The tar archive format currently does not support compression at all. 

 
 plain dump is not compressed, you'd have to use -Fc to get compression,
 but it's good to use anyway.
 
 Best regards,
 
 depesz

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] pg_dump compress

2011-09-23 Thread Adrian Klaver
On Thursday, September 22, 2011 11:00:10 pm Roger Niederland wrote:
 Using pg_dump from the command line with the exe included in windows 32
 bit install in postgresql 9.1.
 pg_dump  -U username -f somefile.sql.gz -Z 9 -i dbname
 

Ignore my previous post. I just realized that I was using the 9.0.4 version of 
pg_dump not 9.1. Sorry for the noise.

 
 Roger

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] looking for a faster way to do that

2011-09-23 Thread hamann . w
 
  Hi Alban,
 
  I already did that - the test set is just all records from the real table (about a million
  entries) that match the common 'ABC' prefix
 
 I think you misunderstood what I wrote. Notice the difference between
 which strings match the pattern and which records have the match
 pattern (in that new column) - the first is a regular expression
 match (unindexable), while the second is a string equality match
 (indexable).
 
 What I'm suggesting is to add a column, which for the string 'ABCDEFG'
 would contain 'ABC%'.
 Data would look like:
 
 SELECT str, pattern FROM tbl;
    str   | pattern
 ---------+---------
  ABCDEFG | ABC%
  ABCDEF  | ABC%
  BCDEFGH | BCD%
  etc.
 
 (can't format this properly in webmail, sorry)
 
 When you look for records that match the pattern 'ABC%', you would
 normally perform a query like:
 
 SELECT str FROM tbl WHERE str LIKE 'ABC%';
 
 But with this new column, you would query:
 
 SELECT str FROM tbl WHERE pattern = 'ABC%';
 
 As I said, it depends a lot on your pattern needs whether this
 solution would work at all for you. If you only ever use a few
 patterns, it will work. If you use many different patterns or don't
 know before-hand which patterns will be used, it won't work well at
 all.
 
  The main difference is: the fast query looks like
 
  explain select items.num, wantcode from items, n where code = wantcode;
   Merge Join  (cost=53.56..1104.02 rows=39178 width=36)
     Merge Cond: ((outer.code)::text = inner.wantcode)
     ->  Index Scan using itemsc on items  (cost=0.00..438.75 rows=9614 width=42)
     ->  Sort  (cost=53.56..55.60 rows=815 width=32)
           Sort Key: n.wantcode
           ->  Seq Scan on n  (cost=0.00..14.15 rows=815 width=32)
 
 Is there an index on wantcode? If you have a million or more records,
 I would expect an index scan for a measly 815 matches...
 
  and the slow ones looks like that one:
 
   Nested Loop  (cost=14.15..176478.01 rows=39178 width=36)
     Join Filter: ((outer.code)::text ~ inner.wantcode)
 
  So the database takes an entirely differnet approach at retrieving the entries.
 
 Yes, because you're still using ~ there, with a pattern that's unknown
 at query planning time. That will only be fast under some fairly rare
 circumstances.

Hi,

the problem is that I read the patterns from a file, as part of the script. They are not
known separately. So it seems that creating the extra column is just the same effort as
running the original query. The processing is one-time in nature.
The one thing I can do is select a range of items on a common prefix, if all the
codes in the second table have some characters in common.
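If the codes do share such a prefix, the anchored part can be pulled out of the join so the index still helps; a sketch using the 'ABC' prefix from earlier (whether LIKE can use the index also depends on the locale or a text_pattern_ops index):

select items.num, n.wantcode
from items, n
where items.code like 'ABC%'      -- narrows the item set via the index first
  and items.code ~ n.wantcode;    -- pattern join only within that slice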

 Is there an index on wantcode? If you have a million or more records,
 I would expect an index scan for a measly 815 matches..

When I ran a test, there was no real difference with wantcode indexed or not.
It was interesting to try another suggestion and note the difference between
comparison functions, with identical (lack of) use of an index.

Regards
Wolfgang Hamann




Re: [GENERAL] pg_dump compress

2011-09-23 Thread Adrian Klaver
On Thursday, September 22, 2011 11:00:10 pm Roger Niederland wrote:
 Using pg_dump from the command line with the exe included in windows 32
 bit install in postgresql 9.1.
 pg_dump  -U username -f somefile.sql.gz -Z 9 -i dbname
 
 outputs a file that is in plain text.  In previous versions of
 postgresql, this output a gzipped file.
 Also tried:  pg_dump -U username -Z 9 -i dbname  somefile.sql.gz
 got the same results.
 
 Tried to revert to a previous version of pg_dump from pgadmin 1.12.
 The dump is aborted because of server version mismatch (server version
 9.1.0; pg_dump version: 9.0.1)
 
 Looked at the pg_dump documentation and it was not noted that the -Z
 option changed.

I set up a 9.1 installation on my Linux machine and can confirm that the 9.1 
version of pg_dump does not compress plain text format dump files.

 
 Roger

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] : Checksum ERROR when restoring Online Backup

2011-09-23 Thread Venkat Balaji
Thanks Richard !

I realized that, I was restoring on an 32 bit server.

Regards,
Venkat

On Fri, Sep 23, 2011 at 6:59 PM, Richard Huxton d...@archonet.com wrote:

 On 23/09/11 13:53, Venkat Balaji wrote:

 Linux *prod-server* 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20 07:12:06 EDT
 2010 x86_64 x86_64 x86_64 GNU/Linux

 The server version where I restored the production backup is as follows -
 Linux *backup-server* 2.6.18-194.3.1.el5PAE #1 SMP Sun May 2 04:42:25
 EDT 2010 i686 i686 i386 GNU/Linux

 I read some where that, Postgres datafiles are not architecture
 independent.


 They (the WAL files) are not, and it looks like you're trying to restore a
 64-bit version onto a 32-bit server. That's not going to work. A
 pg_dump/restore works of course, and if you need replication then Slony can
 handle this.

 --
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Replication between 64/32bit systems?

2011-09-23 Thread Hannes Erven
OK,


thank you for clarification, I'll resign ;-) and accept that mixing
32/64 bits is not possible.


I continued my experiments, and would like to share my results. First of
all, I'm an idiot because I had logging_collector set to on and wondered
why the terminals did not give any meaningful output when postgres was
started with -d4.


Having fixed that, what actually seems to work with 9.1.0 is:

On the Windows host (64 bit):
- just use the 32bit binaries
- initdb --locale C
   (or any other locale that is also available on the slave)

Linux replication slave (32 bit):
- use CFLAGS="-m128bit-long-double -malign-double" ./configure
   (otherwise you'll hit a checksum error in the control file)


When you try to connect to a 64bit postgres host, the connection
succeeds and everything looks fine so far, until the first transaction
needs to be transferred: then, weird messages appear on the slave.
Obviously, this is the result of binary protocol incompatibility...


So far, I have only tried to replicate a small, newly created test
database. Next, I'll transfer a bigger database to a locale=C cluster
and see if replication works there as well.


Thanks again for your support,

-hannes



Re: [GENERAL] pg_dump compress

2011-09-23 Thread Roger Niederland



On 9/23/2011 6:46 AM, hubert depesz lubaczewski wrote:

On Thu, Sep 22, 2011 at 11:00:10PM -0700, Roger Niederland wrote:

Using pg_dump from the command line with the exe included in windows
32 bit install in postgresql 9.1.
pg_dump  -U username -f somefile.sql.gz -Z 9 -i dbname

please note that pg_dump --help shows:
   -Z, --compress=0-9  compression level for compressed formats

hint: the part for compressed formats is critical.

plain dump is not compressed, you'd have to use -Fc to get compression,
but it's good to use anyway.

Best regards,

depesz

On previous versions the -Z 9 option would compress the plain text sql
output to a gzipped file.
Now it no longer compresses.  Using -Fc outputs a compressed
file, which is only readable
by pg_restore.

Roger




Re: [GENERAL] pg_dump compress

2011-09-23 Thread Gregg Jaskiewicz
Can you pipe things on Windows?
It's a desktop system after all, but DOS had that sort of feature,
I seem to remember.



Re: [GENERAL] looking for a faster way to do that

2011-09-23 Thread David Johnston

explain analyze select num, n.wantcode from items, n where items.code ~ n.wantcode;
 Nested Loop  (cost=20.00..216502.14 rows=48070 width=36) (actual time=148.479..336280.488 rows=2871 loops=1)
   Join Filter: ((outer.code)::text ~ inner.wantcode)
   ->  Seq Scan on items  (cost=0.00..167.14 rows=9614 width=42) (actual time=0.048..38.666 rows=9614 loops=1)
   ->  Materialize  (cost=20.00..30.00 rows=1000 width=32) (actual time=0.001..1.049 rows=815 loops=9614)
         ->  Seq Scan on n  (cost=0.00..20.00 rows=1000 width=32) (actual time=0.003..1.839 rows=815 loops=1)
 Total runtime: 336286.692 ms


So you're comparing a variable field value to a variable pattern - yeah,
that's going to hurt. There's no way you could index exactly that.

Perhaps there's some way you can transform the problem so that you get
something indexable?
For example, if your match patterns follow a certain pattern by themselves,
you could add a column with the longest match pattern that would match the
string. Then you could just do a query for which records have the match
pattern (in that new column) that you're looking for and voila!

 

I've only been partially following this thread but did you try something
like:

 

WHERE items.code ~ ('^' || n.wantcode)

 

Not sure if this will be valid for your needs but the issue is that
PostgreSQL cannot rely on an index for non-anchored search patterns and your
compare-to data rightly does not contain regex meta-characters.  If you
explicitly indicate that the input expression is going to be anchored would
PostgreSQL then realize it can use the index?
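Spelled out against the tables from earlier in the thread, that suggestion would be (untested, as noted below):

explain analyze
select items.num, n.wantcode
from items, n
where items.code ~ ('^' || n.wantcode);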

 

Not Tested.

 

David J.

 

 



Re: [GENERAL] Materialized views in Oracle

2011-09-23 Thread Josh Kupershmidt
On Fri, Sep 23, 2011 at 3:03 AM, Mike Christensen m...@kitchenpc.com wrote:
 1) Though I might have given the impression that a “manual complete
 refresh” is not useful, I definitely see value in this especially for
 data warehousing scenarios.  However, I’d almost want to call this a
 “snapshot” and not a “materialized view” – the two things are so
 different, I think it warrants different syntax.
[snip]

I like the terminology employed by this page:
  http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views
which clarifies between snapshot, eager, lazy, and very lazy
materialized views.

[snip]

 One could argue that if you’re going this far, you’ve basically turned
 the whole trigger mechanism inside out and one could already implement
 this whole thing on Postgres 9 using real tables and triggers when the
 appropriate data changes.  This is something I’m struggling with as
 well.  Materialized views seem to be great for quick database
 snapshots and very simple selects, but anything too complicated and
 it’s either not an option, or you’d have to explain so much logic that
 you’re better off using a series of triggers and writing a custom
 solution anyway.

 This makes me really question the fundamental use of materialized
 views.  In other words, what is a real solid scenario that you would
 use one for that simply cannot be done currently using triggers?  Or,
 is it simply the “ease of creation” people are after?  There must be
 things I’m just not seeing, as this is the number one feature request
 on postgresql.uservoice.com by a massive landslide, and I fail to
 believe all those people are just “lazy.”

As a preface, I think materialized views are awesome when used
correctly. I don't have first-hand experience with Oracle's
implementation of them.

As far as I know, the maintenance of the matview itself can always be
done just as well using triggers as it can by a database engine (like
Oracle). In fact, I'd bet that for any reasonably complicated matview,
you're going to be able to wring out quite a bit more performance from
the hand-written one, as you can employ optimizations that the
database engine just doesn't know about.

For example, a while back I wrote a heavily-queried ~50M row matview
which was derived from ~12 or so base relations, some of which were
heavily updated. I was able to make the load imposed by the matview a
small part of the bulk-loading of the base tables by keeping track of
which changes to the base tables were actually interesting to the
matview, where interesting meant could possibly cause a significant
change to the corresponding row in the matview. Plus little tricks
like knowing when it'd be cheaper to perform a full refresh of the
matview vs. slogging through millions of rows of UPDATEs. I imagine a
database-engine maintained matview in Oracle would have imposed more
system load and been slower to keep up-to-date.

Of course, one cool thing a database-engine-supported matview can do
is automatic query-rewriting, if it can prove that the materialized
view is both up to date, and capable of satisfying the user's query.
But then, if you are using materialized views, is it really that hard
to change your performance-intensive queries to select from the view
instead of the base table?

As for materialized views being the #1 most-requested feature on
uservoice... well, perhaps the voters don't have a good idea of how
much effort it would be to implement (vs. other compelling features we
could add), and perhaps also don't understand how powerful and
flexible trigger-maintained matviews can be. The fact that the
suggestion claims it would "... boost performance for every web
application" makes me pretty skeptical right off the bat. I'm not
saying it wouldn't be cool to have, just that there is still a lot
more low-hanging fruit that the data-warehousing crowd would benefit
from.

Josh

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Speed of lo_unlink vs. DELETE on BYTEA

2011-09-23 Thread Daniel Verite
Reuven M. Lerner wrote:

 1.1 1,000 records
 ==================

                    Delete      Dump
  -----------------+-----------+---------
   Empty content     0.172s      0.057s
   bytea             0.488s      0.066s
   large object      30.833s     9.275s

How much bytea are you dumping for it to take only 0.066s?
The fact that it takes about the same time as dumping the empty content
looks very suspicious.

On my desktop machine, if I create a table with 1000 blobs containing strings
of 5 million 'x', which is what I understood you basically did (perhaps I
misunderstood?), then it takes about 200s to dump it with pg_dump -Fc.
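(For comparison, a sketch of how the bytea variant of such test data can be
generated in plain SQL; the table and column names are invented, and the
large-object variant would need lo_creat()/lo_open()/lowrite() or a
client-side loop instead.)

-- roughly 5 MB of 'x' per row, 1000 rows, stored as bytea
CREATE TABLE bytea_test (id integer PRIMARY KEY, content bytea);
INSERT INTO bytea_test
SELECT i, convert_to(repeat('x', 5000000), 'UTF8')
FROM generate_series(1, 1000) AS i;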

Best regards,
-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how to install 9.1 on non-graphical server?

2011-09-23 Thread Guillaume Lelarge
On Thu, 2011-09-22 at 22:11 -0300, Dickson S. Guedes wrote:
 2011/9/22 Andreas maps...@gmx.net:
  Hi,
  is there a way to install the EnterpriseDB V9.1 release on a server without
  X-Windows?
  I've got an OpenSuse 11.4 and they have only V9.0.4 in their repository so
  I'd need the binary of EnterpriseDB.
 
 Have you tried --help on the EDB installer command line?
 There is a text mode, AFAIR.
 

Or you can simply compile it.
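(Sketch of both routes, assuming the EDB one-click installer, which is
BitRock-based and should accept --mode text or --mode unattended; the
installer file name below is only an example, and the source build shows the
usual defaults.)

# text-mode install with the one-click installer (check --help for options)
./postgresql-9.1.0-1-linux-x64.run --mode text

# or build from source
./configure --prefix=/usr/local/pgsql
make && make install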


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] /usr/local/pgsql/data permissions

2011-09-23 Thread Rich Shepard

  My server just crashed because a CPU-intensive build threatened to
overheat the processor so the system shut down. When I rebooted and tried to
start postgres the attempt failed because `data directory
/usr/local/pgsql/data has group or world access'. As far as I can recall,
it's always been 755 like the other directories in /usr/local/pgsql.

  When I changed the perms to 700 I was able to start postgres. I want to
check that these are the proper permissions (I can run psql from the command
line and access Ledger-123 so I assume they're OK.)

  How the permissions on the directory changed when the system shut down
will probably always remain a mystery.

Rich

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] /usr/local/pgsql/data permissions

2011-09-23 Thread John R Pierce

On 09/23/11 1:48 PM, Rich Shepard wrote:

  My server just crashed because a CPU-intensive build threatened to
overheat the processor so the system shut down. When I rebooted and tried
to start postgres the attempt failed because `data directory
/usr/local/pgsql/data has group or world access'. As far as I can recall,
it's always been 755 like the other directories in /usr/local/pgsql.

  When I changed the perms to 700 I was able to start postgres. I want to
check that these are the proper permissions (I can run psql from the
command line and access Ledger-123 so I assume they're OK.)

  How the permissions on the directory changed when the system shut down
will probably always remain a mystery.




Indeed, 700 are the correct privs.

# ls -ld /var/lib/pgsql/9.0/data
drwx------. 13 postgres postgres 4096 Sep 12 20:07 /var/lib/pgsql/9.0/data

# ls -ld /var/lib/pgsql/data
drwx------ 11 postgres postgres 4096 Sep  9 12:08 /var/lib/pgsql/data

# ls -ld /var/postgres/8.4-community/data_64
drwx------  11 postgres dba   18 May 14 08:37 /var/postgres/8.4-community/data_64

$ ls -ld $PGDATA
drwx------   11 postgres staff  4096 Jun 07 12:29 /u01/pgsql/data

(different machines, different operating systems even...)
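(If the directory ever ends up too open again, restoring the expected state
is just the following, assuming the postgres OS user owns the cluster and the
data directory path from the original post:)

# only the top-level data directory needs to be mode 0700
chown postgres:postgres /usr/local/pgsql/data
chmod 700 /usr/local/pgsql/data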


--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] /usr/local/pgsql/data permissions

2011-09-23 Thread Rich Shepard

On Fri, 23 Sep 2011, John R Pierce wrote:


Indeed, 700 are the correct privs.


John,

  When it started and worked I assumed that was the case. But, I've not
before had directory permissions change when a system crashed.

  Cue the Twilight Zone theme.

Thanks,

Rich

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] /usr/local/pgsql/data permissions

2011-09-23 Thread John R Pierce

On 09/23/11 2:14 PM, Rich Shepard wrote:

  When it started and worked I assumed that was the case. But, I've not
before had directory permissions change when a system crashed.

  Cue the Twilight Zone theme. 


Did the system run some sort of fsck autorepair when it restarted?
That's about the only thing I could think of that might have messed with
the permissions.


--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] /usr/local/pgsql/data permissions

2011-09-23 Thread Rich Shepard

On Fri, 23 Sep 2011, John R Pierce wrote:


Did the system run some sort of fsck autorepair when it restarted? That's
about the only thing I could think of that might have messed with the
permissions.


  The file system is ext3 so it did restore from the journals.

  Anyway, now I know if I should ever again be unable to start postgres,
check the permissions per the error message.

Thanks again,

Rich

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump compress

2011-09-23 Thread Adrian Klaver
On Friday, September 23, 2011 7:26:19 am Roger Niederland wrote:
 On 9/23/2011 6:46 AM, hubert depesz lubaczewski wrote:
  On Thu, Sep 22, 2011 at 11:00:10PM -0700, Roger Niederland wrote:
  Using pg_dump from the command line with the exe included in windows
  32 bit install in postgresql 9.1.
  pg_dump  -U username -f somefile.sql.gz -Z 9 -i dbname
  
  please note that pg_dump --help shows:
 -Z, --compress=0-9  compression level for compressed formats
  
  hint: the part for compressed formats is critical.
  
  plain dump is not compressed, you'd have to use -Fc to get compression,
  but it's good to use anyway.
  
  Best regards,
  
  depesz
 
 On previous versions the -Z 9 option would compress the plain-text SQL
 output to a gzipped file. Now it no longer compresses.  Using -Fc, the
 output is a compressed file, which is only readable by pg_restore.

Seems there has been a code change in pg_dump.c that ignores compression level 
for plain text.

From line 537:

/*
 * Ignore compression level for plain format. XXX: This is a bit
 * inconsistent, tar-format throws an error instead.
 */

The documentation has not caught up with this change.
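A workaround sketch, assuming gzip is on the PATH (the same piping works with
cmd.exe redirection on Windows, as discussed in the next message):

# plain-format dump, compressed outside pg_dump
pg_dump -U username dbname | gzip > somefile.sql.gz

# or the custom format, which still honors -Z and is restored with pg_restore
pg_dump -U username -Fc -Z 9 -f somefile.dump dbname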

 
 Roger

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump compress

2011-09-23 Thread Filip Rembiałkowski
2011/9/23 Gregg Jaskiewicz gryz...@gmail.com

 can you pipe things on windows ?

Yes, you can. It pleasantly surprised me several years ago.
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/redirection.mspx?mfr=true


 It's a desktop system after all,

:-) what a nice dose of condescending irony.
You know, they are selling a server OS on top of that. Quite a lot of
copies.




cheers, Filip