Re: [BUGS] pgBench cannot use prepared statements and reconnections

2013-08-01 Thread Josh Berkus
On 07/31/2013 04:11 PM, Mark Kirkwood wrote:
> Hmmm, apologies - actually it didn't - I read right past the text
> 
> ERROR:  prepared statement "P0_7" does not exist
> 

Yah.  It's quite possible that the answer is to explicitly disallow this
-- the only reason I even tried it is that I was testing for a
PostgreSQL bug -- but we should fail gracefully, e.g. "Use of prepared
statements with pgbench requires persistent connections.  You may not
use the -C and -M prepared options together".
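
For anyone else who trips over this: the failure is specific to that
combination, so as a sketch, either of these runs fine:

  pgbench -c 4 -T 180 -r -M prepared bench   # keep connections open, prepared mode
  pgbench -c 4 -C -T 180 -r -M simple bench  # reconnect per transaction, default protocol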

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[BUGS] pgBench cannot use prepared statements and reconnections

2013-07-31 Thread Josh Berkus

To wit:

[jberkus@pgx-test prepare]$ pgbench -c 4 -C -T 180 -l -r -M prepared bench
starting vacuum...end.
Client 1 aborted in state 7: ERROR:  prepared statement "P0_7" does not
exist
Client 0 aborted in state 7: ERROR:  prepared statement "P0_7" does not
exist
Client 3 aborted in state 7: ERROR:  prepared statement "P0_7" does not
exist
Client 2 aborted in state 7: ERROR:  prepared statement "P0_7" does not
exist
transaction type: TPC-B (sort of)

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[BUGS] RESET ROLE issue (Unreproducible (so far))

2013-04-29 Thread Josh Berkus
Debuggers,

Having an interesting issue with RESET ROLE. So far it's defied my
ability to come up with a simpler test case so that I can diagnose the
issue.  Here's the situ:

1. plpgsql function create_data_tables() creates a bunch of tables.

2. as part of this, it uses SET ROLE to a role the calling user is a
member of ("dbowner").

3. create_data_tables then calls create_partitions() to partition the
newly created tables, in a loop.


4. create_partitions() also does SET ROLE dbowner, and RESET ROLE at the
end of the function.

5. When called this way, on roughly the 8th iteration of create_partitions(),
it hits a permissions error: "must be owner of table new_master_table"

6. If I remove the SET/RESET from the create_partitions function, it
works fine.
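
For reference, the shape of the thing is roughly this (a sketch only;
names other than dbowner and new_master_table are made up, and the real
DDL is far more involved):

create or replace function create_partitions(parent text) returns void as $$
begin
    set role dbowner;
    -- partition the newly created master table here
    reset role;
end;
$$ language plpgsql;

create or replace function create_data_tables() returns void as $$
declare t text;
begin
    set role dbowner;
    -- create the master tables here, then:
    for t in select unnest(array['new_master_table', 'other_master_table']) loop
        perform create_partitions(t);
    end loop;
end;
$$ language plpgsql;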

Anyway, I can't imagine how you could diagnose this without a valid test
case, and my attempts to construct a very simple reproducible one so
far have met with failure. So I'm filing this bug *in case* we see other
issues with repetitive RESET ROLE calls in the future.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[BUGS] Re: BUG #7969: Postgres Recovery Fatal With: "incorrect local pin count:2"

2013-03-27 Thread Josh Berkus
Folks,

So I'm a bit surprised that this bug report hasn't gotten a follow-up.
Does this sound like the known 9.2.2 corruption issue, or is it
potentially something else?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [BUGS] New kind of crash?

2013-03-21 Thread Josh Berkus
Alvaro,

BTW, we haven't been able to reproduce this crash deliberately, yet.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [BUGS] New kind of crash?

2013-03-09 Thread Josh Berkus
Mark,

I have PL/R and PL/v8 installed on that server (as well as a few other
extensions).  However, neither of those is invoked in the procedure
which caused the crash; it's straight PL/pgSQL.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [BUGS] New kind of crash?

2013-03-09 Thread Josh Berkus
On 03/08/2013 07:27 PM, Alvaro Herrera wrote:
> Josh Berkus wrote:
>> Folks,
>>
>> This is one I've never seen before:
>>
>> => select generate_master_tables();
>> WARNING:  AbortTransaction while in COMMIT state
>> PANIC:  cannot abort transaction 9387287, it was already committed
> 
> Anything that causes an ERROR in the final stages of a transaction
> commit will look like this.  Maybe, for example, something tried to
> acquire more shared memory for something (serializable xact?) but that
> was already full because of lock objects.
> 

Well, is it worth trying to reproduce and diagnose?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[BUGS] New kind of crash?

2013-03-08 Thread Josh Berkus
Folks,

This is one I've never seen before:

=> select generate_master_tables();
WARNING:  AbortTransaction while in COMMIT state
PANIC:  cannot abort transaction 9387287, it was already committed
PANIC:  cannot abort transaction 9387287, it was already committed
The connection to the server was lost. Attempting reset: Failed.
Time: 42259.753 ms
!>

Interestingly, Postgres came back up on its own after about a minute and
seems to be OK.  The stored procedure in question is one which creates
about 600 tables.  This actually exceeds max_locks_per_transaction, but
what happened above is not the normal failure for that.
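
For context, the shape of the function is roughly this (a sketch; the
real names and DDL are confidential):

create or replace function generate_master_tables() returns void as $$
begin
    for i in 1..600 loop
        execute 'create table master_table_' || i ||
                ' (id int primary key, payload text)';
    end loop;
end;
$$ language plpgsql;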

Note that the tables were, in fact, created, and as far as I can tell
there's no corruption of the database.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[BUGS] Issue with range types and casts?

2013-03-08 Thread Josh Berkus

select version();
                                version
------------------------------------------------------------------------
 PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit


create table tenmin as select * from sampledata where collect_ts <@
'[2013-01-01 00:00:00,2013-01-01 00:10:00)';
ERROR:  could not find range type for data type timestamp with time zone
Time: 0.189 ms

This seems like it ought to be fixable.  Postgres has figured out that
it needs to find the range type for timestamptz.  Why can't it?
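
For the record, casting the literal explicitly works around it (assuming
collect_ts is a timestamptz column):

create table tenmin as select * from sampledata where collect_ts <@
'[2013-01-01 00:00:00,2013-01-01 00:10:00)'::tstzrange;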

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[BUGS] Why does FK creation still take an access exclusive lock on the referenced table?

2013-02-07 Thread Josh Berkus
Folks,

I'd thought this was fixed with some of the ALTER changes in 9.2, but
apparently not (9.2.2):

SESSION1:

sampledata=# create table test1 ( id int not null primary key );
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"test1_pkey" for table "test1"
CREATE TABLE
Time: 55.224 ms
sampledata=# create table test2 ( id int not null primary key, test1 int );
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"test2_pkey" for table "test2"
CREATE TABLE
Time: 40.841 ms
sampledata=# insert into test1 select i from generate_series(1,10) as gs(i);
sampledata=# begin;
BEGIN
Time: 0.088 ms
sampledata=# alter table test2 add constraint test1_fk foreign key (
test1 ) references test1(id);
ALTER TABLE
Time: 2.185 ms
sampledata=#

SESSION2:

sampledata=# select * from test1;
... wait forever
^C

sampledata=# select locktype, mode, relname from pg_locks join pg_class
on relation=oid;
 locktype |        mode         |          relname
----------+---------------------+----------------------------
 relation | AccessShareLock     | test1_pkey
 relation | AccessShareLock     | test2_pkey
 relation | AccessShareLock     | pg_class_relname_nsp_index
 relation | AccessShareLock     | pg_class_oid_index
 relation | AccessShareLock     | pg_class
 relation | AccessShareLock     | pg_locks
 relation | AccessShareLock     | test2
 relation | AccessExclusiveLock | test2
 relation | AccessShareLock     | test1
 relation | RowShareLock        | test1
 relation | AccessExclusiveLock | test1

I understand why establishing an FK needs an ExclusiveLock on the
referenced table, but it doesn't need an AccessExclusiveLock.  This
causes lots of deployment issues for users.
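
For deployments that can't tolerate the blocking, one partial mitigation
worth testing (the syntax exists since 9.1; I haven't re-verified exactly
which locks it takes on 9.2) is to skip the validation scan at ADD time
and validate separately:

alter table test2 add constraint test1_fk foreign key (test1)
    references test1(id) not valid;
alter table test2 validate constraint test1_fk;

That at least avoids holding the strongest locks for the duration of a
full scan of the referencing table.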


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[BUGS] Arbitrary whitespace restrictions on range types

2012-12-18 Thread Josh Berkus
Jeff, Hackers:

Is there a strong reason why this has to error?

postgres=# select '[,]'::TSTZRANGE
postgres-# ;
 tstzrange
---
 (,)
(1 row)

postgres=# select '[, ]'::TSTZRANGE;
ERROR:  invalid input syntax for type timestamp with time zone: " "
LINE 1: select '[, ]'::TSTZRANGE
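
For comparison, the constructor form never has to parse the bounds at
all -- just an illustration:

postgres=# select tstzrange(NULL, NULL);
 tstzrange
-----------
 (,)

so it's only the text-input path that trips over the trailing space.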

--Josh

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [BUGS] log_collector doesn't respond to reloads

2012-04-30 Thread Josh Berkus

>> On the whole though, I think this is an invented problem.  We've never
>> heard a complaint from the field about it.
> 
> I think process title seems reasonable. We do that for archiver for
> example, to tell you where it's writing, don't we?

Yes, we do.

This isn't an invented problem; a brief canvass on IRC shows that people
run into issues with log_collector reloads fairly commonly.  However,
it's certainly pretty low priority -- it never rises above the level of
"annoyance".  It's the sort of thing it would be good to have a
bugtracker to put it in, so that the next time someone is working on the
log collector for other reasons they can tweak this too, or so that some
new hacker has an easy first patch.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] log_collector doesn't respond to reloads

2012-04-27 Thread Josh Berkus
> Well, sorry, but you can't have that.  SHOW will tell you what your own
> backend thinks the value of the GUC variable is, but there is no way to
> know what's happening inside the logging collector process.  And I'd be
> against trying to add a signaling mechanism that would support telling
> you that, because it would add fragility to the logging collector, which
> we don't want.

Hmmm.  There's really no feasible way to determine where the actual
logging collector is logging to?

> I don't buy that argument.  If things are working as intended, the
> collector ought to create a new file in the commanded location
> immediately.  I would think any normal DBA would look for that, just
> to check that the SIGHUP had worked.

You're assuming that the change is happening attended.  Consider
automated changes being deployed via puppet or chef, possibly across
many servers.

You can end up in a situation where the logs aren't going where they're
supposed to because of some external problem, and the DBA has no way to
find out what went wrong because he doesn't know where the logs are *now*.

Mind you, I don't have a solution to suggest, but I think it's a real
problem.

And fixing the rotation bug will help that a lot; it would narrow down
the problem to the current day/hour, which would make it much more
likely that someone would know that the log location had changed.

> BTW, what log messages were you getting exactly?  I'd have expected
> something about "could not open log file" as well as the "disabling
> automatic rotation" one.

2012-04-26 16:38:21 PDT [10180]: [2-1] user=,db= LOG:  received SIGHUP,
reloading configuration files
2012-04-26 16:38:21 PDT [10181]: [1-1] user=,db= LOG:  could not open
log file "/pglogs/check/logs/datacollection-2012-04-26-16-38": No such
file or directory
2012-04-26 16:38:21 PDT [10181]: [2-1] user=,db= LOG:  disabling
automatic rotation (use SIGHUP to re-enable)

So, yes, exactly.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] log_collector doesn't respond to reloads

2012-04-26 Thread Josh Berkus

> Whoever wrote that thought that Log_RotationAge/Log_RotationSize would
> get reset to normal values during SIGHUP, but it's far from clear to me
> that any such thing would actually happen.  However, this would only
> apply to Josh's problem if he was trying to set a bogus new value of
> log_directory, eg not there or not writable by postgres.  In any case,
> if this is the locus of the problem, there ought to be instances of that
> log message in the active log file.  (Josh?)

Aha.

Yeah, the problem is that, directory permissions notwithstanding, something
is preventing the logger from writing to that directory even though I
can do so as the logged-in postgres user.  I'll bet it's their SAN
dynamic mounter thing, given the trouble it's been before. *sigh*.

So this is an issue peculiar to their system, and not a general case.
Sorry for the noise.  I've seen transitory behavior like this before and
had hoped that I'd found a reproducible test case.  But no.

We do have one piece of unintuitive behavior here though, which forms a
bit of a catch-22:

1. DBA changes the log directory
2. Log directory is unwriteable
3. Postgres continues writing to the old log_directory
4. SHOW log_directory displays the *new* log_directory

I think (4) here needs to change.  We shouldn't be showing a different
log directory in pg_settings than we're actually writing to, ever.

Now, here's the Catch-22:

Consider the case that time elapses before anyone discovers that logs
are not being written to the new location, and you change personnel; how
would the new DBA have any idea where the old log was so that he could
read the log message about the unwriteable directory?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] log_collector doesn't respond to reloads

2012-04-26 Thread Josh Berkus

> Do you want to try attaching to the collector with a debugger and seeing
> if it ever gets into the "if (got_SIGHUP)" block in SysLoggerMain?

Hmmm.  No debugger on this system, not unexpectedly.  I'll see if I can
get authorization to add one.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] log_collector doesn't respond to reloads

2012-04-26 Thread Josh Berkus
On 4/26/12 5:50 PM, Tom Lane wrote:
> Josh Berkus  writes:
>> Summary: despite pg_reload(), log directory, filename and destination
>> don't change
> 
> Looking at the code, it's really hard to see how this could possibly
> happen, unless maybe the process is blocking receipt of SIGHUP.  Which
> it shouldn't be.  Not sure about RHEL5, but on recent Linuxen you can
> check the process's signal masks like this:
> 
>   grep ^Sig /proc/<PID>/status
> 
> where <PID> is the logging collector's PID.  Could we see that?

SigQ:   0/399360
SigPnd: 
SigBlk: 0000
SigIgn: 01007806
SigCgt: 0201

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



[BUGS] log_collector doesn't respond to reloads

2012-04-26 Thread Josh Berkus
Summary: despite pg_reload(), log directory, filename and destination
don't change

Version: 9.1.3
Platform: RHEL5, custom compile
Severity: Persistent annoyance

Description:

1) change log_directory in postgresql.conf
2) pg_reload_conf()
3) show log_directory displays the correct new directory
4) logging continues to the old directory

Given the custom compile, I'd chalk this up to wonky platform issues,
except that I've seen this problem on other systems, intermittently.
This particular system has the issue in 100% reproducible fashion, so
I'd like suggestions on how to get to the bottom of it.

I also tried sending SIGHUP directly to the logger process, with no effect.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] 291 pg_toast_temp schemas?

2012-04-26 Thread Josh Berkus
On 4/26/12 3:48 PM, Tom Lane wrote:
> Josh Berkus  writes:
>> Also, have we discussed maybe hiding these schemas from \dn?
> 
> We've done more than discuss it:
> http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=e43fb604d6db229d70d3101aa53348cc16a5473a
> 
> I take it you're using something older than 9.1.

Yep, 9.0.7 per bug.

Thanks!

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] 291 pg_toast_temp schemas?

2012-04-26 Thread Josh Berkus
On 4/26/12 11:22 AM, Tom Lane wrote:
> Josh Berkus  writes:
>> summary: database has 291 empty pg_toast_temp schemas.
> 
> If your max_connections is 300 or more, this isn't surprising in the
> least.

Yes, they are.

>> ... so, apparently we still have an issue with cleaning up pg_toast_temp
>> schema?
> 
> If they are empty, no we don't have a problem with cleaning them up.
> The pg_namespace entries are deliberately left there across sessions,
> to avoid useless catalog churn.

Aha, ok.  Should I add a FAQ item about this?

Also, have we discussed maybe hiding these schemas from \dn?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



[BUGS] 291 pg_toast_temp schemas?

2012-04-26 Thread Josh Berkus
summary: database has 291 empty pg_toast_temp schemas.
severity: head-scratcher
version: 9.0.7
platform: RHEL6

description:

* medium-large production database (300GB)
* does data collection and analytics
* lately has been having chronic lock-blocking issues
* runs many daily batch jobs which involve the creation of many temp
tables.

Today, while investigating the lock-blocking issues, I noticed that the
db has pg_toast_temp_1 through pg_toast_temp_291.  All of these schemas
are completely empty:

breakpad=# select schemaname, relname from pg_stat_all_tables where
schemaname LIKE 'pg_toast_temp_%';
 schemaname | relname
------------+---------

... so, apparently we still have an issue with cleaning up pg_toast_temp
schemas?
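
For reference, a quick way to count the leftover namespaces themselves
(just a sketch):

breakpad=# select count(*) from pg_namespace where nspname ~ '^pg_toast_temp_';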

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] wCTE cannot be used to update parent inheritance table

2012-01-29 Thread Josh Berkus

>>> What I think we want to distinguish between is things that are
>>> PEBKAC/GIGO, and everything else.  In other words, if a particular
>>> error message can be caused by typing something stupid, unexpected,
>>> erroneous, or whatever into psql, it's just an error.  But if no
>>> input, however misguided, should ever cause that symptom, then it's, I
>>> don't know what the terminology should be, say, a "severe error".
>>
>> +1
> 
> I'm strongly in favour of this.

This is *so* not a discussion to have on the pgsql-bugs list.  Please
take it to -hackers.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] wCTE cannot be used to update parent inheritance table

2012-01-29 Thread Josh Berkus

> This is the kind of thing that could go unnoticed for a long time,
> simply because it is not highlighted any more prominently than a
> routine error message like an integrity constraint violation. I
> continue to maintain that we should have a new severity level for this
> sort of thing.

Huh?  I don't follow you at all Peter.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] wCTE cannot be used to update parent inheritance table

2012-01-29 Thread Josh Berkus
On 1/28/12 5:27 PM, Tom Lane wrote:
> Josh Berkus  writes:
>> SUMMARY: if you attempt to UPDATE or DELETE FROM a parent table in an
>> inheritance relationship using a wCTE, you get the following error message:
>>  ERROR:  could not find plan for CTE
> 
> Fixed, thanks for the report.

Should we add a regression test for this?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



[BUGS] wCTE cannot be used to update parent inheritance table

2012-01-25 Thread Josh Berkus
SEVERITY: normal

TYPE: SQL feature

VERSION TESTED: 9.1.2

PLATFORM: Ubuntu Linux, installed from apt-get

REPRODUCIBLE: 100%

SUMMARY: if you attempt to UPDATE or DELETE FROM a parent table in an
inheritance relationship using a wCTE, you get the following error message:

ERROR:  could not find plan for CTE

This does not happen with INSERTs, child tables or UPDATE ONLY.

STEPS TO REPRODUCE:

create table parent ( id int, val text );

create table child1 ( constraint child1_part check ( id between 1 and 5
) ) inherits ( parent );

create table child2 ( constraint child2_part check ( id between 6 and 10
) ) inherits ( parent );

create table other_table ( whichtab text, totals int );

postgres=# insert into child1 values ( 1, 'one' ),( 2, 'two' );
INSERT 0 2
postgres=# insert into child2 values ( 6, 'six' ),( 7, 'seven' );
INSERT 0 2
postgres=# with wcte as ( select sum(id) as totalid from parent ) insert
into other_table select 'parent', totalid from wcte;
INSERT 0 1
postgres=# with wcte as ( select sum(id) as totalid from child1 ) insert
into other_table select 'parent', totalid from wcte;
INSERT 0 1
postgres=# with wcte as ( select whichtab from other_table ) update
child1 set val = whichtab from other_table;
UPDATE 2
postgres=# with wcte as ( select whichtab from other_table ) update
parent set val = whichtab from other_table;
UPDATE 4
postgres=# with wcte as ( select whichtab from other_table ) update
child1 set val = whichtab from wcte;
UPDATE 2


postgres=# with wcte as ( select whichtab from other_table ) update
parent set val = whichtab from wcte;
ERROR:  could not find plan for CTE "wcte"


postgres=# with wcte as ( select whichtab from other_table ) update only
parent set val = whichtab from wcte;
UPDATE 0
postgres=# update parent set val = 'parent';
UPDATE 4
postgres=# with wcte as ( select whichtab from other_table ) insert into
parent select 11, whichtab from other_table;
INSERT 0 2


postgres=# with wcte as ( select whichtab from other_table ) delete from
parent using wcte where val = whichtab;
ERROR:  could not find plan for CTE "wcte"

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] Cannot dump 8.4.8 database using later versions

2011-11-16 Thread Josh Berkus

> The least painful solution might be to always quote *every* identifier
> in commands sent to the source server, since we don't especially care
> how nice-looking those are.

I've never been clear on why we don't do that in the first place.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] Cannot dump 8.4.8 database using later versions

2011-11-14 Thread Josh Berkus

>> ... somehow the query to dump the sequences is getting mangled.  Any
>> clue how?
> 
> Seems you have a sequence called "new";  seems we don't handle that
> well.

So, tested this some more.  For some reason, 8.4's pg_dump would
recognize NEW as a reserved word and quote it before dumping.  9.0 and
later pg_dump does not.  Any ideas?
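
For what it's worth, the parser behavior underneath is easy to see
directly; a sketch against an 8.4-era server, where NEW still gets the
rule-keyword treatment:

create sequence "new";
select last_value from new;    -- ERROR:  NEW used in query that is not in a rule
select last_value from "new";  -- fine: quoting forces a plain identifier

So the question is really why the newer pg_dump stopped quoting it.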

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] Cannot dump 8.4.8 database using later versions

2011-11-14 Thread Josh Berkus

> Seems you have a sequence called "new";  seems we don't handle that
> well.

Hmmm ... yes, you're correct.  Idiot users.

Interestingly, the sequence is no problem until 9.0.  8.4 handled it
fine.  I'd guess this is another example of where merging in plpgsql
broke something.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



[BUGS] Cannot dump 8.4.8 database using later versions

2011-11-14 Thread Josh Berkus
Severity: normal

Versions tested: 9.0.4 and 9.1.1
Installed from packages on Ubuntu

Reproducibility: 100% on this server
have not tried a test case yet

Steps:

1. large running 8.4.8 PostgreSQL database

2. /usr/lib/postgresql/9.1/bin/pg_dump -Fc -v -f test.dump dbname

3. dumps all table etc. definitions correctly, then:

pg_dump: saving database definition
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  NEW used in query that is
not in a rule
LINE 1: ...END AS min_value, cache_value, is_cycled, is_called from new
^
pg_dump: The command was: SELECT sequence_name, start_value, last_value,
increment_by, CASE WHEN increment_by > 0 AND max_value =
9223372036854775807 THEN NULL  WHEN increment_by < 0 AND max_value =
-1 THEN NULL  ELSE max_value END AS max_value, CASE WHEN
increment_by > 0 AND min_value = 1 THEN NULL  WHEN increment_by < 0
AND min_value = -9223372036854775807 THEN NULL  ELSE min_value END
AS min_value, cache_value, is_cycled, is_called from new

(yes, that is the entire query, I checked the logs)

... somehow the query to dump the sequences is getting mangled.  Any
clue how?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] BUG #5944: COPY FROM doesn't work with international characters

2011-10-26 Thread Josh Berkus
On 10/19/11 5:41 AM, Robert Haas wrote:
> I don't think so.  It's not really clear to me what the issue is.  Is
> there some confusion between the encoding of the file and the encoding
> in use in the client session?  It's odd that we would have a bug that
> only affects win1252.

I think it's quite possible that this is something broken in the win1252
encoding itself.  I've seen a lot of reports online for errors from
other software.  However, we need to at least find a workaround for
users if we can't fix it ...

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] psql's \copy incompatible with :variables

2011-10-14 Thread Josh Berkus

> \copy is different because it uses OT_WHOLE_LINE mode to read the
> argument, and that doesn't expand :variable references.  I'd be a bit
> leery of changing that.

So, doc warning then?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] Broken selectivity with special inet operators

2011-09-21 Thread Josh Berkus

> It's possible that we could build simple estimators for these operators
> that just turn the problem into a range estimation and then pass it off
> to somewhere else, but nobody has tried.

Right, that's why I'm asking.  I'd like to reuse code ...

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] Broken selectivity with special inet operators

2011-09-21 Thread Josh Berkus

> If you mean the indexscan optimization, we do know how to estimate the
> cost of the indexscans, because that depends mostly on the behavior of
> the added < or > condition(s).  This does not imply knowing how to
> estimate the behavior of >>= itself.

If we can estimate the cost of the indexscan, why can't we estimate the
rowcount?  Sorry if I'm being dense here, but I really don't understand
how the special operator code works at all.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] Broken selectivity with special inet operators

2011-09-21 Thread Josh Berkus
On 9/21/11 1:56 PM, Tom Lane wrote:
> Josh Berkus  writes:
>> Summary: special inet operators ( << >> <<= >>= ) are
>>  up to 100X off in estimating rowcounts
> 
> A look in pg_operator will show you that these operators have no
> associated selectivity estimators at all.  It's not so much "broken"
> as "unimplemented".

Oh!  I was assuming that the special case code kicked in regardless.

So we implemented the rewrite to < and > for the actual execution, but
not for cost estimates?
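
For anyone following along, the gap is visible right in the catalog; a
quick sketch:

select oprname, oprleft::regtype, oprright::regtype, oprrest, oprjoin
from pg_operator
where oprname in ('<<', '>>', '<<=', '>>=') and oprleft = 'inet'::regtype;

oprrest/oprjoin are what the planner calls for restriction and join
selectivity, and they're unset for these operators.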

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



[BUGS] Broken selectivity with special inet operators

2011-09-21 Thread Josh Berkus
Summary: special inet operators ( << >> <<= >>= ) are
up to 100X off in estimating rowcounts
Type: performance
Severity: normal
Tested on: 9.1.0
Description:

We've been noticing that row estimates for queries which use the >>= and
<<= operators for inet data were way, way off.  We finally narrowed the
problem down to a simple test:

===
USING <<= :
===

explain analyze
SELECT count(*)
FROM partition1 lh
WHERE lh.ip <<= '1.2.3'::cidr;


QUERY PLAN
.
   ->  Index Scan using partition1_ip on partition1 lh
(cost=0.00..10.21 rows=6956732 width=0)
(actual time=0.016..0.016 rows=0 loops=1)
 Index Cond: ((ip >= '1.2.3.0/24'::inet) AND (ip <=
'1.2.3.255'::inet))
 Filter: (ip <<= '1.2.3.0/24'::inet)
.

explain analyze
SELECT count(*)
FROM partition2 WHERE 1=1 AND ip <<= '87.178.193.0/24'::inet;

   QUERY PLAN
 Aggregate  (cost=18296.78..18296.79 rows=1 width=0) (actual
time=0.037..0.038 rows=1 loops=1)
   ->  Index Scan using partition2_ip on partition2  (cost=0.00..38.36
rows=7303365 width=0) (actual ti
me=0.022..0.031 rows=5 loops=1)
 Index Cond: ((ip >= '87.178.193.0/24'::inet) AND (ip <=
'87.178.193.255'::inet))
 Filter: (ip <<= '87.178.193.0/24'::inet)
 Total runtime: 0.107 ms


USING < > :


explain analyze
SELECT count(*)
FROM partition1 lh
WHERE lh.ip >= '1.2.3.0/24'::inet and lh.ip <= '1.2.3.255'::inet;

QUERY PLAN

   ->  Index Scan using partition1_ip on partition1 lh
(cost=0.00..10.22 rows=1 width=0)
(actual time=0.016..0.016 rows=0 loops=1)
 Index Cond: ((ip >= '1.2.3.0/24'::inet) AND (ip <=
'1.2.3.255'::inet))


explain analyze
SELECT count(*)
FROM partition2 WHERE 1=1 AND ip > '87.178.193.0'::inet and ip <=
'87.178.193.255'::inet;

QUERY PLAN

 Aggregate  (cost=26.34..26.35 rows=1 width=0) (actual time=0.033..0.033
rows=1 loops=1)
   ->  Index Scan using partition2_ip on partition2  (cost=0.00..26.33
rows=5 width=0) (actual time=0.0
19..0.029 rows=5 loops=1)
 Index Cond: ((ip > '87.178.193.0'::inet) AND (ip <=
'87.178.193.255'::inet))
 Total runtime: 0.097 ms



Note that the mis-estimate of rows returned in each case is almost
exactly 50% of the total rows in the table.  That would suggest that
match_special_index_operator is failing, and not recognizing the <<=
operator for estimation purposes and just going with a default estimate
of 0.5.

I've tried to locate the cause of this problem, but the code involved is
rather convoluted and crusty, and I can't follow the logic.  Help?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



[BUGS] psql's \copy incompatible with :variables

2011-06-07 Thread Josh Berkus
Summary: \copy interprets psql's :variables as literal strings

Tested On: 8.4.4, 9.0.4

Severity: Annoyance

Steps to Reproduce:

psql

\set filename 'test.csv'

\copy pg_class to :filename with csv

\q

ls

:filename

Note that psql variables work perfectly fine with COPY.  It's just \copy
which seems to be misbehaving.
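
One workaround sketch in the meantime (assuming a psql new enough to
support the :'variable' quoting syntax, and that a server-side COPY by a
superuser is acceptable):

\set filename '/tmp/test.csv'
copy pg_class to :'filename' with csv;

i.e., lean on COPY, where interpolation works, instead of \copy.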

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [BUGS] BUG #5944: COPY FROM doesn't work with international characters

2011-03-24 Thread Josh Berkus
I have another example of this issue with WIN1252.  This line in a copy
file:

659446  828 1   /6�\bH@^W^Za$H�\b�@\\/No
valid or unique HTTP objects found in XML response.

Into this table:

      Table "public.ep_tests"
 Column  |     Type     | Modifiers
---------+--------------+-----------
 id      | bigint       | not null
 v_id    | integer      | not null
 status  | character(1) | not null
 vkey    | text         |
 details | text         |

Results in this error while loading the data:

the following error is encountered by the \copy: ERROR:  missing data for
column "details"
CONTEXT:  COPY ep_tests, line 1028752: "659446  828 1   /6�\bH@↨"

The dump file was produced by using 9.0.3's pg_dump in text mode to dump
an 8.2 database, then using 9.0.3's psql to load the file.  Both servers
are UTF8, locale WIN1252.

So it looks like we're not successfully escaping characters on WIN1252.
 The characters in question are also latin characters.

We've reproduced this on a clean install.


-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



[BUGS] psql crashes with bad history file in 9.0.3

2011-03-22 Thread Josh Berkus
All,

At two sites I'm not allowed to share data from (grrr), I've encountered
core dumps of psql.  In one case the core dump happens as soon as you
run any query.   In the second case, it occurs when you exit.

What both sites have in common is that there's something screwy about
the .psql_history file.  In the first case, moving the history file
fixed the issue.  In the second case, the shell user has messed up
permissions on their home dir and can't save the history file.

I'd never encountered this before 9.0.3.  Both sites are RHEL.

Hopefully I'll run into it somewhere I can do a stack trace ...

-- 
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] Mismapping of Mountain Time

2011-03-03 Thread Josh Berkus

> It's not all that separate: per the Olsen database,
> 
> Link  America/Denver  US/Mountain
> Link  America/Denver  Navajo
> 
> Those are all aliases for the exact same timezone behavior, and PG
> doesn't have any good way to choose which one you think is preferred.
> It looks like it chooses the shortest name ... so in this case you'll
> get Navajo.  If you don't like that, set the timezone setting
> explicitly or via the TZ environment variable for the postmaster.

What's more, my concern is that Ubuntu, Debian and Red Hat do not set
$TZ, so we'll get this kind of behavior on most Linux systems with a
default install of PostgreSQL.  Since it's confusing to users (and will
result in other such bug reports and/or complaints), it would be nice to
pick time zones in a way that is more likely to result in unsurprising
values of SHOW TimeZone.
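
For anyone hitting this in the meantime, pinning the zone explicitly
avoids the alias lottery; a sketch, zone name as an example:

# postgresql.conf
timezone = 'US/Mountain'

or set TZ in the postmaster's environment before starting it.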

(This issue was reported by a customer as a bug to us)

I'll give some thought as to how we could do so, and maybe add it to the
TODO list.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] Mismapping of Mountain Time

2011-03-03 Thread Josh Berkus
On 3/3/11 2:31 PM, Josh Berkus wrote:
> uname -a
> Linux hemingway 2.6.32-25-server #44-Ubuntu SMP Fri Sep 17 21:13:39 UTC
> 2010 x86_64 GNU/Linux
> 
> date
> Thu Mar  3 15:30:17 MST 2011

Also:

echo $TZ returns nothing.  We've checked several Ubuntu systems, and it
seems that Ubuntu does not set $TZ.

--Josh Berkus



[BUGS] Mismapping of Mountain Time

2011-03-03 Thread Josh Berkus

uname -a
Linux hemingway 2.6.32-25-server #44-Ubuntu SMP Fri Sep 17 21:13:39 UTC
2010 x86_64 GNU/Linux

date
Thu Mar  3 15:30:17 MST 2011

ls -l localtime
lrwxrwxrwx 1 root root 31 2010-10-18 08:20 localtime ->
/usr/share/zoneinfo/US/Mountain

postgres=# select * from pg_settings where name = 'TimeZone';
-[ RECORD 1 ]---
name   | TimeZone
setting| Navajo
unit   |
category   | Client Connection Defaults / Locale and Formatting
short_desc | Sets the time zone for displaying and interpreting time stamps.
extra_desc |
context| user
vartype| string
source | command line
min_val|
max_val|
enumvals   |
boot_val   | UNKNOWN
reset_val  | Navajo
sourcefile |
sourceline |

Version 8.4.7.

There is actually a time zone "Navajo", which is a *separate* time zone
from US/Mountain.  Ideas on how this happened?

-- 
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] Multicolumn index creation never completes on 9.0.1/solaris

2011-01-26 Thread Josh Berkus
On 1/26/11 12:13 PM, Tom Lane wrote:
> Josh Berkus  writes:
>> Oh!  Actually, it only *did* 27 runs.  So it actually completed building
>> the index.  I'd expected trace_sort to give me some kind of completion
>> message; apologies for not checking all screen windows.
> 
> Huh, there should be an "external sort ended" message, as well as some
> other ones once it finishes writing the initial runs.

Oh, fooled by log rotation:

LOG:  finished writing run 83 to tape 82: CPU 2600.37s/19449.04u sec
elapsed 22402.36 sec

LOG:  finished writing run 84 to tape 83: CPU 2630.31s/19674.93u sec
elapsed 22659.40 sec

LOG:  finished writing run 85 to tape 84: CPU 2658.46s/19905.44u sec
elapsed 22919.25 sec

LOG:  performsort starting: CPU 2686.35s/20121.46u sec elapsed 23164.61 sec

LOG:  finished writing run 86 to tape 85: CPU 2686.48s/20127.46u sec
elapsed 23170.74 sec

LOG:  finished writing final run 87 to tape 86: CPU 2688.70s/20220.72u
sec elapsed 23266.35 sec

LOG:  performsort done (except 87-way final merge): CPU
2691.55s/20224.35u sec elapsed 23277.44 sec

LOG:  external sort ended, 7991989 disk blocks used: CPU
3237.80s/27221.09u sec elapsed 31382.50 sec

So, a total of almost 9 hours.  Clearly there's something wrong here; if
I have time I'll try a recompile with GCC and see how it does.
Collation is 'C'.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] Multicolumn index creation never completes on 9.0.1/solaris

2011-01-26 Thread Josh Berkus
Tom,

> [ raised eyebrow... ]  Those numbers are the differences between two
> gettimeofday() readings.  It's really really hard to believe that that's
> wrong, unless there's something seriously wrong with your machine.

Actually, you're right ... I don't know what time the run27 message was
posted.  It's possible that it finished run27 at 9pm last night ...

Oh!  Actually, it only *did* 27 runs.  So it actually completed building
the index.  I'd expected trace_sort to give me some kind of completion
message; apologies for not checking all screen windows.

So, why didn't the index build complete (after more than 36 hours) when
I ran it as part of pg_restore?  I guess this goes back to being a
pg_restore problem and not an index build problem.

> FWIW, I did a test last night on the time to sort some random
> (varchar, timestamptz) data using git HEAD.  I didn't have enough disk
> space to sort 1.4 billion rows, but I tested with 200 million rows and
> 1GB maintenance_work_mem, and was able to create an index in 2424
> seconds (~40 minutes) --- this on a pretty generic desktop machine.
> I don't see a reason to think the runtime for 1.4 billion would have
> been over 5 hours.  The log output for my test looked like

2+ hours is then still very slow considering the machine I'm on compared
to yours.   I wonder if we maybe should be using GCC instead of SunCC.

Oh, that's why: it's a SPARC processor.  Slowness explained then.

> Your machine seems to be dumping a run about once every 250-300 seconds,
> which is about half the speed of mine, which is a bit odd if it's big
> iron.  (I wonder whether you have a non-C locale selected ...) 

See above.

So, as usual, I've completely mislocated the bug.  I'll need to rerun
the pg_restore and actually diagnose *that*.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] Multicolumn index creation never completes on 9.0.1/solaris

2011-01-26 Thread Josh Berkus

> Your logic has nothing to do with what is actually happening.  Could we
> have a little bit more patience to see what happens next?

So, after 15 hours:

LOG:  finished writing run 20 to tape 19: CPU 588.86s/4484.35u sec
elapsed 5160.97 sec
STATEMENT:  create index "write_log_accounttime_idx" on write_log
(account_id, event_time);
LOG:  finished writing run 21 to tape 20: CPU 620.46s/4712.51u sec
elapsed 5435.63 sec
STATEMENT:  create index "write_log_accounttime_idx" on write_log
(account_id, event_time);
LOG:  finished writing run 22 to tape 21: CPU 654.01s/4953.74u sec
elapsed 5726.87 sec
STATEMENT:  create index "write_log_accounttime_idx" on write_log
(account_id, event_time);
LOG:  finished writing run 23 to tape 22: CPU 688.84s/5204.24u sec
elapsed 6029.41 sec
STATEMENT:  create index "write_log_accounttime_idx" on write_log
(account_id, event_time);
LOG:  finished writing run 24 to tape 23: CPU 721.71s/5442.40u sec
elapsed 6316.64 sec
STATEMENT:  create index "write_log_accounttime_idx" on write_log
(account_id, event_time);
LOG:  finished writing run 25 to tape 24: CPU 757.74s/5708.86u sec
elapsed 6636.64 sec
STATEMENT:  create index "write_log_accounttime_idx" on write_log
(account_id, event_time);
LOG:  finished writing run 26 to tape 25: CPU 790.87s/5946.83u sec
elapsed 6924.41 sec
STATEMENT:  create index "write_log_accounttime_idx" on write_log
(account_id, event_time);
LOG:  finished writing run 27 to tape 26: CPU 823.64s/6186.06u sec
elapsed 7212.11 sec
STATEMENT:  create index "write_log_accounttime_idx" on write_log
(account_id, event_time);

... I'll point out that the elapsed times which trace_sort is giving me
are clearly not clock times; I started this index run at 7pm PST
yesterday and it's been 15 hours, not 2 as the elapsed time would suggest.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] Multicolumn index creation never completes on 9.0.1/solaris

2011-01-25 Thread Josh Berkus

>> If it's going to take 3 minutes each to write each of 3745 tapes, that
>> means completing in around 9 days.
> 
> Your logic has nothing to do with what is actually happening.  Could we
> have a little bit more patience to see what happens next?

OK, I'll let it run overnight and give you the trace output.


> (Memo to -hackers: what we lack in this trace is any indication of how
> long the runs are.  Maybe should add that.  Knuth claims the initial
> runs should amount to about 2X maintenance_work_mem on average, but
> maybe there's something unusual about the data distribution here.)

Well, for the straight test on event_time, the rows in the table should
have been more-or-less physically already in order.  This table is
most-insert, and event_time mostly corresponds to current server time.

For the composite index, rows would be more randomly distributed.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] Multicolumn index creation never completes on 9.0.1/solaris

2011-01-25 Thread Josh Berkus
On 1/25/11 1:21 PM, Tom Lane wrote:
> Josh Berkus  writes:
>> Note: I have this running now on a test box.  If someone responds in the
>> next couple hours, I can run whatever diagnostics you want on it.
>> Otherwise I'll kill it off and start over with debug logging turned on.
> 
> trace_sort would be interesting.

OK, either tape sort is broken or trace_sort is.  The larger my
maint_work_mem, the MORE tapes it wants to use:

LOG:  begin index sort: unique = f, workMem = 4194304, randomAccess = f
LOG:  switching to external sort with 14980 tapes: CPU 33.33s/24.23u sec
elapsed 58.86 sec

LOG:  begin index sort: unique = f, workMem = 131072, randomAccess = f
LOG:  switching to external sort with 469 tapes: CPU 1.87s/1.48u sec
elapsed 3.47 sec


-- 
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] Multicolumn index creation never completes on 9.0.1/solaris

2011-01-25 Thread Josh Berkus

> trace_sort would be interesting.

This is it so far:

LOG:  begin index sort: unique = f, workMem = 1048576, randomAccess = f
STATEMENT:  create index "write_log_accounttime_idx" on write_log
(account_id, event_time);
LOG:  switching to external sort with 3745 tapes: CPU 9.06s/6.65u sec
elapsed 21.68 sec
STATEMENT:  create index "write_log_accounttime_idx" on write_log
(account_id, event_time);
LOG:  finished writing run 1 to tape 0: CPU 33.39s/149.02u sec elapsed
190.11 sec

LOG:  finished writing run 2 to tape 1: CPU 62.72s/371.06u sec elapsed
443.16 sec

LOG:  finished writing run 3 to tape 2: CPU 91.04s/599.43u sec elapsed
701.37 sec

LOG:  finished writing run 4 to tape 3: CPU 120.95s/823.59u sec elapsed
956.67 sec

If it's going to take 3 minutes each to write each of 3745 tapes, that
means completing in around 9 days.

I wanted to see what this looks like with a single-column index, so I
did one on event_time, which was even *worse*:

LOG:  begin index sort: unique = f, workMem = 1048576, randomAccess = f
STATEMENT:  create index "write_log_time_idx" on write_log (event_time);

LOG:  switching to external sort with 3745 tapes: CPU 14.45s/10.87u sec
elapsed 26.19 sec

LOG:  finished writing run 1 to tape 0: CPU 135.32s/302.18u sec elapsed
447.44 sec

... run 2 didn't complete in even 1/2 hour.

So the problem isn't multicolumn indexes, it's indexes on a table this
large in general (a VARCHAR index wasn't much faster to build).  I'd
guess that the PK index finished only because the table might have been
in that rough physical order.

For whatever reason, tape sort is being extra slow on this build on
Solaris10; did we change anything between 8.4 and 9.0?  Or is this
possibly the build I used?

I'll try to run the same thing with trace_sort on an 8.4.4 copy of the database.

Oh, FWIW, the rough number of rows in the table:
1 486 530 000

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] Multicolumn index creation never completes on 9.0.1/solaris

2011-01-25 Thread Josh Berkus
On 1/25/11 1:21 PM, Tom Lane wrote:
> Josh Berkus  writes:
>> Note: I have this running now on a test box.  If someone responds in the
>> next couple hours, I can run whatever diagnostics you want on it.
>> Otherwise I'll kill it off and start over with debug logging turned on.
> 
> trace_sort would be interesting.

OK, I'll need to kill it and restart.  Will go ahead.
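
For the record, the knobs for the retry are roughly these (a sketch;
client_min_messages is only there so the trace also shows up in the
session):

set trace_sort = on;
set client_min_messages = log;
set maintenance_work_mem = '1GB';
create index "write_log_accounttime_idx" on write_log (account_id, event_time);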

-- 
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



[BUGS] Multicolumn index creation never completes on 9.0.1/solaris

2011-01-25 Thread Josh Berkus
Note: I have this running now on a test box.  If someone responds in the
next couple hours, I can run whatever diagnostics you want on it.
Otherwise I'll kill it off and start over with debug logging turned on.

Version: 9.0.1
Platform: Solaris 10u8 / ZFS over DAS 7-drive array
Severity: Serious
Reproducible? Always, on this machine
Description:

I've been running tests for a client on their old production machine.
Part of the test setup involves restoring a 70GB table (part of a 400GB
database).  Restoring this table always fails to complete (we terminate
it after 3 days).

Watching it closer this time, I've noticed that it's hanging on the
creation of a multicolumn index*:

"write_log_accounttime_idx" btree (account_id, event_time)

Table def*:

                  Table "public.write_log"
    Column    |           Type           |       Modifiers
--------------+--------------------------+------------------------
 log_id       | character varying        | not null
 account_id   | character varying        |
 write_id     | character varying        |
 event_time   | timestamp with time zone | not null default now()
 event_loc    | character varying(1024)  |
 source       | character varying(1024)  |
 detail       | character varying(1024)  |
 source_ip    | inet                     |
 event_length | integer                  |
 keyword      | character varying        |

account_id is usually a 32-character hex string (a UUID).

Trying to restore this index ... or, indeed, do *anything* else with it,
including ANALYZE or REINDEX ... takes longer than we're willing to
monitor (I once waited 30 hours).  (I was able to test ANALYZE/REINDEX
because the first time around I converted the database using pg_upgrade.)

Interestingly, on the production database (where the index was
successfully created on 8.4.4), that index has no entries in the stats
table, i.e. pg_stat_user_indexes does not have a row for that index.
However, this database *did* restore on the same platform in 8.4.4.

The PK for this table restores successfully.  I haven't been able to
test other single-column indexes.

Watching the process of creating this index, it's certainly busy:

read(224, " o02\0\0E0C9 ]D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 P\f ^D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 P88E7D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\010 . ^D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0B0 r ^D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\010 q ^D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0C8F9 ^D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 hBE aD101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0C8D6 ^D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 `F8 ^D101\0\0\0".., 8192)   = 8192
write(219, " e\0\0\0\0\0\0\08082D4 V".., 8192)  = 8192
read(224, " o02\0\08085 xD101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0E8 ] _D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 ( _ _D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0B080 _D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0F8A2 _D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 xB5EAD101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 @E4 _D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0  \b `D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\090 ) `D101\0\0\0".., 8192)   = 8192
write(219, "\0 OA197E1 !01\0 ,\0\0\0".., 8192)  = 8192
read(224, " o02\0\0B0 K `D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\01002 bD101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0188E `D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 HB0 `D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 HD2 `D101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 H U\vD201\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 x14 aD101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0C8B9 aD101\0\0\0".., 8192)   = 8192
read(224, " o02\0\010BB aD101\0\0\0".., 8192)   = 8192
read(224, " o02\0\0 h v aD101\0\0\0".., 8192)   = 8192
write(219, " ,\0\0\0F4\0E8 *05\0 ( @".., 8192)  = 8192

This is also the kind of activity I'd expect to see for an index build.
 But ... it goes on forever.  The current index build run has been going
for almost 100 hours.

More detail:
maintenance_work_mem = 1GB (out of 32GB RAM available)
9.0.1 was built from source, using Sun CC
for the current test, I'm running with fsync off
(but I've had the same issue with fsync on)

Ideas?

(* object names have been changed to preserve confidentiality)

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] What happened to SSL_CIPHERS?

2010-12-15 Thread Josh Berkus
On 12/15/10 4:58 AM, Magnus Hagander wrote:
>> > In any case, a doc patch would be the right thing for the back branches.
> 
> I can look at this too (yes, I know we just wrapped, but I'm working
> down the backlog :S). You mean something as simple as "this parameter
> is unavailable if the server was not compiled with support for SSL"?

Exactly.  If you don't get around to it, bug me in January.

-- 
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [BUGS] What happened to SSL_CIPHERS?

2010-10-29 Thread Josh Berkus



>> So, did ssl_ciphers go away on purpose?  If so, why?  If not, why isn't it
>> accessible?
> 
> Are you sure you're on an SSL enabled build?


Oh, good call ... I thought I'd installed the SSL build, but apparently not.

Mind you, we *also* need a doc patch. ("This parameter is only available 
if PostgreSQL was built with SSL support").



--
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



[BUGS] What happened to SSL_CIPHERS?

2010-10-28 Thread Josh Berkus

Folks,

This doc says we ought to have the ssl_ciphers parameter:

http://www.postgresql.org/docs/9.0/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SECURITY

And there is nothing in the 9.0 release notes about it going away.

Yet:

postgres=# select version();
                                version
------------------------------------------------------------------------
 PostgreSQL 9.0.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc
 (Ubuntu 4.3.3-5ubuntu4) 4.3.3, 64-bit

(1 row)

postgres=# show ssl_ciphers;
ERROR:  unrecognized configuration parameter "ssl_ciphers"

So, did ssl_ciphers go away on purpose?  If so, why?  If not, why isn't 
it accessible?


--
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



[BUGS] 9.0 Bug: cannot build against python3.1, with two versions of python in the environment

2010-09-11 Thread Josh Berkus
I discussed this report with James Pye already, and he believes it's a 
configure script bug which should be fixed before release.  Anyone 
capable of taking it on?


 Original Message 
Subject:[TESTERS] Configure/Build 9.0 rc1 - cannot build against
python3.1, with two versions of python in the environment
Date:   Sat, 11 Sep 2010 13:12:13 -0400
From:   Lou Picciano 
To: pgsql-test...@postgresql.org



*[TEST REPORT]*
*[Release]:* 9.0 RC1
*[Test Type]:* Install (Build)
*[Test]:* Configure/Build 9.0 rc1 - cannot build against python3.1, with
two versions of python in the environment
*[Platform]:* Solaris 10 Sparc E450 Quad
*[Parameters]:*
--with-python \
*
--with-includes=/usr/local/include/python3.1:/usr/local/include:/usr/local/ssl/include 


\
--with-libraries=/usr/local/lib/python3.1:/usr/local/lib:/usr/local/ssl/lib
\
CONFIGURE OUTPUT: -
checking for python... /usr/bin/python
checking Python configuration directory... /usr/lib/python2.4/config
checking how to link an embedded Python application... -L/usr/lib
-lpython2.4 -lresolv -lsocket -lnsl -lrt -ldl -lm
configure: using CPPFLAGS= -I/usr/local/include/libxml2
-I/usr/local/include/python3.1 -I/usr/local/include -I/usr/local/ssl/include
configure: using LDFLAGS= -L/usr/local/lib -L/usr/local/lib/python3.1
[Failure]:* Can't seem to definitively config PG to a specific version
of python - v3.1, at /usr/local/bin/python3
*[Results]:* configure seems to pick up python 2.4 configuration, but
apparently links to (desired) python 3.1 libs.
# ldd plpython2.so
libpython2.4.so.1.0 => /usr/lib/libpython2.4.so.1.0
*[Comments]:* Tried first to see if configure picks up the PYTHONPATH
env variable to find python's configuration. No joy. Using this variable
might be a reasonable approach for configure?

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Unable to create serial column even with permissions

2010-08-16 Thread Josh Berkus

> We could in theory back-patch this, since CreateSeqStmt won't ever go to
> disk in stored rules.  However, tweaking DefineRelation's API in stable
> branches seems fairly hazardous to third-party code.  Does it seem
> sufficient to fix the problem in 9.0 and up?

Might be worth asking a few interface developers what this will break.
However, given that the issue has existed for a year or more and I'm the
first one to report it formally, it clearly isn't that huge of an issue.

Any idea what version this got broken in?

-- 
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] Unable to create serial column even with permissions

2010-08-09 Thread Josh Berkus
Severity: Minor
Version tested: 8.4.4
Platform: Solaris 10u8
Steps to reproduce:

1. Create table "sometable" owned by user "someuser", and fill it with a
million generate_series records.

2. Log in as the superuser "postgres".

3. Do:

alter table sometable add column someserial serial;

4. Postgres attempts to add and populate the serial column for this
table.  This takes a while.

5. At the very end, after waiting for creation and population and
locking the table for a while, you get:

ERROR:  sequence must have same owner as table it is linked to

What should happen instead:

If the creating user has permissions on the table sufficient to create a
column on the table, the sequence should be created as owned by the
table owner.

At the very least, postgres should throw an error before spending a lot
of time populating the serial column ( "Only the table owner can create
a serial column" ).
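
For reference, a condensed version of the steps above as SQL (a sketch; it
assumes the role "someuser" already exists):

create table sometable as
    select i from generate_series(1,1000000) as g(i);
alter table sometable owner to someuser;

-- then, connected as the superuser:
alter table sometable add column someserial serial;
-- ... long wait while the column is built and populated, then:
-- ERROR:  sequence must have same owner as table it is linked to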

-- 
-- Josh Berkus
-----
Josh Berkus   PostgreSQL Experts Inc.
CEO   database professionals
josh.ber...@pgexperts.com www.pgexperts.com
1-888-743-9778 x.508  San Francisco

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [HACKERS] Drop one-argument string_agg? (was Re: [BUGS] string_agg delimiter having no effect with order by)

2010-08-05 Thread Josh Berkus

> Well, maybe we need to expend some more sweat on the error message then.
> But this patch was still a prerequisite thing, because without it there
> is no error that we can complain about.

Yes, I'd say an addition to the HINT is in order *assuming* at that
stage we can tell if the user passed an ORDER BY or not.

-- 
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [HACKERS] Re: Drop one-argument string_agg? (was Re: [BUGS] string_agg delimiter having no effect with order by)

2010-08-04 Thread Josh Berkus

> Great, I was afraid people would want another beta if we forced an
> initdb.  So a hearty +1 for fixing it and not doing another beta
> (pending other bugs obviously).

And, btw, there has been a lot of testing of pg_upgrade due to the
initdbs and otherwise.  I think 9.0 is going to have a pretty darned
solid pg_upgrade because of it.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [HACKERS] Drop one-argument string_agg? (was Re: [BUGS] string_agg delimiter having no effect with order by)

2010-08-04 Thread Josh Berkus

> Well, it'd take an initdb to get rid of it.  In the past we've avoided
> forcing initdb post-beta1 unless it was Really Necessary.  OTOH, we seem
> to be in the mode of encouraging beta testers to test pg_upgrade, so
> maybe that concern isn't worth much at the moment.

If it's causing bugs, drop it now.  If we include it in 9.0, we're stuck
with it for years.

I'm OK with forcing an initdb for RC1.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] auto-explain does not work with JSON & csvlog

2010-07-18 Thread Josh Berkus
Version: 9.0 beta 3
Severity: Contrib feature broken
Platform: Ubuntu Netbook Remix, Dell 10 Mini laptop
Workaround Possible?  No.

With 9.0, the obvious thing to do with autoexplain is to log JSON or XML
explain plans to a csvlog and then automatically process them.

However, when I attempt to do this, I get the following warning at
PostgreSQL startup time:

Loaded module "auto_explain"
Not safe to send CSV data

And on checking, auto-explain is indeed NOT sending anything to the
csvlog.  It's not sending anything to the regular log, either.
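
For comparison, a per-session sanity check might look like this (a sketch
using the module's documented GUCs; it does not address the csvlog problem):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_format = 'json';
SELECT count(*) FROM pg_class;   -- this plan should appear in the log as JSON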

-- 
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] [TESTERS] Location of certs -Windows 7 SSL mode?

2010-07-07 Thread Josh Berkus
On 7/7/10 1:50 AM, Magnus Hagander wrote:
> BTW, if you post bug reports to -bugs, it'll make a lot more people see them.

Sure, we just want to verify that it *is* a possible bug (and not pilot
error) first.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] local_preload_libraries filenames converted to lowercase

2010-06-23 Thread Josh Berkus

> That might seem a bit bizarre, but because of the special meaning of
> commas we'd need some quoting rule anyhow.  I'm not sure it's worth
> making local_preload_libraries have its own private parsing convention.
> Perhaps this is just a documentation deficiency.

Yes, I'd say that documentation is the answer, given that.  Hmmm ... are
double-quotes respected in postgresql.conf, though?  Needs testing.


-- 
      -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] local_preload_libraries filenames converted to lowercase

2010-06-23 Thread Josh Berkus
Severity: minor

Tested On:  9.0b2, 8.4.4

Platform: SUN SPARC 4u Enterprise 450 Quad, presumably Solaris 10

Repeatable?  Yes

Description:

   See thread:
   http://archives.postgresql.org/pgsql-testers/2010-06/msg00020.php

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Re: regexp_matches illegally restricts rows -- just a documentation issue?

2010-04-06 Thread Josh Berkus

> While I understand why this is confusing, it's really very normal
> behavior for a SRF, and I don't really think it makes sense to
> document that this SRF behaves just like other SRFs...

It's likely to be used by people who do not otherwise use SRFs, and many
would not be prepared for the consequences.  It's not instinctive that a
regexp function would be an SRF in any case; if someone is not looking
closely at the docs, it would be easy to miss this entirely -- as 3
experienced PG people did yesterday.

Personally, I also think that PostgreSQL is wrong to allow an SRF in the
target list to restrict the number of rows output.  A subselect in the
target list does not do so.  However, that's completely another discussion.

--Josh Berkus



-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] Re: regexp_matches illegally restricts rows -- just a documentation issue?

2010-04-05 Thread Josh Berkus
On 4/5/10 9:16 PM, Josh Berkus wrote:

> I can't see how this is anything but a bug; as far as I know, nothing in
> the target list is allowed to restrict the number of rows which are
> returned by the query.  We should get 7 rows, 3 of which have an empty
> array or a NULL in the 2nd column.

Just noticed it's a SETOF text[] function.  Which makes it odd that I can
call it in the target list at all, but explains the row restriction.

It's still confusing behavior (three regulars on IRC thought it was a
bug too) and users should be warned in the documentation.  Not sure
exactly where, though ... maybe in 9.7?

--Josh Berkus


-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] regexp_matches illegally restricts rows

2010-04-05 Thread Josh Berkus
Severity: major (data loss)
Versions Tested: 8.4.2, 9.0 HEAD
Test Case:

create table regex_test ( id serial not null primary key, myname text );

insert into regex_test ( myname )
values ( 'josh'),('joe'),('mary'),('stephen'), ('jose'),
('kelley'),('alejandro');

select id, regexp_matches(myname, $x$(j[\w]+)$x$)
from regex_test;

The above will return 4 rows, not the 7 which are in the table.

I can't see how this is anything but a bug; as far as I know, nothing in
the target list is allowed to restrict the number of rows which are
returned by the query.  We should get 7 rows, 3 of which have an empty
array or a NULL in the 2nd column.
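
For anyone bitten by this in the meantime, a sketch of one way to get a row
back for every name: push the SRF into a sub-select and left join it back.

select t.id, m.matched
  from regex_test t
  left join (
       select id, regexp_matches(myname, $x$(j[\w]+)$x$) as matched
         from regex_test
       ) m using (id);
-- 7 rows; non-matching names get a NULL in the second column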

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] Error when lock conflict on REPLACE function

2010-03-15 Thread Josh Berkus
Severity: Annoyance
Versions Tested: 8.4.2
Platform: Linux RHEL 5.4
Reproducible: always
Steps to reproduce:

1. Create a function.
2. In one session, start an explicit transaction.
3. Do a CREATE OR REPLACE on the same function, but do not commit.
4. Open a 2nd session, and an explicit transaction in that session.
5. Do a CREATE OR REPLACE on the same function in the 2nd session.
6. COMMIT the 2nd session.
7. COMMIT the 1st session.
8. You get:

ERROR: duplicate key value violates unique constraint
"pg_proc_proname_args_nsp_index"
SQL state: 23505

What should have happened: the 2nd replace should have succeeded.  Or it
should have given a user-friendly error message.  Opinions?

--Josh Berkus

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] PD_ALL_VISIBLE flag error on 9.0 alpha 4

2010-03-13 Thread Josh Berkus

> It's also my 3rd choice of solution behind fine-grained lock conflicts
> (1st) which would avoid many issues and master/standby in lock step
> (2nd).

Yeah, I just can't imagine you hunting down all of the corner cases for
fine-grained lock conflicts in time for 9.0.  Given what I've been
looking at, it seems like a LOT of work.

--Josh Berkus

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] PD_ALL_VISIBLE flag error on 9.0 alpha 4

2010-03-13 Thread Josh Berkus

> That's better, I was worried you'd gone all complimentary on me.

Never fear that!

Was that setting originally part of your design for HS?  If so, why did
you back off from it?

--Josh


-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] PD_ALL_VISIBLE flag error on 9.0 alpha 4

2010-03-10 Thread Josh Berkus
On 3/10/10 3:26 PM, Simon Riggs wrote:
> OK, that's enough to not remove it. I was aware of more negative
> thoughts and conscious of my own feelings about it being a kluge.

Well, it *is* a kludge, but it may be the best one for people who want
to use HS/SR to support web applications.  So I think we should work on
making it less kludgy.

Ultimately we're going to need publish-XID-to-master, but that's not
realistic for 9.0.

--Josh Berkus

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] PD_ALL_VISIBLE flag error on 9.0 alpha 4

2010-03-09 Thread Josh Berkus
All,

What I did:

1. Set up 9.0a4 doing SR replication with a 2nd 9.0a4
2. Ran pgbench for a while.
3. Aborted pgbench with Ctl-C
4. Changed vacuum_defer_cleanup_age in postgresql.conf and reloaded
5. Ran pgbench again, and got:

Sidney-Stratton:pg90 josh$ pgbench -c 2 -T 300 bench
starting vacuum...WARNING:  PD_ALL_VISIBLE flag was incorrectly set in
relation "pgbench_branches" page 0
WARNING:  PD_ALL_VISIBLE flag was incorrectly set in relation
"pgbench_branches" page 1
WARNING:  PD_ALL_VISIBLE flag was incorrectly set in relation
"pgbench_tellers" page 0
WARNING:  PD_ALL_VISIBLE flag was incorrectly set in relation
"pgbench_tellers" page 1

... not one I'm familiar with.  Issues?

--Josh Berkus

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] [Fwd: [TESTERS] Numerics of diffrent scales Raises Type Mismatch Error in a Set Returning Function]

2010-03-07 Thread Josh Berkus
On 3/6/10 5:45 PM, Tom Lane wrote:
> The reason for the behavioral change is that plpgsql, which formerly
> had really crummy tuple conversion logic with a whole bunch of other
> deficiencies besides this one, now shares the logic used by
> ConvertRowtypeExpr.

Oh, yes, of course.  Should have thought of that.

--Josh Berkus

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] [Fwd: [TESTERS] Numerics of diffrent scales Raises Type Mismatch Error in a Set Returning Function]

2010-03-06 Thread Josh Berkus
All,

I tested Noel's test case and verified that it does, in fact, break on 
9.0 Alpha 4, and that it works on 8.4.

--Josh Berkus

 Original Message 
Subject: [TESTERS] Numerics of diffrent scales Raises Type Mismatch
Error in a Set Returning Function
Date: Tue, 2 Mar 2010 20:07:07 -0800
From: Noel Proffitt 
To: pgsql-test...@postgresql.org

[TEST REPORT]

[Release]: 9.0 Alpha 4

[Test Type]: feature

[Test]: NUMERICS OF DIFFERENT SCALE UNABLE TO CAST TO RESULTS IN SET
RETURNING FUNCTION

[Platform]: Linux RHEL/Fedora

[Parameters]:

[Failure]: Yes

[Results]: ERROR:  wrong record type supplied in RETURN NEXT
DETAIL:  Returned type numeric does not match expected type
numeric(14,2) in column 1.
CONTEXT:  PL/pgSQL function "check_numeric" line 5 at RETURN NEXT

-- Test case

CREATE TABLE a_table ( val NUMERIC );
INSERT INTO a_table VALUES (42);

CREATE TABLE b_table ( val NUMERIC(14,2) );

CREATE OR REPLACE FUNCTION check_numeric() RETURNS SETOF b_table AS
$$
DECLARE
  myrec RECORD;
BEGIN
  SELECT * INTO myrec FROM a_table;
  RETURN NEXT myrec;
  RETURN;
END;
$$ LANGUAGE 'plpgsql' IMMUTABLE;
SELECT * FROM check_numeric();

[Comments]: Works in Pg 8.3 and 8.4. Didn't see a change in the release
notes notifying of the behavior change.
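
A possible workaround until this is sorted out (a sketch; check_numeric_fixed
is just an illustrative name): declare the row variable with the target
rowtype and cast explicitly, so the returned record matches b_table's column
type.

CREATE OR REPLACE FUNCTION check_numeric_fixed() RETURNS SETOF b_table AS
$$
DECLARE
  myrec b_table%ROWTYPE;
BEGIN
  SELECT val::numeric(14,2) INTO myrec FROM a_table;
  RETURN NEXT myrec;
  RETURN;
END;
$$ LANGUAGE plpgsql;

SELECT * FROM check_numeric_fixed();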


-
HOWTO Alpha/Beta Test:
http://wiki.postgresql.org/wiki/HowToBetaTest
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-testers


-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] RETURNS TABLE returns NULL set when called by another RETURNS TABLE

2010-02-23 Thread Josh Berkus

> val1 is just as ambiguous.  I think you got bit by the name collision;
> the output parameters would start out NULLs and thus lead to the
> described behavior, in versions before 9.0.

Aha, yeah, that's probably it.  Take this example as the reason we had
to change the behavior ...

--Josh Berkus

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] RETURNS TABLE returns NULL set when called by another RETURNS TABLE

2010-02-23 Thread Josh Berkus
Pavel, all:

Apparently if you use one returns table function to call a 2nd returns
table function, it returns a recordset which consists entirely of nulls.

Here's the test case:

create table srf_data ( id serial, cat int, val text );
insert into srf_data ( cat, val ) values
( 1, 'josh' ),
( 1, 'selena' ),
( 2, 'bruce' ),
( 2, 'josh' ),
( 3, 'robert' );

create or replace  function srf1 ( this_cat int )
returns table (
id1 int,
val1 text )
language sql as $f$
select id, val from srf_data where cat = $1;
$f$;

create or replace function srf2 ( )
returns table (
id1 int,
val1 text )
language plpgsql as $f$
begin
return query
select id1, val1 from srf1(1);
return;
end;
$f$;

select * from srf2();
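
For the record, the problem goes away if srf2's output columns don't shadow
the column names it selects from srf1; a sketch (srf2_fixed is just an
illustrative name):

create or replace function srf2_fixed ( )
returns table (
out_id int,
out_val text )
language plpgsql as $f$
begin
return query
select s.id1, s.val1 from srf1(1) as s;
return;
end;
$f$;

select * from srf2_fixed();  -- returns (1, 'josh') and (2, 'selena') as expected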

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Possible stability issue: permanent hang on dropdb

2010-02-06 Thread Josh Berkus
On 2/6/10 8:20 PM, Robert Haas wrote:
> On Sat, Feb 6, 2010 at 7:43 PM, The Fuzzy Chef  wrote:
>>> It's too bad you didn't capture a stack backtrace at step #3 or step
>>> #6.  If you manage to reproduce the problem, that would be a good
>>> thing to try to get.
>> Well, I never got an actual crash.
> 
> That's OK - you can still attach gdb and see where it's hung up...  it
> would have been really nice to see what that "hung" drop database
> thought it was doing...

Yep, sorry.  Was trying to get the system working first, and then
afterwards thought it might be worth reporting.

Anything I can mine out of the logs or files?

--Josh Berkus


-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Possible stability issue: permanent hang on dropdb

2010-02-06 Thread Josh Berkus

> It's too bad you didn't capture a stack backtrace at step #3 or step
> #6.  If you manage to reproduce the problem, that would be a good
> thing to try to get.

It never actually crashed.  And, of course, this was happening right
when I needed to go home and the server needed to be up for that.

--Josh

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] MD5 checksum or RPM for PostgreSQL 8.7.3

2009-04-03 Thread Josh Berkus

Dafina,


I am a member of a small group using PostgreSQL as a data backend, and I am
writing to request an MD5 checksum for PostgreSQL 8.7.3.
If an MD5 is not available, I would appreciate the location of an RPM
for 8.7.3, if there is one available.
Thank you very much for your assistance in this matter.


Um, there's no such thing as PostgreSQL 8.7.3.  The latest stable 
version is *8.3.7*.  Was that the version you meant?


If that's the case, then everything you want is here:

http://www.postgresql.org/download/linux
http://www.postgresql.org/ftp/source/v8.3.7/

--Josh Berkus


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Weird quirk with pg_dump of complex types

2009-02-27 Thread Josh Berkus

Jeff Davis wrote:

On Fri, 2009-02-27 at 01:24 -0500, Tom Lane wrote:

Are you entirely sure that they don't?


Oh, you're right, of course:

postgres=# create type public.mytype as (i int);
CREATE TYPE
postgres=# create type public.mytype2 as (j mytype);
CREATE TYPE

-- pg_dump output:
CREATE TYPE mytype2 AS (
j mytype
);


Really?

Ok, I'll have to work up a reproducible case, because I'm definitely 
getting the "public" qualification in the create type.


--Josh

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Weird quirk with pg_dump of complex types

2009-02-27 Thread Josh Berkus

Tom Lane wrote:

Josh Berkus  writes:

When doing pg_dump in text mode, complex types will be dumped like this:



CREATE TYPE complex_foo AS (
var INT,
gar TEXT,
natch   public.foo_type
);


You didn't say which schema "complex_foo" is in?


Public.

--Josh

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Weird quirk with pg_dump of complex types

2009-02-26 Thread Josh Berkus

Jeff,


In the general case though, for any object that refers to multiple other
objects, I don't see any way around explicit schema qualification. I
suppose it could be smart and say "foo_type is unique in my search path,
so I don't need to schema-qualify it".


Yeah, but for most other objects "public" is also excluded as well as 
pg_catalog.  For CREATE TYPE, "public" is explicit.



Have you considered working from the "custom" format rather than text?
I'm not sure whether it solves your problem, but I think it provides the
most information.


--Josh Berkus


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Weird quirk with pg_dump of complex types

2009-02-26 Thread Josh Berkus

Jeff,


Functions are similar, actually. The argument list needs to specify
schema paths as well, if it's not in some expected place (I think it
does so for all schemas other than pg_catalog).


Except that they don't appear to do so.

--Josh

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] Weird quirk with pg_dump of complex types

2009-02-26 Thread Josh Berkus

All,

This is not so much a bug as a weird inconsistency, which ought to go on 
a list somewhere so that it gets cleaned up the next time someone 
overhauls pg_dump:


Summary: CREATE TYPE uses explicit schemas
Versions Tested: 8.2.9, 8.3.5
Platform: Linux
Description of Issue:

When doing pg_dump in text mode, complex types will be dumped like this:

CREATE TYPE complex_foo AS (
var INT,
gar TEXT,
natch   public.foo_type
);

That is, a custom type in a complex type declaration is explicitly 
schema-qualified, even when the schema in question is in the default 
schema_path.  This is inconsistent with all other database objects, 
which use "SET search_path" to qualify the correct schemas.


This is only a real problem in that it may interfere with backup and/or 
schema comparison automation (like I'm trying to write right now).
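
For comparison, the style used for other objects would look roughly like this 
(a sketch, not actual pg_dump output):

SET search_path = public, pg_catalog;

CREATE TYPE complex_foo AS (
var INT,
gar TEXT,
natch   foo_type
);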


--Josh Berkus


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Combination of Triggers and self-FKs produces inconsistent data

2009-01-29 Thread Josh Berkus

Tom,


You can't have your cake and eat it too, Josh.  If we make the RI
mechanism operate at a level underneath triggers, then we'll lose all
sorts of useful capability that people are depending on.  A couple of
examples:

* the ability to log table changes caused by RI cascades

* the ability to maintain row update timestamps when the update is
  caused by an RI cascade


Yeah, I can see that there isn't an obvious fix.  However, at the end of 
the day it means that RI in Postgres can be accidentally broken by user 
action without removing or disabling the constraint.  This isn't a 
comfortable thought; it sounds an awful lot like another OSS-DB.


Or to put it another way, we don't allow triggers to break UNIQUE 
constraints or CHECK constraints. All of the other constraints operate 
at a level below triggers.  Why are FKs different?



It doesn't say that, because it isn't true.  What is true is that if you
make a trigger that prevents updates from happening, it breaks RI
updates as well as directly-user-initiated updates. 


Again, if we're going to retain this behavior, then it needs to be in the 
documentation that RI isn't enforced on the results of triggers.  When I 
polled 5 people on IRC, each with more than 3 years of PostgreSQL 
experience ... two of them code contributors ... this issue surprised 
*all* of them.



Either way, you're
going to need to fix the trigger.


If you read to the end of the example, you'd see that I'm saying that 
the trigger should *fail*, with an error.  Not work.


Throughout the history of the project, no functionality which ends in an 
inconsistent data state has ever been acceptable, as far as I can recall. 
When did we change our policy?


--Josh Berkus


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Combination of Triggers and self-FKs produces inconsistent data

2009-01-28 Thread Josh Berkus

Tom Lane wrote:

Josh Berkus  writes:

Summary: self-referential FKs are not enforced properly in the
presence of BEFORE triggers


This isn't a bug.  If you create triggers that prevent the RI actions
from being taken, it's your own problem.


Huh?  Since when was it OK by us to have data which violates a declared 
FK under *any* circumstances?  Where in our docs does it say that 
Foreign Keys are not enforced if the table has triggers on it?


--Josh

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


[BUGS] Combination of Triggers and self-FKs produces inconsistent data

2009-01-28 Thread Josh Berkus

Version: 8.3.5
Install: self-compile on 64-bit Ubuntu Linux
also reproduced by AndrewSN on another platform
Summary: self-referential FKs are not enforced properly in the
presence of BEFORE triggers
Test Case:

-- create two tables, one of which is the master table (reftable) the 
other of which is a child which contains a tree structure (treetab):


create table reftable(
refid int primary key,
refname text
);


create table treetab (
id int primary key,
parent int,
refid int not null references reftable(refid) on delete cascade,
name text
);

-- now create a trigger function to maintain the integrity of the trees 
in treetab by "pulling up"

-- each node to its parent if intermediate nodes get deleted
-- this trigger is inherently flawed and won't work with the FK below

create function treemaint () returns trigger as $t$
begin
update treetab set parent = OLD.parent
where parent = OLD.id;
return OLD;
end; $t$ language plpgsql;

create trigger treemaint_trg before delete on treetab
for each row execute procedure treemaint();

-- populate reftable

insert into reftable
select i, ( 'Ref' || i::TEXT ) from generate_series(1,100) as g(i);

-- populate treetab with 10 rows each pointing to reftable

insert into treetab (id, refid)
select i, (( i / 10::INT ) + 1 )
from generate_series (1,900) as g(i);

-- create trees in treetab.  for this simple example each treeset is 
just a chain with each child node

-- pointing to one higher node

update treetab set parent = ( id - 1 )
where id >  (
select min(id) from treetab tt2
where tt2.refid = treetab.refid);

update treetab set "name" = ('tree' || parent::TEXT || '-' || id::TEXT);

-- now create a self-referential FK to enforce tree integrity.  This 
logically breaks the trigger


alter table treetab add constraint selfref foreign key (parent) 
references treetab (id);


-- show tree for id 45

select * from treetab where refid = 45;

 id  | parent | refid |    name
-----+--------+-------+-------------
 440 |        |    45 | 
 441 |    440 |    45 | tree440-441
 442 |    441 |    45 | tree441-442
 443 |    442 |    45 | tree442-443
 444 |    443 |    45 | tree443-444
 445 |    444 |    45 | tree444-445
 446 |    445 |    45 | tree445-446
 447 |    446 |    45 | tree446-447
 448 |    447 |    45 | tree447-448
 449 |    448 |    45 | tree448-449


-- now, we're going to delete the tree.  This delete should fail with an 
error because the

-- trigger will violate "selfref"

delete from reftable where refid = 45;

-- however, it doesn't fail.  it reports success, and some but not all 
rows from treetab

-- are deleted, leaving the database in an inconsistent state.

select * from treetab where refid = 45;

 id  | parent | refid |    name
-----+--------+-------+-------------
 441 |        |    45 | tree440-441
 443 |    441 |    45 | tree442-443
 445 |    443 |    45 | tree444-445
 447 |    445 |    45 | tree446-447
 449 |    447 |    45 | tree448-449

-- this means we now have rows in the table which
-- violate the FK to reftable.

postgres=# select * from reftable where refid = 45;
 refid | refname
-------+---------
(0 rows)
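
A quick way to see the damage (a sketch): list the treetab rows whose refid
no longer exists in reftable, i.e. the rows left behind in violation of the
declared foreign key.

select t.*
from treetab t
where not exists (select 1 from reftable r where r.refid = t.refid);

-- returns the five rows shown above, even though the FK is still in place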


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Handling of \ in array data display

2006-08-21 Thread Josh Berkus
Tom,

> This is documented behavior for arrays:
> http://developer.postgresql.org/docs/postgres/arrays.html#AEN5764
> and has been that way for a very long time.  If we change it we will
> break every array-using application on the planet, because it will
> in fact be impossible to parse an array value unambiguously.

Ok, so "yes, it's inconsistent, but we don't want to break backwards 
compatibility."  I can buy that ...

-- 
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [BUGS] Handling of \ in array data display

2006-08-18 Thread Josh Berkus
Alvaro,

> tarr[1] does not have a \, because it was eaten by the parser (so \y is
> the same as a plain y).  tarr[2] does have a single backslash, which for
> output purposes is shown escaped with another backslash when part of an
> array, but unescaped when not.  I'm not sure if this qualifies as a bug
> or not.

I think it does.   It's not consistent with how text values not in an array 
are displayed.   The whole reason I reported it was because of a user 
thinking their data wasn't being saved correctly, so it's causing 
confusion.

FWIW, I personally think we should be using the ARRAY[] format for display 
anyway, but that would break some backwards compatibility ...

-- 
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[BUGS] Handling of \ in array data display

2006-08-18 Thread Josh Berkus

Issue: \ is escaped oddly when displaying the contents of array fields.
Severity: annoyance
Affects: 8.1.3, 8.1.4, 8.0.3, possibly others.
Demonstration of bug:

When saving \ escaped values into text array fields, the \ is escaped when 
displaying the contents of the array, leading to an appearance that the 
correct data was not saved:

scratch=# create table test_arr ( tarr text[] );
CREATE TABLE
scratch=# insert into test_arr values ( array['x\y','x\\y','x y'] );
INSERT 5695623 1
scratch=# select * from test_arr;
        tarr         
---------------------
 {xy,"x\\y","x y"}
(1 row)

scratch=# select tarr[1] from test_arr;
 tarr
------
 xy
(1 row)

scratch=# select tarr[2] from test_arr;
 tarr
------
 x\y
(1 row)
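
To be clear about what is actually stored (a sketch): the second element
really does hold a single backslash; the doubled backslash only shows up in
the whole-array display because of the array-literal escaping rules.

select length(tarr[2]) as len,
       tarr[2] = 'x' || chr(92) || 'y' as single_backslash
from test_arr;
-- expected: len = 3, single_backslash = t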

-- 
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[BUGS] ALTERed DEFAULTS not visible to PL/pgSQL cached plans

2006-03-11 Thread Josh Berkus
Folks,

Version: 8.1.3
Platform: SuSE Linux, GCC
Severity: mild data corruption
Reproducibility: 100%

Steps to Reproduce:
(sample code attached)
1) Create a table.
2) Create a function which inserts a row into that table.
3) Run the function once.
4) ALTER the table with a new column and SET DEFAULT for that column.
5) Run the function again.
6) Re-load the function (via REPLACE)
7) Insert one more row using the function.
8) The table will have NULL values in the first TWO rows, not the first ONE 
row as it should.   This is because the DEFAULT value is not being "seen" 
by the cached plan of the function. As an example, the attached code 
produces:

ltreetest=# select * from bugtest;
 id |      name      | is_true
----+----------------+---------
  1 | Before ALTER   |
  2 | Look, its null |
  3 | Now its true.  | t

When it should produce:

ltreetest=# select * from bugtest;
 id |      name      | is_true
----+----------------+---------
  1 | Before ALTER   |
  2 | Look, its null | t
  3 | Now its true.  | t
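
One way to sidestep the stale plan entirely (a sketch; insert_bugtest_dynamic
is just an illustrative name) is to build the INSERT with EXECUTE, so it gets
re-planned on every call and picks up defaults added later:

create or replace function insert_bugtest_dynamic (
	vname text )
returns int as $f$
begin
	execute 'insert into bugtest ( name ) values ('
		|| quote_literal(vname) || ')';
	return currval('bugtest_id_seq');
end; $f$ language plpgsql security definer;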

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco
create table bugtest (
	id serial not null primary key,
	name text not null unique
);

create function insert_bugtest (
	vname text )
returns int as $f$
begin
	insert into bugtest ( name ) values ( vname );
	return currval('bugtest_id_seq');
end; $f$ language plpgsql security definer;

select insert_bugtest('Before ALTER');

alter table bugtest add is_true boolean;
alter table bugtest alter is_true set default true;

select insert_bugtest('Look, its null');

create or replace function insert_bugtest (
	vname text )
returns int as $f$
begin
	insert into bugtest ( name ) values ( vname );
	return currval('bugtest_id_seq');
end; $f$ language plpgsql security definer;

select insert_bugtest('Now its true.');

select * from bugtest order by id;
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[BUGS] plpgsql TEMP table issue not fixed in 8.1?

2005-08-31 Thread Josh Berkus
Folks,

I'd swear somebody committed a fix for the issue with temp tables inside 
plpgsql functions, like, months ago.  Yet I still get:

ERROR:  relation with OID 16607 does not exist
CONTEXT:  SQL statement "INSERT INTO tmp_runs ( run_id, batch, machine ) 
VALUES ( NEXTVAL('runs_run_id_seq'),  $1 ,  $2 [ $3 ] )"
PL/pgSQL function "generate_test_series" line 67 at SQL statement
ERROR:  relation with OID 16607 does not exist

This is CVS as of a week ago.
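
The classic workaround still applies, for what it's worth (a sketch; the
variable names are hypothetical stand-ins for whatever the function actually
uses): refer to the temp table only through EXECUTE, so the statement is
planned fresh each time and never caches a dropped table's OID.

EXECUTE 'INSERT INTO tmp_runs ( run_id, batch, machine ) VALUES ('
    || nextval('runs_run_id_seq')::text || ', '
    || quote_literal(this_batch) || ', '
    || quote_literal(this_machine) || ')';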

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [BUGS] PSQL commands not backwards-compatible

2005-08-31 Thread Josh Berkus
Tom,

> Can you point to any single one of the past five major releases in which
> psql's backslash commands *WERE* completely backwards-compatible with
> previous versions of PostgreSQL?  This is not breaking news.

Because this is the first time I can remember where *none* of the \ commands 
work with older servers.  And because the error we give is pretty ungraceful.

Hmmm ... how difficult would it be to give a useful error message instead of a 
SQL error?  Something like:

"I'm sorry, that PSQL command did not work.  Most likely, you are connecting 
to a different version of PostgreSQL."

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [BUGS] PSQL commands not backwards-compatible

2005-08-30 Thread Josh Berkus
Tom,

> They've been broken on a fairly regular basis in past releases.
> Certainly 7.3 broke every single one because of the addition of
> schema syntax ...

Yeah, and we warned people about it, as I recall.   Also, we had about 25x 
fewer users then.   I think we should put something in the release notes:

WARNING: 8.1's "psql" is not completely backwards-compatible with previous 
versions of PostgreSQL.

--Josh

-- 
__Aglio Database Solutions___
Josh BerkusConsultant
josh@agliodbs.comwww.agliodbs.com
Ph: 415-752-2500Fax: 415-752-2387
2166 Hayes Suite 200San Francisco, CA

---(end of broadcast)---
TIP 6: explain analyze is your friend


[BUGS] PSQL commands not backwards-compatible

2005-08-30 Thread Josh Berkus
Folks,

Discovered some problems using 8.1 psql to connect to an 8.0 server:

test_stats=# \l
ERROR:  relation "pg_catalog.pg_roles" does not exist
test_stats=# \dt
ERROR:  relation "pg_catalog.pg_roles" does not exist
test_stats=# \dv
ERROR:  relation "pg_catalog.pg_roles" does not exist

This is from CVS, the day before the beta release.   Ignore me if it's 
already been fixed.

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [BUGS] pg_ctl inappropriately timing out?

2005-06-08 Thread Josh Berkus
Tom,

> It strikes me that the server shouldn't ever get a couple minutes behind
> on writing dirty buffers; this seems more like a bgwriter configuration
> issue than pg_ctl's fault.  Consider that in a standard Unix
> configuration you will get about 20 seconds between SIGTERM and SIGKILL
> when init wants to shut you down ...

Well, I can run an actual timer if you like.   But it's definitely more than 
20 seconds.   And we're using the default bgwriter config, perhaps I should 
set it to 0?

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[BUGS] pg_ctl inappropriately timing out?

2005-06-07 Thread Josh Berkus
Folks:

Issue: pg_ctl seems to be prematurely timing out.
Version: 8.0.3
Severity: Annoying
Description:
1. Do an immense load on a database, like 2.5gb.
2. Right after the COPY finishes, send a pg_ctl -m fast stop to the server.
3. You will get:
pg_ctl: postmaster does not shut down

What Seems To Be Happening:
When you call pg_ctl stop, the WAL is still being written out to the database 
and the postmaster won't shut down until it's done, which can take a couple 
minutes.  In the meantime, pg_ctl is timing out and prematurely reporting 
failure.

Resolution:
Not sure, really.  Is there some way for pg_ctl to intelligently detect 
this situation?  To set a timeout value for pg_ctl via a command-line 
option?

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [BUGS] BUG #1552: massive performance hit between 7.4 and 8.0.1

2005-03-21 Thread Josh Berkus
Brian,

> Yes, I will subscribe to the performance list, but strictly speaking the
> behaviour described should be considered a bug. The assumptions made in
> deciding what the query optimisations will be seem all skewed, and the
> end result is that the system
> isn't useful in very common cases. =/

I don't think your experience on this one query is descriptive of PostgreSQL 
in general.  What I'm saying is that you most likely have a tuning problem, 
not a bug.   

If possibilities for improving the optimizer come out of your discussion, 
well, about 1/2 of the primary postgresql programmers read the performance 
list.

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [BUGS] BUG #1552: massive performance hit between 7.4 and 8.0.1

2005-03-21 Thread Josh Berkus
Brian,

The pgsql-performance list is really the appropriate place for performance 
issues like yours.  Subscribe?

http://www.postgresql.org/community/lists

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [BUGS] [pgsql-bugs] Daily digest v1.1387 (8 messages)

2005-03-09 Thread Josh Berkus
Tom,

> the issue is clearly that the known-false HAVING clause is pushed down
> inside the aggregation, as though it were WHERE.  The existing code
> pushes down HAVING to WHERE if the clause contains no aggregates, but
> evidently this is too simplistic.  What are the correct conditions for
> pushing down HAVING clauses to WHERE?

When the HAVING clause refers to an unaltered GROUP BY column; that is, one 
whose contents are not aggregated, calculated, or aliased.  I can't think of 
any other condition which would be permissible.   I would guess that the 
reason why that test case bombs is that the planner detects that "2" is not 
aggregated, calculated, or aliased and assumes that it's a GROUP BY column.

The real problem with this query is that we have a constant column which is 
always in existence, thus producing a single row when run without the HAVING 
clause.  Personally, I've always felt that the SQL committee made a mistake 
in having aggregates of no rows produce a single null output row; it leads to 
weirdness like this here.

Hopefully someone can back that up with an ANSI-SQL reference ...
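
For illustration, the difference between the two placements on a grouped
query with no GROUP BY clause (some_table here is any table):

select count(*) from some_table having false;   -- zero rows
select count(*) from some_table where  false;   -- one row, with count = 0

Pushing the known-false HAVING down to WHERE silently turns the first form
into the second.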

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[BUGS] Index scans for unique index not visible in stats

2005-02-09 Thread Josh Berkus
Summary:  Index scans to enforce a unique index do not show up in the pg_stat views
Severity:  Very Annoying
Verified On:  7.4.3, 7.4.6, 8.0.1
Description:
Index scans on the index of a unique constraint in order to verify uniqueness 
of inserted rows do not show up in the pg_stat views.  This is a problem 
because it can lead the DBA to think that the index is not being used and is a 
candidate for dropping.

Example:

powerpostgres=# create table unq_test ( id int not null primary key, the_data 
text );
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "unq_test_pkey" 
for table "unq_test"
CREATE TABLE

powerpostgres=# insert into unq_test values ( 1, 'joe' );
INSERT 26277897 1
powerpostgres=# insert into unq_test values ( 2, 'mary' );
INSERT 26277898 1
powerpostgres=# insert into unq_test values ( 3, 'kevin' );
INSERT 26277899 1
powerpostgres=# insert into unq_test values ( 3, 'hal' );
ERROR:  duplicate key violates unique constraint "unq_test_pkey"
 
powerpostgres=# select * from pg_stat_user_indexes;
  relid   | indexrelid | schemaname | relname  | indexrelname  | idx_scan | idx_tup_read | idx_tup_fetch
----------+------------+------------+----------+---------------+----------+--------------+---------------
 26277890 |   26277895 | public     | unq_test | unq_test_pkey |        0 |            0 |             0
(2 rows)
 
powerpostgres=# select * from pg_stat_user_tables;
  relid   | schemaname | relname  | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del
----------+------------+----------+----------+--------------+----------+---------------+-----------+-----------+-----------
 26277890 | public     | unq_test |        0 |            0 |        0 |             0 |         5 |         0 |         0
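
For contrast, this is exactly the kind of check a DBA might run before
dropping an "unused" index, which is where the missing counts mislead (a
sketch):

select schemaname, relname, indexrelname, idx_scan
from pg_stat_user_indexes
where idx_scan = 0;
-- unq_test_pkey shows up here despite being probed for every INSERT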



-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [BUGS] Inconsistent behavior with TIMESTAMP WITHOUT and epoch

2005-01-27 Thread Josh Berkus
Tom,

> How so?  If you think that the timestamp-without-zone is relative to GMT
> rather than your local zone, you say something like
>   extract(epoch from (timestampvar AT TIME ZONE 'GMT'))

Ah, that didn't seem to work before.  I must have done the parens wrong.
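
For the archives, a worked example of that expression (a sketch with a
literal timestamp):

select extract(epoch from (timestamp '2005-01-27 12:00:00' at time zone 'GMT'));
-- 1106827200, regardless of the session's TimeZone setting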

> Quite honestly, you should be using timestamp WITH time zone for such an
> application anyway.  The timestamp without zone datatype is very
> strongly biased towards the assumption that the value is in your local
> timezone, and if you've actually got multiple possible settings of
> TimeZone then it's simply a great way to shoot yourself in the foot.

Well, I was thinking about this on the way to my office this AM, and realized 
that there's a fundamental gulf between timestamp-as-real-moment-in-time (the 
SQL timestamp and postgres timestamp) and timestamp-as-mark-on-the-calendar 
(what I'm dealing with), and that my trouble stems from trying to coerce the 
first into the second.

Maybe it's time to hack a datatype ...

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [BUGS] Inconsistent behavior with TIMESTAMP WITHOUT and epoch

2005-01-27 Thread Josh Berkus
Tom,

> I don't believe there is anything wrong here.  extract(epoch) is defined
> to produce the equivalent Unix timestamp, and that's what it's doing.
> See the thread at
> http://archives.postgresql.org/pgsql-bugs/2003-02/msg00069.php

Darn.  I missed that discussion, I'd have argued with Thomas (not that I ever 
*won* such an argument ...)

The problem with the current functionality is that it makes it impossible to 
get a GMT Unix timestamp out of a TIMESTAMP WITHOUT TIME ZONE without string 
manipulation.   And for an application where you want the timestamps to be 
location-agnostic (such as this one, with servers on east and west coasts, 
and some talk about London), you want your timestamps stored as GMT.

However, having changed it in 7.3, I agree that we'll just cause trouble 
changing it back.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly

