Re: [BUGS] BUG #8362: postres webserver

2013-08-05 Thread Christopher Browne
There are several confusing things here, and they indicate that
this is not a bug.

1.  Postgres is not a web server; it is a database management system.
Extensions meant to be added to a web server cannot reasonably be
expected to plug into Postgres.

2.  It sounds as though you are using a version of Postgres that has
been customized by EnterpriseDB, and, as the community does not have
the code for that, we're not able to be of much assistance.  As a user
of EnterpriseDB's products, you are presumably their customer and
should expect customer support from them.  Most pointedly, their build
procedures are theirs, not ours, so any help with them really needs to
come from EnterpriseDB.




Re: [BUGS] BUG #8260: problem with sequence and tablename

2013-06-27 Thread Christopher Browne
Object names are limited to NAMEDATALEN-1 bytes (63 by default), and
longer names are documented to be truncated.

http://www.postgresql.org/docs/9.2/static/sql-syntax-lexical.html

The behaviour you are observing appears to conform to the documentation,
and seems reasonable, so I wouldn't think this was a bug.
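
To illustrate, a minimal sketch (hypothetical names; the NOTICE text is
what the server emits).  The identifier below is 70 characters long, so
it is cut back to its first 63:

  CREATE TABLE
    aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeeeffffffffffgggggggggg
    (id serial);
  -- NOTICE:  identifier "aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeeeffffffffffgggggggggg"
  --          will be truncated to "aaaaaaaaaabbbbbbbbbbccccccccccddddddddddeeeeeeeeeeffffffffffggg"

Any name derived from it, such as the serial column's sequence name, is
built from the truncated form, and two CREATE TABLE commands whose names
share their first 63 characters collide, because both truncate to the
same identifier.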


Re: [BUGS] 9.2.4: Strange behavior when wildcard is on left side of search string

2013-04-07 Thread Christopher Browne
This doesn't seem either buggy or strange...

An ordered index on that column cannot help with a leading wildcard, so
in such cases the query optimizer correctly falls back to a sequential
scan and filters the results.

If you have cases where this sort of wildcard needs to be fast, a
functional index could help.

CREATE INDEX foo ON tbl (reverse(col) text_pattern_ops);

(The text_pattern_ops operator class lets LIKE use the index even under
a non-C locale.)  Then reverse the wildcard so the index is usable:

SELECT * FROM tbl WHERE reverse(col) LIKE 'esrever%';

That query can harness the reversed index.

Unfortunately, no ordered index helps with

SELECT * FROM tbl WHERE col LIKE '%something%';

For that, a full text search index can help, but that is a longer story.
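
As a sketch of one option in that direction (this assumes the pg_trgm
contrib module, whose GIN operator class can service arbitrary substring
matches as of 9.1; the index name is arbitrary):

  CREATE EXTENSION pg_trgm;
  CREATE INDEX tbl_col_trgm ON tbl USING gin (col gin_trgm_ops);
  -- This can now use a bitmap scan on the trigram index instead of
  -- a sequential scan:
  SELECT * FROM tbl WHERE col LIKE '%something%';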

At any rate, what you are observing is no surprise, and consistent with
what many database systems do.


Re: [BUGS] How to run query by command prompt in Postgres

2013-03-20 Thread Christopher Browne
On Wed, Mar 20, 2013 at 8:22 AM, Kapil Kr. Khandelwal  wrote:
> Dear Team
>
> I want to a favour from your side. I want to implement clustering in
> postgres on windows server.  Please provide the list of steps. I want the
> answers of followings doubts.
>
> 1. How to write archive command
> 2. How to log shipping from primary to stand by server
> 3. how to restore database at stand by server from archived files
>
> I am waiting for your positive response. Thanks in advance.

Again, these do not sound like they are bugs in PostgreSQL that
should be reported here.

They could indicate bugs in the documentation if the documentation were
specifically deficient, but it is not evident that you have reviewed the
documentation; if you had, I would expect different "doubts."

I don't know what version of PostgreSQL you are using; if you were
running 9.2, then the following might be relevant documentation:
  

That seems to describe the things you are asking about.
If it does, then you should avail yourself of the documentation.

It is decidedly not a bug in PostgreSQL for you to fail to read the
documentation, and you should not report such things on this
mailing list.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"




Re: [BUGS] EnterpriseDB Advanced Server 8.2 Supported Platforms

2012-10-16 Thread Christopher Browne
On Tue, Oct 16, 2012 at 2:24 AM, Ricky Wong  wrote:

> EnterpriseDB team,
>
> I was wondering which Linux OS (i.e. CentOS) the Advanced Server 8.2
> supports.
>
> I was able to find documentation for 8.3 (but not 8.2):
> http://www.enterprisedb.com/docs/en/8.3/server/supported-platforms.html
>
> Wonder if you can provide a reference for 8.2 (similar to the URL above).
> Thank you.
>
> Regards,
> Ricky Wong | System Analyst | Isobar Hong Kong

This seems like something you should contact EnterpriseDB about, as opposed
to the PostgreSQL project's mailing lists.

Pointedly, this is not remotely close to being "a bug."

Supposing EnterpriseDB doesn't support a particular Linux distribution
you wish to use, that might be something you'd want to complain to them
about, but it doesn't seem to represent a problem with the PostgreSQL
code base that requires rectifying.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"

Re: [BUGS] How to run query by command prompt in Postgres

2012-06-27 Thread Christopher Browne
On Wed, Jun 27, 2012 at 7:15 AM, Kapil Kr. Khandelwal  wrote:
> I want to a favour from your side. I want to know, how to run sql query by
> command prompt in postgres sql.

This doesn't sound like a bug, and this mailing list is intended to
capture bugs discovered in Postgres.

Perhaps you should avail yourself of the documentation.  The following
links seem as though they may be relevant:

http://www.postgresql.org/docs/9.1/static/tutorial-accessdb.html
http://www.postgresql.org/docs/9.1/static/app-psql.html
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [BUGS] postgresql 9.1 replicaiton query ( GNU/Linux replicate to Windows)

2012-03-12 Thread Christopher Browne
On Mon, Mar 12, 2012 at 5:31 AM, Nie, Guocong  wrote:
> Hi Team,
>
> I am doing postgresql 9.1  replication , the Master database is on GNU/Linux
> operation system , the Slave  database is on Windows 32 bit system.  But
> when I copy the Master database from Linux to Windows , then I am unlucky to
> bringup database on windows system.
>
> Could you please let me know how can I replicate database from Linux to
> Windows ?

The built-in replication requires that you run the same version of
PostgreSQL on the same OS platform on both ends.  That is not
documented as clearly as one might hope.

If you require replication across platforms, you will need to look
into one of the other replication systems such as Slony, Londiste, or
Bucardo.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [BUGS] BUG #6516: how to mound data folder in remote location

2012-03-05 Thread Christopher Browne
On Mon, Mar 5, 2012 at 5:02 AM,   wrote:
> The following bug has been logged on the website:
>
> Bug reference:      6516
> Logged by:          Shiva
> Email address:      skambhap...@tripodtech.net
> PostgreSQL version: 8.4.3
> Operating system:   RHEL5.5
> Description:
>
> Hi Team,
> I installed PostgreSQL in RHEL5.5 and created lvm and mounted data folder on
> lvm in RHEL Server, but the disk usage crossed more then 95% and restoration
> failed, i need to mount the data folder another server or NAS drive.
> Please help me how to overcome this issue.

Not a bug in PostgreSQL.

See the warnings on using network attached storage:


But the PostgreSQL "bugs" list is intended for reporting actual bugs
in PostgreSQL.  That your server has insufficient disk capacity for a
particular database is not a bug in PostgreSQL.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [BUGS] BUG #6514: mount data folder in remote location

2012-03-05 Thread Christopher Browne
On Mon, Mar 5, 2012 at 12:41 AM,   wrote:
> The following bug has been logged on the website:
>
> Bug reference:      6514
> Logged by:          Shanker
> Email address:      sanker...@gmail.com
> PostgreSQL version: 8.4.3
> Operating system:   RHEL5.5
> Description:
>
> Hi All,
> I am using PostgreSQL 8.4 server in RHEL5.5 Server the data directory is
> mounted with lvm but data directory disk usage 99% and restoration is failed
> in it. I want to mount data folder in another RHEL5.5 server or NAS drive.
> So please help me hot to mount data folder in remote location

Problems administering your server's disk usage do not indicate any
bug in PostgreSQL.

I would observe that if you shift storage over to run atop NFS, you
may significantly increase the risk of server failure leading to
corruption or destruction of your database.  See the documentation:


At any rate, none of this indicates a bug in PostgreSQL.

-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [BUGS] slony replication issue

2012-03-02 Thread Christopher Browne
On Fri, Mar 2, 2012 at 4:15 AM, Nie, Guocong  wrote:
> I use  slony version “slony1-1.2.20” , and postgresql version 9.1

You should really take this over to the Slony mailing list; it's not a
generic Postgres problem.

http://lists.slony.info/mailman/listinfo/slony1-general

> LOG:  server process (PID 30019) was terminated by signal 11: Segmentation
> fault

It seems pretty likely that this indicates some problem with compiling
the C functions for the log triggers.  If something goes wrong (which
is a broad matter) and the function accesses an undefined chunk of
memory, that one backend suffers a segmentation fault; since that may
invalidate shared memory, the crash is reported to the postmaster,
which then tells all the other backends of the problem.  That is
consistent with the phenomenon you are seeing.

I observe that you're running a version that's several years old,
which predates Postgres 9.1.

Version 1.2.20 of Slony was released in December 2009, before either
Postgres 9.0 *or* 9.1 was released.  Commonly, enough header material
changes between major Postgres releases that we have needed to make
minor modifications to get existing versions of Slony to work with
each new release.

I would not expect 1.2.20 to be compatible with Postgres 9.1, and it's
an old enough branch that we're not doing much with 1.2 to keep it
compatible.  I wouldn't recommend using 1.2.20, certainly not for a
new installation.  You might want to try Slony 2.0 (better) or 2.1
(considerably better still).
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [BUGS] Add statistics_collector_listen_addresses to fix hard-coding of "localhost"

2011-11-09 Thread Christopher Browne
On Thu, Nov 10, 2011 at 12:09 AM, Robert Young  wrote:
> The basic philosophy of system design is isolation.
> One system component's crash should better not interfere others.
> System should restrict destruction to minimum.

The "minimum" may not be able to be lowered to zero.

Such a lowering may be:
a) Completely impossible, or
b) So challenging that it's reasonable to treat it as undesirable to
go that far.

> 1.If the underling provide hostname or DNS facility, OK! It is better!
> PG should take the benefit of it.
> 2.If the underling lack of hostname or DNS facility, Anyway, PG should
> be founctional without it.
> #1 and #2 are NOT conflict.

If we ignore the use of Unix-domain sockets, Postgres essentially
requires a properly functioning TCP/IP implementation.

Coping with a completely deranged TCP/IP configuration would require,
at the very least, some planning for how to handle the erroneous
conditions, and perhaps would imply a need to implement a functioning
TCP/IP stack of our own.

I'm actually reasonably comfortable with requiring that the system's
TCP/IP implementation is not SO broken that it can't cope with
localhost, and I don't think I'm alone in this.  Apparently, a bunch
of people don't agree with your metric in this regard.

> Q2: How about without a file system?
> A2: We suppose PG support raw devices, if file system crashed, raw
> devices part of PG should remain founctional.
>
> Q3: Without an OS?
> A3: Think about vmware, If my vmware workstation crashed, I could
> transfer my guest machine to vmware ESX which runs on bare metal.

Both Q2 and Q3 head well into the realm where your assumption
essentially requires that Postgres implement its own operating system
from scratch.  There is increasing agreement that:
a) Such a requirement would be a bad idea, and
b) We have actually had benefit from NOT requiring such.

The big name DBMSes that *do* implement their own filesystems turn out
to be troublesome to run in emulated environments like VMWare.  In
contrast, running Postgres atop Linux atop VMWare works perfectly
well.

Further, since we *don't* require bare metal raw disk access:
a) We work fine on Linux, and all BSD flavors, and even Windows,
without some heinous burden of an API to emulate raw access atop each
one of those
b) We get full benefit of clever filesystem improvements as with
BTRFS, ZFS, and such, which would be effectively impossible if we were
doing "RawFS".
c) Let me repeat a) in a different way.  We don't need to have our own
filesystem implementation, and hence our own set of filesystem
hackers, as well as a further set of hackers responsible for building
a portability layer so that the custom filesystem would work on all
the operating systems we expect to run on.

All these things (expecting to have functioning TCP/IP, filesystem,
and operating system) are similar.  By expecting these services from
the operating system, we don't need to duplicate that functionality
into Postgres.  And improvements made to Linux or Solaris or
DragonflyBSD are things we can take advantage of, often simply by
installing Postgres on a newer OS version; we don't need to write a
single line of code.  I think you'll find that a large portion of our
users and developers find that preferable to your metrics/desires.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [BUGS] BUG #6266: Create temp tables on Slave

2011-10-25 Thread Christopher Browne
Well, it is imaginable for there to be some side mechanism enabling the
use of temporary tables on a WAL-based replica; after all, we have
jumped through enough hoops to get read-only transactions working.

That being said...

A) A solution for how to do this has not yet emerged, so it seems
unlikely that such a feature will enter 9.2, let alone earlier versions.
It is not really scheduled for 9.2, so it would be wiser to assume 9.3
or later; ergo, don't make any plans for earlier than 2013...

B) Yes, indeed: "logical" replication systems like Slony, Londiste, and
Bucardo are the alternative at present.

I have been hearing comments to the effect that these systems are
passé; when WAL replication does not support fairly important cases,
such as reports that use temporary tables, that judgment seems rather
premature.
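
For concreteness, a minimal sketch of what happens today on a hot
standby (the ERROR text is what the server emits for any write
attempted in a read-only transaction):

  CREATE TEMP TABLE scratch (id integer);
  -- ERROR:  cannot execute CREATE TABLE in a read-only transaction
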
On Oct 25, 2011 5:10 AM, "John R Pierce"  wrote:

> On 10/25/11 12:11 AM, Sally Nayer wrote:
>
>> Is there commercial solution that uses the same concept of wal streaming
>> and allows the Creation of temp tables on slave
>>
>
> WAL files represent 'delta changes' between the state of the postgres file
> system.  *anything* that changes database files on the slave side would
> render the WAL files inherently unusable.
>
> to do what you want would require using a fundamentally different
> replication method, such as Slony or Bucardo.


Re: [BUGS] PostGre compatible to RHEL 6.1

2011-09-29 Thread Christopher Browne
On Thu, Sep 29, 2011 at 7:23 AM, Yaamini Bist  wrote:
>
> Hi,
>
> It would be really great if you can provide me information Is PostGre 
> compatible to RHEL 6.1 ?

This is a mailing list for addressing bugs that are found with PostgreSQL.

There isn't any such system as "PostGre", and whether that is
compatible with a particular Linux-based system is not a bug in
PostgreSQL.

If you were interested in what support there is for PostgreSQL in
RHEL, you might want to check with the vendor.  They document this
reasonably well, even within their marketing literature.  See:

--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Fwd: [BUGS] Behaviour of triggers on replicated and non replicated tables

2011-06-16 Thread Christopher Browne
2011/6/10 Luiz K. Matsumura :
> Hi,
> I need help to know if the follow scenario is a expected behaviour, a bug of
> postgres or a bug of slony:
>
> Postgres v8.4.8
> Slony-I v 2.0.5
>
> I have table replicated with slony and that do some updates in another table
> not replicated.
>
> The trigger on replicated table was enabled on the slave database with the
> command:
>
> ALTER TABLE table1 ENABLE ALLWAYS TRIGGER trigger1;
>
> And this trigger is working fine as expected.
>
> The strange behaviour is that trigger do a update in another table not
> replicated, let´s say table2, and
> the triggers of this table is not fired.
> A unexpected behaviour IMHO, if I do
>
> ALTER TABLE table2 ENABLE ALWAYS TRIGGER trigger2;
>
> Then the trigger2 is fired now when trigger1 do a update in table2.
>
> My doubt is: since table2 is not replicated why they triggers dont fire even
> by a update command in
> a trigger of a replicated table ?

The *normal* behaviour with Slony is that triggers on replicated
tables do not fire on a replica.

That's typically the right thing: the trigger already fired on the
master and shouldn't need to fire again.  Foreign key triggers are the
usual case; if the foreign key was checked on the master, there's not
much sense in checking it again on the replica.

That's why the default is that triggers aren't set to ENABLE ALWAYS.

Evidently you have another scenario, where you know you need to run
the triggers even on a subscriber.

I'm not 100% sure that I'm understanding the "doubt"...

If the trigger is enabled on table2, then it's going to run whenever
something updates table2; that's pretty well independent of any
replication infrastructure.

It's not clear to me whether your "trigger1" is specifically doing
some update to table2.

If the trigger on table1 is indeed firing, and trigger "trigger1"
calls a function that updates data in table2, then I'd fully expect
the trigger "trigger2" on table2 to, at that point, fire.

I can see a good reason for that sequence of events to break down,
namely if the function for "trigger1" doesn't actually find any data
to touch in table2.

For instance, if the function for trigger1 includes the query:

  UPDATE table2 SET id = id * 2;

that would *usually* be expected to update all the tuples in table2,
and fire the trigger "trigger2" once for each tuple that is updated.

But supposing table2 happens to be empty, that UPDATE will find no
tuples to operate on, and so the trigger "trigger2" won't fire even
once.
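
As a concrete sketch of that shape of setup (hypothetical names,
bodies reduced to the essentials):

  CREATE FUNCTION trigger1_fn() RETURNS trigger AS $$
  BEGIN
    -- Fires trigger2 once per row actually updated; zero rows means
    -- zero firings.
    UPDATE table2 SET id = id * 2;
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER trigger1 AFTER UPDATE ON table1
    FOR EACH ROW EXECUTE PROCEDURE trigger1_fn();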

That's where the "deep details" lie; whether trigger2 fires depends on
what the function that trigger1 fires actually does.

If you have a bug, or an unexpected data pattern, then perhaps
trigger2 doesn't fire even though you expected it to.  That's one of
the troublesome risks you find when programming with triggers.

I had a "trigger bug" come in this week.  I discovered that there was
quite a bad bug in a trigger function, and this bug had persisted for
nearly a year before someone finally did something that exercised it.
I felt rather dumb when I saw it, as, in retrospect, it was an obvious
error.  Sadly, a lot of our mistakes fall from things that, in
retrospect, are "obvious errors."

Triggers are weird enough that intuition fails pretty easily :-(.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: [BUGS] BUG #5946: Long exclusive lock taken by vacuum (not full)

2011-03-28 Thread Christopher Browne
On Mon, Mar 28, 2011 at 4:01 PM, Tom Lane  wrote:
> Christopher Browne  writes:
>> - Grab timestamp
>> - Grab exclusive lock
>> - Process [Some number of pages]
>> - Check time.
>> - If [# of ms] have passed then check to see if anyone else has a lock
>> O/S on the table.
>>   - Commit & give up the lock for a bit if they do
>>   - Go back and process more pages if they don't
>
> Actually, we could simplify that even further.  Keep the code exactly
> as-is, but every small-number-of-pages, check to see if someone is
> waiting on a conflicting lock, and if so, fall out of the page checking
> loop.  Truncate away however many pages we know at that time are safe,
> and end the vacuum normally.
>
> We'd have to rejigger the stuff in the lock manager that tries to boot
> autovacuum off the lock forcibly, but with a bit of luck that would get
> less crocky not more so.
>
> This wouldn't really have any parameters that require tuning, I think,
> and the max delay till the lock is released is not too much more than
> the time needed for ftruncate().  The really good thing about it is that
> vacuum's work is never wasted.
>
>                        regards, tom lane

That mostly sounds excellent.

One caveat: for a busy table, this risks taking nearly forever to work
through the truncation of the empty space at the end.

If the VACUUM falls out, under this logic, after truncating only a few
pages, then there's the considerable cost of rummaging through the
table, over and over, truncating only a few pages each time.

Supposing we set it up to truncate 32 pages at a time (assuming that to
be the "safe" level), and there are 10 empty 1GB files at the end of
the table, then it's potentially going to take tens of thousands of
VACUUM requests to empty that whole chunk of space out.  (10GB is
about 1.3 million 8kB pages; at 32 pages per pass, that is roughly
40,000 passes.)  That seems close enough to "forever" for my purposes
:-), assuming I'm understanding that correctly.

I hope I'm wrong, and that there's potential here to get quite a bit
more pages than that dropped out.
-- 
http://linuxfinances.info/info/linuxdistributions.html



Re: [BUGS] BUG #5946: Long exclusive lock taken by vacuum (not full)

2011-03-28 Thread Christopher Browne
On Mon, Mar 28, 2011 at 2:20 PM, Robert Haas  wrote:
> On Mon, Mar 28, 2011 at 12:29 AM, Tom Lane  wrote:
>> Robert Haas  writes:
>>> I think we've had a number of pieces of evidence that suggest that
>>> extending 8kB at a time is too costly, but I agree with Greg that the
>>> idea of extending an arbitrarily large table by 10% at a time is
>>> pretty frightening - that could involve allocating a gigantic amount
>>> of space on a big table.  I would be inclined to do something like
>>> extend by 10% of table or 1MB, whichever is smaller.
>>
>> Sure, something like that sounds sane, though the precise numbers
>> need some validation.
>
> Yeah.
>
>>> ... And a 1MB extension is probably also small enough
>>> that we can do it in the foreground without too much of a hiccup.
>>
>> Less than convinced about this.
>
> Well, I guess we can always try it and see.

Another approach might be to do something "adaptive"...

The iterative process might be wrapped with something like the following:

- Grab timestamp
- Grab exclusive lock
- Process [some number of pages]
- Check time
- If [# of ms] have passed, then check whether anyone else has a lock
request outstanding on the table.
  - Commit & give up the lock for a bit if they do
  - Go back and process more pages if they don't

This offers 3 parameters that are amenable to management:
 - How many pages to process at a time
 - How long to process between checking for lock requests
 - How long to give up processing

Robert's suggestion would be consistent with these being set to (128,?,?).

The adverse impact would be kept pretty small by something like (16,
10ms, 30ms).

And if the table *isn't* being avidly used, it can iterate
incessantly, not giving up the lock, because nobody else cares.

In the "busy with lots of other users of that table" case, it'll take
quite a long time to get the table's extra extensions truncated.
Indeed, it's pretty easy for other things to get *heavily* in the way.
-- 
http://linuxfinances.info/info/linuxdistributions.html



Re: [BUGS] BUG #5946: Long exclusive lock taken by vacuum (not full)

2011-03-25 Thread Christopher Browne
On Fri, Mar 25, 2011 at 5:09 PM, Greg Stark  wrote:
> On Fri, Mar 25, 2011 at 8:48 PM, Tom Lane  wrote:
>> Interesting, but I don't understand/believe your argument as to why this
>> is a bad idea or fixed-size extents are better.  It sounds to me just
>> like the typical Oracle DBA compulsion to have a knob to twiddle.  A
>> self-adjusting enlargement behavior seems smarter all round.
>>
>
> So is it ok for inserting one row to cause my table to grow by 90GB?
> Or should there be some maximum size increment at which it stops
> growing? What should that maximum be? What if I'm on a big raid system
> where that size doesn't even add a block to every stripe element?
>
> Say you start with 64k (8 pg blocks). That means your growth
> increments will be 64k, 70k, 77kl, 85k, 94k, 103k, 113k, 125k, 137k,
> ...
>
> I'm having trouble imagining a set of hardware and filesystem where
> growing a table by 125k will be optimal. The next allocation will have
> to do some or all of a) go back and edit the previous one to round it
> up, then b) add 128k more, then c) still have 6k more to allocate in a
> new allocation.

This is the converse of the problem the bug actually points at.

The bug indicates a situation where the table already has an enormous
pile of free space, all of it already sitting at the very end.
There's *at least* 1GB of space free, and in the case spoken of, there
was 10GB free.

The point of the exercise isn't to allocate new space; it is to
*deallocate* the huge quantity of dead space at the end of the table
without blocking anybody unnecessarily.  I foresee that being somewhat
troublesome, mostly because other activity continues concurrently,
though it seems plausible that one might *somewhat* safely "fast-track"
removal of all but the first of those empty extensions.

What seems natural-ish to me might include:
- Stomping a bit on the FSM replacement to make sure nobody's going to
be writing to the later extensions;
- Watching free space during the process, so the "first" extension gets
re-opened if the free space in the much earlier parts of the table
(i.e., those not planned to be dropped) is running out.
-- 
http://linuxfinances.info/info/linuxdistributions.html



Re: [BUGS] BUG #5946: Long exclusive lock taken by vacuum (not full)

2011-03-25 Thread Christopher Browne
On Fri, Mar 25, 2011 at 11:17 AM, Tom Lane  wrote:
> "Maxim Boguk"  writes:
>> In my case vacuum tried to truncate last 10-15GB from 100Gb relation, and
>> each time (3) it was cost 10+ minutes of service downtime (because that
>> table was completely locked).
>
>> Is  that correct behaviour? Are here any way to speedup that process or at
>> least allow read-only queries during that time?
>
> Use autovacuum --- if there's something that wants to access the table,
> autovac will get kicked off the lock.  (Of course, the table may never
> get truncated then, but maybe you don't care.)

I could see this turning out somewhat unhappily...

If there's almost always something that wants to access the table
(hardly surprising, given that the table is being updated regularly
enough to accumulate 10GB+ of dead space at the end), then you're
eventually going to *HAVE* to vacuum it.  Transaction ID wraparound,
if nothing else, eventually forces the issue.

The piper will need to get paid, eventually :-(.

So I don't think leaving it to autovacuum quite suffices, because
eventually there's a 10 minute "glut", or more, that needs to get
paid.
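
For what it's worth, a sketch of watching that pressure build, using
the standard catalogs:

  SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database ORDER BY xid_age DESC;
  -- As xid_age approaches autovacuum_freeze_max_age, a forced
  -- anti-wraparound vacuum becomes unavoidable.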
-- 
http://linuxfinances.info/info/linuxdistributions.html



Re: [BUGS] DDL triggers

2009-05-22 Thread Christopher Browne
In an attempt to throw the authorities off his trail, pete...@gmx.net (Peter 
Eisentraut) transmitted:
> On Thursday 21 May 2009 10:52:33 Целуйко Дмитрий wrote:
>> When DDL triggers will be supported by PostgreSQL?
>
> There are currently no concrete plans for that, but if someone comes up with 
> a 
> good design and implementation, it could be acceptable.  But don't hold your 
> breath.

An API that provides access to the ALTER statement would be a mighty
good thing.

I'd suppose that something like NEW_STATEMENT would be nice to have
access to.
-- 
let name="cbbrowne" and tld="gmail.com" in String.concat "@" [name;tld];;
http://linuxdatabases.info/info/internet.html
Why do they sterilize needles for lethal injections? 



Re: [BUGS] BUG #2810: restore to lower version of dump

2006-12-06 Thread Christopher Browne
Quoth [EMAIL PROTECTED] ("Murali"):
> The following bug has been logged online:
>
> Bug reference:  2810
> Logged by:  Murali
> Email address:  [EMAIL PROTECTED]
> PostgreSQL version: 8.1.4
> Operating system:   WINXP
> Description:restore to lower version of dump
> Details: 
>
> I have taken dump from 8.1.4 postgresql db + win xp.
> I have to restore it to 7.4.14 postgresql db + red hat linux

I don't see in what way this actually represents a bug.  Restoring a
dump taken from a newer major version into an older server is simply
not supported; the dump may well use syntax the older server does not
understand.
-- 
let name="cbbrowne" and tld="gmail.com" in name ^ "@" ^ tld;;
http://linuxdatabases.info/info/spreadsheets.html
Rules of the Evil Overlord #35. "I  will not grow a goatee. In the old
days they made  you look diabolic. Now they just make  you look like a
disaffected member of Generation X." 



Re: [BUGS] BUG #2608: dblink compile fails on AIX 5.3

2006-09-10 Thread Christopher Browne
Quoth [EMAIL PROTECTED] (Tom Lane):
> I wrote:
>> Done, but it's really pretty ugly.  I am thinking it'd make more sense
>> for Makefile.global to define a macro for the libraries libpq depends
>> on ...
>
> I went ahead and did this as per discussion here:
> http://archives.postgresql.org/pgsql-bugs/2006-08/msg00200.php
>
> Please see if I managed to break AIX ;-).  It'd be a good idea to
> compile with every configure option you can find turned on.

A run on buildfarm member "grebe" just completed successfully, now
with openssl support, so I gotta say "thumbs up!"

I don't have gettext libraries installed, so that won't get tested
right away.  SSL is an addition I'd quite like.  The list of "nice to
try" would be gettext, ldap, Kerberos; I don't think I can promise all
those during this particular cycle, but I'd certainly be keen on
adding them, ultimately.

I also just verified that readline is playing well, too.  Sweet!
(The difficulties of AIX + readline were the initiator of a fair bit
of change to linking of libs and such, so it's particularly pleasing
that this is working fine :-))
-- 
"cbbrowne","@","gmail.com"
http://linuxdatabases.info/info/emacs.html
Rules  of the  Evil Overlord  #146.  "If my  surveillance reports  any
un-manned  or  seemingly  innocent  ships  found where  they  are  not
supposed to be, they will  be immediately vaporized instead of brought
in for salvage." 



Re: [BUGS] BUG #2608: dblink compile fails on AIX 5.3

2006-09-10 Thread Christopher Browne
In the last exciting episode, [EMAIL PROTECTED] ("Michael Hoffmann") wrote:
> The following bug has been logged online:
>
> Bug reference:  2608
> Logged by:  Michael Hoffmann
> Email address:  [EMAIL PROTECTED]
> PostgreSQL version: 8.1.2
> Operating system:   AIX 5.3
> Description:dblink compile fails on AIX 5.3
> Details: 
>
>>/usr/local/pgsql/src/postgresql-8.1.2/contrib/dblink>gmake
> ../../src/backend/port/aix/mkldexport.sh libdblink.a > libdblink.exp
> xlC -O2 -qmaxmem=16384 -qsrcmsg -qlonglong   -Wl,-bnoentry -Wl,-H512
> -Wl,-bM:SRE -o libdblink.so libdblink.a -Wl,-bE:libdblink.exp
> -L../../src/interfaces/libpq -L../../src/port -lpq
> -Wl,-bI:../../src/backend/postgres.imp
> ld: 0711-317 ERROR: Undefined symbol: .bindtextdomain
> ld: 0711-317 ERROR: Undefined symbol: .dgettext
> ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more
> information.
> gmake: *** [libdblink.so] Error 8

I mentioned earlier that changing SHLIB_LINK might be helpful.

The patch that turns out to be needed for crypto is the following:

Index: Makefile
===
RCS file: /projects/cvsroot/pgsql/contrib/dblink/Makefile,v
retrieving revision 1.11
diff -u -r1.11 Makefile
--- Makefile27 Feb 2006 12:54:38 -  1.11
+++ Makefile10 Sep 2006 21:01:45 -
@@ -20,3 +20,5 @@
 include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
 endif
+
+SHLIB_LINK += $(filter -lssl -lcrypto -lssleay32 -leay32, $(LIBS))

In effect, we need to take LIBS, which contains all the libraries
pulled in as "defaults" for builds of objects in the main tree, filter
it against the four crypto libraries, and add the matches to the set
of libraries we need to link with (i.e., SHLIB_LINK).

I'm not sure what libraries gettext draws in; supposing it adds
-lgettext to a build of psql, then you might also add the line:

SHLIB_LINK += $(filter -lgettext, $(LIBS))
-- 
wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','gmail.com').
http://cbbrowne.com/info/nonrdbms.html
"In computing, invariants are ephemeral."  -- Alan Perlis



Re: [BUGS] BUG #2600: dblink compile with SSL missing

2006-09-07 Thread Christopher Browne
A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] (Bruce Momjian) 
wrote:
> Where are we on this?

Nothing submitted so far.  If someone magically does something about
it, that's super.  I'm on vacation 'til Tuesday; if it's not dealt
with, I'll see if I can find something.
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "cbbrowne.com")
http://linuxdatabases.info/info/lsf.html
Pay no attention to the PDP-11 behind the front panel.
-- PGS, in reference to OZ



[BUGS] BUG #2600: dblink compile with SSL missing libraries

2006-08-30 Thread Christopher Browne

The following bug has been logged online:

Bug reference:  2600
Logged by:  Christopher Browne
Email address:  [EMAIL PROTECTED]
PostgreSQL version: CVS HEAD (8.2)
Operating system:   AIX 5.3
Description:dblink compile with SSL missing libraries
Details: 

If I try to build dblink when PG is configured "--with-openssl", the build
of the contrib module dblink breaks as follows:

[EMAIL PROTECTED]:/opt/rg/data_dba/build-farm/HEAD/pgsql.741430/
contrib/dblink $ gmake
../../src/backend/port/aix/mkldexport.sh libdblink.a > libdblink.exp
/opt/prod/gcc-4.1.1/bin/gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
-Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
-g  -Wl,-bmaxdata:0x8000 -Wl,-bnoentry -Wl,-H512 -Wl,-bM:SRE -o
libdblink.so libdblink.a -Wl,-bE:libdblink.exp -L../../src/interfaces/libpq
-L../../src/port -L/opt/freeware/lib -lpq -lpthread -lpthreads
-Wl,-bI:../../src/backend/postgres.imp 
ld: 0711-317 ERROR: Undefined symbol: .SSL_pending
ld: 0711-317 ERROR: Undefined symbol: .ERR_get_error
ld: 0711-317 ERROR: Undefined symbol: .ERR_reason_error_string
ld: 0711-317 ERROR: Undefined symbol: .SSL_write
ld: 0711-317 ERROR: Undefined symbol: .SSL_get_error
ld: 0711-317 ERROR: Undefined symbol: .SSL_read
ld: 0711-317 ERROR: Undefined symbol: .SSL_shutdown
ld: 0711-317 ERROR: Undefined symbol: .SSL_free
ld: 0711-317 ERROR: Undefined symbol: .X509_free
ld: 0711-317 ERROR: Undefined symbol: .SSL_connect
ld: 0711-317 ERROR: Undefined symbol: .SSL_get_peer_certificate
ld: 0711-317 ERROR: Undefined symbol: .X509_get_subject_name
ld: 0711-317 ERROR: Undefined symbol: .X509_NAME_oneline
ld: 0711-317 ERROR: Undefined symbol: .X509_NAME_get_text_by_NID
ld: 0711-317 ERROR: Undefined symbol: .SSL_new
ld: 0711-317 ERROR: Undefined symbol: .SSL_set_ex_data
ld: 0711-317 ERROR: Undefined symbol: .SSL_set_fd
ld: 0711-317 ERROR: Undefined symbol: .SSL_CTX_free
ld: 0711-317 ERROR: Undefined symbol: .TLSv1_method
ld: 0711-317 ERROR: Undefined symbol: .SSL_CTX_new
ld: 0711-317 ERROR: Undefined symbol: .SSL_CTX_load_verify_locations
ld: 0711-317 ERROR: Undefined symbol: .SSL_CTX_get_cert_store
ld: 0711-317 ERROR: Undefined symbol: .X509_STORE_load_locations
ld: 0711-317 ERROR: Undefined symbol: .SSL_CTX_set_verify
ld: 0711-317 ERROR: Undefined symbol: .SSL_library_init
ld: 0711-317 ERROR: Undefined symbol: .SSL_load_error_strings
ld: 0711-317 ERROR: Undefined symbol: .CRYPTO_set_id_callback
ld: 0711-317 ERROR: Undefined symbol: .CRYPTO_num_locks
ld: 0711-317 ERROR: Undefined symbol: .CRYPTO_set_locking_callback
ld: 0711-317 ERROR: Undefined symbol: .X509_STORE_set_flags
ld: 0711-317 ERROR: Undefined symbol: .SSL_get_ex_data
ld: 0711-317 ERROR: Undefined symbol: .PEM_read_X509
ld: 0711-317 ERROR: Undefined symbol: .PEM_read_PrivateKey
ld: 0711-317 ERROR: Undefined symbol: .X509_check_private_key
ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more
information.
collect2: ld returned 8 exit status
gmake: *** [libdblink.so] Error 1

If I add, to the GCC command line, requests for libssl and libcrypto...
-lssl -lcrypto

e.g. - command line: 

[EMAIL PROTECTED]:/opt/rg/data_dba/build-farm/HEAD/pgsql.741430/
contrib/dblink $ /opt/prod/gcc-4.1.1/bin/gcc -O2 -Wall -Wmissing-prototypes
-Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels
-fno-strict-aliasing -g  -Wl,-bmaxdata:0x8000 -Wl,-bnoentry -Wl,-H512
-Wl,-bM:SRE -o libdblink.so libdblink.a -Wl,-bE:libdblink.exp
-L../../src/interfaces/libpq -L../../src/port -L/opt/freeware/lib -lpq
-lpthread -lpthreads -lssl -lcrypto  -Wl,-bI:../../src/backend/postgres.imp


This builds fine without further complaint.



Re: [BUGS] buglet in 7.1.4

2004-03-13 Thread Christopher Browne
A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] (Mike Mascari) wrote:
> Lamar Owen wrote:
>
>> On Tuesday 09 March 2004 10:46 am, Tom Lane wrote:
>>
>>>Neil Conway <[EMAIL PROTECTED]> writes:
>>>
BTW, I can't really see the harm in putting out 7.1.x and 7.2.x
releases to fix compilation issues on modern systems.
>>>
>>>Also, quite frankly, I don't want to encourage people to keep using
>>>such old releases.  If they are installing on a new machine they should
>>>update to something newer and less buggy.
>> We need the older versions to be compilable on newer systems to ease
>> in version upgrades and migration.
>
> How could they find themselves in a situation where they have a 7.1
> installation that requires dumping for migration, but no binaries due
> to compilation errors? Isn't that a rather low-probability scenario?

The problem isn't so much that of "complete inability" to get
binaries, but rather of it becoming significantly inconvenient to get
those binaries.

What if we had a RHAT 6.3 system running PG 7.1, and the "system"
partition got dumped on?  We have the data directory; we'd like to
mount it on a spicy new RHAT 8.0 system, and recover it.

If I rummage around looking for tips, I can doubtless discover the set
of things that need to get patched in order to recompile on RHAT 8.0;
it sure would be nice not to have to.
-- 
wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','cbbrowne.com').
http://www3.sympatico.ca/cbbrowne/sgml.html
"The beginning of wisdom for a [software engineer] is to recognize the
difference between getting a program  to work, and getting it  right."
-- M A Jackson, 1975



Re: [BUGS] BUG #1053: Configuration should be in /etc/postgres

2004-01-18 Thread Christopher Browne
[EMAIL PROTECTED] ("PostgreSQL Bugs List") writes:
> Description:Configuration should be in /etc/postgres
>
> Details: 
>
> It would be better if postgres kept the configurations files in
> standard /etc location. Like under /etc/postgres

This "fix" would prevent people from having multiple database
instances, which can be quite a useful thing.

Nothing prevents you from doing what Debian does, which is to
establish a symbolic link between the /etc location and where
configuration "really" is.
-- 
let name="cbbrowne" and tld="libertyrms.info" in String.concat "@" [name;tld];;
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)



Re: [BUGS] Postgresql 'eats' all mi data partition

2003-09-28 Thread Christopher Browne
[EMAIL PROTECTED] (Javier Carlos) writes:
>I did a vacuumbdb after the updates, and the space usage didn't down to 
> something reasonable. For example, I had a 250MB database, then I did about 
> 300 query updates, and mi partition growed up until fill all mi data partition 
> space of 15GB. After that I did an vacuumdb and only the space down 100MB. 
> After that I DROPPED the database, and the space down ALL the 15GB; It's very 
> weird, don't you think?

You may not have been expecting it, but it is no great surprise to me.

If you vacuumed after every ~10-20 of those query updates, then the
size shouldn't have gotten nearly that high, and performance would
likely have been quite a bit better.

In effect, every time you update substantially all of the data, you
should _definitely_ do a vacuum, as it will be quite likely to "reap"
a whole table's worth of dead tuples.

A VACUUM FULL will cut the size down absolutely, at the cost of
blocking updates during the vacuum.  If you run "ordinary VACUUMs"
along the way, they aren't as effective at cutting down on the space
used, but are non-blocking.  Probably it's better to regularly do
"ordinary VACUUMs."
-- 
"cbbrowne","@","libertyrms.info"
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)



Re: [BUGS] Postgresql 'eats' all mi data partition

2003-09-26 Thread Christopher Browne
[EMAIL PROTECTED] (Javier Carlos) writes:
> *** When I make a 'DROP DATABASE' to the database where that table belongs to, 
> mi /var partition returns to its original size (in this example to 9%).

VACUUM ANALYZE is your friend here.

PostgreSQL uses MVCC, wherein each update results in the creation of a
new tuple which coexists with the old one in the database.  If you
update a particular row in the database 10 times, then there will be
11 copies, 10 of them being obsolete.

If you VACUUM the table or the database, the rows that are no longer
accessible will be cleared out, thus diminishing the amount of space
in use.

Things are somewhat more complicated than that; VACUUM looks at the
data page-by-page, and won't _always_ reclaim all free space.  The
more often you VACUUM, the more likely it is that such reclamation
will be possible.

There is a program called pg_autovacuum that will vacuum automatically
when specific tables look like they need it.  

ANALYZE does something different but related; it updates the
statistics used by the query optimizer.  When you make major changes
to the table, it may be necessary to ANALYZE it in order for queries
to be efficient.
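
A small illustration of the above (hypothetical table):

  CREATE TABLE t (id integer PRIMARY KEY, x integer);
  INSERT INTO t VALUES (1, 0);
  UPDATE t SET x = x + 1 WHERE id = 1;  -- each run leaves a dead tuple
  VACUUM ANALYZE t;  -- reclaims dead tuples, refreshes planner statistics
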
-- 
"cbbrowne","@","libertyrms.info"
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)
