Tom Lane <[EMAIL PROTECTED]> writes:
> Are you sure this WHERE clause really expresses your intent? It seems
> awfully oddly constructed. Removing the redundant parens and clarifying
> the layout, I get
...
> That next-to-last major AND clause seems a rather unholy mix of join and
> restriction
On Feb 9, 2006, at 6:36 PM, Rafael Martinez wrote:
This is an application that we have not programmed, so I am not sure
what they are trying to do here. I will contact the developers.
Tomorrow I will try to test some of your suggestions.
well, obviously you're running RT... what you want t
On Thu, 2006-02-09 at 18:22 -0500, Tom Lane wrote:
> Rafael Martinez Guerrero <[EMAIL PROTECTED]> writes:
> > WHERE ((ACL_2.RightName = 'OwnTicket'))
> > AND ((CachedGroupMembers_4.MemberId = Principals_1.id))
> > AND ((Groups_3.id = CachedGroupMembers_4.GroupId))
> > AND ((Principals_1.Disabled
Rafael Martinez Guerrero <[EMAIL PROTECTED]> writes:
> WHERE ((ACL_2.RightName = 'OwnTicket'))
> AND ((CachedGroupMembers_4.MemberId = Principals_1.id))
> AND ((Groups_3.id = CachedGroupMembers_4.GroupId))
> AND ((Principals_1.Disabled = '0') or (Principals_1.Disabled = '0'))
> AND ((Principals
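As an illustration of the redundancy Tom points out, the visible part of the quoted clause could be simplified like this (only the conditions shown above are kept; the duplicated OR branch collapses to a single test):

```sql
-- Hypothetical simplification of the visible fragment: redundant
-- parentheses dropped, and the duplicated condition
-- (Disabled = '0' OR Disabled = '0') collapsed to one test.
WHERE ACL_2.RightName = 'OwnTicket'
  AND CachedGroupMembers_4.MemberId = Principals_1.id
  AND Groups_3.id = CachedGroupMembers_4.GroupId
  AND Principals_1.Disabled = '0'
```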
In my experience, you don't want to store this stuff in the database.
In general, it will work fine, until you have to VACUUM the
pg_largeobject table. Unless you have a very powerful I/O subsystem,
this VACUUM will kill your performance.
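Before scheduling that VACUUM, it is worth seeing how big pg_largeobject has actually grown; a sketch (the size functions are available in 8.1 and later):

```sql
-- Check how large pg_largeobject has become.
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));

-- Vacuuming it then has to scan the whole table, which is
-- where the I/O pain comes from.
VACUUM VERBOSE pg_largeobject;
```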
> You're forgetting about cleanup and transactions. If yo
On Thu, 2006-02-09 at 13:46 -0600, Jim C. Nasby wrote:
> I looked at the estimates for the table access methods and they all
> looked ok, so I think the statistics are pretty up-to-date; there just
> aren't enough of them for the planner to do a good job.
>
VACUUM ANALYZE runs 4 times every hour,
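For reference, a schedule like that is typically driven from cron; a sketch of such an entry (the database name is a placeholder):

```
# Hypothetical crontab entry: VACUUM ANALYZE every 15 minutes
*/15 * * * * vacuumdb --analyze --quiet mydb
```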
On 2/9/06, Orion Henry <[EMAIL PROTECTED]> wrote:
>
> Hello All,
>
> I've inherited a postgresql database that I would like to refactor. It
> was originally designed for Postgres 7.0 on a PIII 500MHz and some
> design decisions were made that don't make sense any more. Here's the
> problem:
>
>
I looked at the estimates for the table access methods and they all
looked ok, so I think the statistics are pretty up-to-date; there just
aren't enough of them for the planner to do a good job.
On Thu, Feb 09, 2006 at 01:44:22PM -0600, Dave Dutcher wrote:
> First I'm wondering if the tables have
Orion Henry <[EMAIL PROTECTED]> writes:
> What I would LIKE to do but am afraid I will hit a serious performance wall
> (or am missing an obvious / better way to do it)
>
> 1) Merge all 133 client tables into a single new table, add a client_id
> column,
> do the data partitioning on the index
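The merge described in step 1 could be sketched like this (table and column names are invented for illustration, since the original schema is not shown):

```sql
-- Hypothetical sketch: one table replacing 133 per-client tables,
-- with a client_id column added. An index leading on client_id lets
-- the planner cheaply restrict to a single client's rows, which is
-- the "partitioning on the index" idea.
CREATE TABLE client_data (
    client_id integer NOT NULL,
    id        integer NOT NULL,
    payload   text,
    PRIMARY KEY (client_id, id)
);
CREATE INDEX client_data_client_idx ON client_data (client_id);
```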
On Thu, Feb 09, 2006 at 11:07:06AM -0800, Orion Henry wrote:
>
> Hello All,
>
> I've inherited a postgresql database that I would like to refactor. It
> was originally designed for Postgres 7.0 on a PIII 500MHz and some
> design decisions were made that don't make sense any more. Here's the
First I'm wondering if the tables have been recently analyzed. If an
analyze has been run recently, then it is probably a good idea to look
at the statistics target.
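A quick way to see what statistics the planner currently has, and what each column's target is, is to query the catalogs; a sketch ('mytable' is a placeholder):

```sql
-- What the planner knows about the table's columns.
SELECT attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 'mytable';

-- Per-column statistics target; -1 means "use the default".
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'mytable'::regclass AND attnum > 0;
```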
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jim C.
Nasby
Sent: Thursday, February 09
At least part of the problem is that it's way off on some of the row
estimates. I'd suggest upping the statistics target on at least all of
the join columns to at least 100. (Note that it's doing a nested loop
thinking it will have only 2 rows but it actually has 22000 rows).
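Raising the target is a per-column ALTER TABLE followed by a fresh ANALYZE; a sketch ('mytable' and 'join_col' are placeholders for the actual join columns):

```sql
-- Raise the statistics target on a join column, then re-ANALYZE
-- so the planner picks up the more detailed statistics.
ALTER TABLE mytable ALTER COLUMN join_col SET STATISTICS 100;
ANALYZE mytable;
```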
On Thu, Feb 09, 2006
On Thu, Feb 09, 2006 at 07:18:49AM -0800, Craig A. James wrote:
> Nate Byrnes wrote:
> >I must claim some ignorance, I come from the application world... but,
> >from a data integrity perspective, it makes a whole lot of sense to
> >store video, images, documents, whatever in the database rather
Hello All,
I've inherited a postgresql database that I would like to refactor. It
was originally designed for Postgres 7.0 on a PIII 500MHz and some
design decisions were made that don't make sense any more. Here's the
problem:
1) The database is very large, the largest table has 40 mil
Thanks, until Postgres can pay my bills (hopefully soon...) I will
have to be an Oracle guy. Aside from the filesystem being better at
managing large files (with which I agree), are there performance
implications to storing them in the DB?
Where I work, the question is not can you add the se
Nate Byrnes wrote:
I must claim some ignorance, I come from the application world... but,
from a data integrity perspective, it makes a whole lot of sense to
store video, images, documents, whatever in the database rather than on
the file system external to it. Personally, I would use LOB's, bu
Hello
We are running a web application that takes a long time to perform
some operations. We are trying to find out whether some of the SQL
statements used are the reason for the slow speed.
We have identified one SQL statement that takes around 4-5000 ms more
than the second slowest one on our test server. I ho
I must claim some ignorance, I come from the application world... but,
from a data integrity perspective, it makes a whole lot of sense to
store video, images, documents, whatever in the database rather than on
the file system external to it. Personally, I would use LOB's, but I do
not know the