Fernando,
> AND cont_publicacion.fecha_publicacion = (SELECT max(cp1.fecha_publicacion)
> FROM cont_publicacion cp1
> WHERE cp1.id_instalacion = cont_publicacion.id_instalacion
> AND cp1.id_contenido = cont_publicacion.id_contenido
>
I'd point at the following as a sterling candidate for the cause of the
slowness...
AND cont_publicacion.fecha_publicacion = (SELECT max(cp1.fecha_publicacion)
FROM cont_publicacion cp1
WHERE cp1.id_instalacion = cont_publicacion.id_instalacion
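A per-row correlated MAX() subquery like the one above re-executes the inner query for every candidate row. A common rewrite for this era of PostgreSQL is to compute the maximum once per group in a derived table and join against it; a minimal sketch, using only the column names visible in the quoted query (whether it wins here depends on the rest of the query):

```sql
-- Sketch: pre-aggregate the latest fecha_publicacion per
-- (id_instalacion, id_contenido) pair, then join back, so the
-- MAX() is computed once per group instead of once per row.
SELECT cp.*
FROM cont_publicacion cp
JOIN (SELECT id_instalacion,
             id_contenido,
             max(fecha_publicacion) AS max_fecha
      FROM cont_publicacion
      GROUP BY id_instalacion, id_contenido) latest
  ON  cp.id_instalacion    = latest.id_instalacion
 AND  cp.id_contenido      = latest.id_contenido
 AND  cp.fecha_publicacion = latest.max_fecha;
```

EXPLAIN ANALYZE on both forms is the only way to know which the planner handles better.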
Francisco,
> I think I understand your point; however, it would be very laborious,
> after you do all the development, to find out you need to de-normalize.
Not terribly. Views and Rules are good for this.
> In your experience, at which point would it actually help to do this
> de-normalization in Pos
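As a hedged illustration of the views approach mentioned above (the table and column names here are invented, not from the thread): after splitting a wide table, a view can restore the original one-table shape so existing queries keep working unchanged.

```sql
-- Hypothetical split of a wide table into "hot" and "cold" halves,
-- with a view preserving the original interface for application code.
CREATE VIEW customer AS
SELECT h.id, h.name, h.status,   -- heavily used columns
       c.notes, c.legacy_code    -- rarely used columns
FROM customer_hot  h
JOIN customer_cold c ON c.id = h.id;
```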
Title: Mensaje
Hi all!
I really don't know what happened with this query. I'm running PG 7.3.1 on
Solaris, vacuumed (full) every night.
The cardinality of each table was:
cont_contenido: 97 rows
juegos_config: 40 rows
cont_publicacion: 446 rows
Not huge tables... however, this query
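With tables this small, a sequential scan should be cheap, so the usual first diagnostic step (a common-practice sketch, not from the original message) is to refresh the statistics and compare the planner's estimates against reality:

```sql
-- Refresh planner statistics, then look at the actual plan.
VACUUM ANALYZE cont_publicacion;
EXPLAIN ANALYZE
SELECT ...;   -- the slow query in question
```

Large gaps between estimated and actual row counts in the EXPLAIN ANALYZE output point at stale or insufficient statistics.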
On Fri, 1 Aug 2003, Josh Berkus wrote:
> My attitude toward these normalization vs. performance issues is consistently
> the same: First, verify that you have a problem. That is, build the
> database with everything in one table (or with child tables for Nullable
> fields, as above) and try to r
Ron,
> You snipped out too much, because that's exactly what I said...
> Another way of writing it: only split the table if some of the fields
> are not unitary to the entity.
Sorry! No offense meant.
--
Josh Berkus
Aglio Database Solutions
San Francisco
On Fri, 2003-08-01 at 12:44, Josh Berkus wrote:
> Francisco,
>
> > Yes all fields belong to the same entity. I used 100 as an example it may
> > be something like 60 to 80 fields (there are two tables in question). I
> > don't formally do 3rd normal form, but for the most part I do most of
> > the
Jenny,
> OSDL has ported OSDL Database Test Suite 3 (OSDL-DBT3) to PostgreSQL.
> It drives the database with an ad-hoc decision support workload. It
> helps database developers and users to identify database performance
> issues.
Way, way, cool! We've been waiting for something like this eagerly
OSDL has ported OSDL Database Test Suite 3 (OSDL-DBT3) to PostgreSQL.
It drives the database with an ad-hoc decision support workload. It
helps database developers and users to identify database performance
issues.
OSDL-DBT3 is derived from the TPC-H benchmark. TPC-H is an ad-hoc decision
support b
Francisco,
> Yes all fields belong to the same entity. I used 100 as an example it may
> be something like 60 to 80 fields (there are two tables in question). I
> don't formally do 3rd normal form, but for the most part I do most of
> the general concepts of normalization.
>
> > If not, then good
On Fri, 2003-08-01 at 12:14, Francisco J Reyes wrote:
> On Fri, 1 Aug 2003, Ron Johnson wrote:
>
> > Do all 100 fields *really* all refer to the same *one* entity,
> > with no repeating values, etc?
>
> Yes all fields belong to the same entity. I used 100 as an example it may
> be something like
On Fri, 1 Aug 2003, Ron Johnson wrote:
> Do all 100 fields *really* all refer to the same *one* entity,
> with no repeating values, etc?
Yes all fields belong to the same entity. I used 100 as an example it may
be something like 60 to 80 fields (there are two tables in question). I
don't formally
Christopher Browne <[EMAIL PROTECTED]> writes:
> The DOMAIN case I mentioned the other day had something odd going on
> that LOST the type information associated with the domain. Albeit
> that was on 7.3, whereas the changes in DOMAIN functionality that make
> them meaningfully useful come in 7.4.
On Fri, 2003-08-01 at 11:08, Francisco Reyes wrote:
> If a table which will be heavily used has numerous fields, yet only a
> handful of them will be used heavily, would it make sense performance-wise
> to split it?
>
> Example
> Table 1
> Field 1
>
> Field 100
>
> Table 2
> References Fie
If a table which will be heavily used has numerous fields, yet only a
handful of them will be used heavily, would it make sense performance-wise
to split it?
Example:
Table 1
  Field 1
  ...
  Field 100
Table 2
  References Field 1 of table 1
...
Table n
  References Field 1 of table 1
So table 1 basi
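The layout described above can be sketched as DDL (names and types are hypothetical; the real tables have 60 to 80 fields):

```sql
-- Hypothetical sketch of the split: table1 holds the key and the
-- heavily used fields; satellite tables hold the rest and reference
-- table1's key.
CREATE TABLE table1 (
    field1   integer PRIMARY KEY,
    field2   text
    -- ... remaining heavily used fields ...
);

CREATE TABLE table2 (
    field1   integer REFERENCES table1 (field1),
    extra_a  text
    -- ... rarely used fields ...
);
```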
Stephan Szabo <[EMAIL PROTECTED]> writes:
> On Fri, 1 Aug 2003, Christopher Browne wrote:
>> Stephan Szabo said:
>> > What version are you using? In 7.3 and up it should be willing to
>> > consider moving the clause down, unless there's something like a type
>> > mismatch (because in that case it
On Fri, 1 Aug 2003, Christopher Browne wrote:
> Stephan Szabo said:
> >
> >
> > What version are you using? In 7.3 and up it should be willing to
> > consider moving the clause down, unless there's something like a type
> > mismatch (because in that case it may not be equivalent without a bunch
>
Stephan Szabo said:
>
> On Thu, 31 Jul 2003, Christopher Browne wrote:
>
>> select * from log_table where request_time between 'june 11 2003'
>> and
>>'june 12 2003';
>>
>> returns a plan:
>> Subquery Scan log_table (cost=0.00..10950.26 rows=1
On Thu, Jul 31, 2003 at 05:59:59PM -0500, Jim C. Nasby wrote:
>
> Well, if I don't do this it wants to seqscan a table that occupies 350k
> pages, instead of pulling a couple thousand rows. I started running it
> with the seqscan and it's already taken way longer than it does if I
> disable seqsc
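For reference, the usual way to run that comparison (a sketch, not the poster's exact commands) is the session-level planner toggle, which only discourages sequential scans for the current session:

```sql
SET enable_seqscan = off;   -- discourage seq scans for this session
EXPLAIN ANALYZE
SELECT ...;                 -- compare plan and timing with the default
SET enable_seqscan = on;    -- restore the default afterwards
```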