On Wed, 11 Aug 2004 20:29:04 -0400, Jason Coene <[EMAIL PROTECTED]> wrote:
gf=# EXPLAIN ANALYZE SELECT id FROM comments WHERE userid = 51 ORDER BY
timestamp DESC LIMIT 5;
                                 QUERY PLAN
---
"Jason Coene" <[EMAIL PROTECTED]> writes:
> We have a lot of:
> SELECT whatever
> FROM ourtable
> WHERE field1 = X
> AND field2 = Y
> AND field3 = Z
> ORDER BY id DESC
> LIMIT 5
> With indexes:
> ourtable(id)
> ourtable(field1, field2, field3)
> Is it standar
> You mean you are doing
> SELECT ... WHERE userid = 42 ORDER BY timestamp DESC LIMIT 5;
> and hoping that separate indexes on userid and timestamp will get the
> job done? They won't. There are only two possible plans for this,
> neither very good: select all of user 42's posts and sort them
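One standard fix (the reply above is truncated before it gets there) is a single composite index covering both the filter column and the sort column, so the planner can walk one user's entries in timestamp order and stop after five rows. A hedged sketch against the query from the top of the thread (index name invented):

```sql
-- Hypothetical: a two-column index avoids both bad plans Tom describes.
CREATE INDEX comments_userid_ts ON comments (userid, "timestamp");

-- Spelling the ORDER BY to match the index column order helps older
-- (7.4-era) planners choose a backward index scan:
SELECT id
FROM comments
WHERE userid = 51
ORDER BY userid DESC, "timestamp" DESC
LIMIT 5;
```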
Rudi,
> Do you think an Index or Seq. scan should be used?
That was way too much data for way too simple a question. ;-)
The answer is: depends on how many rows you have. With any significant
number of rows, yes.
However, you probably only need to index the first 3-5 columns; that's
selecti
[EMAIL PROTECTED] wrote:
|>> Anyway, with fsync enabled using standard fsync(), I get roughly 300-400
|>> inserts per second. With fsync disabled, I get about 7000 inserts per
|>> second. When I re-enable fsync but use the open_sync option, I can g
Hi,
This email is picking up a thread from yesterday on INSERTS and INDEXES.
In this case the question is whether to use an index or a sequential scan.
I have included the db DDL and SELECT query.
For each month I have a csv data dump of council property data.
So the First CD will have almost all unique r
On Thu, 12 Aug 2004 02:29 am, Jesper Krogh wrote:
> In gmane.comp.db.postgresql.performance, Shridhar Daithankar wrote:
> > On Wednesday 11 Aug 2004 7:59 pm, Jesper Krogh wrote:
> > > The "common" solution, I guess would be to store them in the filesystem
> > > instead, but I like to have them just
[snip]
>
> One question I do have though - you specifically mentioned NOW() as
> something to watch out for, in that it's mutable. We typically use COUNT()
> as a subselect to retrieve the number of associated rows to the current
> query. Additionally, we use NOW a lot, primarily to detect the s
"Jason Coene" <[EMAIL PROTECTED]> writes:
> A good example, a comments table where users submit TEXT data. A common
> query is to find the last 5 comments a user has submitted. The scan, while
> using an index, takes a considerable amount of time (> 0.5 sec is about as
> good as it gets). Again,
> > Hi Rod,
> >
> > I was looking at top and vmstat - which always show under 300MB "Active".
> > We may hit 400MB at peak. Everything I see (though this isn't my area of
> > expertise) points to most of the memory simply being unused. Results below,
> > am I missing something?
>
> This lo
On Wed, 2004-08-11 at 18:03, Jason Coene wrote:
> > -Original Message-
> > From: Rod Taylor [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, August 11, 2004 5:46 PM
> > To: Jason Coene
> > Cc: 'Merlin Moncure'; Postgresql Performance
> > Subject: Re: [PERFORM] Hardware upgrade for a high-traf
> -Original Message-
> From: Rod Taylor [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, August 11, 2004 5:46 PM
> To: Jason Coene
> Cc: 'Merlin Moncure'; Postgresql Performance
> Subject: Re: [PERFORM] Hardware upgrade for a high-traffic database
>
> > I'm wondering why our PG server is using
On Wed, 2004-08-11 at 17:31, Brian Hirt wrote:
> On Aug 11, 2004, at 3:18 PM, Jason Coene wrote:
> >
> > I'm wondering why our PG server is using so little memory... The system has
> > 2GB of memory, though only around 200MB of it are used. Is there a PG
> > setting to force more memory usag
> I'm wondering why our PG server is using so little memory... The system has
> 2GB of memory, though only around 200MB of it are used. Is there a PG
This is the second time you've said this. Surely you're not implying
there is 1.8GB Free Memory -- rather than 1.8GB in Buffers or Cache.
Send ou
On Aug 11, 2004, at 3:18 PM, Jason Coene wrote:
I'm wondering why our PG server is using so little memory... The system has
2GB of memory, though only around 200MB of it are used. Is there a PG
setting to force more memory usage towards the cache? Additionally, we use
FreeBSD. I've heard that
>
> Right. The point is: is your i/o bottle neck on the read side or the
> write side. With 10-30 inserts/sec and fsync off, it's definitely on
> the read side. What's interesting is that such a low insert load is
> causing i/o storm problems. How does your app run with fsync on?
>
> With rea
On Tue, 2004-08-10 at 10:18 -0700, Josh Berkus wrote:
> Guys, just so you know:
>
> OSDL did some testing and found Ext3 to be perhaps the worst FS for PostgreSQL
> -- although this testing was with the default options. Ext3 involved an
> almost 40% write performance penalty compared with Ext2
> We do have a lot of INSERT/UPDATE calls, specifically on tables that track
> user sessions, then of course things like comments, etc (where we'll see
> 10-30 INSERT's per second, with TEXT field, and hundreds of reads per
> second). Additionally, our system does use a lot of aggregate functions.
Jason,
One suggestion i have, stay away from adaptec ZCR RAID products, we've
been doing testing on them, and they don't perform well at all.
--brian
On Aug 11, 2004, at 1:08 PM, Jason Coene wrote:
Thanks for all the feedback. To clear it up, we are definitely not CPU
bound at the moment. Any
On 8/11/2004 2:21 AM, Scott Marlowe wrote:
On Tue, 2004-08-10 at 23:42, Er Galvão Abbott
Greetings.
I have a question regarding performance of certain datatypes:
I have a field where I will store my clients' phone numbers. I know
that this field will never exceed 15 characters, and I will sto
Thanks for all the feedback. To clear it up, we are definitely not CPU
bound at the moment. Any slowdown seems to be disk dependent, or from
serialization due to a long query (due to disk).
We do have a lot of INSERT/UPDATE calls, specifically on tables that track
user sessions, then of cours
> OSDL did some testing and found Ext3 to be perhaps the worst FS for
> PostgreSQL -- although this testing was with the default options. Ext3
> involved an almost 40% write performance penalty compared with Ext2,
> whereas the penalty for ReiserFS and JFS was less than 10%.
>
> This concurs
In gmane.comp.db.postgresql.performance, Shridhar Daithankar wrote:
> On Wednesday 11 Aug 2004 7:59 pm, Jesper Krogh wrote:
> > The "common" solution, I guess would be to store them in the filesystem
> > instead, but I like to have them just in the database; it is a nice, clean
> > database and applica
> On Wed, 2004-08-11 at 17:51, Merlin Moncure wrote:
>
> > One thing you might consider is materialized views. Your aggregate
> > functions are killing you...try to avoid using them (except min/max on
> > an index). Just watch out for mutable functions like now().
> >
> > http://www.varlena.com/
Thanks, Michael.
You and "Evil Azrael" (lol) got me. Never thought about leading zeros.
Varchar it is!
Thanks a lot,
--
Er Galvão Abbott
Desenvolvedor Web
http://www.galvao.eti.br/
[EMAIL PROTECTED]
Michael Glaesemann wrote:
On Aug 11, 2004, at 4:27 PM, Er Galvão Abbott wrote:
It
Jesper Krogh <[EMAIL PROTECTED]> writes:
> I'd like some advice on storing binary data in the database.
> Currently I have about 300.000 "Bytea" records of 320.000 Bytes each in the
> database. It works quite well but I have a feeling that it actually is
> slowing the database down on queries only relate
On Wed, Aug 04, 2004 at 14:00:39 +0200,
Ulrich Wisser <[EMAIL PROTECTED]> wrote:
This topic really belongs on the performance list. I have copied that
list and set followups to go there and copy you.
>
> my web application grows slower and slower over time. After some
> profiling I came to th
On Wed, 2004-08-11 at 17:51, Merlin Moncure wrote:
> One thing you might consider is materialized views. Your aggregate
> functions are killing you...try to avoid using them (except min/max on
> an index). Just watch out for mutable functions like now().
>
> http://www.varlena.com/varlena/Gener
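PostgreSQL had no built-in materialized views at the time, so the usual substitute for hot COUNT() subselects was a summary table kept current by triggers. A hypothetical sketch in 7.4-era syntax (all table, column, and function names invented for illustration):

```sql
-- Cache per-user comment counts so hot-path queries avoid COUNT(*).
CREATE TABLE comment_counts (
    userid  integer PRIMARY KEY,
    n       integer NOT NULL DEFAULT 0
);

CREATE FUNCTION bump_comment_count() RETURNS trigger AS '
BEGIN
    UPDATE comment_counts SET n = n + 1 WHERE userid = NEW.userid;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER comments_count_trg
    AFTER INSERT ON comments
    FOR EACH ROW EXECUTE PROCEDURE bump_comment_count();
```

Reads then become a cheap single-row lookup against comment_counts instead of an aggregate scan; the cost is one extra UPDATE per insert (and a matching DELETE trigger, omitted here).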
On Wednesday 11 Aug 2004 7:59 pm, Jesper Krogh wrote:
> The "common" solution, I guess would be to store them in the filesystem
> instead, but I like to have them just in the database; it is a nice, clean
> database and application design, and if I can get PostgreSQL to "not
> cache" them then it should
Hi.
Please be a bit patient... I'm quite new to PostgreSQL.
I'd like some advice on storing binary data in the database.
Currently I have about 300.000 "Bytea" records of 320.000 Bytes each in the
database. It works quite well but I have a feeling that it actually is
slowing the database down on queri
> The issue that I think we're seeing is that the performance on the 3Ware
> RAID is quite bad, watching FreeBSD systat will show it at "100% busy" at
> around "3.5 MB/s". When it needs to seek across a table (for, say, an
> aggregate function - typically a COUNT()), it slows the entire server dow
On Aug 11, 2004, at 4:27 PM, Er Galvão Abbott wrote:
It will. As I've said I won't be storing any symbols.
It won't store leading zeros, however. This may or may not be an issue
for you.
test=# create table tel (name_id integer not null, tel_numeric
numeric(15) not null, tel_varchar varchar(15) n
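The leading-zero point can be demonstrated against the quoted test table (the `tel` DDL above is truncated in the archive, so the column names here are taken from the visible fragment):

```sql
-- numeric(15) cannot preserve a leading zero; varchar(15) stores the
-- digits verbatim.
INSERT INTO tel (name_id, tel_numeric, tel_varchar)
VALUES (1, '0123456789', '0123456789');

SELECT tel_numeric, tel_varchar FROM tel;
-- tel_numeric comes back as 123456789; tel_varchar as 0123456789
```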
It will. As I've said I won't be storing any symbols.
--
Er Galvão Abbott
Desenvolvedor Web
http://www.galvao.eti.br/
[EMAIL PROTECTED]
Pierre-Frédéric Caillaud wrote:
Numeric won't store that :
(+33) 4 01 23 45 67
On Wed, 11 Aug 2004 02:42:33 -0300, Er Galvão Abbott
<[EMA