Reini Urban wrote:
Merlin Moncure schrieb:
A good benchmark of our application performance is the time it takes to read the entire bill of materials for a product. This is a recursive read of about 2500 records in the typical case (2408 in the test case).
I always knew that COBOL
This was an interesting Win32/Linux comparison. I expected Linux to scale better, but I was surprised how poorly XP scaled. It reinforces our perception that Win32 is for low-traffic servers.
That's a bit harsh given the lack of any further investigation so far, isn't it?
On Mon, 22 Nov 2004 16:54:56 -0800, Josh Berkus [EMAIL PROTECTED] wrote:
Alexandre,
What is the common approach? Should I use the product_code directly as my ID, or use a sequential number for speed? (I did the same for the company_id; this is a 'serial' and not the short name of the
All,
Well, you should still escape any strings you're getting from a web page so you can ensure you're not subject to a SQL injection attack, even if you're expecting integers.
Thanks,
Peter Darley
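The safest form of the escaping advised above is not to interpolate user input into the SQL text at all, but to bind it as a parameter. A minimal server-side sketch using PostgreSQL's PREPARE/EXECUTE (the table and column names are hypothetical):

```sql
-- Hypothetical users table; the parameter value travels separately
-- from the SQL text, so it can never change the statement's structure
PREPARE get_user(integer) AS
    SELECT user_id, name FROM users WHERE user_id = $1;

EXECUTE get_user(42);
```

Client-side drivers expose the same mechanism through placeholder syntax, which is generally preferable to hand-rolled escaping.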
Well, your framework should do this for you: integer specified in your database object class
Is this for PostgreSQL Cygwin? You surely can't mean for all server tasks - if so, I would say that's *way* off. There is a difference, but it's more along the line of a single-digit percentage in my experience - provided you config your machines reasonably, of course.
(In my experience, I ran quite a few file system benchmarks in RHAS x86-64 and FC2 x86-64 on a Sun V40z. I did see very consistent 50% improvements in bonnie++ moving from RHAS to FC2 with ext2/ext3 on SAN.)
On Sun, 2004-11-14 at 23:51 -0800, William Yu wrote:
Greg Stark wrote:
William Yu [EMAIL PROTECTED]
Mike Mascari [EMAIL PROTECTED] writes:
When I query the view with a simple filter, I get:
explain analyze select * from p_areas where deactive is null;
The problem seems to be here:
- Seq Scan on _areas a (cost=0.00..2.48 rows=1 width=163) (actual time=0.037..0.804 rows=48 loops=1)
My point was that there are two failure cases --- one where the cache is slightly out of date compared to the db server --- these are cases where the cache update is slightly before/after the commit.
I was thinking about this and ways to minimize this even further. Have
memcache clients add
Tom Lane wrote:
Mike Mascari [EMAIL PROTECTED] writes:
When I query the view with a simple filter, I get:
explain analyze select * from p_areas where deactive is null;
The problem seems to be here:
- Seq Scan on _areas a (cost=0.00..2.48 rows=1 width=163) (actual time=0.037..0.804 rows=48
Dave Page wrote:
-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED]
Sent: 23 November 2004 15:06
To: Dave Page
Cc: Merlin Moncure; [EMAIL PROTECTED];
PostgreSQL Win32 port list
Subject: Re: [pgsql-hackers-win32] scalability issues on win32
The
--- Mike Mascari [EMAIL PROTECTED] wrote:
Tom Lane wrote:
Mike Mascari [EMAIL PROTECTED] writes:
When I query the view with a simple filter, I get:
explain analyze select * from p_areas where deactive is null;
The problem seems to be here:
- Seq Scan on
Jaime Casanova [EMAIL PROTECTED] writes:
Tom Lane wrote:
Why is it so completely off about the selectivity
of the IS NULL clause?
null values are not indexable, is that your question?
Uh, no. The problem is that the IS NULL condition matched all 48 rows of the table, but the planner thought
Mike Mascari [EMAIL PROTECTED] writes:
Tom Lane wrote:
Why is it so completely off about the selectivity of the IS NULL clause?
I think this is a bug in ANALYZE not constructing statistics for columns whose data is entirely NULL:
Um ... doh ... analyze.c about line 1550:
/* We can only
Alexandre Leclerc [EMAIL PROTECTED] writes:
Thanks for those tips. I'll print and keep them. So in my case, the product_code being varchar(24) is: 4 bytes + string size (so possibly up to 24) = possibly 28 bytes. I did the good thing using a serial. For my shorter keys (4 bytes + up to 6
Hi everyone,
Can anyone please explain postgres' behavior on our index.
I did the following query tests on our database:
db=# create index chatlogs_date_idx on chatlogs (date);
CREATE
db=# explain select date from chatlogs where date='11/23/04';
NOTICE: QUERY PLAN:
Index
Well, you just selected a whole lot more rows... What's the total number of rows in the table?
In general, what I remember from reading on the list is that when there's no upper bound on a query like this, the planner is more likely to choose a seq. scan than an index scan. Try to give your