select count(*) from mtable where day='Mon'
Results:
1. P3 600 MHz, 512 MB RAM, MSSQL. It takes about 4-5
seconds to run. If I run a few queries and everything
is cached, it is sometimes just 1 second.
2. Athlon 1.3 GHz, 1 GB RAM. PostgreSQL takes 7 seconds.
I have played with the buffers setting and
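For context, the buffer knobs being played with here would be the 8.0-era
postgresql.conf settings; a hypothetical starting point for a 1 GB machine
might look like the following (values are illustrative guesses, not
recommendations from this thread):

shared_buffers = 8192            # 8192 x 8kB pages = 64MB of shared cache
effective_cache_size = 65536     # tell the planner the OS caches ~512MB
work_mem = 4096                  # per-sort/hash memory, in kB

(Both shared_buffers and effective_cache_size are counted in 8 kB pages
on 8.0.)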
Post the result of this for us:
explain analyze select count(*) from mtable where
day='Mon';
On both machines.
Hi Chris --
PostgreSQL Machine:
Aggregate (cost=140122.56..140122.56 rows=1 width=0) (actual time=24516.000..24516.000 rows=1 loops=1)
  -> Index Scan using day on mtable
mark durrant wrote:
PostgreSQL Machine:
Aggregate (cost=140122.56..140122.56 rows=1 width=0) (actual time=24516.000..24516.000 rows=1 loops=1)
  -> Index Scan using day on mtable (cost=0.00..140035.06 rows=35000 width=0) (actual time=47.000..21841.000 rows=1166025 loops=1)
        Index Cond:
Mark,
MSSQL Machine:
That EXPLAIN ANALYZE command doesn't work for MSSQL;
try this:
set showplan_all on
go
select ...
go
Harald
Hi all,
From whatever reading and surfing I have done, I have found that postgres is
good. Actually I myself am a fan of postgres as compared to mysql. However I
want to have some frank opinions before I decide something. Following are
some of the aspects of my schema, and our concerns --
- We
I am not trying to start a mysql vs postgres war, so please don't
misunderstand me. I tried to look around for mysql vs postgres articles,
but most of them said mysql is better in speed. However, those articles were
very old, so I don't know about the recent state of things. Please comment !!!
It is my
Hi Josh,
Thanks for the prompt reply !! Actually migration is inevitable. We have a
totally messed up schema, not normalized and stuff like that. So the goal of
the migration is to get a new and better normalized schema. That part is
done already. Now the decision point is, should we go with
On Tue, May 24, 2005 at 08:36:36 -0700,
mark durrant [EMAIL PROTECTED] wrote:
--MSSQL's ability to hit the index only and not having
to go to the table itself results in a _big_
performance/efficiency gain. If someone who's in
development wants to pass this along, it would be a
nice
I took an unbiased look and did some tests.
Objectively, for me MySQL was not an improvement in speed.
I had read the benchmarks in PC Magazine from a while back as well.
I did some tests using ODBC and .NET connections and also used aqua studios
(hooks up to both databases) and found postgres a
Amit,
- We have lot of foreign keys between the tables
Do you need these keys to be enforced? Last I checked, MySQL was still
having trouble with foreign keys.
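To make the question concrete: in Postgres a declared foreign key is
checked on every insert and update. A minimal sketch, with hypothetical
table names:

CREATE TABLE customer (
    id   integer PRIMARY KEY,
    name text
);

CREATE TABLE orders (
    id          integer PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES customer(id)
);

-- fails with a foreign key violation unless customer 42 exists
INSERT INTO orders (id, customer_id) VALUES (1, 42);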
- Most of the DB usage is Selects. We would have some inserts but that
would be like a nightly or a monthly process
So
- Most of the DB usage is Selects. We would have some inserts but that
would be like a nightly or a monthly process
So transaction integrity is not a real concern? This sounds like a data
warehouse; wanna try Bizgres? (www.bizgres.org)
I took a look at this. I have a few concerns with
On Tue, May 24, 2005 at 01:56:54PM -0400, Amit V Shah wrote:
I took a look at this. I have a few concerns with bizgres though -- I am
using jetspeed portal engine and Hibernate as my O/R Mapping layer.
If you have problems with performance, you might want to look into using JDBC
directly
Amit,
I took a look at this. I have a few concerns with bizgres though -- I am
using jetspeed portal engine and Hibernate as my O/R Mapping layer. I know
for sure that they don't support bizgres. Now the question is what
difference is there between bizgres and postgres ... I guess I will try
I'm far from an expert, so this may be off-base... but
perhaps a suggestion would be to allow a hint to be
sent to the optimizer if the user doesn't care that
the result is approximate; maybe then this wouldn't
require adding more overhead to the indexes.
MSSQL has something like this with
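On the Postgres side, one rough substitute for an exact count(*) is to
read the planner's own statistics out of pg_class; it is only as fresh
as the last VACUUM/ANALYZE, but it costs almost nothing:

-- approximate row count from planner statistics (not transactionally exact)
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mtable';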
It's common knowledge, it seems, that MySQL without transactions will be
a lot faster than Postgres on Inserts. And on Updates too, that is, unless
you have more than a few concurrent connections, at which point
the MySQL full table lock will just kill everything. And you don't
Amit V Shah wrote:
Hi Josh,
Thanks for the prompt reply !! Actually migration is inevitable. We have a
totally messed up schema, not normalized and stuff like that. So the goal of
the migration is to get a new and better normalized schema. That part is
done already. Now the decision point is,
Until you start worrying about MVCC - we have had problems with the
MSSQL implementation of read consistency because of this 'feature'.
Alex Turner
NetEconomist

On 5/24/05, Bruno Wolff III [EMAIL PROTECTED] wrote:
On Tue, May 24, 2005 at 08:36:36 -0700, mark durrant [EMAIL PROTECTED] wrote:
Folks,
This gets brought up a lot. The problem is that the
index doesn't include
information about whether the current transaction
can see the referenced
row. Putting this information in the index will add
significant overhead
to every update and the opinion of the developers is
Pretty much. There has been discussion about allowing index-only access
to frozen tables, i.e. archive partitions. But it all sort of hinges on
someone implementing it and testing
Would be interesting as a parameter to set at index creation (i.e. if you
know this table will have a
On Tue, May 24, 2005 at 04:35:14PM -0700, Josh Berkus wrote:
Pretty much. There has been discussion about allowing index-only access to
frozen tables, i.e. archive partitions. But it all sort of hinges on
someone implementing it and testing
Is there any way to expose the planner
Michael Stone wrote:
On Tue, May 24, 2005 at 04:35:14PM -0700, Josh Berkus wrote:
Pretty much. There has been discussion about allowing index-only
access to frozen tables, i.e. archive partitions. But it all sort
of hinges on someone implementing it and testing
Is there any way to
--As Chris pointed out, how real-world is this test?
His point is valid. The database we're planning will
have a lot of rows and require a lot of summarization
(hence my attempt at a test), but we shouldn't be
pulling a million rows at a time.
If you want to do lots of aggregate analysis, I
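One common workaround for heavy summarization is to precompute the
aggregates into a summary table during the nightly load; a sketch against
the mtable example from this thread:

-- rebuild the per-day summary once per load
CREATE TABLE mtable_day_counts AS
    SELECT day, count(*) AS n
    FROM mtable
    GROUP BY day;

-- summary queries now read a handful of rows instead of a million
SELECT n FROM mtable_day_counts WHERE day = 'Mon';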
If I pg_dump that database then create a new database (e.g. tempdb)
and upload the dump file (thus making a duplicate) then the same query
only takes 190ms !!
Vacuum, vacuum analyse, and vacuum full analyse do not seem to have an
impact on these times.
Damn, for some reason I didn't read
SpaceBallOne wrote:
Wondering if someone could explain a peculiarity for me:
We have a database which takes 1000ms to perform a certain query on.
If I pg_dump that database then create a new database (e.g. tempdb)
and upload the dump file (thus making a duplicate) then the same query
only
What version of postgres?
8.0.2 ... but I think I've seen this before on 7.3 ...
There are a few possibilities. If you are having a lot of updates to the
table, you can get index bloat. And vacuum doesn't fix indexes. You have
to REINDEX to do that. Though REINDEX has the same lock that
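For reference, the command itself is simple (table name hypothetical),
but note the lock:

-- rebuilds every index on the table; takes an exclusive lock,
-- so schedule it for a quiet period
REINDEX TABLE mytable;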
Would CLUSTER / REINDEX still have an effect if our queries were done
via sequential scan?
SELECTs don't write to the database, so they have no effect at all on
vacuuming/analyzing. You only need to worry about that with writes.
This is an old database (as in built by me when I
was just
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
If I pg_dump that database then create a new database (e.g. tempdb)
and upload the dump file (thus making a duplicate) then the same query
only takes 190ms !!
Vacuum, vacuum analyse, and vacuum full analyse do not seem to have an
impact
SpaceBallOne wrote:
What version of postgres?
8.0.2 ... but I think I've seen this before on 7.3 ...
There are a few possibilities. If you are having a lot of updates to the
table, you can get index bloat. And vacuum doesn't fix indexes. You have
to REINDEX to do that. Though REINDEX has
On Tue, May 24, 2005 at 21:39:15 -0500,
John A Meinel [EMAIL PROTECTED] wrote:
By the way, I think doing:
CREATE DATABASE tempdb WITH TEMPLATE = originaldb;
is a much faster way of doing dump and load. I *think* it would recreate
indexes, etc. If it just does a copy it may not show the
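For what it's worth, CREATE DATABASE with a template does a file-level
copy of the source database's directory, so indexes come across
byte-for-byte, bloat included - which means it probably would not show
the dump-and-reload speedup. The source database must have no active
connections while it is copied:

-- file-level copy of originaldb, indexes (and any bloat) included
CREATE DATABASE tempdb WITH TEMPLATE = originaldb;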