Re: [PERFORM] Error while vacuuming

2011-11-07 Thread Samuel Gendler
On Mon, Nov 7, 2011 at 10:33 PM, Bhakti Ghatkar  wrote:

> Tom,
>
> Currently we are using version 9.0.1.
>
> Which version shall we update to? 9.0.5 or 9.1?
>

9.0.5 should be compatible with your installed db and contain any bug fixes
that have been released.  Which isn't to say that you shouldn't test and
make a backup before upgrading the binaries on your production server, of
course.
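
A quick sanity check before and after swapping binaries (just a sketch;
nothing version-specific):

select version();

Minor releases in the same branch (9.0.1 -> 9.0.5) keep the same on-disk
format, so only the binaries change; moving to 9.1 would need a dump/reload
or pg_upgrade.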

--sam


Re: [PERFORM] Error while vacuuming

2011-11-07 Thread Bhakti Ghatkar
Tom,

Currently we are using version 9.0.1.

Which version shall we update to? 9.0.5 or 9.1?

~Bhakti

On Fri, Nov 4, 2011 at 7:44 PM, Tom Lane  wrote:

> Bhakti Ghatkar  writes:
> > Hi ,
> > While performing full vacuum we encountered the error below:
>
>
> > INFO:  vacuuming "pg_catalog.pg_index"
> > vacuumdb: vacuuming of database "" failed: ERROR:  duplicate key value violates unique constraint "c"
> > DETAIL:  Key (indexrelid)=(2678) already exists.
>
> > We are using Postgres 9.0.1
>
> > Can you please help us out in understanding the cause of this error?
>
> Try updating ... that looks suspiciously like a bug that was fixed a few
> months ago.
>
>regards, tom lane
>


Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-07 Thread Mohamed Hashim
Hi all,

Thanks for all your responses.

Sorry for the late response.

Earlier we used Postgres 8.3.10 on a desktop computer (as the server) with
a dual-core CPU and 4GB RAM; the application was slow there too, and I
didn't change any postgres config settings.

Maybe because of that low-spec hardware, we thought the application was
slow, so we opted for the higher-configuration server (with RAID 1) which I
mentioned earlier.

I thought the application would get faster, but unfortunately there was no
improvement, so I tried changing the postgres config settings and tuning my
queries wherever possible, but I still wasn't able to improve the
performance.


So would it help if we tried a GiST or GIN index on the integer array[]
column (source_detail), with enable_seqscan=off and
default_statistics_target=1000?
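
For illustration, a minimal sketch of the GIN variant (the table name below
is hypothetical, since the actual schema wasn't posted; the built-in GIN
operator class for integer arrays works on 9.0 with no contrib module):

create index bills_source_detail_gin
  on bills using gin (source_detail);

-- containment/overlap predicates of this shape can then use the index
-- instead of a sequential scan:
select * from bills where source_detail @> array[5];

With a usable index in place, enable_seqscan=off shouldn't be needed; if
the planner still picks a seq scan, that would be worth posting with
EXPLAIN ANALYZE output.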

Regards
Hashim



On Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni  wrote:

> On 03.11.2011 17:08, Tomas Vondra wrote:
>
>> On 3 November 2011, 16:02, Mario Weilguni wrote:
>> 
>>
>> No doubt about that, querying tables using conditions on array columns is
>> not the best direction in most cases, especially when those tables are
>> huge.
>>
>> Still, the interesting part here is that the OP claims this worked just
>> fine in the older version and after an upgrade the performance suddenly
>> dropped. This could be caused by many things, and we're just guessing
>> because we don't have any plans from the old version.
>>
>> Tomas
>>
>>
>>
> Not really; Mohamed always said he has 9.0.3, and Marcus Engene wrote about
> problems after the migration from 8.x to 9.x. Or did I miss something here?
>
> Regards,
> Mario



-- 
Regards
Mohamed Hashim.N
Mobile:09894587678


Re: [PERFORM] Subquery in a JOIN not getting restricted?

2011-11-07 Thread Jay Levitt

Jay Levitt wrote:

And yep! When I do a CREATE TABLE AS from that view, and add an index on
user_id, it works just as I'd like.


Or not.  Feel free to kick me back over to pgsql-novice, but I don't get why 
the GROUP BY in this subquery forces it to scan the entire users table (seq 
scan here, index scan on a larger table) when there's only one row in users 
that can match:


create table questions (
  id int not null primary key,
  user_id int not null
);
insert into questions
  select generate_series(1,1100), (random()*2000)::int;

create table users (
  id int not null primary key
);
insert into users select generate_series(1, 2000);

vacuum freeze analyze;

explain analyze
select questions.id
from questions
join (
  select u.id
  from users as u
  group by u.id
) as s
on s.id = questions.user_id
where questions.id = 1;


Hash Join  (cost=42.28..89.80 rows=2 width=4) (actual time=0.857..1.208 rows=1 loops=1)
  Hash Cond: (u.id = questions.user_id)
  ->  HashAggregate  (cost=34.00..54.00 rows=2000 width=4) (actual time=0.763..1.005 rows=2000 loops=1)
        ->  Seq Scan on users u  (cost=0.00..29.00 rows=2000 width=4) (actual time=0.003..0.160 rows=2000 loops=1)
  ->  Hash  (cost=8.27..8.27 rows=1 width=8) (actual time=0.015..0.015 rows=1 loops=1)
        Buckets: 1024  Batches: 1  Memory Usage: 1kB
        ->  Index Scan using questions_pkey on questions  (cost=0.00..8.27 rows=1 width=8) (actual time=0.012..0.013 rows=1 loops=1)
              Index Cond: (id = 1)
Total runtime: 1.262 ms

This is on patched 9.0.5, built earlier today.  The real query has
aggregates, so it really does need the GROUP BY... I think.
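
For the minimal case above, where the GROUP BY merely deduplicates a
primary key, one rewrite that lets the planner drive the lookup from the
users index is EXISTS (a sketch only; it stops applying once the real
aggregates are in play):

explain analyze
select questions.id
from questions
where questions.id = 1
  and exists (
    select 1
    from users as u
    where u.id = questions.user_id
  );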


Jay



Re: [PERFORM] Subquery in a JOIN not getting restricted?

2011-11-07 Thread Jay Levitt

Kevin Grittner wrote:

"Kevin Grittner"  wrote:


If I had made the scores table wider, it might have gone from the
user table to scores on the index.


Bah.  I just forgot to put an index on scores.user_id.  With that
index available it did what you were probably expecting -- seq scan
on questions, nested loop index scan on users, nested loop index
scan on scores.

You weren't running your test with just a few rows in each table and
expecting the same plan to be generated as for tables with a lot of
rows, were you?


No, we're a startup - we only have 2,000 users and 17,000 scores!  We don't 
need test databases yet...


But I just realized something I'd completely forgotten (or blocked): scores
is a view.  And views don't have indexes.  The underlying tables are
ultimately indexed by user_id, but I can believe that Postgres doesn't think
that's a cheap way to do it - especially since we're still using stock
tuning settings (I know), so its costs are all screwed up.


And yep!  When I do a CREATE TABLE AS from that view, and add an index on 
user_id, it works just as I'd like.  I've been meaning to persist that view 
anyway, so that's what I'll do.
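
For the record, a sketch of that persistence step with hypothetical names
(9.0 has no materialized views, so a plain table plus a rebuild step stands
in for one):

create table scores_flat as
  select * from scores;          -- scores is the view

create index scores_flat_user_id_idx
  on scores_flat (user_id);

analyze scores_flat;             -- give the planner fresh statistics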


Thanks for the push in the right direction.

Jay



Re: [PERFORM] Subquery in a JOIN not getting restricted?

2011-11-07 Thread Kevin Grittner
"Kevin Grittner"  wrote:
 
> If I had made the scores table wider, it might have gone from the
> user table to scores on the index.
 
Bah.  I just forgot to put an index on scores.user_id.  With that
index available it did what you were probably expecting -- seq scan
on questions, nested loop index scan on users, nested loop index
scan on scores.
 
You weren't running your test with just a few rows in each table and
expecting the same plan to be generated as for tables with a lot of
rows, were you?
 
-Kevin



[PERFORM] WAL partition filling up after high WAL activity

2011-11-07 Thread Richard Yen
Hi Everyone,

I recently saw a crash on one of our databases, and I was wondering if this
might be an indication of something that is unexpectedly causing WAL to
create more files than it needs?

Nov  5 16:18:27 localhost postgres[25092]: [111-1] 2011-11-05 16:18:27.524
PDT [user=slony,db=uk dbhub.com (35180)
PID:25092 XID:2142895751]PANIC:  could not write to file
"pg_xlog/xlogtemp.25092": No space left on device
Nov  5 16:18:27 localhost postgres[25092]: [111-2] 2011-11-05 16:18:27.524
PDT [user=slony,db=uk dbhub.com (35180)
PID:25092 XID:2142895751]STATEMENT:  select "_sac_uk".forwardConfirm(2, 4, '
5003717188', '2011-11-05 16:18:26.977112');
Nov  5 16:18:27 localhost postgres[32121]: [7-1] 2011-11-05 16:18:27.531
PDT [user=,db=  PID:32121 XID:0]LOG:  server process (PID 25092) was
terminated by signal 6: Aborted
Nov  5 16:18:27 localhost postgres[32121]: [8-1] 2011-11-05 16:18:27.531
PDT [user=,db=  PID:32121 XID:0]LOG:  terminating any other active server
processes

If you look at this graph (http://cl.ly/2y0W27330t3o2J281H3K), the
partition actually fills up, and the logs show that postgres crashed.
 After postgres crashed, it automatically restarted, cleared out its WAL
files, and began processing things again at 4:30PM.

From the graph, it looks like a vacuum on table m_dg_read finished at
4:08PM, which might explain why the downward slope levels off for a few
minutes:

> Nov  5 16:08:03 localhost postgres[18741]: [20-1] 2011-11-05 16:08:03.400
> PDT [user=,db=  PID:18741 XID:0]LOG:  automatic vacuum of table
> "uk.public.m_dg_read": index scans: 1
> Nov  5 16:08:03 localhost postgres[18741]: [20-2]   pages: 0 removed,
> 65356 remain
> Nov  5 16:08:03 localhost postgres[18741]: [20-3]   tuples: 31770
> removed, 1394263 remain
> Nov  5 16:08:03 localhost postgres[18741]: [20-4]   system usage: CPU
> 2.08s/5.35u sec elapsed 619.39 sec


Looks like right afterwards, it got started on table m_object, which
finished at 4:18PM:

> Nov  5 16:18:19 localhost postgres[18686]: [9-1] 2011-11-05 16:18:19.448
> PDT [user=,db=  PID:18686 XID:0]LOG:  automatic vacuum of table
> "uk.public.m_object": index scans: 1
> Nov  5 16:18:19 localhost postgres[18686]: [9-2]   pages: 0 removed,
> 152862 remain
> Nov  5 16:18:19 localhost postgres[18686]: [9-3]   tuples: 17084
> removed, 12455761 remain
> Nov  5 16:18:19 localhost postgres[18686]: [9-4]   system usage: CPU
> 4.55s/15.09u sec elapsed 1319.98 sec


It could very well be the case that upon the finish of m_object's vacuum,
another vacuum was beginning, and it eventually just crashed because there
was no room for another vacuum to finish.

We encountered a situation like this last summer, on 7/4/2010, for a
different database cluster -- a big vacuum-for-wraparound on a 15GB table
filled the pg_xlog partition -- and that's how we started monitoring the
pg_xlog file size and the wraparound countdown.  It seems some sort of
vacuum-for-wraparound process was happening at the time of this crash, as
we also track the XID to see when we should expect a VACUUM FREEZE (
http://cl.ly/3s1S373I0l0v3E171Z0V).

Some configs:
checkpoint_segments=16
wal_buffers=8MB
#archive_mode=off
checkpoint_completion_target=0.9

Postgres version is 8.4.5

Note also that the pg_xlog partition is 9.7GB.  No other apps run on the
machine besides pgbouncer, so it's highly unlikely that files are written
to this partition by another process.  Also, our five largest tables are
the following:
gm3_load_times  | 2231 MB
m_object_paper  | 1692 MB
m_object  | 1192 MB
m_report_stats  | 911 MB
gm3_mark  | 891 MB

My biggest question is: we know from the docs that there should be no more
than (2 + checkpoint_completion_target) * checkpoint_segments + 1 files.
For us, that would mean no more than 48 files, which equates to 768MB --
far lower than the 9.7GB partition size.  **Why would WAL use up so much
disk space?**
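
Spelling that arithmetic out, with the settings above and the default 16MB
WAL segment size:

  (2 + checkpoint_completion_target) * checkpoint_segments + 1
    = (2 + 0.9) * 16 + 1
    = 47.4, i.e. at most 48 segments
  48 segments * 16MB per segment = 768MB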

Thanks for reading, and thanks in advance for any help you may provide.
--Richard


Re: [PERFORM] Subquery in a JOIN not getting restricted?

2011-11-07 Thread Kevin Grittner
Jay Levitt  wrote:
> When I run the following query:
> 
> select questions.id
> from questions
> join (
>  select u.id as user_id
>  from users as u
>  left join scores as s
>  on s.user_id = u.id
> ) as subquery
> on subquery.user_id = questions.user_id;
> 
> the subquery is scanning my entire user table, even though it's
> restricted by the outer query.  (My real subquery is much more
> complicated, of course, but this is the minimal fail case.)
 
It's not a fail case -- it's choosing the plan it thinks is cheapest
based on your costing parameters and the statistics gathered by the
latest ANALYZE of the data.
 
> Is this just not a thing the optimizer can do?  Are there ways to
> rewrite this, still as a subquery, that will be smart enough to
> only produce the one row of subquery that matches
> questions.user_id?
 
Well, it can certainly produce the plan you seem to want, if it
looks less expensive.  It kinda did with the following script:
 
create table questions
  (id int not null primary key, user_id int not null);

insert into questions
  select generate_series(1,100), (random()*100)::int;

create table users (id int not null primary key);

insert into users select generate_series(1, 100);

create table scores
  (id int not null primary key, user_id int not null);

insert into scores select n, n
  from (select generate_series(1,1000000)) x(n);

vacuum freeze analyze;

explain analyze
select questions.id
from questions
join (
 select u.id as user_id
 from users as u
 left join scores as s
 on s.user_id = u.id
) as subquery
on subquery.user_id = questions.user_id;
 
Here's the plan I got, which scans the questions and then uses the
index to join to the users.  It's throwing the result of that into a
hash table which is then checked from a sequential scan of the
scores table.  If I had made the scores table wider, it might have
gone from the user table to scores on the index.
 
 Hash Right Join
 (cost=438.23..18614.23 rows=100 width=4)
 (actual time=2.776..161.237 rows=100 loops=1)
   Hash Cond: (s.user_id = u.id)
   ->  Seq Scan on scores s
   (cost=0.00..14425.00 rows=1000000 width=4)
   (actual time=0.025..77.876 rows=1000000 loops=1)
   ->  Hash
   (cost=436.98..436.98 rows=100 width=8)
   (actual time=0.752..0.752 rows=100 loops=1)
 Buckets: 1024  Batches: 1  Memory Usage: 4kB
 ->  Nested Loop
 (cost=0.00..436.98 rows=100 width=8)
 (actual time=0.032..0.675 rows=100 loops=1)
   ->  Seq Scan on questions
   (cost=0.00..2.00 rows=100 width=8)
   (actual time=0.010..0.042 rows=100 loops=1)
   ->  Index Only Scan using users_pkey on users u
   (cost=0.00..4.34 rows=1 width=4)
   (actual time=0.005..0.005 rows=1 loops=100)
 Index Cond: (id = questions.user_id)
 Total runtime: 168.585 ms
 
If you want help figuring out whether it is choosing the fastest
plan, and how to get it do better if it is not, please read this
page and post the relevant information:
 
http://wiki.postgresql.org/wiki/SlowQueryQuestions
 
-Kevin



Re: [PERFORM] Subquery in a JOIN not getting restricted?

2011-11-07 Thread Tom Lane
Jay Levitt  writes:
> When I run the following query:
> select questions.id
> from questions
> join (
>  select u.id as user_id
>  from users as u
>  left join scores as s
>  on s.user_id = u.id
> ) as subquery
> on subquery.user_id = questions.user_id;

> the subquery is scanning my entire user table, even though it's restricted 
> by the outer query.  (My real subquery is much more complicated, of course, 
> but this is the minimal fail case.)

> Is this just not a thing the optimizer can do?

Every release since 8.2 has been able to reorder joins in a query
written that way.  Probably it just thinks it's cheaper than the
alternatives.

(Unless you've reduced the collapse_limit variables for some reason?)

regards, tom lane



[PERFORM] Subquery in a JOIN not getting restricted?

2011-11-07 Thread Jay Levitt

When I run the following query:

select questions.id
from questions
join (
select u.id as user_id
from users as u
left join scores as s
on s.user_id = u.id
) as subquery
on subquery.user_id = questions.user_id;

the subquery is scanning my entire user table, even though it's restricted 
by the outer query.  (My real subquery is much more complicated, of course, 
but this is the minimal fail case.)


Is this just not a thing the optimizer can do?  Are there ways to rewrite 
this, still as a subquery, that will be smart enough to only produce the one 
row of subquery that matches questions.user_id?


Jay Levitt



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-07 Thread Claudio Freire
On Fri, Nov 4, 2011 at 4:07 PM, Claudio Freire  wrote:
>> Here again, you've set it to ten times the default value.  That
>> doesn't seem like a good idea.  I would start with the default and
>> tune down.
>
> Already did that. Waiting to see how it turns out.

Nope, still happening with those changes.

Though it did make sense that those settings were too high, it didn't
fix the strange blocking.

Is it possible that the query is locking all the tuples hit, rather
than only the ones selected?

Because the index used to reach the tuple has to walk across around 3k
tuples before finding the one that needs locking. They're supposed to be in
memory already (they're quite hot), which is why selecting is fast, but
maybe it's trying to lock all 3k tuples?

I don't know, I'm just throwing punches blindly at this point.
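
In case it helps aim the punches, a rough diagnostic sketch (a plain
pg_locks self-join, usable on 9.0) showing which granted lock each stuck
backend is queued behind:

select waiting.pid  as waiting_pid,
       holder.pid   as holding_pid,
       waiting.locktype,
       waiting.mode as wanted_mode,
       holder.mode  as held_mode
from pg_locks waiting
join pg_locks holder
  on  holder.granted
  and not waiting.granted
  and holder.pid <> waiting.pid
  and holder.locktype      = waiting.locktype
  and holder.database      is not distinct from waiting.database
  and holder.relation      is not distinct from waiting.relation
  and holder.page          is not distinct from waiting.page
  and holder.tuple         is not distinct from waiting.tuple
  and holder.transactionid is not distinct from waiting.transactionid;

If the waiters show locktype = 'transactionid', they're queued behind
another transaction's row lock rather than anything the index walk touches.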



Re: [PERFORM] Predicates not getting pushed into SQL function?

2011-11-07 Thread Jay Levitt

Jay Levitt wrote:

Yes, that patch works great! Oddly enough, the workaround now does NOT work;
functions returning SETOF named composite types don't get inlined, but
functions returning the equivalent TABLE do get inlined. Let me know if you
need a failcase, but the bug doesn't actually affect me now :)


Never mind... I left a "strict" in my test.  Works great all around.



Re: [PERFORM] Predicates not getting pushed into SQL function?

2011-11-07 Thread Jay Levitt

Tom Lane wrote:
> Please don't send HTML-only email to these lists.

Oops - new mail client, sorry.

> Anyway, the answer seems to be that inline_set_returning_function needs
> some work to handle cases with declared OUT parameters.  I will see
> about fixing that going forward, but in existing releases what you need
> to do is declare the function as returning SETOF some named composite
> type


Yes, that patch works great!  Oddly enough, the workaround now does NOT 
work; functions returning SETOF named composite types don't get inlined, but 
functions returning the equivalent TABLE do get inlined.  Let me know if you 
need a failcase, but the bug doesn't actually affect me now :)


Jay

>
> create type matcher_result as (user_id int, match int);
>
> create or replace function matcher() returns setof matcher_result as ...




Re: [PERFORM] PostgreSQL perform poorly on VMware ESXi

2011-11-07 Thread Ivan Voras
On 07/11/2011 11:36, Lucas Mocellin wrote:
> Hi everybody,
> 
> I'm having some issues with PostgreSQL 9.0.3 running on FreeBSD 8.2 on top
> of VMware ESXi 4.1 U1.

I hope your hardware is Nehalem-based or newer...

> The problem is queries are taking too long, and sometimes one query "blocks"
> everybody else from using the DB as well.

Ok, so multiple users connect to this one database, right?

> I'm a network administrator, not a DBA, so many things here can be "newbie"
> for you guys, so please be patient. :)

First, did you configure the server and PostgreSQL at all?

For FreeBSD, you'll probably need this in sysctl.conf:

vfs.hirunningspace=8388608
vfs.lorunningspace=6291456
vfs.read_max=128

and for PostgreSQL, read these:

http://www.revsys.com/writings/postgresql-performance.html
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
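
Those guides walk through the usual first-pass postgresql.conf knobs; the
values here are placeholders only, to be scaled to this VM's 4GB of RAM:

shared_buffers = 1GB           # ~25% of RAM is the common starting point
effective_cache_size = 3GB     # planner hint: RAM available for caching
work_mem = 16MB                # per sort/hash operation, so keep it modest
checkpoint_segments = 16       # spread out checkpoint I/O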

> I always think the bottleneck is disk I/O, as I can see from the vSphere
> performance view, but the virtual machine is using the SATA disk
> exclusively, with no concurrency with other machines.

I don't see why concurrency with other machines is relevant. Are you
complaining that multiple users accessing a single database are blocked
while one large query is executing or that this one query blocks other VMs?

If the single query generates a larger amount of IO than your VM host
can handle then you're probably out of luck. Virtualization is always
bad for IO. You might try increasing the amount of memory for the
virtual machine in the hope that more data will be cached.

> How do you guys deal with virtualization? Any tips/recommendations? Does
> the disk I/O theory make sense? Any other suggestions?

Are you sure it's IO? Run "iostat 1" for 10 seconds while the query is
executing and post the results.






Re: [PERFORM] PostgreSQL perform poorly on VMware ESXi

2011-11-07 Thread k...@rice.edu
On Mon, Nov 07, 2011 at 08:36:10AM -0200, Lucas Mocellin wrote:
> Hi everybody,
> 
> I'm having some issues with PostgreSQL 9.0.3 running on FreeBSD 8.2 on top
> of VMware ESXi 4.1 U1.
> 
> The problem is queries are taking too long, and sometimes one query "blocks"
> everybody else from using the DB as well.
> 
> I'm a network administrator, not a DBA, so many things here can be "newbie"
> for you guys, so please be patient. :)
> 
> Cloning this database to another, non-virtualized machine (any "crap"
> machine), it runs way faster. I've measured one specific "SELECT": on the
> virtualized system (4GB RAM, 4 processors, SATA disk) it took 15 minutes!
> On the crap machine (2GB RAM, 1 processor, SATA disk, NOT virtualized),
> it took only 2!
> 
> I always think the bottleneck is disk I/O, as I can see from the vSphere
> performance view, but the virtual machine is using the SATA disk
> exclusively, with no concurrency with other machines.
> 
> How do you guys deal with virtualization? Any tips/recommendations? Does
> the disk I/O theory make sense? Any other suggestions?
> 
> thanks in advance!
> 
> Lucas.

Hi Lucas,

Virtualization is not a magic bullet. It has many advantages but also
has disadvantages. The resources of the virtual machine are always a
subset of the host machine resources. In addition, the second layer of
disk I/O indirection through the virtual disk can effectively turn
a sequential I/O pattern into a random I/O pattern with the accompanying
10:1 decrease in I/O throughput.

I would recommend testing your I/O on your virtual machine.

Regards,
Ken



Re: [PERFORM] PostgreSQL perform poorly on VMware ESXi

2011-11-07 Thread Tomas Vondra
On 7 November 2011, 11:36, Lucas Mocellin wrote:
> Hi everybody,
>
> I'm having some issues with PostgreSQL 9.0.3 running on FreeBSD 8.2 on top
> of VMware ESXi 4.1 U1.
>
> The problem is queries are taking too long, and sometimes one query
> "blocks" everybody else from using the DB as well.
>
> I'm a network administrator, not a DBA, so many things here can be
> "newbie"
> for you guys, so please be patient. :)
>
> Cloning this database to another, non-virtualized machine (any "crap"
> machine), it runs way faster. I've measured one specific "SELECT": on the
> virtualized system (4GB RAM, 4 processors, SATA disk) it took 15 minutes!
> On the crap machine (2GB RAM, 1 processor, SATA disk, NOT virtualized),
> it took only 2!

What is this "cloning" thing? Dump/restore? Something at the
filesystem-device level?

My wild guess is that the autovacuum is not working, thus the database is
bloated and the cloning removes the bloat.
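
A quick way to sanity-check that guess on the slow machine
(pg_stat_user_tables carries autovacuum timestamps and dead-tuple counts):

select relname, last_autovacuum, last_autoanalyze, n_dead_tup
from pg_stat_user_tables
order by n_dead_tup desc
limit 10;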

Post EXPLAIN ANALYZE output of the query for both machines (use
explain.depesz.com).

Have you done any benchmarking on the virtualized machine to check the
basic I/O performance? A simple "dd test", bonnie?

Tomas




[PERFORM] PostgreSQL perform poorly on VMware ESXi

2011-11-07 Thread Lucas Mocellin
Hi everybody,

I'm having some issues with PostgreSQL 9.0.3 running on FreeBSD 8.2 on top
of VMware ESXi 4.1 U1.

The problem is queries are taking too long, and sometimes one query "blocks"
everybody else from using the DB as well.

I'm a network administrator, not a DBA, so many things here can be "newbie"
for you guys, so please be patient. :)

Cloning this database to another, non-virtualized machine (any "crap"
machine), it runs way faster. I've measured one specific "SELECT": on the
virtualized system (4GB RAM, 4 processors, SATA disk) it took 15 minutes!
On the crap machine (2GB RAM, 1 processor, SATA disk, NOT virtualized), it
took only 2!

I always think the bottleneck is disk I/O, as I can see from the vSphere
performance view, but the virtual machine is using the SATA disk
exclusively, with no concurrency with other machines.

How do you guys deal with virtualization? Any tips/recommendations? Does
the disk I/O theory make sense? Any other suggestions?

Thanks in advance!

Lucas.