On Thu, May 15, 2014 at 11:01 AM, Johan De Meersman wrote:
> ----- Original Message -----
> From: "Larry Martell"
> Subject: Re: Performance boost by splitting up large table?
>
> This table is queried based on requests from the users. There are 10
> different lookup columns they can specify, and they can provide any or

That
On Thu, May 15, 2014 at 4:14 AM, Johan De Meersman wrote:
> You've already had some good advice, but there's something much simpler
> that will also give you a significant boost: a covering index.
>
> Simply put, the engine is smart enough to not bother with row lookups if
> everything you asked for is already in the index it was using. You'll need to
> kee
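The covering-index idea above can be sketched as follows; this is a minimal runnable demo using SQLite via Python (MySQL/InnoDB behaves analogously, though its EXPLAIN output looks different), and the table and column names are invented for illustration.

```python
import sqlite3

# Hedged sketch of a covering index: the index holds every column the
# query touches, so the engine answers from the index alone with no row
# lookups. Names here are made up; SQLite stands in for MySQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (event_id INTEGER, status TEXT, payload TEXT)")
con.execute("CREATE INDEX idx_cover ON events (event_id, status)")

# The query only references event_id and status, both present in idx_cover.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT status FROM events WHERE event_id = 42"
).fetchall()
print(plan[0][3])  # mentions 'COVERING INDEX'
```

If the query also selected `payload`, the plan would fall back to ordinary index search plus row lookup, which is exactly the cost a covering index avoids.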
Hi Larry,
On May 14, 2014, at 5:05 AM, Larry Martell wrote:
> We have a table with 254 columns in it. 80% of the time, a very small
> subset of these columns are queried. The other columns are rarely, if
> ever, queried. (But they could be at any time, so we do need to
> maintain them.) Would I
Hi,
You could split the table into two and avoid code changes by creating a
view that matches what the code is looking for.
I think loading a few fields vs. 254 into memory will make a difference, but
only if your SELECT statement lists the specific fields you want and not the
whole row (and also given t
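The split-plus-view idea can be sketched like this, with SQLite via Python standing in for MySQL so the example runs; all table and column names are invented. Hot, frequently queried columns go in one table, the rarely used ones in another, and a view preserves the original shape so application code needs no changes.

```python
import sqlite3

# Hedged sketch of vertical partitioning behind a view (names invented).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE widget_hot  (id INTEGER PRIMARY KEY, name TEXT, status TEXT);
CREATE TABLE widget_cold (id INTEGER PRIMARY KEY, notes TEXT);
-- The view reassembles the original wide-table shape for existing code.
CREATE VIEW widget AS
    SELECT h.id, h.name, h.status, c.notes
    FROM widget_hot h JOIN widget_cold c ON c.id = h.id;
INSERT INTO widget_hot  VALUES (1, 'a', 'ok');
INSERT INTO widget_cold VALUES (1, 'rarely read');
""")
row = con.execute("SELECT name, notes FROM widget WHERE id = 1").fetchone()
print(row)  # ('a', 'rarely read')
```

Queries that touch only hot columns never read the cold table; queries through the view still work unchanged.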
On 8/14/13 10:46 AM, Manuel Arostegui wrote:
2013/8/14 Andy Wallace <awall...@ihouseweb.com>
> Hey all -
>
> We have been focusing on performance in our systems a lot lately, and have
> made some pretty good progress. Upgrading the MySQL engine from 5.1 to 5.5
> was eye-opening.
>
> But there are still issues, and one in particular is vexing. It seems like
> a tuni
What kind of things are you doing? If Data Warehouse 'reports', consider
Summary Tables. Non-trivial, but the 'minutes' will become 'seconds'.
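A summary table can be sketched as follows, with SQLite via Python for a runnable demo (same idea in MySQL); the table names and data are fabricated. Detail rows are pre-aggregated by day so report queries read a handful of summary rows instead of scanning millions of detail rows.

```python
import sqlite3

# Hedged sketch of a summary table (names and data invented).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE detail (ts TEXT, amount REAL)")
con.executemany("INSERT INTO detail VALUES (?, ?)",
                [("2014-05-01", 10.0), ("2014-05-01", 5.0), ("2014-05-02", 7.0)])

# In practice this is refreshed periodically (or kept current by triggers)
# rather than rebuilt for every report.
con.execute("""
CREATE TABLE daily_summary AS
    SELECT substr(ts, 1, 10) AS day, SUM(amount) AS total, COUNT(*) AS n
    FROM detail GROUP BY day
""")
rows = con.execute("SELECT * FROM daily_summary ORDER BY day").fetchall()
print(rows)  # [('2014-05-01', 15.0, 2), ('2014-05-02', 7.0, 1)]
```

Reports then aggregate over `daily_summary`, which stays tiny relative to `detail`.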
> -Original Message-
> From: Bruce Ferrell [mailto:bferr...@baywinds.org]
> Sent: Tuesday, July 30, 2013 7:08 A
On 07/30/2013 04:13 AM, Manivannan S. wrote:
Hi,
I've a table with 10 Million records in MySQL with INNODB engine. Using this
table I am doing some calculations in STORED PROCEDURE and getting the results.
In Stored Procedure I used the base table and trying to process all the records
in the
I think you're reducing the number of rows referenced throughout the proc
by using the view. This might be where you're seeing a performance difference.
If you create an InnoDB table where the structure and row count match the
view, maybe you'll see another difference? I'll wait for Rick James' input
> From: Denis Jedig [mailto:d...@syneticon.net]
> Sent: Wednesday, April 24, 2013 10:50 PM
> To: mysql@lists.mysql.com
> Subject: Re: Performance of delete using in

Larry,
On 25.04.2013 02:19, Larry Martell wrote:
> delete from cdsem_event_message_idx where event_id in ()
> The in clause has around 1,500 items in it.
Consider creating a temporary table, filling it with your "IN"
values and joining it to cdsem_event_message_idx ON event_id for
deletion.
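The temp-table suggestion can be sketched as a runnable demo, using SQLite via Python (SQLite only supports the subquery form; in MySQL you could instead write a multi-table `DELETE i FROM cdsem_event_message_idx i JOIN tmp_ids t USING (event_id)`); the sample data is fabricated.

```python
import sqlite3

# Hedged sketch: load the ids into a temp table and delete against it,
# instead of a 1,500-item IN () literal list.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cdsem_event_message_idx (event_id INTEGER, msg TEXT)")
con.executemany("INSERT INTO cdsem_event_message_idx VALUES (?, 'm')",
                [(i,) for i in range(10)])

con.execute("CREATE TEMP TABLE tmp_ids (event_id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO tmp_ids VALUES (?)", [(2,), (3,), (5,)])
con.execute("""DELETE FROM cdsem_event_message_idx
               WHERE event_id IN (SELECT event_id FROM tmp_ids)""")
remaining = con.execute(
    "SELECT COUNT(*) FROM cdsem_event_message_idx").fetchone()[0]
print(remaining)  # 7
```

The join/subquery form lets the optimizer drive the delete from the small id table rather than parsing an enormous literal list.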
I changed it to delete one row at a time and it's taking 3 minutes.

On Wed, Apr 24, 2013 at 6:52 PM, Larry Martell wrote:
> That is the entire SQL statement - I didn't think I needed to list the
> 1,500 ints that are in the IN clause.
>
> Also want to mention that I ran EXPLAIN on it, and it is using the
> index on event_id.

On Wed, Apr 24, 2013 at 6:49 PM, Michael Dykman wrote:
> You would have to show us the whole SQL state
On 14.05.2012 23:05, Nicolas Rannou wrote:
> *1* create 3 tables:
> user - info about a user
> images - info about an image
> user_image_mapping
>
> *2* create 2 tables
> user - info about a user
> -> a field would contain a list which represents the ids of the images
> the user can
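Option 1, the normalized design, can be sketched as follows (SQLite via Python for a runnable demo; the table names come from the thread, the columns and data are invented). A junction table keeps the user-image relation joinable and indexable, unlike a list of ids packed into a single field, which the database can neither join nor index.

```python
import sqlite3

# Hedged sketch of the junction-table (many-to-many) design.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE user   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE images (id INTEGER PRIMARY KEY, path TEXT);
CREATE TABLE user_image_mapping (
    user_id  INTEGER REFERENCES user(id),
    image_id INTEGER REFERENCES images(id),
    PRIMARY KEY (user_id, image_id)
);
INSERT INTO user   VALUES (1, 'alice');
INSERT INTO images VALUES (10, 'a.png'), (11, 'b.png');
INSERT INTO user_image_mapping VALUES (1, 10), (1, 11);
""")
rows = con.execute("""
    SELECT i.path FROM images i
    JOIN user_image_mapping m ON m.image_id = i.id
    WHERE m.user_id = 1 ORDER BY i.path""").fetchall()
print(rows)  # [('a.png',), ('b.png',)]
```

With option 2, answering "which images can this user see" means fetching the field and parsing the id list in application code on every request.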
Rafael,
Performance depends on several things, but none related to Debian or
VMware per se. So we need more information about your configuration (RAM,
buffers, etc.) and your environment (concurrent users, transactions, etc.).
Maybe you have not tuned your MySQL and it is slow because of that.
Yup, I'm doing clean tests: shutdown and reload MySQL for each test.
The RAID setup is similar. Faster is RAID 1 with 10k disks, slower is RAID
10 with 15k.
Metrics show:
Old RAID
Sequential write 1G: 533 MB/s (using dd if=/dev/zero of=1G bs=1024
count=102400)
Sequential read 1G: 500 MB/s
New
On Sun, Feb 13, 2011 at 11:40 PM, Andrés Tello wrote:
> I have a test process, which runs in the "old server" in 35 seconds, the new
> server runs the same process in 110.
>
> There is a change of version from mysql 4.1.22 to 5.1.22.
> We were stuck at 5.1.22 because higher version give us anothe
Hi List,
In a 20m interval in our max load I have:
OS WAIT ARRAY INFO: reservation count 637, signal count 625
Mutex spin waits 0, rounds 19457, OS waits 428
RW-shared spins 238, OS waits 119; RW-excl spins 13, OS waits 8
(The values are the difference between the start and end of this 20m
inter
Hi,
We're changing it to INT(9). Apparently someone remembered to change the type
of data in this field from an alphanumeric value to an INT(9).
I'm going to change this ASAP.
Thanks
BR
AJ
On Mon, Sep 6, 2010 at 5:17 AM, mos wrote:
> At 04:44 AM 9/3/2010, Alexandre Vieira wrote:
>
>> Hi Johnn
At 04:44 AM 9/3/2010, Alexandre Vieira wrote:
Hi Johnny,
mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
| id | select_type | table | type | possible_keys | key | key_
On 9/3/2010 3:15 PM, Johnny Withers wrote:
It seems that when your index is PRIMARY on InnoDB tables, it's magic: it is
part of the data, and thereby not included in the index_length field.
I have never noticed this. I don't think adding a new index will make a
difference.
You could try moving your log files to a different disk array tha
Hi,
When creating a table in MySQL with a PK it automatically creates an INDEX,
correct?
The Index_Length: 0 is rather strange..I've created a new INDEX on top of my
PK column on my test system and Index_Length shows a big value different
from 0. Do you think this might have any impact?
mysql> s
I think your MySQL instance is disk bound.
If you look at your iostats, md2, md12 and md22 have a ~10ms wait time before a
request can be processed. iostat is also reporting those disks are 75%+
utilized, which means they are doing about all they can do.
Any way you can add more disks? Add faster disks
Hi,
The DB is working on /var, which is md2 / md12 / md22.
                 extended device statistics
device    r/s   w/s   kr/s   kw/s  wait  actv  svc_t  %w  %b
md2       0.1  80.0    0.4  471.4   0.0   1.0   12.2   0  94
md10      0.0   5.7    0.0   78.8   0.0   0.1   19.7   0   9
md1
Very confusing...
Why is index_length zero ?
On top of that, there's only 500K rows in the table with a data size of
41MB. Maybe InnoDB is flushing to disk too often?
What's the output of iostat -dxk 60 ? (run for a minute+ to get 2 output
grids)
Hi,
mysql> SHOW TABLE STATUS LIKE 'clientinfo';
What does
SHOW TABLE STATUS LIKE 'table_name'
Say about this table?
-JW
On Fri, Sep 3, 2010 at 8:59 AM, Alexandre Vieira wrote:
> Hi,
>
> I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and it
> makes a little difference but not enough for the application to run in real
>
Hi,
I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and it
makes a little difference but not enough for the application to run in real
time processing.
It's a Sun Fire V240, 2x 1.5GHz UltraSPARC IIIi, with 2GB of RAM.
MySQL is eating 179MB of RAM and 5.4% of CPU.
PID USERNA
Ok, so I'm stumped?
What kind of hardware is behind this thing?
-JW
On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira wrote:
> Hi Johnny,
>
> mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
>
> ++-++---+---+-+-+--
On 02/09/2010 6:05 p, Alexandre Vieira wrote:
Hi Jangita,
I have 15,779 innodb_buffer_pool_pages_free out of a total of 22,400. That's
246MB of 350MB free.
| Innodb_buffer_pool_pages_data | 6020 |
| Innodb_buffer_pool_pages_dirty| 1837 |
| Innodb_buffer_pool_pages_flushed | 673837
Hi Johnny,
mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
Hi Travis,
Sorry, bad copy/paste. That DELETE statement is wrong.
The application executes:
DELETE FROM clientinfo WHERE userid='x';
BR
AJ
On Thu, Sep 2, 2010 at 5:23 PM, Travis Ard wrote:
> Have you considered adding a secondary index on the units column for your
> delete queries?
>
Have you considered adding a secondary index on the units column for your
delete queries?
DELETE FROM clientinfo WHERE units='155618918';
-Original Message-
From: Alexandre Vieira [mailto:nul...@gmail.com]
Sent: Thursday, September 02, 2010 8:46 AM
To: John Daisley; joh...@pixelated.net
Hi Jangita,
I have 15,779 innodb_buffer_pool_pages_free out of a total of 22,400. That's
246MB of 350MB free.
| Innodb_buffer_pool_pages_data | 6020 |
| Innodb_buffer_pool_pages_dirty| 1837 |
| Innodb_buffer_pool_pages_flushed | 673837 |
| Innodb_buffer_pool_pages_free | 157
On 02/09/2010 4:46 p, Alexandre Vieira wrote:
John, Johnny,
Thanks for the prompt answer.
...
We also run some other applications in the server, but nothing that consumes
all the CPU/Memory. The machine has almost 1GB of free memory and 50% of
idle CPU time at any time.
TIA
BR
Alex
Increa
What is the hardware spec? Anything else running on the box?
Why are you replicating but not making use of the slave?
Can you post the output of SHOW CREATE TABLE?
Regards
John
On 2 September 2010 12:50, Alexandre Vieira wrote:
> Hi list,
>
> I'm having some performance problems on my 5.0.45-
Can you show us the table structure and sample queries?
On Thursday, September 2, 2010, Alexandre Vieira wrote:
> Hi list,
>
> I'm having some performance problems on my 5.0.45-log DB running on Solaris
> 8 (V240).
>
> We only have one table and two apps selecting, updating, inserting and
> delet
This is a good place to start:
https://launchpad.net/mysql-tuning-primer
-Original Message-
From: Johnny Withers [mailto:joh...@pixelated.net]
Sent: Tuesday, August 31, 2010 5:38 AM
To: Johan De Meersman
Cc: kranthi kiran; mysql@lists.mysql.com
Subject: Re: Performance Tunning
So, it's not just me that is stuck in this infinite loop? I thought I had
gone mad!
--
-
Johnny Withers
601.209.4985
joh...@pixelated.net
On Tue, Aug 31, 2010 at 5:23 AM, Johan De Meersman wrote:
> 1. Find out what is slow
> 2. Fix it
> 3. GOTO 1
>
> On Tue, Aug 31,
On 31/08/2010 12:23 p, Johan De Meersman wrote:
1. Find out what is slow
2. Fix it
3. GOTO 1
Good one Johan. Performance tuning depends a lot on your table types,
your server, the version of MySQL, how your client applications access
your database, the size of your data, type of queries, indexes
1. Find out what is slow
2. Fix it
3. GOTO 1
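Step 1 ("find out what is slow") usually starts with the slow query log; a minimal my.cnf fragment follows (option names as in MySQL 5.1+, where `slow_query_log` replaced the older `log_slow_queries`; the path and thresholds are illustrative only):

```ini
[mysqld]
# Log statements that take longer than 2 seconds
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2
# Optionally also log queries that use no index at all
log_queries_not_using_indexes = 1
```

The resulting log can then be summarized with mysqldumpslow to find the worst offenders to fix in step 2.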
On Tue, Aug 31, 2010 at 11:13 AM, kranthi kiran
wrote:
> Hi All,
> In performance tuning, what steps can one follow? Please help
> me.
>
> Thanks & Regards,
> Kranthi kiran
>
--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt
Hi All,
In performance tuning, what steps can one follow? Please help
me.
Thanks & Regards,
Kranthi kiran
Hi Ortis,
How about the hits or load, i.e. (DML, DDL), on the server?
My initial assessment after looking at your cnf file is:
1) Calculate and set an appropriate value for innodb_buffer_pool_size
2) Reduce innodb_thread_concurrency to 4 or 8.
And how about the no. of tables in the database and
On Friday 15 January 2010 13:55:18 fsb wrote:
> the example you gave would work with a range constraint:
>
> WHERE `bar_id` > 0 AND `bar_id` < 63
>
> but i guess this is not a general solution.
>
> i've done exactly this kind of select using an IN constraint very often.
> i've not had any trouble with lists of a few hundred so long as i have the
> necessar
The problem is that I need to search/sort by ANY of the 284 fields at
times - 284 indexes is a bit silly, so there will be a lot of sequential
scans (table has 60,000 rows). Given that criteria, will fewer columns
in more tables provide a performance benefit?
-j
On Tue, 2009-08-04 at 16:03 -0700
We had an awkward setup, which forced us to use PGSQL for SpamAssassin.
Unfortunately the SA queries are not processed well by PGSQL.
Back in January we switched SA processing to MySQL. Bingo! Instant improvement
in overall performance, and no PGSQL maintenance required. This is not
sophistica
At 02:53 PM 3/18/2009, you wrote:
We are using PostgreSQL currently to store the Bayes information. It
seems to periodically spend a lot of time 'vacuuming', which of course
drives up disk load. The system admin has adjusted it so it only does
this at low load. I'm curious if anyone has act
Hi Jim,
On Tue, Dec 30, 2008 at 5:54 PM, Jim Lyons wrote:
> On Sat, Dec 27, 2008 at 12:38 PM, Jake Maul wrote:
>
>> 3) Obviously it'd probably be faster if you weren't using
>> SQL_NO_CACHE... guessing you just did that to show us what it's like
>> that way?
>
> Why would SQL_NO_CACHE slow it down? By not checking the cache or storing
> the resultset into cac
Mmm, I just tested this and it does indeed work (although I tested with
slightly fewer rows :o) )
explain select count(*) , date_format(calldate, '%y-%m-%d') as m from
cdr_warehouse group by m \G
*** 1. row ***
id: 1
select_type: SIMPLE
Hi,
On Sat, Dec 27, 2008 at 6:15 PM, Chris Picton wrote:
> Hi
>
> I am trying to get to grips with understanding mysql performance.
>
> I have the following query:
>
> select sql_no_cache count(*), date_format(calldate, '%y-%m-%d') as m from
> cdr_warehouse group by m;
>
> This gives me:
> 115 ro
In the last episode (Dec 27), Chris Picton said:
> I am trying to get to grips with understanding mysql performance.
>
> I have the following query:
>
> select sql_no_cache count(*), date_format(calldate, '%y-%m-%d') as m from
> cdr_warehouse group by m;
>
> This gives me:
> 115 rows in set (59
A few random things come to mind...
1) Try the query with IGNORE INDEX calldate_idx ... I can't see how
this could possibly be faster, but I always like to check anyway. In
your case this should result in a full table scan, given the
information you've given us.
2) If the performance problem come
you may like to find it by yourself. Simply use: EXPLAIN
re,
wh
Yong Lee schrieb:
All,
Just curious as to which query would be better in terms of performance:
select * from (select * from a union select * from b) as c;
versus
select * from a union select * from b;
or would these
> Date: Fri, 29 Aug 2008 15:30:16 +0200
> CC: mysql@lists.mysql.com
> Subject: Re: performance key-value - int vs ascii ?

thx,
the results support my suspicion
re,
wh

Perrin Harkins wrote:
On Fri, Aug 29, 2008 at 4:57 AM, walter harms <[EMAIL PROTECTED]> wrote:
> Since disk space is plentiful, I am thinking about using the name directly.
> Does anyone have an idea what the performance penalty is?
http://www.mysqlperformanceblog.com/2008/01/24/enum-fields-vs-varchar-vs-int-joined-table-w
Subject: Re: Performance problem with more than 500 concurrent queries

Hi,
Could you try your script with key_buffer set to 0?
Regards,
Jocelyn Fournier

[EMAIL PROTECTED] wrote:
> Hello,
>
> Thanks for your help. You can see the results in the .err file below. I've
>
To: mysql@lists.mysql.com
Date: 26.06.2008 22:52
Subject: Re: Performance problem with more than 500 concurrent queries
At 10:39 AM 6/26/2008, you wrote:
Hello,
thanks for the answer.
Where is the error.log stored? I run the mysqladmin, it requires the
password and it exits immediately.
Open files:20
Open streams: 0
Alarm status:
Active alarms: 265
Max used alarms: 279
Next alarm time: 28789
sh-3.2#
From: mos <[EMAIL PROTECTED]>
To: mysql@lists.mysql.com
Date: 26.06.2008 22:52
Subject: Re: Performance problem with more than 500 concurrent queries
At 10:39 AM 6/26/2008, you wrote:
Hello,
thanks for the answer.
Where is the error.log stored? I run the mysqladmin, it requires the
password and it exits immediately. But I cannot find any error.log.
Thanks,
Guillermo
Guillermo,
Look in the \MySQL\Data\*.err file.
Also I don't
> To: mysql@lists.mysql.com
> Date: 26.06.2008 17:39
> Subject: Re: Performance problem with more than 500 concurrent queries
>
> Hello,
>
> thanks for the answer.
>
> Where is the error.log stored? I run mysqladmin; it requires the
> password
Sorry about the long signature in the email. I forgot to remove it...
Guillermo
From: [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Date: 26.06.2008 17:39
Subject: Re: Performance problem with more than 500 concurrent queries
Hello,
thanks for the answer.
Where is the error.log stored? I
Cc: mysql@lists.mysql.com
Date: 26.06.2008 16:30
Subject: Re: Performance problem with more than 500 concurrent queries
do this
mysqladmin -uroot -p debug
and check the error.log, see if there are any locks on the tables.
On 6/26/08, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Hello guys,
>
> I am new to this list and also kind of new to mysql too.
>
> I have a multi-thread application written in Ruby. The
On Tue, Apr 22, 2008 at 1:13 PM, "Bruno B. B. Magalhães"
<[EMAIL PROTECTED]> wrote:
> Hi Phill, Rob and Perrin,
>
> I forgot to attach the EXPLAIN output from MySQL, because it's one of the most
> important things... Sorry!!!
>
> EXPLAIN SELECT UNIX_TIMESTAMP(transactions.transaction_date) AS date,
Hi Phill, Rob and Perrin,
I forgot to attach the EXPLAIN output from MySQL, because it's one of
the most important things... Sorry!!!
EXPLAIN SELECT UNIX_TIMESTAMP(transactions.transaction_date) AS date,
transactions.transaction_complement AS complement,
On Tue, Apr 22, 2008 at 11:41 AM, Bruno B. B. Magalhães
<[EMAIL PROTECTED]> wrote:
> I think
> the most problematic part of those queries is the date range part; should I
> use a different index only for this column to keep the index small?
My experience with doing data warehousing in MySQL
On Tue, Apr 22, 2008 at 8:41 AM, Bruno B. B. Magalhães <
[EMAIL PROTECTED]> wrote:
> Hi everybody,
>
> I am back to this list after a long period away due to work time
> restrictions... I have great news and a few interesting applications that I
> will release to the mysql community very soon, mos
I'm sure that if you created an index on
client_id, client_unit_id, transaction_date (optionally with something else to
make it unique) it would increase performance.
What does an EXPLAIN give you?
Phil
On Tue, Apr 22, 2008 at 11:41 AM, Bruno B. B. Magalhães <
[EMAIL PROTECTED]> wrote:
> Hi everybody,
>
CC: mysql@lists.mysql.com
Subject: RE: Performance problem

On Fri, 18 Apr 2008, Francisco Rodrigo Cortinas Maseda
<[EMAIL PROTECTED]> wrote:
> > I'm new to the performance tuning of this database (MySQL 5.0.45,
> > rpm-based installation), and I have one performance probl
On Fri, 18 Apr 2008, Francisco Rodrigo Cortinas Maseda
<[EMAIL PROTECTED]> wrote:
> I'm new to the performance tuning of this database (MySQL 5.0.45,
> rpm-based installation), and I have one performance problem on our
> new installation:
...
> We are experiencing problems about the performance
I've resolved my problems without hardware manipulation.
Thanks to all.

-----Original Message-----
From: Francisco Rodrigo Cortinas Maseda
Sent: Wednesday, April 16, 2008 18:57
To: mysql@lists.mysql.com
Subject: FW: Performance problem

Hi all,
I'm new to the performance tuning of thi
Cool, it's good to know. Thank you.
On 25/01/2008, Jay Pipes <[EMAIL PROTECTED]> wrote:
> Nope, no difference, AFAIK.
>
> Alex K wrote:
> > Any ideas pertaining this newbie question?
> >
> > Thank you so much,
> >
> >> Hi Guys,
> >>
> >> Is there a performance hit when joining across multiple databa
Nope, no difference, AFAIK.
Alex K wrote:
Any ideas pertaining this newbie question?
Thank you so much,
Hi Guys,
Is there a performance hit when joining across multiple databases as
opposed to joining multiples tables in one database? Suppose the same
tables are available across all database
Any ideas pertaining this newbie question?
Thank you so much,
> Hi Guys,
>
> Is there a performance hit when joining across multiple databases as
> opposed to joining multiples tables in one database? Suppose the same
> tables are available across all databases.
>
> Thank you,
>
> Alex
>
Gunnar,
You might do some more investigating on these to see if there is an
index you could use to speed these up, 15.8 million records might be a
full table scan, even if it's not - it's clearly a whole heck of a lot
of data and that's going to give you a huge performance hit. I'm not
fa
At 3:51p -0500 on 01/08/2008, Gunnar R. wrote:
That tool tells me 100% of the data is read from memory, not a byte from
disk... would there still be any point in getting more memory?
Any suggestions to where to go from here?
I dunno. My hunch is that you could do some query optimizat
At 6:47a -0500 on 08 Jan 2008, Gunnar R. wrote:
Concerning slow queries, it seems there's a couple of different queries
that's being logged.
I haven't tried it yet, but this recently went by on debaday.debian.net:
mytop: a top clone for MySQL
http://debaday.debian.net/2007/12/26/mytop-a-top-
Thank you Erik!
HDs are OK, a couple of GB free. Not that it's a lot, but I can't imagine
it being too low for MySQL..
I'm aware memory is a bit low, but RAMBUS chips are hard to come by. They
don't have them in stock anywhere anymore. Also they are quite expensive.
It's almost like you could've
Gunnar,
us = user (things like MySQL/PHP/Apache)
sy = system (memory management / swap space / threading / kernel
processes and so on)
ni = nice (apps running only when nothing else needs the resource)
id = idle (extra cpu cycles being wasted)
wa = wait state (io wait for disk/network/memory)
Hello,
Thanks. I read the document, but unfortunately it didn't tell me anything
new..
One of the things I am a bit confused about is:
top - 22:08:12 up 6 days, 7:23, 1 user, load average: 4.36, 3.30, 2.84
Tasks: 134 total, 1 running, 133 sleeping, 0 stopped, 0 zombie
Cpu0 : 61.3% us,
Hi,
Thanks.
mysql> show processlist;
++---+---+---+-+--+--+--+
| Id | User | Host | db| Command | Time
Hello,
I've learned a bit about the environment this server is running in. It's
VMware with root NFS and storage NFS mount points for MySQL. I've been
told the throughput over NFS for my Server is from 20 to 30 MB/s.
The server has 3GB ram. I'm not s
Hi,
If you can follow this document:
http://www.ufsdump.org/papers/uuasc-june-2006.pdf
You should be able to figure out what's happening.
Cheers,
Andrew
-Original Message-
From: Gunnar R. [mailto:[EMAIL PROTECTED]
Sent: Tue, 01 January 2008 23:31
To: mysql@lists.mysql.com
Subject: Pe
-Original Message-
From: Per Jessen [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 02, 2008 7:51 AM
To: mysql@lists.mysql.com
Subject: Re: Performance problem - MySQL at 99.9% CPU
Gunnar R. wrote:
> I am thinking about buying a new dual core box (with IDE disks?), but
> I h
Gunnar R. wrote:
> I am thinking about buying a new dual core box (with IDE disks?), but
> I have to make sure this really is a hardware issue before I spend
> thousands of bucks.
I think you've got an application problem somewhere which you should
look into first. Hardware-wise I think you're d
Hi, please monitor what is happening with MySQL:
show processlist
show innodb status
and also ps aux,
because maybe some application is making your MySQL busy.
On Jan 2, 2008 7:31 AM, Gunnar R. <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I am running a community site mainly based on phpBB. It has about 9.300
Hi,
On Jan 1, 2008 6:31 PM, Gunnar R. <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I am running a community site mainly based on phpBB. It has about 9.300
> registered users, 650.000 posts and about 200.000 visitors/month (12 mill
> "hits"). The SQL database is about 700MB.
>
> It's all running on a co
Hi,
Your English is fine :) Your queries don't look too bad. It could be
there are no good indexes. Have you tried running EXPLAIN on them?
What version of MySQL are you using? You can also try profiling the
queries (by hand with SHOW STATUS, or more easily with MySQL Query
Profiler) to s
mysqlimport with parallel threads is worth giving a try. It is similar
to 'load data infile' but with concurrent threads loading the tables.
I think it was added in mysql-5.1.18, but it is said to work with
previous versions also, according to the author:
http://krow.livejournal.com/519655
Sure, load data is way faster than full inserts.
I was thinking (pseudocode):

while $warnings -lt 100%
do
    dump ora-data | mysql database
done
swap IP-addr.
On Mon, July 23, 2007 19:59, B. Keith Murphy wrote:
> I think you will find the load data infile will work faster. I am performing
> testing right
On 7/23/07, mos <[EMAIL PROTECTED]> wrote:
Load data will of course be much faster. However to obtain the maximum
speed you need to load the data to an empty table, because then MySQL will
load the data without updating the index for every row that's added, and
will instead rebuild the index only
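The empty-table load with deferred index maintenance can be sketched as follows; this assumes a MyISAM target (DISABLE KEYS affects non-unique MyISAM indexes only), and the file path, table name, and field delimiters are placeholders:

```sql
-- Hedged sketch: skip per-row index maintenance during the bulk load,
-- then rebuild the indexes in one pass afterwards.
ALTER TABLE target DISABLE KEYS;
LOAD DATA INFILE '/tmp/oracle_dump.csv'
    INTO TABLE target
    FIELDS TERMINATED BY ',' ENCLOSED BY '"';
ALTER TABLE target ENABLE KEYS;
```

For an already-empty MyISAM table, LOAD DATA applies this optimization automatically; the explicit DISABLE/ENABLE pair matters when loading into a table that already has rows.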
At 11:44 AM 7/23/2007, Sid Lane wrote:
all,
I need to migrate ~12GB of data from an Oracle 10 database to a MySQL
5.0 one in as short a window as practically possible (throw tablespace
in r/o,
migrate data & repoint web servers - every minute counts).
the two approaches I am considering are:
1.