Hi List,
In a 20m interval at our max load I have:
OS WAIT ARRAY INFO: reservation count 637, signal count 625
Mutex spin waits 0, rounds 19457, OS waits 428
RW-shared spins 238, OS waits 119; RW-excl spins 13, OS waits 8
(The values are the difference between the start and end of this 20m
interval.)
Hi,
We're changing it to INT(9). Apparently no one remembered to change the type
of data in this field from an alphanumeric value to an INT(9).
I'm going to change this asap.
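For reference, a minimal sketch of the column change, assuming it is
clientinfo.userid as discussed earlier in the thread (UNSIGNED is an
assumption, and on 5.0 this ALTER rebuilds and locks the table while it runs):

ALTER TABLE clientinfo MODIFY userid INT(9) UNSIGNED NOT NULL;  -- stays the PRIMARY KEY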
Thanks
BR
AJ
On Mon, Sep 6, 2010 at 5:17 AM, mos wrote:
> At 04:44 AM 9/3/2010, Alexandre Vieira wrote:
>
>> Hi Johnn
At 04:44 AM 9/3/2010, Alexandre Vieira wrote:
Hi Johnny,
mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
++-++---+---+-+-+---+--+---+
| id | select_type | table | type | possible_keys | key | key_
On 9/3/2010 3:15 PM, Johnny Withers wrote:
It seems that when your index is PRIMARY on InnoDB tables, it's magic: it is
part of the data itself, and therefore it is not included in the index_length field.
I have never noticed this. I don't think adding a new index will make a
difference.
You could try moving
It seems that when your index is PRIMARY on InnoDB tables, it's magic: it is
part of the data itself, and therefore it is not included in the index_length field.
I have never noticed this. I don't think adding a new index will make a
difference.
You could try moving your log files to a different disk array tha
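In case it helps, the setting involved would be along these lines in my.cnf
(the path is hypothetical; the server must be shut down and the existing
ib_logfile* files moved to the new location first):

[mysqld]
innodb_log_group_home_dir = /other/array/mysql-innodb-logs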
Hi,
When creating a table in MySQL with a PK it automatically creates an INDEX,
correct?
The Index_Length: 0 is rather strange... I've created a new INDEX on top of my
PK column on my test system and Index_Length shows a large value, different
from 0. Do you think this might have any impact?
mysql> s
I think your MySQL instance is disk bound.
If you look at your iostats, md2, md12 and md22 have a ~10ms wait time before a
request can be processed. iostat is also reporting those disks are 75%+
utilized, which means they are doing about all they can do.
Is there any way you can add more disks? Add faster disks
Hi,
The DB is working on /var, which is md2 / md12 / md22.
extended device statistics
device    r/s    w/s   kr/s    kw/s  wait  actv  svc_t  %w  %b
md2       0.1   80.0    0.4   471.4   0.0   1.0   12.2   0  94
md10      0.0    5.7    0.0    78.8   0.0   0.1   19.7   0   9
md1
Very confusing...
Why is index_length zero ?
On top of that, there's only 500K rows in the table with a data size of
41MB. Maybe InnoDB is flushing to disk too often?
What's the output of iostat -dxk 60 ? (run for a minute+ to get 2 output
grids)
--
*Johnny With
Hi,
mysql> SHOW TABLE STATUS LIKE 'clientinfo';
+++-++++-+-+--+---++-+-++---+--++-
What does
SHOW TABLE STATUS LIKE 'table_name'
say about this table?
-JW
On Fri, Sep 3, 2010 at 8:59 AM, Alexandre Vieira wrote:
> Hi,
>
> I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and it
> makes a little difference but not enough for the application to run in real
>
Hi,
I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and it
makes a small difference, but not enough for the application to keep up with
real-time processing.
It's a Sun Fire V240, 2x 1.5GHz UltraSPARC IIIi with 2GB of RAM.
MySQL is eating 179MB of RAM and 5.4% of CPU.
PID USERNA
Ok, so I'm stumped?
What kind of hardware is behind this thing?
-JW
On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira wrote:
> Hi Johnny,
>
> mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
>
> ++-++---+---+-+-+--
On 02/09/2010 6:05 p, Alexandre Vieira wrote:
Hi Jangita,
I have 15779 innodb_buffer_pool_pages_free out of a total of 22400. That's
246MB free out of 350MB.
| Innodb_buffer_pool_pages_data    | 6020   |
| Innodb_buffer_pool_pages_dirty   | 1837   |
| Innodb_buffer_pool_pages_flushed | 673837
Hi Johnny,
mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
++-++---+---+-+-+---+--+---+
| id | select_type | table | type | possible_keys | key | key_len
| ref | rows | Extra |
++-
Hi Travis,
Sorry, bad copy/paste. That DELETE statement is wrong.
The application executes:
DELETE FROM clientinfo WHERE userid='x';
BR
AJ
On Thu, Sep 2, 2010 at 5:23 PM, Travis Ard wrote:
> Have you considered adding a secondary index on the units column for your
> delete queries?
>
Have you considered adding a secondary index on the units column for your
delete queries?
DELETE FROM clientinfo WHERE units='155618918';
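A minimal sketch of that suggestion, assuming the column really is named units
(whether it pays off depends on how selective the column is, and on 5.0 the
ALTER rebuilds the table):

ALTER TABLE clientinfo ADD INDEX idx_units (units);

-- MySQL 5.0 cannot EXPLAIN a DELETE, so check the equivalent SELECT:
EXPLAIN SELECT * FROM clientinfo WHERE units='155618918';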
-Original Message-
From: Alexandre Vieira [mailto:nul...@gmail.com]
Sent: Thursday, September 02, 2010 8:46 AM
To: John Daisley; joh...@pixelated.net
Hi Jangita,
I have 15779 innodb_buffer_pool_pages_free out of a total of 22400. That's
246MB free out of 350MB.
| Innodb_buffer_pool_pages_data    | 6020   |
| Innodb_buffer_pool_pages_dirty   | 1837   |
| Innodb_buffer_pool_pages_flushed | 673837 |
| Innodb_buffer_pool_pages_free    | 157
On 02/09/2010 4:46 p, Alexandre Vieira wrote:
John, Johnny,
Thanks for the prompt answer.
...
We also run some other applications on the server, but nothing that consumes
all the CPU/memory. The machine has almost 1GB of free memory and 50% of
idle CPU time at any time.
TIA
BR
Alex
Increa
What is the hardware spec? Anything else running on the box?
Why are you replicating but not making use of the slave?
Can you post the output of SHOW CREATE TABLE?
Regards
John
On 2 September 2010 12:50, Alexandre Vieira wrote:
> Hi list,
>
> I'm having some performance problems on my 5.0.45-
Can you show us the table structure and sample queries?
On Thursday, September 2, 2010, Alexandre Vieira wrote:
> Hi list,
>
> I'm having some performance problems on my 5.0.45-log DB running on Solaris
> 8 (V240).
>
> We only have one table and two apps selecting, updating, inserting and
> delet
Hi,
Your English is fine :) Your queries don't look too bad. It could be that
there are no good indexes. Have you tried running EXPLAIN on them?
What version of MySQL are you using? You can also try profiling the
queries (by hand with SHOW STATUS, or more easily with MySQL Query
Profiler) to s
On Saturday 25 November 2006 17:54, John Kopanas wrote:
> The following query takes over 6 seconds:
> SELECT * FROM purchased_services WHERE (purchased_services.company_id =
> 535263)
What does EXPLAIN say about that query?
Have you done an optimize recently?
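Concretely, something like this (a sketch; the ADD INDEX line is only the
usual fix if EXPLAIN shows no usable key on company_id):

EXPLAIN SELECT * FROM purchased_services
WHERE purchased_services.company_id = 535263;

ALTER TABLE purchased_services ADD INDEX idx_company_id (company_id);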
--
Scanned by iCritical.
--
MySQL
My innodb_buffer_pool_size is:
innodb_buffer_pool_size | 8388608
That looks like 8MB... that sounds small if I have a DB with over 1M
rows to process. No?
Yes, that's extremely small. I'd go for at least 256M, and maybe 512M
if your machine will primarily be doing mysql duties.
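For example, a sketch of the my.cnf change (innodb_buffer_pool_size is not
dynamic in this version, so it needs a server restart; the exact figure
depends on how much RAM the box can spare):

[mysqld]
innodb_buffer_pool_size = 256M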
Did
At 08:31 PM 11/26/2006, John Kopanas wrote:
When I did a:
SELECT * FROM purchased_services WHERE company_id = 1000;
It took me 7 seconds. This is driving me crazy!
I am going to have to try this on another computer and see if I am
going to get the same results on another system. Argh...
T
Yes... with FORCE INDEX it still takes 7 seconds.
On 11/26/06, Dan Nelson <[EMAIL PROTECTED]> wrote:
In the last episode (Nov 26), John Kopanas said:
> On 11/26/06, Dan Nelson <[EMAIL PROTECTED]> wrote:
> >In the last episode (Nov 26), John Kopanas said:
> >> Thanks a lot for your help.
> >>
> >
In the last episode (Nov 26), John Kopanas said:
> On 11/26/06, Dan Nelson <[EMAIL PROTECTED]> wrote:
> >In the last episode (Nov 26), John Kopanas said:
> >> Thanks a lot for your help.
> >>
> >> The query should, and does, only return 1-6 rows depending on the id,
> >> never more than that. Here a
When I did a:
SELECT * FROM purchased_services WHERE company_id = 1000;
It took me 7 seconds. This is driving me crazy!
I am going to have to try this on another computer and see if I am
going to get the same results on another system. Argh...
On 11/26/06, Dan Nelson <[EMAIL PROTECTED]> wro
In the last episode (Nov 26), John Kopanas said:
> Thanks a lot for your help.
>
> The query should, and does, only return 1-6 rows depending on the id,
> never more than that. Here are the comparative EXPLAINs:
>
> mysql> EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
> ++-
The application is not in production yet, but when it goes into production
the server will be considerably faster and have much more RAM. Before I put
the app in production, though, I want to make sure it is working properly.
500K rows does not sound like that much in this day and age. If I understa
Thanks a lot for your help.
The query should, and does, only return 1-6 rows depending on the id, never
more than that. Here are the comparative EXPLAINs:
mysql> EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
++-++---+---+-+--
This kind of timeframe (2 - 2.5 secs) could just be the result of
running on a laptop. You've got a small amount of RAM compared to
many servers, a bit slower processor, and *much* slower hard disk
system than most servers. If your query has to access multiple
records spread out throughout the t
In the last episode (Nov 25), John Kopanas said:
> Sorry about these questions. I am used to working with DBs with fewer
> than 10K rows, and now I am working with tables with over 500K rows, which
> changes a lot for me. I was hoping I could get some
> people's advice.
>
> I have a 'com
If I just SELECT id:
SELECT id FROM purchased_services WHERE (company_id = 1000)
It takes approx 2-2.5s. When I look at the process list, its state always
seems to be 'Sending data'...
This is after killing the db and repopulating it again. So what is going on?
On 11/25/06
I tried the same tests with the database copied to the MyISAM engine. The
count was instantaneous, but the following still took 3-6 seconds:
SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)
The following though was instantaneous:
SELECT * FROM purchased_services
Hi,
The performance of the data transfers using the direct socket connection
goes from <15 ms (in the lab) to ~32 ms (in the pseudo-production
environment). But the database calls go from <1 sec to several seconds (we
have not measured this precisely yet). The database was exactly the same
in both trials.
For further clarification, what we are observing is that pull down lists
(which are already built on the GUI) take a long time to "complete"
processing. The processing we are performing upon user selection is
taking the selected element, updating 1 database column in 1 table with
the value, and th
Celona, Paul - AES wrote:
I am running mysql 4.0.18 on Windows 2003 server which also hosts my
apache tomcat server. My applet makes a connection to the mysql database
on the server as well as a socket connection to a service on the same
server. In the lab with only a hub between the client and
"Celona, Paul - AES" <[EMAIL PROTECTED]> wrote on 06/03/2005 01:03:18
PM:
> I am running mysql 4.0.18 on Windows 2003 server which also hosts my
> apache tomcat server. My applet makes a connection to the mysql database
> on the server as well as a socket connection to a service on the same
> ser
On Thu, Dec 18, 2003 at 10:37:46AM -0600, Dan Nelson wrote :
> In the last episode (Dec 18), Markus Fischer said:
> > On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote :
> > > Raising sort_buffer_size and join_buffer_size may also help if your
> > > queries pull a lot of records.
> >
>
In the last episode (Dec 18), Markus Fischer said:
> On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote :
> > Raising sort_buffer_size and join_buffer_size may also help if your
> > queries pull a lot of records.
>
> From what I read from the manual, sort_buffer_size is only used
>
On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote :
> In the last episode (Dec 16), Markus Fischer said:
> > I'm investigating a performance problem with my MySQL server setup. The
> > server is running Linux with 1GB RAM. I'd like to tune the
> > configuration of the server to use as much
Hi,
On Tue, Dec 16, 2003 at 10:23:05PM +1100, Chris Nolan wrote :
> How heavy is your usage of TEMPORARY TABLES? I don't use them much
> myself, but I'm sure that the others on the list will have something
> to say in that regard.
Here are the relevant numbers:
Created_tmp_disk_tables
In the last episode (Dec 16), Markus Fischer said:
> I'm investigating a performance problem with my MySQL server setup. The
> server is running Linux with 1GB RAM. I'd like to tune the
> configuration of the server to use as much RAM as possible without
> swapping to the disk because of the big slo
Hi!
How heavy is your usage of TEMPORARY TABLES? I don't use them much
myself, but
I'm sure that the others on the list will have something to say in that
regard.
To get a better look at MySQL's usage of memory, you could try looking
at the output of SHOW STATUS.
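For instance, the counters most relevant to temporary tables and key-buffer
use (a sketch; the LIKE patterns just narrow the output):

SHOW STATUS LIKE 'Created_tmp%';
SHOW STATUS LIKE 'Key_%';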
Regards,
Chris
Markus Fisc
> The main table is rather huge: it has 90 columns and now, after three
> months, it has 500,000 records... but in the end it has to store data for
> 36 months.
Hmm, I think you had better look at normalizing your data, and creating
indexes. Start with the indexes since that won't force you to make an
Do you use indexes?
See http://www.mysql.com/doc/en/CREATE_INDEX.html.
On my system, retrieving 25 records from a 24-million-record table with 3
columns took only 0.09 sec, and from a 24-million-record table with 5
columns, 0.25 sec.
Harald
- Original Message -
From: "Schonder, M
Matthias,
Can you send us your table index definitions and the output of an EXPLAIN
command on your query?
ie
DESCRIBE pool;
SHOW INDEX FROM pool;
EXPLAIN SELECT sendnr FROM pool where sendnr = 111073101180;
I'm pretty sure we can improve this - I've got a table with 55 million
records (though o
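If SHOW INDEX turns out to show no index on sendnr, the usual fix would be
along these lines (a sketch; a BIGINT column is assumed, since the example
value 111073101180 exceeds the INT range):

CREATE INDEX idx_sendnr ON pool (sendnr);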
nething. But it would require
space and some time.
vikash
-Original Message-
From: Gunnar Lunde [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 14, 2003 5:17 PM
To: '[EMAIL PROTECTED]'
Subject: RE: Performance problems after record deletion.
Thank you for your reply, Vikash
We have d
To: 'Gunnar Lunde'; [EMAIL PROTECTED]
> Subject: RE: Performance problems after record deletion.
>
>
> This is what the MySQL manual (3.23.41) says; maybe it helps you
>
> OPTIMIZE TABLE should be used if you have deleted a large part of a
> table or if you have made m
This is what the MySQL manual (3.23.41) says; maybe it helps you:
OPTIMIZE TABLE should be used if you have deleted a large part of a
table or if you have made many changes to a table with variable-length
rows (tables that have VARCHAR, BLOB, or TEXT columns). Deleted records
are maintained in a linked
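In practice that is a single statement per affected table, for example (the
table name here is hypothetical; note that OPTIMIZE TABLE locks the table
while it rebuilds it):

OPTIMIZE TABLE your_big_table;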
Alex,
- Original Message -
From: "Varshavchick Alexander" <[EMAIL PROTECTED]>
To: "Heikki Tuuri" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, September 06, 2002 11:49 AM
Subject: Re: Performance Problems with InnoDB Row Level Locking..
o: Varshavchick Alexander <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Subject: Re: Performance Problems with InnoDB Row Level Locking...
>
> Alexander,
>
> - Original Message -
> From: "Varshavchick Alexander" <[EMAIL PROTECTED]>
> To: "
Joe,
- Original Message -
From: "Joe Shear" <[EMAIL PROTECTED]>
To: "Heikki Tuuri" <[EMAIL PROTECTED]>
Sent: Friday, September 06, 2002 2:13 AM
Subject: Re: Performance Problems with InnoDB Row Level Locking...
> Hi,
> On a side note, are there any
Alexander,
- Original Message -
From: "Varshavchick Alexander" <[EMAIL PROTECTED]>
To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, September 06, 2002 10:08 AM
Subject: RE: Performance Problems with InnoDB
Hi Heikki,
one more question please about innodb_flush_log_at_trx_commit: if there
were some way of increasing the delay between log flushes to more than 1 sec,
can you estimate whether it would bring any real effect in performance? I know
it'll raise the risk of losing some of the last transactions if something
cra
Steve,
- Original Message -
From: "Orr, Steve" <[EMAIL PROTECTED]>
To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, September 06, 2002 1:23 AM
Subject: RE: Performance Problems with InnoDB Row Level Locking
teve
-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 05, 2002 2:54 PM
To: Orr, Steve
Cc: [EMAIL PROTECTED]
Subject: Re: Performance Problems with InnoDB Row Level Locking...
Steve,
- Original Message -
From: "Orr, Steve" <[EM
Steve,
- Original Message -
From: "Orr, Steve" <[EMAIL PROTECTED]>
To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>
Sent: Thursday, September 05, 2002 11:04 PM
Subject: RE: Performance Problems with InnoDB Row Level Locking...
> Heikki,
>
>
i
- Original Message -
From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: "Orr, Steve" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, September 05, 2002 10:30 PM
Subject: Re: Performance Problems with InnoDB Row Level Locking...
> Steve,
>
Steve,
- Original Message -
From: "Orr, Steve" <[EMAIL PROTECTED]>
To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, September 05, 2002 9:49 PM
Subject: RE: Performance Problems with InnoDB Row Level Locki
t atomic then does that mean that MySQL
still does not pass the ACID test even with InnoDB?
Thanks again and I'm eagerly awaiting your reply.
Respectfully,
Steve Orr
-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 05, 2002 9:05 AM
To:
Alexander,
- Original Message -
From: "Varshavchick Alexander" <[EMAIL PROTECTED]>
To: "Heikki Tuuri" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, September 05, 2002 6:51 PM
Subject: Re: Performance Problems with InnoDB Row Level Lock
Heikki, one little question: is it a typo, or can the flush log interval
duration really be controlled by this option? The value should only be 0 or 1,
as the documentation says...
On Thu, 5 Sep 2002, Heikki Tuuri wrote:
> You can try setting
>
> innodb_flush_log_at_trx_commit=2
>
> if you can afford
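For reference, a sketch of where that setting lives in my.cnf (with the value
2, the log is written to the OS on each commit but only flushed to disk about
once per second, so roughly the last second of transactions can be lost if
the OS crashes):

[mysqld]
innodb_flush_log_at_trx_commit = 2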
Steve,
- Original Message -
From: ""Orr, Steve"" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.mysql
Sent: Thursday, September 05, 2002 5:52 PM
Subject: Performance Problems with InnoDB Row Level Locking...
> Background:
> I've developed a simplistic Perl program to test database per
On Monday 29 April 2002 07:36 am, Wouter de Jong wrote:
> > In addition, here the resolved stack-trace:
> >
> > 0x806cb54 handle_segfault__Fi + 428
> > 0x8116c2a pthread_sighandler + 158
> > 0x80e715e mi_lock_database + 14
> > 0x80b5de8 external_lock__9ha_myisamP3THDi + 28
> > 0x8069d75 lock_exte
Subject: Re: Performance problems...
From: Vic Cekvenich <[EMAIL PROTECTED]>
===
DBs are I/O bound. Get more cache in the RAID? So 2 CPUs should not help.
Consider PostgreSQL.
Wouter de Jong wrote:
> Hi :)
>
> We're running 3 MySQL-servers for our customers databases
&
Wouter,
Monday, April 29, 2002, 1:42:36 PM, you wrote:
[]
WdJ> Trying to get some variables.
WdJ> Some pointers may be invalid and cause the dump to abort...
WdJ> thd->query at 0x610a9940 is invalid pointer
WdJ> thd->thread_id=868173
And that's why it's better to use a MySQL precompiled binary!
> In addition, here the resolved stack-trace:
>
> 0x806cb54 handle_segfault__Fi + 428
> 0x8116c2a pthread_sighandler + 158
> 0x80e715e mi_lock_database + 14
> 0x80b5de8 external_lock__9ha_myisamP3THDi + 28
> 0x8069d75 lock_external__FPP8st_tableUi + 121
> 0x8069bea mysql_lock_tables__FP3THDPP8st_
In addition, here the resolved stack-trace:
0x806cb54 handle_segfault__Fi + 428
0x8116c2a pthread_sighandler + 158
0x80e715e mi_lock_database + 14
0x80b5de8 external_lock__9ha_myisamP3THDi + 28
0x8069d75 lock_external__FPP8st_tableUi + 121
0x8069bea mysql_lock_tables__FP3THDPP8st_tableUi + 362
0x
On Wed, 14 Nov 2001, Aaron Williams wrote:
> At 3:54 PM + 11/14/01, M. A. Alves wrote:
> >On Wed, 14 Nov 2001, Aaron Williams wrote:
> >> . . .
> >> I am not expert on innodb, or mysql either. I started playing with
> >> those values after reading the performance guide on Innodb.com.
> >
>
At 3:54 PM + 11/14/01, M. A. Alves wrote:
>On Wed, 14 Nov 2001, Aaron Williams wrote:
>> . . .
>> I am not expert on innodb, or mysql either. I started playing with
>> those values after reading the performance guide on Innodb.com.
>
>What is this site (Innodb.com)? It seems to show some k
On Wed, 14 Nov 2001, Aaron Williams wrote:
> . . .
> I am not expert on innodb, or mysql either. I started playing with
> those values after reading the performance guide on Innodb.com.
What is this site (Innodb.com)? It seems to show some kind of report on
some system associated with the HTTP r
>Hi,
>I'm running MySQL 3.23.42 on a 1.4GHz Athlon with 512MB of RAM for my
>database server, but this machine doesn't seem to be able to handle the load,
>which makes me think I must be doing something wrong. The primary job of
>this database is RADIUS authentication for our 30k or so customers so
I would suggest creating a new table to hold vendor information. Then remove
the varchar vendor field in the parts table and replace it with an integer
that represents the vendorid from the vendor table you just created. This
should speed things up considerably. You can do a left join any time you
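A minimal sketch of that schema change, with hypothetical table and column
names (the real parts table isn't shown in the thread):

CREATE TABLE vendor (
  vendorid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name     VARCHAR(100) NOT NULL
);

ALTER TABLE parts
  ADD COLUMN vendorid INT UNSIGNED NOT NULL,
  ADD INDEX idx_vendorid (vendorid);
-- populate parts.vendorid from the old varchar field, then drop it:
-- ALTER TABLE parts DROP COLUMN vendor;

-- typical lookup joining back to the vendor name:
SELECT p.*, v.name
FROM parts p
LEFT JOIN vendor v ON v.vendorid = p.vendorid;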