On 02.11.2017 at 20:09, Miguel González wrote:
Hi,
I have a VPS web server (Apache 2.4/PHP 7.x + Varnish 4.1) running with
MySQL 5.6. I have 20 GB of RAM. I serve mainly WordPress sites, all using
InnoDB.
So I'm thinking of ways of improving MySQL performance, and of course
for that, you need to measure.
Currently I'm checking, there are
Hello list,
I come here with a case that is, I would say, very strange.
Let me explain: we have a DB server with MySQL version 5.5.49 on Debian
Jessie. Some days ago, our workflow allowed us to purge a large table
from one of our DB servers. By large I mean a table which was 350 GB in
size.
To purge that table,
On 7/24/2015 4:35 PM, Camilo Vieira wrote:
Hi,
My MySQL server is performing very slow inserts. Could somebody help
me understand what's happening?
... snip ...
---TRANSACTION 31D6D74, ACTIVE 27107 sec
mysql tables in use 8, locked 8
7470 lock struct(s), heap size 801208, 849088 row lo
- Original Message -
> From: "Camilo Vieira"
> Subject: Re: MySQL 5.5 Slow performance to insert
> $ ./mysqltuner.pl --user root --pass abril@123
Thank you for that password :-) I don't particularly like MySQLtuner myself, it
makes assumptions about your workl
oDB +MRG_MYISAM
[--] Data in MyISAM tables: 9K (Tables: 9)
[--] Data in InnoDB tables: 52G (Tables: 3468)
[--] Data in MEMORY tables: 30M (Tables: 44)
[!!] Total fragmented tables: 284
Performance Metrics
-
[--] Up for: 2d 17h 35m 17s (1M
Hi,
Your INSERT query status is "Copying to tmp table";
this means fetching the rows that have to be inserted is slow.
You should tune the SELECT statement inside your INSERT query:
adding indexes and/or simplifying the query, and so on.
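Since this is an INSERT ... SELECT, the SELECT part can be profiled on its own. A minimal sketch of the advice above (table and column names are made up for illustration, not taken from the thread):

```sql
-- EXPLAIN only needs the SELECT part, so strip the INSERT off first
EXPLAIN
SELECT s.id, s.payload
FROM   staging s
JOIN   lookup  l ON l.id = s.id
WHERE  s.created_at >= '2015-07-01';

-- If the plan shows a full scan plus "Using temporary; Using filesort",
-- an index on the join/filter columns usually helps:
ALTER TABLE staging ADD INDEX idx_created_id (created_at, id);
```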
```
---TRANSACTION 31D6D74, ACTIVE 27107 sec
mysql tables in use 8, l
```
Hi,
My MySQL server is performing very slow inserts. Could somebody help
me understand what's happening?
mysql> show engine innodb status \G
*** 1. row ***
Type: InnoDB
Name:
Status:
=
150724 17:40:28
Hello Larry,
On 2/4/2015 3:37 PM, Larry Martell wrote:
On Wed, Feb 4, 2015 at 3:25 PM, shawn l.green wrote:
Hi Larry,
On 2/4/2015 3:18 PM, Larry Martell wrote:
On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green
wrote:
Hi Larry,
On 2/1/2015 4:49 PM, Larry Martell wrote:
I have 2 queries.
On Wed, Feb 4, 2015 at 3:25 PM, shawn l.green wrote:
> Hi Larry,
>
>
> On 2/4/2015 3:18 PM, Larry Martell wrote:
>>
>> On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green
>> wrote:
>>>
>>> Hi Larry,
>>>
>>>
>>> On 2/1/2015 4:49 PM, Larry Martell wrote:
I have 2 queries. One takes 4 hours
Hi Larry,
On 2/4/2015 3:18 PM, Larry Martell wrote:
On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green wrote:
Hi Larry,
On 2/1/2015 4:49 PM, Larry Martell wrote:
I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes a
On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green wrote:
> Hi Larry,
>
>
> On 2/1/2015 4:49 PM, Larry Martell wrote:
>>
>> I have 2 queries. One takes 4 hours to run and returns 21 rows, and
>> the other, which has 1 additional where clause, takes 3 minutes and
>> returns 20 rows. The main table being
Hi Larry,
On 2/1/2015 4:49 PM, Larry Martell wrote:
I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching i
I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To s
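A first diagnostic step for a pair like this is to compare the optimizer's plans; the sketch below is generic (hypothetical table and columns), not the poster's actual query:

```sql
-- The "key", "rows" and "Extra" columns of the two plans usually
-- explain a 4-hour vs 3-minute gap (a different index chosen, or a
-- full scan avoided by the extra predicate):
EXPLAIN SELECT col_a, col_b FROM big_table WHERE col_c = 1;
EXPLAIN SELECT col_a, col_b FROM big_table WHERE col_c = 1 AND col_d = 2;
```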
Hi,
I have two MySQL 5.5 servers with master-to-master replication.
The second server failed and I needed to remove it to repair the
operating system.
Since this incident, the application users have noticed that the
application response has been very slow.
Both servers have RAID 1 (mirroring)
Greetings MySQL users --
If MySQL monitoring is something you are doing with one specific tool -- or
not at all -- then you might find this blog post on MySQL Performance
Monitoring in SPM worth a read:
http://wp.me/pwdA7-Xo
We frequently hear from organizations across industries who tackle
On Thu, May 15, 2014 at 11:01 AM, Johan De Meersman wrote:
> - Original Message -
>> From: "Larry Martell"
>> Subject: Re: Performance boost by splitting up large table?
>>
>> This table is queried based on requests from the users. There are 10
- Original Message -
> From: "Larry Martell"
> Subject: Re: Performance boost by splitting up large table?
>
> This table is queried based on requests from the users. There are 10
> different lookup columns they can specify, and they can provide any or
That
On Thu, May 15, 2014 at 4:14 AM, Johan De Meersman wrote:
>
> You've already had some good advice, but there's something much simpler
> that will also give you a significant boost: a covering index.
>
> Simply put, the engine is smart enough to not bother with row lookups if
> everything yo
You've already had some good advice, but there's something much simpler
that will also give you a significant boost: a covering index.
Simply put, the engine is smart enough to not bother with row lookups if
everything you asked for is already in the index it was using. You'll need to
kee
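As a sketch of the covering-index idea (hypothetical table and columns, not from the thread):

```sql
-- A query that only touches three columns:
SELECT customer_id, order_date, total
FROM   orders
WHERE  customer_id = 42;

-- A covering index holds all three, so the engine can answer the query
-- from the index alone (EXPLAIN shows "Using index") without doing a
-- row lookup per match:
ALTER TABLE orders ADD INDEX idx_cover (customer_id, order_date, total);
```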
hem.). Would I expect to get a marked performance boost if I
> split my table up into 2 tables, one with the few frequently queried
> columns and another with less frequently queried ones? Doing this will
> require a lot of code changes, so I don't want to go down this path if
> it w
% of the time, a very small
> subset of these columns are queried. The other columns are rarely, if
> ever, queried. (But they could be at any time, so we do need to
> maintain them.). Would I expect to get a marked performance boost if I
> split my table up into 2 tables, one with the fe
We have a table with 254 columns in it. 80% of the time, a very small
subset of these columns are queried. The other columns are rarely, if
ever, queried. (But they could be at any time, so we do need to
maintain them.). Would I expect to get a marked performance boost if I
split my table up into
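The split being asked about is usually done as a 1:1 "vertical partition": hot columns in one table, cold columns in a side table sharing the primary key. A hypothetical sketch (all names invented):

```sql
-- Frequently queried columns stay in the main table...
CREATE TABLE item_hot (
  id     INT PRIMARY KEY,
  name   VARCHAR(100),
  status TINYINT
);
-- ...rarely queried columns move to a side table with the same key
CREATE TABLE item_cold (
  id          INT PRIMARY KEY,
  notes       TEXT,
  legacy_data BLOB
);
-- The occasional full view is recovered with a join:
SELECT h.*, c.notes
FROM item_hot h JOIN item_cold c ON c.id = h.id
WHERE h.id = 123;
```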
On 8/14/13 10:46 AM, Manuel Arostegui wrote:
2013/8/14 Andy Wallace mailto:awall...@ihouseweb.com>>
Hey all -
We have been focusing on performance in our systems a lot lately, and have
made some pretty
good progress. Upgrading the mySQL engine from 5.1 to 5.5 was eye-o
2013/8/14 Andy Wallace
> Hey all -
>
> We have been focusing on performance in our systems a lot lately, and have
> made some pretty
> good progress. Upgrading the mySQL engine from 5.1 to 5.5 was eye-opening.
>
> But there are still issues, and one in particular is vexi
Hey all -
We have been focusing on performance in our systems a lot lately, and
have made some pretty good progress. Upgrading the MySQL engine from 5.1
to 5.5 was eye-opening.
But there are still issues, and one in particular is vexing. It seems
like a tuning problem for sure - I notice this
ersman [mailto:vegiv...@tuxera.be]
> Sent: Sunday, August 11, 2013 2:16 PM
> To: Brad Heller
> Cc: Johnny Withers; MySQL General List
> Subject: Re: Concurrent read performance problems
>
> Good to hear. A word of warning, though: make sure you don't have more
> connections al
Good to hear. A word of warning, though: make sure you don't have more
connections allocating those buffers than your machine can handle memory-wise,
or you'll start swapping and performance will REALLY go down the drain.
A query/index based solution would still be preferred. Cou
Johan, your suggestion to tweak max_heap_table_size and tmp_table_size
fixed the issue. Bumping them both to 512MB got our performance back
on-par. I came up with a way to avoid the contention using a complex set of
temp tables, but performance was abysmal.
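For reference, bumping those two variables looks like this; they act together, since the effective in-memory temp-table limit is the smaller of the two. 512 MB is the value the poster used, not a general recommendation:

```sql
-- Applies to new connections; use SET SESSION for the current one
SET GLOBAL max_heap_table_size = 536870912;  -- 512 MB
SET GLOBAL tmp_table_size      = 536870912;  -- 512 MB
```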
By reverting to the more straight
;> >> have a large buffer pool and a plenty-big thread cache?
>> >> 2. What parameters can I tune to increase concurrent reads to
>these two
>> >> tables?
>> >>
>> >> *Long version:*
>> >>
>> >>
>> >> I've got
*
> >>
> >>
> >> I've got a MySQL server that has only about 50 connections open to it at
> >> any given time. It basically only has one OLAP q
> >> uery
> >> type being ran against
> >> it that amounts to something like this:
> &
(SELECT (3 fields with 1 aggregate)
>> FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;
>>
>> These queries are well indexed and run very well individually--sub-second,
>> usually even faster. When I run a few of these queries simultaneously (2-3
>> on
this:
>
>SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1 aggregate)
>FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;
>
>These queries are well indexed and run very well
>individually--sub-second,
>usually even faster. When I run a few of these queries si
se queries simultaneously (2-3
on my laptop, 6-7 on our production boxes) performance grinds to a halt:
Consistently about 30 seconds to service a query.
Ideally, I'd like to be able to run 50+ of these queries concurrently.
I've tried MySQL 5.5.27 and MySQL 5.6.13 and get the same results on
t kind of things are you doing? If Data Warehouse 'reports', consider
Summary Tables. Non-trivial, but the 'minutes' will become 'seconds'.
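A summary table in this sense is just a pre-aggregated copy that reports read instead of the raw data; a hypothetical sketch (table and column names invented):

```sql
-- Pre-aggregate once (e.g. nightly, or incrementally from a cron job)...
CREATE TABLE sales_daily AS
SELECT sale_date, product_id,
       COUNT(*)    AS orders,
       SUM(amount) AS revenue
FROM   sales
GROUP  BY sale_date, product_id;

-- ...then reports scan the small table instead of the raw 'sales' table
SELECT sale_date, SUM(revenue)
FROM   sales_daily
GROUP  BY sale_date;
```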
> -Original Message-
> From: Bruce Ferrell [mailto:bferr...@baywinds.org]
> Sent: Tuesday, July 30, 2013 7:08 A
'Sending data' does not happen with the view; it starts directly with
the 'Sorting Result' state.
When I refer to some MySQL sites and other blogs, I see it said that
VIEWs will never improve performance. But here I see some improvement
with a view.
I would like to
I would like to
I think you're reducing the amount of rows referenced throughout the proc
using the view. This might be where you're seeing a performance difference.
If you create an innodb table where the structure and row count match the
view maybe you'll see another difference? I'll wait f
t's directly started with
the 'Sorting Result' state.
When I refer to some MySQL sites and other blogs, I see it said that
VIEWs will never improve performance. But here I see some improvement
with a view.
I would like to know how the VIEW is improving the performance.
Rega
Rega
ginal Message-
> From: Denis Jedig [mailto:d...@syneticon.net]
> Sent: Wednesday, April 24, 2013 10:50 PM
> To: mysql@lists.mysql.com
> Subject: Re: Performance of delete using in
>
> Larry,
>
> Am 25.04.2013 02:19, schrieb Larry Martell:
>
> > delete
Larry,
Am 25.04.2013 02:19, schrieb Larry Martell:
delete from cdsem_event_message_idx where event_id in ()
The in clause has around 1,500 items in it.
Consider creating a temporary table, filling it with your "IN"
values and joining it to cdsem_event_message_idx ON event_id for
deleti
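Denis's suggestion, sketched out (only the cdsem_event_message_idx name and the event_id column are from the thread; the rest is illustrative):

```sql
CREATE TEMPORARY TABLE purge_ids (event_id INT PRIMARY KEY);
INSERT INTO purge_ids VALUES (101), (102), (103);  -- ...the ~1,500 IDs

-- Multi-table DELETE: join instead of a huge IN () list
DELETE idx
FROM   cdsem_event_message_idx AS idx
JOIN   purge_ids USING (event_id);

DROP TEMPORARY TABLE purge_ids;
```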
plain on it, and it is using the
> index on event_id.
>
> On Wed, Apr 24, 2013 at 6:49 PM, Michael Dykman wrote:
>> You would have to show us the whole sql statement but often 'in' clauses can
>> be refactored into equivalent joins which tend to improve performance
>
hole sql statement but often 'in' clauses can
> be refactored into equivalent joins which tend to improve performance
> tremendously.
>
> - michael dykman
>
>
> On Wed, Apr 24, 2013 at 8:19 PM, Larry Martell
> wrote:
>>
>> I have a table that has 2.5 milli
- Original Message -
> From: "Rick James"
Hey Rick,
Thanks for your thoughts.
> * Smells like some huge LONGTEXTs were INSERTed, then DELETEd.
> Perhaps just a single one of nearly 500M.
I considered that, too; but I can see the on-disk size grow over a period of a
few months - it's
ctual data, and performance
>
>
>
> Hey list,
>
> I've got another peculiar thing going on :-) Let me give you a quick
> summary of the situation first: we host a number of Drupal sites, each
> site and it's db on separate VMs for reasons that are not important t
the lime green line of /data), growth occurs gradually
(and the issue happened in september, as well), until it seems to reach a
certain point. At some point, however, performance on that table (notably
select * - it's a drupal thing) pretty much instantly plummets, and the query
takes around hal
Am 29.11.2012 03:29, schrieb Dehua Yang:
> Finally , when the sync_binlog=0 , the commit statement disappear in the
> slow.log
in the subject you say exactly the opposite and if something
disappears in the slow.log it is good - so how should we help?
there are performance issues with a larger number of datafiles than with
a single one; that is the reason innodb_file_per_table is not a default
option.
On the other hand, with innodb_file_per_table you'll use more resources; there
can be a problem if you have _many_ tables, and there is obviously the problem to
keep n
returned to the
OS if you have a single ibdata1.
In most cases, I recommend innodb_file_per_table=1.
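For reference, the setting can be changed at runtime (it is a dynamic variable in the 5.5-era servers discussed here), but note it only affects tables created afterwards; an existing table only moves into its own .ibd file after a rebuild:

```sql
SET GLOBAL innodb_file_per_table = 1;

-- Existing tables stay inside ibdata1 until rebuilt, e.g.:
ALTER TABLE mytable ENGINE=InnoDB;  -- 'mytable' is a placeholder name
```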
> -Original Message-
> From: Pothanaboyina Trimurthy [mailto:skd.trimur...@gmail.com]
> Sent: Tuesday, May 15, 2012 4:58 AM
> To: mysql@lists.mysql.com
> Subject: Is there
ing help with the following:
>>> - what's the best way to run such a script without affecting the DB server
>>> performance?
>>>Should i limit the script to implement N number of records at a time?
>>>and then sleep or is there a better way?
>&g
I agree it wouldn't cause trouble, though it might lock MySQL, as there
are a number of other databases running on the same server.
So performance is an issue even if it's just a CPU/RAM peak.
On 5/31/12 11:23 AM, Reindl Harald wrote:
Am 31.05.2012 09:13, schrieb Roland RoLa
ple
sounds like a bad design without key
how can 30 updates take 2.4 seconds?
> i'm seeking help with the following:
> - what's the best way to run such a script without affecting the DB server
> performance?
> Should i limit the script to implement N number of recor
ffecting the DB server
performance? Should i limit the script to implement N number of records at a
time? and then sleep or is there a better way?
- What's the best practice of handling errors and warnings in such a situation?
- How can i prevent the runtime errors and mysql locks?
Any help wit
Check performance with script :
http://mysqltuner.pl/mysqltuner.pl - Variable tunning
http://hackmysql.com/mysqlidxchk - Unused Index
Thanks,
Prabhat
On Thu, May 24, 2012 at 4:32 PM, Rick James wrote:
> Thanks. I got tired of answering the same questions about buffer_pool and
> key_
('on','off') default NULL
Did you really mean to have 3 values (on, off, NULL)?
> -Original Message-
> From: Yu Watanabe [mailto:yu.watan...@jp.fujitsu.com]
> Sent: Wednesday, May 23, 2012 7:10 PM
> To: mysql@lists.mysql.com
> Subject: Re: Need help for
Johnny
Thank you for the reply.
>Second, make sure your Innodb buffer pool is allocating as much ram as
>possible. I'd even go as far as adding another 8gb of ram to the
>server. The buffer pool setting is going to give you the best
>performance increase.
The problem is
Alex
Thank you for the advice.
Probably, we will put index (key) on both columns.
Thanks,
Yu
Alex Schaft wrote:
>On 2012/05/24 07:37, Alex Schaft wrote:
>> You are selecting a record based on the value of data_id and
>> thold_enabled, but don't have an index on either? Add an index for both.
On 2012/05/24 07:37, Alex Schaft wrote:
> You are selecting a record based on the value of data_id and
> thold_enabled, but don't have an index on either? Add an index for both.
> If data_id is unique, then you would only need an index on that.
>
> Alex
>
On second thought, an index on thold_enabl
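Following Alex's advice, the indexes would be added roughly like this (thold_data, data_id and thold_enabled come from the thread; the index names are made up):

```sql
ALTER TABLE thold_data
  ADD INDEX idx_data_id (data_id),
  ADD INDEX idx_thold_enabled (thold_enabled);

-- If data_id is unique, a single unique index on it is enough:
-- ALTER TABLE thold_data ADD UNIQUE INDEX uq_data_id (data_id);
```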
On 2012/05/24 04:10, Yu Watanabe wrote:
> 2. Instead INDEXes and schema design must be studied. Please provide:
> SHOW CREATE TABLE
> | thold_data | CREATE TABLE `thold_data` (
> `id` int(11) NOT NULL auto_increment,
> `rra_id` int(11) NOT NULL default '0',
> `data_id` int(11) NOT NU
query. Tuning will not help. Period.
>
>1. There are only a few things worth tuning -- see
>http://mysql.rjweb.org/doc.php/memory (they don't include the ones you tried)
>
>2. Instead INDEXes and schema design must be studied. Please provide:
>SHOW CREATE TABLE
&
BLE SIZE
EXPLAIN SELECT ...
> -Original Message-
> From: Yu Watanabe [mailto:yu.watan...@jp.fujitsu.com]
> Sent: Tuesday, May 22, 2012 7:07 PM
> To: mysql@lists.mysql.com
> Subject: Need help for performance tuning with Mysql
>
> Hello all.
>
> I would like
Hi,
However much tuning you do in my.cnf, it will not help much if you do
not tune your SQL.
Your first priority should be to tune the SQL, which will give you good
performance even with decent memory allocations and other settings.
regards
anandkl
On Wed, May 23, 2012 at 3:45 PM, Andrew Mo
seem painless but if you have some legacy feature in place then things will
not work how you may expect them to.
Review your needs and see if a switch to innodb storage engine will give
you any performance gain. The locking differences alone might make this
worthwhile. TEST it.
You did not state
nd result.
>>>
>>> total used free shared buffers cached
>>> Mem: 8162380 7843676 318704 0 95632 5970892
>>> -/+ buffers/cache: 1777152 6385228
>>> Swap: 8032492
used free shared buffers cached
>> Mem: 8162380 7843676 318704 0 95632 5970892
>> -/+ buffers/cache: 1777152 6385228
>> Swap: 8032492 23560 8008932
>>
>> Thanks,
>> Yu
>>
>>
>>
ng to give you the best
performance increase.
Also, what kind of hard disks do you have the data files on? Raid? No raid?
Sent from my iPad
On May 22, 2012, at 9:08 PM, Yu Watanabe wrote:
> Hello all.
>
> I would like to ask for advice with performance tuning with MySQL.
>
> Fo
> -/+ buffers/cache: 1777152 6385228
> Swap: 8032492 23560 8008932
>
> Thanks,
> Yu
>
>
Yu Watanabe wrote:
>>Hello all.
>>
>>I would like to ask for advice with performance tuning with MySQL.
>>
>>Following are some d
Watanabe wrote:
>Hello all.
>
>I would like to ask for advice with performance tuning with MySQL.
>
>Following are some data for my server.
>
>CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
>Memory : 8GB
>OS : RHEL 4.4 x86_64
>MySQL : MySQL 5.0.50sp1-enterprise
Hello all.
I would like to ask for advice with performance tuning with MySQL.
Following are some data for my server.
CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
Memory : 8GB
OS : RHEL 4.4 x86_64
MySQL : MySQL 5.0.50sp1-enterprise
Attached file
# my.cnf.txt : my.cnf
hi everyone
Is there any performance difference between maintaining separate ibdata
files for each and every table, instead of having one single file for
all databases, for the InnoDB storage engine?
please let me know the difference.
--
3murthy
--
MySQL General Mailing List
For list archives: http
Am 14.05.2012 23:05, schrieb Nicolas Rannou:
> *1* to create 3 tables:*
> user - info about a user
> images - info about an image
> user_image_mapping
>
> *2* to create 2 tables*
> user - info about a user
> -> a field would contain a list which represents the ids of the images
> the user can
Hi all,
We are currently designing a database for our application (python/mysql)
and we have some performance concern:
We would have "users" and "images".
"users" can view some "images".
"images" can be viewed by several "users"
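Option 1 (the mapping table) is the conventional many-to-many design; a minimal sketch using the table names from the mail (columns are invented for illustration):

```sql
CREATE TABLE user (
  id   INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(100)
);
CREATE TABLE images (
  id  INT PRIMARY KEY AUTO_INCREMENT,
  url VARCHAR(255)
);
CREATE TABLE user_image_mapping (
  user_id  INT NOT NULL,
  image_id INT NOT NULL,
  PRIMARY KEY (user_id, image_id),  -- serves "images for this user"
  KEY idx_image (image_id)          -- serves "users for this image"
);

-- "Which images can this user view?"
SELECT i.*
FROM images i
JOIN user_image_mapping m ON m.image_id = i.id
WHERE m.user_id = 42;
```

Option 2 (an id list packed into one column) cannot be indexed or joined, which is why the mapping table usually wins despite the extra table.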
sorry for the duplicate message.
just ignore it.
On 2012/03/09, at 10:11, Zheng Li wrote:
>
> what if I have to select all columns every time?
> any difference in performance?
>
> On 2012/02/29, at 4:41, Paul DuBois wrote:
>
>>
>> On Feb 28, 2012, at 9:59 A
what if I have to select all columns every time?
any difference in performance?
On 2012/02/29, at 4:41, Paul DuBois wrote:
>
> On Feb 28, 2012, at 9:59 AM, Zheng Li wrote:
>
>> for example
>> there are 2 tables to save same data
>> table A has 10 columns: a pr
- Original Message -
> From: "Baron Schwartz"
>
> You may be interested in this:
> http://www.mysqlperformanceblog.com/2009/09/28/how-number-of-columns-affects-performance/
Heeh, very interesting. You guys keep churning out the good stuff :-)
--
Bier met gren
You may be interested in this:
http://www.mysqlperformanceblog.com/2009/09/28/how-number-of-columns-affects-performance/
even if I select all columns every time no matter it is Table A or B,
A has better performance?
On 2012/02/29, at 4:41, Paul DuBois wrote:
>
> On Feb 28, 2012, at 9:59 AM, Zheng Li wrote:
>
>> for example
>> there are 2 tables to save same data
>> table A has 10 co
olumns: a primary key column and 9 blob column
>> table B has 2 columns : a primary key column and 1 blob column which
>> includes all data in 2nd~10th columns of table A
>>
>> are there any differences in performance when selecting, inserting,
>> updating, and dele
able A
>
> are there any differences in performance when selecting, inserting, updating,
> and deleting data.
Sure. For example, with table A, you can select only those blob columns you're
interested in. With B, you have to select all of them if you want *any* of them.
--
P
At first blush, your problem would appear to concern the lack of index-use.
That's where I would begin my investigation. It might be painstaking, but
I would do something like this:
For each view
Look at the Join(s) and see what columns are being joined
Look at the tables and see what col
Hi all,
I've got some semi-general questions on the topics in the title. What I'm
looking for is more in the line of theory than query specifics. I am but a
poor peasant boy.
What I have is an application that makes heavy use of views. If I understand
views correctly (and I may not), views
app_colid_Examid rval Varchar 3
app_colid_Examid res Varchar 3
If I have 6 lakh records [approx. 600,000], does it cause a performance
issue on a Windows-based setup (MySQL 5.1.x)? I will not apply any kind
of comparison in the query (only select/update will take place). I will
manage the filters in the record set/data set.
Other
Check the auto-negotiate setting on your NIC. Run ifconfig and see if
there are a lot of errors.
On Sep 29, 2011 10:13 AM, "Jim Moseby" wrote:
Yeah:
# host 72.30.2.43 /* yahoo.com */
43.2.30.72.in-addr.arpa domain name pointer ir1.fp.vip.sk1.yahoo.com.
# host 10.1.20.97 /* my windows box */
97
Yeah:
# host 72.30.2.43 /* yahoo.com */
43.2.30.72.in-addr.arpa domain name pointer ir1.fp.vip.sk1.yahoo.com.
# host 10.1.20.97 /* my windows box */
97.20.1.10.in-addr.arpa has no PTR record
>>> Todd Lyons 9/29/2011 10:26 AM >>>
On Thu, Sep 29, 2011 at 7:12 AM, Jim Moseby wrote:
> I still u
On Thu, Sep 29, 2011 at 7:12 AM, Jim Moseby wrote:
> I still use the old MySQL Administrator GUI on my windows box. A simple
> 'select * from tablename' that would return only three records takes just
> over a minute to return (although it says '3 records returned in 0.0086
> seconds' at the
mysql Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu (i486) using readline 5.2
Strange, strange problem.
Everything was fine yesterday morning, then all of a sudden any query over the
network takes a REALLY long time to return. If I log in at the server console,
every query is snappy-fast.
- Original Message -
> From: "Hank"
>
> Just an update. Using the "load index into cache" statement for the
> 200 million row indexed "source" table, my correlated update
> statement ran in 1 hour, 45 minutes to update 144 million rows. A
I don't think there is anything other than SHOW FULL PROCESSLIST, to see
in which state the query is and whether it is locked or not.
I/O-related things you check at the OS level.
On Thu, Sep 22, 2011 at 11:07 PM, jiangwen jiang wrote:
> Hi,
>
> Is there any performance toolds about UPDATE/INSERT querys?
> I want t
2011/09/23 12:56 +0200, Johan De Meersman
What it says is: "If you are going to run many operations". You are
updating many rows, but you are only doing ONE operation: a single
update statement.
For what it's worth, the downloadable HTML help claims this only for
MyISAM tables,
Hello Johan,
Just an update. Using the "load index into cache" statement for the
200 million row indexed "source" table, my correlated update statement
ran in 1 hour, 45 minutes to update 144 million rows. A 50% increase
in performance!
Thank you very much,
-Hank
On Fr
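For reference, the statement Hank used preloads a MyISAM table's index blocks into the key cache sequentially, which is much faster than random reads during the update (the table name "source" is from the thread; MyISAM only):

```sql
LOAD INDEX INTO CACHE source;

-- Variant: preload only the non-leaf index blocks, useful when the
-- whole index will not fit in the key cache
LOAD INDEX INTO CACHE source IGNORE LEAVES;
```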
in the cache.
With 244 million records in the "source" table, I'm not sure that
would fit in the cache.
> Do you have an index on dest.key, too? That might help performance as well if
> it fits in memory, too,
> because you'll only need disk access for flushing w
the tables in the wrong order, or is it just
something you picked up off a blog without realising the impact?
> Source is indexed by key+seq (key is primary key, but seq is
> included as a covering index).
Good practice, that should prevent source from being read from disk, if your
index is f
Hi,
Are there any performance tools for UPDATE/INSERT queries?
I want to monitor UPDATE/INSERT performance and check whether there is
any performance bottleneck, for example:
slow INSERT/UPDATE
more I/O when executing INSERT
Regards
Thanks
J.W
On Thu, Sep 22, 2011 at 3:11 PM, Hassan Schroeder <
hassan.schroe...@gmail.com> wrote:
> On Thu, Sep 22, 2011 at 11:51 AM, Hank wrote:
> > Like I said, the problem is not just one particular SQL statement. It is
> > several dozen statements operating on tables with several hundred million
> > rec
48 PM, Ananda Kumar wrote:
>>>>>
>>>>>> Hi,
>>>>>> Why don't you use a stored proc to update rows, where you commit every
>>>>>> 1k or 10k rows?
>>>>>> This will be much faster than your individual update stmts.
>>
t;>>
>>>>> Hi,
>>>>> Why don't you use a stored proc to update rows, where you commit every
>>>>> 1k or 10k rows?
>>>>> This will be much faster than your individual update stmts.
>>>>>
>>>>> regards
>>&g
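Ananda's suggestion, sketched as an actual procedure for an InnoDB table (all names are hypothetical; note the thread's tables are MyISAM, where per-batch commits are moot, so this illustrates the general advice rather than Hank's exact case):

```sql
DELIMITER //
CREATE PROCEDURE batch_update()
BEGIN
  DECLARE n INT DEFAULT 1;
  WHILE n > 0 DO
    -- single-table UPDATE, so LIMIT is allowed
    UPDATE big_table
       SET processed = 1
     WHERE processed = 0
     LIMIT 10000;
    SET n = ROW_COUNT();  -- capture before COMMIT resets it
    COMMIT;               -- one commit per 10k rows
  END WHILE;
END //
DELIMITER ;

CALL batch_update();
```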
nk wrote:
>>>>
>>>>> That is what I'm doing. I'm doing a correlated update on 200 million
>>>>> records. One UPDATE statement.
>>>>>
>>>>> Also, I'm not asking for a tutorial when not to use LOCK TABLES. I'
2011 at 8:24 PM, Hank wrote:
>>>
>>>> That is what I'm doing. I'm doing a correlated update on 200 million
>>>> records. One UPDATE statement.
>>>>
>>>> Also, I'm not asking for a tutorial when not to use LOCK TABLES. I'
gt;> anandkl
>>
>> On Thu, Sep 22, 2011 at 8:24 PM, Hank wrote:
>>
>>> That is what I'm doing. I'm doing a correlated update on 200 million
>>> records. One UPDATE statement.
>>>
>>> Also, I'm not asking for a tutorial when
ing for a tutorial when not to use LOCK TABLES. I'm
>> trying
>> to figure out why, despite what the documentation says, using LOCK TABLES
>> hinders performance for large update statements on MYISAM tables when it
>> is
>> supposed to increase performance on exa
gt; records. One UPDATE statement.
>
> Also, I'm not asking for a tutorial when not to use LOCK TABLES. I'm
> trying
> to figure out why, despite what the documentation says, using LOCK TABLES
> hinders performance for large update statements on MYISAM tables when it is
>
1 - 100 of 1892 matches