ersman [mailto:vegiv...@tuxera.be]
> Sent: Sunday, August 11, 2013 2:16 PM
> To: Brad Heller
> Cc: Johnny Withers; MySQL General List
> Subject: Re: Concurrent read performance problems
>
Good to hear. A word of warning, though: make sure you don't have more
connections allocating those buffers than your machine can handle memory-wise,
or you'll start swapping and performance will REALLY go down the drain.
A query/index based solution would still be preferred. Could you for insta
Johan, your suggestion to tweak max_heap_table_size and tmp_table_size
fixed the issue. Bumping them both to 512MB got our performance back
on par. I came up with a way to avoid the contention using a complex set of
temp tables, but performance was abysmal.
By reverting to the more straight-forwar
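For reference, a sketch of the change that fixed this (the variable names are from the thread; 512MB is the value that worked here, and per Johan's warning it should be sized against your connection count, since these are per-connection limits):

```sql
-- Both variables must be raised together: the in-memory temp table
-- ceiling is the *smaller* of the two. Persist the same values under
-- [mysqld] in my.cnf to survive a restart.
SET GLOBAL max_heap_table_size = 536870912;  -- 512MB
SET GLOBAL tmp_table_size      = 536870912;  -- 512MB
```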
True, which is why I said I suspected file-based sort :-) At one million rows,
that seems to be an accurate guess, too. Still on the phone, though, and in
bed. I'll read the thread better tomorrow, but you might get some benefit from
cutting out the subselect if that's possible.
If you have ple
Just because it says filesort doesn't mean it'll create a file on disk.
Table schema and the full query would be helpful here too
http://www.mysqlperformanceblog.com/2009/03/05/what-does-using-filesort-mean-in-mysql/
On Aug 11, 2013 1:28 PM, "Brad Heller" wrote:
> Yes sorry, here's the explain. It w
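As the linked post explains, "Using filesort" does not necessarily mean a file on disk; a quick way to check whether a sort actually spilled (table and column names here are placeholders, not from the thread):

```sql
-- A filesort that fits in sort_buffer_size stays in memory.
-- Sort_merge_passes > 0 after the query means temp files were written.
FLUSH STATUS;
SELECT * FROM events ORDER BY created_at;
SHOW SESSION STATUS LIKE 'Sort_merge_passes';
```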
Yes sorry, here's the explain. It was taken from MariaDB 5.5.32. Looks like
there is a lot of filesort goin' on here. Also note that I'm only using the
first two fields of the covering index (intentionally).
+--+-++---+---
On my phone now, but it smells of file-based sorting, making disk access the
bottleneck. Can you provide the explain?
Brad Heller wrote:
>Hey list, first time posting here so apologies if this is the wrong
>forum
>for this but I'm really out of options on how to solve this problem!
>
>*Short ver
Hey list, first time posting here so apologies if this is the wrong forum
for this but I'm really out of options on how to solve this problem!
*Short version:*
1. High concurrent reads, performing the same well-indexed query type to
the same two tables.
2. No additional traffic at all--just reads
Hi List,
In a 20m interval in our max load I have:
OS WAIT ARRAY INFO: reservation count 637, signal count 625
Mutex spin waits 0, rounds 19457, OS waits 428
RW-shared spins 238, OS waits 119; RW-excl spins 13, OS waits 8
(The values are the difference between the start and end of this 20m
inter
Hi,
We're changing it to INT(9). Apparently someone remembered to change the type
of data in this field from an alphanumeric value to an INT(9).
I'm going to change this asap.
Thanks
BR
AJ
On Mon, Sep 6, 2010 at 5:17 AM, mos wrote:
> At 04:44 AM 9/3/2010, Alexandre Vieira wrote:
>
>> Hi Johnn
At 04:44 AM 9/3/2010, Alexandre Vieira wrote:
Hi Johnny,
mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
++-++---+---+-+-+---+--+---+
| id | select_type | table | type | possible_keys | key | key_
On 9/3/2010 3:15 PM, Johnny Withers wrote:
It seems that when your index is PRIMARY on InnoDB tables, it's special: it is
the clustered index, so it is part of the data itself and is not included in
the index_length field.
I have never noticed this. I don't think adding a new index will make a
difference.
You could try moving your log files to a different disk array tha
Hi,
When creating a table in MySQL with a PK it automatically creates an INDEX,
correct?
The Index_Length: 0 is rather strange..I've created a new INDEX on top of my
PK column on my test system and Index_Length shows a big value different
from 0. Do you think this might have any impact?
mysql> s
I think your MySQL instance is disk bound.
If you look at your iostats, md2, md12 and md22 have a ~10ms wait time before a
request can be processed. iostat is also reporting those disks are 75%+
utilized, which means they are doing about all they can do.
Any way you can add more disks? Or add faster disks
Hi,
The DB is working on /var, which is md2 / md12 / md22.
extended device statistics
device    r/s   w/s   kr/s    kw/s  wait  actv  svc_t  %w  %b
md2       0.1  80.0    0.4   471.4   0.0   1.0   12.2   0  94
md10      0.0   5.7    0.0    78.8   0.0   0.1   19.7   0   9
md1
Very confusing...
Why is index_length zero ?
On top of that, there's only 500K rows in the table with a data size of
41MB. Maybe InnoDB is flushing to disk too often?
What's the output of iostat -dxk 60 ? (run for a minute+ to get 2 output
grids)
--
*Johnny With
Hi,
mysql> SHOW TABLE STATUS LIKE 'clientinfo';
+++-++++-+-+--+---++-+-++---+--++-
What does
SHOW TABLE STATUS LIKE 'table_name'
say about this table?
-JW
On Fri, Sep 3, 2010 at 8:59 AM, Alexandre Vieira wrote:
> Hi,
>
> I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and it
> makes a little difference but not enough for the application to run in real
>
Hi,
I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and it
makes a little difference but not enough for the application to run in real
time processing.
It's a Sun Fire V240, 2x 1.5GHz UltraSPARC IIIi with 2GB of RAM.
MySQL is eating 179MB of RAM and 5.4% of CPU.
PID USERNA
Ok, so I'm stumped?
What kind of hardware is behind this thing?
-JW
On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira wrote:
> Hi Johnny,
>
> mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
>
> ++-++---+---+-+-+--
On 02/09/2010 6:05 p.m., Alexandre Vieira wrote:
Hi Jangita,
I have 15779 innodb_buffer_pool_pages_free out of a total of 22400. That's
246MB of 350MB free.
| Innodb_buffer_pool_pages_data | 6020 |
| Innodb_buffer_pool_pages_dirty| 1837 |
| Innodb_buffer_pool_pages_flushed | 673837
Hi Johnny,
mysql> EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
++-++---+---+-+-+---+--+---+
| id | select_type | table | type | possible_keys | key | key_len
| ref | rows | Extra |
++-
; delete queries?
>
> DELETE FROM clientinfo WHERE units='155618918';
>
> -Original Message-
> From: Alexandre Vieira [mailto:nul...@gmail.com]
> Sent: Thursday, September 02, 2010 8:46 AM
> To: John Daisley; joh...@pixelated.net
> Cc: mysql@lists.mysql.com
elated.net
Cc: mysql@lists.mysql.com
Subject: Performance problems on MySQL
John, Johnny,
Thanks for the prompt answer.
mysql> SHOW CREATE TAB
Hi Jangita,
I have 15779 innodb_buffer_pool_pages_free out of a total of 22400. That's 246MB
of 350MB free.
| Innodb_buffer_pool_pages_data | 6020 |
| Innodb_buffer_pool_pages_dirty| 1837 |
| Innodb_buffer_pool_pages_flushed | 673837 |
| Innodb_buffer_pool_pages_free | 157
On 02/09/2010 4:46 p.m., Alexandre Vieira wrote:
John, Johnny,
Thanks for the prompt answer.
...
We also run some other applications in the server, but nothing that consumes
all the CPU/Memory. The machine has almost 1GB of free memory and 50% of
idle CPU time at any time.
TIA
BR
Alex
Increa
John, Johnny,
Thanks for the prompt answer.
mysql> SHOW CREATE TABLE clientinfo;
++--
What is the hardware spec? Anything else running on the box?
Why are you replicating but not making use of the slave?
Can you post the output of SHOW CREATE TABLE?
Regards
John
On 2 September 2010 12:50, Alexandre Vieira wrote:
> Hi list,
>
> I'm having some performance problem
Can you show us the table structure and sample queries?
On Thursday, September 2, 2010, Alexandre Vieira wrote:
> Hi list,
>
> I'm having some performance problems on my 5.0.45-log DB running on Solaris
> 8 (V240).
>
> We only have one table and two apps selecting,
Hi list,
I'm having some performance problems on my 5.0.45-log DB running on Solaris
8 (V240).
We only have one table and two apps selecting, updating, inserting and
deleting massively and randomly from this table.
The table is very simple. All SELECTs,INSERTs,UPDATEs and DELETEs have onl
Even more when you compare to a script executing the inserts, instead of the
mysql client...
Olaf
On 6/5/08 12:06 PM, "mos" <[EMAIL PROTECTED]> wrote:
> At 10:30 AM 6/5/2008, you wrote:
>> Simon,
>>
>> In my experience load data infile is a lot faster than a sql file through
>> the client.
>> I
Olaf, Mike
Thanks for the input, the blob data is just text, I'll have a go at
using the load data command
Regards
Simon
mos wrote:
At 10:30 AM 6/5/2008, you wrote:
Simon,
In my experience load data infile is a lot faster than a sql file through
the client.
I would parse the sql file an
At 10:30 AM 6/5/2008, you wrote:
Simon,
In my experience load data infile is a lot faster than a sql file through
the client.
I would parse the sql file and create a csv file with just the columns of
your table and then use load data infile using the created csv file
Olaf
Olaf,
Using a
Simon,
In my experience load data infile is a lot faster than a sql file through
the client.
I would parse the sql file and create a csv file with just the columns of
your table and then use load data infile using the created csv file
Olaf
On 6/5/08 4:52 AM, "Simon Collins" <[EMAIL PROTECTED]>
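Olaf's suggestion would look roughly like this (table, column, and file names are hypothetical; adjust the delimiters to whatever the generated CSV uses):

```sql
ALTER TABLE enwiki_pages DISABLE KEYS;      -- skip per-row index maintenance

LOAD DATA INFILE '/tmp/enwiki.csv'
INTO TABLE enwiki_pages
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(id, page_text);

ALTER TABLE enwiki_pages ENABLE KEYS;       -- rebuild the indexes in one pass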
AIL PROTECTED]> wrote:
From: Simon Collins <[EMAIL PROTECTED]>
Subject: Re: Large import into MYISAM - performance problems
To: mysql@lists.mysql.com
Date: Thursday, June 5, 2008, 3:05 PM
I'm loading the data through the command below mysql -f -u root -p
enwiki < enwiki.sql
I can do - if the load data infile command definitely improves
performance and splitting the file does the same I have no problem with
doing this. It just seems strange that it's a problem with the way the
import file is configured. I thought the problem would be somehow with
the table getting
You could load the data into several smaller tables and combine them
into a merged table which would have no real effect on the schema.
Ade
Simon Collins wrote:
I'm loading the data through the command below mysql -f -u root -p
enwiki < enwiki.sql
The version is MySQL 5.0.51a-community
I've
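Ade's merged-table idea, sketched with hypothetical names (MERGE requires the underlying MyISAM tables to have identical definitions):

```sql
CREATE TABLE enwiki_p1 (id INT NOT NULL, body MEDIUMBLOB) ENGINE=MyISAM;
CREATE TABLE enwiki_p2 (id INT NOT NULL, body MEDIUMBLOB) ENGINE=MyISAM;

-- Queries against enwiki see the union of the chunk tables;
-- new INSERTs go to the last table in the UNION list.
CREATE TABLE enwiki (id INT NOT NULL, body MEDIUMBLOB)
  ENGINE=MERGE UNION=(enwiki_p1, enwiki_p2) INSERT_METHOD=LAST;
```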
Simon,
Why don't you split the file and use the LOAD DATA INFILE command, which would
improve the performance while loading into an empty table with keys
disabled.
regards
anandkl
On 6/5/08, Simon Collins <[EMAIL PROTECTED]> wrote:
>
> I'm loading the data through the command below mysql -f -u root -p e
I'm loading the data through the command below mysql -f -u root -p
enwiki < enwiki.sql
The version is MySQL 5.0.51a-community
I've disabled the primary key, so there are no indexes. The CPU has 2
cores and 2 Gigs memory.
The import fell over overnight with a "table full" error as it hit 1T (
Hi,
Break up the file into small chunks and then import one by one.
On Wed, Jun 4, 2008 at 10:12 PM, Simon Collins <
[EMAIL PROTECTED]> wrote:
> Dear all,
>
> I'm presently trying to import the full wikipedia dump for one of our
> research users. Unsurprisingly it's a massive import file (2.7T)
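Splitting and importing chunk by chunk could look like the sketch below. It splits a small stand-in file so the mechanics are visible; for the real 2.7T dump you would split by size instead (e.g. split -b 1G) and feed each piece to the mysql client:

```shell
# Create a 10-line stand-in for the dump, then split it into 4-line chunks.
printf 'INSERT %d;\n' 1 2 3 4 5 6 7 8 9 10 > dump.sql
split -l 4 -d dump.sql chunk_        # produces chunk_00 chunk_01 chunk_02

# For the real import you would then loop over the pieces, e.g.:
#   for f in chunk_*; do mysql -f -u root -p enwiki < "$f"; done
ls chunk_*
```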
Simon,
As someone else mentioned, how are you loading the data? Can you
post the SQL?
You have an Id field, so is that not the primary key? If so, the
slowdown could be maintaining the index. If so, add up to 30% of your
available RAM to your key_buffer_size in your my.cnf file
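In my.cnf terms, that advice might look like this on the 2GB box discussed here (the exact figure is an assumption; the key buffer caches MyISAM index blocks only, data blocks rely on the OS cache):

```ini
[mysqld]
key_buffer_size = 512M
```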
Hi Simon,
How are you doing this import into your table?
On 6/4/08, Simon Collins <[EMAIL PROTECTED]> wrote:
>
> Dear all,
>
> I'm presently trying to import the full wikipedia dump for one of our
> research users. Unsurprisingly it's a massive import file (2.7T)
>
> Most of the data is importing into
Dear all,
I'm presently trying to import the full wikipedia dump for one of our
research users. Unsurprisingly it's a massive import file (2.7T)
Most of the data is importing into a single MyISAM table which has an id
field and a blob field. There are no constraints / indexes on this
table.
Hi,
Your English is fine :) Your queries don't look too bad. It could be
there are no good indexes. Have you tried running EXPLAIN on them?
What version of MySQL are you using? You can also try profiling the
queries (by hand with SHOW STATUS, or more easily with MySQL Query
Profiler) to s
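Profiling "by hand with SHOW STATUS" can be done like this (the query is a placeholder for one of the slow joins; the counter names are standard):

```sql
FLUSH STATUS;                        -- reset this session's counters
SELECT t1.id FROM t1 JOIN t2 ON t2.t1_id = t1.id WHERE t1.flag = 1;
-- Handler_read_rnd_next climbing fast => full table scans;
-- Created_tmp_disk_tables > 0       => on-disk temporary tables.
SHOW SESSION STATUS LIKE 'Handler%';
SHOW SESSION STATUS LIKE 'Created_tmp%';
```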
Hi all,
First sorry my bad english :)
I having a problem with a large join with 10 tables with 70Gb of text data,
some joins executed by index but some others not.
I´m work with HP SERVER (Proliant NL-150) a 2 Xeon 2 Duo with 3Gb Ram and
RAID 0.
When executed to a client with small datasets the
On Saturday 25 November 2006 17:54, John Kopanas wrote:
> The following query takes over 6 seconds:
> SELECT * FROM purchased_services WHERE (purchased_services.company_id =
> 535263)
What does EXPLAIN say about that query?
Have you done an optimize recently?
--
Scanned by iCritical.
--
MySQL
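A sketch of both checks suggested above (the index name is hypothetical; if company_id is already indexed, EXPLAIN should show type=ref rather than type=ALL):

```sql
EXPLAIN SELECT * FROM purchased_services WHERE company_id = 535263;

-- If EXPLAIN shows a full scan, an index on the filter column is the fix:
CREATE INDEX idx_ps_company ON purchased_services (company_id);

-- And after heavy churn, defragment and refresh the index statistics:
OPTIMIZE TABLE purchased_services;
```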
My innodb_buffer_pool_size is:
innodb_buffer_pool_size | 8388608
That looks like 8MB... that sounds small if I have a DB with over 1M
rows to process. No?
Yes, that's extremely small. I'd go for at least 256M, and maybe 512M
if your machine will primarily be doing mysql duties.
Did
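The my.cnf equivalent of the advice above (512M is the upper figure suggested; on these versions changing it requires a server restart):

```ini
[mysqld]
innodb_buffer_pool_size = 512M
```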
At 08:31 PM 11/26/2006, John Kopanas wrote:
When I did a:
SELECT * FROM purchased_services WHERE company_id = 1000;
It took me 7 seconds. This is driving me crazy!
I am going to have to try this on another computer and see if I am
going to get the same results on another system. Argh...
T
Yes... with FORCE INDEX it still takes 7 seconds.
On 11/26/06, Dan Nelson <[EMAIL PROTECTED]> wrote:
In the last episode (Nov 26), John Kopanas said:
> On 11/26/06, Dan Nelson <[EMAIL PROTECTED]> wrote:
> >In the last episode (Nov 26), John Kopanas said:
> >> Thanks a lot for your help.
> >>
> >
In the last episode (Nov 26), John Kopanas said:
> On 11/26/06, Dan Nelson <[EMAIL PROTECTED]> wrote:
> >In the last episode (Nov 26), John Kopanas said:
> >> Thanks a lot for your help.
> >>
> >> The query should and only does return 1-6 rows depending on the id.
> >> Never more then that. Here a
When I did a:
SELECT * FROM purchased_services WHERE company_id = 1000;
It took me 7 seconds. This is driving me crazy!
I am going to have to try this on another computer and see if I am
going to get the same results on another system. Argh...
On 11/26/06, Dan Nelson <[EMAIL PROTECTED]> wro
In the last episode (Nov 26), John Kopanas said:
> Thanks a lot for your help.
>
> The query should and only does return 1-6 rows depending on the id.
> Never more then that. Here are the comperative EXPLAINs:
>
> mysql> EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
> ++-
The application is not in production yet but when it will go in
production the server will be considerably faster and have much more
RAM. But before I put the app in production I want to make sure it is
working properly. 500K rows does not sound like that much in this
day and age. If I understa
Thanks a lot for your help.
The query should, and does, return only 1-6 rows depending on the id.
Never more than that. Here are the comparative EXPLAINs:
mysql> EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
++-++---+---+-+--
This kind of timeframe (2 - 2.5 secs) could just be the result of
running on a laptop. You've got a small amount of RAM compared to
many servers, a bit slower processor, and *much* slower hard disk
system than most servers. If your query has to access multiple
records spread out throughout the t
In the last episode (Nov 25), John Kopanas said:
> Sorry about these questions. I am used to working with DBs with less
> then 10K rows and now I am working with tables with over 500K rows
> which seems to be changing a lot for me. I was hoping I can get some
> people's advice.
>
> I have a 'com
If I just SELECT id:
SELECT id FROM purchased_services WHERE (company_id = 1000)
It takes approx 2-2.5s. When I look at the process list, its state
always seems to be 'Sending data'...
This is after killing the db and repopulating it again. So what is going on?
On 11/25/06
I tried the same tests with the database replicated in a MyISAM
engine. The count was instantaneous but the following still took
3-6seconds:
SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)
The following though was instantaneous:
SELECT * FROM purchased_services
Sorry about these questions. I am used to working with DBs with fewer
than 10K rows, and now I am working with tables with over 500K rows,
which seems to be changing a lot for me. I was hoping I could get some
people's advice.
I have a 'companies' table with over 500K rows and a
'purchased_services'
As others have suggested , turn your slow query log on in my.cnf , and set
your long-query_time, and you can view your slow queries in the *.log file
in your data dir, and then try to optimize them, you could also try mytop (
http://jeremy.zawodny.com/mysql/mytop/) , and check your queries in real
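The 5.0-era my.cnf lines for this would be roughly as follows (later servers renamed these to slow_query_log / slow_query_log_file; the path is an example):

```ini
[mysqld]
log-slow-queries = /var/lib/mysql/slow.log
long_query_time  = 2
log-queries-not-using-indexes
```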
Is that query the problem?
Then turn on your slow query log and try optimizing those slow queries.
Post your queries and table description for further help :)
--Praj
On Wed, 29 Mar 2006 12:33:20 -0500
"Jacob, Raymond A Jr" <[EMAIL PROTECTED]> wrote:
>
> After a 23days of running mysql, I
Jacob, Raymond A Jr wrote:
After 23 days of running mysql, I have a 3GB database. When I use an
application
called base(v.1.2.2) a web based intrusion detection analysis console, the
mysqld utilization
shoots up to over 90% and stays there until the application times out or is
terminated.
Q
After 23 days of running mysql, I have a 3GB database. When I use an
application
called base(v.1.2.2) a web based intrusion detection analysis console, the
mysqld utilization
shoots up to over 90% and stays there until the application times out or is
terminated.
Question: Have I made some err
Hi,
When I started out I used to do a single query and store the data in a Perl/PHP
datastructure.
I've noticed with time that I'm treating MySQL as though it were part of
PHP/Perl, i.e. I call a MySQL
primitive every time I need to read a table/lookup table etc. I develop
Shoppingbaskets/CMS sys
If you suddenly are spiking in unauthenticated connections, you may
be the target of a network attack. This could be just a random probe,
you may be a random target or someone may be targeting you. Although
if someone were specifically targeting you, you would probably be down.
I would chec
"my.cnf" add this: "skip-name-resolve" under "[mysqld]"
On 8/29/05, Callum McGillivray <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I'm pretty new to the list, so please be kind :)
>
> I'm having serious problems with our core mysql server.
>
> We are running a Dell Poweredge 2850 with dual Xeon 3
Hi all,
I'm pretty new to the list, so please be kind :)
I'm having serious problems with our core mysql server.
We are running a Dell Poweredge 2850 with dual Xeon 3.0 processors, RAID
5 and 1Gb memory.
There are 3 main databases running on this machine, one is a freeradius
database, one i
Hi,
The performance of the data transfers using the direct socket connection
goes from <15 ms (in the lab) to ~32 ms (in the pseudo-production
env). But the database calls go from <1 sec to several
seconds (have not measured this yet). The database was exactly the same
in both trials.
For further clarification, what we are observing is that pull down lists
(which are already built on the GUI) take a long time to "complete"
processing. The processing we are performing upon user selection is
taking the selected element, updating 1 database column in 1 table with
the value, and th
Celona, Paul - AES wrote:
I am running mysql 4.0.18 on Windows 2003 server which also hosts my
apache tomcat server. My applet makes a connection to the mysql database
on the server as well as a socket connection to a service on the same
server. In the lab with only a hub between the client and
"Celona, Paul - AES" <[EMAIL PROTECTED]> wrote on 06/03/2005 01:03:18
PM:
> I am running mysql 4.0.18 on Windows 2003 server which also hosts my
> apache tomcat server. My applet makes a connection to the mysql database
> on the server as well as a socket connection to a service on the same
> ser
I am running mysql 4.0.18 on Windows 2003 server which also hosts my
apache tomcat server. My applet makes a connection to the mysql database
on the server as well as a socket connection to a service on the same
server. In the lab with only a hub between the client and server, the
application perf
I have built a web site and I am testing it locally on my PC. Testing
through Internet Explorer is awfully slow and most of the time I am
getting error 'ASP 0113' script timed out. The table I am calling
records from is quite text heavy (a few hundred to a 1,000+ words per
field in some places)
On Thu, Dec 18, 2003 at 10:37:46AM -0600, Dan Nelson wrote :
> In the last episode (Dec 18), Markus Fischer said:
> > On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote :
> > > Raising sort_buffer_size and join_buffer_size may also help if your
> > > queries pull a lot of records.
> >
>
In the last episode (Dec 18), Markus Fischer said:
> On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote :
> > Raising sort_buffer_size and join_buffer_size may also help if your
> > queries pull a lot of records.
>
> From what I read from the manual, sort_buffer_size is only used
>
On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote :
> In the last episode (Dec 16), Markus Fischer said:
> > I'm investigating a performance problem with mysql server set up. The
> > server is running linux with 1GB ram. I'd like to tune the
> > configuration of the server to use as much
Hi,
On Tue, Dec 16, 2003 at 10:23:05PM +1100, Chris Nolan wrote :
> How heavy is your usage of TEMPORARY TABLES? I don't use them much
> myself, but I'm sure that the others on the list will have something
> to say in that regard.
Here are the relevant numbers:
Created_tmp_disk_tables
In the last episode (Dec 16), Markus Fischer said:
> I'm investigating a performance problem with mysql server set up. The
> server is running linux with 1GB ram. I'd like to tune the
> configuration of the server to use as much RAM as possible without
> swapping to the disc because of the big slo
Hi!
How heavy is your usage of TEMPORARY TABLES? I don't use them much
myself, but
I'm sure that the others on the list will have something to say in that
regard.
To get a better look at MySQL's usage of memory, you could try looking
at the output of
SHOW STATUS .
Regards,
Chris
Markus Fisc
Hello,
I'm investigating a performance problem with a mysql server set up.
The server is running linux with 1GB ram. I'd like to tune the
configuration of the server to use as much RAM as possible without
swapping to disk, because of the big slow down.
The current config
Hi, I have a performance issue I've tried resolving and I can't get rid
of it. Basically I have a database called lobby where any queries to it,
inserts and selects, must be as fast as possible. It must do about 60
queries a second with no queries taking more than 50ms. I also have
another databa
> The main table is rather huge, it has 90 columns and now after
> three months it has 500,000 records... but in the end it has to store data for
> 36 months.
Hmm, I think you had better look at normalizing your data, and creating
indexes. Start with the indexes since that won't force you to make an
onder, Matthias" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, September 18, 2003 11:24 AM
Subject: Performance Problems
> Hei :)
>
> I have an extreme performance problem with a MySQL-DB.
> The database consists of 21 tables where all except three are stor
ds (though only 3 columns) and a search like the one you've got takes
0.07 seconds on a box similar to your dev box.
Andy
> -Original Message-
> From: Schonder, Matthias [mailto:[EMAIL PROTECTED]
> Sent: 18 September 2003 10:25
> To: '[EMAIL PROTECTED]'
>
has 500,000 records... but in the end it has to store data for
36 months.
But since the table has grown to over 350,000 records I ran into massive
performance problems. Querying for one record (example: SELECT sendnr FROM
pool WHERE sendnr = 111073101180) takes 8 seconds via the command line!
The table
nething. BUT would require
space and some time.
vikash
-Original Message-
From: Gunnar Lunde [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 14, 2003 5:17 PM
To: '[EMAIL PROTECTED]'
Subject: RE: Performance problems after record deletion.
Thank you for your reply, Vikash
We have d
To: 'Gunnar Lunde'; [EMAIL PROTECTED]
> Subject: RE: Performance problems after record deletion.
>
>
> This is what the MySQL 3.23.41 manual says, maybe it helps you
>
> OPTIMIZE TABLE should be used if you have deleted a large part of a
> table or if you have made m
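Following the manual's advice quoted above, the cleanup after the mass delete would be (table name is hypothetical; note OPTIMIZE locks the table while it rebuilds, which matters on an 85-million-row table):

```sql
OPTIMIZE TABLE big_table;

-- Roughly equivalent full rebuild, if OPTIMIZE alone is not enough:
-- ALTER TABLE big_table ENGINE=MyISAM;
```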
: '[EMAIL PROTECTED]'
Subject: Performance problems after record deletion.
> Hi
>
> We got a problem with a slow database after deleting records using the
> MySQL released with RedHat 7.2 (Server version 3.23.41). Here is the
short
> story:
>
> We have a table with a l
> Hi
>
> We got a problem with a slow database after deleting records using the
> MySQL released with RedHat 7.2 (Server version 3.23.41). Here is the short
> story:
>
> We have a table with a lot of data, at the moment there are 85 million
> records in our table. We developed a script that dele
Alex,
- Original Message -
From: "Varshavchick Alexander" <[EMAIL PROTECTED]>
To: "Heikki Tuuri" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, September 06, 2002 11:49 AM
Subject: Re: Performance Problems with InnoDB Row Level Locking..
o: Varshavchick Alexander <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Subject: Re: Performance Problems with InnoDB Row Level Locking...
>
> Alexander,
>
> - Original Message -
> From: "Varshavchick Alexander" <[EMAIL PROTECTED]>
> To: "
Joe,
- Original Message -
From: "Joe Shear" <[EMAIL PROTECTED]>
To: "Heikki Tuuri" <[EMAIL PROTECTED]>
Sent: Friday, September 06, 2002 2:13 AM
Subject: Re: Performance Problems with InnoDB Row Level Locking...
> Hi,
> On a side note, are there any
Alexander,
- Original Message -
From: "Varshavchick Alexander" <[EMAIL PROTECTED]>
To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, September 06, 2002 10:08 AM
Subject: RE: Performance Problems with InnoDB
Hi Heikki,
one more question please about innodb_flush_log_at_trx_commit: if there
was some way of increasing the delay between log flushes beyond 1 sec,
can you estimate whether it would bring any real performance benefit? I know
it'll raise the risk of losing some of the last transactions if something
cra
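For context, the three values of innodb_flush_log_at_trx_commit trade durability for speed; on recent servers it can be changed at runtime as shown (older builds may require a restart):

```sql
-- 1: write and fsync the log at every commit (default, safest)
-- 2: write at commit, fsync roughly once per second
--    (an OS crash can lose about the last second of transactions)
-- 0: write and fsync roughly once per second
--    (even a mysqld crash can lose about the last second)
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```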
Steve,
- Original Message -
From: "Orr, Steve" <[EMAIL PROTECTED]>
To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, September 06, 2002 1:23 AM
Subject: RE: Performance Problems with InnoDB Row Level Locking
teve
-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 05, 2002 2:54 PM
To: Orr, Steve
Cc: [EMAIL PROTECTED]
Subject: Re: Performance Problems with InnoDB Row Level Locking...
Steve,
- Original Message -
From: "Orr, Steve" <[EM
Steve,
- Original Message -
From: "Orr, Steve" <[EMAIL PROTECTED]>
To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>
Sent: Thursday, September 05, 2002 11:04 PM
Subject: RE: Performance Problems with InnoDB Row Level Locking...
> Heikki,
>
>
i
- Original Message -
From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: "Orr, Steve" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, September 05, 2002 10:30 PM
Subject: Re: Performance Problems with InnoDB Row Level Locking...
> Steve,
>
Steve,
- Original Message -
From: "Orr, Steve" <[EMAIL PROTECTED]>
To: "'Heikki Tuuri'" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, September 05, 2002 9:49 PM
Subject: RE: Performance Problems with InnoDB Row Level Locki