Re: very strange performance issue

2009-06-27 Thread Chris Kantarjiev
Andrew Carlson said:

> 
> I know this is basic, but check that you recreated the indexes after you
> reloaded the snapshot.  That has bit me before.

I used myisamchk -r on the large table, and it has made a huge difference.

I had used myisamchk before to check the table and got no complaints. 
Most confusing ... but I'm happy with the result!
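(For the archives: the same check and repair can also be run from inside the server, without taking mysqld down. A sketch, assuming the big table is the Crumb table discussed elsewhere in this thread:)

```sql
-- CHECK ... MEDIUM scans rows and verifies the index against the data,
-- which catches corruption a quick check can miss; REPAIR rebuilds the
-- index, much like myisamchk -r does offline.
CHECK TABLE Crumb MEDIUM;
REPAIR TABLE Crumb;
```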

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: very strange performance issue

2009-06-27 Thread Chris Kantarjiev
Perhaps some clues here: I started taking the problem query apart to see
what slows things down. I found a culprit, but I don't understand:

mysql> select count(*) from Crumb where customer_id=380 and Actual_Time BETWEEN
'2009-06-01 00:00' AND '2009-06-02 00:00' and ErrorCode = 0;
+----------+
| count(*) |
+----------+
|      437 |
+----------+
1 row in set (0.35 sec)

mysql> select count(*) from Crumb where customer_id=380 and Actual_Time BETWEEN
'2009-06-01 00:00' AND '2009-06-02 00:00' and ErrorCode = 0 and RuleLimit >=0;
+----------+
| count(*) |
+----------+
|      437 |
+----------+
1 row in set (6 min 15.93 sec)


Explain says that very few rows are being examined, but it takes a very
long time.


mysql> explain select count(*) from Crumb where customer_id=380 and Actual_Time
BETWEEN '2009-06-01 00:00' AND '2009-06-02 00:00' and ErrorCode = 0\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: Crumb
         type: range
possible_keys: ix_crumb_custid_actualtime,ix_crumb_errorcode,ix_Crumb_on_Actual_Time,ix_Crumb_on_ErrorCode_RuleLimit
          key: ix_crumb_custid_actualtime
      key_len: 12
          ref: NULL
         rows: 290
        Extra: Using where
1 row in set (0.00 sec)

mysql> explain select count(*) from Crumb where customer_id=380 and Actual_Time
BETWEEN '2009-06-01 00:00' AND '2009-06-02 00:00' and ErrorCode = 0 and
RuleLimit >=0\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: Crumb
         type: ref
possible_keys: ix_crumb_custid_actualtime,ix_crumb_errorcode,ix_Crumb_on_Actual_Time,ix_Crumb_on_ErrorCode_RuleLimit
          key: ix_Crumb_on_ErrorCode_RuleLimit
      key_len: 5
          ref: const
         rows: 38
        Extra: Using where
1 row in set (0.00 sec)
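One way to confirm that the optimizer's index choice is the culprit is to pin the slow query to the index the fast variant used. A sketch against the schema above (FORCE INDEX is available in 4.1):

```sql
SELECT COUNT(*)
  FROM Crumb FORCE INDEX (ix_crumb_custid_actualtime)
 WHERE customer_id = 380
   AND Actual_Time BETWEEN '2009-06-01 00:00' AND '2009-06-02 00:00'
   AND ErrorCode = 0
   AND RuleLimit >= 0;  -- same predicate, but the optimizer is steered
                        -- away from ix_Crumb_on_ErrorCode_RuleLimit
```

If that returns in well under a second, the problem is index selection (likely stale index statistics), not the query itself.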




very strange performance issue

2009-06-27 Thread Chris Kantarjiev
I'm working in an environment where I have two similar servers, one
running "production" the other for "development". They're not very
dissimilar - both run 4.1.20-log, both run CentOS 4.

The development server has a cut-down snapshot of the production
database, and it's where we ... well, develop and test.

Neither is terrifically big - 32-bit machines with two cores and 4GB.

Recently, the development machine has gotten really slow. Really, REALLY
slow. Queries that take 10s of seconds on the production machine take
10s of minutes on the development machine; the dev machine is completely
CPU bound while running them, about 50% user 50% system. vmstat tells me
that lots of blocks are being read in, so my guess is that the system
is *really* thrashing the disk while pulling in pages through the
VM system. (One table is MyISAM, the other two innodb, the MyISAM
table has about 35M rows, the others considerably smaller.)

It's possible that this started when I imported the last snapshot,
but I'm not certain of that.

I have checked the queries, and they're using indices well. I have
done OPTIMIZE TABLE on the tables that are in use. I've made
sure that lots of memory is allocated (though it seems that mysqld
isn't actually using as much as it could). Top tells me that it's
only using 19.3% of physical memory, but I expect it to use a lot more.
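One concrete thing to check is the key-cache hit rate across one of the slow queries; a large jump in Key_reads relative to Key_read_requests would support the disk-thrashing theory. A sketch:

```sql
SHOW STATUS LIKE 'Key_read%';
-- Key_read_requests: index block lookups satisfied from the key buffer
-- Key_reads:         lookups that had to go to disk
-- A miss rate (Key_reads / Key_read_requests) of more than a percent or
-- two on a warmed-up server suggests the hot index blocks don't fit, or
-- that the index itself is bloated and needs a rebuild.
```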

I'm sort of stuck as to where to look next.

Here's my.cnf:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
tmpdir=/tmp
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
max_allowed_packet=32M
net_buffer_length=1M

key_buffer_size = 1536M
query_cache_limit = 128M
query_cache_size = 128M

max_heap_table_size = 32M
tmp_table_size = 32M

log-slow-queries = slow-queries

innodb_buffer_pool_size = 1536M
innodb_additional_mem_pool_size = 32M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50 
innodb_flush_method=O_DIRECT

[mysql.server]
user=mysql
basedir=/var/lib

[mysqld_safe]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid






Re: MBRWithin bug?

2008-11-20 Thread Chris Kantarjiev

Aha! I get it! I *was* being an idiot. The longitude of @g1 is 12*2*, not 121...

Thanks.




RE: MBRWithin bug?

2008-11-19 Thread Chris Kantarjiev
Tom,

Thanks for the response - as I said, I might be being an idiot.

I really don't see where the problem is!

(37.4324379 -122.152) 

(37.428 -121.1575
 37.428 -121.1485
 37.437 -121.1485
 37.437 -121.1575
 37.428 -121.1575)

 -122.1529 is between -121.148 and -121.1575
   37.4324 is between 37.428 and 37.437

The points on the polygon are arranged LL, LR, UR, UL, LL of 
a rectangle:



37.437 -121.1575        37.437 -121.1485

37.428 -121.1575        37.428 -121.1485

What am I missing?

Best,
chris




MBRWithin bug?

2008-11-19 Thread Chris Kantarjiev



I'm trying to use the geospatial extensions for the first time. The basic idea 
is to track points and see if they fall within rectangles, so the core work that 
the database is doing is a query that involves MBRWithin.


It mostly works, but sometimes it doesn't when I think it should. Here's an 
example:


mysql> Set @g1 = GeomFromText('Point(37.4324379 -122.152)');
Query OK, 0 rows affected (0.00 sec)

mysql> Set @g2 = GeomFromText('Polygon((37.428 -121.1575,37.428 -121.1485,37.437
 -121.1485,37.437 -121.1575,37.428 -121.1575))');
Query OK, 0 rows affected (0.00 sec)

OK, but I know that the text doesn't get interpreted for legal syntax at this 
point, so I check:


mysql> select astext(@g1);
+----------------------------+
| astext(@g1)                |
+----------------------------+
| POINT(37.4324379 -122.152) |
+----------------------------+
1 row in set (0.00 sec)

mysql> select astext(@g2);
+-------------------------------------------------------------------------------------------------+
| astext(@g2)                                                                                     |
+-------------------------------------------------------------------------------------------------+
| POLYGON((37.428 -121.1575,37.428 -121.1485,37.437 -121.1485,37.437 -121.1575,37.428 -121.1575)) |
+-------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Looks the same, hooray.

mysql> select mbrwithin(@g1, @g2);
+---------------------+
| mbrwithin(@g1, @g2) |
+---------------------+
|                   0 |
+---------------------+
1 row in set (0.00 sec)

mysql> select mbrwithin(@g2, @g1);
+---------------------+
| mbrwithin(@g2, @g1) |
+---------------------+
|                   0 |
+---------------------+
1 row in set (0.00 sec)

I may be being an idiot, but ... it sure looks to me as if @g1 lies smack in the 
center of @g2 (since that's how @g2 was computed!). I know that the syntax of 
the geospatial stuff is very picky, so maybe I've got something wrong there.


I'm running 5.0.67-community-nt ... are there any bugs in this stuff? I didn't 
find anything in the bug database.


Oddly enough, some other similar queries work as expected, so I'm at a loss.
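For comparison, here is a minimal case that does behave (coordinates invented so the point genuinely falls inside the rectangle's ranges; note the longitude is -121.x, matching the polygon):

```sql
SET @p = GeomFromText('Point(37.4324 -121.152)');
SET @r = GeomFromText('Polygon((37.428 -121.1575,37.428 -121.1485,
  37.437 -121.1485,37.437 -121.1575,37.428 -121.1575))');
SELECT MBRWithin(@p, @r);
-- 1: 37.4324 lies in [37.428, 37.437] and -121.152 in [-121.1575, -121.1485]
```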

Help?

Thanks,
chris




Re: how to use all system RAM for myisam db?

2007-08-01 Thread Chris Kantarjiev
The real problem that you're running into (been there, done that) is
that the MyISAM index code is only 32-bit clean. You can try to use
more than 4GB for the key_buffer, but it won't work; I don't recall
if the code implicitly limits the value, or just falls over.

One possible workaround is to use multiple (alternate) key_buffers
if your schema is such that that makes sense. You will want to
spend some time looking at the statistics to understand just
where you're running out of memory - if the "pressure" is
on the index or the tables themselves (or both).

As Dan said, MyISAM tries to get the system to do caching of
the table data. That works reasonably well on Linux and NetBSD
(at least), but joins and sorts will be slow above a certain
size, and there seems to be nothing that can be done about it.

Good luck.

Best,
chris




Re: what journal options should I use on linux?

2007-03-09 Thread Chris Kantarjiev
Merge tables aren't an easy option here. There's a higher level
data partitioning that we're putting into place, though, and it's
been shown to help a lot. But I also want to get as much out of the
file system as I can.

> The "solution" I use for duplicate filtering is to create a
> "load" table which loads a set of records, then compares those
> records against the merge table for duplicates, deleting any found.
> Then the load table is added to the merge table and the process is
> repeated for the next batch of data.

I don't think this will help us, but it's an interesting technique.
We use staging tables to cut the load in a bunch of places. 

I think the true answer to this particular problem lies outside SQL
and instead with a private index structure that is tuned for dealing
with duplicates...it would help if the MyISAM engine was a little
more clever about really large indexes.

Best,
chris




Re: what journal options should I use on linux?

2007-03-09 Thread Chris Kantarjiev
> Yes, the article did lack a lot of methodology information.

This one is *lots* better.

http://www.bullopensource.org/ext4/sqlbench/index.html

Losing data is always bad - that's why it's in a database, not a filesystem.
But these systems have been pretty reliable and are on UPS, etc. This
is a created table, so it's not life critical ... but it is expensive.

> This might be a silly question, but did you max out the 
> myisam_sort_buffer_size 
> and key_buffer_size settings? Both can go up to 4GB.

Yup. Not nearly big enough - index is 15GB!
And the code seems to misbehave, leading to crashes, at the limit.

> You can also turn off indexing when loading the data, then turn it back on 
> when the data is loaded (if you haven't already).

We "need" INSERT IGNORE, so this isn't really an option for us,
unfortunately.

I'm going to mount them as ext2fs for the time being.

Best,
chris




Re: what journal options should I use on linux?

2007-03-09 Thread Chris Kantarjiev
Brent,

Thanks for your response.

> Enabling journaling isn't going to halve your performance.

I was careful to say "write speed", not "performance". 

I already have my data and index files on separate drives (raid
volumes, actually, each made up of complete drives). What I see
is that the index drive is being clobbered during table creation,
because mysql can't keep it all in memory. This is a long standing
problem with MyISAM files, where the index code isn't 64-bit safe.

Yes, 64-bit. This is a quad-processor opteron with 16GB of ram.
The index file is 15GB these days, so even if MyISAM *could*
hold more than about 3GB of index in its data structures, it
probably wouldn't all fit in memory.

Did I mention that this is a "big data" problem?

Please don't tell me to use InnoDB. It's much too slow for this purpose.

> Here is an interesting article to read on ext3 journaling overhead.
> http://www-128.ibm.com/developerworks/library/l-fs8.html

Interesting, if only to show how dangerous it is to publish
results that aren't understood. The author doesn't say anything
about testing methodology, so I have no idea whether or not to
trust the results. 16MB files are toys; they easily fit completely
in memory and Linux makes it difficult to clear the buffer cache
between runs. Was the machine rebooted between every test? When 
he runs these tests again with files that are bigger than available
RAM, I'll be a lot more interested.




use multiple CPUs?

2006-10-31 Thread Chris Kantarjiev
Apologies if this is covered elsewhere, but I can't seem to track down all the 
pieces...


I just installed mysql (4.1.21) on a multi-cpu opteron system running fedora 
core. I used the binary distribution,


mysql-standard-4.1.21-unknown-linux-gnu-x86_64-glibc23

instead of compiling it myself as I've done on other platforms. This is my first 
time running mysql on linux, so I'm still figuring things out.


Anyway, I can't tell what thread library, etc, is in use in this compiled 
binary. I want to make sure that mysqld will use both CPUs when appropriate - in 
previous installations using pthreads (on netbsd) I needed to tweak an 
environment variable in rc.conf


PTHREAD_CONCURRENCY=2   # use both CPUs in mysqld!
export PTHREAD_CONCURRENCY

in order to make this happen - and I only found that by accident, more or less.

Do I have to do something similar with this installation? I haven't really 
pushed it hard enough to tell if it will try to use both CPUs or not (always a 
tricky thing).


Thanks. I really wish this (libraries in use, how to run on multi-cpu machines) 
was better documented!


Best,
chris





UPDATE/lock problems?

2006-04-03 Thread Chris Kantarjiev
I'm seeing some very odd locking behaviour on 4.1.13:

mysql> show full processlist;
+----+------+----------------------------+------+---------+------+--------+------------------------------+
| Id | User | Host                       | db   | Command | Time | State  | Info                         |
+----+------+----------------------------+------+---------+------+--------+------------------------------+
| 30 | len  | dick.landsonar.com:36746   | NULL | Sleep   |    1 |        | NULL                         |
| 33 | len  | dick.landsonar.com:3233    | us   | Sleep   | 7423 |        | NULL                         |
| 35 | len  | yertle.landsonar.com:64667 | us   | Sleep   |   38 |        | NULL                         |
| 36 | len  | yertle.landsonar.com:64666 | us   | Query   |   30 | init   | UPDATE crumb
    SET link_ID = 127624294, dir_Travel = 'T', last_Modified = SYSDATE()
    WHERE link_ID IS NULL AND customer_ID = 2 AND source_ID = 2 AND
          vehicle_ID = 43920 AND actual_Time = '2005-03-11 01:19:40'           |
| 37 | len  | yertle.landsonar.com:64665 | us   | Query   |   30 | Locked | UPDATE crumb
    SET link_ID = 125170474, dir_Travel = 'F', last_Modified = SYSDATE()
    WHERE link_ID IS NULL AND customer_ID = 2 AND source_ID = 2 AND
          vehicle_ID = 161878 AND actual_Time = '2005-08-14 15:26:17'          |
| 38 | len  | yertle.landsonar.com:64664 | us   | Query   |   30 | Locked | UPDATE crumb
    SET link_ID = 125154498, dir_Travel = 'T', last_Modified = SYSDATE()
    WHERE link_ID IS NULL AND customer_ID = 2 AND source_ID = 2 AND
          vehicle_ID = 208475 AND actual_Time = '2006-02-28 20:46:15'          |
+----+------+----------------------------+------+---------+------+--------+------------------------------+

There is no other activity in the system. The db server is dead idle - no
CPU activity, no disk activity. I'm suspicious that there may be some
sort of bug in the lock manager that we are tickling; the deadlock breaks
eventually, but sometimes it takes 10 minutes or more.
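Before suspecting the lock manager, it's worth ruling out ordinary MyISAM table-lock contention: MyISAM takes a single whole-table write lock, so concurrent UPDATEs on crumb serialize behind whichever one is running. A sketch:

```sql
SHOW STATUS LIKE 'Table_locks%';
-- Table_locks_immediate: lock requests granted without waiting
-- Table_locks_waited:    requests that blocked; if this climbs while
-- threads sit in state "Locked", the stall is table-lock contention on
-- crumb rather than a lock-manager bug
```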

Thoughts?

Thanks,
chris




Re: stunningly slow query

2006-04-02 Thread Chris Kantarjiev
> The problem with Load Data is the larger the table, the slower it 
> gets because it has to keep updating the index during the loading process. 

Um, thanks. I'm not sure how Load Data got involved here, because
that's not what's going on.

> 
> > It's a MyISAM table. Are there separate logs files? If so, where?
> > I thought it was just .frm, .MYI and .MYD. We have MYI and MYD on
> > separate drives.
> 
> Log files usually default to the mysql data directory, eg. 
> /var/lib/mysql/

As I said, I don't think there are any log files for a MyISAM table.
InnoDB has separate logs.

> 
> Putting the database files on seperate drives may slow 
> things down alot too - unless others know better.
> 
> .frm is the database definition file. .MYI is the index 
> file, and .MYD is the data file. There is one each of these 
> files for each myisam table in the database.
> 
> I may be wrong, but I would have thought it better if these 
> are all together on the same disk and partition for each 
> table in the database?

This is counter-intuitive. Separating .MYI and .MYD means that
I can overlap the i/o. This is a standard strategy for other
databases (Oracle, in particular). I would be really surprised
if this was causing my problem.

> This feature can be activated explicitly. ALTER TABLE ... 
> DISABLE KEYS tells MySQL to stop updating non-unique indexes 
> for a MyISAM table. ALTER TABLE ... ENABLE KEYS then should 
> be used to re-create missing indexes.

> Can you post your show create table tbl_name statement for 
> these tables that involve slow queries?

| old_crumb |CREATE TABLE `old_crumb` (
  `link_ID` bigint(20) default NULL,
  `dir_Travel` char(1) default NULL,
  `customer_ID` int(11) NOT NULL default '0',
  `source_ID` int(11) NOT NULL default '0',
  `vehicle_ID` int(11) NOT NULL default '0',
  `actual_Time` datetime NOT NULL default '0000-00-00 00:00:00',
  `actual_TZ` varchar(30) default NULL,
  `reported_Time` datetime default NULL,
  `reported_TZ` varchar(30) default NULL,
  `speed_Format` int(11) default NULL,
  `speed` float default NULL,
  `direction` char(2) default NULL,
  `compass` int(11) default NULL,
  `speed_NS` float default NULL,
  `speed_EW` float default NULL,
  `distance` decimal(10,0) default NULL,
  `duration` decimal(10,0) default NULL,
  `latitude` decimal(10,5) default NULL,
  `longitude` decimal(10,5) default NULL,
  `report_Landmark` varchar(255) default NULL,
  `report_Address` varchar(255) default NULL,
  `report_Cross` varchar(255) default NULL,
  `report_City` varchar(255) default NULL,
  `report_State` char(2) default NULL,
  `report_Zip` varchar(10) default NULL,
  `report_County` varchar(255) default NULL,
  `category` int(11) default NULL,
  `speed_Limit` int(11) default NULL,
  `street` varchar(255) default NULL,
  `city` varchar(255) default NULL,
  `state` char(2) default NULL,
  `zip` varchar(10) default NULL,
  `county` varchar(255) default NULL,
  `match_Name` tinyint(1) default NULL,
  `name_Matched` tinyint(1) default NULL,
  `last_Modified` datetime default NULL,
  PRIMARY KEY  (`customer_ID`,`source_ID`,`vehicle_ID`,`actual_Time`),
  KEY `old_crumb_ix_reported_Time` (`reported_Time`),
  KEY `old_crumb_ix_link_ID` (`link_ID`,`dir_Travel`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=10 COMMENT='List of 
breadcrumbs already rolled up.' INDEX DIRECTORY='/var/mysql_idx/trimble/' |

This is the other

 link_area | CREATE TABLE `link_area` (
  `link_ID` bigint(20) NOT NULL default '0',
  `dir_Travel` char(1) NOT NULL default '',
  `area_ID` int(11) NOT NULL default '0',
  PRIMARY KEY  (`link_ID`,`dir_Travel`),
  KEY `link_area_ix_area_ID` (`area_ID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='Link area table' INDEX 
DIRECTORY='/var/mysql_idx/landsonar/' |

Inserts into the link_area were going very very slowly while data
was being moved into old_crumb. old_crumb is large - my suspicion
at this point is that the process of looking for key conflicts was
slowing things down and starving other query traffic.
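If index maintenance on the big table is indeed the bottleneck, the archive append can be restructured so the non-unique indexes are rebuilt once at the end instead of row by row. A sketch (note DISABLE KEYS leaves the PRIMARY KEY active, so INSERT IGNORE's duplicate check still works):

```sql
ALTER TABLE old_crumb DISABLE KEYS;   -- stop maintaining non-unique indexes
INSERT IGNORE INTO old_crumb
  SELECT * FROM crumba_rolled;        -- PK still enforced; dups dropped
ALTER TABLE old_crumb ENABLE KEYS;    -- bulk-rebuild the disabled indexes
```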




Re: stunningly slow query

2006-03-31 Thread Chris Kantarjiev
> Are your logs and data on the same partition?  That's a bad idea for
> recovering from a blown part of the disk, but we also saw that one of
> our databases would crash when there were lots of
> inserts/updates/replaces -- other databases, which had the same
> version of MySQL and operating system, had the logs and data on a
> separate partition, and they did not crash.

It's a MyISAM table. Are there separate logs files? If so, where?
I thought it was just .frm, .MYI and .MYD. We have MYI and MYD on
separate drives.

We're investigating a possible MERGE organization. I'll report
back if we learn anything new.

Thanks,
chris




Re: stunningly slow query

2006-03-30 Thread Chris Kantarjiev
> 
> That throws out my first theory about table locks.

That's what I thought, too.

> What do vmstat and top say? Is it CPU bound? I/O bound?

Certainly not CPU bound. Maybe I/O bound, not conclusive. My current
theory is that there is some thrashing on key buffer blocks.

> 
> Also you might want to do a "show status" before and after.

Before and after the short query?

| Handler_commit         | 0          |
| Handler_delete         | 20075144   |
| Handler_read_first     | 18         |
| Handler_read_key       | 432092430  |
| Handler_read_next      | 510230999  |
| Handler_read_prev      | 0          |
| Handler_read_rnd       | 148798998  |
| Handler_read_rnd_next  | 1676270985 |
| Handler_rollback       | 0          |
| Handler_update         | 122968312  |
| Handler_write          | 286299142  |
...
| Key_blocks_used        | 2887383    |
| Key_read_requests      | 2929797981 |
| Key_reads              | 8856159    |
| Key_write_requests     | 305442125  |
| Key_writes             | 94187694   |
| Uptime                 | 268069     |

| Handler_commit         | 0          |
| Handler_delete         | 20075144   |
| Handler_read_first     | 18         |
| Handler_read_key       | 432092430  |
| Handler_read_next      | 510230999  |
| Handler_read_prev      | 0          |
| Handler_read_rnd       | 148798998  |
| Handler_read_rnd_next  | 1676280460 |
| Handler_rollback       | 0          |
| Handler_update         | 122968312  |
| Handler_write          | 286308623  |
...
| Key_blocks_used        | 2887383    |
| Key_read_requests      | 2929979657 |
| Key_reads              | 8859748    |
| Key_write_requests     | 305476828  |
| Key_writes             | 94187704   |
| Uptime                 | 268102     |

OK, so in 33 seconds, it did 181676 read requests and 3589 reads, and 34703
write requests and 10 actual writes. 108 reads/sec, less than 1 write/sec.

systat vmstat tells me that the overall data rate off the index disk 
is relatively low, but the disk is pretty busy, so there's a lot of seeking.
The data disk is barely being touched.

> You might want to try putting the result of both "show status" calls 
> side by side to see if some unexpected resource is being used.

Nothing jumps out at me. Handler_read_rnd_next is increasing at 287/sec,
which isn't small...

The key_buffer is 3GB or 4GB on this system. I'm contemplating an upgrade
to 4.1 so I can have multiples.

Thanks for looking at this...





Re: stunningly slow query

2006-03-30 Thread Chris Kantarjiev
> 
> Can you post the output of SHOW FULL PROCESSLIST during the time when 
> both sets of queries are running?


mysql> show full processlist;
+-----+------+--------------------------+-----------+---------+------+--------------+---------------------------------------------------------------------------+
| Id  | User | Host                     | db        | Command | Time | State        | Info                                                                      |
+-----+------+--------------------------+-----------+---------+------+--------------+---------------------------------------------------------------------------+
| 201 | len  | dick.landsonar.com:15405 | landsonar | Query   | 4033 | Sending data | insert ignore into trimble.old_crumb select * from trimble.crumba_rolled  |
| 209 | len  | dick.landsonar.com:34684 | landsonar | Query   |    2 | update       | INSERT IGNORE INTO link_area (link_ID, dir_Travel, area_ID)
                                                                                       VALUES (20202282, 'T', 21014142)                                           |
| 216 | len  | localhost                | NULL      | Query   |    0 | NULL         | show full processlist                                                     |
+-----+------+--------------------------+-----------+---------+------+--------------+---------------------------------------------------------------------------+

> Also what storage engine are you using for your tables?

MyISAM. We found InnoDb to be considerably slower for selects for our data,
though it's been a year since I did that experiment.




stunningly slow query

2006-03-30 Thread Chris Kantarjiev
We're having some serious problems with concurrent queries.

This is a dual-processor amd64 machine with 16GB RAM, running NetBSD
and MySQL 4.0.25. key_buffer_size is 3GB.

When I have a long running query going, otherwise short queries take
a very very long time to execute. For example, I have

insert ignore into trimble.old_crumb select * from trimble.crumba_rolled

which is appending a lot of 'live' data to an 'archive' table.

Meanwhile, I'm trying to execute

INSERT IGNORE INTO link_area
(link_ID, dir_Travel, area_ID)
VALUES
(20202178, 'F', 21014195);

This latter query is taking between 6 and 45 *seconds* to run. Yow!

mysql> explain link_area;
+------------+------------+------+-----+---------+-------+
| Field      | Type       | Null | Key | Default | Extra |
+------------+------------+------+-----+---------+-------+
| link_ID    | bigint(20) |      | PRI | 0       |       |
| dir_Travel | char(1)    |      | PRI |         |       |
| area_ID    | int(11)    |      | MUL | 0       |       |
+------------+------------+------+-----+---------+-------+

help?






Re: concurrency in mysql 4.0.25

2005-11-12 Thread Chris Kantarjiev



I see in the pthread(3) man page that there's an environment variable

  PTHREAD_CONCURRENCY  The number of concurrent threads to be run.
  This value should at least be 1, and smaller
  than or equal to the number of CPUs.

Do I need to set this to be more than 1 in the mysqld environment?


Apparently the answer to this is 'yes', based on a brief experiment I just did. 
Very nice.


I'd still like some tool that lets me see the threads inside a process...

chris





concurrency in mysql 4.0.25

2005-11-12 Thread Chris Kantarjiev
I'm running mysql 4.0.25 on netbsd 3, on a dual-processor opteron machine with 
16GB. I'm trying to make things faster, of course. All the data operations are 
on one very large table (about 20GB, index is 17GB), which I will eventually 
figure out how to split.


I have four processes working on this table at once; they select 20,000 or more 
rows, chew for a while, and then update those rows. I have a lot of trouble with 
the index blocks getting flushed when updates are done in parallel, so there is 
some judicious locking to help that. (The key_buffer_size is set to 4GB, which 
is the maximum supported - I'd use more if I could.)


What I notice is that even though I have two client threads doing SELECT at the 
same time, I never manage to use more than one CPU, maybe dipping into the 
second one a small bit. The mysql config log indicates that it found pthreads 
and compiled with it. systat vmstat tells me that the disks are idle, memory is 
full, and top indicates that the mysqld process is CPU bound. There are many 
many syscalls being completed; I'm guessing that the blocks are moving from file 
buffer pages to the process.


Is there a tool (or option to ps or top) that lets me view the thread activity 
inside a process?


I see in the pthread(3) man page that there's an environment variable

  PTHREAD_CONCURRENCY  The number of concurrent threads to be run.
  This value should at least be 1, and smaller
  than or equal to the number of CPUs.

Do I need to set this to be more than 1 in the mysqld environment?

Thanks,
chris





Re: really big key_buffer_size (> 4GB)?

2005-11-10 Thread Chris Kantarjiev

Chris Wells wrote:

> Chris Kantarjiev wrote:
>
>> I'm running 4.0.25 on NetBSD 3 on an amd64. It appears that the
>> key_buffer_size is being truncated to 4GB, even though I've set
>> it larger.
>>
>> Could someone give me a hint about where to look? I'm a little
>> suspicious of ha_resize_key_cache() which is using longs internally
>> to hold the keybuff_size, but I suspect this is not quite the
>> source of the problem (there's no logic here to truncate the
>> input value).
>>
>> Thanks,
>> chris
>
> First thought - could 4GB be the file size limit for the filesystem you
> are using?
>
> -Chris


The index file is 17GB. File size limit isn't 4GB.

And I don't immediately see how the file size limit would affect the setting of 
an internal variable.


The assorted key_buffer_size code (what a maze of twisty little passages!) has 
been partially arranged to expect ulonglong, but there are some lower layer 
handler routines that cast to long (I'm using MyISAM). That's pretty obviously 
wrong for me - but I haven't even found the place where the input value is being 
truncated so the cast to long does something other than return 0...







really big key_buffer_size (> 4GB)?

2005-11-10 Thread Chris Kantarjiev
I'm running 4.0.25 on NetBSD 3 on an amd64. It appears that the 
key_buffer_size is being truncated to 4GB, even though I've set
it larger.

Could someone give me a hint about where to look? I'm a little
suspicious of ha_resize_key_cache() which is using longs internally
to hold the keybuff_size, but I suspect this is not quite the
source of the problem (there's no logic here to truncate the
input value).

Thanks,
chris




Re: very large key_buffer on amd64?

2005-09-19 Thread Chris Kantarjiev
> 
> Maybe you've also hit the quirks of memory management and malloc, just as
> we've posted a while ago in http://lists.mysql.com/mysql/186930 ?

An interesting thread, but I'm on NetBSD, not Linux. But it's some place
to start looking around, I guess.




very large key_buffer on amd64?

2005-09-19 Thread Chris Kantarjiev
We're starting to use mysql (4.0.25) on an amd64 machine (running
NetBSD-3). One of the reasons for doing this is to use much more
RAM - we tend to thrash the key_buffer on i386 because one of our
indexes is 10GB in size (the table is 15GB).

It appears that mysqld won't start if the setting for key_buffer
is more than 2GB. 

053419 11:34:15  Starting mysqld daemon with databases from /var/mysql
053419 11:34:15  mysqld started
mysqld: Couldn't allocate stack for idle thread!: Cannot allocate memory
053419 11:34:15  STOPPING server from pid file /var/mysql/vern.landsonar.com.pid
053419 11:34:15  mysqld ended

I'd like it to be 10GB (we have 16GB of RAM).  I'm guessing that
the variable(s) that deal with the key_buffer are 32-bit ints ...
is there a straightforward way to find all the relevant places and
make them 64-bit ints?

Or is the problem that every thread is ending up with its
own key_buffer, so there isn't enough memory to do this?

Or something else?

Thanks.




spreading the disk load

2005-09-14 Thread Chris Kantarjiev
I'd like to spread the disk arm load across multiple drives. At
the moment, we mostly use MyISAM tables, but we are also
experimenting with InnoDB.

What's the 'best practice' for doing this? There's no obvious
configuration that lets me designate one directory for index
and another for data - am I meant to do this with symlinks?
How can I do anything like that with InnoDB, which appears
to put everything in one massive file?
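For MyISAM, the placement can be declared per table at creation time, which avoids hand-maintained symlinks. A sketch with made-up paths:

```sql
CREATE TABLE crumb_example (
  id INT NOT NULL PRIMARY KEY
) ENGINE=MyISAM
  DATA DIRECTORY='/disk1/mysql_data/'    -- the .MYD file lands here
  INDEX DIRECTORY='/disk2/mysql_idx/';   -- the .MYI file lands here
```

InnoDB has no per-table split in this era, but the shared tablespace can at least be spread over files on different drives via innodb_data_file_path in my.cnf.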

Thanks.




why so many table scans?

2005-07-25 Thread Chris Kantarjiev
I'm looking at the stats on one of our servers and trying to understand
why Handler_read_rnd_next is so high. It's  256.5M right now, which is
about 10x the total number of reported queries.

The machine is being used, almost entirely, for queries of the form:

select * from crumb 
 where link_id is null 
   and latitude > 39 
   and longitude > -98 
 limit 1;

link_id is indexed. There are about 8 million rows in the table,
and most of them have link_id = null right now. latitude and longitude
are not indexed - but my understanding is that mySQL will only
use one index, and link_id is the interesting one for us.

Are the latitude and longitude qualifiers the cause of the table scans?
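Indirectly, probably yes: link_id is indexed, but with most rows NULL that index barely narrows anything, so the server ends up reading rows and testing latitude/longitude one by one (each such row read bumps Handler_read_rnd_next). A composite index would let the latitude range be resolved inside the single index MySQL picks. A sketch (index name invented):

```sql
ALTER TABLE crumb ADD INDEX ix_crumb_link_lat (link_id, latitude);
-- link_id IS NULL is an equality match on the first key part, and
-- latitude > 39 a range scan on the second; longitude > -98 must still
-- be checked per row, but far fewer rows have to be fetched.
```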
