Andrew Carlson said:
>
> I know this is basic, but check that you recreated the indexes after you
> reloaded the snapshot. That has bit me before.
I used myisamchk -r on the large table, and it has made a huge difference.
I had used myisamchk before to check the table and got no complaints.
Perhaps some clues here: I started taking the problem query apart to see
what slows things down. I found a culprit, but I don't understand:
mysql> select count(*) from Crumb where customer_id=380 and Actual_Time BETWEEN
'2009-06-01 00:00' AND '2009-06-02 00:00' and ErrorCode = 0;
I'm working in an environment where I have two similar servers, one
running "production", the other "development". They're very much
alike - both run 4.1.20-log, both run CentOS 4.
The development server has a cut-down snapshot of the production
database, and it's where we ... well, develop.
Aha! I get it! I *was* being an idiot. The longitude of @g1 is 12*2*, not 121...
Thanks.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Tom,
Thanks for the response - as I said, I might be being an idiot.
I really don't see where the problem is!
(37.4324379 -122.152)
(37.428 -121.1575
37.428 -121.1485
37.437 -121.1485
37.437 -121.1575
37.428 -121.1575)
-122.1529 is between -121.148 and -121.1575
37.4324 is between
I'm trying to use the geospatial extensions for the first time. The basic idea
is to track points and see if they fall within rectangles, so the core work that
the database is doing is a query that involves MBRWithin.
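When an MBRWithin result looks wrong, it can help to redo the bounding-box test by hand outside MySQL. A minimal sketch in plain Python, using the point and the five-point ring quoted earlier in the thread (I'm assuming the pairs are latitude then longitude):

```python
# Sanity check: does the point fall inside the rectangle's
# minimum bounding rectangle (MBR)? Pure Python, no MySQL needed.

def mbr_contains(ring, point):
    """True if `point` (lat, lon) lies inside the MBR of `ring`."""
    lats = [lat for lat, lon in ring]
    lons = [lon for lat, lon in ring]
    lat, lon = point
    return min(lats) <= lat <= max(lats) and min(lons) <= lon <= max(lons)

# The rectangle from the thread (closed ring of lat/lon pairs):
ring = [(37.428, -121.1575), (37.428, -121.1485),
        (37.437, -121.1485), (37.437, -121.1575),
        (37.428, -121.1575)]

# The point in question: note the longitude is -122, not -121.
print(mbr_contains(ring, (37.4324379, -122.152)))   # False: lon out of range
print(mbr_contains(ring, (37.4324379, -121.152)))   # True with a -121 longitude
```

The latitude is in range either way; it's the longitude that decides the outcome, which is exactly the 121-vs-122 mix-up resolved later in the thread.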
It mostly works, but sometimes it doesn't when I think it should. Here's a
The real problem that you're running into (been there, done that) is
that the MyISAM index code is only 32-bit clean. You can try to use
more than 4GB for the key_buffer, but it won't work; I don't recall
if the code implicitly limits the value, or just falls over.
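The wrap-around that 32-bit-only code produces is easy to illustrate with modular arithmetic; a sketch with a hypothetical 6GB setting (Python used only for the demonstration - the server's actual failure mode may be a clamp or a crash, as noted above):

```python
# A key_buffer_size request passing through 32-bit unsigned arithmetic:
requested = 6 * 2**30               # a hypothetical 6GB setting
truncated = requested & 0xFFFFFFFF  # keep only the low 32 bits

print(truncated == 2 * 2**30)       # True: 6GB silently wraps around to 2GB
```

This is why values over 4GB can't take effect no matter what the config file says, until the index code is made 64-bit clean.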
One possible workaround is to us
Merge tables aren't an easy option here. There's a higher level
data partitioning that we're putting into place, though, and it's
been shown to help a lot. But I also want to get as much out of the
file system as I can.
> The "solution" I use for duplicate filtering is to create a
> "load" tabl
> Yes, the article did lack a lot of methodology information.
This one is *lots* better.
http://www.bullopensource.org/ext4/sqlbench/index.html
Losing data is always bad - that's why it's in a database, not a filesystem.
But these systems have been pretty reliable and are on UPS, etc. This
is a
Brent,
Thanks for your response.
> Enabling journaling isn't going to halve your performance.
I was careful to say "write speed", not "performance".
I already have my data and index files on separate drives (raid
volumes, actually, each made up of complete drives). What I see
is that the index
Apologies if this is covered elsewhere, but I can't seem to track down all the
pieces...
I just installed mysql (4.1.21) on a multi-cpu opteron system running fedora
core. I used the binary distribution,
mysql-standard-4.1.21-unknown-linux-gnu-x86_64-glibc23
instead of compiling it myself as
I'm seeing some very odd locking behaviour on 4.1.13:
mysql> show full processlist;
> The problem with Load Data is the larger the table, the slower it
> gets because it has to keep updating the index during the loading process.
Um, thanks. I'm not sure how Load Data got involved here, because
that's not what's going on.
>
> > It's a MyISAM table. Are there separate logs file
> Are your logs and data on the same partition? That's a bad idea for
> recovering from a blown part of the disk, but we also saw that one of
> our databases would crash when there were lots of
> inserts/updates/replaces -- other databases, which had the same
> version of MySQL and operating syste
>
> That throws out my first theory about table locks.
That's what I thought, too.
> What do vmstat and top say? Is it CPU bound? I/O bound?
Certainly not CPU bound. Maybe I/O bound, not conclusive. My current
theory is that there is some thrashing on key buffer blocks.
>
> Also you might wan
>
> Can you post the output of SHOW FULL PROCESSLIST during the time when
> both sets of queries are running?
mysql> show full processlist;
We're having some serious problems with concurrent queries.
This is a dual-processor amd64 machine with 16GB RAM, running NetBSD
and MySQL 4.0.25. key_buffer_size is 3GB.
When I have a long running query going, otherwise short queries take
a very very long time to execute. For example, I have
in
I see in the pthread(3) man page that there's an environment variable
PTHREAD_CONCURRENCY The number of concurrent threads to be run.
This value should at least be 1, and smaller
than or equal to the number of CPUs.
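For the variable to matter it has to be in the server's environment before the threads are created, i.e. set before mysqld starts. A sketch of passing it down to a child process (the value 2 is just an illustration, clamped to the CPU count per the man page text above; a real setup would launch mysqld_safe rather than the echo used here):

```python
import os
import subprocess

# Per pthread(3): at least 1, at most the number of CPUs.
ncpu = os.cpu_count() or 1
env = dict(os.environ, PTHREAD_CONCURRENCY=str(min(2, ncpu)))

# Launch a child with the variable set (echo stands in for mysqld_safe):
out = subprocess.run(["sh", "-c", "echo $PTHREAD_CONCURRENCY"],
                     env=env, capture_output=True, text=True)
print(out.stdout.strip())
```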
Do I need t
I'm running mysql 4.0.25 on netbsd 3, on a dual-processor opteron machine with
16GB. I'm trying to make things faster, of course. All the data operations are
on one very large table (about 20GB, index is 17GB), which I will eventually
figure out how to split.
I have four processes working on t
Chris Wells wrote:
Chris Kantarjiev wrote:
I'm running 4.0.25 on NetBSD 3 on an amd64. It appears that the
key_buffer_size is being truncated to 4GB, even though I've set
it larger.
Could someone give me a hint about where to look? I'm a little
suspicious of ha_resize_key_cache() which is using longs internally
to hold the keybuff_size, b
>
> Maybe you've also hit the quirks of memory management and malloc, just as
> we've posted a while ago in http://lists.mysql.com/mysql/186930 ?
An interesting thread, but I'm on NetBSD, not Linux. But it's some place
to start looking around, I guess.
We're starting to use mysql (4.0.25) on an amd64 machine (running
NetBSD-3). One of the reasons for doing this is to use much more
RAM - we tend to thrash the key_buffer on i386 because one of our
indexes is 10GB in size (the table is 15GB).
It appears that mysqld won't start if the setting for ke
I'd like to spread the disk arm load across multiple drives. At
the moment, we mostly use MyISAM tables, but we are also
experimenting with InnoDB.
What's the 'best practice' for doing this? There's no obvious
configuration that lets me designate one directory for index
and another for data - am I
I'm looking at the stats on one of our servers and trying to understand
why Handler_read_rnd_next is so high. It's 256.5M right now, which is
about 10x the total number of reported queries.
The machine is being used, almost entirely, for queries of the form:
select * from crumb
where link_id i
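A rule of thumb here: Handler_read_rnd_next counts sequential row reads, so dividing it by the query count gives the average number of rows each query walks sequentially - much above the typical result-set size suggests table scans or filesorts. Rough arithmetic from the figures above (the query count is back-derived from the stated "about 10x", so it is illustrative only):

```python
# Handler_read_rnd_next counts sequential row reads (table-scan steps).
handler_read_rnd_next = 256_500_000   # 256.5M, from SHOW STATUS
queries = 25_650_000                  # illustrative: "about 10x" implies ~25.65M queries

rows_scanned_per_query = handler_read_rnd_next / queries
print(rows_scanned_per_query)   # 10.0 - each query walks ~10 rows sequentially
```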