Re: convert varchar to char

2005-08-13 Thread Chris Elsworth
On Sat, Aug 13, 2005 at 05:37:56PM +0100, Pooly wrote:
> Damnit !
> Thanks for pointing it, I forgot these things.
> But it's a bit more subtle :

[snip]

Bah, should have waited another 5 minutes before I bothered posting my
last long-winded ramble ;)

> ALTER TABLE sessions MODIFY id char(32) NOT NULL DEFAULT '', MODIFY ip
> char(8) NOT NULL DEFAULT '0';

Cool, I didn't know you could do this though. Ta :)

-- 
Chris




Re: convert varchar to char

2005-08-13 Thread Chris Elsworth
On Sat, Aug 13, 2005 at 04:01:38PM +0100, Pooly wrote:
> 
> ALTER TABLE sessions MODIFY ip char(8) NOT NULL DEFAULT '0';
> ALTER TABLE sessions MODIFY id char(32) NOT NULL DEFAULT '';

Hello,

Since you have two varchar columns, I don't think there's any way to
convert them both to char without dropping one first.

MySQL will always silently convert a char into a varchar if the table
is already a dynamic-row-length type (which it is here, because the
other varchar makes it that way). So neither statement actually does
anything; they're both no-ops.
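
A quick way to see this, assuming a 4.x server that still performs the
old silent column changes:

CREATE TABLE t (a varchar(32), b char(32));
SHOW CREATE TABLE t;
-- b comes back as varchar(32): the varchar makes the row format
-- dynamic, so the char is silently converted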

The only way you can do this would be to move all the data from, say,
ip, into another table temporarily, then drop that column, change id
into a char, re-create ip as a char, and import it all back.

This reveals a bit of a shortcoming in ALTER TABLE: you can't
atomically modify two columns in one statement, which would get around
this problem.

-- 
Chris




mysqldump feature request

2005-02-14 Thread Chris Elsworth
Hello,

I have a feature request for mysqldump that would greatly ease a
current task I have; in my replicated setup my master is currently
MyISAM (as are most slaves), but I have one slave that is InnoDB for
testing.

Somehow or other, it's gotten itself out of sync, so I'm in the
position of needing to resync it.

If I were to do the usual mysqldump --add-drop-table $db | mysql
then everything would be recreated as MyISAM.

If mysqldump had a couple of extra options:
   --truncate-table
   --create-if-not-exists

which would, respectively, truncate each table before inserting any
rows into it, and emit CREATE TABLE IF NOT EXISTS rather than plain
CREATE TABLE (merely by placing the relevant, already-implemented 4.1
commands in the SQL dump), I would have a one-step process for
resyncing my MyISAM master to a slave of differing table types, by
keeping the already-created slave tables.
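
The dump for each table would then begin something like this (a sketch
of the hypothetical output, using a toy table; both statements are
already valid in 4.1):

CREATE TABLE IF NOT EXISTS t (
  id int(10) unsigned NOT NULL default '0'
) TYPE=MyISAM;
TRUNCATE TABLE t;
INSERT INTO t VALUES (1),(2),(3);

On the slave the table already exists as InnoDB, so the CREATE is
skipped; the TRUNCATE empties it, and the INSERTs resync the data.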

I'm sure these could probably come in useful for other scenarios too. 
Would this be possible/feasible/useful to anyone else?

Thanks,
-- 
Chris




Re: slow query, how can i imporve it?

2005-02-12 Thread Chris Elsworth
On Fri, Feb 11, 2005 at 10:45:46AM -0500, [EMAIL PROTECTED] wrote:

> Normally I do not reply to myself but I just realized that in my previous 
> response I confused COUNT(*) (which is slow for InnoDB because it always 
> does a table scan to resolve the version lock of each and every row) with 

Hello all,

You just reminded me about this - I've been meaning to ask: are there
any plans to "fix" this for InnoDB? It seems like quite a serious
omission that InnoDB doesn't keep an accurate internal row count. Are
there technical reasons why this isn't done, or is it in the TODO for
any time soon? It's really one of the biggest things stopping me from
switching wholly to InnoDB :(

-- 
Chris




Re: match a fulltext search with a "-" dash, can't match if - exist

2004-10-20 Thread Chris Elsworth
On Mon, Oct 18, 2004 at 11:19:55AM +0800, Louie Miranda wrote:
> 
> mysql> select * from fullsearch where match (title,body) against ('018-E');
> Empty set (0.00 sec)
> 
> 
> it returns an empty set, is it possible to also search with "-" dash? chars?

If I remember correctly, you need to pass the string to fulltext as a
phrase, which means using boolean mode:

select * from fullsearch where match (title,body) against ('"018-E"' in boolean mode);

I'd prefer being able to escape the - with \, since using a phrase has
other disadvantages (partial word matching goes out the window, for
one), but you can't.

-- 
Chris




Re: Help! Nasty big table efficiency issues with GROUP BY

2004-07-20 Thread Chris Elsworth
On Tue, Jul 20, 2004 at 10:39:00AM +1000, Lachlan Mulcahy wrote:
> 
> Chris,
> 
> Have you checked your following server configurables:
> 
> sort_buffer_size:
>  - This is the size of the cache created by _each_ thread that requires
> ORDER BY or GROUP BY in a query.
> If you are doing a lot of large ordered queries you will need to increase
> this value otherwise MySQL will use _disk_ for sorting (this is very slow
> and largely undesirable).

This was at its default 2MB, so I've raised it to 64MB; it doesn't
seem to have helped much in the short term though :(

> 
> tmp_table_size:
>  - This is the maximum size of an "in memory" or HEAP temporary table. If a
> GROUP BY query causes the server to exceed this limit, an "on disk" table
> will be used. Once again, this is slow and undesirable.

This was 100MB, so that should be sufficient. I've altered
sort_buffer_size so I'll see how it goes over the next few hours.

Cheers for the pointer!

-- 
Chris




Help! Nasty big table efficiency issues with GROUP BY

2004-07-19 Thread Chris Elsworth
Hello,

I wonder if someone could shed some light on a problem that's been bothering
me for months. Please bear with me as I explain it..

I have two tables in question:

CREATE TABLE `News` (
  `FileID` int(8) unsigned NOT NULL auto_increment,
  `Subject` char(200) NOT NULL default '',
  `Category` tinyint(3) unsigned NOT NULL default '0',
  `SubCategory` smallint(5) unsigned NOT NULL default '0',
  `Date` int(10) unsigned NOT NULL default '0',
  PRIMARY KEY  (`FileID`),
  KEY `S_D` (`SubCategory`,`Date`),
  KEY `C_D_P` (`Category`,`Date`,`PostID`),
  KEY `C_P_D` (`Category`,`PostID`,`Date`),
  KEY `Subject` (`Subject`(10)),
  KEY `C_D` (`Category`,`Date`),
  FULLTEXT KEY `ft_Subject` (`Subject`)
) TYPE=MyISAM

CREATE TABLE `FileGroup` (
  `FileID` int(8) unsigned NOT NULL default '0',
  `GroupID` smallint(5) unsigned NOT NULL default '0',
  PRIMARY KEY  (`GroupID`,`FileID`),
  KEY `F_G` (`FileID`,`GroupID`)
) TYPE=MyISAM


News has about 2.5m rows and FileGroup has 3.1m rows.
For each row in News, there are one or more corresponding rows in
FileGroup, determining in which newsgroups each News article appears.

So, typically:
mysql> select * from News where FileID = 26222004\G
*** 1. row ***
     FileID: 26222004
    Subject: some_usenet_post
   Category: 7
SubCategory: 706
       Date: 1090239675

mysql> select * from FileGroup where FileID = 26222004\G
*** 1. row ***
 FileID: 26222004
GroupID: 638
*** 2. row ***
 FileID: 26222004
GroupID: 799


The problem occurs when I want to get News rows that do not appear
in a specific group or set of groups. I also only want one row per News
article, not one row per group, so I have a GROUP BY in there (I could
maybe use DISTINCT instead, but it would amount to the same thing).

I end up with something like this:

SELECT * FROM News
  JOIN FileGroup ON (News.FileID = FileGroup.FileID)
  WHERE GroupID NOT IN (638) AND Category = 7
  GROUP BY News.FileID ORDER BY Date DESC 
  LIMIT 100

The resulting explain:
+-----------+------+-------------------------+-------+---------+-------------+--------+-----------------------------------------------+
| table     | type | possible_keys           | key   | key_len | ref         | rows   | Extra                                         |
+-----------+------+-------------------------+-------+---------+-------------+--------+-----------------------------------------------+
| News      | ref  | PRIMARY,C_D_P,C_P_D,C_D | C_P_D | 1       | const       | 595494 | Using where; Using temporary; Using filesort  |
| FileGroup | ref  | F_G                     | F_G   | 4       | News.FileID | 1      | Using where                                   |
+-----------+------+-------------------------+-------+---------+-------------+--------+-----------------------------------------------+

MySQL is being forced to create a temporary table because of the GROUP BY,
and it pretty much seems to end up scanning the entire table - queries are
taking upwards of 30 seconds. In the queries where I can remove the JOIN (I
don't always need it, because I don't always need to exclude items in
certain groups) it flies, because the temporary table and filesort pretty
much always disappear; I can fiddle with indexes to make that always the
case, and the GROUP BY can go too, since the rows will always be unique.

I've typed myself out now so I'll keep the actual question short :)
What can I do about this? Is there a more efficient way to store this data
to avoid these horrific queries?

If I can supply any more relevant information I'll be only too pleased to.

Thanks for any hints in advance.

-- 
Chris




Re: Server Configuration

2004-06-09 Thread Chris Elsworth
On Wed, Jun 09, 2004 at 01:45:49PM +0100, Marvin Wright wrote:
> Hi,
> 
> We are about to build some new database servers and I have some questions
> which I'd like some advice on.
> 
> The machines we are building have 4 Xeon 2GHz CPU's, 4 x 32GB SCSI disk
> using RAID 1+0 (so thats 64GB of storage) and 4 Gig of RAM.

Consider RAID10: http://www.acnc.com/04_01_10.html
As opposed to 0+1: http://www.acnc.com/04_01_0_1.html

You'd think they're the same, but they're subtly different, leading to
very different characteristics. Note that the Recommended Application
for RAID 10 is a database server.

>   Which file system would you recommend for this ?  I've seen many
> recommendations for ReiserFS but have no experience of it.

I use xfs on my Debian MySQL server. Specs are pretty similar: two
2.8GHz Xeons, 4 36GB U320 drives (in RAID10, which is superb), and 4GB
of memory. My /db has 418 inodes used, and 16G used out of the 30G on
it, making for quite a large average file size. To be honest, the
filesystem isn't really my bottleneck - with 4GB, MySQL and the OS
have tons of caching room, and the filesystem is doing maybe 40k/s of
sustained activity with the odd burst of real work. You'll probably
want to at least check xfs out.

>   Should I use a pre-compiled binary or should I compile one myself ?

I found it makes so little difference it's not worth worrying about. I
use the apt package for ease of upgrade and dependencies.

>   Should the 2 disks for storage be split up into partitions or just 1 large
> partition per disk ?

Always partition. You get to choose which filesystem suits each
partition best. My preference: ext3 for /, xfs for /db, ext2 for /dump.
/ does very little work but I want it consistent, so ext3 is fine.
/dump stores backups (which are mirrored elsewhere) and I don't care
if it's trashed, but I want it fast when I am using it.

> Is there anything else I should consider when configuring the machines that
> affect the performance ?

Linux 2.6 probably isn't in RedHat 7.3 base, but you'll want to try
it. It's faster than 2.4. My configuration was quite happy doing
35,000 selects per second (with super-smack, an arbitrary benchmarking
tool); with 2.4 it was a few thousand lower.


-- 
Chris




Re: mysqldump under cron

2004-06-01 Thread Chris Elsworth
On Tue, Jun 01, 2004 at 09:28:37AM -0400, adam wrote:
> 
> mysqldump --user=root --password=  --opt bugs >
> $BACKUPDIR$BACKUPSQLFILE
> 
> My problem is that it does not seem to work when the crond calls the script.
> The result of the dump is a zero size sql file.

Don't you get the output of stderr mailed to you? That would probably
identify the cause, but at an off-the-top-of-my-head guess: mysqldump
isn't in the default path for the user you're running this as? Specify
the full path to it in the crontab?

-- 
Chris




Re: "merge tables for big log files ?"

2004-05-31 Thread Chris Elsworth
On Mon, May 31, 2004 at 02:03:09PM +0200, Michael Arndt wrote:
> 
> Problem for this variant: merge table has to be dropped and recreated
>   periodically.
>   during the short lag interval the log merge table
>   does not exist

You don't have to drop and recreate it in MySQL. You can do it under
MySQL's feet without it noticing. The .MRG definition file is simply a
list of the tables that make up the merge, with a couple of optional
configuration lines; the only one I use is #INSERT_METHOD.

You can create a new .MRG file very easily in Perl, consisting of the
new range of tables, then move it over the old .MRG, so that in one
atomic operation the merge table gains a new subtable; in order to get
MySQL to notice, however, you have to do a flush tables. You can use
mysqladmin flush-tables for that.

For insertion, this is where #INSERT_METHOD comes in handy. MySQL can
insert into either the first or the last table:

$ tail -3 MessageIDs.MRG
MessageIDs_20040529
MessageIDs_20040530
#INSERT_METHOD=LAST

Now INSERT INTO merge_table will insert into MessageIDs_20040530,
which is today's table. Just after midnight, my Perl script runs and
makes a new .MRG, and after the flush MySQL is inserting into the
next table without even realising.
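
For what it's worth, the same switch can be made at the SQL level,
assuming a version whose ALTER TABLE supports redefining a MERGE
table's UNION - a sketch:

ALTER TABLE merge_table
  UNION=(MessageIDs_20040529, MessageIDs_20040530, MessageIDs_20040531)
  INSERT_METHOD=LAST;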
-- 
Chris




Re: MySQL performance on FreeBSD compared to Linux

2004-05-23 Thread Chris Elsworth
On Sat, May 22, 2004 at 11:25:54PM -0500, mos wrote:
> >
> >Once I wiped this and tried Linux (both gentoo, with their
> >patched-to-the-hilt 2.6.5 kernel, and Debian, with a stock 2.6.6 which
> >had just been released by the time I installed) this figure jumped to
> >35,000 queries per second.
> 
> First of all congratulations on getting queries that fast.  :)
>
> I have a few questions if you have the time:
> 
> 1) Are you using MyISAM tables?

The test tables super-smack uses are in the test database and are
indeed MyISAM.

> 2) How large is the table's row size? And the result size returned?

Here's the create definition from super-smack (select-key.smack):
CREATE TABLE http_auth (
username char(25) not null primary key,
pass char(25),
uid integer not null,
gid integer not null
)

If integers are 4 bytes, I make that 58 bytes per row (25 + 25 + 4 + 4)?

And this is the query:
SELECT * FROM http_auth WHERE username = '$word'
So it fetches the entire row, 58 bytes.

> 3) Are you sorting the results?

No.

> 4) What percentage are selects and updates?

This particular test is 100% selects. super-smack also has an
update-select.smack which does 1:1 updates and selects.

> 5) On average, how many rows are being returned for the query?

1 - the username is a unique key.

-- 
Chris




Re: MySQL performance on FreeBSD compared to Linux

2004-05-21 Thread Chris Elsworth
On Fri, May 21, 2004 at 05:18:09PM -0600, Sasha Pachev wrote:
> 
> It looks like FreeBSD was using only one CPU from your numbers. Try the 
> test with only 1 thread and a lot of iterations to avoid the influence of 
> overhead. I know very little about FreeBSD, but one thing I would check is 
> if the kernel was configured to be SMP-capable/enabled.

Both CPUs were definitely in use; since these are Xeons with
HyperThreading, there are effectively 4 logical CPUs for the OS to use.
The OS sees them all; if I only put one to use then top (over time)
shows just 25% CPU in use, which is correct. I generally run
super-smack with 4 or 5 clients to exploit all the CPUs.

The super-smack results scaled roughly linearly up to 4 clients and
then held fairly static as the number of clients grew beyond that, so
I'm fairly sure all the CPUs are being used - just not as well as they
should be. I didn't record firm numbers; I'd have to put FreeBSD back
on to get firm results for that.

-- 
Chris




MySQL performance on FreeBSD compared to Linux

2004-05-21 Thread Chris Elsworth
Forenote: I have no wish to start an OS debate.

Hello,

I'm in the fortunate position of having a dual 2.8GHz Xeon with 4G of
ram and 4 10k SCSI disks (configured in RAID-10) to deploy as a new
MySQL server.

Since I'm a numbers freak, I've been running super-smack on it for the
last few days to see how it stacks up.

Tweaking various configs and kernel options, on any OS, obviously wins
a few hundred/thousand queries per second, but I'm really quite
surprised at one major difference.

Optimisations and tweaking aside, FreeBSD 5.2.1-p6 on this hardware
did well to achieve 17,000 queries per second, using super-smack's
select-key.smack with the query cache turned on. Nothing I could do,
and I spent days trying, got it much higher.

Once I wiped this and tried Linux (both gentoo, with their
patched-to-the-hilt 2.6.5 kernel, and Debian, with a stock 2.6.6 which
had just been released by the time I installed) this figure jumped to
35,000 queries per second.

Is FreeBSD really this crap for MySQL? I was quite horrified. FreeBSD
5 has a number of threading libraries, and I tried them all.
LinuxThreads won (slightly, there wasn't much in it). I'm very much a
FreeBSD fan and I'd quite like to keep FreeBSD on this machine before
it goes live, but the performance pales in comparison to Linux.

I had to do absolutely no tweaking to achieve 35,000 queries/sec in
Linux.

Has anyone else observed similar behaviour? Does anyone else have
similar hardware with FreeBSD on? Have you fared any better?

Thanks for any comments,
-- 
Chris




Re: super-smack on FreeBSD?

2004-05-18 Thread Chris Elsworth
On Tue, May 18, 2004 at 03:46:46AM -0700, JG wrote:
> At 11:31 AM 5/18/2004 +0100, you wrote:
> >On Mon, May 17, 2004 at 08:43:44PM -0700, JG wrote:
> >>
> >> Has ANYONE been able to get super-smack-1.2 to compile as-is on FreeBSD?
> >>
> >> ./configure --with-mysql
> >> make
> >> make install
> >>
> >> Without errors?
> >
> >No, it required various code changes. A colleague of mine made the
> >changes, I can probably get them off him if you need them.
> 
> PLEASE do Chris...
> 
> I actually got it to compile after wrestling with it for awhile.
> 
> But I can't get it to work with a remote server that doesn't have 
> supersmack installed.

12:05  client.cc, just remove the #ifdef and the stuff in the #else

is apparently all that's involved. We only tried it on a local
server though, so I don't know how to get it working against a remote
server.

-- 
Chris




Re: super-smack on FreeBSD?

2004-05-18 Thread Chris Elsworth
On Mon, May 17, 2004 at 08:43:44PM -0700, JG wrote:
> 
> Has ANYONE been able to get super-smack-1.2 to compile as-is on FreeBSD?
> 
> ./configure --with-mysql
> make
> make install
> 
> Without errors?

No, it required various code changes. A colleague of mine made the
changes, I can probably get them off him if you need them.

-- 
Chris




Re: optimization needed

2004-05-11 Thread Chris Elsworth
On Wed, May 05, 2004 at 01:06:45PM -0400, Brent Baisley wrote:
> Basically, you can't, it's a limitation of the InnoDB format. If you 
> change the table type to MyISAM, that query would be almost 
> instantaneous. But you are probably using InnoDB for a reason, so you 
> may be stuck if you want a record count.

I might be way off base here, but couldn't he use the following index:

> >  INDEX protocol_TimeStamp (time_stamp)

Then do something like SELECT COUNT(*) ... WHERE time_stamp >
'1979-01-01' (any date that predates every row);

Would this use the index to resolve it quickly? Or, having written
that and looked at it, will the 30% rule kick in? It probably will,
won't it.. Maybe a FORCE INDEX?
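
Something like this, say (table name guessed from the index name;
purely a sketch):

SELECT COUNT(*) FROM protocol FORCE INDEX (protocol_TimeStamp)
WHERE time_stamp > '1979-01-01';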

-- 
Chris




Re: MERGE tables and concurrent inserts

2004-03-24 Thread Chris Elsworth
On Wed, Mar 24, 2004 at 01:23:53PM +, Chris Elsworth wrote:
> 
> I'm wondering if specifying LOW_PRIORITY disables concurrent inserts
> explicitly, so I'll try removing that and see if anything improves,
> but in the meantime, if anyone has any advice, please share :)

Ignore that bit. Found the answer on the INSERT manual page, 10
seconds after sending.

"Note that LOW_PRIORITY should normally not be used with MyISAM tables
as this disables concurrent inserts."

Doh :)

-- 
Chris




MERGE tables and concurrent inserts

2004-03-24 Thread Chris Elsworth
Hello,

Further to my MERGE table query the other day, I've now put it into
action and it seems to be working fine. However I'm noticing that
"INSERT LOW_PRIORITY" statements are blocking when there are SELECTs
running.

Does anyone know if concurrent inserts work with MERGE tables? Are
there any criteria I need to satisfy? I know for certain that the
table being used to INSERT to has *no* deleted rows in it. Do I need
to ensure that all children of the MERGE table have no deleted rows?

I'm wondering if specifying LOW_PRIORITY disables concurrent inserts
explicitly, so I'll try removing that and see if anything improves,
but in the meantime, if anyone has any advice, please share :)

Thanks,
-- 
Chris




Re: MERGE table with some packed tables?

2004-03-22 Thread Chris Elsworth
On Mon, Mar 22, 2004 at 01:40:29PM -0600, Dan Nelson wrote:
> In the last episode (Mar 22), Chris Elsworth said:
> > Now that's all well and good and I'm fairly sure it'll all work, but
> > another interesting idea I was wondering over was - can I myisampack
> > the tables that I know won't be updated anymore, and still MERGE
> > them with other unpacked tables?
> 
> It should work.  I think I tried something similar before deciding to
> just gzip the tables and if I needed them I'd uncompress them and query
> them individually :)

Ah, well these tables do need to be queryable and thus live; I only
ask about packing because it'll save space and (apparently) make them
faster; I've never actually played with packing MyISAM tables before.

Happen to know if MySQL will need a kick after the packing is done?
Would a flush-tables do, assuming I can guarantee there was nothing
outstanding to be written to the table before I started packing it?

-- 
Chris




MERGE table with some packed tables?

2004-03-22 Thread Chris Elsworth
Hello,

Just a quick question to see if anyone's tried this and run into any
problems, or if it'll even work - I have a *huge* table that's just
crashed thanks to FreeBSD panicking, and I'm estimating the repair
operation is going to take another 4 hours :(

But anyway, I'm pondering splitting this table up into a few
sub-tables and making the change transparent to overlying code via a
MERGE table. The data in this table is inserted hourly, and once it's
in there, it doesn't change until I delete it 25 days later. It's kind
of a rolling database. I've mused over the idea of making one MyISAM
table for each day, creating a new one every night, and updating the
MERGE schema - this also means I don't have to do a time-consuming
DELETE of old data - I just take the table definition out of the
MERGE, move the datafile away so MySQL can't see it, and flush-tables.

Now that's all well and good and I'm fairly sure it'll all work, but
another interesting idea I was wondering over was - can I myisampack
the tables that I know won't be updated anymore, and still MERGE them
with other unpacked tables?

Sorry, this got a bit long and rambly :)
-- 
Chris




InnoDB Hot Backup & new tablespace format

2004-02-07 Thread Chris Elsworth
Hello,

Does anyone know if InnoDB Hot Backup supports the new tablespace
format being introduced in the latest versions of InnoDB?

I'm quite tempted to switch from MyISAM to InnoDB using the new
tablespace format, but I'm put off by how inflexible InnoDB files seem
to be. I like being able to move .MYD and .MYI files around and have
any mysqld use them; InnoDB seems a bit picky about that.
Does the new one-file-per-tablespace format change any of that? Is
there any actual advantage to using it?

Thanks for any replies,
-- 
Chris




Re: any ideas about it

2004-02-02 Thread Chris Elsworth
On Mon, Feb 02, 2004 at 07:16:13PM +0530, Nitin Mehta wrote:
> Hi all,
> 
> I m looking for any function or a work around to fetch numerical data without its 
> sign (-10 --> 10, 10 --> 10). Any ideas?

ABS() :

mysql> select abs(-10);
+----------+
| abs(-10) |
+----------+
|       10 |
+----------+



-- 
Chris




Re: Reset Auto-Incriment?

2004-01-26 Thread Chris Elsworth
On Mon, Jan 26, 2004 at 10:40:02AM -0600, Paul Fine wrote:
> I realize that this might be problematic/devastating if you had an AI PK and
> did this, however in my case there is no problem as there is no related data
> in the database yet lol.
> 
> My question is, how can I reset AI? For example I deleted several rows and
> now my AI starts with the AI number after the last row that was deleted. Ie.
> 1,2,3,4,9,10,11 when I want to start numbering at 5 not 9!

Use:
ALTER TABLE foo AUTO_INCREMENT = $x

Doesn't seem to be documented here, but it's in a comment at the
bottom:
http://www.mysql.com/doc/en/ALTER_TABLE.html

-- 
Chris




Re: find duplicates

2004-01-13 Thread Chris Elsworth
On Tue, Jan 13, 2004 at 02:03:26AM -0500, [EMAIL PROTECTED] wrote:
> 
> select prodid,count(groupid) as cnt from products
>   where (groupid=65 or groupid=66)
>   group by imgsml having cnt>1;
> 
> I.e. replacing order by with a having clause. After trying many variations; are
> 'order by' and 'having' mutually exclusive? If so - how would you order the
> result table?

They shouldn't be; you just need to get the order right:

 select prodid,count(groupid) as cnt from products
   where (groupid=65 or groupid=66)
   group by imgsml having cnt>1 order by cnt;

That's perfectly valid syntax.

-- 
Chris




Re: Replicating Table Schema

2004-01-01 Thread Chris Elsworth
On Thu, Jan 01, 2004 at 05:42:59PM -0500, Gohaku wrote:
> Hi everyone,
> I was just curious if there's a shorthand way of replicating a Table 
> Schema.
> I use the following to create a new Table with the same schema.
> >create table new_table ( select * from table);
> >delete from new_table;

CREATE TABLE new_table LIKE old_table;

Available from MySQL 4.1 up:
http://www.mysql.com/doc/en/CREATE_TABLE.html
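
On older servers a rough equivalent is a CREATE ... SELECT that matches
no rows, though note it copies only the column definitions, not the
indexes:

CREATE TABLE new_table SELECT * FROM old_table WHERE 0;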

-- 
Chris




Re: num rows / pages

2003-12-27 Thread Chris Elsworth
On Sat, Dec 27, 2003 at 02:08:08PM +, Abs wrote:
> hi
> i was trying to group my results 10 per page ($p per
> per page). if i use limit, then there's no way of
> knowing how many there are left so i can't give page
> numbers as:
> << first 2 3 4 last >>. perhaps running the query
> twice, first time wihtout limit to see how many there
> were and the 2nd just for a particular bunch with
> limit. wouldn't that load the mysql db? and what if
> i've got 10,000 rows or so? the query will take time.
> any solutions?

Use SQL_CALC_FOUND_ROWS.
Documented in http://www.mysql.com/doc/en/SELECT.html

You run your first query with SQL_CALC_FOUND_ROWS; then, once you're
done with it, you run another ("SELECT FOUND_ROWS()") and get the
total result count you would have had without the LIMIT.
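
Schematically, with invented table/column names:

SELECT SQL_CALC_FOUND_ROWS id, title
FROM articles
ORDER BY id
LIMIT 30, 10;           -- page 4, at 10 rows per page
SELECT FOUND_ROWS();    -- total matching rows, ignoring the LIMIT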

-- 
Chris




Re: Help me - please

2003-12-18 Thread Chris Elsworth
On Thu, Dec 18, 2003 at 05:28:30PM -0200, Carlos André Moura de Amorim wrote:

> 031217 14:32:34  Warning: setrlimit couldn't increase number of open files
> to more than 256 (request: 510)

It might be worth putting a "ulimit -n 1024" (or some other decent
number) in the rc.d script that starts mysql; then (assuming the
kernel is going to allow it) mysql will be able to change the number
of file descriptors it can open, to the best of my knowledge.

-- 
Chris




Re: NOT EXISTS

2003-12-15 Thread Chris Elsworth
On Mon, Dec 15, 2003 at 10:59:32AM -, Curley, Thomas wrote:
> Guys
> 
> Any idea why this query will not work in 4.0.13
> 
> select batch_id from BATCH
> where NOT EXISTS (select 1 from TXN where TXN.batch_id = BATCH.batch_id)
> 
> You have an error in your SQL syntax near 'EXISTS (select * from TXN where 
> TXN.batch_id = BATCH.batch_id)' at line 1

It's a subselect, and subselects aren't implemented in 4.0 - you need
4.1.
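
Until then, the standard 4.0 workaround for this particular query is
the LEFT JOIN exclusion pattern, which returns the same rows:

SELECT BATCH.batch_id
FROM BATCH LEFT JOIN TXN ON TXN.batch_id = BATCH.batch_id
WHERE TXN.batch_id IS NULL;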

-- 
Chris




Re: Questions about indexing

2003-12-15 Thread Chris Elsworth
On Sun, Dec 14, 2003 at 03:53:00PM -0500, Dan Anderson wrote:
> 
> I have a database I'm using  for a MMORPG (well, it isn't very
> MM because I'm something of a  noob), and I have a few questions about
> indexing.  I  am storing world data  in a database.  In  order to keep
> everything as  swift as  possible, I have  indexed everything.   And I
> really mean, everything -- a few dozen columns on a half dozen tables.
> 
> My question  is, is this the  right way?  I  figure that since

Not necessarily. You should analyze your SELECTs individually and
create the indexes that are actually going to be used. Doing this may
also show you where multi-column indexes could give you even more
speed (you might be able to benefit from 'using index', pulling all
the rows straight from an index).
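
For instance, a sketch with a made-up game table:

CREATE TABLE world_items (
  map_id  int unsigned NOT NULL,
  item_id int unsigned NOT NULL,
  qty     int unsigned NOT NULL,
  KEY map_item (map_id, item_id)
) TYPE=MyISAM;

-- resolved from the map_item index alone ('using index'),
-- without touching the data file:
SELECT item_id FROM world_items WHERE map_id = 42;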

The only other thing you should be worried about is the size of the
index file; bigger index files take longer to seek through. But since
you say they'll all fit in memory, this is only a concern when you're
going to be updating them, I suppose.

With such a small database it really boils down to just being tidy;
you don't want indexes you're not going to use.

-- 
Chris




ALTER TABLE .. ORDER BY

2003-12-10 Thread Chris Elsworth
Hello,

Just a quickie. Does anyone know if issuing an ALTER TABLE t ORDER BY c
is as good as an OPTIMIZE TABLE if I know the order I'll mostly be
sorting by? Does the ALTER TABLE operation reclaim deleted rows,
for example? Or should I OPTIMIZE and then ALTER?

Cheers :)

-- 
Chris




Re: Delayed Insert Question

2003-12-09 Thread Chris Elsworth
On Tue, Dec 09, 2003 at 02:18:58PM +0100, David Bordas wrote:
> 
> I've read mysql doc sereval times, but i can't find any varaible that
> specify when the delayed queue was flushed.

Well, I suppose that's because there isn't one. The DELAYED thread
handles that by itself. You don't want the queue too large, because if
mysql crashes while you have rows sat waiting to be written, they're lost.

> If I understand, I can increase delayed_insert_limit for better performance,
> but I should also increase the delayed_queue as well ?

If you increase delayed_insert_limit then you're effectively giving
the DELAYED thread more preference on the table; it will write more
rows in a batch (once it can, i.e. once there's a window with no locks
on the table), which potentially makes other selects wait longer.

Increasing delayed_queue_size means the clients can pile more and more
rows into the DELAYED thread's queue while it waits for a chance to
write. This may give your clients a bit of a boost, but only if the
queue fills up; at a default of 1000, you must be doing a lot of
inserts to reach that. Remember that if you have a lot of rows waiting
and mysql crashes, they're lost.
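
If you do want to experiment, something like this should work, assuming
a server new enough to allow SET GLOBAL on these variables (the values
here are purely illustrative):

SET GLOBAL delayed_insert_limit = 500;
SET GLOBAL delayed_queue_size = 5000;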

-- 
Chris




Re: Delayed Insert Question

2003-12-09 Thread Chris Elsworth
On Tue, Dec 09, 2003 at 12:17:41PM +0100, David Bordas wrote:

> So, i'm using INSERT DELAYED with some good succes.
> 
> But I've got a question.
> If i decrease delayed_insert_limit to ten secondes for example, is that mean
> that delayed_queue will be flushed every ten secondes ?
> Is there an other variable that specify the flush time ?

No - delayed_insert_limit refers to how many rows a DELAYED thread
will insert at once before checking whether any other SELECTs are
waiting for the table. The process (and all the related variables you
can tweak) is documented here:

http://www.mysql.com/doc/en/INSERT_DELAYED.html

-- 
Chris




Re: string comparison query

2003-12-07 Thread Chris Elsworth
On Sun, Dec 07, 2003 at 12:11:21AM -0500, Alex Khordos wrote:
> 
> I am trying to work out the query to retrieve city name based on the zip
> code in the second table.
> How can I compare two strings? I tried LIKE in my query but it did not work.

Use SUBSTRING to cut the extended zip code down to 5 characters; then
you can use a standard = comparison (note that MySQL string positions
start at 1, not 0):
WHERE (SUBSTRING(table2.zip_code, 1, 5) = table1.zip_code)
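
So the whole thing might look roughly like this (the city column name
is a guess, since the schema wasn't posted):

SELECT table1.city
FROM table1, table2
WHERE SUBSTRING(table2.zip_code, 1, 5) = table1.zip_code;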


-- 
Chris




Re: 4.1.1. -> WITH QUERY EXPANSION

2003-12-07 Thread Chris Elsworth
On Sun, Dec 07, 2003 at 01:48:08PM +0100, Sergei Golubchik wrote:

> > What is WITH QUERY EXPANSION? I found no details in manual.
> 
> Fixed.
> "WITH QUERY EXPANSION" is now documented.

I'm sorry, but I must be blind. Where is it on
http://www.mysql.com/doc/en/Fulltext_Search.html ?

Did someone forget to update the site? ;)
-- 
Chris




Re: MySQL installation problem

2003-12-06 Thread Chris Elsworth
On Sat, Dec 06, 2003 at 08:20:57PM +0100, Franz Edler wrote:
> > 
> > Errno 13 is Permission Denied. Is the mysql daemon running with
> > sufficient privileges to read the directory and files within it?
> > All the database files should be owned by the mysql user, and are
> > generally in the mysql group, mode 660. Check that's the case.
>  
> All mysql-database files in /var/lib/mysql/mysql/ are owned by user=root and
> group=root, mode =660.
> There is no mysql group.
> 
> I installed as user "root" (using YaST) and created the database-files with
> mysql_install_db also as user "root".

During normal operation, mysql should run as the user "mysql", so it
can't read those. You'll want to chown them. Not having a mysql group
isn't critical, but if you haven't got a mysql user then I dare say
you need to check the installation process, because it probably hasn't
worked correctly.

-- 
Chris




Re: MySQL installation problem

2003-12-06 Thread Chris Elsworth
On Sat, Dec 06, 2003 at 03:52:05PM +0100, Franz Edler wrote:
> 
> The msqld.log file shows:
> 031206 15:05:20  mysqld started
> 031206 15:05:20   Fatal error: Can't open privilege tables: 
>   Can't find file: './mysql/host.frm' (errno: 13)
> 031206 15:05:20  Aborting
> 031206 15:05:20  /usr/sbin/mysqld: Shutdown Complete
> 
> What is wrong?

Errno 13 is Permission Denied. Is the mysql daemon running with
sufficient privileges to read the directory and files within it?
All the database files should be owned by the mysql user, and are
generally in the mysql group, mode 660. Check that's the case.

-- 
Chris




Re: Optimization on query with WHERE, GROUP BY and ORDER BY

2003-12-05 Thread Chris Elsworth
Hello,

On Fri, Dec 05, 2003 at 12:02:05PM +0100, Martin Gillström wrote:
> 
> The table can look something like this:
> row_id INT PRIMARY KEY
> where_column SET('a','b','c')
> groupby_column VARCHAR(255)
> orderby_column DATE
> .. more rows that I need to fetch with the select.

> This is what I have tried but not seen any dramaticly speed improvements 
> with:
> I have tried but I can't get mysql to use one index only.
> A also get that mysql uses temp tables and also store them disk. I have 
> raised  tmp_table_size without any success.
> I have experimented with sort_buffer_size, read_rnd_buffer_size, 
> key_buffer_size.

As I understand it, you can't get MySQL to use an index for sorting
and grouping if you're sorting and grouping on different columns. This
one bites me too, and forces me to live with a "using temporary; using
filesort" query on one of my biggest, busiest tables.

-- 
Chris




DELETE on a huge table; how long *should* it take?

2003-12-05 Thread Chris Elsworth
Hello,

I have quite a large table, 45 million rows, which has 3 indexes on
it. The rows are evenly distributed across one particular index, which
records the time the row was inserted. At any given time there's
between 20 and 21 days' worth of rows, and every night I delete
anything over 20 days old. So I'm deleting about 2.2 million rows, with
what is basically:
DELETE QUICK from table WHERE indexed_row < UNIX_TIMESTAMP() - (86400 * 20)

I have PACK_KEYS=1 DELAY_KEY_WRITE=1, and this is a MyISAM table. Now,
roughly, should this take half an hour or more? It seems very disk
bound, producing lots of small disk transactions. I wouldn't really
mind, but the entire table is locked for the duration and the site it's
powering grinds to a halt.

My first thought is to change it to InnoDB and use a transaction so
the delete can take as long as it wants without interrupting anything
else. I am, however, a bit worried about space; the MyISAM files are
using 5G for data + 763M for indexes, and it's only an 18G drive, so
I'm a bit worried the InnoDB equivalent is going to be too big.

Any other pointers, speedup tips, ways to avoid this issue entirely?

-- 
Chris




Repeated 100% CPU problem in FreeBSD

2003-11-27 Thread Chris Elsworth
Hello,

Most of the time, MySQL 4.0.16 runs absolutely fine on my FreeBSD 4.9
system. However, today, inexplicably, it's run into the same problem 3
times: a couple of the threads suddenly start eating 100% CPU for
no good reason while doing a SELECT. They'll sit there until I kill
them, which results in a lot of hair-tearing as the server comes back
up checking all the tables.

I haven't been able to produce a core despite sending it a SIGABRT,
but I'll try recompiling with -g (do the supplied binaries have this?)
soon.

Pretty much all I can find that might be relevant relates to the
poor threading implementation on FreeBSD and the name-resolving issue.
I should have worked around both of them: I've built with
linuxthreads, and I have skip-name-resolve in my.cnf.

Does *anyone* have any clues why this randomly happens? It had been
fine for 30 or so days prior to this, and today it's crashed 3 times
in this way. I've recently run a myisamchk -r *.MYI over all the
tables in the database which comes up clean.

Any help would be appreciated.

-- 
Chris




Re: Unique Index efficiency query

2003-11-27 Thread Chris Elsworth
On Wed, Nov 26, 2003 at 06:44:57PM -0600, Matt W wrote:
> Hi Chris,
> 
> It doesn't take MySQL any more or less time to update a unique index
> than a non-unique one. :-)

Ah, a nice simple answer to a long drawn-out question :) Thanks Matt -
just the reassurance I was after; I didn't want inserts to suddenly
slow down by a factor of 10. I realise they'll slow down slightly with
another index to update, but never mind. Now I can go give MySQL half
an hour's work to do creating this index :)

-- 
Chris




Unique Index efficiency query

2003-11-26 Thread Chris Elsworth
Hello,

Let me just outline the table in question first.
I have a rather large (40,000,000 rows) table as follows:

   Table: MessageIDs
Create Table: CREATE TABLE `MessageIDs` (
  `mid_msgid` char(96) NOT NULL default '',
  `mid_fileid` int(10) unsigned NOT NULL default '0',
  `mid_segment` smallint(5) unsigned NOT NULL default '0',
  `mid_date` int(10) unsigned NOT NULL default '0',
  `mid_bytes` mediumint(8) unsigned NOT NULL default '0',
  KEY `fid_msgid` (`mid_fileid`,`mid_msgid`(5)),
  KEY `fid_bytes` (`mid_fileid`,`mid_bytes`),
  KEY `mid_date` (`mid_date`)
) TYPE=MyISAM PACK_KEYS=1 DELAY_KEY_WRITE=1


Index details:
mysql> show indexes from MessageIDs;
+------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table      | Non_unique | Key_name  | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| MessageIDs |          1 | fid_msgid |            1 | mid_fileid  | A         |     1114302 |     NULL | NULL   |      | BTREE      |         |
| MessageIDs |          1 | fid_msgid |            2 | mid_msgid   | A         |    20057449 |        5 | NULL   |      | BTREE      |         |
| MessageIDs |          1 | fid_bytes |            1 | mid_fileid  | A         |     1114302 |     NULL | NULL   |      | BTREE      |         |
| MessageIDs |          1 | fid_bytes |            2 | mid_bytes   | A         |    40114898 |     NULL | NULL   |      | BTREE      |         |
| MessageIDs |          1 | mid_date  |            1 | mid_date    | A         |     1744126 |     NULL | NULL   |      | BTREE      |         |
+------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+

Now, what I want to do with this table is create a unique index on
(mid_fileid, mid_segment).
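
i.e. something along these lines (index name arbitrary):

ALTER TABLE MessageIDs ADD UNIQUE KEY fid_segment (mid_fileid, mid_segment);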

How does MySQL deal with ensuring that a unique index doesn't end up
non-unique on a table this large? Is making this index going to slow
down my inserts proportionally as the table grows? Would I be better
off making it a non-unique index, and doing a select first to ensure
I'm not inserting a duplicate?

Thanks for any tips,

-- 
Chris
