Re: Dismal performance on a 16G memory/8 core server - my.cnf settings?

2008-04-24 Thread Jeremy Cole

Hi,


 Someone suggested I try the -amd64 kernels which provide 64 bit but when I try
 to boot it I get various errors like "this CPU does not support long
 (something), please use a 32-bit OS" - the 64 bit install CD says the same
 message. So I assume these are not 64 bit CPUs.


They almost certainly are.  Look at the contents of /proc/cpuinfo.

You are probably using a 32-bit OS.  You can't use a lot of memory
efficiently unless you install a 64-bit OS, regardless of whether it
has big memory support.  But that's an x86_64 OS, not an AMD64 OS.
These are not the same architecture.


Er, since he's talking about a 6650, a 6th generation Dell machine, it 
very likely *does* have 32-bit CPUs.  And he's said it came with the 
PERC 3/DC card, which is a very old RAID card.  I would hope this 
machine didn't cost much, as it's quite old.


Besides that, though, x86_64 is exactly the amd64 architecture.  AMD 
came up with it, Linux called it amd64, and then when Intel copied it 
and called it EM64T, it was renamed in Linux to x86_64 to be more generic.


The above message is the exact one you get when you try to boot an 
x86_64 kernel on a 32-bit CPU.
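For what it's worth, the decisive check is whether the "lm" (long mode) flag shows up in the flags line of /proc/cpuinfo.  A minimal sketch of that check (the helper name is ours, not from the thread):

```python
def has_long_mode(cpuinfo_text):
    """Return True if a 'flags' line in /proc/cpuinfo output contains the
    'lm' (long mode) flag, i.e. the CPU can run a 64-bit (x86_64) kernel."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            _, _, flags = line.partition(":")
            if "lm" in flags.split():
                return True
    return False

# On a real Linux box you would feed it the actual file:
#   has_long_mode(open("/proc/cpuinfo").read())
```

On a machine like the one discussed, this would presumably come back False, consistent with the boot error quoted above.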


Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Customer Survey :: an exercise in frustration

2007-12-03 Thread Jeremy Cole

Hi,

And, what's up with the vertical white lines on dark blue background as 
a section divider?


My eyes actually hurt looking at the survey page.  It makes it *very* 
difficult to read the questions.


Regards,

Jeremy

Daevid Vincent wrote:

I started to take this but gave up, here's a few suggestions:
 
1. you have questions that need a none option

2. NEVER make the questions mandatory. like #32. I don't give a shit about
most of those things, but you force me to check '1' for all of them. that's
wasting my time.
3. how many damn questions are in this freakin' survey! i'm on #32 and many
are multipart!!! 
 
I'm done dude. I've got no more time to be wasting on this.
 
UGHHH!!!



  _  

From: MySQL [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 11:02 AM

To: Daevid Vincent
Subject: MySQL Customer Survey 




New MySQL User Survey
Plus, a Chance to Win a free pass to the 2008 MySQL Conference & Expo!
Help shape the future direction of MySQL Products! 


MySQL is conducting a User Survey. Your answers are very important to us.
All information you provide will be kept confidential and will only be
reported on in aggregate form. 


Also, eligible users who complete this survey will be entered into a random
drawing to win a free pass to the 2008 MySQL Conference & Expo. 


To take this survey (please note that Zoomerang is the tool we use for our
surveys), please go to: 
http://www.zoomerang.com/survey.zgi


We look forward to and appreciate your participation. 

Sincerely, 


MySQL
Cupertino City Center Building
20450 Stevens Creek Blvd., Suite 350
Cupertino, CA 95014 

  _  


Copyright © 2007 MySQL AB |
http://www.mysql.com?elq=51B8EE3B2FBD4796A7CE3ACEAEE3899E www.mysql.com


Please click here to unsubscribe:
http://now.eloqua.com/u.asp?s=287&elq=51B8EE3B2FBD4796A7CE3ACEAEE3899E

 





--
high performance mysql consulting
www.provenscaling.com




Re: MySQL Customer Survey :: an exercise in frustration

2007-12-03 Thread Jeremy Cole
I just finished it.  There were 56 questions, not counting the final 
"give us your name" stuff for the prize.  Most of the questions were 
mandatory.


This seems to be a "fill out the form" type of survey system rather than 
a self-adjusting system based on the answers to your previous questions. 
Many of the questions asked were for things I had already said I 
didn't use...


Regards,

Jeremy

Daevid Vincent wrote:

I started to take this but gave up, here's a few suggestions:
 
1. you have questions that need a none option

2. NEVER make the questions mandatory. like #32. I don't give a shit about
most of those things, but you force me to check '1' for all of them. that's
wasting my time.
3. how many damn questions are in this freakin' survey! i'm on #32 and many
are multipart!!! 
 
I'm done dude. I've got no more time to be wasting on this.
 
UGHHH!!!



  _  

From: MySQL [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 11:02 AM

To: Daevid Vincent
Subject: MySQL Customer Survey 




New MySQL User Survey
Plus, a Chance to Win a free pass to the 2008 MySQL Conference & Expo!
Help shape the future direction of MySQL Products! 


MySQL is conducting a User Survey. Your answers are very important to us.
All information you provide will be kept confidential and will only be
reported on in aggregate form. 


Also, eligible users who complete this survey will be entered into a random
drawing to win a free pass to the 2008 MySQL Conference & Expo. 


To take this survey (please note that Zoomerang is the tool we use for our
surveys), please go to: 
http://www.zoomerang.com/survey.zgi


We look forward to and appreciate your participation. 

Sincerely, 


MySQL
Cupertino City Center Building
20450 Stevens Creek Blvd., Suite 350
Cupertino, CA 95014 

  _  


Copyright © 2007 MySQL AB |
http://www.mysql.com?elq=51B8EE3B2FBD4796A7CE3ACEAEE3899E www.mysql.com


Please click here to unsubscribe:
http://now.eloqua.com/u.asp?s=287&elq=51B8EE3B2FBD4796A7CE3ACEAEE3899E

 





--
high performance mysql consulting
www.provenscaling.com




Re: Oracle is acquiring...............................

2007-11-26 Thread Jeremy Cole

Hi,

I wouldn't say the Sleepycat/BDB acquisition had anything to do with 
MySQL.  BDB is much more useful *outside* of MySQL, and has a much 
bigger market there anyway.  The BDB storage engine is all but useless.


As for Innobase/InnoDB, their motives are still unclear. :)

Regards,

Jeremy

Shanmugam, Dhandapani wrote:

What is the reason for Oracle to detain MySQL features like BDB and
InnoDB...? Are there any hidden market strategies behind that? 



Thanks & Regards,
Dhandapani S 



-Original Message-
From: Martijn Tonies [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 26, 2007 7:41 PM

To: MySql
Subject: Re: Oracle is acquiring...



surely mysql would just fork the last gpl innodb release if they got 
bummed by oracle...


I doubt if they can -- from the InnoDB website:
The GPLv2 License
The GNU General Public License version 2, under which both MySQL and
InnoDB are published, does not allow, without permission from MySQL AB
and Innobase Oy, linking of InnoDB and MySQL, or the client libraries of
MySQL, to a product which you distribute but which does not itself
satisfy the GNU GPLv2 license.



So if Innobase doesn't grant MySQL permission, they have no leg to stand
on.



Either way, I think Falcon is the future if you want to stay with MySQL
only,

other options are available if you want to go 3rd party...





Martijn Tonies
Database Workbench - development tool for MySQL, and more!
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com






--
high performance mysql consulting
www.provenscaling.com




Re: Arabic text storing problem

2007-11-14 Thread Jeremy Cole

Hi,


Thanks for the link
Is there standalone jar/zip Just for auto_ef


Also see Perl module Encode::Guess.  There are others out there as well.
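The same trial-and-error idea behind Encode::Guess is easy to sketch in Python; the function and candidate list here are illustrative, not a drop-in replacement for auto_ef:

```python
def guess_encoding(data, candidates=("utf-8", "cp1256", "latin-1")):
    """Return the first candidate encoding that decodes `data` cleanly.
    Order matters: utf-8 is strict and fails fast on non-UTF-8 input,
    while single-byte codecs such as cp1256 (Arabic) or latin-1 accept
    nearly any byte sequence, so they belong at the end of the list."""
    for enc in candidates:
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

# Arabic bytes that are valid cp1256 but not valid UTF-8:
# guess_encoding(b"\xe3\xd1\xcd\xc8\xc7")  -> "cp1256"
```

Like any guesser, this is a heuristic: a latin-1 fallback never fails, so a wrong-but-plausible answer is always possible.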

Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: mysql eluding query debugging?

2007-11-10 Thread Jeremy Cole

Hi Moritz,


Thus, my questions:
- what's wrong with that query? I know they are big tables, but 
according to EXPLAIN, this should be fast enough, because mysql's seeing 
the indexes just fine.


As others have said, likely nothing wrong with the query per se.  It 
just sounds like you're hitting disk instead of cache.


- how can i tell mysql to switch off whatever cache or performance thing 
it is that makes debugging such a PITA? is there a way to disable it 
just for this query? (SELECT SQL_NO_CACHE doesn't seem to make a difference)


There are a few caches at play here:

1. query cache - caches entire query results; not what you're hitting here

2. storage engine caches - key_buffer_size for MyISAM and 
innodb_buffer_pool_size for InnoDB; caches index data for MyISAM and 
index and row data (technically pages) for InnoDB


3. OS disk cache; caches any data accessed from disk, not tunable for 
MyISAM, tunable using innodb_flush_method=O_DIRECT for InnoDB


I would think what is happening is that you don't have a large enough 
cache at level 2 above, which means your data doesn't fit in cache 
inside MySQL, so it gets cached in the OS at level 3, which is rather 
fickle and will page things out when you may not expect it.
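The back-of-the-envelope version of that level-2 argument (the numbers and helper are entirely hypothetical):

```python
def cache_fit_report(data_bytes, engine_cache_bytes):
    """Crude check of whether the working set fits in the storage engine's
    own cache (key_buffer_size or innodb_buffer_pool_size, level 2 above).
    If it doesn't, reads fall through to the OS cache (level 3) or disk."""
    if data_bytes <= engine_cache_bytes:
        return "fits in engine cache (level 2)"
    ratio = data_bytes / engine_cache_bytes
    return "%.1fx larger than engine cache; spilling to OS cache or disk" % ratio

# e.g. 20 GB of index+row data against a 4 GB InnoDB buffer pool:
print(cache_fit_report(20 << 30, 4 << 30))
# -> 5.0x larger than engine cache; spilling to OS cache or disk
```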


Alternately, your system is busy enough that things still get paged out 
at level 2 above in which case the solution isn't necessarily as simple.


Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: Query performance plain/text versus AES_DECRYPT(): LIKE %..%

2007-10-24 Thread Jeremy Cole

Hi John,

OK, no conspiracy here.  Here is your problem:

25  $qry = sprintf("SELECT id, line FROM `encryptietest` 
WHERE AES_DECRYPT(`field`, '%') LIKE '%%%s%%'", $enckey, $word);


You are missing the s in %s for your first string argument, which 
causes the query to be syntactically incorrect and fail.  So your AES 
test is only testing how quickly you can query with a syntax error. :)
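For illustration, Python's %-formatting trips over the same malformed specifier, so the failure mode is easy to reproduce outside PHP (the table and column names here are stand-ins):

```python
# '%' followed by "'" is an incomplete conversion specifier, as in line 25
# of the benchmark; the fixed template uses '%s'.
broken = "SELECT id, line FROM t WHERE AES_DECRYPT(f, '%') LIKE '%%%s%%'"
fixed = "SELECT id, line FROM t WHERE AES_DECRYPT(f, '%s') LIKE '%%%s%%'"

try:
    broken % ("secret", "word")
except ValueError:
    print("broken template rejected")  # unsupported format character

print(fixed % ("secret", "word"))
# -> SELECT id, line FROM t WHERE AES_DECRYPT(f, 'secret') LIKE '%word%'
```

(PHP's sprintf handles a malformed specifier differently — per the message above, it yielded a syntactically invalid query rather than an error — but the root cause is the same missing s.)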


After adding the s, the results I get are:


([EMAIL PROTECTED]) [~/datisstom/bench]$ php -q bench.php
Control test (plain/text LIKE %..%):1.383749s
Decrypt test (AES_DECRYPT() LIKE %..%): 1.441944s
done


Nonetheless, I'd still argue that this entire concept is doomed to 
terrible performance anyway.


Regards,

Jeremy

John Kraal wrote:

I put it here:

http://pro.datisstom.nl/tests/bench.tar.bz2

The encryption isn't really a *real* security measure, except for when 
somebody is stupid enough to install phpMyAdmin or anything equivalent 
and try to get personal data. The problem is the password needs to be 
somewhere on the application server, and if you're in, you're in. But it's 
a request and I'm happy to oblige. Even if it only stops them for 1 
minute (which could be enough).


Regards,

John

--
/ Humanique
/ Webstrategie en ontwikkeling
/ http://www.humanique.com/

-
Humanique zoekt een ervaren Web-ontwikkelaar (PHP).
Bekijk de vacature op http://www.humanique.com/
-


Jeremy Cole wrote:

Hi John,

Your attachment for the php code got stripped somewhere.  Can you post 
it somewhere (http preferable)?  In either case it's going to result in 
a full table scan, so they are actually both a bad strategy long term, 
but they should in theory perform as you would expect, with 
encryption being slightly slower.


Have you tried with longer strings?

What is your customer's fear with having the data in plain text? 
Presumably in order to use this in your application, you will have the 
AES password stored in your application, and it will end up in logs 
(such as the slow query log) quite frequently.  I would think your data 
can be safer and your security more effective by setting some policies 
which are less intrusive into the actual workings of the data, such as 
encrypting backups and setting system-level policies.


Regards,

Jeremy

John Kraal wrote:

Dear you,

I've been working on encrypting some data for a customer. They want
their personal/sensitive information encrypted in the database, but they
want to be able to search it too, through the application. So we've been
thinking a bit, and just started trying and benchmarking some solutions
we thought up.

The next one really got my attention, I created a table with 4 fields:

1. id (primary/auto_increment, not really interesting)
2. field, with encrypted data
3. md5sum (it has no special use, we benched it though.)
4. line, always containing three words (the same three as encrypted)

When we started querying the table for random words (from lipsum.com),
it seems that searching in the encrypted fields was _lots_ faster.
Results below:

1.000 queries per field:

~$ php -q searchtest.php
Control test (plain/text LIKE %..%):1.409699s
Decrypt test (AES_DECRYPT() LIKE %..%): 1.226069s
done

1.000.000 queries per field:

~$ php -q searchtest.php
Control test (plain/text LIKE %..%):155.059671s
Decrypt test (AES_DECRYPT() LIKE %..%): 137.003216s
done

Actually, the only thing I could think of to say was: "Well, at least 
it's consistent."


I've attached all the files I used for this test. Edit db.inc.php (add 
some more lipsum if you want), execute fill.php, and then have fun 
with bench.php.


Does any of you know why this is, how come, etc? I'm just very curious.

Regards,

John Kraal










--
high performance mysql consulting
www.provenscaling.com




Re: Query performance plain/text versus AES_DECRYPT(): LIKE %..%

2007-10-23 Thread Jeremy Cole

Hi John,

Your attachment for the php code got stripped somewhere.  Can you post 
it somewhere (http preferable)?  In either case it's going to result in 
a full table scan, so they are actually both a bad strategy long term, 
but they should in theory perform as you would expect, with 
encryption being slightly slower.


Have you tried with longer strings?

What is your customer's fear with having the data in plain text? 
Presumably in order to use this in your application, you will have the 
AES password stored in your application, and it will end up in logs 
(such as the slow query log) quite frequently.  I would think your data 
can be safer and your security more effective by setting some policies 
which are less intrusive into the actual workings of the data, such as 
encrypting backups and setting system-level policies.
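If exact-match search would satisfy the requirement, one commonly used compromise (not something proposed in this thread) is a keyed-hash "blind index" stored next to the ciphertext, so equality lookups can use a normal index without decrypting every row; LIKE '%..%' substring search still cannot be served this way.  A sketch, with hypothetical key handling:

```python
import hashlib
import hmac

# Assumption: a dedicated indexing key, kept separate from the AES key.
INDEX_KEY = b"separate-indexing-key"

def blind_index(term):
    """Deterministic keyed hash of a search term.  Stored in an indexed
    column beside the encrypted field, it lets the application run
    WHERE idx = blind_index('term') instead of decrypting every row."""
    return hmac.new(INDEX_KEY, term.lower().encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

The trade-off: it only answers exact matches and reveals which rows share a term, so it is a policy decision of the kind discussed above, not free security.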


Regards,

Jeremy

John Kraal wrote:

Dear you,

I've been working on encrypting some data for a customer. They want
their personal/sensitive information encrypted in the database, but they
want to be able to search it too, through the application. So we've been
thinking a bit, and just started trying and benchmarking some solutions
we thought up.

The next one really got my attention, I created a table with 4 fields:

1. id (primary/auto_increment, not really interesting)
2. field, with encrypted data
3. md5sum (it has no special use, we benched it though.)
4. line, always containing three words (the same three as encrypted)

When we started querying the table for random words (from lipsum.com),
it seems that searching in the encrypted fields was _lots_ faster.
Results below:

1.000 queries per field:

~$ php -q searchtest.php
Control test (plain/text LIKE %..%):1.409699s
Decrypt test (AES_DECRYPT() LIKE %..%): 1.226069s
done

1.000.000 queries per field:

~$ php -q searchtest.php
Control test (plain/text LIKE %..%):155.059671s
Decrypt test (AES_DECRYPT() LIKE %..%): 137.003216s
done

Actually, the only thing I could think of to say was: "Well, at least 
it's consistent."


I've attached all the files I used for this test. Edit db.inc.php (add 
some more lipsum if you want), execute fill.php, and then have fun with 
bench.php.


Does any of you know why this is, how come, etc? I'm just very curious.

Regards,

John Kraal








--
high performance mysql consulting
www.provenscaling.com




Re: Could drbd randomly flip bits? Was: Database page corruption on disk occurring during mysqldump on a fresh database and Was: Spontaneous development of supremely large files on

2007-09-17 Thread Jeremy Cole

Hi Maurice,

If you're running into corruption both in ext3 metadata and in MySQL 
data, it is certainly not the fault of MySQL, as you're likely aware.


There are absolutely many places where corruption could occur between 
MySQL and the physical bits on disk.  The corruption you're seeing does 
not appear to be just flipped bits, although I guess any corruption 
could be called that.  If you compare the two i_sizes you see from below:


 Inode 16257874, i_size is 18014398562775391, should be 53297152

53297152:

0000 0000 0000 0000 0000 0000 0000 0000
0000 0011 0010 1101 0100 0000 0000 0000

18014398562775391:

0000 0000 0100 0000 0000 0000 0000 0000
0000 0011 0010 1101 0011 0001 0101 1111

Differences: 10 x 0->1, 1 x 1->0.

 Inode 2121855, i_size is 35184386120704, should be 14032896.

14032896:

0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 1101 0110 0010 0000 0000 0000

35184386120704:

0000 0000 0000 0000 0010 0000 0000 0000
0000 0000 1101 0110 0001 1100 0000 0000

Differences: 4 x 0->1, 1 x 1->0.

You can see that there are in fact many bits flipped in each.  I would 
suspect higher-level corruption than the actual disks (typical single 
bit or double bit flips, and generally 1->0 only) but lower than the OS 
(typical entire page corruptions of 4k-64k).
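The flip counts above are mechanical to reproduce; a small sketch using the two i_size pairs from the fsck output:

```python
def bit_flips(correct, corrupted):
    """Count bits that went 0->1 and 1->0 between two integers."""
    changed = correct ^ corrupted
    zero_to_one = bin(changed & corrupted).count("1")  # set only in corrupted
    one_to_zero = bin(changed & correct).count("1")    # set only in correct
    return zero_to_one, one_to_zero

print(bit_flips(53297152, 18014398562775391))  # -> (10, 1)
print(bit_flips(14032896, 35184386120704))     # -> (4, 1)
```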


That leaves network, SATA controller, various system buses, and possibly 
stupid errors in DRBD (although I'd call this unlikely).


Do note that data on e.g. the PCI bus is not protected by any sort of 
checksum.  I've seen this cause corruption problems with PCI risers and 
RAID cards.  Are you using a PCI riser card?  Note that LSI does *not* 
certify their cards to be used on risers if you are custom building a 
machine.


Regards,

Jeremy

Maurice Volaski wrote:
In using drbd 8.0.5 recently, I have come across at least two 
instances where a bit on disk apparently flipped spontaneously in the 
ext3 metadata on volumes running on top of drbd.


Also, I have been seeing regular corruption of a mysql database, 
which runs on top of drbd, and when I reported this as a bug since I 
also recently upgraded mysql versions, they question whether drbd 
could be responsible!


All the volumes have been fscked recently and there were no reported 
errors. And, of course, there have been no errors reported from the 
underlying hardware.


I have since upgraded to 8.0.6, but it's too early to say whether 
there is a change.


I'm also seeing the backup server complain of files not comparing, 
though this may be a separate problem on the backup server.




The ext-3  bit flipping:
At 12:00 PM -0400 9/11/07, [EMAIL PROTECTED] wrote:

I have come across two files, essentially untouched in years, on two
different ext3 filesystems on the same server, Gentoo AMD 64-bit with
kernel 2.6.22 and fsck version 1.40.2 currently, spontaneously
becoming supremely large:

Filesystem one
Inode 16257874, i_size is 18014398562775391, should be 53297152

Filesystem two
Inode 2121855, i_size is 35184386120704, should be 14032896.

Both were discovered during an ordinary backup operation (via EMC
Insiginia's Retrospect Linux client).

The backup runs daily and so one day, one file must have grown
spontaneously to this size and then on another day, it happened to
the second file, which is on a second filesystem. The backup attempt
generated repeated errors:

EXT3-fs warning (device dm-2): ext3_block_to_path: block > big

Both filesystems are running on different logical volumes, but
underlying them are drbd network RAID devices, and underlying those
is a RAID 6-based SATA disk array.




The answer to the bug report regarding the mysql data corruption, 
which blames drbd!

http://bugs.mysql.com/?id=31038

 Updated by:  Heikki Tuuri
 Reported by: Maurice Volaski
 Category:Server: InnoDB
 Severity:S2 (Serious)
 Status:  Open
 Version: 5.0.48
 OS:  Linux
 OS Details:  Gentoo
 Tags:database page corruption locking up corrupt doublewrite

[17 Sep 18:49] Heikki Tuuri

Maurice, my first guess is to suspect the RAID-1 driver.



My initial report of mysql data corruption:
A 64-bit Gentoo Linux box had just been upgraded from MySQL 4.1 
to 5.0.44 fresh (by dumping in 4.1 and restoring in 5.0.44) and 
almost immediately after that, during which time the database was 
not used, a crash occurred during a scripted mysqldump. So I 
restored, and days later, it happened again. The crash details seem 
to be trying to suggest some other aspect of the operating system, 
even the memory or disk, is flipping a bit. Or could I be running 
into a bug in this version of MySQL?


Here's the output of the crash
---
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 533.
InnoDB: You may have to recover from a backup.
070827  3:10:04  InnoDB: Page dump in ascii and hex (16384 bytes):
 len 16384; hex

[dump itself deleted for brevity] 

Re: Could drbd randomly flip bits? Was: Database page corruption on disk occurring during mysqldump on a fresh database and Was: Spontaneous development of supremely large files on

2007-09-17 Thread Jeremy Cole

Hi Maurice,

Do you mean a Serially-Attached SCSI aka SAS controller, I assume?

Is this a custom build machine or a vendor integrated one?

Regards,

Jeremy

Maurice Volaski wrote:

On Sep 17, 2007  13:31 -0400, Maurice Volaski wrote:

 In using drbd 8.0.5 recently, I have come across at least two
 instances where a bit on disk apparently flipped spontaneously in the
 ext3 metadata on volumes running on top of drbd.

 Also, I have been seeing regular corruption of a mysql database,
 which runs on top of drbd, and when I reported this as a bug since I
 also recently upgraded mysql versions, they question whether drbd
 could be responsible!

Seems unlikely - more likely to be RAM or similar (would include cable
for PATA/SCSI but that is less likely an issue for SATA).



Shouldn't that trip the ECC and produce machine check exceptions, and 
unrecoverable ones at that?


The disks are part of hardware RAID with a SATA II cableless 
backplane and SATA-SCSI controller, so there is a SCSI cable and SCSI 
HBA (LSI Logic).


--
high performance mysql consulting
www.provenscaling.com




Re: Could drbd randomly flip bits? Was: Database page corruption on disk occurring during mysqldump on a fresh database and Was: Spontaneous development of supremely large files

2007-09-17 Thread Jeremy Cole

Hi Maurice,

If you're running into corruption both in ext3 metadata and in MySQL 
data, it is certainly not the fault of MySQL, as you're likely aware.


I am hoping they are not related. The problems with MySQL surfaced 
almost immediately after upgrading to 5.0.x.


It's possible that they are not related, but it could even be 5.0 
specific but still not a MySQL bug.  I.e. MySQL 5.0 could be doing 
something that steps on the bug and causes it to occur.  But, it's hard 
to say anything for sure.  Nonetheless, I generally don't bother 
worrying about the possibility of MySQL bugs until I'm sure that the OS 
and hardware are stable.


You can see that there are in fact many bits flipped in each.  I 
would suspect higher-level corruption than


I initially thought this as well, but the explanation on the ext3 
mailing list is that it really is just a lone flipped bit in both 
instances. The other differences are due to fsck padding out the 
block when it guesses what the correct size is.


Interesting.  Can you forward that mail to me personally, or summarize 
for the list?  I'd be interested to read the explanation.


Do note that data on e.g. the PCI bus is not protected by any sort 
of checksum.  I've seen this cause corruption problems with PCI 
risers and RAID cards.  Are you using a PCI riser card?  Note that 
LSI does *not* certify their cards to be used on risers if you are 
custom building a machine.


Yes, there is a riser card. Wouldn't this imply that LSI is saying 
you can't use a 1U or a 2U box?


Kind of.  Presumably you would be buying a vendor integrated solution 
where they have certified that the riser card and RAID card are 
compatible.  Presumably.  You'll also notice that most vendors are 
moving to controllers that aren't PCI{,-E,-X} slot based, and rather 
connect directly to a low-profile integrated slot.  This removes a few 
variables.  (And frees up some space.)


It's kind of scary there is no end-to-end parity implemented 
somewhere along the whole data path to prevent this. It sort of 
defeats the point of RAID 6 and ECC.


I agree, it's pretty damn scary.  You can read about the story and the 
ensuing discussion here:


http://jcole.us/blog/archives/2006/09/04/on-1u-cases-pci-risers-and-lsi-megaraid/


How did you determine this was the cause?


Isolating lots of variables.  The customer in question had a workload 
that could reproduce the problem reliably, although not in the same 
place or same time to be able to track things down, and not under debug 
mode (which likely slowed things down enough to not cause trouble).


I finally suggested that they isolate the riser card as a variable by 
plugging it directly into the slot.  Since it was a 1U machine, it 
required taking the metal frame off the card and leaving the case open 
(and hanging out into the datacenter aisle).  It could then be shown 
that with riser, corruption always occurred, and without the riser, 
corruption never occurred.


Obviously, running the machines with cases open and cards plugged in 
directly was not an option, so the only other possible option was 
chosen: move to all new hardware with integrated RAID.  (HP and their 
integrated SmartArray/cciss controller was chosen as a vendor in this case.)



Do you mean a Serially-Attached SCSI aka SAS controller, I assume?


No, it's SATA to SCSI.


Interesting.  I hadn't heard of such a thing until I just looked it up. 
 But in any case that adds yet another variable (and a fairly uncommon 
one) to the mix.


Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: The current version is 5.0.48, no?

2007-09-14 Thread Jeremy Cole

Hi Maurice,

Yep, it's very very confusing these days.  Unfortunately for all of us, 
MySQL's sales organization seems happy with the situation.  Blame them.


And, yes, you're correct, compared to MySQL Enterprise, 5.0.45 is indeed 
out of date now.  The releases are sequential, but the community one is 
only released every so often.  If you don't have a MySQL Enterprise 
login and you want to get the sources/binaries, you can get them from 
here (under the GPL):


http://mirror.provenscaling.com/mysql/enterprise/

Regards,

Jeremy

Maurice Volaski wrote:
Thank you for this info, but it just seems to make a simple question a 
matter of confusion.


It tells us that MySQL is being marketed under two editions, but nowhere
does it say that the current release of each is matched bugfix for bugfix
and the version difference is just arithmetic.

Since community's 5.0.45 came out a few months ago and enterprise's 5.0.48
came out just a few weeks ago, and from the look of the release 
notes, I want to believe that the community version is indeed out of date.




In the last episode (Sep 13), Maurice Volaski said:

 I just learned that the current version of MySQL is 5.0.48, described here
 http://dev.mysql.com/doc/refman/5.0/en/releasenotes-es-5-0.html and
 available from
 http://download.dorsalsource.org/files/b/5/165/mysql-5.0.48.tar.gz

The current Mysql Enterprise version is 5.0.48.  The current Mysql
Community version is 5.0.45.

Enterprise release notes:
http://dev.mysql.com/doc/refman/5.0/en/releasenotes-es-5-0.html

Community release notes:
http://dev.mysql.com/doc/refman/5.0/en/releasenotes-cs.html

Comparison:
http://www.mysql.com/products/which-edition.html




--
high performance mysql consulting
www.provenscaling.com




Re: Is bad hardware confusing MySQL and InnoDB?

2007-09-14 Thread Jeremy Cole

Hi Maurice,

You say the MySQL data wasn't on the stuck volume, but were the InnoDB logs?

What is the disk configuration?

It sounds to me like bad hardware/software, which, unfortunately, MySQL 
and InnoDB cannot protect you from...


Regards,

Jeremy

Maurice Volaski wrote:
Some processes on a server (64-bit Gentoo Linux with MySQL 5.0.44), 
which seemed to be related to I/O on LVM volumes hung and it was 
necessary to force reboot it. The mysql data was not on an LVM volume 
though it still may have been affected since over time, more and more 
processes became unresponsive. While fsck recovered the journal and 
detected no problems on any volume, at least one database was not 
spared:


070911 23:40:34  InnoDB: Page checksum 3958948568, 
prior-to-4.0.14-form checksum 2746081740
InnoDB: stored checksum 2722580120, prior-to-4.0.14-form stored 
checksum 2746081740

InnoDB: Page lsn 0 491535, low 4 bytes of lsn at page end 491535
InnoDB: Page number (if stored to page already) 199,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
InnoDB: Page may be an index page where index id is 0 17
InnoDB: Also the page in the doublewrite buffer is corrupt.
InnoDB: Cannot continue operation.

Is it wrong to expect InnoDB to have avoided this or does it suggest 
that it couldn't have, i.e., a hardware defect?


--
high performance mysql consulting
www.provenscaling.com




Re: Conflicting server IDs on slaves

2007-09-14 Thread Jeremy Cole

Hi David,

This sounds strange, and should NOT occur because of a server_id conflict.

Regards,

Jeremy

David Schneider-Joseph wrote:

Thank you.

We had a situation recently where two slaves had a conflicting server  
ID for several minutes, and shortly thereafter the master started  
reporting errors which were indicative of data corruption while  
executing queries.  This happened as the CPU usage climbed very  
rapidly, and ultimately the entire master machine crashed with an out  
of memory error.


Does this sound like something that could have been caused by a short- 
lived server ID conflict?  All servers involved were running 5.0.27.


Your answers would be most helpful!

Thanks,
David

On Sep 13, 2007, at 7:58 AM, Shawn Green wrote:


Hello David,

David Schneider-Joseph wrote:

Hi all,
What do you know about the effect of conflicting slave server IDs  
on the master in general?  And specifically, are you aware of any  
issues with MySQL 5.0.27?

Your help is very much appreciated.
Thanks!
David
Repeating the same Server ID in your slave servers is BAD. It has  
caused minor problems like duplicate entries on the slaves and  
major problems like over a TB of error logs in just a few minutes  
(because of failure to connect errors).  There are several very  
good reasons why *each and every* server in a replication setup  
needs its own, unique server_id. Many of them are discussed in the  
chapter on Replication:

http://dev.mysql.com/doc/refman/5.0/en/replication.html

To see what has been fixed in MySQL since 5.0.27 was released,  
please review the change logs documented here:

http://dev.mysql.com/doc/refman/5.0/en/releasenotes-cs-5-0.html

For a list of all other bugs (active and inactive) you are invited  
to research the bugs database (it is a public forum) at:

http://bugs.mysql.com

--
Shawn Green, Support Engineer
MySQL Inc., USA, www.mysql.com
Office: Blountville, TN
__  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /
  / /|_/ / // /\ \/ /_/ / /__
 /_/  /_/\_, /___/\___\_\___/
___/
 Join the Quality Contribution Program Today!
 http://dev.mysql.com/qualitycontribution.html






--
high performance mysql consulting
www.provenscaling.com




Re: excessive time spent in statistics status

2007-09-14 Thread Jeremy Cole

Hi Shawn, Lucio,


SELECT STRAIGHT_JOIN 
FROM ...
LEFT JOIN ...
WHERE ...
...


Just to correct a point here... if a query uses only LEFT JOIN or RIGHT 
JOIN, the join order is fixed by the query's order itself, so using 
STRAIGHT_JOIN should have no effect whatsoever.  Equally important, 
since the join order is fixed when you use LEFT JOIN, you *must* list 
the joins in the correct order when writing the query, otherwise you 
will see very poor performance.


MySQL's optimizer cannot reorder the joins because it has the potential 
to change the result of the query.


Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: DIAGNOSTIC MYSQL

2007-09-14 Thread Jeremy Cole

Hi Krishna,

No, nothing like what you're asking for exists free or open source. 
Look into Groundwork and Nagios: they will not do any really fancy 
monitoring directly out of the box, but with a bit of configuration 
you can get most of what you want.


Regards,

Jeremy

Krishna Chandra Prajapati wrote:

Hi Bullington,

MySQL Monitoring Service is a paid one. Is there any open source, free
software which can monitor MySQL health and tell what changes need to be
done on system variables?

Regards,
Prajapati

On 9/13/07, J.R. Bullington [EMAIL PROTECTED] wrote:

You can also try the MySQL Monitoring Service. It's a great tool that
looks at your server's health and tells you if any changes need to be made.

It's especially helpful in tuning your server variables, query cache, and
index usage. It also sends mail if you need / want it.


From: Krishna Chandra Prajapati [EMAIL PROTECTED]
Sent: Thursday, September 13, 2007 10:25 AM
To: MySql mysql@lists.mysql.com
Subject: DIAGNOSTIC MYSQL

Hi All,

Is there is any script or tool that can diagnostic the mysql. It can check
for the errors, give suggestion, give reports and send emails.
It can check the mysql system variables and give suggestion.

Thanks,
Prajapati








--
high performance mysql consulting
www.provenscaling.com




Re: bitwise logic

2007-09-10 Thread Jeremy Cole

Hi,

You can actually unpack them using some fairly cryptic stuff.  I would 
only use this to unpack them once to re-store them as INT.  Here's an 
example:


CREATE TABLE ip (packed CHAR(4));
INSERT INTO ip (packed) VALUES (0x0B16212C);

mysql> SELECT * FROM ip;
+--------+
| packed |
+--------+
|   !,   |
+--------+
1 row in set (0.00 sec)

mysql> SELECT
    ->   INET_NTOA(
    ->     (ord(substring(packed, 1, 1)) << 24) +
    ->     (ord(substring(packed, 2, 2)) << 16) +
    ->     (ord(substring(packed, 3, 3)) <<  8) +
    ->     (ord(substring(packed, 4, 4)))
    ->   ) AS unpacked
    -> FROM ip;
+-------------+
| unpacked    |
+-------------+
| 11.22.33.44 |
+-------------+
1 row in set (0.00 sec)
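
The unpack logic above translates directly to other languages; here is a minimal Python sketch of the same packing, unpacking, and `ip & mask = network` subnet test (illustrative only, not from the original thread):

```python
# Sketch: packing an IPv4 address into 4 bytes and back, mirroring
# the (ord(...) << 24) + ... expression in the query above, plus the
# subnet test discussed in this thread.

def pack_ip(dotted):
    # "11.22.33.44" -> b'\x0b\x16\x21\x2c'
    return bytes(int(octet) for octet in dotted.split("."))

def unpack_ip(packed):
    # Combine the 4 bytes big-endian, then format as a dotted quad.
    n = (packed[0] << 24) + (packed[1] << 16) + (packed[2] << 8) + packed[3]
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def in_subnet(ip, network, mask):
    # The thread's subnet test, done on the integer form: ip & mask == network
    to_int = lambda d: int.from_bytes(pack_ip(d), "big")
    return (to_int(ip) & to_int(mask)) == to_int(network)

print(unpack_ip(bytes.fromhex("0b16212c")))                      # 11.22.33.44
print(in_subnet("11.22.33.44", "11.22.33.0", "255.255.255.0"))   # True
```

As in the SQL version, the point is that once the address is an integer, subnet membership is a single AND plus a comparison.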

Regards,

Jeremy

Baron Schwartz wrote:
I think Gerald was suggesting you convert the IP's into integers and 
then do bitwise stuff on them.  I don't know of a way to do what you're 
asking.


Baron

Wagner, Chris (GEAE, CBTS) wrote:

Those functions concern dotted quad IP addresses, not packed binaries.

Anybody know of some obscure MySQL functions to do bit logic on strings
or get MySQL to recognize a char sequence as an integer?

Gerald L. Clark wrote:

Wagner, Chris (GEAE, CBTS) wrote:

Hi.  I've got IP addresses stored in the database as packed binaries.
i.e. binary(4).  I put them in there that way so that I could do bitwise
logic on them to do subnet searches.  e.g. ...WHERE `ip` & 'mask' =
'network'.  Only it turns out that as far as I can tell MySQL can't do
bit logic on strings, only integers.  I came up with an onerous SQL call
that eats the 4 bytes and makes an integer out of them but there's got
to be a better way.  The worst case is that I redo the database to have
all the IP's as integers.

Anybody know of a way to make MySQL either do the bit logic on the
string or convert/cast the string as an integer?  Thanks.



Try INET_ATON() and INET_NTOA().

--
Gerald L. Clark
Supplier Systems Corporation




--
high performance mysql consulting
www.provenscaling.com




Re: Really strange index/speed issues

2007-09-10 Thread Jeremy Cole

Hi Chris,

Chris Hemmings wrote:

Hello,

I have a table, currently holding 128,978 rows...  In this table, I have a 
section column (int) and a price column (int).  Every row has a section of 1 
currently, every row has a price, ranging from 1 to 10,000.


I have an index on both columns separately.

Have a look at these two queries, can someone tell me why there is such a 
difference in speed of execution?  (Note difference in price qualifier)




SELECT *
FROM `table1`
WHERE price > 0
AND section =1
ORDER BY price
LIMIT 0 , 30

Showing rows 0 - 29 (128,978 total, Query took 0.9462 sec)

Explain output: 1 SIMPLE table1 ALL section,price NULL NULL NULL 96734 Using 
where; Using filesort




SELECT *
FROM `table1`
WHERE price > 1
AND section =1
ORDER BY price
LIMIT 0 , 30


Showing rows 0 - 29 (128,949 total, Query took 0.0008 sec)

Explain output: 1 SIMPLE table1 ALL section,price NULL NULL NULL 96734 Using 
where; Using filesort




Other info:

Query cacheing = off
MySQL version = 5.0.32
OS  = Debian Sarge

Sure, the second query returns 29 fewer records than the first, but should 
that make the difference in time?


Hope you can shed some light onto this :-)


Did you run both queries multiple times and average the time taken? 
Otherwise, it seems likely that in one instance the data was cached, and 
in the other it was not.  The query_cache being off does not affect 
caching in this sense.


Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: Integrity on large sites

2007-05-25 Thread Jeremy Cole

Hi Naz,

Just to throw out (plug) an ongoing project:

  http://www.hivedb.org/

From the site:


HiveDB is an open source framework for horizontally partitioning MySQL 
systems. Building scalable and high performance MySQL-backed systems 
requires a good deal of expertise in designing the system and 
implementing the code. One of the main strategies for scaling MySQL is 
by partitioning your data across many servers. While it is not difficult 
to accomplish this, it is difficult to do it in such a way that the 
system is easily maintained and transparent to the developer.



We've been working on HiveDB precisely to avoid the large amount of 
(quite specialized) code in the application.
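
The kind of routing logic a framework like this keeps out of application code can be sketched in a few lines. The following Python fragment is purely illustrative: the host names are made up, and a fixed-modulo scheme is far simpler than HiveDB's actual design; it only shows the basic idea such a framework hides:

```python
# Illustrative sketch of horizontal partitioning (sharding): route
# each key to one of N MySQL servers with a stable hash.  NOT how
# HiveDB itself is implemented -- just the idea being discussed.

import zlib

SHARDS = ["db1.example.com", "db2.example.com", "db3.example.com"]

def shard_for(key):
    # crc32 is stable across processes and restarts (unlike Python's
    # hash()), so a given key always routes to the same server.
    return SHARDS[zlib.crc32(str(key).encode("utf-8")) % len(SHARDS)]

print(shard_for(10021))
print(shard_for(10021) == shard_for(10021))   # True: deterministic routing
```

Concentrating this mapping in one place is exactly what keeps the rest of the application agnostic to how the database was deployed.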


Regards,

Jeremy

Naz Gassiep wrote:

Wow.
The problem with sharding I have is the large amount of code
required in the app to make it work. IMHO the app should be agnostic to
the underlying database system (by that I don't mean the DB in use such
as MySQL or whatever or the schema, I mean the way the DB has been
deployed) so that changes can be made to it without having to worry
about impacting app code. This is one of my fundamental design imperatives.

Then again, I'm not a regular MySQL user so I don't know what is and
is not the norm in the MySQL world.

- Naz.

Evaldas Imbrasas wrote:

You certainly have a right to disagree, but pretty much every
scalability talk at the MySQL conference a few weeks ago was focused
on data partitioning and sharding. And those talks very given by folks
working for some of the most popular (top 100) websites in the world.
It certainly looks like data partitioning is the way to go in the
MySQL world at this point, probably at least until production-ready
and feature-full MySQL Cluster is out. And even then large percentage
of dotcom companies would use data partitioning instead since it can
be implemented on commodity hardware.

Once again, we're talking *really* big websites using MySQL (not
Oracle or SQL Server or whatever) here. Most websites won't ever need
to partition their production databases, and different RDMS might have
different approaches for scalability.


On 5/24/07, Naz Gassiep [EMAIL PROTECTED] wrote:

Data partitioning? Sorry, I disagree that partitioning a table into more
and more servers is the way to scale properly. Perhaps putting
databases' tables onto different servers with different hardware
designed to meet different usage patterns is a good idea, but data 
partitioning was a very short lived idea in the world of databases and
I'm glad that as an idea it is dying in practice.




--
high performance mysql consulting
www.provenscaling.com




Re: best filesystem for mysql

2007-05-02 Thread Jeremy Cole

Hi Jeff,

There isn't really a clear winner in any case, but the tests done in the 
article linked to are highly suspect.  It would be much more interesting 
to see real tests done with real hardware...


Nonetheless, the usual answer I would have is to lean towards ease of 
administration and good understanding, rather than getting 1% better 
performance.  That usually means ext3.


Regards,

Jeremy

Jeff Pang wrote:

hello list,

I saw this article for the suitable filesystem for mysql.
http://www.bullopensource.org/ext4/sqlbench/


From what I saw,the best filesystem for MyISAM is ext3,the best filesystem for 
InnoDB is Reiserfs.

How about your thought on it?Thanks.

50€ AMAZON-Einkaufsgutschein bei Bestellung von Arcor-DSL:
Viel oder wenig? Schnell oder langsam? Unbegrenzt surfen + telefonieren
ohne Zeit- und Volumenbegrenzung? DAS TOP ANGEBOT JETZT bei Arcor: günstig
und schnell mit DSL - das All-Inclusive-Paket für clevere Doppel-Sparer,
nur  39,85 €  inkl. DSL- und ISDN-Grundgebühr!
http://www.arcor.de/rd/emf-dsl-2






Re: IS NULL Question

2007-04-30 Thread Jeremy Cole

Hi John,

Are you sure they are actually NULL and not 'NULL' (i.e. the string 'NULL')?

Try this:

SELECT first, last, ISNULL(suffix), LENGTH(suffix) FROM persons LIMIT 5;

Regards,

Jeremy

John Kebbel wrote:

I'm having problems understanding NULL. I grasp what a NULL value is,
but I can't get NULL to perform correctly. For instance, if I do a
Select statement, I see that I have columns with a NULL value.

select first, last, suffix  from persons LIMIT 5;
+-----------+----------+--------+
| first     | last     | suffix |
+-----------+----------+--------+
| Benjamin  | Page     | NULL   |
| Jonathan  | Watson   | NULL   |
| Jose      | Thorson  | NULL   |
| Alejandro | Nickels  | NULL   |
| Griselda  | Richards | NULL   |
+-----------+----------+--------+
5 rows in set (0.01 sec)

Logically, it seems that a Select statement should find these five plus
any other NULL values in the suffix column. However, such a select
statment returns an empty set.

mysql> select first, last, suffix from persons where suffix IS NULL;
Empty set (0.00 sec)

Does anyone see what I'm doing wrong? (Thanks in advance for any help.)






--
high performance mysql consulting
www.provenscaling.com




Re: Why doesn't the InnoDB count() match table status?

2007-03-26 Thread Jeremy Cole

Hi Daevid,

Ugh.  How about not going berserk on the public mailing list?

We can understand that you're upset that you didn't read the manual 
before starting a MyISAM to InnoDB conversion.  You didn't do your 
research and now you're being hit by a very simple (and not really all 
that unexpected) difference between storage engines.



You're about 5 years too late for this converation, but I recall it


Really? People have just happily accepted this absurd limitation for _five_
years? Wow.


Yes.  And it's not likely to change for a long time, either.


having to do with the fact that when you're on a table that supports
transactions, you don't know exactly how many records a particular
session has available to it unless you actually go and count them. 
Depending on your settings, you may or may not see rows inserted by

other uncommitted sessions, and they may disappear if the other
sessions roll their transactions back.


You know how many are *IN* the table on the disk at that particular moment.
That's all that needs to be shown!?
So if someone isn't using transactions, then that number will be accurate.
This isn't rocket science.


This actually has a lot less to do with transactions, and more to do 
with multiversioning.  The number of rows can and will be different 
within different sessions and there is no trivial way to keep a simple 
count up to date efficiently.


And, if you are using a transactional storage engine, there is no such 
thing as not using transactions.  Even if you don't use BEGIN/COMMIT 
there are still implicit transactions for each statement.  That's the 
whole point.


Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: Making a prefixed install / hard-wired global configuration files?

2007-03-25 Thread Jeremy Cole

Hi Nick,

This should do it:

  --sysconfdir=DIR   read-only single-machine data [PREFIX/etc]

Regards,

Jeremy

Nick Hill wrote:
Is there an easy way to build MySQL client library, server and start-up 
script so that /etc/my.cnf is ignored?


I need to build client and server so that it in no way interacts with 
the global system install, and build programs against that client 
library which also don't interact with the global system install.


My attempts so far show that I need to make hacks to get it to work, 
where I expected something like ./configure --prefix=/my/prefix 
--global_config=blah would do the trick, but that doesn't seem to be the case.


I notice Linux distros often have their configs in places other than 
/etc/my.cnf.  Have I overlooked something?




--
high performance mysql consulting
www.provenscaling.com




Want a Free Ride to MySQL Conference and Expo 2007?

2007-03-15 Thread Jeremy Cole

Hello All,

This year, Proven Scaling is sponsoring three people to attend the MySQL 
Conference and Expo 2007 in sunny Santa Clara, California, April 23-26.


The MySQL Conference and Expo is a great place to meet the MySQL 
developers, other MySQL users, and many interesting people. It is the 
premier MySQL event of the year, and has been getting bigger and bigger 
each year. If you really want to go, but you haven't been able to afford 
it for any reason, Free Ride is for you.


From your submissions, a panel will choose three people to receive:

  * Round-trip airfare from their location to SJC, SFO, or OAK airport
  * Transportation from the airport to hotel/conference
  * Hotel accommodations
  * A meal stipend
  * A full conference pass, provided by MySQL AB (Thanks, MySQL!)

While all entries will be considered, we will give preference to those 
who submit complete answers, demonstrate creativity and drive, can get 
to Northern California for a practical cost, and who would not otherwise 
be able to make it to the conference. Submission is open to everyone 
worldwide.


Entries will be accepted until Midnight PDT (GMT-7), March 23, 2007, and 
the winners will be announced shortly thereafter.


You can enter by visiting:

  http://www.provenscaling.com/freeride/

Good luck, and we hope to see you at the conference!

Regards,

Jeremy Cole
Owner / MySQL Geek
Proven Scaling LLC

--
high performance mysql consulting
www.provenscaling.com




MySQL Camp: Proven Scaling offering Free Ride

2006-10-18 Thread Jeremy Cole

Hi All,

As you may know, MySQL Camp, a gathering of the best minds in MySQL, is 
coming up soon, in just over three weeks.  It will be held at Google HQ, 
in Mountain View, California, November 10-12.  You can find out more 
information about the conference here:


http://mysqlcamp.org/

Proven Scaling would like to sponsor airfare and hotel for one person to 
attend who could not otherwise make it.  Read more about it here:


http://jcole.us/blog/archives/2006/10/18/want-a-free-ride-to-mysql-camp/

If you're interested in attending on our dime, follow the directions in 
that post, and send an email to [EMAIL PROTECTED] containing:


* Your full, real name
* Your blog or website, if any
* Your company, school, or organization, if any
* Your location, and closest airport
* Any contributions you’ve made to MySQL or the MySQL community in 
the form of code, bug fixes, time, or otherwise
* Why you feel you are the best or most qualified person for Proven 
Scaling to sponsor to MySQL Camp
    * Anything else you'd like us to know about yourself as we consider 
who to sponsor


We will accept proposals until midnight Pacific Daylight Time (GMT-8), 
Friday, October 20, 2006.  That's only a couple of days away!


Good luck!

Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: MySQL service startup time

2006-07-05 Thread Jeremy Cole

Hi Rob,


I have an application self-installer program which also installs
MySQL and sets it up. This is all on Windows.

I have a problem in that when the installer runs 'net start MySQL',
it returns immediately but the MySQL daemon is not ready for
connections immediately. As the next step in the installation is to
create the application database, I need to wait until I can connect.

What's the best way to achieve this? At the moment I have a rather
crude 5 second 'sleep', but that isn't always long enough. Any ideas?


The only way really to handle this is to loop and delay until MySQL
responds to a simple ping such as "SELECT 1".
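
That loop-and-delay approach can be sketched generically; in this illustrative Python fragment, `ping` stands in for "connect and run SELECT 1", and the names and timeouts are assumptions:

```python
# Sketch of "loop and delay until the server answers": retry a ping
# with a bounded total wait instead of one fixed sleep.  `ping` is a
# callable standing in for "connect and run SELECT 1"; it should
# return True once the server accepts connections.

import time

def wait_until_ready(ping, timeout=60.0, interval=0.5):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if ping():
            return True
        time.sleep(interval)
    return False

# Demo with a fake server that becomes ready on the third attempt:
attempts = {"n": 0}
def fake_ping():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until_ready(fake_ping, timeout=5.0, interval=0.01))   # True
```

The bounded deadline is the important part: the installer still fails promptly if the daemon never comes up, instead of hanging forever or guessing a fixed sleep.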

Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: Newbie - CREATE VIEW Question

2006-07-04 Thread Jeremy Cole

Hi,


Say I have the following tables;

siteID,name
--
site1, XYZ
site2, RSQ

ID,site,data

1, site1, M
2, site2, Q
3, site2, Y
4, site1, P 


... etc.

And I want to create a view like this;


siteID,name,data
--
site1, XYZ, (M,P)
site2, RSQ, (Q,Y)

where all the related column data in the second table is placed in another
column. How can I do this? Is there a function that can group these values
into one variable or array?


Requires 4.1 or higher:

SELECT
  table1.siteID,
  table1.name,
  GROUP_CONCAT(table2.data SEPARATOR ',') AS all_data
FROM table1
JOIN table2 ON table1.siteID=table2.site
GROUP BY table1.siteID
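
For anyone stuck before 4.1, the same grouping is easy to do client-side. A Python sketch over the sample data from this thread:

```python
# Client-side equivalent of the GROUP_CONCAT query above, using the
# sample tables from this thread (useful pre-4.1, or as a sanity check).

table1 = {"site1": "XYZ", "site2": "RSQ"}                # siteID -> name
table2 = [(1, "site1", "M"), (2, "site2", "Q"),
          (3, "site2", "Y"), (4, "site1", "P")]          # (ID, site, data)

def grouped(sites, rows):
    out = {}
    for _id, site, data in rows:
        out.setdefault(site, []).append(data)
    # One (siteID, name, "d1,d2,...") row per site, like the desired view.
    return [(s, sites[s], ",".join(vals)) for s, vals in sorted(out.items())]

print(grouped(table1, table2))
# [('site1', 'XYZ', 'M,P'), ('site2', 'RSQ', 'Q,Y')]
```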

Regards,

Jeremy




Re: enum query results strange.

2006-07-04 Thread Jeremy Cole

Hi,


both queries would have the limit on them, so they would only return 10
rows. but yes, there are probably about 10x as many records with true than
with anything else.


If there are only six possible values, and one value occurs ten times 
as often as the other five values, that means it occurs more than 50% of 
the time.


As a general rule, an index on such a column will NOT be useful for 
selecting rows having the often-occurring value.  Such an index is only 
useful if you need to select *only* the rarely-occurring values.


What you CAN, perhaps do, though, is create a multi-column index 
instead, so that MySQL doesn't need to scan all rows and order them for 
you.  Your query was:



select *, id as vid, user_id as uid from video where (file_complete =
'true') order by undt desc limit 0,10; 


Creating an index on (file_complete, undt) should work nicely:

  ALTER TABLE video ADD INDEX (file_complete, undt);
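
Why that composite index avoids the filesort can be sketched with ordinary sorted data. This is an illustrative Python analogy (fake data, made-up proportions), not how InnoDB/MyISAM B-trees are implemented:

```python
# Illustrative only: why an index on (file_complete, undt) satisfies
# "WHERE file_complete='true' ORDER BY undt DESC LIMIT 10" without a
# filesort.  A composite index is kept sorted by (file_complete, undt),
# so matching rows are contiguous and already in undt order.

import random

random.seed(42)
rows = [("true" if random.random() < 0.9 else "false", n)
        for n in range(10_000)]               # (file_complete, undt)

# The "index": built once, kept sorted by (file_complete, undt)...
index = sorted(rows)

# ...so the query is just a backwards walk from the high end:
def newest_complete(limit=10):
    out = []
    for fc, undt in reversed(index):
        if fc != "true":                      # walked out of the 'true' range
            break
        out.append(undt)
        if len(out) == limit:
            break
    return out

print(newest_complete())                      # ten largest undt with 'true'
```

The per-query cost is just `limit` steps along the sorted structure, instead of filtering and sorting all rows every time.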

Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: Problems with: MySQL 5.0.21 - 64bit

2006-07-02 Thread Jeremy Cole

Hi Gabriel,


Yesterday MySQL died again... there is nothing in the log files... (be
it mysqld.log, .err, or /var/log/messages).

The hosting provider is running an application called 'big-brother'
and a lot of seg-faults from it appear in the logs. I don't know if
this is the cause... if this application is faulting mysql also...

The behaviour: the whole machine freezes... all that can be done is to
hit the RESET button on the machine !


If the whole machine freezes, this sounds like bad memory, bad cpu, or 
similar, rather than a MySQL problem.  Regardless of how poorly an 
application could be written, it should not be possible for it to freeze 
the machine.


Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: howto set mysql to readonly

2006-07-02 Thread Jeremy Cole

Hi Jehan,

OK so it seems to be a bad idea ... I was expecting that I missed a 
magic command like set readonly on all databases ...


I think you did:

  SET GLOBAL read_only=1;

This will keep all non-SUPER and non-replication users from writing to 
the database.


Regards,

Jeremy

--
high performance mysql consulting
www.provenscaling.com




Re: Importing 3Gb File

2006-06-09 Thread Jeremy Cole

Hi Ian,


I am trying to import a 3.2Gb sql dump file back into my sql server (4.1.12)
and im coming across the following error:

mysql: Out of memory (Needed 178723240 bytes)
mysql: Out of memory (Needed 178719144 bytes)


That error message comes from some single place trying to allocate 178MB 
at a single time.  Do you have large BLOBs in your data?


This error message means that mysqld is being denied memory by the OS, 
either because you are actually out of memory, or because your ulimit 
has been reached (more likely).


Check your ulimits for your system with ulimit -a, and adjust if 
necessary in the mysql.server or mysqld_safe script (those both run as 
root, so can adjust ulimits upwards).
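
The limit most often responsible can also be inspected from inside a process. A small, Unix-only Python sketch of reading the address-space limit (this mirrors what `ulimit -v` controls, not the full `ulimit -a` listing):

```python
# Sketch: checking the process address-space limit from the stdlib.
# A finite soft limit here can make one large allocation fail with
# "Out of memory (Needed N bytes)" even when the machine has free
# RAM.  Unix-only (the resource module).

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print(soft, hard)   # resource.RLIM_INFINITY (-1) means "unlimited"
```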


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Copy large amount of data from non-normalized to normalized table

2006-06-06 Thread Jeremy Cole

Hi Rob,


So each of the old rows splits into 30 new rows.


Really 180 rows, right?


The existing table has about 85000 rows, equating to over 15 million in the new 
structure.
Ways I have tried end up creating massive queries or just hogging the server 
for absolutely ages - what is the best way to perform this update?


Doing it in a single query is possible, but not really feasible.

How about 180 queries, generated by:

for part in `seq 0 5`; do
  for col in `seq 1 30`; do
    echo "INSERT INTO new_event_data (event_id, index, p)" \
         "SELECT event_id, (30*${part})+${col} as index, p${col}" \
         "FROM old_event_data;"
  done
done
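
The same generator in Python, for anyone not on a Unix shell. Table and column names follow the sketch above; note that `index` is a reserved word in MySQL, so it is backquoted in the generated SQL here:

```python
# Equivalent generator: 6 ranges x 30 columns = 180 INSERT ... SELECT
# statements.  `index` is a MySQL reserved word, hence the backquotes.

def make_queries(parts=6, cols=30):
    queries = []
    for part in range(parts):
        for col in range(1, cols + 1):
            queries.append(
                "INSERT INTO new_event_data (event_id, `index`, p) "
                f"SELECT event_id, {30 * part + col} AS `index`, p{col} "
                "FROM old_event_data;"
            )
    return queries

qs = make_queries()
print(len(qs))    # 180
print(qs[0])
```

Running the statements one at a time keeps each transaction small, which is exactly what avoids hogging the server for ages.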

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: procedure analyse() returns bad type?

2006-06-06 Thread Jeremy Cole

Hi Gaspar,


OK, thanks for the info. Seems like procedure analyse() has some other
bugs; for example, despite limiting the number of possible enum
elements to e.g. 16, it returns an enum with 256 elements.


By the way, I opened a bug for this issue:

  http://bugs.mysql.com/20305

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: procedure analyse() returns bad type?

2006-06-05 Thread Jeremy Cole

Hi Gaspar,


The following command returns recommended type FLOAT(3,6) for a
double(10,6) column:

mysql CAT -t -u catadmin -p -e \
  "select * from temp procedure analyse(16,8192)" > temp.log

Optimal_fieldtype = FLOAT(3,6) NOT NULL.

The trouble is that FLOAT(3,6) does not make sense, with FLOAT(M,D),
M  D+2 should hold.

Is procedure analyse(16,8192) old? Or am I doing something wrong?

I have MySQL 5.0.22 under FC3 Linux on AMD64bit.


Looks like a bug.  The code in question is:


  if (num_info.dval > -FLT_MAX && num_info.dval < FLT_MAX)
    sprintf(buff, "FLOAT(%d,%d)", num_info.integers,
            num_info.decimals);
  else
    sprintf(buff, "DOUBLE(%d,%d)", num_info.integers,
            num_info.decimals);



It should likely use num_info.integers+num_info.decimals for the M.

Regards,

Jeremy
--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: help with storing a large binary number as bigint

2006-06-02 Thread Jeremy Cole

Hi Byron,


This should be a relatively simple question, but after looking through
the manual for a while I didn't see how to do it. I am working in PHP
and need to store a large binary number in the database. When I try to
convert it to an int in PHP it's too big, so it converts it to a float and
I lose all precision, which makes it worthless to me. Is there a way
to let MySQL do the conversion, since the number I am working with is
48 bits and BIGINT can store 63 bits? I would need to be able to insert
and select it as a binary string.


Yes, you can do this.  Probably the best way to do this is to insert it 
as a hexadecimal number.


Is the binary string in big or little endian form?

If it's big endian, you can convert it directly to hex with bin2hex(), 
prefix it with 0x, and insert it as:


INSERT INTO mytable (x, y, ...) VALUES (0x123456, 0x654321);

If it's in little-endian format, you will need to do the byte-swapping 
before/after the bin2hex.  (In pure PHP, it might be easier after.)
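
The byte-order point can be sketched concretely. An illustrative Python fragment with an arbitrary 48-bit value (the `0x...` string is the literal form you would embed in the INSERT above):

```python
# Sketch of the endianness point: the same six bytes read big- vs
# little-endian, plus the hex-literal form for the INSERT above.

value_bytes = bytes.fromhex("0000deadbeef")      # arbitrary 48-bit value

big    = int.from_bytes(value_bytes, "big")      # 0xdeadbeef
little = int.from_bytes(value_bytes, "little")   # 0xefbeadde0000

# Big-endian data can go straight into a hex literal:
literal = "0x" + value_bytes.hex()
print(literal)                                   # 0x0000deadbeef

# Little-endian data needs a byte swap first:
swapped = value_bytes[::-1]
print(int.from_bytes(swapped, "big") == little)  # True
```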


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: question+comment : Re: set DEC as a column name : forbidden

2006-05-31 Thread Jeremy Cole

Hi Gilles,


I search in the manual with ERROR 1064, and then get the page :
http://dev.mysql.com/doc/refman/4.1/en/error-messages-server.html

Error: 1064 SQLSTATE: 42000 (ER_PARSE_ERROR)
Message: %s near '%s' at line %d

I cannot figure out what is wrong from this info, whereas error code 
1063 or 1065 are far more explicit.


Why the error code (1064) is not telling : wrong usage of reserved word
or something like this ? that could be linked to a page showing some 
exemple of what to do ?


If MySQL's parser could easily tell that you had used a reserved word in 
a place where it can't be used as such, they wouldn't need to be 
reserved words, would they? :)


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: select 9000000000000.10*19.80 wrong result 178200000000001.97

2006-05-30 Thread Jeremy Cole

Hi Wang,


 select 9000000000000.10*19.80   wrong result: 178200000000001.97
 select 9000.10*19.80            right result: 178201.98
This is similar to the sum(qty*unitpri) result.


Welcome to the world of floating point arithmetic.  If you need such 
precision, try MySQL 5.0, as it should be capable of doing that 
calculation using its new precision math support.
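
The effect is easy to reproduce outside MySQL. A Python sketch, with the stdlib `decimal` module standing in for 5.0's exact precision math (a 64-bit binary double simply cannot represent 178200000000001.98 exactly):

```python
# Reproducing the thread's result: binary floating point loses the
# cents on a 15-significant-digit product, while exact decimal
# arithmetic (what MySQL 5.0's precision math provides) does not.

from decimal import Decimal

exact = Decimal("9000000000000.10") * Decimal("19.80")
print(exact)          # 178200000000001.9800

approx = 9000000000000.10 * 19.80
print(approx)         # close to, but not exactly, 178200000000001.98
```

Near 1.78e14 the spacing between adjacent doubles is about 0.03, so the last two decimal digits of the product are already beyond what a double can hold.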


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: select 9000000000000.10*19.80 wrong result 178200000000001.97

2006-05-30 Thread Jeremy Cole

Hi,


You don't need to take it to 16 digits :

mysql 4 :

select 1/666;
+-------+
| 1/666 |
+-------+
|  0.00 |
+-------+

mysql 5.0.18:
select 1/666;
+--------+
| 1/666  |
+--------+
| 0.0015 |
+--------+


Actually, this is an entirely different case, it's a matter of where 
MySQL decides how many digits to show by default.  You can influence 
that quite easily:


mysql> select 1.0/666;
+---------------+
| 1.0/666       |
+---------------+
| 0.00150150150 |
+---------------+
1 row in set (0.00 sec)

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: aha! replication breaking due to duplicate queries

2006-05-19 Thread Jeremy Cole

Hi Sheeri,


So I've seen replication break a few times due to duplicate queries.
A few times it was around a server crashing, but I thought perhaps it
was because of the crash.  (ie, master sends a query, crashes, and
then tries to send the query again when it comes back up).

But in the past 16 hours, it's happened twice.  Both times, no crash.
No network problems that we know of.  No other query problems.

Therefore, we've deduced it's in the code -- it's trying to insert a
field with a duplicate primary key.  I've sent that off to the
developers, who will hit things with sticks.

However, why does MySQL transfer over DML queries that fail?  If they
have an error, shouldn't they not replicate?  This seems like a very
large bug to me.


It shouldn't, except in *very* limited circumstances (where you've 
already shot yourself in the foot a few times, generally).


I have seen a case quite a few times where the slave hiccups and 
apparently runs the same query twice.  Is it possible that this is what 
you're seeing?


Is your system replicating a mix of large (images, maybe) and small 
(single-row) updates?


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Streaming LOB Data

2006-05-08 Thread Jeremy Cole

Hi Robert,


Anyone know for sure if the memory needed to insert a LOB is a
percentage of the system's available memory or if it is allocated from
the innodb_buffer_pool_size? IOW, how should my configuration settings
be modified to allow the insertion of larger blobs? :)


The majority of the memory needed for that operation will come from the 
system's available memory.  How much memory it will consume will depend 
somewhat on how the query is sent over.


You should count on at least 2x the size of the blob being needed on the 
server for a query of the form:


  INSERT INTO tbl (id, image) VALUES (<id>, '<blob data>');

The original query will be stored in its original form, and the binary 
data will be stored in its parsed and unescaped form.


The storage engine may still make yet another copy of it, but I'm not 
sure that InnoDB does.  I suspect it does not.


One thing you can do to save some of the memory is to run it as:

  INSERT INTO tbl (id, image) VALUES (<id>, LOAD_FILE('<filename>'));

This of course would require that you have the file available on the 
MySQL server to load in.  LOAD_FILE() will return the contents of the file.


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Outfile syntax and out of memory

2006-05-08 Thread Jeremy Cole

Hi Johan,


I need to extract some data to a textfile from a big database.


If I try to do like this:
mysql < queryfile.sql > outfile.txt

outfile.txt it looks something like:
OrderID, Quant, OrdrDate, code1, code2...
10021, 12, 20060412, 23, 95...
10022, 5, 20060412, , 75...

But, I never get a complete file. I get a out of memory error after a 
hour or 2!!


This is because the mysql client program is trying to read the entire 
result into its own memory.  Try adding the '-q' option to mysql:


  mysql -q < queryfile.sql > outfile.txt

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: updates during database dump

2006-05-08 Thread Jeremy Cole

Hi Luke,

When mysql is doing a dump, do the updates that happen during the dump 
get included in the dump.


I assume you mean 'mysqldump'.

I have a dump that starts at 11pm and goes for 2 hours. If someone 
updates data at say 11:45pm, does that update get included in the dump?


When does the window, on what gets included in a dump, close?


By default, mysqldump doesn't do anything to ensure a consistent backup. 
 If you want one, you have to specify an option to get one, which will 
depend on which storage engines you're using.


For InnoDB: Use the --single-transaction option.  The window closes when 
the dump starts.  Users in other transactions/sessions will still be 
able to write, but you won't see their writes in this transaction.


For MyISAM: Use the --lock-tables option.  The window closes when the 
dump starts.  Users won't be able to write at all, to any tables being 
dumped, while the dump is running.


In general, you want a consistent snapshot of all tables from the same 
point in time, and you will want to use one of the above options to get it.


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: bind-address by name under 5.0.18

2006-02-28 Thread Jeremy Cole

Hi James,


Sort of.  If I run the command manually, it does.

But my script uses su to run the server as an unprivileged user.

What happens if you:

su -m mysql -c 'sh -c "/usr/local/bin/mysqld_safe 
--defaults-extra-file=/var/db/mysql/my.cnf --user=mysql --datadir=/var/db/mysql 
--pid-file=/var/db/mysql/www.example.com.pid --bind-address=localhost > /dev/null &"'


Hmm, I don't see why it shouldn't work when run inside su... odd.

As far as I know, there are no requirements for superuser in order to 
bind to certain IPs.


I'm also hopeful to get multi-IP support, so that I can have the 
server listen on two NICs or what-have-you, such as:


[mysqld]
bind_address=foo bar localhost


This is a reasonable feature request, and I'll try to do it (I don't see 
why it can't work).  It would require some changes to how the address to 
bind to are passed around (currently a variable called my_bind_addr).


I haven't tried that yet, so perhaps it is already functional as 
well, I'll try to get to testing that soon.


It's not.  Here's the code that handles bind-address from sql/mysqld.cc:

case (int) OPT_BIND_ADDRESS:
  if ((my_bind_addr= (ulong) inet_addr(argument)) == INADDR_NONE)
  {
    struct hostent *ent;
    if (argument[0])
      ent=gethostbyname(argument);
    else
    {
      char myhostname[255];
      if (gethostname(myhostname,sizeof(myhostname)) < 0)
      {
        sql_perror("Can't start server: cannot get my own hostname!");
        exit(1);
      }
      ent=gethostbyname(myhostname);
    }
    if (!ent)
    {
      sql_perror("Can't start server: cannot resolve hostname!");
      exit(1);
    }
    my_bind_addr = (ulong) ((in_addr*)ent->h_addr_list[0])->s_addr;
  }
  break;



And with other DBs, I can set 'sql.example.com' to resolve to N
IPs (N=4 in this example):

10.0.0.10
10.0.1.10
10.0.2.10
10.0.3.10

Then I can have N separate hosts share the same start-up script,
because the SQL server binds only to those IPs which are local.  Since
each of the N servers has only one of those IPs locally, each
server automatically binds to the correct IP.  And if the IPs ever
change, I just update DNS, and there's no need to do any maintenance
on the start-up scripts.  These IPs would not necessarily be in the 
same subnet and in fact, are CNAMEs, such as sql.example.com

defined as:

sql.example.com.IN  CNAME dallas-sql.example.com
sql.example.com.IN  CNAME cleveland-sql.example.com
sql.example.com.IN  CNAME portland-sql.example.com
sql.example.com.IN  CNAME sydney-sql.example.com

I am grateful to be making progress.  Thank you for your assistance.


Hmm, I guess you'd expect to at least get warnings for each of these 
that couldn't be bound to?


This seems like a very strange way to configure a machine.  To each, his 
own. :)


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Silicon Valley MySQL Meetup Group: Storage Engines in Depth

2006-02-28 Thread Jeremy Cole
Interested in MySQL? Live in peninsula/south bay? This MySQL Meetup is 
for you!


Join us for the March MySQL Meetup on March 13th at 7pm in Santa Clara! 
The topic for this month is Storage Engines In-Depth! I'll cover the 
details of both MyISAM and InnoDB, how each works, its strengths and 
weaknesses, and proper configuration of each. I'll wrap up by talking 
about the future of storage engines in MySQL -- some of the new work 
going on in 5.0 and 5.1, and some demos!


The topics each month are varied, but you can be sure to learn a lot at 
every meeting. Find out the next meeting dates, the topics, and RSVP at 
Meetup.com:


http://mysql.meetup.com/101/

I hope to see you there!

Regards,

Jeremy Cole
MySQL Meetup Organizer
Silicon Valley

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: bind-address by name under 5.0.18

2006-02-27 Thread Jeremy Cole

Hi James,


I would like to be able to bind to a host name, rather than
an IP number.  IP numbers come and go, and are beyond the
control of anyone who doesn't have their own direct allocation.  
But since I own my domain, a host name is more permanent.


Adding this functionality is fairly trivial, with the caveat that it 
will have to use the first address returned by gethostbyname(), so if 
you have multiple IPs assigned to a hostname (for e.g. round-robin DNS) 
it will be interesting. :)


This is probably a 5-10 line patch, including error checking.

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: bind-address by name under 5.0.18

2006-02-27 Thread Jeremy Cole

Hi,

Thank you very much!  I'm willing to help test if you can provide me a diff 
of your changes.


Well, that was easy.  I checked the code, and it turns out that the 
functionality is already there.  I just tested it on my laptop (running 
5.0.13) and it does indeed work.


I did:

* echo "10.0.0.1 foo" >> /etc/hosts
* ifconfig lo0 alias 10.0.0.1
* add to /etc/my.cnf:
  [mysqld]
  bind-address=foo
* restart mysqld
* mysql -h 10.0.0.1  -- confirm connection or error from mysqld
* mysql -h 127.0.0.1 -- confirm failure to connect at all

I didn't know that this worked.  Learn something new every day.

Are you having a problem with it?  Does it not work for you?

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Byte Swapping

2006-02-10 Thread Jeremy Cole

Hi David,


This might be an unusual question because mysql always stores in little
endian for all platforms (if I'm not mistaken).  However, I am receiving
data from several different platforms (custom boards with various
processors) in raw form.  I will have stored as a configuration option
what endianess it is.

So what I need to do is write a procdure, or use a mysql function (if
there is one available) to byte swap the data (for integer types).  I'm
really just hoping someone has done this, or it is implemented as a mysql
function and I just wasn't able to find it.


This seems very strange that you're storing it in raw form.  Are you 
sure that you actually are?


Anyway, swapping between big/little endian is not difficult in C.

The code is all here:

  http://jcole.us/svn/libmygis/mygis.h

e.g.:

#define SWAP(x, y)   { (x) ^= (y); (y) ^= (x); (x) ^= (y); }

#define MYGIS_SWAP4(pos)   { \
  SWAP(((byte *)(pos))[0], ((byte *)(pos))[3]); \
  SWAP(((byte *)(pos))[1], ((byte *)(pos))[2]); \
}

That's enough to swap 32-bits (4-bytes) of whatever is stored at any 
pointer address.


I would encourage that you:

a. make sure you really are storing the data raw -- it seems very odd
b. stop storing the data raw... do the swap before storing
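If you go with (b) and normalize in the application before storing, most 
languages can do the swap for you without any bit twiddling.  A minimal 
Python sketch (the raw byte string below is a made-up example, not data 
from your boards):

```python
import struct

# Hypothetical 4 raw bytes as received from a board
raw = b"\x00\x00\x01\x02"

# Interpret the same bytes according to the configured endianness
big = struct.unpack(">i", raw)[0]     # big-endian read: 0x00000102
little = struct.unpack("<i", raw)[0]  # little-endian read: 0x02010000

print(big, little)  # 258 33619968
```

Storing the decoded integer, rather than the raw bytes, makes the 
per-platform endianness configuration unnecessary at query time.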

Can you give more details about what you're doing?

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Silicon Valley MySQL Meetup: MySQL Optimization, Monday, Feb 13, 2006 @ 7:00pm in Santa Clara, CA

2006-02-07 Thread Jeremy Cole

Hi All,

Are you interested in MySQL?  Do you live/work in Silicon Valley?

Join a fun group of professionals for the monthly MySQL Meetup in 
Silicon Valley, held at Yahoo! in Santa Clara, California.  The topic 
this month is:


  MySQL Optimization and the EXPLAIN command
  Speaker: Jeremy Cole, Yahoo! Inc. (that's me!)

Please sign up and RSVP here, if you plan on attending:

  http://mysql.meetup.com/101/events/4829967/

If you're interested in other topics or decide to join the group, please 
visit the polls and let us know what your skill level is and what topics 
you're interested in:


  http://mysql.meetup.com/101/poll/

If you have any questions about the group or the event, please feel free 
to contact me directly.


See you on Monday!

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Help...I am desparate

2006-01-03 Thread Jeremy Cole

Hi Connie,


060103 15:54:02 [ERROR] Can't start server: Bind on TCP/IP port:
Permission denied



060103 15:54:02 [ERROR] Do you already have another mysqld server
running on port: 1000 ?


You can't bind to a port less than 1024 unless you're running as root.
I suspect that's the problem here.  Try another port, higher than 1024.

I'm kind of curious why you aren't running it on the standard 3306?

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: MySQL with InnoDB on a readonly filesystem.

2005-11-23 Thread Jeremy Cole

Hi Ralph,

Is it possible to run MySQL with InnoDB enabled on a readonly 
filesystem. We only intend to run some select query's on this server.


I guess I can ask the obvious question.  Why are you trying to use
InnoDB, if your data/filesystem is read only?  Why not use MyISAM instead?

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Server take 20s to connect

2005-11-23 Thread Jeremy Cole

Hi Luiz,

I've set up a MySQL server with version 4.1.15 by compiling the 
.src.rpm on a Linux machine running Conectiva Linux 10, with 1.5 GB of 
RAM and a Pentium 4 2.88 GHz CPU.


The app I have is running on Windows, connecting to this Linux server by 
using the direct IP address of the MySQL Linux machine, but it is taking 
more than 20s to connect.

 also no firewall active on linux server


Sounds like the machine where your MySQL server is running has a broken 
DNS configuration, or the machine that purports to provide reverse DNS 
mappings for your client machine is broken.


Try using the 'host' command to determine where the problem is.

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Getting no of different rows by group by clause

2005-11-22 Thread Jeremy Cole

Hi Abishek,


I want the output as:
LHR 3
AKL 2
AWL 1


This should do it:

SELECT name, COUNT(*) as num
FROM tab_name
GROUP BY name
ORDER BY num DESC

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: A key question

2005-11-18 Thread Jeremy Cole

Hi Mikhail,


Thank you for your help.

I do have an exact situation you have assume I have. Here is the output
of SHOW CREATE TABLE

  `price_data_ticker` char(8) NOT NULL default '',
  `price_data_date` date NOT NULL default '0000-00-00',


 ...


  UNIQUE KEY `tidadx` (`price_data_ticker`,`price_data_date`)


 ...


As you can see, Unique KEY is on two first fields, but most of the work,
joins  searches, will be done on the second field price_data_date.


Could you provide some example queries?

Likely the solution is to create another index on price_data_date, that 
could be used for searches by date that do not include ticker.


As I mentioned before, an index on (a, b) can be used for (a) but not 
for (b) alone.  However, it usually doesn't make sense to create an 
index on (b, a) as well, since if you have both columns in your query, 
usually the index on (a, b) would be fine.  So I would suggest adding an 
index:


  ALTER TABLE `TICKER_HISTORY_PRICE_DATA`
ADD INDEX (price_data_date);

Keep in mind that will lock the table to add the index, and may take a 
few minutes (although I would expect less than two minutes for 32M rows) 
so it might not be a good idea to run while the market is open. :)


If you could provide the exact query you were running, I could confirm 
that it would or would not help. :)


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: A key question

2005-11-17 Thread Jeremy Cole

Hi Mikhail,


I may not have been precise in my question, but the Unique Index in
question is a two fields index, and I was looking to find out wisdom
from the List if there is sense and/or experience in keying second(left)
field on in the Unique Index to speed up a search.


If you have a UNIQUE(a, b), then MySQL can use it as an index for (a), 
or (a, b), but NOT for (b).  In this context, it won't help generally to 
create an index on (a), but it may help to create one on (b) depending 
on your queries.
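A rough way to see why: a composite index behaves like a list of rows 
kept sorted by (a, b).  Binary search works on a leading prefix of the 
sort key, but not on the second column alone.  A toy Python sketch 
(made-up ticker/date data):

```python
import bisect

# A UNIQUE(a, b) index is like rows kept sorted by (a, b)
index = sorted([("AAPL", "2005-01-03"), ("AAPL", "2005-01-04"),
                ("IBM",  "2005-01-03"), ("IBM",  "2005-01-04")])

# Lookup on the leading column (a): binary search applies
i = bisect.bisect_left(index, ("IBM",))
print(index[i])  # first ("IBM", ...) entry

# Lookup on (b) alone: no usable ordering, so every entry is examined
matches = [r for r in index if r[1] == "2005-01-03"]
print(len(matches))  # 2
```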



I am dealing with 32M rows table, where second field in the Unique Index
is a date field. Unfortunately for my simple SELECT MAX(Date) as
Latest_Date from THE_TABLE took 4 minutes and some seconds, so before I
will go and buy bigger server I needed to re-assure myself that there is
no other way.


Four minutes to find a MAX(date) is too long for any kind of hardware. 
It should be much faster.  Can you post the output of:


  * SHOW CREATE TABLE tbl
  * SHOW VARIABLES

FYI:

mysql select max(dep_time) from ontime_all;
+-+
| max(dep_time)   |
+-+
| 2005-05-31 23:59:00 |
+-+
1 row in set (49.76 sec)

mysql select count(*) from ontime_all;
+--+
| count(*) |
+--+
| 33395077 |
+--+
1 row in set (0.00 sec)

Could be a lot faster, even, but these are MERGE tables so it's really 
65 tables that are being checked...



Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: LIKE problem?

2005-11-11 Thread Jeremy Cole

Hi,


Moreover, the sum of the results of these two queries

select count(*) from user where username like 'a%';
select count(*) from user where username not like 'a%' or username is null;

is not the same for all letters of the alphabet:

letter like not-like sum

n   2304 59317 61621
o  0 60797 60797
p   3048 58573 61621


Sounds like a corrupt index.  Try CHECK TABLE and REPAIR TABLE.

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Delete all but the newest 100 records?

2005-11-07 Thread Jeremy Cole

Hi Brian,


Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104


Yahoo employs MySQL geeks? Always wondered what db that big index  runs 
on...  :)


Do note that Yahoo! is a lot more than a search engine. :)  The big 
index is not running in MySQL, but a whole lot more is.


And yes, we employ lots of geeks of all sorts, MySQL included.  If some 
MySQL Geek were looking for gainful employment, their resume/CV would be 
quite welcome, sent to me. ;)


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Delete all but the newest 100 records?

2005-11-06 Thread Jeremy Cole

Hi Brian,

I'd like to delete all but the newest 100 records from a table. I  have 
a timestamp field. I can't figure out how to do this in a single  SQL 
statement - can anyone help? Thanks!


This should work:

DELETE FROM tbl ORDER BY ts DESC LIMIT 100, 99;

Note that ORDER BY and LIMIT are a MySQL-extensions to DELETE, not part 
of the SQL standards.


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Database equivalent to NorthWind for M$ SQL

2005-08-12 Thread Jeremy Cole

Hi Scott,

Is there any database file similiar to M$'s Northwind that I can use to play 
with? It would be nice if there is one inside MySQL by default for newbies 
to start out with.


Microsoft's documentation often uses Northwind as an example to teach functions.


How about FlightStats:

  http://dl.flightstats.us/

You can try out a front-end to the data:

  http://flightstats.us/

It's all public domain.  Read about where it comes from:

  http://flightstats.us/about.php

Regards,

Jeremy

--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek
Desk: 408 349 5104




Re: could a Trigger execute a outside command?

2005-06-27 Thread Jeremy Cole

Hi Ted,


When a new record is added to a specific table, I want to
react to it right away, running a perl script to do something.

Right now, I use a cronjob that checks the table every minute.
It is slow (it can wait up to a whole minute) and not efficient.

To my understanding, I would need to put a trigger on this table. When 
the trigger

is triggered, it will call the same perl script through UDF. Not sure
whether I get it right.


Seems like a better approach might be:

* Build a UDF that sends a SIGALRM to your Perl script.
  - You can test this independently by calling it with SELECT.
  - This will mean getting the PID somehow.
  - Likely, the Perl script will need to write its PID somewhere.
  - You'll read the PID in your UDF and call kill(pid, SIGALRM)

* Write your Perl script so that it looks like so:

  while(1) {
    do_work();
    # check if the while loop should be broken
    sleep 60;
  }

* Upon receiving the SIGALRM, the Perl script will be woken up early and 
get to its work immediately.  If something fails for any reason, you go 
back to the old behaviour of checking every 60 seconds.
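The wake-early idea in a minimal Python sketch (the process signals 
itself here, standing in for the UDF's kill(pid, SIGALRM); Unix only):

```python
import signal
import time

woken = {"early": False}

def on_alarm(signum, frame):
    woken["early"] = True
    raise InterruptedError  # abort the sleep immediately

signal.signal(signal.SIGALRM, on_alarm)
signal.alarm(1)  # stand-in for the UDF calling kill(pid, SIGALRM)

start = time.time()
try:
    time.sleep(60)  # the "sleep 60" from the polling loop
except InterruptedError:
    pass            # woken early: go do the work right away

print(woken["early"], time.time() - start < 5)
```

If the signal never arrives, the sleep simply runs its full length and 
the loop falls back to ordinary once-a-minute polling.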


Regards,

Jeremy

--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek
Desk: 408 349 5104




Re: zip code search within x miles

2005-04-28 Thread Jeremy Cole
Hi,
known as covering indexes.  The advantage to a covering index is that if 
your data is numeric and in the index, the engine can read the data 
All correct, except that the data does not have to be numeric.  It must, 
however, be completely indexed, not indexed by prefix.  That requirement 
usually excludes most BLOB/TEXT fields, so you wouldn't normally include 
a BLOB/TEXT in a covered index discussion.

Covered indexes work fine with CHAR/VARCHAR and are quite common.
Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: weird characters from mysqldump?

2005-04-13 Thread Jeremy Cole
Hi Steve,
I'm trying to export data from mysql 4.1.3 with mysqldump.
I'm getting weird characters from the system. Here's what I've discovered
so far:
' becomes ^À^Ù, e.g. didn't becomes didnâ^À^Ùt
- becomes â^À^Ó, e.g. 1-2 becomes 1â^À^Ó2
è becomes è, e.g. Entrèe becomes Entrèe
What gives?
Looks like an application of the GIGO (Garbage In, Garbage Out) 
principle to me.  The above data wouldn't have happened to come from a 
Microsoft Office application, would it?

Looks like Smart Quotes etc., strikes again, to me.
Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: Duplicated records

2005-04-05 Thread Jeremy Cole
Hi Scott,
Here is my novice question:
I need to find duplicates in Batch category when I issued this query:
SELECT * FROM QA WHERE Batch=439584414;
Result:
90577
1   26615334JulieAnt25  5   5   439584414   
2004-10-03 00:00:00 2004-10-03
00:00:000   90575
1   56575334JulieAnt25  5   5   439584414   
2004-10-03 00:00:00 2004-10-03 00:00:00 0   
How do I issue a query that finds duplicated Batch number?
SELECT Batch, COUNT(*) as Nr
FROM QA
GROUP BY Batch
HAVING Nr > 1
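The GROUP BY/HAVING pattern is the SQL version of counting occurrences 
and keeping only counts above one.  The same idea in a short Python 
sketch (toy batch numbers, not your real data):

```python
from collections import Counter

batches = [439584414, 439584414, 439584415]  # toy data

# Keep only batch numbers that appear more than once
dupes = {b: n for b, n in Counter(batches).items() if n > 1}
print(dupes)  # {439584414: 2}
```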
Regards,
Jeremy
--
Jeremy Cole (currently in Bangalore)
Technical Yahoo - MySQL (Database) Geek


Re: How to find missing record?

2005-04-03 Thread Jeremy Cole
Hi,
Both tables should hold the same number of records.
However, I've discovered that 'Close' is one less than 'Open' (1693 vs 
1694).

How can I find out which record is missing from 'Close'? I know it's not 
the case of an extra entry in 'Open' because 1694 divides evenly by 7, 
whereas 1693 doesn't.
This should do it:
SELECT Open.id, Open.day
FROM Open
LEFT JOIN Close ON Open.id=Close.id AND Open.day=Close.day
WHERE Close.id IS NULL
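That LEFT JOIN ... IS NULL form is an anti-join: keep the rows of Open 
that have no partner in Close.  It is the same idea as a set 
difference, shown here on toy data:

```python
# Toy (id, day) rows standing in for the Open and Close tables
open_rows  = {(1, "2005-04-01"), (2, "2005-04-01"), (3, "2005-04-02")}
close_rows = {(1, "2005-04-01"), (3, "2005-04-02")}

# Rows present in Open but missing from Close
missing = open_rows - close_rows
print(sorted(missing))  # [(2, '2005-04-01')]
```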
Regards,
Jeremy
--
Jeremy Cole (currently in Bangalore)
Technical Yahoo - MySQL (Database) Geek
Desk: 408 349 5104


Re: How to import data from Dbase3?

2005-03-23 Thread Jeremy Cole
Hi,
How do I import data from Dbase3 into MySQL?
My software, libmygis, is mainly designed for loading a set of 
Shapefiles into MySQL (.SHP, .SHX, .DBF) but it's perfectly well capable 
of loading just the dBase III/IV component.  Grab libmygis-0.5.1 from:

  http://jcole.us/software/libmygis/
You should be able to load a file, e.g. foo.dbf, by doing:
  tools/mysqlgisimport -S foo | mysql -u myuser -p mydb
It will automatically create a table suitable for the dBase file and 
load the records in with the above command.

Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: long PHP mysql_connect times

2005-03-18 Thread Jeremy Cole
Restarting apache resets the issue, i.e. the next pconnect takes 5 
seconds.  Quitting the browser has no effect.  I am currently testing 
now to see if there is a time out issue i.e. if I wait 5 minutes will 
the delay reappear?
FWIW, this sounds like a slow-to-respond DNS resolver.  Are Apache and 
MySQL on separate machines, by any chance?  Or, alternatively, are you 
connecting locally on the machine using the machine's DNS name?

E.g.: mysql_connect("foo.example.com", ...) instead of 
mysql_connect("localhost", ...)

Can you try this:
Whatever name you are using in mysql_connect(), run this:
$ host foo.example.com
Then, take the IP that gives you and do the same:
$ host 1.2.3.4
Mainly you're looking for the delay in this second step.
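You can do the same two lookups programmatically.  A minimal Python 
sketch (using localhost as a stand-in for the name in your connect 
string):

```python
import socket

host = "localhost"  # substitute the name used in mysql_connect()

# Forward lookup: what the client does with the hostname
ip = socket.gethostbyname(host)

# Reverse lookup: what mysqld does to the connecting client's IP;
# a missing PTR record makes this step stall until it times out
try:
    name = socket.gethostbyaddr(ip)[0]
except OSError:
    name = None  # no reverse entry

print(ip, name)
```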
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: long PHP mysql_connect times [Resolved]

2005-03-18 Thread Jeremy Cole
Hi Dan,
It was the lack of a reverse DNS entry.  I had the host resolved to an 
IP but no reverse ARPA entry.  I added the DNS PTR record and voila!
Excellent.
Also FWIW, you're not gaining anything by disconnecting after each query 
in your PHP code.  Connect once at the beginning of the script, 
disconnect once at the end.  All the extraneous connect/disconnect puts 
extra, unnecessary load on the database server.

I would have never thought of / figured that one out.
Nobody knows... the troubles I've seen... :)
Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: Help with LIKE

2005-03-18 Thread Jeremy Cole
Hi Mevers,
SELECT * FROM woorden WHERE string LIKE "% woord %"
It's not going to be efficient (at all) but you can do:
SELECT * FROM woorden WHERE foo LIKE CONCAT('%', woord, '%')
Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: replication errors

2005-03-10 Thread Jeremy Cole
Hi Chris,
OK, I can't take this anymore. :)
Now, I've been running for not even 1 hour with the skip-errors 
enabled A quick check, on ONE table...
snip
So there's already data missing on the slave *shrugs*
Yep, you told it to SKIP ERRORS.  That means that if any query generates
an error, for any reason, it's just going to skip over it and move on. 
So data was missing on your slave the moment you restarted replication 
-- it skipped the UPDATE it had a problem with.  From that moment on, 
your replication was out of sync.

And this is not the first time it's happening either...  It really 
makes me doubt whether MySQL is the right approach to take to this 
whole replication vs data redundancy scenario.
You do realize that MySQL 5.x is ALPHA, right?  Alpha doesn't mean
works beautifully on production systems, it means probably, very
likely, broken in some way.  Why are you using an ALPHA version and
expecting it to work perfectly, and when it doesn't, doubting MySQL
and replication?  If you want it to be stable, use a STABLE version.
Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: select where multiple joined records match

2005-02-14 Thread Jeremy Cole
Hi,
Thanks, but unless I'm missing something, that will return the same as  
SUBJECT=English and (GRADE=1 or GRADE=2), which is resource 
records  1,2,3, and 4 - too many matches.  Am I missing something?
How about this:
SELECT resources.id, resources.title
FROM resources
LEFT JOIN goals AS goal_a
  ON resources.id=goal_a.resource_id
  AND goal_a.subject='English' AND goal_a.grade=1
LEFT JOIN goals AS goal_b
  ON resources.id=goal_b.resource_id
  AND goal_b.subject='English' AND goal_b.grade=2
WHERE goal_a.id IS NOT NULL
  AND goal_b.id IS NOT NULL
Alternately:
SELECT resources.id, resources.title, COUNT(*) as nr
FROM resources
LEFT JOIN goals
  ON resources.id=goals.resource_id
  AND goals.subject='English'
WHERE goals.grade IN (1, 2)
GROUP BY resources.id
HAVING nr = 2
(The above is untested, since you didn't provide your example table in 
SQL, and I am unwilling to re-format it into CREATE TABLE/INSERT 
statements to test things.  The concepts are solid.)

Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: Empty 'Relay_Master_Log_File'?

2005-02-14 Thread Jeremy Cole
Hi Atle,
Yesterday I set up 4 new MySQL 4.0.22 replication slaves on 2 FreeBSD
4.10-R machines (via mysqld_multi). We have 10 other identical slaves and
I followed the same procedure as always when seeding them. The new slaves
were working fine, however when I issued SHOW SLAVE STATUS the
Relay_Master_Log_File field was empty. I reseeded one of them [with
freshly created seeds] a couple of times with the same result every time.
Then this morning I woke up and found that Relay_Master_Log_File was
populated..
I haven't been able to find anything about this online yet, and I am
wondering if anyone else has experienced this? The reason why I caught
this is that our monitoring software relies on that field being populated
to calculate replication delay.
AFAIK, the log file name is not known by the slave unless it either 
receives a rotate log event (go to next log) or you have started it 
against a particular log file.  If you start replication from the 
beginning (not specifying a log file) against a master, the first 
filename is not passed down.

In order to force a filename to be displayed you could either: FLUSH 
LOGS on the master, or specify the log file name in the CHANGE MASTER on 
the slave.
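For the second option, the statement would look something like the sketch below; the log file name and position here are placeholders only, to be replaced with the real values from SHOW MASTER STATUS on the master:

```sql
-- Hypothetical coordinates; read the real values from
-- SHOW MASTER STATUS on the master.
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'master-bin.042',
    MASTER_LOG_POS  = 4;
START SLAVE;
```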

Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek
Desk: 408 349 5104


Re: Importing ArcView Shapefile into a Database

2005-02-12 Thread Jeremy Cole
Hi David,
I want to import some data into my database, but it's
in an ArcView Shapefile, which I've never worked with.
Does anyone know what kind of software I can use to
access this data? Actually, I'd probably import it
into a spreadsheeet first, then save it as a csv file
and import it into my database.
I've been working on a library called libmygis which is able to read 
various types of GIS data files (mainly Shapefiles for now).

You're in luck.  I've recently gotten a working version of 
mysqlgisimport which is able to take a SHP/SHX/DBF set and turn it into 
SQL directly to be imported.

You can grab the source code of libmygis at:
http://jcole.us/software/libmygis/
If you have any questions or need help getting it to work (hey, it's 
new!) please feel free to drop me a line.  Same goes for feature 
requests. :)

Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek
Desk: 408 349 5104


Re: Importing ArcView Shapefile into a Database

2005-02-12 Thread Jeremy Cole
Hi David,
This sounds really cool. One question, though...when
you talk about downloading the source code, are you
saying your program is for Linux only? As much as I
hate Microsoft, I'm still stuck with Windows XP.
I developed it on Linux, and haven't ever tried to compile it on 
Windows, as I don't have a Windows compiler suite handy.  It would 
probably mostly compile on Windows. ;)

Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek
Desk: 408 349 5104


Re: Importing ArcView Shapefile into a Database

2005-02-12 Thread Jeremy Cole
Hi,
Sweet!  Thank you Jeremy!
FYI, make sure you grab the libmygis-0.3.tar.gz or later instead of any 
other version.  I had some silly mistakes that made the DBF file 
required instead of optional.  I've also ported to Mac OS X in 0.3.

Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek
Desk: 408 349 5104


Re: Compound Order By

2005-02-09 Thread Jeremy Cole
Hi,
I'd like to do a dual order by. First, I want to order by City, then I 
want to order name. That way, the names within the city are alphabetized.

I can easily do an order by City into a temp table, then order by name 
from within the temp table. Is this the best way? Is this the only way?
ORDER BY city, name
http://dev.mysql.com/doc/mysql/en/sorting-rows.html
I hope you're wearing a flame-retardant suit, because the flames are 
about to fly. :)

Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: Syntax diagram, where is it located in the doc?

2005-02-04 Thread Jeremy Cole
Hi Thomas,
I'm looking for the syntax diagram for MySQL and can't find it. I have
Just to give you a complete answer to your question, this is what is 
valid (I've stripped out the C code parts to leave just the definition):

where_clause:
/* empty */
| WHERE expr
expr:   expr_expr
| simple_expr
expr_expr:
expr IN_SYM '(' expr_list ')'
| expr NOT IN_SYM '(' expr_list ')'
| expr BETWEEN_SYM no_and_expr AND expr
| expr NOT BETWEEN_SYM no_and_expr AND expr
| expr OR_OR_CONCAT expr
| expr OR expr
| expr XOR expr
| expr AND expr
| expr LIKE simple_expr opt_escape
| expr NOT LIKE simple_expr opt_escape
| expr REGEXP expr
| expr NOT REGEXP expr
| expr IS NULL_SYM
| expr IS NOT NULL_SYM
| expr EQ expr
| expr EQUAL_SYM expr
| expr GE expr
| expr GT_SYM expr
| expr LE expr
| expr LT expr
| expr NE expr
| expr SHIFT_LEFT expr
| expr SHIFT_RIGHT expr
| expr '+' expr
| expr '-' expr
| expr '*' expr
| expr '/' expr
| expr '|' expr
| expr '^' expr
| expr '&' expr
| expr '%' expr
| expr '+' INTERVAL_SYM expr interval
| expr '-' INTERVAL_SYM expr interval
simple_expr:
simple_ident
| literal
| '@' ident_or_text SET_VAR expr
| '@' ident_or_text
| '@' '@' opt_var_ident_type ident_or_text
| sum_expr
| '-' expr %prec NEG
| '~' expr %prec NEG
| NOT expr %prec NEG
| '!' expr %prec NEG
| '(' expr ')'
| '{' ident expr '}'
| MATCH ident_list_arg AGAINST '(' expr ')'
| MATCH ident_list_arg AGAINST '(' expr IN_SYM BOOLEAN_SYM MODE_SYM ')'
| BINARY expr %prec NEG

Maybe this is more along the lines of what you're looking for...
[snip all of the random functions]
Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: Logging Data: Should I use MyIsam or InnoDB?

2005-01-18 Thread Jeremy Cole
Hi Thomas,
I have a curious issue here, maybe someone can help.
I have a single process that inserts data into tables that contain
purely logging information.  This table is then searched by our Care
department to troubleshoot issues.  I am looking for the best way to
store this data, and the structure on the backend.
There are 50 million inserts into table LOG a day.  The primary index
on the table is seconds from 1971.  I only need to keep 60 days worth
of data, and the table is only used for read purposes.  This is my
design criteria, but my problem is how to delete old data without
crashing the log writer that is attached to the table.
OK, how about this:
Use MyISAM and MERGE tables.  Keep one table per day.  E.g.:
log_2005_01_15
log_2005_01_16
log_2005_01_17
log_2005_01_18
etc.
Use MERGE tables to give you the 60 day (and perhaps e.g. 14 day, 30 
day, 7 day, etc.) read views that you need, like so:

CREATE TABLE log_view_7day (
  ...
) TYPE=MERGE UNION=(
  log_2005_01_12,
  log_2005_01_13,
  log_2005_01_14,
  log_2005_01_15,
  log_2005_01_16,
  log_2005_01_17,
  log_2005_01_18
);
Create another MERGE table for today using INSERT_METHOD:
CREATE TABLE log_view_today (
  ...
) TYPE=MERGE INSERT_METHOD=FIRST UNION=(
  log_2005_01_18
);
You can then do all of your inserts from the log writer into the today 
table, and do your reads against the various MERGEs.

Every day at exactly midnight, you would use ALTER TABLE (which is 
atomic) to redefine the UNION list of each of the MERGE tables.

When you want to delete your old data, it's simply a matter of doing 
an ALTER TABLE to remove the old tables from the MERGE, and then using 
DROP TABLE to drop each log_yyyy_mm_dd table after that.

Also note that you could compress the old data and leave it live (but 
read only) in case you ever need it.  myisampack can help you with that.

I hope that helps!
Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: Logging Data: Should I use MyIsam or InnoDB?

2005-01-18 Thread Jeremy Cole
Hi Thomas,
Thanks, this is what I was originally thinking of, but how I am
getting rid of the data in log_view_today?  OR, are you saying that
log_view_today is a merge table for only the current day?  That table
def is defined every night?  Then I would go about dropping
everything in whatever fashion I need.
Basically, at midnight you would do a sequence like this:
CREATE TABLE log_2005_01_19 ( ... );
ALTER TABLE log_view_today UNION=(log_2005_01_19);
ALTER TABLE log_view_7day UNION=(..., log_2005_01_19);
etc. etc.
You could actually create the tables beforehand, and only do the ALTER 
TABLEs themselves at midnight.
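As a sketch, the midnight sequence above can be generated mechanically for any date; the script below is illustrative only (table naming follows the log_YYYY_MM_DD / log_view_* convention from this thread, and the column list is left as a placeholder):

```python
# Sketch: generate the midnight rotation DDL described in this thread
# for a given date.  Names (log_YYYY_MM_DD, log_view_today, log_view_7day)
# follow the examples above; the column list is a placeholder.
from datetime import date, timedelta

def rotation_sql(today, window_days=7):
    name = lambda d: d.strftime("log_%Y_%m_%d")
    # Oldest-to-newest list of per-day tables in the read window.
    window = [today - timedelta(days=i) for i in range(window_days - 1, -1, -1)]
    return [
        f"CREATE TABLE {name(today)} ( ... );",
        f"ALTER TABLE log_view_today UNION=({name(today)});",
        f"ALTER TABLE log_view_{window_days}day "
        f"UNION=({', '.join(name(d) for d in window)});",
    ]

for stmt in rotation_sql(date(2005, 1, 19)):
    print(stmt)
```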

Note that this procedure has a race condition in that, depending on when 
the table actually switches over, you may have some records on either 
side of the split that don't belong.  You can always move those back 
manually with something like:

INSERT INTO log_2005_01_18 SELECT * FROM log_2005_01_19 WHERE datefield 
< '2005-01-19 00:00:00';
DELETE FROM log_2005_01_19 WHERE datefield < '2005-01-19 00:00:00';

or

INSERT INTO log_2005_01_19 SELECT * FROM log_2005_01_18 WHERE datefield 
>= '2005-01-19 00:00:00';
DELETE FROM log_2005_01_18 WHERE datefield >= '2005-01-19 00:00:00';

In some cases it might be easier to do the switch always e.g. 5 seconds 
before midnight, so that any records falling on the wrong side of the 
edge will always be on the same side.  That makes things easier sometimes.

When I recreate the merge table for just the current day, don't I
have to drop the merge table, or it just gets recreated
automatically.  I am not sure why you reference atomic on ALTER
TABLE . . . , if there is a log writer attached to that table, won't
I have to wait for a lock?  What do you mean by atomic?  I understand
the term atomic transaction, just not sure of your context to this
example.
What I mean is, you can use ALTER TABLE to change the definition (e.g. 
which tables it contains) of the MERGE table.  This happens atomically 
(no INSERTs will error, and no records could conceivably be split by 
the sudden change).

Does that all make sense?
Regards,
Jeremy
--
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek


Re: select @qty1=@qty+qty from goods2 , result 0 ?

2005-01-06 Thread Jeremy Cole
Hi Shuming,

 select qty, @qty1=@qty+qty allqty from goods2

That should be @qty1:=@qty+qty

You are doing a comparison using =, which is returning false (0).
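An untested sketch of the difference, against the poster's goods2 table:

```sql
SET @qty = 0;

-- ':=' assigns within a SELECT list, giving a running total:
SELECT qty, @qty := @qty + qty AS allqty FROM goods2;

-- '=' here is a comparison, so each row yields 0 or 1 (or NULL),
-- not a running total:
SELECT qty, @qty = @qty + qty AS allqty FROM goods2;
```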

Regards,

Jeremy

-- 
Jeremy Cole
Technical Yahoo - MySQL (Database) Geek




Re: merge tables for big log files ?

2004-05-31 Thread Jeremy Cole
Hi Michael,
Problem for this variant: the merge table has to be dropped and recreated
periodically; during that short lag interval the log merge table does not
exist.
You don't have to drop and re-create the table.  Please don't edit the MRG 
file directly either.

You can accomplish this easily by using the ALTER TABLE statement:
CREATE TABLE t1 (id INT NOT NULL, c CHAR(10) NOT NULL, PRIMARY KEY(id));
CREATE TABLE t2 (id INT NOT NULL, c CHAR(10) NOT NULL, PRIMARY KEY(id));
CREATE TABLE t_merge (id INT NOT NULL, c CHAR(10) NOT NULL, KEY(id)) 
TYPE=MERGE UNION=(t1, t2);

So now you have a MERGE table containing both tables.  If you want to add a 
third one later:

CREATE TABLE t3 (id INT NOT NULL, c CHAR(10) NOT NULL, PRIMARY KEY(id));
ALTER TABLE t_merge UNION=(t1, t2, t3);
Using ALTER TABLE with a MERGE table is an atomic operation and can be done 
while the server is up and running full-speed without any problems.

Regards,
Jeremy
--
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Mr. Jeremy Cole [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__   MySQL AB, Training and Consulting Manager
/_/  /_/\_, /___/\___\_\___/   Streetsboro, Ohio  USA
       ___/   www.mysql.com


Re: Some amazing query numbers from MySQL 4.0

2001-11-20 Thread Jeremy Cole

Hi Craig,

 Just wanted to gloat about how much I love the new 4.0 version =)

:)

 ===
 Server version: 4.0.0-alpha-log
 Protocol version:   10
 Connection: Localhost via UNIX socket
 Client characterset:latin1
 Server characterset:latin1
 UNIX socket:/tmp/mysql.sock
 Uptime: 9 days 12 hours 19 min 55 sec

 Threads: 408  Questions: 183163443  Slow queries: 4009  Opens: 43522  Flush
 tables: 1  Open tables: 239  Queries per second avg: 222.828
 ===

Some free configuration advice:

* You should turn on your slow query log and find out what those 4000+
  slow queries are.

* You should increase your ``table_cache'' variable; ``Opens'' is much
  higher than ``Open_tables'' indicating that MySQL is having to close
  other tables in order to open some for new clients.
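The counters in question can be checked like so; the cache value below is only an example, to be sized to your table count and client load:

```sql
-- Compare Opened_tables growth against the current cache size:
SHOW STATUS LIKE 'Open%';
SHOW VARIABLES LIKE 'table_cache';

-- In my.cnf (MySQL 4.0-era syntax; 512 is an example value only):
-- set-variable = table_cache=512
```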

Otherwise, looks good!

 This is running on a dual 1K cpu with a gig of ram (2.4.14 Linux kernel).
 The beauty is that even with these numbers MySQL isn't coughing/crashing or
 corrupting any data.  I've actually gone as high as 250 QPS and still not
 seen any slowdowns or problems with MySQL.  3.23 seemed to have some
 problems handling such a load with this system configuration4.0 was a
 major blessing to me.

Version 3.23 should not have had any problems handling the load you're
seeing, either.  Did you perhaps have a version of 3.23 that _wasn't_
compiled by MySQL AB (you didn't get it from mysql.com) installed before?

 Excellent work to the MySQL dev team =)

Thanks!

Regards,

Jeremy

-- 
To learn more about consulting, visit http://www.mysql.com/consulting/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Mr. Jeremy Cole [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__   MySQL AB, On-Site Consultant
/_/  /_/\_, /___/\___\_\___/   Clarksville, Tennessee  USA
   ___/   www.mysql.com



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php