Optimizing GROUP BY and ORDER BY

2008-07-25 Thread Michael Stearne
I have a query:

SELECT Country, COUNT( Country ) AS Cnt FROM properties WHERE (
Country != 'USA' AND Country != 'US' AND Country != 'Unit' AND Country
!= 'United States' AND Country != ' ' AND Country IS NOT NULL ) GROUP
BY Country ORDER BY Cnt DESC LIMIT 8

that gets the top 8 non-US countries from the properties table. There
are about 500,000 rows in the properties table.

This is a costly query:
+----+-------------+------------+-------+---------------+---------+---------+------+-------+------------------------------------------------------------+
| id | select_type | table      | type  | possible_keys | key     | key_len | ref  | rows  | Extra                                                      |
+----+-------------+------------+-------+---------------+---------+---------+------+-------+------------------------------------------------------------+
|  1 | SIMPLE      | properties | range | Country       | Country | 7       | NULL | 74602 | Using where; Using index; Using temporary; Using filesort |
+----+-------------+------------+-------+---------------+---------+---------+------+-------+------------------------------------------------------------+
1 row in set (0.00 sec)

Any ideas on how to get rid of the "Using temporary; Using filesort",
or a better way to do this with PHP?
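For reference, the chain of != tests can also be written as a single NOT IN
list. This is just a sketch of an equivalent form; it will not by itself remove
the temporary table and filesort, since sorting on the aggregate is what forces
them:

SELECT Country, COUNT(Country) AS Cnt
FROM properties
WHERE Country NOT IN ('USA', 'US', 'Unit', 'United States', ' ')
  AND Country IS NOT NULL
GROUP BY Country
ORDER BY Cnt DESC
LIMIT 8;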

Thanks!
Michael

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimizing GROUP BY and ORDER BY

2008-07-25 Thread Michael Stearne
On Fri, Jul 25, 2008 at 12:35 PM, Arthur Fuller [EMAIL PROTECTED] wrote:
 ORDER BY implies a sort of the result set. I don't think there is any way
 around that.

I guess so. What I am going to do is just run the query once per day and
store the results in memcache.
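A SQL-side variant of the same idea, sketched here for the archives: have the
daily job materialise the counts into a small summary table, so the per-request
query only sorts a handful of rows even when memcache is cold. The table and
column names below are made up for the sketch:

CREATE TABLE country_counts (
  Country varchar(64) NOT NULL PRIMARY KEY,
  Cnt     int unsigned NOT NULL
);

-- daily job (cron): rebuild the counts
TRUNCATE TABLE country_counts;
INSERT INTO country_counts (Country, Cnt)
  SELECT Country, COUNT(Country)
  FROM properties
  WHERE Country NOT IN ('USA', 'US', 'Unit', 'United States', ' ')
    AND Country IS NOT NULL
  GROUP BY Country;

-- per-request query: tiny table, trivial sort
SELECT Country, Cnt FROM country_counts ORDER BY Cnt DESC LIMIT 8;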

Michael



 Arthur

 On Fri, Jul 25, 2008 at 4:27 AM, Michael Stearne [EMAIL PROTECTED]
 wrote:

 I have a query:

 SELECT Country, COUNT( Country ) AS Cnt FROM properties WHERE (
 Country != 'USA' AND Country != 'US' AND Country != 'Unit' AND Country
 != 'United States' AND Country != ' ' AND Country IS NOT NULL ) GROUP
 BY Country ORDER BY Cnt DESC LIMIT 8

 that gets the top 8 non-US countries from the properties table. There
 are about 500,000 rows in the properties table.

 This is a costly query:

 +----+-------------+------------+-------+---------------+---------+---------+------+-------+------------------------------------------------------------+
 | id | select_type | table      | type  | possible_keys | key     | key_len | ref  | rows  | Extra                                                      |
 +----+-------------+------------+-------+---------------+---------+---------+------+-------+------------------------------------------------------------+
 |  1 | SIMPLE      | properties | range | Country       | Country | 7       | NULL | 74602 | Using where; Using index; Using temporary; Using filesort |
 +----+-------------+------------+-------+---------------+---------+---------+------+-------+------------------------------------------------------------+
 1 row in set (0.00 sec)



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Query/Key Optimization

2008-05-06 Thread Michael Stearne
Hi.

The main table for our site is called properties and it gets hit quite
often (several times per second) something like:

 Queries Total: 41,496 Avg/Sec: 6.89  Slow: 0
 Cache Hits   : 15,096 Avg/Sec: 2.51  Now/Sec: 0.00  Ratio: 36.38%
 Threads Total: 1 Active: 1 Cached: 76
 Key Efficiency: 94.41%  Bytes in: 114  Bytes out: 6,713

This properties table is very simple (pasted below). There are about
500,000 rows in the table and we are experiencing long queries like:

SELECT * FROM properties WHERE 1=1 AND properties.Published <> 0 AND
properties.Deleted <> 1 AND properties.state = 'ca' AND TYPE =
'Residential' AND Image1 <> '' ORDER BY id DESC LIMIT 0 , 35

An explain on that yields:

+----+-------------+------------+-------------+-----------------------------+-----------------+---------+------+-------+----------------------------------------------------------------+
| id | select_type | table      | type        | possible_keys               | key             | key_len | ref  | rows  | Extra                                                          |
+----+-------------+------------+-------------+-----------------------------+-----------------+---------+------+-------+----------------------------------------------------------------+
|  1 | SIMPLE      | properties | index_merge | Type,TypeSubType,StateIndex | Type,StateIndex | 1,67    | NULL | 45048 | Using intersect(Type,StateIndex); Using where; Using filesort  |
+----+-------------+------------+-------------+-----------------------------+-----------------+---------+------+-------+----------------------------------------------------------------+

Is there anything you can see with the table or key design that might
be causing this slowdown?  There are 5 databases: 1 master and 4
replicated slaves.  The master is only used for INSERTs, UPDATEs and
DELETEs.  The properties table is InnoDB.  Should it be MyISAM?
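A hedged sketch of an index that this particular query shape might be able to
use, assuming State and Type really are the driving predicates; the index name
is made up, and it is worth comparing EXPLAIN before and after rather than
taking this as given:

ALTER TABLE properties ADD KEY StateType (State, `Type`);

-- InnoDB secondary indexes implicitly carry the primary key, so within a
-- fixed (State, Type) pair the entries are already ordered by id. With both
-- columns fixed by '=', the ORDER BY id DESC can often be satisfied by
-- scanning that index backwards instead of index_merge + filesort; the
-- Published, Deleted and Image1 conditions are then checked on the rows
-- that come back.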

Thanks for any help!
Michael


CREATE TABLE properties (
  id int(11) unsigned NOT NULL auto_increment,
  UserID int(11) unsigned NOT NULL default '0',
  `Type` enum('Commercial','Residential') NOT NULL default 'Residential',
  Subtype varchar(64) NOT NULL default '0',
  Zip varchar(10) default '',
  Heading varchar(84) NOT NULL default '',
  Address1 varchar(128) NOT NULL default '',
  Address2 varchar(32) default NULL,
  Unit varchar(32) default NULL,
  Neighborhood varchar(64) default NULL,
  City varchar(64) NOT NULL default '0',
  State varchar(64) default '',
  Country varchar(4) default 'USA',
 .
..
..
  ListingContactHTML varchar(255) default NULL,
  IsShare tinyint(1) default '0',
  IsSublet tinyint(1) default '0',
  PRIMARY KEY  (id),
  KEY `Type` (`Type`),
  KEY Subtype (Subtype),
  KEY TypeSubType (`Type`,Subtype),
  KEY CityHood (City,Neighborhood),
  KEY GoogleBase (GoogleBase),
  KEY Zip (Zip),
  KEY AddressSearch (Heading,Zip,City,Neighborhood,Address1,Unit),
  KEY StateIndex (State),
  KEY ListingContactRemoteCode (ListingContactRemoteCode),
  KEY LeaseType (LeaseType),
  KEY CreationDate (CreationDate),
  KEY LastMapLookup (LastMapLookup),
  KEY UserID (UserID),
  KEY Country (Country),
  KEY LatLon (lat,lon),
  KEY CityStateType (City,State,`Type`),
  KEY BatchUpdateRemoteListingID (BatchUpdateRemoteListingID),
  KEY CountryType (Country,`Type`),
  KEY Country_2 (Country,City,State)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=907758 ;

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query/Key Optimization

2008-05-06 Thread Michael Stearne
As a note: the query itself may not be taking long, but there are many
"Sorting result" and "Copying to tmp table" states showing in mytop.

Thanks,
Michael



On Tue, May 6, 2008 at 3:26 PM, Michael Stearne [EMAIL PROTECTED] wrote:
 Hi.

  The main table for our site is called properties and it gets hit quite
  often (several times per second) something like:

   Queries Total: 41,496 Avg/Sec: 6.89  Slow: 0
   Cache Hits   : 15,096 Avg/Sec: 2.51  Now/Sec: 0.00  Ratio: 36.38%
   Threads Total: 1 Active: 1 Cached: 76
   Key Efficiency: 94.41%  Bytes in: 114  Bytes out: 6,713

  This properties table is very simple (pasted below). There are about
  500,000 rows in the table and we are experiencing long queries like:

  SELECT * FROM properties WHERE 1=1 AND properties.Published <> 0 AND
  properties.Deleted <> 1 AND properties.state = 'ca' AND TYPE =
  'Residential' AND Image1 <> '' ORDER BY id DESC LIMIT 0 , 35

  An explain on that yields:

  +----+-------------+------------+-------------+-----------------------------+-----------------+---------+------+-------+----------------------------------------------------------------+
  | id | select_type | table      | type        | possible_keys               | key             | key_len | ref  | rows  | Extra                                                          |
  +----+-------------+------------+-------------+-----------------------------+-----------------+---------+------+-------+----------------------------------------------------------------+
  |  1 | SIMPLE      | properties | index_merge | Type,TypeSubType,StateIndex | Type,StateIndex | 1,67    | NULL | 45048 | Using intersect(Type,StateIndex); Using where; Using filesort  |
  +----+-------------+------------+-------------+-----------------------------+-----------------+---------+------+-------+----------------------------------------------------------------+

  Is there anything you can see with the table or key design that might
  be causing this slowdown?  There are 5 databases: 1 master, 4 slaves
  replicated.  The master is only used for INSERTs, UPDATEs and DELETEs.
   The properties table is InnoDB.  Should it be MyISAM?

  Thanks for any help!
  Michael


  CREATE TABLE properties (
   id int(11) unsigned NOT NULL auto_increment,
   UserID int(11) unsigned NOT NULL default '0',
   `Type` enum('Commercial','Residential') NOT NULL default 'Residential',
   Subtype varchar(64) NOT NULL default '0',
   Zip varchar(10) default '',
   Heading varchar(84) NOT NULL default '',
   Address1 varchar(128) NOT NULL default '',
   Address2 varchar(32) default NULL,
   Unit varchar(32) default NULL,
   Neighborhood varchar(64) default NULL,
   City varchar(64) NOT NULL default '0',
   State varchar(64) default '',
   Country varchar(4) default 'USA',
   .
  ..
  ..
   ListingContactHTML varchar(255) default NULL,
   IsShare tinyint(1) default '0',
   IsSublet tinyint(1) default '0',
   PRIMARY KEY  (id),
   KEY `Type` (`Type`),
   KEY Subtype (Subtype),
   KEY TypeSubType (`Type`,Subtype),
   KEY CityHood (City,Neighborhood),
   KEY GoogleBase (GoogleBase),
   KEY Zip (Zip),
   KEY AddressSearch (Heading,Zip,City,Neighborhood,Address1,Unit),
   KEY StateIndex (State),
   KEY ListingContactRemoteCode (ListingContactRemoteCode),
   KEY LeaseType (LeaseType),
   KEY CreationDate (CreationDate),
   KEY LastMapLookup (LastMapLookup),
   KEY UserID (UserID),
   KEY Country (Country),
   KEY LatLon (lat,lon),
   KEY CityStateType (City,State,`Type`),
   KEY BatchUpdateRemoteListingID (BatchUpdateRemoteListingID),
   KEY CountryType (Country,`Type`),
   KEY Country_2 (Country,City,State)
  ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=907758 ;


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Single Column Indexes Vs. Multi Column

2008-01-09 Thread Michael Stearne

For a query like:

SELECT  id FROM properties  WHERE `Country` = 'USA' AND  
Type='Residential'


Is a multi-column index on (Country, Type) better, worse, or the same
as a single index on Country plus another single index on Type?
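One way to compare the two layouts on a test copy, shown as a sketch (the
composite index name is invented; what matters is the EXPLAIN output for each
layout, not wall-clock time on a single run):

-- composite: both predicates resolved inside one index
ALTER TABLE properties ADD KEY Country_Type (Country, `Type`);
EXPLAIN SELECT id FROM properties
WHERE `Country` = 'USA' AND `Type` = 'Residential';

-- with only the two single-column indexes, the optimizer either picks the
-- more selective one and filters the remaining rows one by one, or (MySQL
-- 5.0+) tries an index_merge intersection of the two, which is usually
-- slower than a single composite index that matches the WHERE clause.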


Thanks,
Michael


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Replication Falls Out Of Sync With LOAD DATA

2007-11-29 Thread Michael Stearne
On Nov 28, 2007 11:18 PM, B. Keith Murphy [EMAIL PROTECTED] wrote:
 The reason I asked about version is that it looks like there is problem
 replcating a load data infile command from some versions of 4.x to 5.x
 slaves.

Master and Slaves are 5.x.  Hopefully I've figured out the issue.
When the slave would fall out of sync, I would resync using rsync. In
that rsync command I was using the --delete option which would delete
the relay log from the slave machine.  Since the relay log was being
removed constantly, the replication was in a strange state. That
problem is fixed so hopefully things should stay in sync.
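For anyone hitting the same thing: a quick sanity check after changing the
resync procedure is to watch the slave threads and lag for a while (MySQL 5.x):

SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both stay 'Yes', and
-- Seconds_Behind_Master should stay near 0; a growing value or a stopped
-- SQL thread means the slave is drifting out of sync again.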

Michael

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Replication Falls Out Of Sync With LOAD DATA

2007-11-28 Thread Michael Stearne
We have replication set up for 1 master and 4 slaves. When resynced  
everything appears to work fine. Come back a couple hours later and  
the machines are out of sync. The only thing I can think of that could  
cause this is that we are inserting some data on the master  
(updates,inserts,deletes) using LOAD DATA INFILE. Does this cause a  
problem for replication?


Thanks,
Michael

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Replication vs. mysql-table-sync

2007-11-25 Thread Michael Stearne
Is mysql-table-sync designed to be used as a fix for when your
replication is out of sync, or can it be used instead of replication?


Thanks,
Michael


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Use Samba Share For Data Directory

2007-01-26 Thread Michael Stearne

On 1/26/07, Dominik Klein [EMAIL PROTECTED] wrote:

Michael Stearne schrieb:
 We have 5.0.27 installed on a CentOS machine that doesn't have a ton
 of disk space.  Is it possible to point the data directory to lie on a
 samba connected share?  The samba share does not support Unix file
 permissions so it is not possible to set mysql as the owner of the
 files.  Is this possible at all?

If you use proper mount-options, you can set the owner of the files.

mount -t smbfs -o uid=mysql $SHARE $DESTINATION

In general: This should not be a problem, but it will be slow, as the
network is probably slower than your local disc. Guess you knew that.


Yep.  I'll try this . Thanks a lot!

Michael

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Use Samba Share For Data Directory

2007-01-25 Thread Michael Stearne

We have 5.0.27 installed on a CentOS machine that doesn't have a ton
of disk space.  Is it possible to point the data directory to lie on a
samba connected share?  The samba share does not support Unix file
permissions so it is not possible to set mysql as the owner of the
files.  Is this possible at all?

Thanks,
Michael

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to pronounce MyISAM and InnoDB

2007-01-07 Thread Michael Stearne
I just say

"My, I, Sam" and "inno, d, b".

Michael 
  

-Original Message-
From: js  [EMAIL PROTECTED]
Date: Mon, 8 Jan 2007 00:09:15 
To:mysql@lists.mysql.com
Subject: How to pronounce MyISAM and InnoDB

Hi list,

Sorry for this silly question, but I've always had trouble
pronouncing MyISAM and InnoDB.
How do you pronounce them?

Thanks in advance.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Last Inserted ID Using LOAD DATA

2006-09-19 Thread Michael Stearne

I have a group of updates that need to be done using LOAD DATA INFILE.
Within this file there are some INSERTS.  Is there anyway that after
an INSERT happens I can use the auto-increment ID that was just
generated in the next statement.  Similar to PHP's mysql_insert_id()
function.
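In case the file is actually a batch of SQL statements fed through the mysql
client (rather than a LOAD DATA INFILE data file, which carries rows, not
statements), the server-side counterpart of mysql_insert_id() is
LAST_INSERT_ID(), and it can be stashed in a user variable for the statements
that follow. The table and column names below are only placeholders:

INSERT INTO listings (Heading) VALUES ('example listing');
SET @new_id = LAST_INSERT_ID();          -- id generated by the INSERT above
INSERT INTO listing_photos (listing_id, url)
VALUES (@new_id, 'http://example.com/photo1.jpg');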

Thanks,
Michael

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: UK Postcodes

2006-01-25 Thread Michael Stearne
On 1/24/06, James Harvard [EMAIL PROTECTED] wrote:
 It's the Royal Mail. Ordnance Survey, the government mapping agency for the 
 UK, are in on it too.

 To read their web site (as I have done a couple of years ago, and just now 
 too) you would think it had never occurred to them that people might want to 
 deploy the data as part of a web site. It's all about licensing the data by 
 the number of 'terminals'. Extraordinary.

 Their prices are fairly extraordinary too.

 I've always meant to write and complain to Royal Mail, Ordnance Survey, the 
 RM's independent regulator and anyone else I could think of about this 
 inflated, monopoly pricing which can only be hindering UK businesses from 
 developing localised on-line services.


I guess we'll wait until Google or Yahoo gets them online. :-)  We
only need to do one lookup of the address for each listing and store it
in our database, so hopefully they'll offer it in the UK like Yahoo
Local does for the U.S.

Michael

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Postcode Search

2006-01-23 Thread Michael Stearne
We use the inradius function as described here:

http://www.phparch.com/discuss/index.php?t=msg&th=878

It's not the fastest way, I'm sure, but it seems to be pretty reliable.
Instead of the zipdata table you would point it at the Postcodes table
and the Properties table.  You'll also need to add an X,Y (lat,lon)
value for each of your Properties.

Will you allow address lookup as a service for public use (which would
be great) or can you let us know where you got the coordinates for the
postal codes (Royal Mail?).

Thanks!
Michael

P.S. Yahoo provides good free geocoding for the U.S.
http://developer.yahoo.net/maps/rest/V1/geocode.html
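For the archives, a rough sketch of the radius idea in plain SQL, assuming an
X/Y coordinate pair has been added to the Properties table as suggested above
and that it uses the same units as the Postcodes table; the variable values and
column names are illustrative only. The bounding-box conditions give the
optimizer a chance to use an index on the coordinates before the exact distance
is computed:

SET @x := 530000, @y := 180000, @r := 5000;   -- searcher's coordinates and radius

SELECT Property_ID,
       SQRT(POW(X_COORD - @x, 2) + POW(Y_COORD - @y, 2)) AS dist
FROM Properties
WHERE X_COORD BETWEEN @x - @r AND @x + @r     -- cheap square pre-filter
  AND Y_COORD BETWEEN @y - @r AND @y + @r
HAVING dist <= @r                             -- exact circular cut-off
ORDER BY dist;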

On 1/23/06, Shaun [EMAIL PROTECTED] wrote:
 Hi,

 We have a dataset of uk postcodes and their relevant X-Coordinates and
 Y-Coordinates, a table of properties (houses), a table of users and a table
 of offices - users work in an office - table structures below.

 Is it possible to run a search of all properties in the properties table
 that come within a certain distance of the users postcode, currently we do
 this by downloading all properties into an array and stripping out the ones
 that don't come within the radius with php.

 Any advice would be greatly appreciated.


 # -- MySQL dump --
 #
 # Table structure for table 'Offices'
 #
 CREATE TABLE Offices (
   Office_ID int(11)  DEFAULT '' NOT NULL auto_increment,
   Subscriber_ID int(11),
   Type varchar(10),
   Address_Line_1 varchar(50),
   Address_Line_2 varchar(50),
   City varchar(50),
   County varchar(50),
   Postcode varchar(10),
   Telephone varchar(12),
   Fax varchar(12),
   Radius tinyint(4),
   PRIMARY KEY (Office_ID)
 );

 #
 # Table structure for table 'Postcodes'
 #
 CREATE TABLE Postcodes (
   PCDSECT varchar(6)  DEFAULT '' NOT NULL ,
   SORTSECT varchar(6),
   PCDDIST varchar(4),
   SORTDIST varchar(4),
   PCDAREA char(2),
   X_COORD double(7,1) unsigned   ,
   Y_COORD double(7,1) unsigned   ,
   PRIMARY KEY (PCDSECT)
 );

 #
 # Table structure for table 'Properties'
 #
 CREATE TABLE Properties (
   CHK varchar(20),
   Property_ID int(11)  DEFAULT '' NOT NULL auto_increment,
   Insertion_Date date,
   Status varchar(20),
   Property_Name_Or_Number varchar(50),
   Address_Line_1 varchar(50),
   Address_Line_2 varchar(50),
   City varchar(50),
   County varchar(50),
   Postcode varchar(12),
   PRIMARY KEY (Property_ID)
 );

 #
 # Table structure for table 'Users'
 #
 CREATE TABLE Users (
   User_ID int(11)  DEFAULT '' NOT NULL auto_increment,
   Office_ID int(11),
   Type varchar(20),
   Title varchar(4),
   Firstname varchar(20),
   Lastname varchar(20),
   Password varchar(20)  DEFAULT '' NOT NULL ,
   Email varchar(50),
   PRIMARY KEY (User_ID)
 );

 # --- Dump ends ---



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Problem With FulltText Index and VarChar

2005-12-30 Thread Michael Stearne
On 12/30/05, Gleb Paharenko [EMAIL PROTECTED] wrote:
 Hello.

  I guess my question is, how do I make sure the full text search is
  being done against the CommentsIDX and not just against the individual
  fields...


 This shows that the search is done against the concatenation of the fields:

 mysql> select a, b, match(a,b) against('search words') from ab where
 match(a,b) against('+search +words' in boolean mode);
 +--------+-------+------------------------------------+
 | a      | b     | match(a,b) against('search words') |
 +--------+-------+------------------------------------+
 | search | words |                    1.7514755725861 |
 +--------+-------+------------------------------------+


Thanks so much for your help.  Everything makes sense.  One thing I did
come across, though, is that if you are using a multi-column index you
have to include all of the index's columns in your SELECT statement,
e.g. MATCH (Comments, Heading, ...)
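To make that concrete, a small sketch against the table definition quoted
below (the boolean-mode search terms are invented):

SELECT id, Heading
FROM properties
WHERE MATCH (Comments, Subtype, Heading, Zip, Address1, Neighborhood, City, Country)
      AGAINST ('+waterfront +balcony' IN BOOLEAN MODE);
-- the column list matches the CommentsIDX definition exactly, so MySQL can
-- use that FULLTEXT index; listing only some of its columns fails with
-- "Can't find FULLTEXT index matching the column list".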

Thanks,
Michael

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Problem With FulltText Index and VarChar

2005-12-29 Thread Michael Stearne
On 12/28/05, Gleb Paharenko [EMAIL PROTECTED] wrote:
 Hello.

  #1054 - Unknown column 'CommentsIDX' in 'where clause'
  My table structure contains:FULLTEXT KEY `CommentsIDX` (`Comments`)

 You should use column names not index names in your queries. Please,
 provide CREATE statement for your tables and problematic queries. With
 this information it'll be easier to help you.


Thanks. This is a snippet of the table structure:
CREATE TABLE `properties` (
  `id` int(11) unsigned NOT NULL auto_increment,
  `UserID` int(11) unsigned NOT NULL default '0',
  `Type` enum('Commercial','Residential') NOT NULL default 'Residential',
  `Subtype` varchar(64) NOT NULL default '0',
  `Zip` varchar(10) NOT NULL default '',
  `Heading` varchar(84) NOT NULL default '',
  `Address1` varchar(32) NOT NULL default '',
  `Address2` varchar(32) default NULL,
.
  KEY `TypeSubType` (`Type`,`Subtype`),
  KEY `CityHood` (`City`,`Neighborhood`),
  FULLTEXT KEY `CommentsIDX`
(`Comments`,`Subtype`,`Heading`,`Zip`,`Address1`,`Neighborhood`,`City`,`Country`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=101131 ;

I guess my question is, how do I make sure the full text search is
being done against the CommentsIDX and not just against the individual
fields...

SELECT * FROM properties WHERE MATCH
(Comments,Subtype,Heading,Zip,Address1,Neighborhood,City,Country)
AGAINST ('search words' WITH QUERY EXPANSION)

Thanks,
Michael



 Michael Stearne wrote:
  I am trying to do a fulltext search with a multi-field index using MySQL 4.1.15.

  When I create a full-text index on my Comments field, which is of type
  TEXT, I can do a fulltext search fine.

  But when I add another field (like a varchar or even Text) to that
  index, or change the name of the index, I get an error like:
 
  SELECT * FROM properties WHERE MATCH (CommentsIDX)
  AGAINST (
  'item1 OR item2 AND (item3 AND item4 NOT (item5 OR item6))'
  WITH QUERY EXPANSION
  )
 
  MySQL said: Documentation
  #1054 - Unknown column 'CommentsIDX' in 'where clause'
 
  My table structure contains:FULLTEXT KEY `CommentsIDX` (`Comments`)
 
  Any ideas?
 
  Thanks
 


 --
 For technical support contracts, goto https://order.mysql.com/?ref=ensita
 This email is sponsored by Ensita.NET http://www.ensita.net/
__  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
  / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
 /_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
___/   www.mysql.com




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Problem With FulltText Index and VarChar

2005-12-27 Thread Michael Stearne
I am trying to do a fulltext search with a multi-field index using MySQL 4.1.15.

When I create a full-text index on my Comments field, which is of type
TEXT, I can do a fulltext search fine.

But when I add another field (like a varchar or even Text) to that
index, or change the name of the index, I get an error like:

SELECT * FROM properties WHERE MATCH (CommentsIDX)
AGAINST (
'item1 OR item2 AND (item3 AND item4 NOT (item5 OR item6))'
WITH QUERY EXPANSION
)

MySQL said: Documentation
#1054 - Unknown column 'CommentsIDX' in 'where clause'

My table structure contains:FULLTEXT KEY `CommentsIDX` (`Comments`)

Any ideas?

Thanks

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Problem with FullText Indexes

2005-12-27 Thread Michael Stearne
I am trying to do a fulltext search with a multi-field index using MySQL 4.1.15.

When I create a full-text index on my Comments field, which is of type
TEXT, I can do a fulltext search fine.

But when I add another field (like a varchar or even Text) to that
index, or change the name of the index, I get an error like:

SELECT * FROM properties WHERE MATCH (CommentsIDX)
AGAINST (
'item1 OR item2 AND (item3 AND item4 NOT (item5 OR item6))'
WITH QUERY EXPANSION
)

MySQL said: Documentation
#1054 - Unknown column 'CommentsIDX' in 'where clause'

My table structure contains:FULLTEXT KEY `CommentsIDX` (`Comments`)

Any ideas?

Thanks

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MACOSX and PHP and MYSQL

2002-04-02 Thread Michael Stearne

There is a great page with instructions and binaries for this stuff at 
http://www.entropy.ch/software/macosx/

Michael

Bensin Joseph wrote:
 I have MACOSX 10.1.3.
 PHP runs fine with no problem. Could someone help me install, or direct me
 to where I can get instructions on how to install MySQL on my machine,
 please?
 
 -
 Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)
 
 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail [EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
 




-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: OS X

2002-03-05 Thread Michael Stearne

Try this link, http://www.entropy.ch/software/macosx/ ; there are great
instructions and binaries there.

Michael



On Tuesday, March 5, 2002, at 06:38 PM, Ray Hughes wrote:

 I have been going out of 'my' mind.  I am trying to install mysql on 
 a Macintosh OS X server.  Can anyone give me instructions on how to 
 do this?  I have been to numerous sites with no luck.  We run OS 
 10.1.2 server,  I have mySQL 3.23.49 running (from entropy).  During 
 the install I had to change the hostname of my server but that 
 seemed easy enough.  The reason for the database is for WebEvent 
 calendar software.  Now when I try to install the WebEvent software 
 it tells me I need the DBI  module installed.  I have tried CPAN, it 
 tells me it can't find make (I am running Perl 5.6) and something 
 about MD5 security.  Is there anywhere a program that installs all 
 of this in one, easy to use package?  All I am looking for is a 
 small database for a school.  Also where can I find information 
 about the way my server file system works.  I am not a UNIX person 
 but I do need to begin learning.--Ray



 -
 Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail mysql-unsubscribe-
 [EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Tighly packed table

2002-01-25 Thread Michael Stearne

Wow, I feel like I wasted time just asking! :-) But my goal was not to
save disk space; it was to optimize the queries on the table, as this is a
test project for mySQL/OS X versus Unify/SCO or Unify/Linux or mySQL/Linux.

I am kind of partial to the mySQL/OS X combo, so I wanted it to work as
optimized as possible, considering it is a 500MHz iMac G3 going against a
DP PII 850 Linux box.  Like I said, the results are very promising.

Thanks,
Michael


DL Neil wrote:

Michael,

Let's round it up to 3 million rows (I'm lazy at math too!)
Let's say you currently allow 15 bytes per name.
Let's say the longest name on file is 12 characters.

The exercise would save 3 bytes/row multiplied by 3M rows, ie 9MB
(yes, let's ignore binary-decimal differences too)

If you had two name fields (first- and family-name).
Woohoo that's a potential saving of 18MB
I'm also generous (to a fault) so round it up to 20MB.

If you go out to buy a small PC HDD today, the smallest catalog product might be 40GB
(let's assume they quote formatted capacity - they don't, but there am I being 
half-full/-baked again)

Thus we have the ability to save 0.0005% against total capacity of a new drive.
Of course, the designer might have allowed way too much name-space (pun hah!) or the 
table may have other
'compressible' columns.
Let's go for a saving of 0.001%

A new drive costs how much?
Your hourly rate is how much?
How long will the job take you?
How many cups of coffee is that?
Can the client carry the cost of all that coffee?
Won't your stomach rebel at the mistreatment?

Mind you, most of the above is made up - I don't have any faults!
Time for me to go refill my glass (with healthy fruit juice)!
=dn

PS after enjoying myself, let me point out that if the 'name' fields are currently 
defined as variable length,
this exercise would allow you to make them fixed length. If you can 'wipe out' all 
the variable width columns in
the table, performance will improve significantly!


Hahaha.  This is a static database.  But you are right I don't know
how much this will actually help.  Hard disk isn't an issue.  It was
just an experiment...(that I have no time for anyway!)

Thanks,
Michael


On Friday, January 25, 2002, at 06:19 PM, DL Neil wrote:

...and because no one has been really cynical...

After that query runs, then prepare for a coffee overload whilst you
perform the ALTER TABLE, then get ready
because if you shorten the field to (say) 12 characters/bytes the
very next day, someone with a 13 character
name is going to try to register!

I'm wondering just how much space this 'little' exercise is going to
save, either as a ratio of the size of the
db, or as a ratio of HDD size?

My glass is half-empty!
=dn


- Original Message -
From: Michael Stearne [EMAIL PROTECTED]
To: Roger Karnouk [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: 24 January 2002 22:58
Subject: Re: Tighly packed table


The problem is, this query really hurts (I don't know if it finishes)
for unindexed field for 2.9 million rows.  But I'm sure it will finish
eventually.

Michael

Roger Karnouk wrote:

select max(length(firstname)) from TableName;

-Original Message-
From: Michael Stearne [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 24, 2002 4:38 PM
To: Christopher Thompson
Cc: [EMAIL PROTECTED]
Subject: Re: Tighly packed table


Christopher Thompson wrote:

At 04:10 PM 1/24/2002 -0500, Michael Stearne wrote:

We have a somewhat large read-only table (2.9 million recs). I am
wonder if there is a utility that will look at each row of each
columns and come up with a summary of the largest field (in
character
length) for each column.  For example, scan each row's firstname
field and report that the longest first name is 12 characters.
That
way I can ALTER the firstname field to be a char or varchar of 12?
What would be better BTW?

I don't know if CHAR or VARCHAR is better for you but as to the
query
here, it would seem easiest to write a short program to query all
the
rows and programatically determine the longest column length.

That said, you could probably set up a SQL statement for it.
There's
a LENGTH function in SQL, isn't there?  The statement would look
SIMILAR to the following:

SELECT MAX(LENGTH(t1.FIRSTNAME)) AS fnamelength FROM TableFoo t1,
TableFoo t2 WHERE LENGTH(t2.FIRSTNAME) = fnamelength;

Looks good to me, thanks.

Michael

(Please note that my university SQL instructor pointed out that I
wrote SQL statements backwards to anyone else he had ever
taught.  For
that matter, I did Prolog backwards, too.  :)



-
Before posting, please check:
 http://www.mysql.com/manual.php   (the manual)
 http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail
[EMAIL PROTECTED]
Trouble unsubscribing? Try:
http://lists.mysql.com/php/unsubscribe.php

Re: Tighly packed table

2002-01-25 Thread Michael Stearne

Yeah, I think in the end what I will do is change a lot of the columns
back to char from varchar.  I was thinking varchar would save space, making
for a smaller, faster DB, but the inherent overhead of a varchar field is
not worth the space savings, which DL made crystal clear.

Thanks,
Michael


DL Neil wrote:

Michael: see also my PS comment

Dobromir: Michael and I were joking between us, hence the silly comments appearing.
I apologise if this did not communicate.
IMHO the pragmatics of the exercise made it a waste of time/effort - even when I 
over-stated the savings at
every opportunity!
On a Friday afternoon a little speculation and humor is a good way to start the 
weekend!

You are 100% correct, the disk space occupied by a table is not the sum of the
lengths of its data-rows. For example, there is always space left for
expansion/INSERTions.
However in this case, because it is a R/O table, it could be squashed right down.

I cannot comment if a table containing varchar/variable length fields can be 
compressed more or less than a
table with only fixed length fields.
Basically varchar allows one to potentially 'trade' disk space savings for a 
degradation in query response
times.

Some do not realise that by removing variable-length fields to a 'companion table',
any queries which access the (fixed-length) table without needing to look at the
variable-length field(s) will speed up significantly.

Thanks for providing some 'real' numbers. That was of interest.
=dn
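A tiny sketch of the 'companion table' layout mentioned above, with invented
table and column names, just to make the idea concrete: the hot, fixed-width
columns stay in one table and the variable-width text moves to a 1:1 side table
keyed by the same id, so queries that never touch the text read only
fixed-length rows:

CREATE TABLE people_fixed (
  id        INT UNSIGNED NOT NULL PRIMARY KEY,
  firstname CHAR(15) NOT NULL DEFAULT '',
  lastname  CHAR(20) NOT NULL DEFAULT ''
) TYPE=MyISAM;

CREATE TABLE people_notes (
  id    INT UNSIGNED NOT NULL PRIMARY KEY,   -- same id as people_fixed
  notes TEXT
) TYPE=MyISAM;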


- Original Message -
From: Dobromir Velev [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: 25 January 2002 16:11
Subject: RE: Tighly packed table


Hi,
If your column is of type VARCHAR, you won't save much space (at least not as
much as DL Neil said). The specification of the varchar column type is that
it uses only as many bytes as the data in it. Of course this will make your
indexes smaller (if this column is indexed).
A few days ago I decided to optimize one of my tables (5 million rows) and
altered a varchar(250) field to a varchar(100).
The size of the MYD data file changed by less than 1MB, so you can see that
there was not much use in doing it.

Dobromir Velev
Software Developer
http://www.websitepulse.com/


-Original Message-
From: DL Neil [mailto:[EMAIL PROTECTED]]
Sent: Saturday, January 26, 2002 5:39 PM
To: Michael Stearne
Cc: Michael Stearne; Roger Karnouk; [EMAIL PROTECTED]
Subject: Re: Tighly packed table


Michael,

Let's round it up to 3 million rows (I'm lazy at math too!)
Let's say you currently allow 15 bytes per name.
Let's say the longest name on file is 12 characters.

The exercise would save 3 bytes/row multiplied by 3M rows, ie 9MB
(yes, let's ignore binary-decimal differences too)

If you had two name fields (first- and family-name).
Woohoo that's a potential saving of 18MB
I'm also generous (to a fault) so round it up to 20MB.

If you go out to buy a small PC HDD today, the smallest catalog product
might be 40GB
(let's assume they quote formatted capacity - they don't, but there am I
being half-full/-baked again)

Thus we have the ability to save 0.0005% against total capacity of a new
drive.
Of course, the designer might have allowed way too much name-space (pun
hah!) or the table may have other
'compressible' columns.
Let's go for a saving of 0.001%

A new drive costs how much?
Your hourly rate is how much?
How long will the job take you?
How many cups of coffee is that?
Can the client carry the cost of all that coffee?
Won't your stomach rebel at the mistreatment?

Mind you, most of the above is made up - I don't have any faults!
Time for me to go refill my glass (with healthy fruit juice)!
=dn

PS after enjoying myself, let me point out that if the 'name' fields are
currently defined as variable length,
this exercise would allow you to make them fixed length. If you can 'wipe
out' all the variable width columns in
the table, performance will improve significantly!


Hahaha.  This is a static database.  But you are right I don't know
how much this will actually help.  Hard disk isn't an issue.  It was
just an experiment...(that I have no time for anyway!)

Thanks,
Michael


On Friday, January 25, 2002, at 06:19 PM, DL Neil wrote:

...and because no one has been really cynical...

After that query runs, then prepare for a coffee overload whilst you
perform the ALTER TABLE, then get ready
because if you shorten the field to (say) 12 characters/bytes the
very next day, someone with a 13 character
name is going to try to register!

I'm wondering just how much space this 'little' exercise is going to
save, either as a ratio of the size of the
db, or as a ratio of HDD size?

My glass is half-empty!
=dn


- Original Message -
From: Michael Stearne [EMAIL PROTECTED]
To: Roger Karnouk [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: 24 January 2002 22:58
Subject: Re: Tighly packed table


The problem is, this query really hurts (I don't know if it finishes)
for unindexed field for 2.9 million rows

Re: How to unjar a package!

2002-01-25 Thread Michael Stearne

It's like tar.  I do jar xvf file.jar

man jar  even works!  :-)

Michael


Rahadul Kabir wrote:

Can someone please tell me how to unjar a package, like a package which
comes with a .jar extension (executable file)?
thanks

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php





-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Varchars Vs. Chars on Read Only Table

2002-01-24 Thread Michael Stearne

When doing selects on a read-only, packed MyISAM table, are varchar
fields still slower than char fields?

Thanks,
Michael




-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Tighly packed table

2002-01-24 Thread Michael Stearne

We have a somewhat large read-only table (2.9 million recs). I am wondering
if there is a utility that will look at each row of each column and
come up with a summary of the largest value (in character length) for
each column.  For example, scan each row's firstname field and report
that the longest first name is 12 characters.  That way I can ALTER the
firstname field to be a char or varchar of 12.  Which would be better, BTW?
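As a sketch of what such a scan could look like in plain SQL (the table and
column names are placeholders for the real ones), either take the maximum
length of each column in one pass, or let PROCEDURE ANALYSE() propose column
definitions; both read every row once:

SELECT MAX(LENGTH(firstname)) AS firstname_max,
       MAX(LENGTH(lastname))  AS lastname_max
FROM bigtable;

-- or have MySQL suggest an 'optimal' definition per column:
SELECT firstname, lastname FROM bigtable PROCEDURE ANALYSE();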

Thanks,
Michael Stearne


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Tighly packed table

2002-01-24 Thread Michael Stearne

Christopher Thompson wrote:

 At 04:10 PM 1/24/2002 -0500, Michael Stearne wrote:

 We have a somewhat large read-only table (2.9 million recs). I am 
 wonder if there is a utility that will look at each row of each 
 columns and come up with a summary of the largest field (in character 
 length) for each column.  For example, scan each row's firstname 
 field and report that the longest first name is 12 characters.  That 
 way I can ALTER the firstname field to be a char or varchar of 12?  
 What would be better BTW?


 I don't know if CHAR or VARCHAR is better for you but as to the query 
 here, it would seem easiest to write a short program to query all the 
 rows and programatically determine the longest column length.

 That said, you could probably set up a SQL statement for it.  There's 
 a LENGTH function in SQL, isn't there?  The statement would look 
 SIMILAR to the following:

 SELECT MAX(LENGTH(t1.FIRSTNAME)) AS fnamelength FROM TableFoo t1, 
 TableFoo t2 WHERE LENGTH(t2.FIRSTNAME) = fnamelength;

Looks good to me, thanks.

Michael


 (Please note that my university SQL instructor pointed out that I 
 wrote SQL statements backwards to anyone else he had ever taught.  For 
 that matter, I did Prolog backwards, too.  :)



 -
 Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail 
 [EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php





-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Tighly packed table

2002-01-24 Thread Michael Stearne

The problem is, this query really hurts (I don't know if it finishes)
on an unindexed field across 2.9 million rows.  But I'm sure it will finish
eventually.

Michael

Roger Karnouk wrote:

select max(length(firstname)) from TableName;

-Original Message-
From: Michael Stearne [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 24, 2002 4:38 PM
To: Christopher Thompson
Cc: [EMAIL PROTECTED]
Subject: Re: Tighly packed table


Christopher Thompson wrote:

At 04:10 PM 1/24/2002 -0500, Michael Stearne wrote:

We have a somewhat large read-only table (2.9 million recs). I am 
wonder if there is a utility that will look at each row of each 
columns and come up with a summary of the largest field (in character 
length) for each column.  For example, scan each row's firstname 
field and report that the longest first name is 12 characters.  That 
way I can ALTER the firstname field to be a char or varchar of 12?  
What would be better BTW?


I don't know if CHAR or VARCHAR is better for you but as to the query 
here, it would seem easiest to write a short program to query all the 
rows and programatically determine the longest column length.

That said, you could probably set up a SQL statement for it.  There's 
a LENGTH function in SQL, isn't there?  The statement would look 
SIMILAR to the following:

SELECT MAX(LENGTH(t1.FIRSTNAME)) AS fnamelength FROM TableFoo t1, 
TableFoo t2 WHERE LENGTH(t2.FIRSTNAME) = fnamelength;


Looks good to me, thanks.

Michael

(Please note that my university SQL instructor pointed out that I 
wrote SQL statements backwards to anyone else he had ever taught.  For 
that matter, I did Prolog backwards, too.  :)



-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail 
[EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php





-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail
[EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php






-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Tighly packed table

2002-01-24 Thread Michael Stearne

I actually have all the records in a 1GB text file, so here comes Perl
to the rescue!!  (Easier than C to me; maybe I'll do it in Java as an
exercise.)

BTW, if anyone has any questions on the ability of Mac OS X with
medium-size DBs: this DB is running on a 500MHz iMac G3 with 1GB of RAM and
Mac OS X, which seems to be performing better than the SCO and Linux
boxes running the same DBs.

Thanks,
Michael


Christopher Thompson wrote:

 At 05:58 PM 1/24/2002 -0500, Michael Stearne wrote:

 The problem is, this query really hurts (I don't know if it finishes) 
 for unindexed field for 2.9 million rows.  But I'm sure it will 
 finish eventually.


 Yes, it will really hurt.  After all, there's no way for MySQL to do 
 this other than pulling in this column one at a time from each and 
 every row in the table and doing the string count.

 See if you can do a SELECT COUNT(*) FROM Table in a reasonable amount 
 of time.  If you can, you may want to do this programatically in a 
 small C++ (or C, or whatever) program.  It _may_ be faster.  It may 
 not, there'll be a lot of network traffic.  SQL servers tend to be 
 rather bad at string manipulation, something like this would be easy 
 in C or in Perl.








-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Tighly packed table

2002-01-24 Thread Michael Stearne

Hahaha.  This is a static database.  But you are right, I don't know
how much this will actually help.  Hard disk isn't an issue.  It was
just an experiment... (that I have no time for anyway!)

Thanks,
Michael


On Friday, January 25, 2002, at 06:19 PM, DL Neil wrote:

 ...and because no one has been really cynical...

 After that query runs, then prepare for a coffee overload whilst you 
 perform the ALTER TABLE, then get ready
 because if you shorten the field to (say) 12 characters/bytes the 
 very next day, someone with a 13 character
 name is going to try to register!

 I'm wondering just how much space this 'little' exercise is going to 
 save, either as a ratio of the size of the
 db, or as a ratio of HDD size?

 My glass is half-empty!
 =dn


 - Original Message -
 From: Michael Stearne [EMAIL PROTECTED]
 To: Roger Karnouk [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: 24 January 2002 22:58
 Subject: Re: Tighly packed table


 The problem is, this query really hurts (I don't know if it finishes)
 for unindexed field for 2.9 million rows.  But I'm sure it will finish
 eventually.

 Michael

 Roger Karnouk wrote:

 select max(length(firstname)) from TableName;

 -Original Message-
 From: Michael Stearne [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, January 24, 2002 4:38 PM
 To: Christopher Thompson
 Cc: [EMAIL PROTECTED]
 Subject: Re: Tighly packed table


 Christopher Thompson wrote:

 At 04:10 PM 1/24/2002 -0500, Michael Stearne wrote:

 We have a somewhat large read-only table (2.9 million recs). I am
 wonder if there is a utility that will look at each row of each
 columns and come up with a summary of the largest field (in 
 character
 length) for each column.  For example, scan each row's firstname
 field and report that the longest first name is 12 characters.  
 That
 way I can ALTER the firstname field to be a char or varchar of 12?
 What would be better BTW?


 I don't know if CHAR or VARCHAR is better for you but as to the 
 query
 here, it would seem easiest to write a short program to query all 
 the
 rows and programatically determine the longest column length.

 That said, you could probably set up a SQL statement for it.  
 There's
 a LENGTH function in SQL, isn't there?  The statement would look
 SIMILAR to the following:

 SELECT MAX(LENGTH(t1.FIRSTNAME)) AS fnamelength FROM TableFoo t1,
 TableFoo t2 WHERE LENGTH(t2.FIRSTNAME) = fnamelength;


 Looks good to me, thanks.

 Michael

 (Please note that my university SQL instructor pointed out that I
 wrote SQL statements backwards to anyone else he had ever 
 taught.  For
 that matter, I did Prolog backwards, too.  :)



 -
 Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail
 [EMAIL PROTECTED]
 Trouble unsubscribing? Try: 
 http://lists.mysql.com/php/unsubscribe.php





 -
 Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail
 [EMAIL PROTECTED]
 Trouble unsubscribing? Try: 
 http://lists.mysql.com/php/unsubscribe.php






 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail mysql-unsubscribe-
 [EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail mysql-unsubscribe-
 [EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php