Re: why so many table scans?

2005-07-25 Thread Michael Stassen

Chris Kantarjiev wrote:


I'm looking at the stats on one of our servers and trying to understand
why Handler_read_rnd_next is so high. It's  256.5M right now, which is
about 10x the total number of reported queries.

The machine is being used, almost entirely, for queries of the form:

select * from crumb 
 where link_id is null 
   and latitude > 39 
   and longitude > -98 
 limit 1;


link_id is indexed. There are about 8 million rows in the table,
and most of them have link_id = null right now. latitude and longitude
are not indexed - but my understanding is that mySQL will only
use one index, and link_id is the interesting one for us.

Are the latitude and longitude qualifiers the cause of the table scans?


Table scans happen when there is no *useful* index.  To be useful, an index 
must both exist and return a relatively small number of rows, where relatively 
small means less than approximately 30% of the rows in the table.  More than 
that, and the table scan is usually faster than using the index.


The optimizer considers the restrictions in your WHERE clause, looks for 
possible indexes, and eliminates non-useful indexes from consideration.  If 
any indexes are left, it chooses the best one (the one which returns the 
fewest rows).
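One way to watch this decision being made is EXPLAIN. A sketch, reusing the
query and table from the original post (run it against the real server to see
the optimizer's actual choice):

```sql
-- "crumb" and its columns are taken from the question above.
EXPLAIN SELECT * FROM crumb
 WHERE link_id IS NULL
   AND latitude > 39
   AND longitude > -98
 LIMIT 1;

-- "type: ALL" together with "key: NULL" means a full table scan was chosen;
-- "rows" is the optimizer's estimate of how many rows it will examine.
```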


Devananda wrote:
> Since most of the rows have link_id = NULL, using that as the first
> condition in the WHERE clause does not help MySQL reduce the number of
> rows it needs to check (therefore MySQL prefers to do a full table scan).

Sort of.  The order of requirements in the WHERE clause is irrelevant.  The 
problem is that most rows have link_id set to NULL, so the index on link_id 
is not a useful index for this query.  An index on latitude or longitude might 
be useful, if one of those conditions sufficiently restricted the number of 
rows to consider.


> My suggestion is to create a composite index on (latitude, longitude,
> link_id) and change your queries to
>
> SELECT * FROM crumb WHERE latitude > 39 AND longitude > -98 AND link_id
> IS NULL LIMIT 1;
>
> I think this will use the composite index, thus avoiding a full table scan.

No composite index will be fully used here.  MySQL uses composite indexes from 
left to right, *stopping at the first key part used in a range* rather than for 
an equality match.  "WHERE latitude > 39" is a range, so the composite index on 
(latitude, longitude, link_id) will be no better than a single-column index on 
latitude.
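This left-to-right rule is easy to check with EXPLAIN. A sketch, assuming the
composite index proposed in the quoted suggestion has been created (index name
is made up here):

```sql
-- Hypothetical index, as proposed in the suggestion under discussion:
ALTER TABLE crumb ADD INDEX lat_lng_link (latitude, longitude, link_id);

EXPLAIN SELECT * FROM crumb
 WHERE latitude > 39 AND longitude > -98 AND link_id IS NULL
 LIMIT 1;

-- If the index is chosen at all, key_len in the output covers only the
-- latitude key part: the range condition "latitude > 39" stops key usage
-- there, so the longitude and link_id parts go unused.
```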


> See http://dev.mysql.com/doc/mysql/en/mysql-indexes.html for in depth
> info on how MySQL uses indexes.
>
> As a side note, it's a good idea to run EXPLAIN on your queries to see
> what index they are using, what cardinality the index has, and pay close
> attention to the "Extra" field in the results of EXPLAIN. That field
> will tell you if MySQL thinks it has to do a full table scan, create a
> temporary table, etc.

Absolutely.  Run EXPLAIN.

> Regards,
> Devananda vdv

Bruce Dembecki wrote:
> Yes, they almost certainly are the cause of the problem.

Adding restrictions to the WHERE clause joined with AND cannot prevent the use 
of an index.  Indeed, extra restrictions with matching indexes increase the 
chance an index can be used.


> A query may
> only use one index, but the table can have several, and the MySQL
> Optimizer will choose the most appropriate index for the query it is
> running. In a case such as this where most entries have a null link_id
> you are hurting from having no index covering the other columns. If
> this was all the table was going to do and all your queries were of
> this form, you could make an index on link_ID, latitude, longitude...
> as long as most everything is null it's the equivalent of using an
> index on latitude, longitude... but as things change (I assume you
> don't expect them to stay null) your one index will accommodate that...

No composite index with longitude last will ever look at the longitude part of 
the index if the latitude part is used to satisfy a range restriction, as is 
the case here.  Separate indexes on latitude and longitude are actually better 
in this case than a composite of the two, as that would roughly double the 
chance of an index being useful.


> However... when you say what's important to you is the link_id, I
> assume you mean that's what is important in the result... not what is
> important in the search itself (as it clearly isn't now if they are
> mostly null). The thing to remember is, while a query may use only one
> index, a MySQL table can have many (don't go nuts here), so by adding
> an index for latitude, longitude you are buying yourself a bunch of
> performance.

It is entirely possible that this (type of) query will not benefit from any 
index, in which case adding indexes will only slow inserts without any speedup 
in selects.


> Beware of course... too many indexes or too complicated and they can be
> a performance issue in their own right. The trick is to put in the
> right indexes for your d

Re: Quotation marks in string causing replace to not work.

2005-07-25 Thread Ludwig Pummer

Gregory Machin wrote:
I tried the following  


UPDATE temp SET 'file_content' = REPLACE(file_content, '' ,
'');

but it didn't work. I think the problem is that the string I need to
replace/null out has quotation marks in it. How can I work around this?


You need to escape the quotation marks. Also, did you mean to write 
'file_content', or `file_content` ?


Try

UPDATE temp SET `file_content` = REPLACE(file_content, 'require($_SERVER[\'DOCUMENT_ROOT\']."/scripts/define_access.php");?>', '');
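For reference, a quote inside a string literal can be written several ways; a
standalone sketch (no table needed, so it can be tried directly):

```sql
-- Backslash-escape the quote:
SELECT REPLACE('don\'t', '\'', '');    -- dont
-- Or double it:
SELECT REPLACE('don''t', '''', '');    -- dont
-- Or switch quote styles (works as long as the ANSI_QUOTES SQL mode is off):
SELECT REPLACE("don't", "'", "");      -- dont
```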


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to select first 1000 records like MySQL Control Center 0.9?

2005-07-25 Thread Peter Brawley




Siegfried Heintze wrote:
>Some dialects of SQL have SELECT [FIRST|TOP 1000] clause for their SELECT
>syntax. I looked at the syntax for mysql and it does not appear to have this
>feature.

>Apparently, however, this is possible because the MySQL Control Center does
>this. How does it do it?
See
  [LIMIT {[offset,] row_count | row_count OFFSET offset}]
at http://dev.mysql.com/doc/mysql/en/select.html
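In other words, MySQL spells TOP/FIRST as a LIMIT clause. A sketch with a
placeholder table name:

```sql
-- First 1000 rows (what the Control Center does behind the scenes):
SELECT * FROM mytable LIMIT 1000;

-- Rows 1001..2000, using the "offset, count" form:
SELECT * FROM mytable LIMIT 1000, 1000;
```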

PB





How to select first 1000 records like MySQL Control Center 0.9?

2005-07-25 Thread Siegfried Heintze
Some dialects of SQL have SELECT [FIRST|TOP 1000] clause for their SELECT
syntax. I looked at the syntax for mysql and it does not appear to have this
feature.

Apparently, however, this is possible because the MySQL Control Center does
this. How does it do it?

Thanks,
Siegfried





RE: Mysqld chewing up cpu in the background.

2005-07-25 Thread Richard Dale
> I am fairly sure that there aren't any queries being run, but while in
> the background, my mysqld process chews up exactly 50% of my cpu. It
> runs queries nicely and has excellent response times for most any
> query I throw at it, but it's causing issues for other apps.

Try using mytop to see if there are queries going on.

It's like the unix "top" command.
http://jeremy.zawodny.com/mysql/mytop/

Also try:
SHOW PROCESSLIST;

If you use InnoDB:
SHOW INNODB STATUS;

Best regards,
Richard Dale.
Norgate Investor Services
- Premium quality Stock, Futures and Foreign Exchange Data for
  markets in Australia, Asia, Canada, Europe, UK & USA -
www.premiumdata.net 






Re: why so many table scans?

2005-07-25 Thread Bruce Dembecki


On Jul 25, 2005, at 3:47 PM, Chris Kantarjiev wrote:



link_id is indexed. There are about 8 million rows in the table,
and most of them have link_id = null right now. latitude and longitude
are not indexed - but my understanding is that mySQL will only
use one index, and link_id is the interesting one for us.

Are the latitude and longitude qualifiers the cause of the table scans?



Yes, they almost certainly are the cause of the problem. A query may
only use one index, but the table can have several, and the MySQL
Optimizer will choose the most appropriate index for the query it is
running. In a case such as this where most entries have a null link_id
you are hurting from having no index covering the other columns. If
this was all the table was going to do and all your queries were of
this form, you could make an index on link_ID, latitude, longitude...
as long as most everything is null it's the equivalent of using an
index on latitude, longitude... but as things change (I assume you
don't expect them to stay null) your one index will accommodate that...


However... when you say what's important to you is the link_id, I
assume you mean that's what is important in the result... not what is
important in the search itself (as it clearly isn't now if they are
mostly null). The thing to remember is, while a query may use only one
index, a MySQL table can have many (don't go nuts here), so by adding
an index for latitude, longitude you are buying yourself a bunch of
performance.


Beware of course... too many indexes or too complicated and they can
be a performance issue in their own right. The trick is to put in the
right indexes for your data and for your queries, don't just add
indexes for indexing sake.


Best Regards, Bruce





Re: why so many table scans?

2005-07-25 Thread Devananda
Since most of the rows have link_id = NULL, using that as the first 
condition in the WHERE clause does not help MySQL reduce the number of 
rows it needs to check (therefore MySQL prefers to do a full table scan). 
My suggestion is to create a composite index on (latitude, longitude, 
link_id) and change your queries to


SELECT * FROM crumb WHERE latitude > 39 AND longitude > -98 AND link_id 
IS NULL LIMIT 1;


I think this will use the composite index, thus avoiding a full table scan.

See http://dev.mysql.com/doc/mysql/en/mysql-indexes.html for in depth 
info on how MySQL uses indexes.


As a side note, it's a good idea to run EXPLAIN on your queries to see 
what index they are using, what cardinality the index has, and pay close 
attention to the "Extra" field in the results of EXPLAIN. That field 
will tell you if MySQL thinks it has to do a full table scan, create a 
temporary table, etc.


Regards,
Devananda vdv


Chris Kantarjiev wrote:

I'm looking at the stats on one of our servers and trying to understand
why Handler_read_rnd_next is so high. It's  256.5M right now, which is
about 10x the total number of reported queries.

The machine is being used, almost entirely, for queries of the form:

select * from crumb 
 where link_id is null 
   and latitude > 39 
   and longitude > -98 
 limit 1;


link_id is indexed. There are about 8 million rows in the table,
and most of them have link_id = null right now. latitude and longitude
are not indexed - but my understanding is that mySQL will only
use one index, and link_id is the interesting one for us.

Are the latitude and longitude qualifiers the cause of the table scans?






RE: Alternatives to performing join on normalized joins?

2005-07-25 Thread Siegfried Heintze
Shawn (and anyone else who will listen):

 

I'm already running out of RAM (actually, virtual memory/page file space)
just trying to display all the job titles without even joining them with
anything. I have to use a LIKE clause to just get a portion of them.

 

So, I could:

(1) Have multiple database connections going concurrently where the
first one joins everything except the keywords. As I'm iterating thru the
first result set with the fetch function, I could get a list of keyword
foreign keys for each row with a second database connect and store this in a
second result set. Is this a common approach? Are secondary database
connections cheap?

(2) I could try to store the first join in a hashmap first and then
iterate but I've already demonstrated that the hashmap takes too much
memory.

(3) I could create a new column of type string for each job title. This
would contain a comma separated list of integer foreign keys for the
keywords. This is the non-normalized option and you discouraged this
approach.

(4) I could have a very wide result set. Let's assume I have a jobtitle
(joined with a job posting and company) with 26 keywords. That means 26 rows
in the result set are identical except the keyword foreign key (fk) column.
I have to then insert the logic to detect the fact that everything except
the keyword fk column is identical. Are you advocating this approach? It
seems like it requires a lot of computer space and computer time and (worst
of all) my time. I believe this is the classical approach, however.

 

Which would you choose?

 

Thanks,

 

Siegfried

 


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 25, 2005 9:31 AM
To: Siegfried Heintze
Cc: mysql@lists.mysql.com
Subject: Re: Alternatives to performing join on normalized joins?

 



"Siegfried Heintze" <[EMAIL PROTECTED]> wrote on 07/24/2005 11:35:36 AM:

> I have a large number of job titles (40K). Each job title has multiple
> keywords making a one-to-many parent-child relationship.
> 
> If I join job title with company name, address, company url, company city,
> job name, job location, job url (etc...) I have a mighty wide result set
> that will be repeated for each keyword.
> 
> What I have done in the past (in a different, much smaller, application) is
> perform a join of everything except the keyword and store everything in a
> hashmap.
> 
> Then I iterate thru each wide row in the hashmap and perform a separate
> SELECT statement for each row in this hashmap to fetch the multiple keywords.
> 
> Whew! That would be a lot of RAM (and paging) for this application.
> 
> Are there any other more efficient approaches?
> 
> Thanks,
> Siegfried
> 
> 

There are two major classes of efficiency when dealing with any RDBMS: time
efficiency (faster results) and space efficiency (stored data takes less room
on the disk). Which one are you worried about? 

If it were me, I would start with all of the data normalized: 
* a Companies table (name, address, url, city, etc) 
* a Job Titles table (a list of names) 
* a Keywords table (a list of words used to describe Job Titles) 
* a JobPosting table ( Relates Companies to Job Titles. Should also
be used to track things like dateposted, dateclosed, salary offered, etc.) 
* a Postings_Keywords table (matches a Posting to multiple
Keywords). 

I would only denormalize if testing showed a dramatic improvement in
performance by doing so. I would think that the Job Title to Keyword
relationship would be different between Companies. One company posting for a
"Programmer" may want VB while another wants PHP and PERL. By associating
the Keywords with a Posting (and not just the Job Title), you can make that
list Company-specific. 
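In DDL, the normalized layout sketched above might look like this (all table
names and column types are illustrative guesses, not from the original posts):

```sql
CREATE TABLE Companies (
  company_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name       VARCHAR(100) NOT NULL,
  address    VARCHAR(200),
  city       VARCHAR(60),
  url        VARCHAR(200)
);

CREATE TABLE JobTitles (
  title_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name     VARCHAR(100) NOT NULL
);

CREATE TABLE Keywords (
  keyword_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  word       VARCHAR(50) NOT NULL
);

-- Relates Companies to JobTitles, plus posting-specific data:
CREATE TABLE JobPostings (
  posting_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  company_id INT UNSIGNED NOT NULL,   -- references Companies
  title_id   INT UNSIGNED NOT NULL,   -- references JobTitles
  dateposted DATE,
  dateclosed DATE,
  salary     DECIMAL(10,2)
);

-- Many-to-many link between postings and keywords:
CREATE TABLE Postings_Keywords (
  posting_id INT UNSIGNED NOT NULL,
  keyword_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (posting_id, keyword_id)
);
```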


Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine 








why so many table scans?

2005-07-25 Thread Chris Kantarjiev
I'm looking at the stats on one of our servers and trying to understand
why Handler_read_rnd_next is so high. It's  256.5M right now, which is
about 10x the total number of reported queries.

The machine is being used, almost entirely, for queries of the form:

select * from crumb 
 where link_id is null 
   and latitude > 39 
   and longitude > -98 
 limit 1;

link_id is indexed. There are about 8 million rows in the table,
and most of them have link_id = null right now. latitude and longitude
are not indexed - but my understanding is that mySQL will only
use one index, and link_id is the interesting one for us.

Are the latitude and longitude qualifiers the cause of the table scans?




Re: query on a very big table

2005-07-25 Thread Michael Monashev
Hello

CA> The  query  takes  ages to run (has been running for over 10 hours
CA> now). Is this normal?

I think your indexes are too complicated. MySQL usually has to rebuild them
after an ALTER TABLE.
  

Sincerely,
Michael,
 http://xoib.com/ http://3d2f.com/
 http://qaix.com/ http://ryxi.com/
 http://gyxe.com/ http://gyxu.com/
 http://xywe.com/ http://xyqe.com/







Mysqld chewing up cpu in the background.

2005-07-25 Thread Dan Baughman
Hello All,

I am fairly sure that there aren't any queries being run, but while in
the background, my mysqld process chews up exactly 50% of my CPU. It
runs queries nicely and has excellent response times for most any
query I throw at it, but it's causing issues for other apps.

What can I do to find out what it's doing?  This started happening
shortly after I did a full-text index with no stop words or minimum
word length on a database with a few million entries, perhaps a
paragraph or two each.  Is there some log somewhere where mysqld will
tell me what it's trying to do?

Please help,
Thanks,
Dan Baughman




Re: Phone Number Storage

2005-07-25 Thread Bruce Dembecki


On Jul 25, 2005, at 1:23 PM, Sujay Koduri wrote:


I guess anywhere we have 3 levels of hierarchies for a phone number
(country code, area code and the actual number).
The advantage of separating them into different columns (either an integer
or a string) is that he can group different phone numbers based on area
code or country code.



The key issue is less separating them into columns, but more one of
global use... As Joerg was saying, many countries have area codes or
even phone numbers that start with a zero - 0. If you store phone
numbers as an integer the leading zero will be dropped, thus meaning
you are storing incomplete data.


It does also in most applications make sense to store the area code
and country code as separate strings... but don't fall into the trap
of thinking a zero at the front of a phone number or an area code and
even some country codes isn't important.


Best Regards, Bruce




RE: Phone Number Storage

2005-07-25 Thread Bartis, Robert M (Bob)
That may be true, but I don't think the arguments provided by Joerg necessitate 
a single column or multiple columns. His points (leading zeros, sorting, etc.) go 
more to the native data type that should be used and are valid in either case.

Bob

-Original Message-
From: Sujay Koduri [mailto:[EMAIL PROTECTED]
Sent: Monday, July 25, 2005 4:23 PM
To: Joerg Bruehe; mysql@lists.mysql.com
Cc: Asad Habib
Subject: RE: Phone Number Storage



I guess anywhere we have 3 levels of hierarchies for a phone number
(country code, area code and the actual number).
The advantage of separating them into different columns (either an integer or
a string) is that he can group different phone numbers based on area code or
country code. 

sujay 

-Original Message-
From: Joerg Bruehe [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 26, 2005 1:34 AM
To: mysql@lists.mysql.com
Cc: Sujay Koduri; Asad Habib
Subject: Re: Phone Number Storage

Hi!

Sujay Koduri wrote (re-ordered):
> -Original Message-
> From: Asad Habib [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 25, 2005 11:53 PM
> To: mysql@lists.mysql.com
> Subject: Phone Number Storage
> 
> Is it better to store phone numbers as strings or as integers? 
> Of course, storing them as integers saves space but this requires 
> extra processing of the user's input (i.e. CPU time). Are there any 
> other advantages/disadvantages of doing it one way or the other?
> 
> - Asad


> I think it is better to store the phone numbers as strings only. As 
> phone numbers may also include '-', if you allow entering 
> international numbers, it is good to store them as strings only.
> 
> Or you can ask the area code and the actual number separately and 
> store them separately in two columns as integers.
> 

IMO, this is quite a USA-centric view in the answer: In general, phone
numbers will also contain a country code.

Outside the USA, it is quite common that codes (area or country) may begin
with a leading "0" which any numeric type would drop as not significant, so
you _must_ use strings for these.

Also: A telephone number is not a numeric value; arithmetic operations do not
make sense on it. Think of extensions: phone numbers 1234-0 and
1234-56 are related, so you would order them (if at all) as strings and not
as numeric values.

The same applies to postal codes, social security numbers, part numbers etc.

While you may use a numeric type for some ID value you want to generate
yourself (using autoincrement), IMO this is on the borderline of correct
modeling. For phone numbers, you should use strings.

HTH,
Joerg

--
Joerg Bruehe, Senior Production Engineer MySQL AB, www.mysql.com




RE: Phone Number Storage

2005-07-25 Thread Sujay Koduri

I guess anywhere we have 3 levels of hierarchies for a phone number
(country code, area code and the actual number).
The advantage of separating them into different columns (either an integer or
a string) is that he can group different phone numbers based on area code or
country code. 

sujay 

-Original Message-
From: Joerg Bruehe [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 26, 2005 1:34 AM
To: mysql@lists.mysql.com
Cc: Sujay Koduri; Asad Habib
Subject: Re: Phone Number Storage

Hi!

Sujay Koduri wrote (re-ordered):
> -Original Message-
> From: Asad Habib [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 25, 2005 11:53 PM
> To: mysql@lists.mysql.com
> Subject: Phone Number Storage
> 
> Is it better to store phone numbers as strings or as integers? 
> Of course, storing them as integers saves space but this requires 
> extra processing of the user's input (i.e. CPU time). Are there any 
> other advantages/disadvantages of doing it one way or the other?
> 
> - Asad


> I think it is better to store the phone numbers as strings only. As 
> phone numbers may also include '-', if you allow entering 
> international numbers, it is good to store them as strings only.
> 
> Or you can ask the area code and the actual number separately and 
> store them separately in two columns as integers.
> 

IMO, this is quite a USA-centric view in the answer: In general, phone
numbers will also contain a country code.

Outside the USA, it is quite common that codes (area or country) may begin
with a leading "0" which any numeric type would drop as not significant, so
you _must_ use strings for these.

Also: A telephone number is not a numeric value; arithmetic operations do not
make sense on it. Think of extensions: phone numbers 1234-0 and
1234-56 are related, so you would order them (if at all) as strings and not
as numeric values.

The same applies to postal codes, social security numbers, part numbers etc.

While you may use a numeric type for some ID value you want to generate
yourself (using autoincrement), IMO this is on the borderline of correct
modeling. For phone numbers, you should use strings.

HTH,
Joerg

--
Joerg Bruehe, Senior Production Engineer MySQL AB, www.mysql.com




Re: Regex problem..

2005-07-25 Thread Lamont R. Peterson
On Monday 25 July 2005 01:56pm, Michael Stassen wrote:
> Gregory Machin wrote:
> > Hi.
> >
> > Please could you advise me...
> > I have php pages that are stored in a mysql database, that will later
> > be imported into a cms
> > I want to use mysql's regex funtion to remove unwanted php code and
> > update links to images
> > and urls.
> >
> > But I can't seem to get my brain around the regex part ...
> >
> > i want to remove the header include 
> > i tried <\?php[^>][header1].*\?> , and other attempts but no luck ..
> > unfortunately I can't do a normal string replace because of variations
> > in the code ...

This regex would not match your case.  Try this on for size:

<\?php.*require.*header1.*\?>

> > Many Thanks
>
> MySQL's REGEX is only for matching, not for replacement.  If you really
> need a regex search and replace, you may find it easier to dump your data,
> edit it, and reimport it.  If you are determined to do it in mysql, you'll
> need a (probably ugly) combination of mysql string functions.  See the
> manual for details:
>
> 
> 

As Michael suggests, better to do this in an app.  It should not take you very 
long to write a quick data importer in PHP that would take care of all this, 
including using preg_replace() where you want it.
-- 
Lamont R. Peterson <[EMAIL PROTECTED]>
Founder [ http://blog.openbrainstem.net/peregrine/ ]
OpenBrainstem - Intelligent Open Source Software Engineering




Re: Phone Number Storage

2005-07-25 Thread Joerg Bruehe

Hi!

Sujay Koduri wrote (re-ordered):

-Original Message-
From: Asad Habib [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 25, 2005 11:53 PM

To: mysql@lists.mysql.com
Subject: Phone Number Storage

Is it better to store phone numbers as strings or as integers? Of course,
storing them as integers saves space but this requires extra processing of
the user's input (i.e. CPU time). Are there any other
advantages/disadvantages of doing it one way or the other?

- Asad




I think it is better to store the phone numbers as strings only. As phone
numbers may also include '-', if you allow entering international numbers,
it is good to store them as strings only. 


Or you can ask the area code and the actual number separately and store them
separately in two columns as integers. 



IMO, this is quite a USA-centric view in the answer: In general, phone 
numbers will also contain a country code.


Outside the USA, it is quite common that codes (area or country) may 
begin with a leading "0" which any numeric type would drop as not 
significant, so you _must_ use strings for these.


Also: A telephone number is not a numeric value; arithmetic operations do 
not make sense on it. Think of extensions: phone numbers 1234-0 and 
1234-56 are related, so you would order them (if at all) as strings and 
not as numeric values.


The same applies to postal codes, social security numbers, part numbers etc.

While you may use a numeric type for some ID value you want to generate 
yourself (using autoincrement), IMO this is on the borderline of correct 
modeling. For phone numbers, you should use strings.
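Both points can be demonstrated without any table; a standalone sketch:

```sql
-- A numeric type silently drops a leading zero:
SELECT CAST('0123' AS UNSIGNED);   -- 123: the significant zero is lost

-- String comparison keeps a base number and its extensions adjacent:
SELECT '1234-0' < '1234-56';       -- 1 (true): they sort together as strings
```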


HTH,
Joerg

--
Joerg Bruehe, Senior Production Engineer
MySQL AB, www.mysql.com




Re: Regex problem..

2005-07-25 Thread Michael Stassen

Gregory Machin wrote:


Hi.

Please could you advise me...
I have php pages that are stored in a mysql database, that will later
be imported into a cms
I want to use mysql's regex function to remove unwanted php code and
update links to images
and urls.

But I can't seem to get my brain around the regex part ...

i want to remove the header include 
i tried <\?php[^>][header1].*\?> , and other attempts but no luck ..
unfortunately I can't do a normal string replace because of variations in
the code ...

Many Thanks


MySQL's REGEX is only for matching, not for replacement.  If you really need a 
regex search and replace, you may find it easier to dump your data, edit it, 
and reimport it.  If you are determined to do it in mysql, you'll need a 
(probably ugly) combination of mysql string functions.  See the manual for 
details:
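To illustrate the difference (the table and column names in the second query
are hypothetical, standing in for the poster's page-storage table):

```sql
-- REGEXP only answers match / no match:
SELECT 'header1.php' REGEXP 'header[0-9]';   -- 1

-- A regex-free "replace" must be stitched from string functions, e.g.
-- cutting a '<?php ... ?>' span out with LOCATE/LEFT/SUBSTRING:
SELECT CONCAT(
         LEFT(content, LOCATE('<?php', content) - 1),
         SUBSTRING(content, LOCATE('?>', content) + 2)
       ) AS stripped
  FROM pages;   -- assumes exactly one such span per value
```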





Michael




Regex problem..

2005-07-25 Thread Gregory Machin
Hi.

Please could you advise me...
I have php pages that are stored in a mysql database, that will later
be imported into a cms
I want to use mysql's regex function to remove unwanted php code and
update links to images
and urls.

But I can't seem to get my brain around the regex part ...

i want to remove the header include 
i tried <\?php[^>][header1].*\?> , and other attempts but no luck ..
unfortunately I can't do a normal string replace because of variations in
the code ...

Many Thanks
-- 
Gregory Machin
[EMAIL PROTECTED]
[EMAIL PROTECTED]
www.linuxpro.co.za
Web Hosting Solutions
Scalable Linux Solutions 
www.iberry.info (support and admin)
www.goeducation (support and admin)
+27 72 524 8096




RE: Phone Number Storage

2005-07-25 Thread Sujay Koduri

I think it is better to store the phone numbers as strings only. As phone
numbers may also include '-', if you allow entering international numbers,
it is good to store them as strings only. 

Or you can ask the area code and the actual number separately and store them
separately in two columns as integers. 

sujay 

-Original Message-
From: Asad Habib [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 25, 2005 11:53 PM
To: mysql@lists.mysql.com
Subject: Phone Number Storage

Is it better to store phone numbers as strings or as integers? Of course,
storing them as integers saves space but this requires extra processing of
the user's input (i.e. CPU time). Are there any other
advantages/disadvantages of doing it one way or the other?

- Asad




RE: How to use Like Clause in Perl? Works fine in MySQL control center!

2005-07-25 Thread John Trammell
From 'perldoc perldata':

  Variable substitution inside strings is limited to scalar
  variables, arrays, and array or hash slices.  (In other
  words, names beginning with $ or @, followed by an optional
  bracketed expression as a subscript.)

You can check this from the command line:

  % perl -le 'print "$s -- @s -- %s"'
  --  -- %s

So the '%' isn't the issue here.  The issue is certainly the (mis)use of
join(), as was pointed out by a previous poster.


> -Original Message-
> From: Jeremiah Gowdy [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 25, 2005 9:14 AM
> To: Siegfried Heintze; mysql@lists.mysql.com
> Subject: Re: How to use Like Clause in Perl? Works fine in MySQL control
> center!
> 
> When you use double quotes for strings in Perl, Perl looks through your
> strings for variables like $foo, and replaces them with the current value
> of $foo.  This is called interpolation.  When you use single quotes, it
> considers your string a literal.
> 
> So when you use double quotes, you need to escape any special characters
> like $ % " or @.  When you use single quotes, the only character you have
> to worry about is '.  Here are ways you could make this string work.
> 
> Double quotes with special characters escaped (due to interpolation)
> 
> "SELECT 'David!' LIKE '\%D\%v\%'"
> 
> Single quotes with double quote usage for the SQL quoting (no escaping
> required)
> 
> 'SELECT "David!" LIKE "%D%v%"'
> 
> Single quotes with single quotes escaped for the SQL quoting
> 
> 'SELECT \'David!\' LIKE \'%D%v%\''
> 
> Keep in mind that interpolation is work, so using one of the single-quoted
> strings, which does not search your string for variables to replace, is
> going to be higher performance than the double-quoted version, although the
> difference may be a little or a lot depending on how many times the string
> is interpreted (if it is in a loop or something).
> 
> 
> - Original Message - 
> From: "Siegfried Heintze" <[EMAIL PROTECTED]>
> To: 
> Sent: Friday, July 22, 2005 4:03 PM
> Subject: How to use Like Clause in Perl? Works fine in MySQL 
> control center!
> 
> 
> > I'm having trouble getting the like clause to work. It 
> seems to work fine 
> > in
> > the MySQL Control Center 9.4.beta. I'm using MySQL 4.0.23-debug.
> >
> > use DBH;
> > my $sth = DBH->prepare("SELECT 'David!' LIKE '%D%v%'");
> > $sth->execute();
> > my $row;
> > print join(@$row,",")."\n" while ($row = $sth->fetch);
> >
> >
> > This does not print a "1" in perl. It just prints a ",".
> >
> > I've posted a query on this in [EMAIL PROTECTED] with no luck.
> >
> > Anybody have any suggestions?
> > Thanks,
> > Siegfried
> >
> > Here is DBH.pm. Below that is my original post in 
> [EMAIL PROTECTED]
> >
> >
> > package DBH;
> > use DBI;
> > require Exporter;
> > our @ISA = qw(Exporter);
> > our @EXPORT = qw(DBH); # Symbols to be exported by default
> > our @EXPORT_OK = qw(); # Symbols to exported by request
> > our $VERSION = 0.1;
> >
> >
> > our $dbh;
> > sub DBH{
> >unless ( $dbh && $dbh->ping ) {
> >$dbh = DBI->connect ( 
> 'dbi:mysql:dbname=hotjobs;host=SALES', 'xyz',
> > 'xyz' ) ;
> >die DBI->errstr unless $dbh && $dbh->ping;
> >}
> >return $dbh;
> > }
> >
> > 1;
> > 
> > 
> >
> >
> > The following code works with Activestate perl 8.4/MySQL. 
> If I comment the
> > second line, however, it does not work. No error messages 
> and no results.
> >
> > If I use the MySQL Enterprise console and type in my first SELECT 
> > statement
> > that includes the LIKE clause, it works.
> >
> > I'm stumped. There must be something strange with that "%", 
> but I cannot
> > figure it out.
> > Anyone got any suggestions?
> >
> > Siegfried
> >
> > my $sJobTitle = "SELECT sName FROM keywords ORDER BY sName 
> WHERE sName 
> > LIKE
> > '%'";
> >  $sJobTitle = q[SELECT sName FROM keywords ORDER BY sName];
> >
> >  my $sth = DBH->prepare($sJobTitle);
> >  $sth->execute();
> >  my $row;
> >  while ($row = $sth->fetch){
> >push @sResult,"".join( "", @$row)."\n";
> >  }
> >
> >
> >
> > 
> 
> 
> 
> 




Phone Number Storage

2005-07-25 Thread Asad Habib
Is it better to store phone numbers as strings or as integers? Of course,
storing them as integers saves space, but this requires extra processing of
the user's input (i.e. CPU time). Are there any other
advantages/disadvantages of doing it one way or the other?

- Asad




Re: query on a very big table

2005-07-25 Thread Devananda
Not knowing what the ALTER TABLE is changing, I can't really say. Could 
you send the table structure (as it was before the ALTER TABLE)? 
Remember that MySQL is actually changing the data for every row, and 
potentially rebuilding indexes as well, so it has a lot of work to do 
from that single statement. Taking 10 hours to alter a very large table 
is not unheard of, though it really depends on your table structure.
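One workaround for a multi-hour ALTER TABLE, sketched here as an assumption rather than a recommendation (it needs enough disk for a second copy of the table and no concurrent writes during the copy), is to build the new structure and copy into it:

```sql
-- Same definition, then apply the column change to the empty copy:
CREATE TABLE typed_strengths_new LIKE typed_strengths;
ALTER TABLE typed_strengths_new
  CHANGE `entity1_id` `entity1_id` INT(10) UNSIGNED DEFAULT NULL FIRST;

-- Copy the rows. Because FIRST reorders the columns, list them
-- explicitly; only entity1_id is known from the thread, the rest are
-- placeholders:
-- INSERT INTO typed_strengths_new (entity1_id, colA, colB)
--   SELECT entity1_id, colA, colB FROM typed_strengths;

-- Swap atomically and drop the old copy:
RENAME TABLE typed_strengths TO typed_strengths_old,
             typed_strengths_new TO typed_strengths;
DROP TABLE typed_strengths_old;
```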


Regards,
Devananda vdv


Christos Andronis wrote:


Hi all,
we are trying to run the following query on a table that contains over 600 million rows: 


'ALTER TABLE `typed_strengths` CHANGE `entity1_id` `entity1_id` int(10) 
UNSIGNED DEFAULT NULL FIRST'

The query takes ages to run (has been running for over 10 hours now). Is this 
normal?

As a side issue, is MySQL suited for such big tables? I've seen a couple of 
case studies with MySQL databases over 1.4 billion rows but it is not clear to 
me whether this size corresponds to the whole database or whether it is for a 
single table.

The MySQL distribution we're using is 4.1.12. The database sits on a HP 
Proliant DL585 server with 2 dual-core Opterons and 12 GB of RAM, running Linux 
Fedora Core 3.

Thanks in advance for any responses

-Christos Andronis






Re: very slow inserts on InnoDB

2005-07-25 Thread Catalin Trifu

Hi,

Thanks for the reply.
The setup is the following:
  Dual Processor SuSE 9.0 (kernel 2.4.21 SMP), apache 2.0.54, php 5.0.4, 
mysql-4.1.12 (RPM), 2GB RAM, 80GB scsi RAID 5
  The database config file is this one:


[mysqld]
port= 3306
socket  = /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 128M
max_allowed_packet = 16M
table_cache = 512
sort_buffer_size = 64M
net_buffer_length = 8K
myisam_sort_buffer_size = 64M
thread_cache = 32
query_cache_size = 128M
thread_concurrency = 2
set-variable = max_connections=1000
set-variable = interactive_timeout=120
set-variable = wait_timeout=120
set-variable = query_prealloc_size=2M
set-variable = transaction_prealloc_size=2M
read_buffer_size = 32M

log-long-format
log-slow-queries = /var/log/mysqld.slow.log

innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:1000M;ibdata2:500M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/
innodb_log_arch_dir = /var/lib/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 64M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 32M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
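One setting above worth calling out for insert speed is innodb_flush_log_at_trx_commit = 1: with it, every autocommitted INSERT waits for a log flush to disk. A hedged sketch of the usual mitigation (not necessarily the cause of the 11-second inserts here) is to batch inserts into one transaction:

```sql
-- Each COMMIT pays the log flush once for the whole batch instead of
-- once per row. Column names are taken from the thread's table
-- definition; the values are made up.
SET autocommit = 0;
INSERT INTO raw_outgoing_sms (id_gsm_operator, id_shortcode, msisdn, sender, data)
  VALUES (1, 1, '19284720', 'deva', 'hello');
INSERT INTO raw_outgoing_sms (id_gsm_operator, id_shortcode, msisdn, sender, data)
  VALUES (1, 1, '19284721', 'deva', 'world');
COMMIT;
```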


The services run all on the same machine so network bottleneck does not 
come into
discussion. Also the query i gave as an example is taken from the slow query 
log.
The system faces from time to time heavy loads of up to 1000 
requests/second.


Catalin



[EMAIL PROTECTED] wrote:

news <[EMAIL PROTECTED]> wrote on 07/25/2005 10:41:46 AM:



 Hi,


 I have the following table :

CREATE TABLE `raw_outgoing_sms` (
  `id` bigint(20) NOT NULL auto_increment,
  `id_gsm_operator` bigint(20) NOT NULL default '0',
  `id_shortcode` bigint(20) NOT NULL default '0',
  `msisdn` varchar(20) NOT NULL default '',
  `sender` varchar(20) NOT NULL default '',
  `ts` timestamp NOT NULL default CURRENT_TIMESTAMP on update 
CURRENT_TIMESTAMP,

  `text` text,
  `udh` text,
  `data` text NOT NULL,
  `dlr_status` varchar(20) default NULL,
  `dlr_url` text,
  PRIMARY KEY  (`id`),
  KEY `idx_outgoing_gsm_op` (`id_gsm_operator`),
  KEY `idx_outgoing_shortcode` (`id_shortcode`)
) ENGINE=InnoDB

 When i insert data into it it takes around 11 seconds. Why ?


Thanks,
Catalin




The time it takes to process any statement includes but is not limited to:

client encoding time (bundles your statement for transport)
network lag (the time it takes to get the bundled statement to the server)
statement parsing time (the processing required for the server to both 
validate and understand the syntax of your request)

statement processing time:
wait for locks to clear (concurrent processes may be getting in 
your way)

set locks
read/write data on disk (how slow are your disks?)
updating indexes, if necessary (again, how slow are your disks?)
clearing locks
network lag (the time it takes for the server to respond with the 
completion status of your statement)
client decoding time (to unbundle the results from the networking 
protocols)


Any or all of those may be causing your delay. You will have to provide us 
more details about your setup before we could make a more educated guess.


Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine







Re: very slow inserts on InnoDB

2005-07-25 Thread Devananda

Catalin,

I was able to create the table with the CREATE statement you pasted, and 
insert a row with some simple data.


mysql> insert into raw_outgoing_sms 
(id_gsm_operator,id_shortcode,msisdn,sender,text,dlr_url) values 
(10,20,'19284720','deva','hello world','yahoo.com');

Query OK, 1 row affected (0.02 sec)

As SGreen pointed out, there are lots of possible contributing factors. 
Leaving out network, application, or disk latency, I wonder if the 
slowness is caused by the InnoDB configuration on your server...


Could you send a copy of your server's my.cnf file, and some information 
about the servers themselves (CPU, RAM, etc)? Seeing the configuration 
parameters would help us to understand where your problem may be coming 
from.



Regards,
Devananda vdv



Catalin Trifu wrote:

Hi,


 I have the following table :

CREATE TABLE `raw_outgoing_sms` (
  `id` bigint(20) NOT NULL auto_increment,
  `id_gsm_operator` bigint(20) NOT NULL default '0',
  `id_shortcode` bigint(20) NOT NULL default '0',
  `msisdn` varchar(20) NOT NULL default '',
  `sender` varchar(20) NOT NULL default '',
  `ts` timestamp NOT NULL default CURRENT_TIMESTAMP on update 
CURRENT_TIMESTAMP,

  `text` text,
  `udh` text,
  `data` text NOT NULL,
  `dlr_status` varchar(20) default NULL,
  `dlr_url` text,
  PRIMARY KEY  (`id`),
  KEY `idx_outgoing_gsm_op` (`id_gsm_operator`),
  KEY `idx_outgoing_shortcode` (`id_shortcode`)
) ENGINE=InnoDB

 When i insert data into it it takes around 11 seconds. Why ?


Thanks,
Catalin







Re: very slow inserts on InnoDB

2005-07-25 Thread SGreen
news <[EMAIL PROTECTED]> wrote on 07/25/2005 10:41:46 AM:

>   Hi,
> 
> 
>   I have the following table :
> 
> CREATE TABLE `raw_outgoing_sms` (
>`id` bigint(20) NOT NULL auto_increment,
>`id_gsm_operator` bigint(20) NOT NULL default '0',
>`id_shortcode` bigint(20) NOT NULL default '0',
>`msisdn` varchar(20) NOT NULL default '',
>`sender` varchar(20) NOT NULL default '',
>`ts` timestamp NOT NULL default CURRENT_TIMESTAMP on update 
> CURRENT_TIMESTAMP,
>`text` text,
>`udh` text,
>`data` text NOT NULL,
>`dlr_status` varchar(20) default NULL,
>`dlr_url` text,
>PRIMARY KEY  (`id`),
>KEY `idx_outgoing_gsm_op` (`id_gsm_operator`),
>KEY `idx_outgoing_shortcode` (`id_shortcode`)
> ) ENGINE=InnoDB
> 
>   When i insert data into it it takes around 11 seconds. Why ?
> 
> 
> Thanks,
> Catalin
> 

The time it takes to process any statement includes but is not limited to:

client encoding time (bundles your statement for transport)
network lag (the time it takes to get the bundled statement to the server)
statement parsing time (the processing required for the server to both 
validate and understand the syntax of your request)
statement processing time:
wait for locks to clear (concurrent processes may be getting in 
your way)
set locks
read/write data on disk (how slow are your disks?)
updating indexes, if necessary (again, how slow are your disks?)
clearing locks
network lag (the time it takes for the server to respond with the 
completion status of your statement)
client decoding time (to unbundle the results from the networking 
protocols)

Any or all of those may be causing your delay. You will have to provide us 
more details about your setup before we could make a more educated guess.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Re: storing php pages with sql queries in a mysql database

2005-07-25 Thread Lamont R. Peterson
On Saturday 23 July 2005 09:05am, Gregory Machin wrote:
> Hi all.
> I'm writing a php script to store the contents of html and php pages
> in a data base, it works well until there are mysql queries in the
> pages source then give errors such as this one.
>
> Query failed: You have an error in your SQL syntax near 'temp'
>
> how do I stop mysql from trying to interpret this data and blindly store
> it?

Another solution would be to base64 encode the data before storing it in the 
database.
-- 
Lamont R. Peterson <[EMAIL PROTECTED]>
Founder [ http://blog.openbrainstem.net/peregrine/ ]
OpenBrainstem - Intelligent Open Source Software Engineering
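The failure Gregory hit (which base64-encoding sidesteps) is an unescaped quote inside the stored page source ending the SQL string literal early. A minimal illustration, using a hypothetical pages table:

```sql
-- This would fail: the quote inside the stored PHP snippet closes the
-- string literal early.
--   INSERT INTO pages (body) VALUES ('mysql_query('... temp ...')');
-- Escaping the embedded quotes fixes it:
INSERT INTO pages (body) VALUES ('mysql_query(\'... temp ...\')');

-- QUOTE() (available since MySQL 4.0.3) shows how the server itself
-- would escape a string:
SELECT QUOTE('a string with ''quotes'' in it');
```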




Re: Correct way to use innodb_file_per_table?

2005-07-25 Thread Bruce Dembecki


On Jul 25, 2005, at 5:33 AM, Marvin Wright wrote:


Hi,

Thanks for your reply.

I've only just moved all tables to their own tablespace so that I can put
certain databases on different disks.
Right now my shared tablespace does not hold any databases.
I'm aware that I still need the shared tablespace, but I don't need 200GB
now; I just want to decrease it down to 10GB.

It seems a bit daft that I still have to dump all tables even when they are
in their own tablespace.  I guess this is because the table definitions are
still stored in the shared space.

Marvin.

Hi! These are good questions... Heikki once told me that if there is  
no activity going on AND the innodb status page shows nothing being  
processed AND everything is up to date according to the innodb status  
page, you could (in theory) shutdown mysql and bring it back with a  
new shared table space under these circumstances... That is going to  
require that every connection to the database server be idle, or  
better still shut off... Depending on how your machines access your  
database server that may be easy or hard to do...


We had some character set issues to work on and were (are - it's an  
ongoing project) needing to do a dump and an import to do the move  
from 4.0 to 4.1 at the same time... So we didn't actually try and  
bounce a server into a smaller shared table space live... I have  
total control over my client connections to the database server and  
can easily prevent them from connecting with a hardware load  
balancer, and I'm still not sure I would want to try that though.


Hint if you are going the dump and import route... The fastest way to
dump, and for sure the fastest way to import, is to use mysqldump
--tab=/var/tmp/somewhere and use mysqlimport to import the tab-delimited
data. Using --tab on the dump creates two files for each table: an .sql
file with the CREATE TABLE statement, and a .txt file with the
tab-delimited data. We create our databases using
cat /var/tmp/somewhere/*.sql | mysql ourDatabase, and then use
mysqlimport ourDatabase /var/tmp/somewhere/*.txt; mysqlimport is smart
enough to insert data into tables matching the filename. It's the fastest
way to do the whole dump and import thing by a lot.
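For a single table, the SQL-level equivalent of that --tab round trip looks like this (table name and path are hypothetical); the speed comes from reading and writing plain tab-delimited rows instead of parsing INSERT statements:

```sql
-- Dump one table's rows as tab-delimited text:
SELECT * INTO OUTFILE '/var/tmp/somewhere/mytable.txt' FROM mytable;

-- Reload them (the table must already exist):
LOAD DATA INFILE '/var/tmp/somewhere/mytable.txt' INTO TABLE mytable;
```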


On the issue of how much shared space: Heikki told me 200MB would be far
more than we would need if everything is innodb_file_per_table... but as
my old file space was made with 2000M files, I just kept ibdata01 and
commented out the rest of the line. We certainly haven't had any issues
with the 2GB shared tablespace; I would think 10GB would be overkill (I
think my 2GB is overkill).


The only other area where we discovered an issue: if you are running a
32-bit file system, there is likely to be a problem on any table that
needs more file space than the file system will give a single file. The
solutions here are to use a 64-bit file system, which doesn't care so
much, or to create a larger shared tablespace, turn off
innodb_file_per_table, and alter the table to InnoDB (even if it is
already InnoDB, altering it like this will recreate it new). Turn
innodb_file_per_table back on and that table will stay in the shared
tablespace; the rest will be in their own files. The main problem here is
that once the file reached the OS limit, InnoDB thought the table was
full (which technically it was), so InnoDB's autoextending files don't
know how to launch a second file once the file system's upper limit has
been reached.


Best Regards, Bruce




Re: Alternatives to performing join on normalized joins?

2005-07-25 Thread SGreen
"Siegfried Heintze" <[EMAIL PROTECTED]> wrote on 07/24/2005 11:35:36 
AM:

> I have a large number of job titles (40K). Each job title has multiple
> keywords making a one-to-many parent-child relationship.
> 
> If I join job title with company name, address, company url, company 
city,
> job name, job location, job url (etc...) I have a mighty wide result set
> that will be repeated for each keyword.
> 
> What I have done in the past (in a different, much smaller, application) 
is
> perform a join of everything except the keyword and store everything in 
a
> hashmap. 
> 
> Then I iterate thru each wide row in the hashmap and perform a separate
> SELECT statement foreach row in this hashmap to fetch the multiple 
keywords.
> 
> Whew! That would be a lot of RAM (and paging) for this application.
> 
> Are there any other more efficient approaches?
> 
> Thanks,
> Siegfried
> 
> 

There are two major classes of efficiency when dealing with any RDBMS: 
time efficiency (faster results), space efficiency (stored data takes less 
room on the disk). Which one are you worried about?

If it were me, I would start with all of the data normalized: 
* a Companies table (name, address, url, city, etc)
* a Job Titles table (a list of names)
* a Keywords table (a list of words used to describe Job Titles)
* a JobPosting table ( Relates Companies to Job Titles. Should 
also be used to track things like dateposted, dateclosed, salary offered, 
etc.)
* a Postings_Keywords table (matches a Posting to multiple 
Keywords).

I would only denormalize if testing showed a dramatic improvement in 
performance by doing so. I would think that the Job Title to Keyword 
relationship would be different between Companies. One company posting for 
a "Programmer" may want VB while another wants PHP and PERL. By 
associating the Keywords with a Posting (and not just the Job Title), you 
can make that list Company-specific.
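Siegfried's original complaint (one wide row repeated once per keyword) can also be handled in the query layer. With the normalized layout above and MySQL 4.1's GROUP_CONCAT(), the keywords collapse into a single column per posting; all table and column names here are illustrative, following Shawn's outline:

```sql
SELECT c.name, c.city, jt.title,
       GROUP_CONCAT(k.word SEPARATOR ', ') AS keywords
FROM job_posting p
JOIN company c          ON c.id  = p.company_id
JOIN job_title jt       ON jt.id = p.job_title_id
JOIN posting_keyword pk ON pk.posting_id = p.id
JOIN keyword k          ON k.id  = pk.keyword_id
GROUP BY p.id, c.name, c.city, jt.title;
```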


Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine







very slow inserts on InnoDB

2005-07-25 Thread Catalin Trifu

Hi,


 I have the following table :

CREATE TABLE `raw_outgoing_sms` (
  `id` bigint(20) NOT NULL auto_increment,
  `id_gsm_operator` bigint(20) NOT NULL default '0',
  `id_shortcode` bigint(20) NOT NULL default '0',
  `msisdn` varchar(20) NOT NULL default '',
  `sender` varchar(20) NOT NULL default '',
  `ts` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
  `text` text,
  `udh` text,
  `data` text NOT NULL,
  `dlr_status` varchar(20) default NULL,
  `dlr_url` text,
  PRIMARY KEY  (`id`),
  KEY `idx_outgoing_gsm_op` (`id_gsm_operator`),
  KEY `idx_outgoing_shortcode` (`id_shortcode`)
) ENGINE=InnoDB

 When i insert data into it it takes around 11 seconds. Why ?


Thanks,
Catalin





Re: Questions about backups, InnoDB tables, etc.

2005-07-25 Thread Michael Monashev
Hello

JT> Did you try that link?  When I follow it, I get a search results page
JT> saying <>.  Too
JT> bad it doesn't actually show the search results 

Sorry. Try these links:
http://solutions.mysql.com/search.php?pc=4%2C86&q=backup&level=0
http://solutions.mysql.com/software/?c=backup


Sincerely,
Michael,
 http://xoib.com/ http://3d2f.com/
 http://qaix.com/ http://ryxi.com/
 http://gyxe.com/ http://gyxu.com/
 http://xywe.com/ http://xyqe.com/







Re: Mysql +events

2005-07-25 Thread SGreen
"Darryl Hoar" <[EMAIL PROTECTED]> wrote on 07/21/2005 04:12:39 PM:

> Greetings,
> I am currently using Mysql 3.23.52.   I am looking for events (when a 
record
> is added, mod'd or deleted
> from a table, and event is sent to each client connected to the DB.
> Firebird can do this,
> but before I port my application to firebird, I was wondering if Mysql 
had
> something similar ?
> 
> thanks in advance,
> Darryl
> 
> 

The closest thing to "events" in MySQL will be triggers, coming in v5.0.x.
There are no external events you can register a callback for or trap
against. The closest thing you can do with your version is to monitor the
binlog and detect the changes.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Re: avoiding conversion during insert?

2005-07-25 Thread SGreen
Jacek Becla <[EMAIL PROTECTED]> wrote on 07/21/2005 02:47:20 PM:

> Hi,
> 
> Is there a way to insert binary data representing numbers
> directly into (preferably MyISAM) table. We are trying to
> avoid conversion from ASCII to float/double/int...
> 
> Thanks,
> Jacek
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 

It depends, are you dealing with floating point or integer values? For 
floating point (real) values, I think you are stuck going through ASCII to 
get the data into MySQL. For integer values, what are you worried about?

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Re: Questions about backups, InnoDB tables, etc.

2005-07-25 Thread SGreen
"Ryan Stille" <[EMAIL PROTECTED]> wrote on 07/21/2005 05:47:28 PM:

> I'm trying to get a handle on MySQL backups and hot backups using MyISAM
> and InnoDB tables together.  We plan to switch from SQL Server to MySQL
> soon.
> 
> How are you guys handling full-backups of databases with mixed MyISAM
> and InnoDB tables?  From what I've read (and I've been reading a lot),
> if we are using only one or the other then it is a pretty simple matter
> to get a clean backup.  Use --lock-tables for MyISAM, or
> --single-transaction if we are using only InnoDB tables.
> 
> I've been doing some testing and came across something I don't
> understand.  I filled my test InnoDB formatted table with a lot of data
> so mysqldump will take a while to complete.  Then I start mysqldump on
> this database with the --single-transaction option.  While that is
> running, I insert a record into the table.  It completes sucessfully.  I
> then run a query and am able to see that record in the database.  The
> mysqldump is still running.  How is this record getting inserted into
> the database?  I thought it was locked while the dump was happening?  I
> thought it would get queued up and inserted when the mysqldump is
> finished.  The record was NOT in the dump, this part made sense.
> 
> Thanks for any help.
> -Ryan
> 
> 

While mysqldump is doing its thing, it has a transaction open. When you 
performed your update, you started a second transaction. The first 
transaction is isolated from the changes that appear in the second just as 
the second can't see any changes that happen in the first. Transacted 
changes will only be written into the database when the transaction 
commits.

Let me walk you through what I think was going on:

T1: mysqldump gets a read lock on every table in the database
T2: You insert a record.
T3: You looked for the record you just inserted and found it.
T1: mysqldump finishes and commits the transaction

Did that make sense?

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
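A two-session sketch of the snapshot behavior Shawn walks through, assuming InnoDB's default REPEATABLE READ isolation and a throwaway table:

```sql
-- Hypothetical setup: CREATE TABLE t (n INT) ENGINE=InnoDB;

-- Session A, standing in for mysqldump --single-transaction:
BEGIN;
SELECT COUNT(*) FROM t;   -- snapshot established; suppose it returns 100

-- Session B, meanwhile:
INSERT INTO t VALUES (101);
SELECT COUNT(*) FROM t;   -- session B sees its own row right away

-- Session A again:
SELECT COUNT(*) FROM t;   -- still 100; the snapshot predates the insert
COMMIT;
-- After COMMIT, a fresh SELECT in session A sees session B's row.
```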

Re: Which Engine to use...

2005-07-25 Thread SGreen
Scott Hamm <[EMAIL PROTECTED]> wrote on 07/21/2005 09:39:48 AM:

> I'm now trying to learn engines in MySQL. When I migrated from M$ SQL to 

> MySQL to learn the migration process and executed the following:
> 
> SELECT
> *
> FROM 
> QA
> LEFT JOIN 
> Batch 
>  ON 
> Batch.QAID=QA.ID
> LEFT JOIN 
> QAErrors 
>  ON 
> QAErrors.ID=Batch.QEID
> WHERE 
> QA.ID  
>  BETWEEN 
> '106805' 
>  AND 
> '107179'
> ORDER BY 
> QA.ID ;
> 
> M$ SQL executed and brought up result in 2 seconds
> where MySQL took 801 seconds and where
> Batch datalength is around 18.5 MB,
> QAErrors is around 464KB and
> QA is around 3.5MB
> 
> Which engine should I use and should I apply to all these tables or?
> 
> Batch/QAErrors/QA is most frequent used in database.
> -- 
> Power to people, Linux is here.

Engine choices will only help you deal with concurrency issues.  MyISAM 
uses table locking while InnoDB uses row-level locking and supports 
transactions. What it sounds like is an INDEXING issue. If you used the MS 
SQL technique of creating several single-column indexes (to duplicate an 
existing table) you will not get optimal performance from MySQL.  You need 
to determine the best indexes to cover the majority of your query cases.

If you could, please post the results of SHOW CREATE TABLE for these 
tables: Batch, QAErrors, and QA so that we can review your indexes.
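Based on the join columns visible in the query, a first pass might look like this (the index names are made up, and it assumes QA.ID and QAErrors.ID are already primary keys):

```sql
-- Index the columns Batch is joined and filtered on:
ALTER TABLE Batch
  ADD INDEX idx_batch_qaid (QAID),
  ADD INDEX idx_batch_qeid (QEID);

-- EXPLAIN then shows whether the optimizer picks them up:
EXPLAIN SELECT *
FROM QA
LEFT JOIN Batch    ON Batch.QAID = QA.ID
LEFT JOIN QAErrors ON QAErrors.ID = Batch.QEID
WHERE QA.ID BETWEEN 106805 AND 107179
ORDER BY QA.ID;
```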

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



Re: Saving a whole txt file in a database

2005-07-25 Thread Nuno Pereira

Sebastian wrote:
the example i just showed you doesn't need to use delimited syntax. it 
will load the entire contents of the file into the column you specify.. 
try it a test:


CREATE TABLE `table_test` (
 `content` mediumtext NOT NULL
) ENGINE=MyISAM;


Mediumtext can be too much, depending on the size of files you store. 
See http://dev.mysql.com/doc/mysql/en/string-types.html for more details 
about other *text data-types.



create a txt file 'data.txt' with some text, then run query:

LOAD DATA INFILE '/path/to/data.txt' INTO TABLE table_test (content);

P.S. click 'reply all' so others on the list can see the messages.
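One caveat with the recipe above: LOAD DATA treats each newline as a row separator by default, which is how a multi-line file can turn into many mostly-empty rows, as Gregory saw. A hedged variant that keeps the whole file in one row, assuming the sentinel byte never occurs in the file:

```sql
-- '\b' (backspace) as field/line terminator is an assumption; pick any
-- byte sequence guaranteed absent from the files being loaded.
LOAD DATA INFILE '/path/to/data.txt' INTO TABLE table_test
  FIELDS TERMINATED BY '\b' ESCAPED BY ''
  LINES TERMINATED BY '\b'
  (content);
```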


Gregory Machin wrote:

Sorry for not clarifying. I have looked into LOAD DATA INFILE
'/path/to/data.txt' INTO TABLE tb1 (col1,col2,...);
but by my understanding it only works for delimited text files. The
text files are html and php web pages that need to be migrated to a
CMS. When I tried with one file I ended up with 24 empty fields in the
column.

On 7/21/05, Sebastian <[EMAIL PROTECTED]> wrote:
 


what you mean by "whole txt" the entire contents of the file or the
actual file itself?

to load the contents in it:

LOAD DATA INFILE '/path/to/data.txt' INTO TABLE tb1 (col1,col2,...);

otherwise the column has to be blob type, using normal insert query:
INSERT INTO table VALUES (data...)

Gregory Machin wrote:




Hi all
How does one save a whole txt file in a largetext column ?
I have found lots on delimited files but none on saving a whole text
file.


Many thanks


--
Nuno Pereira
email: [EMAIL PROTECTED]




Re: How to use Like Clause in Perl? Works fine in MySQL control center!

2005-07-25 Thread Jeremiah Gowdy
When you use double quotes for strings in Perl, Perl looks through your 
strings for variables like $foo, and replaces them with the current value of 
$foo.  This is called interpolation.  When you use single quotes, it 
considers your string a literal.


So when you use double quotes, you need to escape any special characters 
like $ % " or @.  When you use single quotes, the only character you have to 
worry about is '.  Here are ways you could make this string work.


Double quotes with special characters escaped (due to interpolation)

"SELECT 'David!' LIKE '\%D\%v\%'"

Single quotes with double quote usage for the SQL quoting (no escaping 
required)


'SELECT "David!" LIKE "%D%v%"'

Single quotes with single quotes escaped for the SQL quoting

'SELECT \'David!\' LIKE \'%D%v%\''

Keep in mind that interpolation is work, so using one of the single quotes 
strings which does not search your string for variables to replace is going 
to be higher performance than the double quoted version, although the 
difference may be a little or a lot depending on how many times the string 
is interpreted (if it is in a loop or something).



- Original Message - 
From: "Siegfried Heintze" <[EMAIL PROTECTED]>

To: 
Sent: Friday, July 22, 2005 4:03 PM
Subject: How to use Like Clause in Perl? Works fine in MySQL control center!


I'm having trouble getting the like clause to work. It seems to work fine 
in

the MySQL Control Center 9.4.beta. I'm using MySQL 4.0.23-debug.

use DBH;
my $sth = DBH->prepare("SELECT 'David!' LIKE '%D%v%'");
$sth->execute();
my $row;
print join(@$row,",")."\n" while ($row = $sth->fetch);


This does not print a "1" in perl. It just prints a ",".

I've posted a query on this in [EMAIL PROTECTED] with no luck.

Anybody have any suggestions?
Thanks,
Siegfried

Here is DBH.pm. Below that is my original post in [EMAIL PROTECTED]


package DBH;
use DBI;
require Exporter;
our @ISA = qw(Exporter);
our @EXPORT = qw(DBH); # Symbols to be exported by default
our @EXPORT_OK = qw(); # Symbols to exported by request
our $VERSION = 0.1;


our $dbh;
sub DBH{
   unless ( $dbh && $dbh->ping ) {
   $dbh = DBI->connect ( 'dbi:mysql:dbname=hotjobs;host=SALES', 'xyz',
'xyz' ) ;
   die DBI->errstr unless $dbh && $dbh->ping;
   }
   return $dbh;
}

1;




The following code works with Activestate perl 8.4/MySQL. If I comment the
second line, however, it does not work. No error messages and no results.

If I use the MySQL Enterprise console and type in my first SELECT 
statement

that includes the LIKE clause, it works.

I'm stumped. There must be something strange with that "%", but I cannot
figure it out.
Anyone got any suggestions?

Siegfried

my $sJobTitle = "SELECT sName FROM keywords ORDER BY sName WHERE sName 
LIKE

'%'";
 $sJobTitle = q[SELECT sName FROM keywords ORDER BY sName];

 my $sth = DBH->prepare($sJobTitle);
 $sth->execute();
 my $row;
 while ($row = $sth->fetch){
   push @sResult,"".join( "", @$row)."\n";
 }










query on a very big table

2005-07-25 Thread Christos Andronis
Hi all,
we are trying to run the following query on a table that contains over 600 
million rows: 

'ALTER TABLE `typed_strengths` CHANGE `entity1_id` `entity1_id` int(10) 
UNSIGNED DEFAULT NULL FIRST'

The query takes ages to run (has been running for over 10 hours now). Is this 
normal?

As a side issue, is MySQL suited for such big tables? I've seen a couple of 
case studies with MySQL databases over 1.4 billion rows but it is not clear to 
me whether this size corresponds to the whole database or whether it is for a 
single table.

The MySQL distribution we're using is 4.1.12. The database sits on a HP 
Proliant DL585 server with 2 dual-core Opterons and 12 GB of RAM, running Linux 
Fedora Core 3.

Thanks in advance for any responses

-Christos Andronis


RE: Correct way to use innodb_file_per_table?

2005-07-25 Thread Marvin Wright
Hi,

Thanks for your reply.

I've only just moved all tables to their own tablespaces so that I can
put certain databases on different disks.
Right now my shared tablespace does not hold any databases.
I'm aware that I still need the shared tablespace, but I don't need
200 GB now; I just want to decrease it to 10 GB.

It seems a bit daft that I still have to dump all tables even when they are
in their own tablespace.  I guess this is because the table definitions are
still stored in the shared space.

Marvin.

-Original Message-
From: Ware Adams [mailto:[EMAIL PROTECTED] 
Sent: 25 July 2005 12:53
To: Marvin Wright
Cc: mysql@lists.mysql.com
Subject: Re: Correct way to use innodb_file_per_table?

On Jul 25, 2005, at 5:47 AM, Marvin Wright wrote:

> You recommend to dump tables before changing then re-import them back.
> But if all databases are in their own tablespace I shouldn't need to
> do this dump, should I?

Unfortunately I think that's your only option to create a new table space.
One way to avoid that and not waste the space might be to move some large
tables into the shared table space and use file-per-table for new tables.
You'd just switch file-per-table off, run ALTER TABLE tablename TYPE=INNODB
to move the table into the shared space then switch file-per-table on.  This
won't work for a lot of table structures, but it might be a way for you to
use the space.

> I want to reduce it to about 10gb, that should be enough for all its 
> temporary storage and logs.

You probably know this, but regardless of whether you use file-per-table or
not you still need the separate InnoDB log files.

--Ware
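For reference, shrinking the shared tablespace means recreating it: dump the data, stop mysqld, delete the ibdata files, adjust the configuration, restart, and re-import. A hypothetical my.cnf fragment for the 10 GB target described above:

```ini
# Hypothetical settings after rebuilding the shared tablespace;
# the old ibdata files must be removed before restarting mysqld.
innodb_file_per_table
innodb_data_file_path = ibdata1:10G
```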


**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. If you have received this email in error please notify
the system manager.

This footnote also confirms that this email message has been swept by
MIMEsweeper for the presence of computer viruses.

www.mimesweeper.com
**





Re: Correct way to use innodb_file_per_table?

2005-07-25 Thread Ware Adams

On Jul 25, 2005, at 5:47 AM, Marvin Wright wrote:

> You recommend to dump tables before changing then re-import them back.
> But if all databases are in their own tablespace I shouldn't need to
> do this dump, should I?

Unfortunately I think that's your only option to create a new table
space.  One way to avoid that and not waste the space might be to move
some large tables into the shared table space and use file-per-table
for new tables.  You'd just switch file-per-table off, run ALTER TABLE
tablename TYPE=INNODB to move the table into the shared space then
switch file-per-table on.  This won't work for a lot of table
structures, but it might be a way for you to use the space.



> I want to reduce it to about 10 GB, that should be enough for all its
> temporary storage and logs.

You probably know this, but regardless of whether you use
file-per-table or not you still need the separate InnoDB log files.


--Ware




Re: How to sort results with german umlauts in a UTF8 MySQL Database

2005-07-25 Thread Eugene Kosov

Christian Wollmann wrote:

> By the way: Perhaps you could tell me how I can determine/change the
> encoding of the database?


SHOW CREATE DATABASE db_name;

ALTER DATABASE db_name
[[DEFAULT] CHARACTER SET charset_name]
[[DEFAULT] COLLATE collation_name];

http://dev.mysql.com/doc/mysql/en/charset-database.html




Re: How to sort results with german umlauts in a UTF8 MySQL Database

2005-07-25 Thread Christian Wollmann

You could try:

SELECT memberid, lastname
  FROM tblmembers
  ORDER BY lastname COLLATE latin1_german2_ci ASC

It worked in my database. But I don't know if my database is utf8.
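As a side note, the order Nico expects further down matches the DIN 5007-1 "dictionary" rule, where umlauts sort as their base letters (utf8_unicode_ci behaves this way for German), while latin1_german2_ci is the "phone book" variant, where ä sorts as "ae" and would put März first. A rough Python sketch of the dictionary rule, for illustration only:

```python
# Sketch of DIN 5007-1 ("dictionary") collation: umlauts sort as their
# base letters, ss for sharp s. This approximates utf8_unicode_ci for
# German text, not latin1_german2_ci (the "phone book" variant).
BASE = str.maketrans({"ä": "a", "ö": "o", "ü": "u",
                      "Ä": "A", "Ö": "O", "Ü": "U"})

def german_key(name: str) -> str:
    # Replace sharp s first, then fold umlauts, then case-fold.
    return name.replace("ß", "ss").translate(BASE).lower()

names = ["Makee", "Maty", "Mayer", "März", "Müller",
         "Münze", "Mebel", "Meier"]
print(sorted(names, key=german_key))
# -> ['Makee', 'März', 'Maty', 'Mayer', 'Mebel', 'Meier', 'Müller', 'Münze']
```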

By the way: Perhaps you could tell me how I can determine/change the encoding 
of the database?

Kind regards
Christian Wollmann

Am Freitag, 22. Juli 2005 10:06 schrieb Nico Grubert:
> Hi there,
>
> I have a MySQL 4.1 DB running including a database whose character set
> is set to utf8.
> In the database I have a table "tblmembers" with some records containing
> german umlauts.
> How do I sort results with german umlauts if the database character set
> is set to utf8?
>
> Using the SQL query
>
> SELECT memberid, lastname
>   FROM tblmembers
>   ORDER BY lastname
>
> I get the following result:
>
> Makee
> Maty
> Mayer
> März
> Müller
> Münze
> Mebel
> Meier
>
> This sort order is wrong according to german sorting rules.
>
> The right sorting order according to german sorting rules is:
>
> Makee
> März
> Maty
> Mayer
> Mebel
> Meier
> Müller
> Münze
>
>
> How can I sort the results in my utf8 database according german sorting
> rules?
>
> Thanks in advance.
>
> Kind regards,
> Nico
> _
> Mit der Gruppen-SMS von WEB.DE FreeMail können Sie eine SMS an alle
> Freunde gleichzeitig schicken: http://freemail.web.de/features/?mc=021179




Re: storing php pages with sql queries in a mysql database

2005-07-25 Thread Eugene Kosov

Gregory Machin wrote:

> Hi all.
> I'm writing a PHP script to store the contents of HTML and PHP pages
> in a database. It works well until there are MySQL queries in the
> page source; then it gives errors such as this one:
>
> Query failed: You have an error in your SQL syntax near 'temp'
>
> How do I stop MySQL from trying to interpret this data and make it
> blindly store it?
>
> Many Thanks


Perhaps mysql_real_escape_string() could help you:

http://ru3.php.net/manual/en/function.mysql-real-escape-string.php
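mysql_real_escape_string() works by backslash-escaping the characters that would terminate or alter a string literal. A rough pure-Python sketch of the idea (the real function is also connection-charset-aware, which this deliberately ignores):

```python
def escape(s: str) -> str:
    """Crude sketch of mysql_real_escape_string-style escaping."""
    # Backslash must be handled first, or it would double-escape
    # the replacements that follow.
    for ch, rep in [("\\", "\\\\"), ("'", "\\'"), ('"', '\\"'),
                    ("\0", "\\0"), ("\n", "\\n"), ("\r", "\\r")]:
        s = s.replace(ch, rep)
    return s

# A page body containing SQL no longer breaks the INSERT statement:
snippet = "SELECT 'temp' FROM t"
sql = "INSERT INTO pages (body) VALUES ('%s')" % escape(snippet)
print(sql)
```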




Re: How to sort results with german umlauts in a UTF8 MySQL Database

2005-07-25 Thread Alec . Cawley
Nico Grubert <[EMAIL PROTECTED]> wrote on 22/07/2005 09:06:25:

> 
> Hi there,
> 
> I have a MySQL 4.1 DB running including a database whose character set
> is set to utf8.
> In the database I have a table "tblmembers" with some records containing
> german umlauts.
> How do I sort results with german umlauts if the database character set
> is set to utf8?

According to http://dev.mysql.com/doc/mysql/en/charset-unicode-sets.html 
you might achieve the effect you want by setting the collation to 
utf8_unicode_ci

Alec





RE: Correct way to use innodb_file_per_table?

2005-07-25 Thread Marvin Wright
Hi,

Can anybody help with this ?

Regards,

Marvin 

-Original Message-
From: Marvin Wright 
Sent: 22 July 2005 10:46
To: Heikki Tuuri; mysql@lists.mysql.com
Subject: RE: Correct way to use innodb_file_per_table?

Hi Heikki,

I've followed your instructions here and it's all worked fine.

Now I currently have a 200 GB shared InnoDB tablespace which is sitting
almost empty now that all databases have their own tablespaces.
I want to reduce this amount but I'm not sure what is the best way to do it.

I know I cannot just remove some of the ibdata files, so I'd like your
advice on what I should do.
You recommend to dump tables before changing then re-import them back.
But if all databases are in their own tablespaces I shouldn't need to do
this dump, should I?

I want to reduce it to about 10 GB; that should be enough for all its
temporary storage and logs.

Any advice on the best way to do this would be great.

Thanks

Marvin.

-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
Sent: 04 March 2005 13:53
To: Mike Debnam; mysql@lists.mysql.com
Subject: Re: Correct way to use innodb_file_per_table?

Mike,

- Original Message -
From: "Mike Debnam" <[EMAIL PROTECTED]>
To: 
Cc: <[EMAIL PROTECTED]>
Sent: Friday, March 04, 2005 2:49 PM
Subject: Re: Correct way to use innodb_file_per_table?


> Heikki,
>
>> the best way would be to symlink whole database directories under the 
>> datadir. Then also an ALTER TABLE keeps the new .ibd file on the 
>> drive you intended it to be on. If you symlink a single .ibd file, 
>> then an ALTER will create the new .ibd file as not symlinked.
>>
>> As an example, let us say you have three databases: 'database1', 
>> 'database2', and 'test'. You may shut down mysqld, copy all the 
>> contents of
>> /datadir/database2 to drive 2, and then symlink the directory
>> /datadir/database2 to drive 2.
>>
>
>
> Hmm, ok. I have just one decent size database though. I want to split 
> the tables in that database between disks. I haven't turned on 
> innodb_file_per_table yet I'm trying to plan it out first, so I don't 
> know the file layout yet. If my data directory is /var/db/mysql and my 
> InnoDB data file is /var/db/mysql/ibdata1 then the table files will be 
> created under /var/db/mysql/MyDatabase/MyTable1.ibd,
> /var/db/mysql/MyDatabase/MyTable2.ibd, etc it sounds like.
>
> Is there a way to split those table files? So I could have something 
> like /data/disk1/MyTable1.ibd, /data/disk2/MyTable2.ibd?

you can move the .ibd file where you want, and put a symlink in place.

But remember that an ALTER will recreate the table to its original database
dir, because ALTER does not know about symlinks.
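Heikki's advice above amounts to moving a whole database directory and leaving a symlink behind. A minimal sketch using throwaway paths (the real paths would be your datadir and second drive, and mysqld must be stopped first):

```shell
# Sketch: relocate one database directory to another drive and leave
# a symlink in the datadir (paths here are hypothetical demo paths).
datadir=/tmp/demo_datadir
drive2=/tmp/demo_drive2
rm -rf "$datadir" "$drive2"                      # clean slate for the demo
mkdir -p "$datadir/database2" "$drive2"
mv "$datadir/database2" "$drive2/database2"      # move the whole db dir
ln -s "$drive2/database2" "$datadir/database2"   # symlink it back
readlink "$datadir/database2"
# -> /tmp/demo_drive2/database2
```

Because ALTER follows the directory symlink (not per-file symlinks), rebuilt .ibd files stay on the intended drive.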

> Thanks for your help.
>
> Mike

Best regards,

Heikki
Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL InnoDB
Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM tables
http://www.innodb.com/order.php

Order MySQL Network from http://www.mysql.com/network/









Quotation marks in string causing repace to not work.

2005-07-25 Thread Gregory Machin
Hi 

Please could you advise. I inserted some web pages into a table and
now I need to update the pages so that the CMS can display them
without the legacy code. There are about 1,000 pages.

I tried the following  

UPDATE temp SET 'file_content' = REPLACE(file_content, '' ,
'');

But it didn't work. I think the problem is that the string I need to
replace/null out contains quotation marks. How can I work around this?
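Two things stand out in the statement above. First, the column name is quoted with single quotes, which MySQL parses as a string literal; identifiers need backticks. Second, quotes inside a string literal must be escaped by doubling them or with a backslash. A sketch with a hypothetical search string (the original was lost in the archive):

```sql
-- Hypothetical legacy snippet; note the backticked column name and
-- the doubled single quotes inside the string literal.
UPDATE temp
   SET `file_content` = REPLACE(file_content,
                                '<font face=''Arial''>', '');
```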

Many Thanks

-- 
Gregory Machin
[EMAIL PROTECTED]
[EMAIL PROTECTED]
www.linuxpro.co.za
Web Hosting Solutions
Scalable Linux Solutions 
www.iberry.info (support and admin)
www.goeducation (support and admin)
+27 72 524 8096




How to sort results with german umlauts in a UTF8 MySQL Database

2005-07-25 Thread Nico Grubert

Hi there,

I have a MySQL 4.1 DB running including a database whose character set
is set to utf8.
In the database I have a table "tblmembers" with some records containing
german umlauts.
How do I sort results with german umlauts if the database character set
is set to utf8?

Using the SQL query

SELECT memberid, lastname
  FROM tblmembers
  ORDER BY lastname

I get the following result:

Makee
Maty
Mayer
März
Müller
Münze
Mebel
Meier

This sort order is wrong according to german sorting rules.

The right sorting order according to german sorting rules is:

Makee
März
Maty
Mayer
Mebel
Meier
Müller
Münze


How can I sort the results in my utf8 database according german sorting
rules?

Thanks in advance.

Kind regards,
Nico 
_
Mit der Gruppen-SMS von WEB.DE FreeMail können Sie eine SMS an alle 
Freunde gleichzeitig schicken: http://freemail.web.de/features/?mc=021179



