script of mysql

2010-07-30 Thread PRATIKSHA JAISWAL
Hi List,

Can somebody please share a script that gives an overview of an installed
MySQL server: backups, created databases, indexes, tables, engines,
replication, etc.?

I would appreciate your help in advance


Thanks
Pratiksha


Re: script of mysql

2010-07-30 Thread walter harms


PRATIKSHA JAISWAL schrieb:
 Hi List,
 
 Can somebody please share a script that gives an overview of an installed
 MySQL server: backups, created databases, indexes, tables, engines,
 replication, etc.?
 
 I would appreciate your help in advance
 
 

You can find tons of scripts on the internet, but you will need to adjust
them to *your* needs.
Running a mysql script is easy:

mysql DATABASE < script

Replace DATABASE with your database name and "script" with your script file.

Read the available documentation on mysql.com to understand how to write
such scripts yourself.
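For example, a script collecting some of the inventory Pratiksha asked about
could be written to a file and fed to the client like this (a minimal sketch;
the database name, credentials, and file path are placeholders to adjust):

```shell
#!/bin/sh
# Write a small overview script; the 'EOF' is quoted so nothing is expanded.
cat > /tmp/overview.sql << 'EOF'
SHOW DATABASES;
SHOW ENGINES;
SHOW TABLE STATUS;
EOF

# Feeding it to the client requires a running server, e.g.:
#   mysql -u root -p mydb < /tmp/overview.sql
grep -c ';' /tmp/overview.sql    # three statements written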


re,
 wh


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: CentOS 4.8 no-install of MySQL 5.1.4X???

2010-07-30 Thread Nunzio Daveri
Thanks Claudio, but I do that all the time.  The question is: where is the
file?  You said to go to the download area, but there are hundreds of files
and not one tar file says RHEL or CentOS.

What is the URL / file name to download and untar please?

Thanks for responding :-)

Nunzio


  

Re: CentOS 4.8 no-install of MySQL 5.1.4X???

2010-07-30 Thread Joerg Bruehe
Nunzio, all,

Nunzio Daveri schrieb:
 Thanks Claudio, but I do that all the time.  The question is: where is the
 file?  You said to go to the download area, but there are hundreds of files
 and not one tar file says RHEL or CentOS.

tar.gz packages are not specific to the Linux distribution, they are
generic. Their sole dependency is on the glibc version, and that is 2.3
(or up) with all reasonably current Linuxes (kernel 2.4 and up).

(With RPMs, it is different: They contain requirement specifications
which list package and version, and so there are specific ones for
current RHEL and SLES versions.)

 
 What is the URL / file name to download and untar please?

It depends on your CPU:
Go to "http://dev.mysql.com/downloads/mysql/5.1.html",
scroll down to the bottom of the list,
you will find 5 generic tar.gz packages:
x86, x86-icc, x86_64, x86_64-icc, ia64.

Those which do not bear "icc" in the name were built using gcc; if you
do not specifically prefer binaries created by ICC, you should take a
gcc-built one.

Pick the one that matches your CPU.
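As a sketch, the choice could be scripted from `uname -m`. The file names
below follow the naming pattern that appears later in this thread
(mysql-5.1.44-linux-x86_64.tar.gz) and may not match the download page
exactly, so verify before downloading:

```shell
#!/bin/sh
# Hypothetical helper mapping `uname -m` output to the generic tar.gz name.
pkg_for_arch() {
    case "$1" in
        i?86)        echo "mysql-5.1.44-linux-i686.tar.gz" ;;
        x86_64|ia64) echo "mysql-5.1.44-linux-$1.tar.gz" ;;
        *)           echo "no generic package for: $1" >&2; return 1 ;;
    esac
}

pkg_for_arch "$(uname -m)"
# Then unpack with:  tar xzf mysql-5.1.44-linux-x86_64.tar.gz
```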


HTH,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
ORACLE Deutschland B.V. & Co. KG,   Komturstrasse 18a,   D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
Amtsgericht Muenchen: HRA 95603





Re: CentOS 4.8 no-install of MySQL 5.1.4X???

2010-07-30 Thread Joerg Bruehe
Hi Nunzio, all (cc'd to the list)!


Nunzio Daveri schrieb:
 Hello Joerg, thanks for replying.  I am at:
 
 http://mysql.llarian.net/Downloads/MySQL-5.1/

I cannot comment on that server; I know neither it nor the origin of
the packages it provides (their own builds, or mirrors of those from
the MySQL Build Team).

 
 I am looking for what you described but cannot find a 5.1.44 for CentOS 4.8
 on an Intel Quad Core CPU???  Any help is most appreciated.  I have no
 problems with Solaris, but need it for CentOS 4.8 specifically, on an Intel
 dual-CPU quad-core 5000-series chipset running 64-bit with 16 GB RAM.

That's OK, the packages built by MySQL (-> Sun -> Oracle) do not differ
between single- and multi-CPU machines.

 
 Based on what you have said, I should be looking for a:
 
 mysql-5.1.44-linux-x86_64.tar.gz

Right.

 
 but all I find is a:
 
 mysql-5.1.44.tar.gz

That's the plain source tarball.

 
 What am I doing wrong please?

You might try the MySQL download site I mentioned:
  http://dev.mysql.com/downloads/mysql/5.1.html

If you are looking for 5.1.44, you will not find it directly on that
page, because the current version is 5.1.49.
But at the very bottom of the page, in the second column, there is a
link "Archives" that will get you to
  http://downloads.mysql.com/archives.php

Click "MySQL Database Server 5.1" (4th entry from the top),
on the next page select 5.1.44,
on the next page the first five entries are generic Linux tar.gz packages.


HTH,
Jörg


 
 From: Joerg Bruehe joerg.bru...@oracle.com
 To: Nunzio Daveri nunziodav...@yahoo.com
 Cc: mysql@lists.mysql.com
 Sent: Fri, July 30, 2010 9:19:29 AM
 Subject: Re: CentOS 4.8 no-install of MySQL 5.1.4X???
 
 Nunzio, all,
 
 Nunzio Daveri schrieb:
 [[...]]
 
 What is the URL / file name to download and untar please?
 
 It depends on your CPU:
 Go to "http://dev.mysql.com/downloads/mysql/5.1.html",
 [[...]]


-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
ORACLE Deutschland B.V. & Co. KG,   Komturstrasse 18a,   D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
Amtsgericht Muenchen: HRA 95603





Re: Variable Expansion Using MySQL Client and .sql Files

2010-07-30 Thread Todd E Thomas

Simple - brilliant. I've modified to cat a temporary script...

1) cat the file as suggested:

cat > ~/payload/scripts/create_mysqldb.sql << EOF
###---
### Create the database
###---
create database puppet;


###---
### Switch to the new database
###---
USE puppet;


###---
### Create the admin account
###---
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON puppet.* TO
'admin'@'localhost' IDENTIFIED BY '$PASSWD_PRIV_ROOT';

(variables are expanded)

###---
### Verify new admin account was added
###---
SELECT 
user,Host,Select_priv,Insert_priv,Update_priv,Delete_priv,Create_priv,Drop_priv,Alter_priv 
from mysql.db;

EOF

2) Execute the script

3) Remove the script
rm -f ~/payload/scripts/create_mysqldb.sql

It is often the simplest things that are most beautiful. Thanks for 
pulling my head out :D
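The trick hinges on the heredoc delimiter: an unquoted EOF lets the shell
expand variables into the generated text, while a quoted 'EOF' keeps them
literal. A minimal sketch (the file paths are illustrative):

```shell
#!/bin/sh
# Unquoted delimiter: $PASSWD_PRIV_ROOT is expanded when the file is written.
PASSWD_PRIV_ROOT='s3cret'
cat > /tmp/expanded.sql << EOF
IDENTIFIED BY '$PASSWD_PRIV_ROOT';
EOF

# Quoted delimiter: the variable reference is written out literally.
cat > /tmp/literal.sql << 'EOF'
IDENTIFIED BY '$PASSWD_PRIV_ROOT';
EOF

cat /tmp/expanded.sql   # IDENTIFIED BY 's3cret';
cat /tmp/literal.sql    # IDENTIFIED BY '$PASSWD_PRIV_ROOT';
```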



TT







On 07/29/2010 03:19 PM, Travis Ard wrote:

mysql << EOF
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON concrete5.db TO 
'admin'@'localhost' IDENTIFIED BY '$PASSWD_PRIV_ROOT';
EOF
   





Re: CentOS 4.8 no-install of MySQL 5.1.4X???

2010-07-30 Thread Nunzio Daveri
Awesome, thanks for helping; that's just what I needed :-)

Nunzio





From: Joerg Bruehe joerg.bru...@oracle.com
To: Nunzio Daveri nunziodav...@yahoo.com
Cc: MySQL General List mysql@lists.mysql.com
Sent: Fri, July 30, 2010 10:49:14 AM
Subject: Re: CentOS 4.8 no-install of MySQL 5.1.4X???

[[...]]

  

Re: Indexes larger than RAM (was: Do you know who can answer this question I posted yesterday please?)

2010-07-30 Thread Joerg Bruehe
Hi!


I am no InnoDB and tuning expert, so I had intended to stay away from
this question. Ok, I'll give some general remarks:


Nunzio Daveri schrieb:
 [[...]]
 
 All, I was running slamdb against one of our QA boxes and noticed that the
 innodb database is 190 GB in size BUT the worrying issue is that the indexes
 are 30 GB in size!!!  When I hit this server hard, it tanks on memory but
 still performs, slower of course ;-)

Having indexes which are larger than RAM is (in itself) not critical.
IMO, it becomes bad only when accesses to these indexes are spread so
wide that even the index pages become subject to frequent IO.

 Any suggestions on what I should do?  I am 
 thinking of doing one of these:

Whether any action is needed, and which, depends on the problem you
experience:

- If the system as a whole (both CPU and disk) has a significant idle
  percentage, it isn't yet maxed out, and I don't expect that adding
  resources would improve performance significantly.

- If your CPUs have a significant "waiting for IO" percentage, then data
  accesses need speedup. This could be done by faster disks, but I would
  expect more results from adding RAM for larger caches.
  This holds especially if your disk throughput is close to the possible
  maximum.
  (Assuming your bulk work is read/select. If it is insert/update, then
  *removing* indexes might reduce the workload, as there are fewer
  indexes to maintain.)

- If your CPUs are busy, then I don't expect any increase of caching
  would help.
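To see which of the three cases applies, the idle and IO-wait percentages can
be read from vmstat (fields 15 and 16, "id" and "wa", in classic vmstat
output; newer versions append an "st" column). A sketch, with a canned sample
line standing in for live output:

```shell
#!/bin/sh
# On a live box:  vmstat 5 3 | tail -1 | awk '{print "idle="$15, "iowait="$16}'
# Here a canned line stands in for real output (values are made up).
echo " 1  0  0 123456 7890 4567 0 0 5 12 300 450 10 5 55 30" |
awk '{printf "idle=%s%% iowait=%s%%\n", $15, $16}'
# High iowait with busy disks suggests more RAM / faster disks;
# high idle means the box is not maxed out yet.
```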

 
 1. Remove all queries, run for a few days, look at the slow query logs and
 then find those queries that really need them and index those specifically
 for performance.

Makes sense (only) if you have indexes which aren't really helpful for
accesses, so they just add maintenance load. If you do few
inserts/updates, an unused index should be paged out and not do much harm.
Comes with the cost of reduced performance during that test time, and
the need to rebuild the essential indexes afterwards. Has the benefit of
getting rid of unused indexes (which just cause maintenance load).

 2. Split the single server into two servers both with 16 gb and 2 quad core 
 cpu's. One master the other a slave.

Makes sense if your CPUs are busy, *and* you can distribute the read
accesses to the two servers (= most accesses are select). If most load
is insert/update, I don't expect a real improvement.
Biggest cost in hardware and admin effort, so I would do this only after
a decent analysis. OTOH, it gives you some (hardware) fault tolerance,
this could be an important argument depending on your requirements.

 3. Just add another 16gb (32GB total) and that should take care of the 
 indexing 
 issue.

Makes sense if the disks are the bottleneck (CPUs are in "waiting for
IO"), so that larger caches will avoid disk accesses.
Assumes your machine supports that amount of RAM (many mainboards have a
limit at 16 GB, AIUI).

 
 Anyone had this problem before???
 
 Oh, this is a single box, 100% mysql only, and it talks to 3 front end
 iPlanet web servers that hit it with a few hundred queries per second.

For a specific answer, the distribution of accesses between read and
write is needed, as well as information which resource is close to the
limit.


HTH,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
   (+49 30) 417 01 487
ORACLE Deutschland B.V. & Co. KG,   Komturstrasse 18a,   D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
Amtsgericht Muenchen: HRA 95603





Re: Indexes larger than RAM (was: Do you know who can answer this question I posted yesterday please?)

2010-07-30 Thread Nunzio Daveri
Thanks again :-)

Nunzio





From: Joerg Bruehe joerg.bru...@oracle.com
To: Nunzio Daveri nunziodav...@yahoo.com; mysQL General List 
mysql@lists.mysql.com
Sent: Fri, July 30, 2010 1:31:54 PM
Subject: Re: Indexes larger than RAM (was: Do you know who can answer this 
question I posted yesterday please?)

[[...]]

  

Re: Indexes larger than RAM (was: Do you know who can answer this question I posted yesterday please?)

2010-07-30 Thread mos

Nunzio Daveri,
Joerg Bruehe gave you a lot of good tips to try and speed things up.
A few hundred queries per second seems a relatively small load to make
the server crawl. I don't have the rest of your thread, but can you
publish some of the slow queries (see the Slow Query Log) and the table
structure?


Mike


At 01:31 PM 7/30/2010, you wrote:

[[...]]






RE: script of mysql

2010-07-30 Thread Travis Ard

Something like this mysql-summary script might be useful: 
http://code.google.com/p/aspersa/wiki/mysql_summary

-Travis


 Date: Fri, 30 Jul 2010 12:19:07 +0100
 Subject: script of mysql
 From: pratikshadjayswa...@gmail.com
 To: mysql@lists.mysql.com

 Hi List,

 Can somebody please help me if they have a script using which we can get an
 idea for installed mysql server, backup, created databases, indexes, tables,
 engines, replication etc...

 I will appreciate your help in advance


 Thanks
 Pratiksha
  



Re: query results group/summed by interval

2010-07-30 Thread Nguyen Manh Cuong
Hi Aveek,

I think Ghulam just wants to count calls for each interval,
so the query should look like this:

select count(*) as total_calls, queue_seconds
from calls group by queue_seconds order by total_calls;


- Original Message -
From: Aveek Misra ave...@yahoo-inc.com
To: Ghulam Mustafa mustafa...@gmail.com, mysql@lists.mysql.com
Sent: Tuesday, July 27, 2010 5:54:13 PM
Subject: RE: query results group/summed by interval

try this ...

 select 5 * floor(seconds/5) as start, 5 * floor(seconds/5) + 5 as end, 
sum(calls) from calls group by 5 * floor(seconds/5);

This should give you an output of the type

+---+--++
| start | end  | sum(calls) |
+---+--++
| 0 |5 |387 |
| 5 |   10 |225 |
|10 |   15 | 74 |
+---+--++
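The same bucketing can be checked outside MySQL with awk over the thread's
sample data (calls and queue_seconds per line); the bucket expression mirrors
the FLOOR() in the query above:

```shell
#!/bin/sh
# Sum calls into 5-second buckets: bucket = 5 * floor(queue_seconds / 5).
printf '250 0\n28 1\n30 2\n56 3\n23 4\n31 5\n33 6\n50 7\n49 8\n62 9\n74 10\n' |
awk '{ bucket = 5 * int($2 / 5); sum[bucket] += $1 }
     END { for (b in sum) print b, sum[b] }' | sort -n
# 0 387
# 5 225
# 10 74
```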


Thanks
Aveek

From: Ghulam Mustafa [mustafa...@gmail.com]
Sent: Tuesday, July 27, 2010 3:53 PM
To: mysql@lists.mysql.com
Subject: query results group/summed by interval

Hi everyone,

I have two columns (seconds, number of calls), and I need to produce a
report which will show the total number of calls in intervals (let's say
a 10-second interval). I know I can do this programmatically in my
script, but I was wondering if it's possible to accomplish this within
MySQL. For example, I have the following data.

+--+---+
|  calls   | queue_seconds |
+--+---+
|  250 |  0.00 |
|   28 |  1.00 |
|   30 |  2.00 |
|   56 |  3.00 |
|   23 |  4.00 |
|   31 |  5.00 |
|   33 |  6.00 |
|   50 |  7.00 |
|   49 |  8.00 |
|   62 |  9.00 |
|   74 | 10.00 |
...
... and so on...
...
+--+---+

Now the result should look like this with a 5-second interval:

+--+---+
| count(*) | queue_seconds |
+--+---+
|  250 |  0.00 |
|  168 |  5.00 |
|  268 | 10.00 |
...
... and so on...
...
+--+---+

I would really appreciate your help.

Best Regards.

--
Ghulam Mustafa





-- 
Best Regards,
Cuongmc.

-- 
Nguyen Manh Cuong
Technical Department - Vien Thong So Company - VTC
Phone: 0912051542
Gmail: philipscu...@gmail.com
YahooMail: philipscu...@yahoo.com
