Re: what journal options should I use on linux?

2007-03-09 Thread Kevin Burton

Just use XFS. It's a solved problem.

Kevin

On 3/8/07, Christopher A. Kantarjiev [EMAIL PROTECTED] wrote:


I'm setting up mysql on linux for the first time (have been using OpenBSD
and NetBSD with UFS until now). The default file system is ext3fs, and I
don't mind that, but it seems really silly to use a journaled file system
for the database data - doubling my writes.

In particular, I have a couple of use cases where I spend a week or so
creating a 17GB data (table) file and its 15GB index file, and then do
sparse queries out of it. I need as much write speed as I can get. I
certainly don't want to have every data block written twice, once to the
journal and once to the file, along with the extra seeks.

What do people with this sort of large problem use on Linux?

Thanks,
chris


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]





--
Founder/CEO Tailrank.com
Location: San Francisco, CA
AIM/YIM: sfburtonator
Skype: burtonator
Blog: feedblog.org
Cell: 415-637-8078


Re: Google like search string to be implemented

2007-03-09 Thread abhishek jain

On 3/8/07, abhishek jain [EMAIL PROTECTED] wrote:




On 3/6/07, abhishek jain [EMAIL PROTECTED] wrote:



 On 3/6/07, Nils Meyer [EMAIL PROTECTED]  wrote:

  Hi,
 
  abhishek jain wrote:
   I am having a database with varchar(255) columns named title,
   extra_info1, extra_info2, extra_info3.
   I want to search all these columns with a search string given to me
  via a
   form. I am using Perl; the string will be like
  
   +abhishek jain -abcd this should be exact
 
  I think MySQL fulltext search in boolean mode would be the perfect
  choice for that:
 
  http://dev.mysql.com/doc/refman/5.0/en/fulltext-boolean.html
 
  You can use search strings like your example with that directly.
 
  regards
  Nils
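
A minimal sketch of what Nils suggests, assuming a MyISAM table named `items` with the four columns mentioned above (the table and index names are hypothetical):

```sql
-- FULLTEXT indexes require MyISAM in MySQL 5.0.
ALTER TABLE items
  ADD FULLTEXT INDEX ft_all (title, extra_info1, extra_info2, extra_info3);

-- Boolean mode understands +word (must occur) and -word (must not occur),
-- so form input in that format can be passed through after escaping.
-- The column list must match a FULLTEXT index exactly:
SELECT *
  FROM items
 WHERE MATCH (title, extra_info1, extra_info2, extra_info3)
       AGAINST ('+abhishek +jain -abcd' IN BOOLEAN MODE);
```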


 Hi Nils and all,
 I have gone through fulltext search and here are a few limitations I find:
 1. It works only with MyISAM; what if I use InnoDB? Anyway, this is
 not a major restriction and can be worked around.
 2. I cannot match within a word: even in Boolean mode the match must
 start at the beginning of a word, and even * would not help since the
 word must start with the given string. This is a major restriction.
 3. I am not sure how to implement it in joins.

 Pl. help,
 Thanks,
 Abhishek jain



Hi guys,
Pl. reply and give an alternative which I can use to match a partial
word in my search string against a table column.
Thanks,
Abhishek jain



Hi,
What if I use multiple LIKE and NOT LIKE clauses for searching?
Pl. reply,
Thanks,
Abhishek jain
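
For what it's worth, the multiple-LIKE approach might look like the sketch below (the table name `items` is hypothetical; note a leading % wildcard cannot use an index, so every row is scanned):

```sql
-- Match a partial word anywhere inside any of the four columns,
-- while excluding rows that contain 'abcd' in any of them.
SELECT *
  FROM items
 WHERE CONCAT_WS(' ', title, extra_info1, extra_info2, extra_info3)
       LIKE '%bhishe%'
   AND CONCAT_WS(' ', title, extra_info1, extra_info2, extra_info3)
       NOT LIKE '%abcd%';
```

Unlike fulltext search, this does match inside a word, which was the main restriction raised earlier in the thread.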


RE: migrating 4.0 to 5.0

2007-03-09 Thread Little, Timothy
Here's what we did and still do :

Our 4.x tables and databases were/are in latin1 and all the 5.x tables 
are/were in utf8.  That means that the entire set of items (every column, 
every table, every database) in the old system (4.1) was latin1 and all the 
destination items in 5.x were entirely utf8.

Summary :
   We used mysqldump to dump the files, then a tool to replace all 
occurrences of latin1 with utf8, then the mysql command-line client to load 
/ execute those resulting dump files.

For reference, our MY.INI files say (not sure if it is actually heeded) :

   default-character-set=latin1

Steps :

1 Here is a single table dump line from the dump portion of my scripts : 

@mysqldump --quick --default-character-set=latin1 -uroot 
--password=secretpassword --port=3306 --skip-set-charset --skip-comments 
--add-drop-table -c -C -h databasehost -r UseThisFileForInput.sql DatabaseToUse 
--tables AddressListTables

2  Then we run a program to search/replace all occurrences of latin1 with 
utf8 (a shareware program called search-replace)... but other replacement tools 
will probably work too:
@sr32 /u /i /p /q /n f:\DailyBackupLogic\*.sql /slatin1 /rutf8

Your replacement methods will be different in form, but likely identical in 
function.

3  Then we use this line to load it into the database :

mysql --local-infile=1 -uroot --password=secretpassword --port=3310 
-DNewDatabasename -hImprovedDatabaseServer -b -C -e \. 
UseThisFileForInput.sql

Tim...
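
As an aside: when only a handful of tables are involved, the same latin1-to-utf8 conversion can be expressed in SQL instead of editing dump files. A sketch (the table name is the one from the dump command above; verify the stored bytes really are latin1 first):

```sql
-- Re-encode the existing column data and change the column charsets:
ALTER TABLE AddressListTables CONVERT TO CHARACTER SET utf8;

-- Changing only the table default (affects future columns, but does
-- not touch existing columns or data):
ALTER TABLE AddressListTables DEFAULT CHARACTER SET utf8;
```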

-Original Message-
From: Matthias Henze [mailto:[EMAIL PROTECTED]
Sent: Monday, March 05, 2007 2:30 AM
To: MySQL General
Subject: migrating 4.0 to 5.0


hi,

I still have serious trouble migrating databases created with 4.0 to 5.0. 
The problem is the charset. I'm connecting to MySQL with PHP, and when 
I try to use the 5.0 db, German special chars are messed up. AFAIK 4.0 uses 
the latin1 charset and 5.0 defaults to utf8. Can someone please give me a hint 
how to migrate? I think I'm just too stupid  :-)

TIA
matthias




Problems with Spam on MySQL guest book

2007-03-09 Thread Gloria McMillan

Dear MySQL general list members,

I have a guest book on a small club web page that has been getting 
regular spam in with the legitimate replies.

I use MySQL and (I think) Apache-style PHP to run the database.  I know 
it is PHP, and am certain that when I started this survey and guest book 
it was Apache PHP.

If anybody has a way that I can add to my PHP code to filter out spam, 
please let me know.
I do not know which of the many MySQL lists to send this to, so if this 
is not the correct list, I apologize and ask you to please redirect me 
to the right list.

Thanks,
Gloria McMillan


Re: what journal options should I use on linux?

2007-03-09 Thread Brent Baisley
Enabling journaling isn't going to halve your performance. Remember, a journal 
is a record of what happened. It is only appended to, not updated, so while 
there is overhead, performance is fairly good. ext3 also has a few different 
journaling options.

Journaling is mainly for quick recovery and corruption prevention in case of a 
crash. This is something you would want for the file system your OS runs on. 
If you can live with your database getting corrupted and can live with the 
time it would take to restore a backup, then you can forgo journaling.
I would create a partition for your OS and then a partition for your data. 
One can be journaled, the other not. You can even optimize the block size on 
the data partition, or use a different file system for your data partition 
altogether.


Here is an interesting article to read on ext3 journaling overhead.
http://www-128.ibm.com/developerworks/library/l-fs8.html






Re: Diagnosing i/o thrashing

2007-03-09 Thread Brent Baisley
When you say 70% iowait, are you referring to vmstat results? There are a lot 
of things that can cause iowait, the most obvious being that the disks are 
busy. In that case giving MySQL more memory won't really help, unless it's 
something that can be solved with caching.

What does your context switching rate look like? It's possible your system is 
busy juggling without actually accomplishing anything. What constitutes a high 
context switch rate depends on the hardware and OS.

In MySQL, what do your Threads_created and Opened_tables numbers look like 
from SHOW STATUS? (not SHOW INNODB STATUS).

- Original Message - 
From: Marcus Bointon [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Thursday, March 08, 2007 1:20 PM
Subject: Diagnosing i/o thrashing


I'm trying to diagnose a server which is frequently racking up over 70% 
iowait when running a big PHP/MySQL app. I've increased mysql memory 
allocations as far as I dare, but it's just not clear to me where the problem 
is. My tables are all innodb, but the numbers in 'show innodb status' are not 
terribly meaningful. It's also unclear whether all the disk activity is down 
to mysql or the php driving it (and if I stop one, of course the other stops 
too).

Though it is showing some swap usage, there's also a fair amount shown as free 
or cached.

The diagnostic suggestions on phpMyAdmin's status page have been useful to 
date, but I'm just not sure what measures will be the most effective.


How might I best track down the root of the problem?

Alternatively, anyone up for a few hours of consultancy?

Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/






Re: Diagnosing i/o thrashing

2007-03-09 Thread Marcus Bointon

On 9 Mar 2007, at 15:22, Brent Baisley wrote:

When you say 70% iowait are you referring to vmstat results?  
There are a lot of things that can be causing iowait, the most  
obvious being the disks are busy. In which case giving MySQL more  
memory won't really help, unless it's something that can be solved  
with caching.


Both top and vmstat. It seems even the most trivial traffic makes the 
disks go nuts. The root of the problem is whatever is causing so much 
disk activity. I guess if MySQL buffers are set too big, it could be 
squeezing other apps; OTOH, if they are too small, MySQL itself will 
be generating more disk activity. However, there's no particular 
evidence that I'm hitting swap. A key issue is that I can't tell if 
it's MySQL or PHP that's causing it - if I stop one, of course the 
other stops too. In top, mysql and php processes are often only 
showing a few % CPU, yet iowait can be over 80%, so it's really not 
clear what's eating it.


From what I've read, next time I'll host mysql on XFS partitions...

What does your context switching rate look like? It's possible your  
system is busy juggling without actually accomplishing anything.  
What constitutes a high context switch rate is dependent on the  
hardware and os.


That sounds plausible, and I'd not looked at that. Most of the time 
it seems to be around 200, with busy periods of 500 and peaks of 
1100. I don't know if that's high, low or about right.


In MySQL, what do your Threads_created and Opened_tables numbers 
look like from SHOW STATUS? (not SHOW INNODB STATUS).


I have 14,000 opens after 1.5 million queries. There are some  
databases that are low traffic but that have around 175 tables each.


To all of you that have offered help, thank you very much. I've now  
got Peter Zaitsev on the case, so hopefully he'll be able to pinpoint  
my problem areas.


Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of [EMAIL PROTECTED] CRM solutions
[EMAIL PROTECTED] | http://www.synchromedia.co.uk/






Int(4) or int(8)

2007-03-09 Thread Bruno Rodrigues Silva
hi all

The MySQL manual informs that: 

Ex. century int(4)

The result is that values in the century column usually are
displayed four digits wide. However, it's important to understand that the
display width is unrelated to the range of the data type. The display
width specified for a column affects ONLY the maximum number of
digits MySQL uses to DISPLAY column values.

I understand that these values are unrelated to the range of data type values; 
however, I did not see any difference when I use int(4) or int(8).

Some idea?
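
For what it's worth, the display width only becomes visible when MySQL pads the output, e.g. with ZEROFILL; a quick sketch (hypothetical table):

```sql
CREATE TABLE widths (a INT(4) ZEROFILL, b INT(8) ZEROFILL);
INSERT INTO widths VALUES (42, 42);
SELECT a, b FROM widths;
-- a displays as 0042 and b as 00000042; without ZEROFILL both
-- would print as plain 42, which is why int(4) and int(8) look
-- identical in ordinary use. Storage is 4 bytes either way.
```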

Regards

Bruno







Re: what journal options should I use on linux?

2007-03-09 Thread Chris Kantarjiev
Brent,

Thanks for your response.

 Enabling journaling isn't going to halve your performance.

I was careful to say write speed, not performance. 

I already have my data and index files on separate drives (raid
volumes, actually, each made up of complete drives). What I see
is that the index drive is being clobbered during table creation,
because mysql can't keep it all in memory. This is a long standing
problem with MyISAM files, where the index code isn't 64-bit safe.

Yes, 64-bit. This is a quad-processor Opteron with 16GB of RAM.
The index file is 15GB these days, so even if MyISAM *could*
hold more than about 3GB of index in its data structures, it
probably wouldn't all fit in memory.

Did I mention that this is a big data problem?

Please don't tell me to use InnoDB. It's much too slow for this purpose.

 Here is an interesting article to read on ext3 journaling overhead.
 http://www-128.ibm.com/developerworks/library/l-fs8.html

Interesting, if only to show how dangerous it is to publish
results that aren't understood. The author doesn't say anything
about testing methodology, so I have no idea whether or not to
trust the results. 16MB files are toys; they easily fit completely
in memory and Linux makes it difficult to clear the buffer cache
between runs. Was the machine rebooted between every test? When 
he runs these tests again with files that are bigger than available
RAM, I'll be a lot more interested.




Re: what journal options should I use on linux?

2007-03-09 Thread Brent Baisley

Yes, the article did lack a lot of methodology information.

Since you already have things separated and it seems your data isn't critical 
- meaning losing it would be a pain as opposed to life threatening - you can 
turn off journaling for those file systems. You may actually want to look at 
other file systems for those volumes too.
I'm not going to recommend InnoDB; it doesn't seem like you need transactions, 
and clearly you are outside the physical size where InnoDB performs well.


This might be a silly question, but did you max out the myisam_sort_buffer_size 
and key_buffer_size settings? Both can go up to 4GB. After indexes are created 
you can drop them back down, since those large values are usually unnecessary 
for daily use.
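
For reference, both settings can be raised for a bulk load and lowered afterwards without a restart; a sketch with illustrative values only (SUPER privilege required):

```sql
-- Just under the 4GB ceiling mentioned above:
SET GLOBAL key_buffer_size         = 4294967295;
SET GLOBAL myisam_sort_buffer_size = 4294967295;

-- ... run the bulk load / index build, then drop back down, e.g.:
SET GLOBAL key_buffer_size = 268435456;  -- 256MB
```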


You can also turn off indexing when loading the data, then turn it back on when 
the data is loaded (if you haven't already). But I've had some funky things 
happen when I've done this with large datasets (500 million records) and 
auto-increment fields.
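
For MyISAM, the turn-indexing-off step described above looks roughly like this (table and file names are hypothetical). Note that DISABLE KEYS suspends only non-unique indexes; a unique key, such as one relied on by INSERT IGNORE, is still maintained during the load:

```sql
ALTER TABLE big_table DISABLE KEYS;   -- stop maintaining non-unique indexes
LOAD DATA INFILE '/tmp/batch.csv' INTO TABLE big_table;
ALTER TABLE big_table ENABLE KEYS;    -- rebuild them in one bulk pass
```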









Re: what journal options should I use on linux?

2007-03-09 Thread Chris Kantarjiev
 Yes, the article did lack a lot of methodology information.

This one is *lots* better.

http://www.bullopensource.org/ext4/sqlbench/index.html

Losing data is always bad - that's why it's in a database, not a filesystem.
But these systems have been pretty reliable and are on UPS, etc. This
is a created table, so it's not life critical ... but it is expensive.

 This might be a silly question, but did you max out the 
 myisam_sort_buffer_size 
 and key_buffer_size settings? Both can go up to 4GB.

Yup. Not nearly big enough - index is 15GB!
And the code seems to misbehave, leading to crashes, at the limit.

 You can also turn off indexing when loading the data, then turn it back on 
 when the data is loaded (if you haven't already).

We need INSERT IGNORE, so this isn't really an option for us,
unfortunately.

I'm going to mount them as ext2fs for the time being.

Best,
chris




Re: what journal options should I use on linux?

2007-03-09 Thread Brent Baisley

Would using merge tables be an option? You would have to split your data, but 
then each individual table could be within MySQL's memory limits. Divide and 
conquer. You would need a way to work around your INSERT IGNORE requirement, 
but the end result could be much faster.

The solution I use for duplicate filtering is to create a load table which 
loads a set of records, then compares those records against the merge table 
for duplicates, deleting any found. Then the load table is added to the merge 
table and the process is repeated for the next batch of data.

Adding 2.5 million rows to a 500 million row table takes about 2 hours, a 
daily occurrence. The bottleneck is that a record is considered a duplicate 
if another similar one exists within a 24 hour time period. So it's a range 
match rather than a direct comparison, which means I couldn't use INSERT 
IGNORE anyway.
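
A sketch of that load-table cycle (all table and column names are hypothetical; the 24-hour window is what makes this a range match rather than a key comparison):

```sql
-- 1. Load the new batch into an empty staging table.
LOAD DATA INFILE '/tmp/batch.csv' INTO TABLE load_table;

-- 2. Delete staged rows that already have a similar row in the merge
--    table within 24 hours (multi-table DELETE syntax):
DELETE load_table
  FROM load_table
  JOIN merged
    ON merged.item_id = load_table.item_id
   AND merged.created BETWEEN load_table.created - INTERVAL 24 HOUR
                          AND load_table.created + INTERVAL 24 HOUR;

-- 3. Rename the survivors and append them to the MERGE definition.
RENAME TABLE load_table TO part4;
ALTER TABLE merged UNION = (part1, part2, part3, part4);
```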

Thanks for the link.







Re: what journal options should I use on linux?

2007-03-09 Thread Chris Kantarjiev
Merge tables aren't an easy option here. There's a higher level
data partitioning that we're putting into place, though, and it's
been shown to help a lot. But I also want to get as much out of the
file system as I can.

 The solution I use for duplicate filtering is to create a
 load table which loads a set of records, then compares those
 records against the merge table for duplicates, deleting any found.
 Then the load table is added to the merge table and the process is
 repeated for the next batch of data.

I don't think this will help us, but it's an interesting technique.
We use staging tables to cut the load in a bunch of places. 

I think the true answer to this particular problem lies outside SQL
and instead with a private index structure that is tuned for dealing
with duplicates...it would help if the MyISAM engine was a little
more clever about really large indexes.

Best,
chris




Crashed Again!

2007-03-09 Thread Drew Jenkins
I built a new server at my server farm because the old one was corrupt. I had 
a hunch it had to do with the new 500 GB hard drive I bought. I told them to 
check it out. Apparently, they didn't. It zapped my database. I went to 
deinstall and reinstall mysql from the FreeBSD port, but when I run it I get 
this very unhelpful error:

# Starting mysqld daemon with databases from /var/db/mysql
STOPPING server from pid file /var/db/mysql/server312.web.vi.pid
070309 17:19:12  mysqld ended


Helps a lot, doesn't it? Where do I begin with this?
TIA,
Drew

 

Re: Crashed Again!

2007-03-09 Thread Drew Jenkins
Never mind. I forgot this command:

/usr/local/bin/mysql_install_db --user=mysql

Drew2


Problems w/ Grant Tables

2007-03-09 Thread Drew Jenkins
Hi;
I rebuilt mysql and injected the database, which had user mysqluser and a root 
user. Both got wiped out. It looks like the data is intact other than that. 
Look at this:

mysql> CREATE USER dev IDENTIFIED BY 'pass';
Query OK, 0 rows affected (0.04 sec)

mysql> CREATE USER mysqluser IDENTIFIED BY 'pass';
ERROR 1396 (HY000): Operation CREATE USER failed for 'mysqluser'@'%'
mysql>
mysql> grant all on mrtablecloth.* to dev identified by 'pass';
Query OK, 0 rows affected (0.00 sec)

mysql> Bye
server312# mysql -udev -ppass
ERROR 1045 (28000): Access denied for user 'dev'@'localhost' (using password: 
YES)

mysql> use authdaemon;
Database changed
mysql> show tables;
Empty set (0.02 sec)

Is authdaemon the database that has the grant tables? What gives? How do I add 
users that I can use?
TIA,
Drew
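
For reference, the grant tables live in the `mysql` system database, not in an application database such as authdaemon. A quick way to see what survived the rebuild:

```sql
-- List the accounts the server actually knows about:
SELECT User, Host FROM mysql.user;

-- And the privileges of a particular one:
SHOW GRANTS FOR 'dev'@'%';
```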

 

Re: Problems w/ Grant Tables

2007-03-09 Thread Drew Jenkins
Problem solved:
/usr/local/bin/mysql_fix_privilege_tables


Re: Problems with Spam on MySQL guest book

2007-03-09 Thread obed

On 3/9/07, Gloria McMillan [EMAIL PROTECTED] wrote:




Well, I think you should ask on a PHP list, but anyway, you could try a 
CAPTCHA:

http://www.roscripts.com/PHP_captcha_script,news,88,ID,news_details






--

http://www.obed.org.mx --- blog
