Re: Maximum number of user variables

2005-05-12 Thread Neculai Macarie
 Not that I'm aware of. What type of conversions are you doing that you
 need 30,000 user vars? An easy solution would be to try it and find out :)

I need to move multiple linked entries (in around 12 tables) from one
running server to another. I'm using auto_increment columns all over the place,
and I have the following problem: I need to move (no replication is possible)
the information in those 12 tables to an identical functioning system (same
software, same db, but another server), and I need to preserve the relations
between the tables. The problem is that the systems evolve independently, so I
can have the same id for 2 different entries (e.g. on the first system id 10
may belong to User1, but on the second system id 10 would belong to another
user).

I want to make a script that dumps the info into an sql file that I can run
on the other server, something like this:
insert into customers () values ();
select @current_customer_id:=max(id) from customers;

insert into customer_categ (customer_id, name...) values
(@current_customer_id, categ...);
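On a single connection, LAST_INSERT_ID() can capture the new id without the race inherent in `select max(id)`; an alternative sketch of the same pattern (column names invented for illustration):

```sql
INSERT INTO customers (name) VALUES ('Customer A');
SET @current_customer_id = LAST_INSERT_ID();

INSERT INTO customer_categ (customer_id, name)
VALUES (@current_customer_id, 'Category X');
```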

And I have around 20,000-30,000 records that are linked together (I cannot use
a single variable because the tables are linked 3 levels deep and I need to
keep the intermediate ids in variables).

-- 
mack /


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



RE: table optimisation

2005-05-12 Thread Thomas Sundberg

 -Original Message-
 From: Seena Blace [mailto:[EMAIL PROTECTED] 
 Sent: den 11 maj 2005 23:58
 To: mysql@lists.mysql.com
 Subject: table optimisation
 
 Hi,
 I have been noticing a table performance issue as the number of rows
 grows. How do I tune that table?

Adding an index on the column or columns that are often used in queries could
be a start. Performing a full table scan on a large, unindexed table can take
quite a long time as the number of rows increases.

/Thomas





Re: Maximum number of user variables

2005-05-12 Thread Dominicus Donny
- Original Message - 
From: Neculai Macarie [EMAIL PROTECTED]
To: Mysql mysql@lists.mysql.com
Sent: Thursday, May 12, 2005 1:20 PM
Subject: Re: Maximum number of user variables


 Not that I'm aware of. What type of conversions are you doing that you
 need 30,000 user vars? An easy solution would be to try it and find out :)

 I need to move multiple linked entries (in around 12 tables) from one
 running server to another. I'm using auto_increments all over the place and
 I have the following problem: I need to move (no replication possible) the
 information in those 12 tables to an identical functioning system (same
 software, same db, but other server) and I need to preserve the relations
 between the tables. The problem that I have is that the systems evolve
 independently and I can have the same id for 2 different entries on the
 system (e.g. on the first system I can have the id 10 for User1, but on
 the second system id 10 would be for another user).

Perhaps after you dump the structure and data into sql files,
you could temporarily remove the AUTO_INCREMENT attribute from those
columns.
Then start inserting.
After that, add the AUTO_INCREMENT attribute again.
Or use bulk copy (if I'm not mistaken), or any other method, to copy it as-is
to the new db servers
(compress it first).

Donny. 




RE: table optimisation

2005-05-12 Thread Seena Blace
How can I check which tables have how many indexes, and on which columns?

Thomas Sundberg [EMAIL PROTECTED] wrote:
 -Original Message-
 From: Seena Blace [mailto:[EMAIL PROTECTED] 
 Sent: den 11 maj 2005 23:58
 To: mysql@lists.mysql.com
 Subject: table optimisation
 
 Hi,
 I have been noticing a table performance issue as the number of rows
 grows. How do I tune that table?

Adding an index on the column or columns that are often used in queries could
be a start. Performing a full table scan on a large, unindexed table can take
quite a long time as the number of rows increases.

/Thomas




-
Yahoo! Mail
 Stay connected, organized, and protected. Take the tour

Re: table optimisation

2005-05-12 Thread Peter Brawley




Seena,

 How can I check which tables have how many indexes, and on which columns?

http://dev.mysql.com/doc/mysql/en/show-index.html

and alternatively, if you're using 5.0.3 or later:
http://dev.mysql.com/doc/mysql/en/statistics-table.html

e.g. an approximate equivalent of SHOW KEYS FROM tbl [FROM db] would be

 SELECT *
 FROM information_schema.statistics
 WHERE table_name = 'tbl'
 [AND table_schema = 'db']

PB

-

Seena Blace wrote:

 How can I check which tables have how many indexes, and on which columns?

 Thomas Sundberg [EMAIL PROTECTED] wrote:

 -Original Message-
 From: Seena Blace [mailto:[EMAIL PROTECTED]]
 Sent: den 11 maj 2005 23:58
 To: mysql@lists.mysql.com
 Subject: table optimisation

 Hi,
 I have been noticing a table performance issue as the number of rows
 grows. How do I tune that table?

 Adding an index on the column or columns that are often used in queries could
 be a start. Performing a full table scan on a large, unindexed table can take
 quite a long time as the number of rows increases.

 /Thomas

 -
 Yahoo! Mail
 Stay connected, organized, and protected. Take the tour

No virus found in this incoming message.
Checked by AVG Anti-Virus.
Version: 7.0.308 / Virus Database: 266.11.8 - Release Date: 5/10/2005
  





Re: Maximum number of user variables

2005-05-12 Thread Neculai Macarie
  Not that I'm aware of. What type of conversions are you doing that you
  need 30,000 user vars? An easy solution would be to try it and find out
  :)

  I need to move multiple linked entries (in around 12 tables) from one
  running server to another. I'm using auto_increments all over the place
  and I have the following problem: I need to move (no replication
  possible) the information in those 12 tables to an identical functioning
  system (same software, same db, but other server) and I need to preserve
  the relations between the tables. The problem that I have is that the
  systems evolve independently and I can have the same id for 2 different
  entries on the system (e.g. on the first system I can have the id 10 for
  User1, but on the second system id 10 would be for another user).

 Perhaps after you dump the structure and data into sql files,
 you could temporarily remove the AUTO_INCREMENT attribute from those
 columns.
 Then start inserting.
 After that, add the AUTO_INCREMENT attribute again.
 Or use bulk copy (if I'm not mistaken), or any other method, to copy it
 as-is to the new db servers
 (compress it first).

Donny,

Thanks for the answer. Dumping the structure and data alone (mysqldump style)
does not help me, because the tables are linked and I must update the linked
tables with the new ids.

-- 
mack /





mysqlxml

2005-05-12 Thread mel list_php
Hi guys,
I was trying to download the mysqlxml patch for MySQL 5.0, but didn't succeed
from the url:
http://d.udm.net/bar/myxml/mysql-xml.tar.g

does anybody know where I could find it?
Has anybody tried to use it, or does anyone have a link to a doc/tutorial in
addition to the presentation by Alexander Barkov
(http://mysql.r18.ru/~bar/myxml/XMLXpathSupportInMySQL.pdf)?

Thanks,
Melanie
_
Be the first to hear what's new at MSN - sign up to our free newsletters! 
http://www.msn.co.uk/newsletters



stopped while creating index.

2005-05-12 Thread Tom
Hey, all.

Hardware: Itanium 2 with 4 x 1.3 GHz CPUs and 8 GB RAM.
OS: Red Hat Enterprise Advanced Server 3.0
MySQL: mysql-standard-4.0.24-gnu-ia64-glibc23

I created an index on a large table with more than 100,000,000 records with the
following command:
mysql> create index scn_ra on twomass_scn (ra);

It ran well for about 20 hours, although it used nearly 8 GB of RAM. Then it
stopped, and the error log says "Warning: Enabling keys got errno 116,
retrying". It has not progressed since.
I cannot figure out why, and I'd really appreciate any suggestions.

Thanks.



Eswine.



Re: Report(query)

2005-05-12 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Seena Blace [EMAIL PROTECTED] writes:

 Hi,
 I want a report like this:

 date     process  pending  wip

 5/10/05  10       30       40
 5/11/05  09       28       60

 Summary  19       58       100

 select date, sum(process), sum(pending), sum(wip) from tab
 group by date;
 Which command do I use to show the heading, and which to show the subtotal
 and grand total?

The last row can be calculated with the WITH ROLLUP modifier of GROUP BY
(requires MySQL 4.1.1 or later).  Nicely formatting the output with
headings etc. is not the business of a database engine and should be
done by your application.
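For the table layout in the original post, the rollup query would look roughly like this (column names taken from the thread; the summary row appears with NULL in the date column):

```sql
SELECT date, SUM(process), SUM(pending), SUM(wip)
FROM tab
GROUP BY date WITH ROLLUP;
```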





Re: mysqlxml

2005-05-12 Thread Bastian Balthazar Bux
mel list_php wrote:
 Hi guys,
 
 I was trying to download the mysqlxml patch for mysql 5.0 but didn't
 succeed from the url:
 http://d.udm.net/bar/myxml/mysql-xml.tar.g
 
 does anybody know where I could find it?
 Did anybody tried to use it or have any link to a doc/tutorial in
 addition to the presentation of Alexander Barkov
 (http://mysql.r18.ru/~bar/myxml/XMLXpathSupportInMySQL.pdf)?
 
 Thanks,
 Melanie

try
http://mysql.r18.ru/~bar/myxml/
 
 _
 Be the first to hear what's new at MSN - sign up to our free
 newsletters! http://www.msn.co.uk/newsletters
 
 


-- 
 
. These pages are best viewed by coming to my house and looking at   .
. my monitor. [S. Lucas Bergman (on his website)].
 




RE: stopped while creating index.

2005-05-12 Thread Partha Dutta
What type of table is this? MyISAM or InnoDB?
What are your system variable settings when you issued the CREATE INDEX
command?

If this is a MyISAM table, then MySQL will spend time re-creating the data
file first before creating ALL of the indexes including the new one.

If you already have indexes on the table, you can speed up some of the
processing time:

SET SESSION myisam_repair_threads = 2;
  (This enables a parallel index build for the indexes.)
SET SESSION myisam_sort_buffer_size = SOME LARGE VALUE;
  (This buffer is used during CREATE INDEX for sorting the MyISAM indexes.)
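Putting those settings together, a sketch of the whole session against the table from the original post (the buffer value is an arbitrary example, not a recommendation):

```sql
SET SESSION myisam_repair_threads = 2;
SET SESSION myisam_sort_buffer_size = 268435456;  -- example: 256 MB; tune to your RAM
CREATE INDEX scn_ra ON twomass_scn (ra);
```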

--
Partha Dutta, Senior Consultant
MySQL Inc, NY, USA, www.mysql.com
 
Are you MySQL certified?  www.mysql.com/certification
 
 -Original Message-
 From: Tom [mailto:[EMAIL PROTECTED]
 Sent: Thursday, May 12, 2005 5:12 AM
 To: mysql@lists.mysql.com
 Subject: stopped while creating index.
 
 Hey, all.
 
 Hardware: Itanium 2 with 4 x 1.3 GHz CPUs and 8 GB RAM.
 OS: Red Hat Enterprise Advanced Server 3.0
 MySQL: mysql-standard-4.0.24-gnu-ia64-glibc23

 I created an index on a large table with more than 100,000,000 records with
 the following command:
 mysql> create index scn_ra on twomass_scn (ra);

 It ran well for about 20 hours, although it used nearly 8 GB of RAM. Then it
 stopped, and the error log says "Warning: Enabling keys got errno 116,
 retrying". It has not progressed since.
 I cannot figure out why, and I'd really appreciate any suggestions.
 
 
 
 Eswine.






Re: Multi-Language setup

2005-05-12 Thread Gleb Paharenko
Hello.

Older versions of MySQL have very limited support for localization
and character sets. In general you may assign only one global server
character set, and other objects in the database don't have their own
character sets.

Mike Blezien [EMAIL PROTECTED] wrote:

 Unfortunately, if I am reading the docs correctly, this is only supported in
 MySQL 4.1; older versions do not support changing or assigning character
 sets. Or am I wrong on this??

 Thx's



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com







Re: table optimisation

2005-05-12 Thread Gleb Paharenko
Hello.

It depends on the table's type. For MyISAM see:

  http://dev.mysql.com/doc/mysql/en/optimize-table.html

For InnoDB:

  http://dev.mysql.com/doc/mysql/en/innodb-file-defragmenting.html
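For a MyISAM table, the statement behind the first link boils down to a single command (table name hypothetical):

```sql
OPTIMIZE TABLE mytable;  -- defragments the data file and sorts the index pages
```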





Seena Blace [EMAIL PROTECTED] wrote:

 Hi,
 I have been noticing a table performance issue as the number of rows
 grows. How do I tune that table?

 thanks
 -Seena

 -
 Do you Yahoo!?
 Read only the mail you want - Yahoo! Mail SpamGuard.



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com







write a spatial query

2005-05-12 Thread PRASHANT N
hi,

i have created / imported a table into mysql through the mysqlgisimport
libraries, and it has created a table with the following structure:

Field name  Type              Allow nulls?  Key      Default value  Extras
id          int(10) unsigned  No            Primary  NULL           auto_increment
NAME        varchar(100)      No            None
TYPE        varchar(100)      No            None
SPEED_LIMI  int(11)           No            None     0
ONE_WAY     int(11)           No            None     0
geo         geometry          No            Indexed

now i need to write a sample query to demonstrate the gis / spatial
capabilities of mysql...
i have gone through all the web but didn't find an example for Buffer(g,d) or
Distance() functions..
please help out in writing these types of mysql queries

thanks & regards

prashant


___ 
Share diaries and photos with your friends and family.

http://www.homemaster.net - Homemaster. Come Together. Online.





Solution to slow queries (Resolved, kinda)

2005-05-12 Thread Paul Halliday
Hi,

First of all, thanks to everyone that provided pointers on this matter.
The route I chose to take was to make 2 tables. One is for cumulative
network stats; this table can be used for the weekly,monthly,yearly
reports. I also created a table for daily stats which will be dropped
at midnight each day.

So I whipped up a simple shell script that looks like this:

mysql -B --user=flow --password=hi flowdb -e "DROP TABLE traffic"
mysql -B --user=flow --password=hi flowdb < /home/flow/Code/create_flowdb.sql

and a crontab entry that looks like:

0   0   *   *   *   /home/flow/Code/db_rollover.sh > /dev/null 2>&1

the report entries look like this:

*/5  *   *   *   *   /home/flow/Reports/incident_report.tcl > /dev/null 2>&1

*/20 *   *   *   *   /home/flow/Reports/traffic_report.tcl > /dev/null 2>&1

Now looking at cron's log from last night I see:

May 12 00:00:00 watcher cron[84039]: (flow) CMD
(/home/flow/Code/db_rollover.sh > /dev/null 2>&1)
May 12 00:00:00 watcher cron[84040]: (flow) CMD
(/home/flow/Reports/traffic_report.tcl > /dev/null 2>&1)
May 12 00:00:00 watcher cron[84041]: (flow) CMD
(/home/flow/Reports/incident_report.tcl > /dev/null 2>&1)

So the script did indeed run, yet it did not drop the table. The
script works fine from the command line so I guess because the other
programs were running too it could not drop the table? Those scripts
are just doing selects, no updates, but there is the possibility that
the program which populates the db was running at the same time too.

Is there a way to force the table drop? (Without adding checks to the
shell script)


Thanks.

_
Paul Halliday
http://dp.penix.org

Diplomacy is the art of saying "Nice doggie!" till you can find a rock.




Re: query to lowercase database columns names

2005-05-12 Thread zzapper
On Wed, 11 May 2005 15:08:50 +0200,  wrote:

Selon Jay Blanchard [EMAIL PROTECTED]:

 [snip]
 I've got a database (converted from an Excel spreadsheet to MySQL) which has
 mixed case column names and

With advice from this thread, what I ended up doing was

show create table tbl_products;

To get syntax

such

`01_Desc` varchar(255) default NULL,
`01_Lcode` varchar(255) default NULL,

I then used a text editor www.vim.org to generate the following mysql command 
using VIM regexp
substitutes

alter table tbl_products
change `01_Desc` `p01_desc` varchar(255) default NULL,
change `01_Lcode` `p01_lcode` varchar(255) default NULL,
etc

eg VIM commands

:%s#^[^ ]\+#change  p\L
:%s# p`# `p#

This worked very well


-- 
zzapper
vim -c :%s%s*%Cyrnfr)fcbafbe[Oenz(Zbbyranne%|:%s)[[()])-)Ig|norm Vg?
http://www.rayninfo.co.uk/tips/ vim, zsh & success tips





UPDATE Query

2005-05-12 Thread shaun thornburgh
Hi,
I am getting an error on the following query but can't understand why;
the syntax looks fine to me!

mysql> UPDATE CSV_Upload_Data SET SUBSTRING(PRACT_ASCII, 16, 6) = '11'
WHERE Insertion_ID = 190716;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax to use
near 'SUBSTRING(PRACT_ASCII, 16, 6) = '11' WHERE Insertion_ID = 190716'
at line 1
mysql>

Any advice would be greatly appreciated.



RE: UPDATE Query

2005-05-12 Thread Partha Dutta
If you are trying to set the 6 characters starting at position 16 of your
column to '11', then you can't use SUBSTRING on the LHS, only on the RHS:

UPDATE CSV_Upload_Data SET PRACT_ASCII =
CONCAT(SUBSTRING(PRACT_ASCII, 1, 15), '11',
SUBSTRING(PRACT_ASCII, 22))
WHERE Insertion_ID = 190716;

--
Partha Dutta, Senior Consultant
MySQL Inc, NY, USA, www.mysql.com
 
Are you MySQL certified?  www.mysql.com/certification
 

 -Original Message-
 From: shaun thornburgh [mailto:[EMAIL PROTECTED]
 Sent: Thursday, May 12, 2005 9:47 AM
 To: mysql@lists.mysql.com
 Subject: UPDATE Query
 
 Hi,
 
 I am getting an error on the following query but can't understand why;
 the syntax looks fine to me!

 mysql> UPDATE CSV_Upload_Data SET SUBSTRING(PRACT_ASCII, 16, 6) = '11'
 WHERE Insertion_ID = 190716;
 ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
 that corresponds to your MySQL server version for the right syntax to use
 near 'SUBSTRING(PRACT_ASCII, 16, 6) = '11' WHERE Insertion_ID =
 190716'
 at line 1
 mysql>
 
 Any advice would be greatly appreciated.
 
 
 





Re: write a spatial query

2005-05-12 Thread Peter Brawley
Prashant,

 i have gone thru all the web but didnt find an example for buffer(g,d)
 or Distance() functions..
 please help out in writing these type of mysql queries

According to the manual, Distance(), Related() & the functions listed in
18.5.3.2, including Buffer(), aren't yet implemented.
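The MBR-based relations that are implemented in MySQL 4.1 can serve for a demo in the meantime; a sketch of a spatial query that does work against the imported table (table name `roads` is hypothetical, since the post doesn't give one):

```sql
-- find all features whose bounding box lies inside a query rectangle
SELECT NAME, TYPE
FROM roads
WHERE MBRContains(
        GeomFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))'),
        geo);
```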

PB
-
PRASHANT N wrote:

 hi,
 i have created / imported a table into mysql through the mysqlgisimport
 libraries, and it has created a table with the following structure:

 Field name  Type              Allow nulls?  Key      Default value  Extras
 id          int(10) unsigned  No            Primary  NULL           auto_increment
 NAME        varchar(100)      No            None
 TYPE        varchar(100)      No            None
 SPEED_LIMI  int(11)           No            None     0
 ONE_WAY     int(11)           No            None     0
 geo         geometry          No            Indexed

 now i need to write a sample query to demonstrate the gis / spatial
 capabilities of mysql...
 i have gone through all the web but didn't find an example for Buffer(g,d) or
 Distance() functions..
 please help out in writing these types of mysql queries

 thanks & regards

 prashant
___ 
Share diaries and photos with your friends and family.

http://www.homemaster.net - Homemaster. Come Together. Online.
 




Re: Preventing slaves from falling behind masters...

2005-05-12 Thread Greg Whalin
Donny Simonton wrote:
With MySQL you should ONLY use RAID10.  Everything else is not worth your
time.

I would argue that a large stripe (RAID0) would be a better solution for 
slaves in a large replicant network.  Why waste the drive space and 
performance on a RAID10 when you have multiple replicants for HA.  That 
said, all of our DBs are RAID10 as we never know which ones we may need 
to put in as master (if master fails).


As long as you are using 15k SCSI drives, on both your master and your
slave, your slave should rarely ever fall behind.  Especially if you are
doing less than 1,000 inserts per second on the master.  Otherwise you
should be just fine.
The only time our slaves ever fall behind, is when you delete 40 records
from table A then insert 40 new ones.  But we do that for 600k records, so
we do a few million deletes and inserts in a short period of time.

Given that the slaves will never start a query until the master has
finished it and passed it along to the slave, the slave is ALWAYS
running a bit behind the master. And, if you are using MyISAM tables
and running with low-priority-updates (as the MySQL guide suggests for
performance on slaves), then it is likely that a heavily hit replicant
will lag to a noticeable level under high traffic. InnoDB should
lessen this a bit, as it does not take full table locks and you can have
it do dirty reads.

As for our replicant cluster, we do not run with low-priority-updates,
simply because we found it increased replication lag and exposed
replication bugs in our application (write to the rw pool, immediately
look for that record in the ro pool, and not find it). We also put our
more powerful hardware in our replicant cluster, as our rw machine does a
fraction of the traffic (1/10th the qps of any of our replicants). Even
with this, our replicants sometimes fall behind, but never more than 1 to
2 seconds.


Donny

-Original Message-
From: Kevin Burton [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 11, 2005 7:30 PM
To: mysql@lists.mysql.com
Subject: Preventing slaves from falling behind masters...
If you're running in a master/slave environment, and your application
is using the slave too often, replication can fall behind, which can
then confuse your application.
This can happen if the IO performance of the master and slaves is
equivalent and you're performing INSERT/UPDATE/DELETE with a load of 1
on the master. Then when the transactions move to the slave, all the IO
is used up and any additional SELECTs will just cause the slave to fall
behind.
Has anyone else seen this? One way I was thinking of solving this is to
use RAID5 on our master and then RAID0 on the slaves so that the master
is a hard bottleneck. Then the slaves have no problem running
transactions via replication and have capacity available to run SELECTs.
Any other ideas?
Any other ideas?
--
Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com.
See irc.freenode.net #rojo if you want to chat.
Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
  Kevin A. Burton, Location - San Francisco, CA
 AIM/YIM - sfburtonator,  Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412





Re: SATA vs SCSI

2005-05-12 Thread Scott M. Grim
I've fairly extensively (although not necessarily scientifically) tested
SATA 150 vs. SCSI U320 and find that if you're doing a lot of random reads
and writes (such as with a database server), SCSI provides nearly 5x the
performance of SATA, so, for us, it's well worth the additional expense.

It's also my experience that even the best SATA drives seem to be
disposable. There's a huge difference in reliability and life expectancy
between SATA and SCSI drives, because manufacturers put a bit more quality
into SCSI drives, as they are expected to perform in an enterprise
environment.

With RAID arrays and hotswap bays, it's easy enough to deal with SATA's
unreliability, but it's always best not to have to swap and rebuild, because
every failure has the potential to cause some cascade that can become
devastating.

- Original Message - 
From: Kevin Burton [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Wednesday, May 11, 2005 3:29 PM
Subject: SATA vs SCSI


We're kicking around using SATA drives in a software RAID0 config.

The price difference is significant. You can also get SATA drives in 10k RPM
form now.

Kevin

-- 


Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com.
See irc.freenode.net #rojo if you want to chat.

Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html

   Kevin A. Burton, Location - San Francisco, CA
  AIM/YIM - sfburtonator,  Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412







Re: SATA vs SCSI

2005-05-12 Thread Alec . Cawley
Scott M. Grim [EMAIL PROTECTED] wrote on 12/05/2005 16:42:00:

 I've fairly extensively (although not necessarily scientifically) tested
 SATA 150 vs. SCSI U320 and find that if you're doing a lot of random reads
 and writes (such as with a database server), SCSI provides nearly 5x the
 performance of SATA, so, for us, it's well worth the additional expense.

 It's also my experience that even the best SATA drives seem to be
 disposable. There's a huge difference in reliability and life expectancy
 between SATA and SCSI drives because they put a bit more quality into SCSI
 drives as they are expected to perform in an enterprise environment.

 With RAID arrays and hotswap bays, it's easy enough to deal with SATA's
 unreliability, but it's always best not to have to swap and rebuild because
 every failure has the potential to cause some cascade that can become
 devastating.

I would concur with this. Having talked to drive manufacturers, they use
the SCSI interface, which is not in itself significantly faster than SATA,
as a marker for what you might call "professional grade" drives.
Components such as bearings are built to a higher spec, head
actuators are more powerful, buffers are bigger, more effort is put into
optimising the drive's internal code to do better overlapping, there are
more self-diagnostics, etc.

As is usually true, you pay for what you get. While there might be a
slight element of gouging in it, SATA drives are basically consumer-grade
drives with a fast interface, while SCSI drives are what the manufacturers
think of as professional grade. What are the warranties and MTBF on the
SATA drives like? A year or so ago, the manufacturers drastically cut the
warranties on their ATA drives without changing the SCSI ones. Where do
SATA drives fall in this spectrum?

Alec







Re: SATA vs SCSI

2005-05-12 Thread Andy Davidson
On Wed, May 11, 2005 at 12:29:47PM -0700, Kevin Burton wrote:
  We're kicking around using SATA drives in a software RAID0 config.
  The price difference is significant. You can also get SATA drives in
  10k RPM form now.

Good idea, but a few points :

 - 10krpm disks will run hotter than 7200rpm disks, this might be
   significant in your data centre.

 - the controller you pick will have a major impact on the raid setup -
   many controllers can't do real hardware raid (e.g. the controller on
   the Intel E7210 board needs windows for sort-of hardware raid, but
   the 3ware 9000 series sata controller will do real hw raid.)

-- 
regards, Andy Davidson
Sysadmin
www.ebuyer.com




Re: Maximum number of user variables

2005-05-12 Thread Eric Bergen
It sounds like you should be doing the link-preservation and id-update
part in PHP or Perl.
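Whatever the scripting language, the bookkeeping can also live in the database itself as a mapping table instead of thousands of user variables; a sketch, with all table and column names hypothetical:

```sql
-- on the target server: record old id -> new id as rows are imported
CREATE TABLE id_map (old_id INT PRIMARY KEY, new_id INT);

INSERT INTO customers (name) VALUES ('imported customer');
INSERT INTO id_map VALUES (10, LAST_INSERT_ID());  -- 10 = id on the source system

-- then rewrite child rows through the map
-- (caveat: assumes new ids don't collide with not-yet-rewritten old ids)
UPDATE customer_categ, id_map
SET customer_categ.customer_id = id_map.new_id
WHERE customer_categ.customer_id = id_map.old_id;
```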


Neculai Macarie wrote:

  Not that I'm aware of. What type of conversions are you doing that you
  need 30,000 user vars? An easy solution would be to try it and find out
  :)

  I need to move multiple linked entries (in around 12 tables) from one
  running server to another. I'm using auto_increments all over the place
  and I have the following problem: I need to move (no replication
  possible) the information in those 12 tables to an identical functioning
  system (same software, same db, but other server) and I need to preserve
  the relations between the tables. The problem that I have is that the
  systems evolve independently and I can have the same id for 2 different
  entries on the system (e.g. on the first system I can have the id 10 for
  User1, but on the second system id 10 would be for another user).

  Perhaps after you dump the structure and data into sql files,
  you could temporarily remove the AUTO_INCREMENT attribute from those
  columns.
  Then start inserting.
  After that, add the AUTO_INCREMENT attribute again.
  Or use bulk copy (if I'm not mistaken), or any other method, to copy it
  as-is to the new db servers
  (compress it first).

 Donny,
 Thanks for the answer. Dumping the structure and data alone (mysqldump
 style) does not help me, because the tables are linked and I must update
 the linked tables with the new ids.




Re: SATA vs SCSI

2005-05-12 Thread Larry Lowry
My $.02. While I agree SCSI has had a reputation for being a more solid,
enterprise-type drive, everyone's mileage varies. We have moved to using
all SATA drives in our newer servers. I have to admit most of our databases
are smaller than what many on this list have; all our db's are under
500 megs.

My reality is this. If a SATA drive does fail (so far only 1 over the last
18 months), it is cheap and easy to replace. I have all my setups RAIDed,
so we have lost no data. At the same time I have several Hitachi/IBM SCSI
drives just sitting here. Why? Because we have to keep RMAing them when
they fail. I have several that are bad, but it just takes too much time to
get them replaced; it's not worth my effort. We have even replaced whole
servers just to get away from the SCSI drives. Of course I get bigger,
nicer boxes that way. ;-))

I have heard that many folks have problems with the newer 10k SATA drives.
So far they are running great for me with no failures, although they have
only been running for a few months. I'm hedging my bet and only using
those on the backup servers for now.

Morals:
 1) Performance is more than just the drive type.
 2) Reliability is more than just the drive type.

Good luck with whatever you decide to use.

Larry
- Original Message - 
From: Scott M. Grim [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Thursday, May 12, 2005 8:42 AM
Subject: Re: SATA vs SCSI


I've fairly extensively (although not necessarily scientifically) tested
SATA 150 vs. SCSI U320 and find that if you're doing a lot of random reads
and writes (such as with a database server), SCSI provides nearly 5x the
performance as SATA so, for us, it's well worth the additional expense.
It's also my experience that even the best SATA drives seem to be
disposable.  There's a huge difference in reliability and life expectancy
between SATA and SCSI drives because they put a bit more quality into SCSI
drives as they are expected to perform in an enterprise environment.
With RAID arrays and hotswap bays, it's easy enough to deal with SATA's
unreliability, but it's always best to not have to swap and rebuild 
because
every failure has the potential to cause some cascade that can become
devastating.

- Original Message - 
From: Kevin Burton [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Wednesday, May 11, 2005 3:29 PM
Subject: SATA vs SCSI

We're kicking around using SATA drives in a software RAID0 config.
The price difference is significant.  You can also get SATA drives in 10k RPM
form now.
Kevin
--
Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com.
See irc.freenode.net #rojo if you want to chat.
Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
  Kevin A. Burton, Location - San Francisco, CA
 AIM/YIM - sfburtonator,  Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412




Re: SATA vs SCSI

2005-05-12 Thread Greg Whalin
Newer SATA drives are supporting command queueing, which should really 
help their performance.  I think when SATA-2 becomes more available, 
SATA will start being a more viable choice and start rivaling SCSI 
performance.



Re: SATA vs SCSI

2005-05-12 Thread Brent Baisley
I'd be curious what you tested. Did the SATA drives support tagged 
command queueing (TCQ)? That can make a huge difference in a multi-user 
environment, detrimental in a single user. How many drives were in the 
SATA array and how many were in the SCSI array? You could probably put 
2-3x the numbers of drives in the SATA array, boosting performance, for 
the same price as a much smaller SCSI array. One on one I think a SATA drive
is slower than SCSI, but bang for the buck I think goes to SATA.
Here's a link to a review comparing SATA and SCSI. It shows equal 
setups (meaning number of drives) of SCSI and SATA have similar 
performance, but the SATA setup costs 40% less. Reliability is of 
course a major consideration, but the SATA drives of today are probably 
just as reliable as SCSI drives of 5 years ago. Kind of like the worst 
cars of today are more reliable than the best cars of 10 years ago.

http://www.storagereview.com/articles/200406/20040625TCQ_1.html
On May 12, 2005, at 11:42 AM, Scott M. Grim wrote:
I've fairly extensively (although not necessarily scientifically) 
tested
SATA 150 vs. SCSI U320 and find that if you're doing a lot of random 
reads
and writes (such as with a database server), SCSI provides nearly 5x 
the
performance as SATA so, for us, it's well worth the additional expense.

It's also my experience that even the best SATA drives seem to be
disposable.  There's a huge difference in reliability and life 
expectancy
between SATA and SCSI drives because they put a bit more quality into 
SCSI
drives as they are expected to perform in an enterprise environment.

With RAID arrays and hotswap bays, it's easy enough to deal with SATA's
unreliability, but it's always best to not have to swap and rebuild 
because
every failure has the potential to cause some cascade that can become
devastating.

- Original Message -
From: Kevin Burton [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Wednesday, May 11, 2005 3:29 PM
Subject: SATA vs SCSI
We're kicking around using SATA drives in a software RAID0 config.
The price difference is significant.  You can also get SATA drives in 10k RPM
form now.
Kevin
--
Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com.
See irc.freenode.net #rojo if you want to chat.
Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
   Kevin A. Burton, Location - San Francisco, CA
  AIM/YIM - sfburtonator,  Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412


--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


Numbering rows

2005-05-12 Thread Mauricio Pellegrini
Hi , I don't know if this is possible with Sql but I'm trying to set the
row number into a field for each row.

The complexity comes when I try to do that according to some grouping
rules. 
I think I'll make myself clearer with a simple example:

This is the table I have

Column Id is primary key and auto_numeric


---------------------------------------------
Id    order    item    value    col_type
---------------------------------------------
1     3        15      0        null
2     3        15      5        null
3     3        15      0        null
4     8        22      7        null
5     8        22      0        null
6     10       64      20       null
---------------------------------------------

And this is the result I would like to obtain.
Where column col_type should be filled with a number
representing the ordinal number for the row within the 
group formed by order,item 

---------------------------------------------
Id    order    item    value    col_type
---------------------------------------------
1     3        15      0        1    -- row number 1 in the group
2     3        15      5        2       formed by order 3 and item 15
3     3        15      0        3
4     8        22      7        1    -- row number 1
5     8        22      0        2    -- row number 2
6     10       64      20       1    -- row number 1
---------------------------------------------

I hope it is clear enough.

All suggestions are welcome.

Thanks 
Mauricio
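Not part of the thread: in MySQL 4.x this kind of per-group ordinal is usually
done with user variables, since there is no ROW_NUMBER(). A sketch of the same
numbering logic, run here with Python's sqlite3 against the sample table above
(the `order` column is renamed `ord` only because `order` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, ord INT, item INT,
                    value INT, col_type INT);
    INSERT INTO t (id, ord, item, value) VALUES
        (1, 3, 15, 0), (2, 3, 15, 5), (3, 3, 15, 0),
        (4, 8, 22, 7), (5, 8, 22, 0), (6, 10, 64, 20);
""")

# Walk the rows sorted by (ord, item, id) and restart the counter
# whenever the (ord, item) group changes.
rows = conn.execute("SELECT id, ord, item FROM t ORDER BY ord, item, id").fetchall()
prev_group, rn = None, 0
for row_id, ord_, item in rows:
    rn = rn + 1 if (ord_, item) == prev_group else 1
    prev_group = (ord_, item)
    conn.execute("UPDATE t SET col_type = ? WHERE id = ?", (rn, row_id))

print(conn.execute("SELECT id, col_type FROM t ORDER BY id").fetchall())
# [(1, 1), (2, 2), (3, 3), (4, 1), (5, 2), (6, 1)]
```

The same counter-reset idea can be expressed in MySQL with `@`-variables in an
UPDATE, but doing it in a small client script is easier to verify.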












RE: SATA vs SCSI

2005-05-12 Thread Andy Eastham
Brent,

I'd disagree with your feeling that today's disk drives are more reliable
than five years ago.

I used to think of disk failures as a rare event, but now that they are
producing such high capacity parts for next to nothing, I think quality has
suffered.

I've heard of a lot more people suffering drive failures (in PCs, laptops
and servers) recently.  Also, I believe that Fujitsu produced an enormous
batch of disks which had a very high failure rate.

Whatever, I'd say make sure you've always got hot standby disks in your raid
arrays, and keep decent backups :-)

Andy 

 -Original Message-
 From: Brent Baisley [mailto:[EMAIL PROTECTED]
 Sent: 12 May 2005 17:47
 To: Scott M. Grim
 Cc: mysql@lists.mysql.com
 Subject: Re: SATA vs SCSI
 
 I'd be curious what you tested. Did the SATA drives support tagged
 command queueing (TCQ)? That can make a huge difference in a multi-user
 environment, detrimental in a single user. How many drives were in the
 SATA array and how many were in the SCSI array? You could probably put
 2-3x the numbers of drives in the SATA array, boosting performance, for
 the same price as a much smaller SCSI array. One on one I think an SATA
 is slower than SCSI, but bang for the buck I think goes to SATA.
 Here's a link to a review comparing SATA and SCSI. It shows equal
 setups (meaning number of drives) of SCSI and SATA have similar
 performance, but the SATA setup costs 40% less. Reliability is of
 course a major consideration, but the SATA drives of today are probably
 just as reliable as SCSI drives of 5 years ago. Kind of like the worst
 cars of today are more reliable than the best cars of 10 years ago.
 
 http://www.storagereview.com/articles/200406/20040625TCQ_1.html
 
 
 On May 12, 2005, at 11:42 AM, Scott M. Grim wrote:
 
  I've fairly extensively (although not necessarily scientifically)
  tested
  SATA 150 vs. SCSI U320 and find that if you're doing a lot of random
  reads
  and writes (such as with a database server), SCSI provides nearly 5x
  the
  performance as SATA so, for us, it's well worth the additional expense.
 
  It's also my experience that even the best SATA drives seem to be
  disposable.  There's a huge difference in reliability and life
  expectancy
  between SATA and SCSI drives because they put a bit more quality into
  SCSI
  drives as they are expected to perform in an enterprise environment.
 
  With RAID arrays and hotswap bays, it's easy enough to deal with SATA's
  unreliability, but it's always best to not have to swap and rebuild
  because
  every failure has the potential to cause some cascade that can become
  devastating.
 
  - Original Message -
  From: Kevin Burton [EMAIL PROTECTED]
  To: mysql@lists.mysql.com
  Sent: Wednesday, May 11, 2005 3:29 PM
  Subject: SATA vs SCSI
 
 
  We're kicking around using SATA drives in a software RAID0 config.
 
  The price difference is significant.  You can also get SATA drives in 10k RPM
  form now.
 
  Kevin
 
  --
 
 
  Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com.
  See irc.freenode.net #rojo if you want to chat.
 
  Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
 
 Kevin A. Burton, Location - San Francisco, CA
AIM/YIM - sfburtonator,  Web - http://peerfear.org/
  GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412
 
 
 
 
 --
 Brent Baisley
 Systems Architect
 Landover Associates, Inc.
 Search & Advisory Services for Advanced Technology Environments
 p: 212.759.6400/800.759.0577
 
 







FW: SATA vs SCSI

2005-05-12 Thread Moulder Glen CONT PBFL

Larry wrote:

My $.02.  While I agree SCSI has had a reputation for being
a more solid, enterprise-type drive, everyone's mileage varies.
We have moved to using all SATA drives in our newer servers.  I
have to admit most of our databases are smaller than what many
on this list have.  All our db's are under
500 megs.  My reality is this: if a SATA drive does fail,
so far only 1 over the last 18 months, it is cheap and easy
to replace.  I have all my setups RAIDed so we have lost no
data.  At the same time I have several Hitachi/IBM SCSI drives...


What controller do you use for RAIDing the SATAs?

Glen








mysqld post-mortem

2005-05-12 Thread Mark C. Stafford
Hello everyone. I've got a MySQL server that behaves
wonderfully...most of the time. It has crashed three times this week
and I'm at a loss for why.

Below I've included all the following:
  error.log
  os info
  memory check
  results from resolve_stack_dump
  my.cnf

Does the information point toward a problem you can see? Thanks in advance.

 error.log
 050510 15:57:48 InnoDB: Started
 050510 15:57:49 Found invalid password for user: '5014'@'10.0.%'; Ignoring user
 /usr/libexec/mysqld: ready for connections.
 Version: '4.0.23a-log' socket: '/var/run/mysql/mysql.sock' port: 3306
Source distribution

 050511 14:52:19 Found invalid password for user: '5014'@'10.0.%'; Ignoring user
 mysqld got signal 11;
 This could be because you hit a bug. It is also possible that this
binary or one of the libraries it was linked against is corrupt,
improperly built, or misconfigured. This error can also be caused by
malfunctioning hardware. We will try our best to scrape up some info
that will hopefully help diagnose the problem, but since we have
already crashed, something is definitely wrong and this may fail.

 key_buffer_size=1073741824
 read_buffer_size=1044480
 max_used_connections=125
 max_connections=150
 threads_connected=65
 It is possible that mysqld could use up to
 key_buffer_size + (read_buffer_size +
sort_buffer_size)*max_connections = 3659174 K
 bytes of memory
 Hope that's ok; if not, decrease some variables in the equation.

 thd=(nil)
 Attempting backtrace. You can use the following information to find
out where mysqld died. If you see no messages after this, something
went terribly wrong...
 Cannot determine thread, fp=0xb068, backtrace may not be correct.
 Stack range sanity check OK, backtrace follows:
 0x81048f1
 0xb7f83c85
 0xb7e4633e
 0xb7e437a3
 0x82d0163
 0x82ce417
 0x82cfee1
 0x80f9080
 0x81065ea
 0x8105964
 0xb7df7469
 0x80b62b1
 New value of fp=(nil) failed sanity check, terminating stack trace!
 Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html
and follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please
do resolve it
 The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
 050512 9:43:47 InnoDB: Started
 050512 9:43:47 Found invalid password for user: '5014'@'10.0.%'; Ignoring user
 /usr/libexec/mysqld: ready for connections.
 Version: '4.0.23a-log' socket: '/var/run/mysql/mysql.sock' port: 3306
Source distribution

 slackware linux info
 uname -a
 Linux b5 2.6.11 #1 SMP Sat Apr 16 23:35:33 MDT 2005 i686 unknown
unknown GNU/Linux

 memory check:
 googled 3659174 KB in MB == 3 659 174 kilobytes = 3 573.41211 megabytes
 this machine has 4GB RAM
 4GB in MB == 4 gigabytes = 4 096 megabytes
 4 096 - 3 573.41211 = 522.58789
 1/2GB RAM available for non-mysql functionality is more than enough, right?
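As a side check (not from the original mail), the server's own estimate can be
reproduced from the values in the error log. The log does not show
sort_buffer_size, so the 16M below is an assumed value that makes the numbers
line up:

```python
# Values taken from the error log above; sort_buffer_size is assumed
# (16M is the value that reproduces the logged total).
key_buffer_size = 1073741824          # 1G
read_buffer_size = 1044480
sort_buffer_size = 16 * 1024 * 1024   # assumption: not in the log
max_connections = 150

estimate_k = (key_buffer_size
              + (read_buffer_size + sort_buffer_size) * max_connections) // 1024
print(estimate_k)        # ~3659176 K, matching the log's 3659174 K
print(estimate_k / 1024) # ~3573 MB -- under the 4 GB of RAM, but well over
                         # the ~2 GB a single 32-bit process can address
```

The point of the arithmetic: the worst-case allocation fits in physical RAM
but not in a 32-bit process address space, which is consistent with the
signal 11 crashes under load.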

 i ran resolve_stack_dump on the backtrace from
error.log per http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html
resolve_stack_dump -s ./mysqld.sym -n ./mysqld.stack and got the
results below, but the information provided doesn't mean anything to
me now what?
 0x81048f1 handle_segfault + 641
 0xb7f83c85 _end + -1346321187
 0xb7e4633e _end + -1347621994
 0xb7e437a3 _end + -1347633157
 0x82d0163 my_malloc + 35
 0x82ce417 init_io_cache + 295
 0x82cfee1 open_cached_file + 145
 0x80f9080 _ZN3THDC1Ev + 928
 0x81065ea handle_connections_sockets + 666
 0x8105964 main + 2180
 0xb7df7469 _end + -1347945279
 0x80b62b1 _start + 33

 my.cnf
[mysqld]
# BASICS
  datadir=/var/lib/mysql
  pid_file=/var/run/mysql/mysql.pid
  port=3306
  socket=/var/run/mysql/mysql.sock
  tmpdir=/tmp/
  tmp_table_size=64M
  max_connections=150
  # IMHO, long_query_time=6 is too big...but we'll lower this after
nailing the worst offenders
  long_query_time=6
  log_long_format
  # /var/log/mysqld doesn't exist by default, so i created it and ran
chown mysql:mysql on it
  log_slow_queries=/var/log/mysqld/slow.log
  log_error=/var/log/mysqld/error.log
  # discourages brute force attacks, unblock blocked hosts with FLUSH HOSTS
  max_connect_errors=5
#
# REPLICATION
  server_id=1
  read_only=OFF
  # security hazard...a client with the SUPER privilege can disable
binary logging of its own statements by using a SET SQL_LOG_BIN=0
  log_bin=b5
  # *ignores errors re: revocation of non-existent grants Error: 1147
SQLSTATE: 42000
  slave-skip-errors=1147
  max_allowed_packet=16M
  character_set=latin1
#
# PERFORMANCE TUNING (low_priority_updates, max_join_size and
max_seeks_for_key feel a bit bleeding-edge and should be removed if
the results suck)
  # low_priority_updates makes INSERT, UPDATE AND lower priority than
SELECT and LOCK TABLE
  low_priority_updates=ON
  max_join_size=32M
  max_seeks_for_key=100
  # caches
table_cache=768
max_binlog_cache_size=2GB
binlog_cache_size=999MB
thread_cache_size=50
# 

Re: Maximum number of user variables

2005-05-12 Thread SGreen
Neculai Macarie [EMAIL PROTECTED] wrote on 05/12/2005 03:26:33 AM:

   Not that I'm aware of. What type of conversions are you doing that you
   need 30,000 user vars? An easy solution would be to try it and find out :)
 
   I need to move multiple linked entries (in around 12 tables) from one
   running server to another. I'm using auto_increment's all over the place
   and I have the following problem: I need to move (no replication possible)
   the information in those 12 tables to an identical functioning system
   (same software, same db, but other server) and I need to preserve the
   relations between the tables. The problem that I have is that the systems
   evolve independently and I can have the same id for 2 different entries on
   the system (e.g. on the first system I can have the id 10 for User1, but
   on second system id 10 would be for another user).
 
  Perhaps after you dump the structure and data into sql files,
  you could temporarily remove the auto_increment attribute from those
  columns. Then start inserting. After that, add the auto_increment
  attribute again. Or use bulk copy (?), if I'm not mistaken, or any other
  method to copy the data as-is to the new db servers (compress it first).
 
  Donny,
 
  Thanks for the answer. Dumping the structure and data alone (mysqldump
  style) is not helping me because the tables are linked and I must update
  the linked tables with the new id's.
 
  -- 
  mack /

Here is a summary of how I have merged hierarchical data structures in the 
past.  I start by adding a column or two to my destination data tables for 
each table in the tree I need to reconstruct.  The first new column (I 
usually call it something like old_ID) holds the original PK of the 
source record. The second (if necessary) will hold the original PARENT's 
PK value.

As an example, let's imagine that I need to merge a table that looks like 
{ID, PARENT_ID, DATA columns} into a new table with the same structure. 
In this table PARENT_ID points to some other record in the same table. I 
would add my two columns like this

ALTER TABLE new_table add old_ID int, add old_parentid int;

Then, I would need to map the INSERT like this

INSERT new_table (old_ID, old_parentid, data columns)
SELECT id, parent_id, data columns FROM old_table;

Then I go back and update the PARENT_ID of the records in new_table with 
the new ID value of their OLD parent records.

UPDATE new_table nt1 
INNER JOIN new_table nt2
ON nt2.old_ID = nt1.old_parentid
SET nt1.PARENT_ID = nt2.ID
WHERE nt1.old_parentid IS NOT NULL;

This re-creates the parent-child relationship that used to exist in 
old_table by filling in the new values for the PARENT_ID that were 
auto-generated when the old records were merged. Repeat this for each 
table in your hierarchy. In this example both parent and child records 
were from the same table but they didn't need to be. nt1 is the alias for 
the child table while nt2 represents the parent table.

You will need to disable any FK constraints (InnoDB) while you rebuild 
your parent-child relationships but once you have filled in the child's 
parent_id field, you should be able to re-enable those keys and move on to 
the next level.  If you get an error, check your data. Working from the 
top down, you should not create too many issues, especially if your data 
was well-organized to start with. Once you have regenerated your 
parent-child links to use the new auto_increment values, you can start 
dropping the old_* columns to recover that space and optimize your tables 
at the same time (thanks to the behavior of the ALTER TABLE).

ALSO, before you start, make a backup of your data (mysqldump works well 
for most people). That way if you hose it up in some major way, you can at 
least get back to where you started without too much pain.
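Shawn's steps can be sketched end-to-end. This is a hedged illustration, not
his exact code: it runs on SQLite via Python, so MySQL's multi-table
UPDATE ... JOIN is rewritten as a correlated subquery, and the table and
column names are the hypothetical ones from the example:

```python
import sqlite3

c = sqlite3.connect(":memory:")
c.executescript("""
    CREATE TABLE old_table (id INTEGER PRIMARY KEY, parent_id INT, data TEXT);
    INSERT INTO old_table VALUES (10, NULL, 'root'), (11, 10, 'child'),
                                 (12, 11, 'grandchild');

    -- The destination already holds a row, so merged rows must get new ids.
    CREATE TABLE new_table (id INTEGER PRIMARY KEY AUTOINCREMENT,
                            parent_id INT, data TEXT,
                            old_ID INT, old_parentid INT);
    INSERT INTO new_table (data) VALUES ('pre-existing');
""")

# Step 1: copy the rows, remembering the source ids.
c.execute("""INSERT INTO new_table (old_ID, old_parentid, data)
             SELECT id, parent_id, data FROM old_table""")

# Step 2: point each child at its parent's *new* auto-generated id.
c.execute("""UPDATE new_table
             SET parent_id = (SELECT p.id FROM new_table p
                              WHERE p.old_ID = new_table.old_parentid)
             WHERE old_parentid IS NOT NULL""")

print(c.execute("SELECT id, parent_id, data FROM new_table ORDER BY id").fetchall())
# [(1, None, 'pre-existing'), (2, None, 'root'), (3, 2, 'child'), (4, 3, 'grandchild')]
```

Note how the old ids 10/11/12 collide with nothing: the rows receive new ids
2/3/4, and the parent links are rebuilt through the old_ID mapping columns.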

If you have any questions, I will be lurking ;-)

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



utf8, 4.1, and character length truncation in mysql system tables

2005-05-12 Thread Adam Fields
I changed the default character set on a 4.1 server to utf8.

As expected, this caused the lengths of character fields to be
shortened, requiring alter table to be run on them to extend the
lengths.

But I didn't expect that this would also shorten the mysql system
tables (the mysql db), so that usernames for newly inserted users
have been truncated to fit the new field lengths.

Is this a known issue?

Should I set the character set for the mysql db back to latin1?
Running alter table on the mysql tables to extend all of the column
lengths seems like a bad idea, but seems like what's recommended for
other tables in the manual.

Also, on a related note, these are really big tables, and running
alter table on them to modify the column lengths is taking a LOOONG
time. Any hints on speeding this up?
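Background on why the lengths shrink (my note, not from the mail): MySQL 4.1's
utf8 reserves up to 3 bytes per character, so a column's fixed byte budget
covers fewer characters than under latin1. The expansion is easy to see:

```python
# Why utf8 columns "shrink": a character can cost up to 3 bytes in
# MySQL's utf8, versus exactly 1 byte in latin1, so the same byte
# budget holds fewer characters.
for s in ("abc", "caf\u00e9", "\u20ac\u20ac"):  # ASCII, accented e, euro signs
    print(repr(s), len(s), "chars ->", len(s.encode("utf-8")), "bytes")
# 'abc'  3 chars -> 3 bytes
# 'café' 4 chars -> 5 bytes
# '€€'   2 chars -> 6 bytes
```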

-- 
- Adam

** I can fix your database problems: http://www.everylastounce.com/mysql.html **

Blog... [ http://www.aquick.org/blog ]
Links.. [ http://del.icio.us/fields ]
Photos. [ http://www.aquick.org/photoblog ]
Experience. [ http://www.adamfields.com/resume.html ]
Product Reviews: .. [ http://www.buyadam.com/blog ]





Cleaning muck out of data fields

2005-05-12 Thread zzapper
Hi,

I wanted to clean up some numeric currency data fields which had some 
non-numeric values which took
the first two characters of the field (they were some kind of garbage 
characters) anyway the
following did the trick

update tbl_products set p10_price=mid(p10_price,2) where p10_price regexp 
('[^0-9.]');

Which worked because the mucky characters were always the first two digits but 
it's still cludgy.

What I really wanted to do was just extract the good currency numerics of the
form \d+.\d\d (e.g.
10.95), but as MySQL only supports regexp for matching, I couldn't think of a
way. I have the same
problem if I try LOCATE(): I can't, AFAIK, say locate the first digit.

Comments/Ideas?

Q2)
Can I match mucky non-alphanumerics?
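MySQL 4.x has no regexp replace, so this kind of cleanup is easiest in a
client script. A sketch, reusing the tbl_products/p10_price names from the
message above; the garbage bytes in the sample data are made up for
illustration:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_products (id INTEGER PRIMARY KEY, p10_price TEXT)")
conn.executemany("INSERT INTO tbl_products (p10_price) VALUES (?)",
                 [("\x02\x0310.95",), ("8.50",), ("??7.25",)])  # sample dirt

price_re = re.compile(r"\d+\.\d\d")  # the "good" currency shape: \d+.\d\d

for row_id, raw in conn.execute("SELECT id, p10_price FROM tbl_products").fetchall():
    m = price_re.search(raw)
    if m:  # keep only the matched numeric part, wherever it starts
        conn.execute("UPDATE tbl_products SET p10_price = ? WHERE id = ?",
                     (m.group(0), row_id))

print([r[0] for r in conn.execute("SELECT p10_price FROM tbl_products ORDER BY id")])
# ['10.95', '8.50', '7.25']
```

Unlike the MID() trick, this works no matter where the garbage sits or how
many characters it is.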
-- 
zzapper
vim -c :%s%s*%Cyrnfr)fcbafbe[Oenz(Zbbyranne%|:%s)[[()])-)Ig|norm Vg?
http://www.rayninfo.co.uk/tips/ vim, zsh  success tips





Re: Report(query)

2005-05-12 Thread Seena Blace
ROLLUP is not working with my current version. Is there any way I can do it?
I have a column named time with data type datetime. I want the report in
yyyy-mm-dd format.
Can I get that by using date_format(nameofcolumn,'%Y-%m-%d')?



Harald Fuchs [EMAIL PROTECTED] wrote:
In article [EMAIL PROTECTED],
Seena Blace writes:

 Hi,
 I want report like this

 date process pending wip

 5/10/05 10 30 40
 5/11/05 09 28 60
 
 Summary 19 58 100



 select date,sum(process),sum(pending),sum(wip) from tab
 group by date;
 What command I used to show heading and what command to show sub total and 
 total ?

The last row can be calculated by the ROLLUP modifier of GROUP
(requires MySQL 4.1.1 or later). Nicely formatting the output with
headings etc is not the business of a database engine and should be
done by your application.







Re: Report(query)

2005-05-12 Thread Eamon Daly
From the manual:
http://dev.mysql.com/doc/mysql/en/group-by-functions.html
If you use a group function in a statement containing no
GROUP BY clause, it is equivalent to grouping on all rows.
Just run the same query without the GROUP BY to get the
total:
SELECT
DATE_FORMAT(date, '%Y-%m-%d') AS date,
SUM(process),
SUM(pending),
SUM(wip)
FROM tab
GROUP BY date;
SELECT
'Summary',
SUM(process),
SUM(pending),
SUM(wip)
FROM tab;
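Eamon's approach can be checked against Seena's sample numbers. The same two
queries, run here through Python's sqlite3 (table name and data come from the
example in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab (date TEXT, process INT, pending INT, wip INT);
    INSERT INTO tab VALUES ('2005-05-10', 10, 30, 40),
                           ('2005-05-11',  9, 28, 60);
""")

# Per-date subtotals.
per_day = conn.execute("""SELECT date, SUM(process), SUM(pending), SUM(wip)
                          FROM tab GROUP BY date""").fetchall()

# No GROUP BY => the aggregates run over all rows, giving the grand total.
total = conn.execute("""SELECT 'Summary', SUM(process), SUM(pending), SUM(wip)
                        FROM tab""").fetchone()

print(per_day)  # [('2005-05-10', 10, 30, 40), ('2005-05-11', 9, 28, 60)]
print(total)    # ('Summary', 19, 58, 100)
```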

Eamon Daly

- Original Message - 
From: Seena Blace [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Thursday, May 12, 2005 12:54 PM
Subject: Re: Report(query)


ROLLUP is not working with my current version. Is there any way I can do it?
I have a column named time with data type datetime. I want the report
in yyyy-mm-dd format.
Can I get that by using date_format(nameofcolumn,'%Y-%m-%d')?


Harald Fuchs [EMAIL PROTECTED] wrote:
In article [EMAIL PROTECTED],
Seena Blace writes:
Hi,
I want report like this

date process pending wip

5/10/05 10 30 40
5/11/05 09 28 60

Summary 19 58 100


select date,sum(process),sum(pending),sum(wip) from tab
group by date;
What command I used to show heading and what command to show sub total 
and total ?
The last row can be calculated by the ROLLUP modifier of GROUP
(requires MySQL 4.1.1 or later). Nicely formatting the output with
headings etc is not the business of a database engine and should be
done by your application.



Re: Maximum number of user variables

2005-05-12 Thread Neculai Macarie
 Here is a summary of how I have merged hierarchical data structures in the
 past.  I start by adding a column or two to my destination data tables for
 each table in the tree I need to reconstruct.  The first new column (I
 usually call it something like old_ID) holds the original PK of the
 source record. The second (if necessary) will hold the original PARENT's
 PK value.

 As an example, let's imagine that I need to merge a table that looks like
 {ID, PARENT_ID, DATA columns} into a new table with the same structure.
 In this table PARENT_ID points to some other record in the same table. I
 would add my two columns like this

 ALTER TABLE new_table add old_ID int, add old_parentid int;

 Then, I would need to map the INSERT like this

 INSERT new_table (old_ID, old_parentid, data columns)
 SELECT id, parent_id, data columns FROM old_table;

 Then I go back and update the PARENT_ID of the records in new_table with
 the new ID value of their OLD parent records.

 UPDATE new_table nt1
 INNER JOIN new_table nt2
 ON nt2.old_ID = nt1.old_parentid
 SET nt1.PARENT_ID = nt2.ID
 WHERE nt1.old_parentid IS NOT NULL;

 This re-creates the parent-child relationship that used to exist in
 old_table by filling in the new values for the PARENT_ID that were
 auto-generated when the old records were merged. Repeat this for each
 table in your hierarchy. In this example both parent and child records
 were from the same table but they didn't need to be. nt1 is the alias for
 the child table while nt2 represents the parent table.

 You will need to disable any FK constraints (InnoDB) while you rebuild
 your parent-child relationships but once you have filled in the child's
 parent_id field, you should be able to re-enable those keys and move on to
 the next level.  If you get an error, check your data. Working from the
 top down, you should not create too many issues, especially if your data
 was well-organized to start with. Once you have regenerated your
 parent-child links to use the new auto_increment values, you can start
 dropping the old_* columns to recover that space and optimize your tables
 at the same time (thanks to the behavior of the ALTER TABLE).

 ALSO, before you start, make a backup of your data (mysqldump works well
 for most people). That way if you hose it up in some major way, you can at
 least get back to where you started without too much pain.

 If you have any questions, I will be lurking ;-)

Thanks for this solution. Indeed it's much simpler than needing 30,000 user
variables :).

-- 
mack /





Re: SATA vs SCSI

2005-05-12 Thread Larry Lowry
We have been using the controllers built into the
motherboards.  I know they are not as good as some
dedicated cards, but they work well enough for us.
I prefer the nVidia nForce4 Ultra chipsets.  They
have a nice RAID setup.  We needed a cheap box for a
data server, but with a lot of temporary disk space.
A system with the K8N Neo4 motherboard, Athlon
64 3500+, 2gb memory and 5 250gb SATA drives
yields a fast box with 1tb storage, all for under
$1500.  I know this is not an Enterprise DB box,
but again, everyone has to evaluate their needs,
budget and boss.

Larry

- Original Message - 
From: Moulder Glen CONT PBFL [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Thursday, May 12, 2005 9:30 AM
Subject: FW: SATA vs SCSI


Larry wrote:
My $.02.  While I agree SCSI has had a reputation for being
a more solid, enterprise-type drive, everyone's mileage varies.
We have moved to using all SATA drives in our newer servers.  I
have to admit most of our databases are smaller than what many
on this list have.  All our db's are under
500 megs.  My reality is this: if a SATA drive does fail,
so far only 1 over the last 18 months, it is cheap and easy
to replace.  I have all my setups RAIDed so we have lost no
data.  At the same time I have several Hitachi/IBM SCSI drives...

What controller do you use for RAIDing the SATAs?
Glen





Re: utf8, 4.1, and character length truncation in mysql system tables

2005-05-12 Thread Gleb Paharenko
Hello.

 Is this a known issue?

It is interesting for me. According to:

  http://dev.mysql.com/doc/mysql/en/charset-metadata.html

MySQL stores usernames in utf8. Yes, you should convert your
tables to utf8; however, in my opinion, you don't have to do
this with the 'mysql' database. Could you be losing characters from the
users' names for other reasons (a wrong character set in your client
application)?

Adam Fields [EMAIL PROTECTED] wrote:
 I changed the default character set on a 4.1 server to utf8.

 As expected, this caused the lengths of character fields to be
 shortened, requiring alter table to be run on them to extend the
 lengths.

 But I didn't expect that this would also shorten the mysql system
 tables (the mysql db), so that usernames for newly inserted users
 have been truncated to fit the new field lengths.

 Is this a known issue?

 Should I set the character set for the mysql db back to latin1?
 Running alter table on the mysql tables to extend all of the column
 lengths seems like a bad idea, but seems like what's recommended for
 other tables in the manual.

 Also, on a related note, these are really big tables, and running
 alter table on them to modify the column lengths is taking a LOOONG
 time. Any hints on speeding this up?



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
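The table conversion Gleb recommends can be sketched as follows; the table and column names are illustrative, not from the thread:

```sql
-- Illustrative sketch (table/column names are hypothetical).
-- CONVERT TO changes the table's default character set and converts
-- every existing character column in one statement:
ALTER TABLE customers CONVERT TO CHARACTER SET utf8;

-- To widen a single column instead, MODIFY restates its full definition:
ALTER TABLE customers MODIFY name VARCHAR(255) CHARACTER SET utf8;
```

Converting many large tables this way rebuilds each one, which accounts for the long ALTER TABLE times mentioned above.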



Re: mysqld post-mortem

2005-05-12 Thread Gleb Paharenko
Hello.

 Linux b5 2.6.11 #1 SMP Sat Apr 16 23:35:33 MDT 2005 i686 unknown

Is it a 32-bit arch?

 sort_buffer_size)*max_connections = 3659174 K
 bytes of memory

Usually the process size on 32-bit machines can't be more than 2G.

Mark C. Stafford [EMAIL PROTECTED] wrote:
 Hello everyone. I've got a MySQL server that behaves
 wonderfully...most of the time. It has crashed three times this week
 and I'm at a loss as to why.

 Below I've included all of the following:
  error.log
  os info
  memory check
  results from resolve_stack_dump
  my.cnf

 Does the information point toward a problem you can see? Thanks in advance.

 error.log
 050510 15:57:48 InnoDB: Started
 050510 15:57:49 Found invalid password for user: '5014'@'10.0.%'; Ignoring user
 /usr/libexec/mysqld: ready for connections.
 Version: '4.0.23a-log' socket: '/var/run/mysql/mysql.sock' port: 3306
 Source distribution

 050511 14:52:19 Found invalid password for user: '5014'@'10.0.%'; Ignoring user
 mysqld got signal 11;
 This could be because you hit a bug. It is also possible that this
 binary or one of the libraries it was linked against is corrupt,
 improperly built, or misconfigured. This error can also be caused by
 malfunctioning hardware. We will try our best to scrape up some info
 that will hopefully help diagnose the problem, but since we have
 already crashed, something is definitely wrong and this may fail.

 key_buffer_size=1073741824
 read_buffer_size=1044480
 max_used_connections=125
 max_connections=150
 threads_connected=65
 It is possible that mysqld could use up to
 key_buffer_size + (read_buffer_size +
 sort_buffer_size)*max_connections = 3659174 K
 bytes of memory
 Hope that's ok; if not, decrease some variables in the equation.

 thd=(nil)
 Attempting backtrace. You can use the following information to find
 out where mysqld died. If you see no messages after this, something
 went terribly wrong...
 Cannot determine thread, fp=0xb068, backtrace may not be correct.
 Stack range sanity check OK, backtrace follows:
 0x81048f1
 0xb7f83c85
 0xb7e4633e
 0xb7e437a3
 0x82d0163
 0x82ce417
 0x82cfee1
 0x80f9080
 0x81065ea
 0x8105964
 0xb7df7469
 0x80b62b1
 New value of fp=(nil) failed sanity check, terminating stack trace!
 Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html
 and follow instructions on how to resolve the stack trace. Resolved
 stack trace is much more helpful in diagnosing the problem, so please
 do resolve it
 The manual page at http://www.mysql.com/doc/en/Crashing.html contains
 information that should help you find out what is causing the crash.
 050512 9:43:47 InnoDB: Started
 050512 9:43:47 Found invalid password for user: '5014'@'10.0.%'; Ignoring user
 /usr/libexec/mysqld: ready for connections.
 Version: '4.0.23a-log' socket: '/var/run/mysql/mysql.sock' port: 3306
 Source distribution

 slackware linux info
 uname -a
 Linux b5 2.6.11 #1 SMP Sat Apr 16 23:35:33 MDT 2005 i686 unknown
 unknown GNU/Linux

 memory check:
 googled 3659174 KB in MB == 3 659 174 kilobytes = 3 573.41211 megabytes
 this machine has 4GB RAM
 4GB in MB == 4 gigabytes = 4 096 megabytes
 4 096 - 3 573.41211 = 522.58789
 1/2GB RAM available for non-mysql functionality is more than enough, right?

 i ran resolve_stack_dump on the backtrace from
 error.log per http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html
 resolve_stack_dump -s ./mysqld.sym -n ./mysqld.stack and got the
 results below, but the information provided doesn't mean anything to
 me. Now what?
 0x81048f1 handle_segfault + 641
 0xb7f83c85 _end + -1346321187
 0xb7e4633e _end + -1347621994
 0xb7e437a3 _end + -1347633157
 0x82d0163 my_malloc + 35
 0x82ce417 init_io_cache + 295
 0x82cfee1 open_cached_file + 145
 0x80f9080 _ZN3THDC1Ev + 928
 0x81065ea handle_connections_sockets + 666
 0x8105964 main + 2180
 0xb7df7469 _end + -1347945279
 0x80b62b1 _start + 33

 my.cnf
 [mysqld]
 # BASICS
  datadir=/var/lib/mysql
  pid_file=/var/run/mysql/mysql.pid
  port=3306
  socket=/var/run/mysql/mysql.sock
  tmpdir=/tmp/
  tmp_table_size=64M
  max_connections=150
  # IMHO, long_query_time=6 is too big...but we'll lower this after
  # nailing the worst offenders
  long_query_time=6
  log_long_format
  # /var/log/mysqld doesn't exist by default, so i created it and ran
  # chown mysql:mysql on it
  log_slow_queries=/var/log/mysqld/slow.log
  log_error=/var/log/mysqld/error.log
  # discourages brute force attacks, unblock blocked hosts with FLUSH HOSTS
  max_connect_errors=5
 #
 # REPLICATION
  server_id=1
  read_only=OFF
  # security hazard...a client with the SUPER privilege can disable
  # binary logging of its own statements by using SET SQL_LOG_BIN=0
  log_bin=b5
  # *ignores errors re: revocation of non-existent grants Error: 
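The worst-case memory estimate in the error log can be checked directly. The log doesn't show sort_buffer_size, so this sketch back-solves for the value its printed total implies; every other number is taken from the log above:

```python
# Figures printed by mysqld in the crash report above.
key_buffer = 1073741824   # key_buffer_size, in bytes
read_buffer = 1044480     # read_buffer_size, in bytes
max_conn = 150            # max_connections
total_kb = 3659174        # mysqld's worst-case estimate, printed in K

# The formula mysqld prints:
#   total = key_buffer_size + (read_buffer_size + sort_buffer_size) * max_connections
total_bytes = total_kb * 1024
sort_buffer = (total_bytes - key_buffer) // max_conn - read_buffer

print(sort_buffer)             # comes out just under 16 MiB per connection
print(total_bytes / 2**30)     # roughly 3.5 GiB worst case
print(total_bytes > 2 * 2**30) # exceeds the 2G process limit Gleb mentions
```

So the configured buffers can, in the worst case, ask for well over the ~2 GiB a 32-bit process can address, which fits the signal 11 during connection setup (the resolved trace dies in my_malloc under the THD constructor).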

Re: utf8, 4.1, and character length truncation in mysql system tables

2005-05-12 Thread Adam Fields
On Thu, May 12, 2005 at 10:08:33PM +0300, Gleb Paharenko wrote:
 Hello.
 
  Is this a known issue?
 
 It is interesting for me. According to the:
 
   http://dev.mysql.com/doc/mysql/en/charset-metadata.html
 
 MySQL stores usernames in utf8. Yes, you should convert your
 tables to utf8, however, in my opinion, you don't have to do
 this with 'mysql' database. Could you lose the characters from the
 users' names due to other reasons (wrong character set for your client
 application)?

I suppose that's possible. This was done through the stock mysql
client, which defaults to latin1 even if you set the server character
set to utf8 (which makes sense, as most terminals don't support utf8).

Is there something else I should be doing to create new users post
4.1?

Is this behavior something I should be worried about? (I am,
currently.)


 Adam Fields [EMAIL PROTECTED] wrote:
  I changed the default character set on a 4.1 server to utf8.
  
  As expected, this caused the lengths of character fields to be
  shortened, requiring alter table to be run on them to extend the
  lengths.
  
  But I didn't expect that this would also shorten the mysql system
  tables (the mysql db), so that usernames for newly inserted users
  have been truncated to fit the new field lengths.
  
  Is this a known issue?
  
  Should I set the character set for the mysql db back to latin1?
  Running alter table on the mysql tables to extend all of the column
  lengths seems like a bad idea, but seems like what's recommended for
  other tables in the manual.
  
  Also, on a related note, these are really big tables, and running
  alter table on them to modify the column lengths is taking a LOOONG
  time. Any hints on speeding this up?
  

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
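If the truncation happens on the way in rather than in the column widths, declaring the client's character set explicitly for the session may help. A minimal sketch (the user name and password are illustrative):

```sql
-- Tell the server what character set this client sends and expects
-- (shorthand for setting character_set_client/connection/results):
SET NAMES utf8;

-- Then create the account as usual under 4.1 (GRANT creates the user);
-- the name is stored in utf8 in the mysql.user table.
GRANT USAGE ON *.* TO 'someuser'@'localhost' IDENTIFIED BY 'secret';
```

Whether this is the right fix depends on whether the characters were lost at insert time, as Gleb suggests, or by the shortened columns.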



mysqldump question

2005-05-12 Thread Amit M Bhosle
Hi:

 I was wondering if there's any way to limit the bandwidth used by
mysqldump to dump data from remote hosts. Since I couldn't find any
documentation on this, I assume that mysqldump will use all the
available bandwidth of the network.

 The issue is that I'm looking to fetch data to the tune of hundreds
of MBs, and I don't want mysqldump to hog all the bandwidth, thus
adversely affecting other communication.

Thanks in advance for your time.
AB

-- 
A great idea need not be complicated.
http://www.cs.ucsb.edu/~bhosle

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
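mysqldump itself has no rate-limit option, so the usual workaround is to throttle the pipe or the client's socket with an external tool. A sketch assuming `pv` and `trickle` are installed; the host, user, and database names are placeholders:

```shell
# Cap the dump stream at roughly 500 KB/s with pv's rate limiter (-L):
mysqldump -h remote.example.com -u backup -p somedb | pv -L 500k > somedb.sql

# Alternative: limit the client's download bandwidth (KB/s) with trickle:
trickle -d 500 mysqldump -h remote.example.com -u backup -p somedb > somedb.sql
```

Note the pv approach throttles the local pipe, which backpressures the network read; trickle limits the socket directly. Either keeps a multi-hundred-MB dump from saturating the link.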



Very Urgent....How can i start with mysql in c language?

2005-05-12 Thread Ashok Kumar
Hi friends, 
   I'm Ashok, a new member of this forum. I'm doing my final year of graduation and 
I'm new to this MySQL, C and Windows combination (I've never worked with DB 
connectivity in C). There are no such header files in TC as mysql.h and so on. How 
can I include those files, and how can I create a db via queries from C (is it 
possible to create a db manually?). Please give me suggestions in a somewhat 
detailed manner. 
 
 
Thank u and Regards, 
Ashok Kumar.P.S


-
Yahoo! Mail
 Stay connected, organized, and protected. Take the tour

Re: Very Urgent....How can i start with mysql in c language?

2005-05-12 Thread Eric Bergen
Start reading here: http://dev.mysql.com/doc/mysql/en/c.html

On 5/12/05, Ashok Kumar [EMAIL PROTECTED] wrote:
 Hi friends,
I'm ashok, new member to this forum. I'm doing my final year graduation 
 and I'm new to this MySQL, C and Windows Combination(i never worked DB 
 connectivity in C). There is no such header files in TC such as mysql.h and 
 so on. how can i include those files and how can i create a db via queries 
 from 'C'(is it possible to create a db manually?). Pls give me sugestions in 
 some what detailed manner.
 
 Thank u and Regards,
 Ashok Kumar.P.S
 
 
 -
 Yahoo! Mail
  Stay connected, organized, and protected. Take the tour
 


-- 
Eric Bergen
[EMAIL PROTECTED]
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
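To complement the manual link, here is a minimal C sketch of the API it documents. Note that mysql.h ships with the MySQL distribution itself, not with the compiler, so Turbo C won't have it; this assumes a compiler that can link against libmysqlclient (e.g. gcc or MSVC), and the host, user, and password are placeholders:

```c
#include <stdio.h>
#include <mysql.h>  /* from the MySQL distribution; link with -lmysqlclient */

int main(void) {
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL)
        return 1;

    /* Placeholders: substitute a real host, user, and password. */
    if (mysql_real_connect(conn, "localhost", "user", "password",
                           NULL, 0, NULL, 0) == NULL) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }

    /* Yes, a database can be created from C with an ordinary query. */
    if (mysql_query(conn, "CREATE DATABASE IF NOT EXISTS testdb"))
        fprintf(stderr, "query failed: %s\n", mysql_error(conn));

    mysql_close(conn);
    return 0;
}
```

The same mysql_query() call runs any SQL statement; for SELECTs you would follow it with mysql_store_result() to fetch rows, as the linked chapter describes.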



Read past Equivalent in MySQL

2005-05-12 Thread Ramesh G
Hi All,
Is there a way I can tell MySQL to ignore rows that are
locked by someone else and take the next available record? The problem is,
I have a query like this:

SELECT * FROM Table1 WHERE Fld1=2 LIMIT 1 FOR UPDATE;

I will have multiple clients running this same query with the same WHERE
clause. For the second instance of the query, MySQL seems to wait until the
transaction of the first instance completes. This makes the query
slow, as the time taken for the transaction to complete is somewhere
between 1 and 1.5 seconds.

Regards,
Ramesh G
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
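MySQL has no way to skip locked rows in a SELECT ... FOR UPDATE, so a common workaround is to "claim" a row with an UPDATE first: concurrent sessions then never compete for the same row. A sketch, assuming a nullable claimed_by column can be added to the table (the column name is illustrative):

```sql
-- Atomically claim one free row for this connection; rows already
-- claimed by other sessions simply don't match the WHERE clause.
UPDATE Table1
   SET claimed_by = CONNECTION_ID()
 WHERE Fld1 = 2 AND claimed_by IS NULL
 LIMIT 1;

-- Read back the row this session claimed; no lock wait is involved.
SELECT * FROM Table1
 WHERE Fld1 = 2 AND claimed_by = CONNECTION_ID();

-- When processing is done, release (or delete) the claim:
UPDATE Table1 SET claimed_by = NULL WHERE claimed_by = CONNECTION_ID();
```

The cleanup step matters: a crashed client leaves its rows claimed, so a periodic sweep (or a claimed-at timestamp) is usually added alongside this pattern.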