RE: disabling version number

2003-03-24 Thread Jeremy Tinley
Authorized != trusted.

If you're a hosting provider who gives customers access to MySQL, your
users can see the version number simply by connecting to their own
database. Granted, mysql --version from a shell gives you the same
thing... but paying for a low-end account, finding the version number the
host is running, and then finding an exploit for that version is probably
what the original poster had in mind to prevent.



-Original Message-
From: Joseph Bueno [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 24, 2003 10:39 AM
To: Florian Effenberger
Cc: [EMAIL PROTECTED]
Subject: Re: disabling version number

Florian Effenberger wrote:
> > No, why?
>
> Part of my security concept, I generally disable all version numbers.

You can patch the MySQL source and recompile ;)

However, if someone has enough access rights on your system to run
"SELECT VERSION();", the visibility of the MySQL version number should be
the least of your problems.

Regards,
Joseph Bueno


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]






RE: clustering/scalability question

2003-03-10 Thread Jeremy Tinley
If you want to use DNS, that's the way to go.

You can also use LVS to set up clusters of databases.

http://www.linuxvirtualserver.org


-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 07, 2003 6:05 PM
To: John Masterson
Cc: [EMAIL PROTECTED]
Subject: Re: clustering/scalability question

On Fri, Mar 07, 2003 at 04:56:54PM -0700, John Masterson wrote:
 Hello,
 
 We're a web hosting company currently hosting nearly 2000 MySQL 
 databases (3.23.54) at around 200-300 queries/second. Amazing
 performance, really. But we're growing fast and planning for the next
 step. Our wishlist is short:
 
 We want to allow all our customers to have one set of instructions that
 says connect to db.modwest.com (as opposed to 'read from here, write
 to there', or 'connect to db2 if your domain name starts with M-Z'). Of
 course on the invisible backend, multiple servers would be servicing
 these requests.

Tell them to connect to db.$their_domain.com and set up CNAMEs in DNS
that point to their real server.
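In BIND zone-file terms the suggestion looks roughly like this (the record name and the real backend hostname are illustrative):

```
; fragment of the zone file for their_domain.com (sketch)
db              IN  CNAME   real-db-server.modwest.com.
```

Moving a customer to a different backend server is then just a CNAME change, at the cost of waiting out the record's DNS TTL.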

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 4.0.8: up 32 days, processed 1,005,162,161 queries (357/sec. avg)

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php






RE: Replication

2003-03-04 Thread Jeremy Tinley
I've devised a solution under Linux, using Heartbeat, DRBD and Perl, but we
have not put it into production yet.  We also use LVS to put our database
slaves into HA clusters.

The basic idea is that you have an active/standby master (the standby being a
slave) in a Heartbeat cluster.  You use DRBD to mirror the binlogs to the
secondary machine.  When a failure occurs, the standby takes over (this is
done automatically by Heartbeat), and at most you lose the inserts that
were in flight during the failure.


Heartbeat: http://linux-ha.org/
LVS: http://www.linuxvirtualserver.org/
DRBD: http://www.complang.tuwien.ac.at/reisner/drbd/
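For the curious, the active/standby wiring described above can be sketched as a classic Heartbeat haresources entry (the node name, virtual IP, DRBD resource name, mount point and init script name are all illustrative):

```
# /etc/ha.d/haresources (sketch)
db-master1 IPaddr::10.0.0.50 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql-binlog::ext3 mysql
```

On failover, Heartbeat takes over the virtual IP, promotes the DRBD device, mounts the mirrored binlog partition, and starts MySQL on the standby.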

-J


-Original Message-
From: Rodrigo Dias Cruz [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 04, 2003 1:32 PM
To: [EMAIL PROTECTED]
Subject: Replication

Hello

I'd like to know if they have already implemented the replication feature
that switches the master server to another server if the master fails.

If they have not implemented that yet, how can I do something similar using
the features that MySQL has now?

Thank you for any answer.


---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.459 / Virus Database: 258 - Release Date: 25/02/03






RE: MySQL or PostgreSQL

2003-02-28 Thread Jeremy Tinley
Since you're posting on a MySQL list, you can probably expect some biased
responses.  Could you post more about what you need the database for,
i.e., your needs and wants, what kind of data you're handling,
connection rates, serving platform, code base, etc.?

Here are a few URLs I was able to google up for your specific question:
http://www.webtechniques.com/archives/2001/09/jepson/
http://librenix.com/?inode=1266
http://phd.pp.ru/Software/SQL/PostgreSQL-vs-MySQL.html




-Original Message-
From: InfoSysNCA [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 28, 2003 11:24 AM
To: [EMAIL PROTECTED]
Subject: MySQL or PostgreSQL

Hi! I've just started working in Linux, but I'd like to know which 
database is better to use, MySQL or PostgreSQL. Which one would be 
better in the long run?

-- 
Regards,
Neil









RE: copying databases to avoid insert holdups

2003-02-26 Thread Jeremy Tinley
You could try lowering the priority of your inserts using INSERT LOW_PRIORITY.

See: http://www.mysql.com/doc/en/INSERT.html

This gives the selects priority over the inserts; as soon as the table
becomes free, the insert happens.
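For example (the table and column names here are purely illustrative):

```sql
-- Let waiting SELECTs run first; the insert proceeds once the table is free.
INSERT LOW_PRIORITY INTO search_index (doc_id, body)
VALUES (42, 'some document text');
```

INSERT DELAYED is another option for MyISAM tables when the client doesn't need to wait for the row to actually be written.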

Also, look at this:
http://www.mysql.com/doc/en/Insert_speed.html

-J

-Original Message-
From: Seth Brundle [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 26, 2003 2:01 PM
To: [EMAIL PROTECTED]
Subject: copying databases to avoid insert holdups

I have a MySQL database I will be launching soon which has a search engine
database accessible over DBI/mod_perl/Apache. It has about 2M records and is
a 4GB table with a 1GB index.

Every day I insert 76k records all at once and delete about another 76k.
While these inserts are happening, the select performance is obviously
impacted.

What I want to do is keep two copies of the database, copying the updated
version over the publicly-accessible one, i.e.:

/etc/init.d/mysql stop
rm -rf /mysql/data/publicdbdir
cp -r /mysql/data/insertdbdir /mysql/data/publicdbdir
/etc/init.d/mysql start

My question is this:
Is this enough to prevent me from experiencing slow inserts and queries
during the insert, or should I be firing up two separate servers?

Also, if I create the MyISAM tables on 3.23, can I copy them to a 4.0.10
server's data directory? (The reason I ask is that I have a 3.23 server
available on that box.)

Thanks
q










RE: # of Business Days?

2003-02-24 Thread Jeremy Tinley
use Date::Manip

http://www.perldoc.com/perl5.6.1/lib/Date/Manip.html

-J

-Original Message-
From: Andrew Braithwaite [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 24, 2003 1:33 PM
To: 'Lucas Cowgar'; MySQL Mailing List
Subject: RE: # of Business Days?

> Can anyone out there help me with a SQL query? I need to find the number
> of business days between dates a and b. Obviously finding simply the number
> of days is easy, but I have no clue how to find the number of business days.
> TIA!

Here's some ropey perl I wrote ages ago to calculate working minutes
between two unix timestamps (based on UK bank holidays and working hours of
9 to 6).  I'm sure you could adapt it to whatever you need.

Please, no-one correct me on my perl (I already know).  Don't fix what
ain't broke :)

Even so, I would recommend some changes on a system that uses the below
heavily...

Cheers,

Andrew

###  Used to calculate working minutes between two unix timestamps
#
# $sdate = 1040807553;
# $edate = 1040809553;
#
# $result = calc_workmins($sdate, $edate);
##

sub calc_workmins {
    local ($sdate, $edate) = @_;
    $mins_total = 1;
    $rows = 0;
    while ($sdate <= $edate) {
        my ($sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst) =
            localtime($sdate);
        $year += 1900;
        $mon  += 1;    ## perl returns months in the range 0 to 11
        my ($todaysdate) = $mday . "-" . $mon . "-" . $year;
        my $bankflag = 0;
        for my $bankhol (qw(
            1-1-2002 29-3-2002 1-4-2002 3-6-2002 4-6-2002 26-8-2002
            25-12-2002 26-12-2002 1-1-2003 18-4-2003 21-4-2003 5-5-2003
            26-5-2003 25-8-2003 25-12-2003 26-12-2003 27-12-2003 28-12-2003
            1-1-2004 9-4-2004 12-4-2004 3-5-2004 31-5-2004 30-8-2004
            25-12-2004 26-12-2004 27-12-2004 3-1-2005 25-3-2005 28-3-2005
            2-5-2005 30-5-2005 29-8-2005 25-12-2005 26-12-2005 )) {
            if ($todaysdate eq $bankhol) {
                $bankflag = 1;
            }
        }
        ## count this minute only on a weekday, inside working hours,
        ## and not on a bank holiday
        if (($hour > 8) && ($hour < 19) && ($wday > 0) && ($wday < 6)
            && ($bankflag < 1)) {
            $mins_total++;
        }
        $sdate = $sdate + 60;
        $rows++;
    }
    return $mins_total;
}
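The thread's original question (whole business days between two dates, rather than working minutes) can also be sketched without Date::Manip. A minimal Python version, with a purely illustrative holiday set:

```python
from datetime import date, timedelta

# Illustrative holiday set; a real system would load the actual bank holidays.
HOLIDAYS = {date(2003, 1, 1), date(2003, 4, 18), date(2003, 12, 25)}

def business_days(start: date, end: date) -> int:
    """Count weekdays in the half-open range [start, end) that are not holidays."""
    count = 0
    day = start
    while day < end:
        # weekday() is 0..4 for Monday..Friday
        if day.weekday() < 5 and day not in HOLIDAYS:
            count += 1
        day += timedelta(days=1)
    return count

print(business_days(date(2003, 3, 3), date(2003, 3, 10)))  # Mon 3rd..Sun 9th -> 5
```

The same loop-per-day shape as the perl above, just without the minute-level resolution.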







RE: SET FOREIGN_KEY_CHECKS = 0 Suggestion.

2003-02-21 Thread Jeremy Tinley
Changing one local variable, IMO, shouldn't replicate.

I would much rather have a REPLICATE command that I could place before any SQL
command that causes it to replicate.  This keeps local variables local, but in
the event I need to replicate a change to all my slaves without going to each
one, I can.

-J

-Original Message-
From: wertyu [mailto:[EMAIL PROTECTED]] 
Sent: Friday, February 21, 2003 3:15 AM
To: [EMAIL PROTECTED]
Subject: SET FOREIGN_KEY_CHECKS = 0 Suggestion.

Hello, everyone.

I'm using MySQL replication (version 4.0.10).

The master and slave have FOREIGN KEY constraints, and I back up data with
mysqldump, but mysqldump does not produce tables and records in a valid
order for the FKs.

So when I restore data, I execute SET FOREIGN_KEY_CHECKS = 0; on the master.

But this command is not forwarded to the slave, so the slave fails to
insert the records.

So my suggestion is that SET FOREIGN_KEY_CHECKS = 0; be forwarded to the slave.

What do you think of this?

Have a nice weekend!

##
Heo, Jungsu Mr.
SimpleX Internet. http://www.simplexi.com





RE: received 0 length packet from server

2003-02-18 Thread Jeremy Tinley
Resetting the binlogs on the master didn't make a difference.  Syncing all
MySQL versions didn't make a difference.  I've checked network configuration,
permissions on both the replication user (on the master) and the files on
slaves.

The master error log has nothing in it.

Is there a way to turn up the verbosity of the log files without building a
debug version of MySQL?

-Original Message-
From: Jeremy Tinley [mailto:[EMAIL PROTECTED]] 
Sent: Monday, February 17, 2003 4:07 PM
To: [EMAIL PROTECTED]
Subject: RE: received 0 length packet from server

A few more tidbits:

* All slaves are running the static i686 linux build
(mysql-3.23.55-pc-linux-i686) under RH8.0 (since RH8 won't compile MySQL
without it dumping core).


* More log snippets:

030217 15:43:01  Slave: received 0 length packet from server, apparent master
shutdown:
030217 15:43:01  Slave: Failed reading log event, reconnecting to retry, log
'binlog.010' position 540017428
030217 15:43:01  Slave: reconnected to master 'repl@master1:3306',replication
resumed in log 'binlog.010' at position 540017428
030217 15:43:01  Slave: received 0 length packet from server, apparent master
shutdown:
030217 15:44:01  Slave: Failed reading log event, reconnecting to retry, log
'binlog.010' position 540017428
030217 15:44:01  Slave: reconnected to master 'repl@master1:3306',replication
resumed in log 'binlog.010' at position 540017428
030217 15:44:02  Error reading packet from server: Lost connection to MySQL
server during query (server_errno=2013)
030217 15:44:02  Slave: Failed reading log event, reconnecting to retry, log
'binlog.010' position 540037417
030217 15:44:02  Slave: reconnected to master 'repl@master1:3306',replication
resumed in log 'binlog.010' at position 540037417


This is one complete cycle of the problem.  This time, I see
server_errno=2013; perror doesn't have anything on this error.



-Original Message-
From: Jeremy Tinley [mailto:[EMAIL PROTECTED]] 
Sent: Monday, February 17, 2003 3:37 PM
To: [EMAIL PROTECTED]
Subject: received 0 length packet from server

Howdy,

Running MySQL 3.23.54 on master, 3.23.55 on all slaves, I'm experiencing the
following situation:

slave1 & slave4 both receive 0 length packets from the server.
slave2 & slave3 both work fine.

slave1 & 2 are identical machines, both in hardware and configuration.
slave3 & 4 are different from 1 & 2, and from each other, in hardware only.

The my.cnf files are the same, same subnet, different switches (but a machine
that works, and one that doesn't are on the same switch).

Log files look like this (trimmed for neatness):

030217 15:16:47  Slave: received 0 length packet from server, apparent master
shutdown:
030217 15:16:47  Slave: Failed reading log event, reconnecting to retry, log
'binlog.010' position 531922547
030217 15:16:47  Slave: reconnected to master 'repl@master1:3306',replication
resumed in log 'binlog.010' at position 531922547
030217 15:16:47  Slave: received 0 length packet from server, apparent master
shutdown:
030217 15:16:47  Slave: Failed reading log event, reconnecting to retry, log
'binlog.010' position 531922771
030217 15:16:47  Slave: reconnected to master 'repl@master1:3306',replication
resumed in log 'binlog.010' at position 531922771


Troubleshooting thoughts, anyone?  The slaves are in an LVS cluster and can be
downed at any time.








received 0 length packet from server

2003-02-17 Thread Jeremy Tinley
Howdy,

Running MySQL 3.23.54 on master, 3.23.55 on all slaves, I'm experiencing the
following situation:

slave1 & slave4 both receive 0 length packets from the server.
slave2 & slave3 both work fine.

slave1 & 2 are identical machines, both in hardware and configuration.
slave3 & 4 are different from 1 & 2, and from each other, in hardware only.

The my.cnf files are the same, same subnet, different switches (but a machine
that works, and one that doesn't are on the same switch).

Log files look like this (trimmed for neatness):

030217 15:16:47  Slave: received 0 length packet from server, apparent master
shutdown:
030217 15:16:47  Slave: Failed reading log event, reconnecting to retry, log
'binlog.010' position 531922547
030217 15:16:47  Slave: reconnected to master 'repl@master1:3306',replication
resumed in log 'binlog.010' at position 531922547
030217 15:16:47  Slave: received 0 length packet from server, apparent master
shutdown:
030217 15:16:47  Slave: Failed reading log event, reconnecting to retry, log
'binlog.010' position 531922771
030217 15:16:47  Slave: reconnected to master 'repl@master1:3306',replication
resumed in log 'binlog.010' at position 531922771


Troubleshooting thoughts, anyone?  The slaves are in an LVS cluster and can be
downed at any time.








RE: MySQL Replication - v4 slave with v3 master

2002-10-15 Thread Jeremy Tinley

According to the replication compatibility table, you can.

http://www.mysql.com/doc/en/Replication_Implementation.html

-J


-Original Message-
From: Andrew Braithwaite [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, October 15, 2002 8:36 AM
To: [EMAIL PROTECTED]
Subject: MySQL Replication - v4 slave with v3 master

Hi all,

Does anyone have any experience with running a v3 master and v4 slaves?

Will it work?  -  I can't see why not...

Thanks in advance,

Andrew





RE: information about the tables in the database

2002-10-09 Thread Jeremy Tinley

show tables;
show tables from catalogName;
show tables from catalogName like 'order%';

-J

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf
Of Iikka Meriläinen
Sent: Wednesday, October 09, 2002 10:09 AM
To: Inbal Ovadia
Cc: '[EMAIL PROTECTED]'
Subject: Re: information about the tables in the database

On Wed, 9 Oct 2002, Inbal Ovadia wrote:

 Hi all,
 Is there a way to know if I have table x in my db?
 Let's say some catalog tables that hold information about the db, like
 table names etc., so I can do a query like "select tables from catalogName"
 and get all the tables in the db.

Hi,

I guess that's not possible.

SHOW TABLES shows your tables in the database, but unfortunately you can't
do something like SELECT * FROM (SHOW TABLES) WHERE Tables_in_xx =
'blahblah'.

Of course you can create a table that contains metadata for your tables,
but you would have to update it manually. Or you could have an external
program scanning your data directories for added/removed table files.

Regards,
Iikka

**
* Iikka Meriläinen   *
* E-mail: [EMAIL PROTECTED] *
* Vaala, Finland *
**






RE: Limiting size of individual databases?

2002-10-08 Thread Jeremy Tinley

What happens when the user reaches their quota limit while MySQL is
doing an insert?  IIRC, the filesystem won't allow the write, which could
potentially cause problems with the database.

You may choose to put them on another un-quota-ed filesystem and write a
quick perl script to check the size of each database.  Fire off an email
to you or the customer.  If they don't reduce the size of the database
in a day or so, charge 'em for more disk space. :)

A quota function at the MySQL level would be preferable for data
integrity.  MySQL honoring the OS-level soft/hard quotas would be even
nicer for sysadmins.

-J

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf
Of Iikka Meriläinen
Sent: Tuesday, October 08, 2002 12:58 PM
To: Jesse Sheidlower
Cc: [EMAIL PROTECTED]
Subject: Re: Limiting size of individual databases?

On Tue, 8 Oct 2002, Jesse Sheidlower wrote:

 I'm sure this is a common requirement, but the only thing I could
 find in the docs or in a book or two was an unanswered version of
 the same question at http://www.mysql.com/doc/en/CREATE_DATABASE.html.

 I'm setting up MySQL on a small ISP and would like to be able to
 restrict the size of individual databases to something like 25M.
 What's the best way of getting this done? The server is running
 Solaris, if it's necessary to do it through the OS instead of
 through MySQL.

Hi,

Unfortunately you'll have to use Solaris disk quotas for that. Make sure
that the table files of each restricted-space user count towards their
personal quotas. This means that you have to create a separate OS account
for each user you want to restrict with the quota. This also means that
the table files must be owned by those restricted users.

Best regards,
Iikka








HA of MySQL

2002-10-07 Thread Jeremy Tinley

Howdy,

There has been some interest lately in HA of MySQL services both in my
company and on the list.  A few of us here sat down on Friday (at 5PM no
less) and started hashing out the details of providing such a service.
Following several possible approaches, we ran into major stumbling
blocks on each path.

The Setup:
One master database, 5 slave databases.  The end applications are Perl
CGIs connecting to DNS CNAMEs (db1, db2, db3, etc.).  I started by
demonstrating the application of slaves in an LVS
(http://www.linuxvirtualserver.org) cluster.  This proved to be very
successful.  I was able to build a cluster of slaves, load balancing the
queries among them, weighting them differently and having ones removed
from the cluster by shutting down MySQL.  The goal then is to point
queries to web-db, which is a cluster of 2 or 3 slaves.

The next step is to use heartbeat (http://www.linux-ha.org) to do IP
address takeover of the master in the event of a failure.  This is where
it gets tricky.  One of the slaves will be designated the failover
master.  Upon detection of a master failure, the program...

1) Runs a SLAVE STOP on the failover slave
2) Runs a script to enable writes to the slave tables
3) Removes itself from the cluster
4) Takes over the IP address of the master

The problem then lies in how to miss as few insert queries as possible.
The easiest solution is to start the binlog on the failover slave as
soon as it becomes the master.  As a downfall, some writes to the master
will be lost, possibly forever with a disk failure (but disk failure is
a scenario you can't always prepare for at a software level).

What happens if other slaves in the cluster are very far behind, possibly
due to long reporting queries?  If the master goes down, these would have
to rely on the new master to catch up; however, the new master has no
binlog information, resulting in wildly out-of-sync data.

In order to provide true data redundancy, the binlog position would have
to be identical to that of the master to retain the same filename and
position.  That's not an easy feat to accomplish.  If you bring the
slave down, the master down, the slave up, then the master up, you
should get binlogs that match, but I can't confirm this yet.

So, I put it to the list.  Am I missing the obvious here?  How do YOU
achieve a failover master?

-J








RE: HA of MySQL

2002-10-07 Thread Jeremy Tinley

The problem isn't so much with the failover.  It's with data integrity.
Binlogs control replication.  You can place a failover master in between
the master and slaves.  In the event of a master failure, you eliminate
the loss of writes by directing them to the failover.  If you lose your
failover, the binlogs can be completely different thus pointing the
slaves to the master is useless.

The binlog position is the real problem.  Since binlogs are stored with
their byte position as the indicator instead of a unique value passed on
from the master, there's no easy way of finding the position you were
just at.

Is it possible to write two binlogs?  One to the local disk, one to a
network device?

-J

-Original Message-
From: Daniel Koch [mailto:[EMAIL PROTECTED]] 
Sent: Monday, October 07, 2002 11:47 AM
To: Jeremy Tinley
Cc: [EMAIL PROTECTED]
Subject: Re: HA of MySQL

On Mon, 2002-10-07 at 10:30, Jeremy Tinley wrote:
 Howdy,
 
 There has been some interest lately in HA of MySQL services both in my
 company and on the list.  A few of us here sat down on Friday (at 5PM no
 less) and started hashing out the details of providing such a service.
 Following several possible approaches, we ran into major stumbling
 blocks on each path.
 


You might check with these guys, who have a similar project:

http://mysql-ha.sourceforge.net/







RE: MySQL Monitoring Tool

2002-09-10 Thread Jeremy Tinley

Tom,

What type of problems AREN'T you detecting?

I'd take a wild guess and say you're only checking MySQL every 5 minutes
or so.  You can shorten the interval between checks for more up-to-date
status.  There are two variables that you need to look at:

normal_check_interval, which is part of the services.cfg file.  This
determines how often to check your host.  The second is interval_length
in the nagios.cfg file.

normal_check_interval * interval_length = time between checks in seconds.

We decreased interval_length from 60 to 30, so each interval was 30
seconds instead of 60.  This cuts the time between checks in half across
the board.
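Concretely, the two settings interact like this (the values here are just an example):

```
# nagios.cfg
interval_length=30            # one "interval" = 30 seconds

# services.cfg, inside the MySQL service definition
normal_check_interval=2       # 2 * 30s = 60 seconds between checks
```

Shrinking interval_length speeds up every check on the host, while normal_check_interval tunes one service.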

-J

-Original Message-
From: Tom Freeman [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, September 10, 2002 5:04 AM
To: [EMAIL PROTECTED]
Subject: MySQL Monitoring Tool

Hi,
I hope this question hasn't been asked loads of times before, but I can't
see any reference to it in the documentation.
Basically I need a way to monitor MySQL to ensure it hasn't gone down.  We
are using MySQL as the backend of some important sites and need to ensure
that if it does crash for whatever reason, an alert (email and SMS) is
sent out to an engineer to resolve the problem.

We are presently using a program known as NetSaint to monitor our services,
but it sometimes doesn't seem to be 100% reliable at detecting a MySQL
error.

Anyway, can anyone tell me a better way to monitor MySQL so that if it has
any problems an oncall engineer can respond quickly?  I'm sure this must
be a common problem, so there must be something out there already.

Many thanks,
Tom







RE: Replications...

2002-09-09 Thread Jeremy Tinley

This handles part of the problem, but a true load-balanced master
solution is needed.  There's no real advantage in spending 5, 10 or
$20,000 on a failover master if you can't load balance and the spare
will just sit idle.  Master servers should intelligently talk to each
other and determine duplicate key problems.
You could create an LVS cluster of masters.  You would have to write some
code by hand to remove a master from the cluster when it is behind, so when
a master is brought back up, it's out of the cluster until it has caught
up.  Then toss in some code to sync the downed master with the currently
running ones.  Perhaps you could point replication to the LVS IP instead
of a specific machine.  When it comes back up, it will find a valid
master to connect to via LVS, replicate, and then rejoin the
collective... err, cluster. :)  If a host that is currently acting as a
master to another master goes down, the slave of that master will
reconnect to LVS and find a new host to get data from.

This still leaves the auto_increment problem.

On the slave side, however, it's easy.  You can build a cluster of slaves
and probably create a nice script to change weight based upon distance
behind the master.  Once a slave falls too far behind, it's removed
until it catches back up.
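A sketch of that weighting script (the thresholds and the linear falloff are made-up illustrations, not anything LVS prescribes; in practice the lag figure would come from comparing the slave's replication position against the master):

```python
# Sketch: map a slave's replication lag to an LVS weight so a lagging
# slave serves less traffic and drops out entirely past a cutoff.
# The full_weight and cutoff values below are illustrative assumptions.

def lvs_weight(seconds_behind, full_weight=100, cutoff=300):
    """Linear falloff: full weight when caught up, 0 at the cutoff."""
    if seconds_behind >= cutoff:
        return 0  # remove from the cluster until it catches back up
    return full_weight * (cutoff - seconds_behind) // cutoff

print(lvs_weight(0))    # 100 -- fully caught up
print(lvs_weight(150))  # 50  -- halfway to the cutoff
print(lvs_weight(600))  # 0   -- removed from rotation
```

A cron job could feed these weights to the LVS director with ipvsadm.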

More thoughts are always welcome.

-J


-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]] 
Sent: Saturday, September 07, 2002 10:51 PM
To: Sam Przyswa
Cc: [EMAIL PROTECTED]
Subject: Re: Replications...

On Sun, Sep 08, 2002 at 03:30:35AM +, Sam Przyswa wrote:
 Jeremy Zawodny ([EMAIL PROTECTED]) wrote:
 
 On Fri, Sep 06, 2002 at 03:52:24PM +, Sam Przyswa wrote:
  Hi,
 
  Does the next MySQL 4.x version will support cross replications
  between several master servers in cluster ?
 
 Multi-master replication works as well in 4.x as it does in 3.23.xx.
 
 Does multi-master mean master to master, imagine you have 3 MySQL
 servers, A, B, C, with a load balancer in front end, if we make an
 update on A, is it possible to replicate/sync the change on B and C
 ?

You want to do this?

http://jeremy.zawodny.com/mysql/managing-mysql-replication_files/slide0121.htm

Just make B a slave of A, C a slave of B, and A a slave of C.
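That three-node ring could be configured roughly as below (hostnames, credentials, and log coordinates are placeholders, not real values; each server also needs binary logging, log-slave-updates, and a unique server-id in my.cnf so updates travel all the way around the ring):

```sql
-- Sketch only: run on B pointing at A, then on C pointing at B,
-- then on A pointing at C.  All values shown are placeholders.
CHANGE MASTER TO
    MASTER_HOST='hostA',
    MASTER_USER='repl',
    MASTER_PASSWORD='replpass',
    MASTER_LOG_FILE='hostA-bin.001',
    MASTER_LOG_POS=4;
SLAVE START;
```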

Beware of using auto-increments in that scenario, though.  You'll be
rather surprised and frustrated, as explained in the manual.
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 32 days, processed 662,833,682 queries (233/sec. avg)








RE: Replications...

2002-09-09 Thread Jeremy Tinley

 This handles part of the problem but a true load balanced master
 solution is needed. There's no real advantage in spending 5, 10 or
 $20,000 on a failover master if you can't load balance and the spare
 will just sit idle.

 Sure there is.  If your master blows up, you have a spare waiting to
 take its place.  It may not solve your problem, but it is a real
 advantage for some folks.

Heh, that's a tough pill for management to swallow. :)  We've been
discussing the HA/Failover solutions for our DB's all day.  We finally
came to the conclusion we'll have to use some type of failover (non-load
balanced) solution and have the failover either a) reside in a slave
cluster, or b) provide another service to be deemed non-idle.



 Master servers should intelligently talk to each other and determine
 duplicate key problems.

 What if the masters are a few thousand miles apart with 80-120ms
 network latency?  You may gain some load-handling capabilities (in
 theory), but you've got a serious bottleneck to deal with.

This is a good point.  I'm at a loss for how to handle the issue then.
Move away from using auto increment?  We're taking a low/moderate amount
of traffic from a consistent provider that generates 500-700
queries/minute.  The odds of duplicate keys based on auto_increment in
load-balanced masters are too high for comfort.
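One application-side workaround can be sketched as follows (a hypothetical scheme, not a MySQL 3.23 feature; later server versions ship the same idea as built-in auto_increment offset/increment variables): give each master its own residue class of the key space so their id streams can never collide.

```python
# Sketch: partition the integer key space across N masters.
# Master with server_id s only ever issues ids where id % N == s.

def next_id(last_id, server_id, num_masters):
    """Smallest id > last_id with id % num_masters == server_id."""
    candidate = last_id + 1
    remainder = (server_id - candidate) % num_masters
    return candidate + remainder

# Three masters, server ids 0..2: disjoint id streams.
print(next_id(0, 0, 3))  # 3
print(next_id(0, 1, 3))  # 1
print(next_id(0, 2, 3))  # 2
print(next_id(7, 1, 3))  # 10
```

The cost is that the application, not the server, hands out keys.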


-snip-


 Then toss in some code to sync the downed master with the current
 running ones.

 Instead of using MySQL's native replication?

No, using MySQL replication.  Copying 15GB of data across even the LAN
is too much for me. :)

 Perhaps you could point replication to the LVS IP instead of a
 specific machine.  When it comes back up, it will find a valid
 master to connect to via LVS, replicate, and then rejoin the
 collective... err, cluster. :)

 The trick is to make sure that all the masters have EXACTLY the same
 data in their binary logs (give or take the server-id).

Well, I realized you can't point to a machine back into its own cluster.

The cluster of slaves is still a viable option, but load balanced
masters will have to wait... 

Thanks for the input.

-J







MySQL performance questions

2002-09-05 Thread Jeremy Tinley

Rummaging through some docs on performance and have come up with some
questions.  Let me preface by saying, we don't have any performance
problems.  I inherited this monster of a database and am running through
the configuration to make sure that it is indeed setup for optimum
performance.

For clarity's sake, assume the following:

Red Hat Linux 7.1
2.4.8 kernel
MySQL 3.23.42
MyISAM databases
3GB RAM
P3/700 x 4
15GB database spanned across ~200 tables

Key_reads / Key_read_requests   = 0.00059875
Key_writes / Key_write_requests = 0.81324723


1) Since the key_buffer variable defines the total amount of indexed
data in memory, the key_buffer size and the index size of my tables
should be related.  Total index size of all relevant tables is 440MB.
Should the key_buffer size be <, >, = or a percentage of this number?
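For what it's worth, the usual sanity check can be sketched like this (the 1% miss-ratio threshold and the 25%-of-RAM cap are common rules of thumb, not hard limits):

```python
# Sketch: check key cache effectiveness and suggest a starting size,
# using the figures quoted in this post.

def key_miss_ratio(key_reads, key_read_requests):
    """Fraction of index reads that had to go to disk."""
    return key_reads / key_read_requests

ratio = 0.00059875  # the Key_reads / Key_read_requests value above
assert ratio < 0.01  # well under the ~1% rule of thumb: cache is warm

# Common starting point: key_buffer about the size of the hot indexes,
# capped at roughly a quarter of physical RAM.
index_size_mb = 440
ram_mb = 3 * 1024
suggested_mb = min(index_size_mb, ram_mb // 4)
print(suggested_mb)  # 440
```

So a miss ratio this low suggests the buffer is already big enough.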


2) How does performance directly relate to the number of open tables?
Yes, these open tables take memory, and a FLUSH TABLES would adequately
free this memory, but then the tables that are used most frequently
would need to be reopened.  Am I looking at memory vs. CPU in this case?
Provided I have enough RAM, wouldn't it make sense to leave all the
tables open?


I'm sure I'll think of more later on.  By the way, here are the pages I
used to ramp up my performance knowledge:
http://www.mysql.com/doc/en/SHOW_VARIABLES.html
http://lists.mysql.com/cgi-ez/ezmlm-cgi?1:mss:19873
http://www.linux-mag.com/2001-12/mysql_01.html


-J







RE: MySQL performance questions

2002-09-05 Thread Jeremy Tinley

First of all, forgive the rather lengthy post.

Thanks for the repl(y|ies) Benjamin.  Decreasing the key_buffer should
be my first step.  Back to the questions:

3)  I'm somewhat at a loss for this one, and perhaps the answer is more
obvious than not.  I have 257 total tables between my main DB and mysql.
I figured this with an ls -al var/ | grep -c MYD.  How can I possibly
have 512 (which is also the table_cache value) open tables?

| Open_tables  | 512|
| Open_files   | 776|
| Open_streams | 0  |
| Opened_tables| 1499   |


I also see Opened_tables is 1499, which is 3x the number of open tables.
I would consider this number average and not think about increasing
table_cache.  OTOH, one of my slaves has this:

| Open_tables  | 256|
| Open_files   | 459|
| Open_streams | 0  |
| Opened_tables| 3532   |

Where 256 is the table_cache limit.  I'd consider that Opened_tables
count high, and would probably want to increase the table_cache size.
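As a sketch, the raw Opened_tables count matters less than its growth rate over uptime (the one-week uptime below is a made-up assumption, used only to put the two servers on the same footing):

```python
# Sketch: compare table-cache churn per hour rather than raw counts.
# Uptime is a hypothetical one week for both servers.

def table_opens_per_hour(opened_tables, uptime_seconds):
    return opened_tables * 3600 / uptime_seconds

uptime = 7 * 24 * 3600
print(round(table_opens_per_hour(1499, uptime), 1))  # 8.9  -- master: fine
print(round(table_opens_per_hour(3532, uptime), 1))  # 21.0 -- slave: churny;
                                                     # consider a bigger table_cache
```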



4)  How can I reliably determine how much RAM MySQL is indeed using?
From the manual:

"ps and other system status programs may report that mysqld uses a
 lot of memory"

ps shows each instance of MySQL using 91760k of RAM x ~40 processes =
3.6GB, which is how much real RAM I have.   The rest of the swap is
probably due to the rest of the system processes running.



To assist further, here are my settings for the master, which
is the DB that's swapping:

First, we start with variables...
back_log| 50
basedir | /usr/local/mysql/
binlog_cache_size   | 32768
character_set   | latin1
character_sets  | latin1 dec8 dos german1 hp8 koi8_ru
latin2 swe7 usa7 cp1251 danish hebrew win1251 estonia
hungarian koi8_ukr win1251ukr greek win1250 croat cp1257 latin5
concurrent_insert   | ON
connect_timeout | 5
datadir | /usr/local/mysql//var/
delay_key_write | ON
delayed_insert_limit| 100
delayed_insert_timeout  | 300
delayed_queue_size  | 1000
flush   | OFF
flush_time  | 0
have_bdb| NO
have_gemini | NO
have_innodb | NO
have_isam   | YES
have_raid   | NO
have_ssl| NO
init_file   |
interactive_timeout | 28800
join_buffer_size| 131072
key_buffer_size | 805302272
language| /usr/local/mysql/share/mysql/english/
large_files_support | ON
locked_in_memory| OFF
log | OFF
log_update  | OFF
log_bin | ON
log_slave_updates   | OFF
log_long_queries| OFF
long_query_time | 10
low_priority_updates| OFF
lower_case_table_names  | 0
max_allowed_packet  | 10484736
max_binlog_cache_size   | 4294967295
max_binlog_size | 1073741824
max_connections | 150
max_connect_errors  | 10
max_delayed_threads | 20
max_heap_table_size | 16777216
max_join_size   | 4294967295
max_sort_length | 1024
max_user_connections| 0
max_tmp_tables  | 32
max_write_lock_count| 4294967295
myisam_recover_options  | 0
myisam_max_extra_sort_file_size | 256
myisam_max_sort_file_size   | 2047
myisam_sort_buffer_size | 67108864
net_buffer_length   | 16384
net_read_timeout| 30
net_retry_count | 10
net_write_timeout   | 60
open_files_limit| 0
port| 3306
protocol_version| 10
record_buffer   | 2093056
record_rnd_buffer   | 2093056
query_buffer_size   | 0
safe_show_database  | OFF
server_id   | 1
slave_net_timeout   | 3600
skip_locking| ON
skip_networking | OFF
skip_show_database  | OFF
slow_launch_time| 2
socket  | /tmp/mysql.sock
sort_buffer | 2097144
sql_mode| 0
table_cache | 512
table_type  | MYISAM
thread_cache_size   | 8
thread_stack| 65536
transaction_isolation   | READ-COMMITTED
timezone| CDT
tmp_table_size  | 33554432
tmpdir  | /tmp/
version | 3.23.42-log
wait_timeout| 28800


Now we go 

Tape backups from live slave

2002-08-30 Thread Jeremy Tinley

Howdy,

Before I run off and try it, can anyone offer up any problems with doing
a LOCK TABLES, FLUSH TABLES and then using tar to back up my 16GB of DB
tables from a slave to a tape?

Currently, I redirect the traffic to another slave and then shut down
this slave to back up the tables and keep replication from happening.
An hour later, it's complete.

Any thoughts on my query?
(Sorry, had to throw the filter word in there somewhere...)


-J







RE: Tape backups from live slave

2002-08-30 Thread Jeremy Tinley

Good question,

a) The slave does not have the free space to make a duplicate of my
data.

b) I have 200 tables.  I don't want to have to recreate indexes for
every table if I have to restore.  I'd rather spend an hour restoring
from tape and let replication catch me up.

-J

-Original Message-
From: William R. Mussatto [mailto:[EMAIL PROTECTED]] 
Sent: Friday, August 30, 2002 1:57 PM
To: Jeremy Tinley
Cc: [EMAIL PROTECTED]
Subject: Re: Tape backups from live slave

On Fri, 30 Aug 2002, Jeremy Tinley wrote:

 Date: Fri, 30 Aug 2002 12:36:28 -0500
 From: Jeremy Tinley [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: Tape backups from live slave
 
 Howdy,
 
 Before I run off and try it, can anyone offer up any problems with
doing
 a LOCK TABLES, FLUSH TABLES and then use tar to backup my 16GB of DB
 tables from a slave to a tape?
 
 Currently, I redirect the traffic to another slave, and then shut down
 this slave to backup the tables and keep replication from happening.
An
 hour later, it's complete.

Why not: use mysqldump to a file and then backup the file.  The total 
time you bypass the slave should be much shorter. 

 
 Any thoughts on my query?
 (Sorry, had to throw the filter word in there somewhere...)
 
 
 -J
 
 
 
 

Sincerely,

William Mussatto, Senior Systems Engineer
CyberStrategies, Inc
ph. 909-920-9154 ext. 27







RE: Tape backups from live slave

2002-08-30 Thread Jeremy Tinley

True, I hadn't considered that mysql will just create the indexes as it
loads the data back in.  That increases the time for inserts by a
fraction of a second at most, but multiplied over a large enough scale,
it's unnecessary overhead.  Even if I load the data and then create or
rebuild the indexes, it's still too long.

That still leaves the problem of the size of the databases themselves
and the time that it takes to dump:
mysqldump -u user --opt db table > table.db

This has been running for over 30 minutes now, and it's only one table.

Obviously, even doing rough math ((16GB / 1.2GB) * .5 hr ≈ 6.7 hrs),
mysqldump is not an option.  Surely there's a better backup solution for
large databases.
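Redoing that estimate precisely (same assumptions as above: roughly half an hour for the one 1.2GB table, scaling linearly to the full dataset):

```python
# Sketch: linear extrapolation of dump time from one table's timing.
table_gb, table_hours = 1.2, 0.5   # the one table observed above
total_gb = 16

estimated_hours = total_gb / table_gb * table_hours
print(round(estimated_hours, 1))  # 6.7
```

Either way, the conclusion stands: hours of dumping is not a workable window.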

This leaves me looking at mysqlhotcopy.  Here's what it would do for one
table:

- begin snip -
mkdir /path/to/backup/dbname/, 0750
LOCK TABLES `dbname`.`tablename` READ
FLUSH TABLES /*!32323 `dbname`.`tablename` */
Copying 3 files...
cp -p
'/usr/local/mysql/var/dbname/tablename.MYD'
'/usr/local/mysql/var/dbname/tablename.MYI'
'/usr/local/mysql/var/dbname/tablename.frm'
'/path/to/backup_test/dbname'
Copying indices for 0 files...
UNLOCK TABLES
- end snip -

If all mysqlhotcopy does is lock tables and copy the source files to
another location on the drive, tar would, on a basic level, do just the
same thing by reading the file data in the same manner.  In theory, I
could just use tar instead of cp, and go straight to tape, bypassing the
file IO of copying 16GB of data to another location on the file system
(and subsequently deleting it).
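The idea above can be sketched as a script.  The paths below are illustrative, and the demo substitutes a temp directory and a plain archive file for /usr/local/mysql/var and a tape device like /dev/st0 so it can run anywhere; on a real slave, the LOCK TABLES / FLUSH TABLES would be held open in a mysql session for the duration of the tar, then released with UNLOCK TABLES.

```shell
#!/bin/sh
# Sketch: stream locked table files straight to the backup target,
# skipping mysqlhotcopy's intermediate cp of 16GB.
set -e

# Stand-ins for the real datadir and tape device (assumptions):
WORK=$(mktemp -d)
mkdir -p "$WORK/var/dbname"
touch "$WORK/var/dbname/tablename.MYD" \
      "$WORK/var/dbname/tablename.MYI" \
      "$WORK/var/dbname/tablename.frm"

# On the real slave, in a mysql session that stays connected:
#   LOCK TABLES `dbname`.`tablename` READ; FLUSH TABLES;
# ...then from a shell:  tar -C /usr/local/mysql/var -cf /dev/st0 dbname
tar -C "$WORK/var" -cf "$WORK/backup.tar" dbname
# ...and finally UNLOCK TABLES in the mysql session.

LISTING=$(tar -tf "$WORK/backup.tar")
echo "$LISTING"   # the archived table files, no staging copy made
rm -rf "$WORK"
```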

-J


-Original Message-
From: Mike Hall [mailto:[EMAIL PROTECTED]] 
Sent: Friday, August 30, 2002 3:14 PM
To: Jeremy Tinley; [EMAIL PROTECTED]
Subject: Re: Tape backups from live slave

 b) I have 200 tables.  I don't want to have to recreate indexes for
 every table if I have to restore.  I'd rather spend an hour restoring
 from tape and let replication catch me up.

Why would you have to recreate the indexes? The dump will contain this
information in the CREATE statements.

Mike



