Re: MySQL Replication

2003-08-12 Thread Michael Conlen
Something to ask yourself in all this is: how fast can I change out a 
piece of hardware? If I needed a high-availability system on the cheap 
(i.e., not a million US dollars' worth of Sun hardware), I'd probably go 
with a bunch of SuperMicro 2U rack-mount servers with hot-swap SCSI 
drives. You can rack up a bunch of these and, in the event of a hardware 
failure, pull the drives from one machine, put them in another, and be 
online in under 15 minutes (that's a lot of 9s if it happens once a 
year; you're going to be down longer than that rebooting each time there's 
a critical OS update). I've had a lot of success bringing a server up 
on different hardware by quickly switching the drives into a new unit. If 
you're racked up and ready to go, it can be as quick as the time to swap 
drives and power up. This presumes a human presence in your datacenter 
24/7 and good alarms. The nice thing is that it doesn't require someone 
to log in and do things, it doesn't require custom scripts, and it doesn't 
require esoteric high-availability software. It requires someone to pull 
some drives, plug them in somewhere else, and turn the unit on. Is 15 
minutes acceptable over the life of one of these systems? (Hint: I've 
only ever seen one of these systems fail, and I've got a lot of them 
here. And yes, I pulled the drives, put them in a spare unit, and voila.)

If you have a lot of data you can look at fibre channel solutions for 
your data drives. A new unit can attach to the same disks over a 
fabric (if I'm not using outdated buzzwords here) and voila, a few 
terabytes of data are on a new system. It's also handy if you need to 
switch masters. Work out a setup where your data is on an FC array 
and you can switch which system handles it. Down the server, run a 
script to change which system attaches that data, and bring up the server 
on the other machine (complete with IP addresses). Sure, there's some small 
downtime, but you can usually get away with a well-planned couple of 
seconds at 3 AM.

Lots of time and money are put into software solutions where an Igor would 
do well (or a NOC tech; and for you NOC techs out there, I've got a lot 
of respect for Igors, they're good with a needle).

Something else to consider in high availability systems is regression 
testing. Think about what people can and will do to your systems and test 
against it. This is a good way to get a lot of extra hardware around in 
your office/lab. Think of everything you might do to a production system 
and write a test plan for it (I once did an 1800-line interactive shell 
script that had 900 test plans for each hardware platform it worked on, 
of which there were 12). In any case: when I upgrade the version of the 
OS, what happens? When the code does X (for every X), what happens? When 
I buy a new switch, what happens? When I upgrade MySQL, what happens? 
When I introduce code changes, what happens? While it's not directly 
related to MySQL, it's important, and you should at least be thinking in 
terms of OS, hardware and database server, and have a good set of 
automated test plans from the developers that you can run against your 
hardware, including load testing. You can put a ton into hardware 
failover, but it won't mean squat when the code locks all your other 
queries out for a couple of hours. I had a situation where upgrading a 
client's systems to InnoDB tables caused a problem for one of his 
scripts that bulk loaded information into the system. It turned out to 
be a nice little switch in my.cnf, but I had no way to test this before 
I did an ALTER TABLE on his stuff to know that his updates would take 
*that* long before (in the end) failing. Also consider that the default 
for the option that needed to be switched had changed between 
versions. Anyway, get yourself a nice testing lab out of all this 
if you can; I'm sure we'd all like to have more hardware to play with :)
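
One cheap check I'd fold into a test plan like that (just a sketch; it 
won't tell you which option bit you, only that something moved): dump the 
server settings on the old and new versions and diff the two, since a 
changed default is exactly the kind of thing that shows up there.

show variables;                 -- every server option and its effective value
show variables like 'innodb%';  -- or narrow it to just the InnoDB settings

Save the output from each version and compare them before you run the 
real load tests.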

--
Michael Suspenders and Belt Conlen




Re: max clients

2003-07-31 Thread Michael Conlen
The answer (like most of these) is, it depends. You really need to 
benchmark your application to know how often a user is going to 
actually hit your database server, and how many connections the 
application creates for each user. I've got a server that handles the 
databases for 3000 users hitting web apps on several different web 
servers without blinking (dual P4 Xeon, 2 GB of memory). It runs at 
worst 75% idle. On the other hand, I've seen applications where 100 users 
would bring this server to its knees if it were serving their database. 
It all depends on usage, and you can't get from user count to 
database usage directly.

I would pile up some hardware you're not using at the moment, set things up, 
and write some scripts to simulate actual usage of your application. 
Load the server with some real data, especially in terms of quantity of 
rows, and see what it handles. Measure performance and system usage 
metrics and go from there.
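
A few counters worth watching while the simulation runs (nothing fancy, 
just the stock status variables; what counts as "too high" is whatever 
your application makes it):

show status like 'Threads_connected';     -- connections open right now
show status like 'Max_used_connections';  -- high-water mark since startup
show status like 'Questions';             -- total queries; sample it twice to get queries/second
show variables like 'max_connections';    -- the ceiling you're testing against

If Max_used_connections creeps toward max_connections while the box still 
has idle CPU, you have a connection-handling problem rather than a 
capacity problem.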

--
Michael Conlen
NEWMEDIAPLAN wrote:

Can mysql handle 5000 concurrent webusers sending queries to the db through
a web search engine.
Is it possible (with a very big server/hw) ?
Thanks. Roberto


 





Show Status data

2003-07-17 Thread Michael Conlen
I'm developing a net-snmp module that executes a 'show status' command 
on a MySQL server and advertises the values. My choices seem to be

super concrete: one server, one set of results, the variables don't 
change, ever...
moderately concrete: multiple possible servers, one set of results each, 
the variables don't change, ever...
abstract: multiple servers returning a table of some results. Things are 
what they are when they are...

The problem with the obvious choice is that it's a table of tables of 
values, and if you've used SNMP much you know where that leads; try 
programming for it...

So the question is: are the values returned by the show status command 
likely to change much (values added or removed between releases), or can I 
just write a long but easy-to-code-for MIB with one entry per value?
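
For reference, the shape of the data I'm mapping is just the two-column 
name/value table that show status returns, something like (values 
obviously made up):

show status;
-- Variable_name        Value
-- Connections          12045
-- Questions            987654
-- Threads_connected    23
-- Uptime               864000

so the MIB question is really just whether that list of names stays put 
between releases.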

--
Michael Conlen




Re: Stopping MySQL taking down a server?

2003-07-16 Thread Michael Conlen
What operating system are you using, and how are you measuring 
unresponsiveness?

--
Michael Conlen
Tim Fountain wrote:

This may be a silly question but what can be done to stop
load-intensive MySQL processes taking down a server? Things like
adding fulltext indexes to very large tables, or selects on very large
(multi-million-row) tables just completely kill the box until they
complete.
I don't mind how long these things take but the box shouldn't become
unresponsive while they are running.
 





4.0.13 or 4.0.14

2003-07-15 Thread Michael Conlen
I'm getting ready to upgrade a server from 4.0.12 and was wondering if 
anyone knew the time frame for 4.0.14, or if I should just go with .13 
for now.

--
Michael Conlen


Compilers

2003-07-10 Thread Michael Conlen
There's been much news lately about various compilers for the Intel 
platforms, after various benchmarks have come out which all disagree. One 
thing I noticed is that the new Intel processors have a math unit 
that allows vector calculations, which I understand as being able to 
pass an array and an operation to the processor to perform an iterative 
calculation as opposed to running the steps over and over again.

Does anyone know if these would be of significant value to MySQL, and are 
there versions compiled with the Intel compiler, as opposed to gcc, to 
demonstrate this with?

If not, does anyone have access to the Intel compilers to benchmark this 
with?

--
Michael Conlen


Re: freebsd 5.1 + mysql 4.0.13

2003-07-02 Thread Michael Conlen
Richard,

Welcome to the wonderful world of FreeBSD. FreeBSD 5.1 is not release 
code. It is alpha quality (though pretty good quality as alpha code goes). 
I understand that the website doesn't make this abundantly clear on the 
homepage. FreeBSD 4.8 is the current production-quality code. It will 
probably be much faster for you as well, since debugging options are 
turned off by default. You can get from 5.1 to 4.8 without having to 
reinstall the OS by, well, reinstalling the OS.

By following the instructions in the handbook for upgrading, you can 
also downgrade to FreeBSD 4.8 by downgrading your source tree in 
/usr/src and building and installing world. It's actually not quite as bad 
as it seems in the docs, and I've managed upgrades and downgrades with a 
minimum of downtime. Most of the steps can be done while the system is 
running. The steps that should be done in single-user mode (and I 
recommend this since you're not familiar with FreeBSD yet) are pretty quick.

FreeBSD is actually fairly nice once you know what's going on.

The Handbook is 
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/index.html
and the chapter you want is 
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/cutting-edge.html

Substitute old for cutting edge and you're on your way. The CVS tag you 
would want is

RELENG_4_8

I would recommend rebuilding anything you built in ports after you 
upgrade the system, since this downgrade is considerable, but again much 
of that compile time will be while the system is running.

You're really only looking at the time to do a make installworld and make 
installkernel, and rebuilding the database server, as your downtime. That's 
much less than reinstalling an entire OS and getting everything 
installed that you want.

If you need more assistance, shoot me a line off the MySQL list (as it's 
no longer a MySQL issue) and I'll answer what I can.

--
Michael Conlen




Re: License?

2003-07-02 Thread Michael Conlen
I would recommend reading up on the copyright laws in your country. 
Publishing and distribution are legally defined terms. IANAL, but IIRC 
publishing means to create a copy of something on a medium, and 
distribution means to deliver that medium to someone else. These 
concepts get tricky in a legal sense, and the question of precedence 
between contract law and copyright law needs to be determined. You may have 
rights granted to you through copyright law.

In the context of doing things at work you are your company, not you as 
an individual, so your company is not 'distributing' MySQL if you put it on 
a bunch of its own machines, the way you might be if you put it on a bunch 
of machines for other companies. This gets tricky when you're a consultant 
and you have been hired to install MySQL...

You should be able to ask MySQL for a clear answer to a clear question.

Like all legal things, talk to a lawyer if there's a license issue.
--
Michael Conlen
Joel Rees wrote:

What does internal distribution mean? Is it another thing than copying?
   

I've wondered that myself. 

See
   http://www.gnu.org or 
   http://www.fsf.org 

to get more information on the GPL. Licensing, etc., is explained on
their site,
   http://www.mysql.com/doc/en/Licensing_and_Support.html

and they tend to be willing to answer questions if you send mail to
their sales crew.
Remember that publishing and distribution are two separate things.

I think they used not to be very concerned about internal distribution,
except in cases where the numbers were large, but I think their lawyers
and business people having been pushing them to avoid ambiguities.
 

Consider this example:

A company has 2 database servers and want to install MySQL on both servers.
Is MySQL free for the first server, but require a license for the second
server? Or are MySQL free for both servers?
   

Don't get me started.

   http://www.mysql.com/doc/en/Using_the_MySQL_software_under_a_commercial_license.html
   http://www.mysql.com/doc/en/Using_the_MySQL_software_for_free_under_GPL.html
It looks to me as if distributing a modified version of MySQL appears to
require either the use of the GPL on your modifications or the purchase
of a license for each copy distributed. 

Modification includes linking an application to either MySQL or to one
of the MySQL provided drivers. A GPL compatible license may also be used,
I think, and if that path is chosen, it must be applied to all of your
application source. 

Previously, the drivers were under the LGPL, which allowed linking an
application that was not GPL compatibly licensed, and that was
significantly easier to work with.
Apparently (without further elucidation from MySQL) you can't distribute
PHP linked with the new versions of the drivers (or even to libraries
designed to work only with parts of the driver API that are uniquely
MySQL's and therefore covered by MySQL's copyright). 

As a result, PHP 4 is distributed with libraries linked to the old
drivers, and PHP 5 is distributed without the MySQL specific drivers
directly linked. So the end user of an app written for PHP 5 must
install MySQL and its drivers; separately install PHP and either compile
the MySQL libraries in or, for MSWindows, set it up to use the MySQL
shared libraries dll; and then install the app. 

You could provide an installer to install both PHP and the app, I think,
but the installer for MySQL would have to be separate. (And if you built
your own separate installer for MySQL, the installer would have to be
under the GPL.)
This would be because PHP is not under the GPL license, but under the
PHP license, which does not require modifications to be published under
a GPL compatible license in order to be distributed.
If you use a generic driver, you may be able to avoid the GPL effects,
but that's really beside the point.
If it makes you money, and if you want it to continue to make you money,
logic itself requires you to send some of the action back to the people
that build it. In MySQL's case, the people who build it have set up a
licensing program to make it easier to cooperate financially and
technically.
rant
If you used, for instance, PostGreSQL, even though that license does not
place any publishing or licensing requirements on linked code, the logic
remains. Support the developers, or expect to find yourself stuck
without support. Vote with your money, so to speak.
(As I see it, the two specific advantages of open source and free
software are, first, you can legally modify it to your own purposes, and,
second, you can usually set up some way to get a good start without
paying through the teeth just for the right to find out if your project
is going to roll like a tank or roll in the tank. The concept of making
money with no expenses at all is a mirage, and a dangerous one, and when
you hear the suits talk about frictionless economy, tell them to take
their manure generators elsewhere.)
/rant
 




Re: NFS or replication?

2003-07-02 Thread Michael Conlen
Steven,

Don't use NFS, bad idea.

You can do the master-writer/multi-reader setup, but it's always been a 
problem making sure every reader is up to date. You need to have a way 
to verify this manually.
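
A rough manual check (a sketch only; the exact column names vary a bit by 
version): compare what the master has written against what each slave has 
read and executed.

show master status;   -- on the master: current binlog file and position
show slave status;    -- on each slave: which master log file and position it has reached

If a slave's position lags the master's for more than a moment under 
load, your readers are serving stale data.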

If you're really going to max out your platform there are other platforms to 
look at, but the costs go up FAST. Instead of looking at a 10k machine 
you start looking at a 100k to 1 million machine, but it all depends on your 
needs. If you need to have the data correct that instant, you need it, 
but for a web profile it might be much more cost effective to say 
'updates to your profile may take a few moments' or some such. Sun, HP 
and IBM make some very good hardware using their own designs (SPARC, 
PA-RISC and POWER4).

--
Michael Conlen
Steven Balthazor wrote:

I am interested in any thoughts that people may have for creating a scalable mysql 
infrastructure.
I have a web application which runs on several front end web servers which hit one 
backend mysql
server.  Presently I can continue to grow by adding front end webservers -- the mysql 
server is not
close to maxed out.  Looking toward the future I will have to make a decision about 
how to grow the
mysql serving capability and have several ideas on how to do it.
Now some questions for the group:
1.  I can guess that my select to insert/update ratio is probably on the order of 4:1 but is there a
simple tool to use to determine the actual ratio.
2.  When I want to scale up the mysql server what are the pros/cons of each of the following:
	a.  Create an NFS server on the backend and load balance several mysql servers all accessing
the same database files via NFS (is this even possible/desirable?)
	b.  Make one big server the primary insert/update server and replicate the data out to many
read-only slaves (at what ratio of read to writes in conjunction with number of slaves does this
start to limit scalability).
	c.  Buy one big monster server every year and hope to stay ahead of my needs (and have the
previous years machine as a backup)
3.  With a replication strategy how does one make sure that the current information is displayed to
a user?   For example, a frequent action in a web application is to update information in a user's
profile.  Typically the way this is done is for the user to enter the information into a web form
submit the form and then the user gets a page with the current data displayed in a read-only format
(so the user knows the update was successful).  How do most people handle this to make sure that the
current data is displayed?  Do you just perform the select from the write server for this one case?
Or is replication fast enough that performing the select from one of the slaves is ok?  
4.  Replication (choice b) seems to be the preferred way to go, based on what I have seen on the
list; is there a reason why NFS is not an option?  Also is the choice determined by the type of
database (InnoDB vs. MyIsam).  

I am interested in any comments/experience people may have on this issue.  I have many 
thoughts of
my own regarding ease of maintenance, backup, reliability, ease of expansion, cost, 
performance,
etc.  However I have not had time or hardware to test the different possibilities and 
would greatly
appreciate hearing what others have to say.
Thank you for your comments,
Steven Balthazor
 





Re: simplify query?

2003-07-02 Thread Michael Conlen
What you want to do is reduce the query. Logic is reduced using 
similar rules to algebra.

Think of OR operations as addition and AND operations as 
multiplication, and you can manipulate them using the same rules you 
do in algebra.

select * from t where (a and b) or (a and c);

where a, b and c are conditions

this query could be restated as

select * from t where a and (b or c);

Your query is of the form

(a or b or c) and (d or e or f),

which I don't think can be reduced any further, but it can certainly be 
made more complicated as in (a and d) or (a and e) or (a and f) or (b 
and d) ...
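
A concrete version of the factoring rule, with made-up columns (both 
statements return the same rows):

select * from orders where (status = 'open' and region = 'east')
                        or (status = 'open' and region = 'west');

select * from orders where status = 'open' and (region = 'east' or region = 'west');

Your query is already in the factored (a or b or c) and (d or e or f) 
shape, which is why there's nothing left to pull out.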

--
Michael Conlen
Reto Baudenbacher wrote:

hi

Sorry for this newbie-question: 

is it possible to simplifiy the following (working) query?



SELECT * FROM mytable WHERE 

((col1 LIKE '%test%') OR (col2 LIKE '%test%') OR (col3 LIKE 'test%')) 

AND

(col5 = 'y' OR col6 = 'y' OR col7 = 'y') ORDER BY col1



Thanks for any suggestions!
Reto Baudenbacher
 





Re: slow performance on simple queries

2003-06-26 Thread Michael Conlen
Do you have disk performance data? My guess would be it's the time to 
read the information off the disk.

I think Red Hat ships the iostat command. Try running

iostat 1

in one window while you run your query in another; what result do you 
get from iostat?

--
Michael Conlen
Bob Ostermann wrote:

Hi-

I'm running mysql  Ver 12.20 Distrib 4.0.13, for pc-linux (i686) / 
RedHat 7.3 on a AMD 2400XP 256 MB.

I've got a table (type=MyISAM) that's grown to 1,7 GB (154,815 
records) and it takes several seconds to perform even simple querys:

select data from eod where symbol='ZOOM'

yields: 1 row in set (3.15 sec)

(data is about 29K)

a repeat of the query yields 0.00 seconds, so the slowdown appears to 
be in the initial accessing of the data, not the handling/display of 
the result.

nearby records (key 'ZOWI') execute faster (1 row in set (0.28 sec)) 
but still slowly.

this is the create:

CREATE TABLE `eod` (
  `symbol` varchar(10) binary NOT NULL default '',
  `updated` date default NULL,
  `data` mediumtext,
  PRIMARY KEY  (`symbol`)
) TYPE=MyISAM
Is there a relationship between filesystem/recordsize/recordcount that 
needs to be optimized for this type of table?

bob.



==
The RediNews Network http://www.redinews.com/
Custom Financial Web Site Content
phone: (516) 997-4343
==





Re: Anyone had a chance to try an Opteron yet?

2003-06-24 Thread Michael Conlen
64-bit servers have different performance characteristics and tend to be 
used for different things than PC systems. They are generally 
backplanes to which are attached some number of processors, some 
amount of memory, and a lot of IO. They aren't used as much for 
processor speed (you could get a bunch of PCs to do that) as they are 
for doing a lot of IO. Even a small server like the old Sun E450 (4 
processors) had something like 6 or 8 PCI buses on it. Larger systems 
could be configured with a large number of IO cards for those workloads 
that just need a few gigabits per second of network IO and a ton of disk 
space (multiple disk controllers, or FC controllers, all going full speed).

You would use the memory to store temporary information as a query runs, 
and you rely on the system's fast access to the disks to scan through the 
tables. You would generally attach anywhere from a few hundred gigs of 
disk (spread out over many smaller disks) up to many terabytes (it's 
been a while since I've done large-system admin work, so I have no idea 
what the largest systems are doing, but imagine 72 cabinets full of 72 
GB or larger disks). This way, instead of getting speed from caching the 
data, you get speed by reading the data off the disks quickly.

64-bit workstations usually had an advantage over PC systems in that 
the memory bus was not the bottleneck it can be on a PC, avoiding 
delays due to cache misses, which made them great for visualization 
workstations where the system had to scan through a lot of memory 
quickly to generate an image or process scientific data.

There's a lot more to it, going back to the fact that Digital, HP, 
Sun and IBM have always had a head start on superscalar and multi-core 
CPU designs, so comparing MHz was never even close to fair between two 
processors. On the other hand, many people never saw that advantage 
because they would compile with gcc, which was never the best choice for 
pure speed on a given processor.

If you need a 64-bit processor for memory and file size reasons and can 
sacrifice some of the processing speed (a sacrifice that often goes away 
because of the faster IO), there's always been a good used market, in 
particular for Sun equipment. I've seen some dirt-cheap prices on fully 
loaded Sun E450 systems, which are very nice for their size. I think they 
hold 20 disks internally, and there are PCI slots for a lot more if you 
need large files.

On the other hand, I think situations where you need 64-bit and it has to 
be affordable are rare.

--
Michael Conlen
Mike Wexler wrote:

Not necessarily. People that need relatively affordable 64 bit systems 
may be waiting for the Opteron to stabilize. My experience is the 
Wintel solutions (like Opteron) tend to have at least a 2-1 price 
performance over Sun and Dec. Also, given that HP has basically 
dropped Alpha, I don't think a lot of people are likely to be 
implementing that platform.

Dan Nelson wrote:

In the last episode (Jun 24), David Griffiths said:
 

I'm surprised there is not more interest in this; is it that not many
work with large-ish (10+ gig) databases that need high-end
performance?
  


I think we have a mysql database running on Tru64, and I'm sure it runs
great on Solaris.  My guess is the people that needed over 2gb of RAM
have switched to 64-bit CPUs long ago.
 









Re: Distributed/Fault Tolerant DB operation... possible?

2003-06-20 Thread Michael Conlen
Rick,

Sorry your request was taken the wrong way; I get requests like this for 
various things all the time, and the people making them are serious. I also 
see, on a regular basis, people get two T1 connections from the same loop 
provider, to the same ISP, and call it redundant. So the question 
remains: how many 9's do you need?
I'd look at having two servers, one a master and another a slave, and 
some software that can tell when one of them goes down, then promote the 
slave to master and take over the IP address of the down system.
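
The promotion itself is only a couple of statements (a sketch, assuming 
4.0-style syntax; the IP takeover and the decision to fail over are the 
hard parts and live outside MySQL):

stop slave;     -- stop applying relayed updates from the dead master
reset master;   -- start a fresh binlog so this box can act as the new master

After that, the old master, once repaired, comes back as a slave of the 
new one rather than the other way around.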

--
Michael Conlen
Rick Franchuk wrote:

On Thu, 19 Jun 2003, Michael Conlen wrote:

 

First get an acceptable outtage rate. Your only going to get so many 
nines, and your budget depends on how many. The system will fail at some 
point, no matter what, even if it's only for a few seconds. That's 
reality. Figure out what kinds of failures you can tolerate based on how 
many 9's you get and what kinds you have to design around. From there 
you can figure out a budget. 99.999% uptime is 5 minutes and 15 seconds 
per year of total downtime. 99.99% is  52.56 minutes and so on. At some 
point something will happen, and I've never seen anyone offer more than 
5 9's, and IBM charges a lot for that. Then, figure out everything that 
could cause an outtage, figure out how to work around them and give them 
a budget. Watch how many 9's come off that requirement.
   

Obviously my hyperbolic emphasis on uptime has brought out the pedant in some. 
I'm not expecting unrealistic results... I would have thought my 'breathless' 
expression of uptime requirement would be seen as a bit of exaggeration. I'll 
try to be more explicit in the future.

 

If you have to use MySQL I'd ditch PC hardware and go with some nice Sun 
kit if you haven't already, or maybe a IBM mainframe.
   

The code using mysql represents several thousand man hours of work, and 
although the database-using elements are somewhat abstracted there's 
guaranteedly mysql linguistic variants spread all throughout the code. It'd be 
substantial work converting... not impossible, but a pricey PITA. A mysql 
solution would be preferable under the circumstances.

 

hand waving extreme commentary about the impossibilty of the situation and 
general hyperbole snipped
   

 

There's a lot of issues to consider in there, and you probably want someone
with a graduate degree in computer science to look over the design for you.
(anything this critical and I get someone smarter than me to double check my
designs and implementations).
   

I'm sure more than one of the various graduates on our design team will do so, 
seeing how their code will likely need to be modified somewhat to take 
advantage of the new setup.

 

On the other hand, if you have all this money, look at some of the 
etc and so on
This is just a quick run down of immediate issues in a 24x7x365, it's 
not exhaustive. Think about every cable, every cord, every component, 
from a processor to a memory chip and think about what happens when you 
pull it out or unplug it, then make it redundant.
   

Yes, we're quite aware of what 'redundancy' means. Thank you ever so much for 
pointing out the painfully obvious, much of which has already been taken care 
of inside the time, engineering and money constraints given us.

Now, if you have some actual HELP to give regarding the viability of MySQL to 
the application of the problem at hand...

 





Re: selecting PRIMARY KEY when there is no unique value

2003-06-19 Thread Michael Conlen
Mojtaba Faridzad wrote:

Hi,

In a docuement such as Invoice Form, we have a header and a couple of
records for the detail. In header table, Invoice# can be the PRIMARY KEY but
in detail table, Invoice# is not unique. I think there are two solutions to
choose a Primary Key (in MyISAM type) :
1) Adding an id field ( auto_increment ) and choose it as PRIMARY KEY
in this case we have to add another index on Invoice# for making relation
with the header table
2) There is another field in detail table with timestamp type for keeping
the last change on the record. I want to select ( Invoice# + myTimestamp )
for PRIMARY KEY. in this case I don't need to add a new fields ( id ) and
another index ( on Invoice# ) to the table.
which one do you prefer and usually use?

I always use an id field with auto_increment. It helps with normalization, 
and it makes the code I use to deal with information very generic; granted, 
I've abstracted the code to the point that it has no clue what it's 
doing, it just gets it done. In my case, I know that the foreign key is 
always one column, so I can shortcut the lookup that builds the joins: 
it's an index, it's a foreign key, so it points at this table and index. If 
the foreign key's index could be anything, then it becomes: it's an index, 
it's a foreign key, it's this table and index, the index is these columns, 
and the code to generate the join gets 'interesting'.

The other issue is that while your timestamp should be unique when 
combined with an invoice number by whatever rules you're dealing with, there's 
nothing that says it will be in the real world (the one where crazy 
things happen). By having the id field I never, ever deal with it 
myself; MySQL always puts the number in there for me, and I know it's 
going to be unique unless MySQL does something it should not do.

The id field just takes the guesswork, mess and headaches out of the 
code (well, not *all* of them, but enough), and with the size of disks 
these days the extra space isn't much.
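
A sketch of what I mean for the detail table (column names made up, the 
shape is what matters):

CREATE TABLE invoice_detail (
  id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
  invoice_no INT UNSIGNED NOT NULL,
  item       VARCHAR(40)  NOT NULL DEFAULT '',
  qty        INT          NOT NULL DEFAULT 0,
  changed    TIMESTAMP,
  PRIMARY KEY  (id),
  KEY idx_invoice (invoice_no)
) TYPE=MyISAM;

The id is the primary key the code joins on, and the plain index on 
invoice_no is what ties the detail rows back to the header.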

--
Michael Conlen




Re: Help me!!!

2003-06-19 Thread Michael Conlen
Hola,

Puedes encontrar una lista Mysql Espanol a 
http://lists.mysql.com/list.php?list=mysql-es#b . Yo pienso que ellos 
pueden ayudarte ma's que nosotros podemos.

You can find a spanish MySQL list at 
http://lists.mysql.com/list.php?list=mysql-es#b . I think they can help 
you more than we can.

--
Michael Conlen
Luis Enrique Bauzá Peña wrote:

Hi, I need some link to a spanish mysql list, would you ...?

 





Re: Distributed/Fault Tolerant DB operation... possible?

2003-06-19 Thread Michael Conlen
First get an acceptable outage rate. You're only going to get so many 
nines, and your budget depends on how many. The system will fail at some 
point, no matter what, even if it's only for a few seconds. That's 
reality. Figure out what kinds of failures you can tolerate based on how 
many 9's you get and what kinds you have to design around. From there 
you can figure out a budget. 99.999% uptime is 5 minutes and 15 seconds 
per year of total downtime; 99.99% is 52.56 minutes, and so on. At some 
point something will happen, and I've never seen anyone offer more than 
5 9's, and IBM charges a lot for that. Then figure out everything that 
could cause an outage, figure out how to work around each one, and give 
each a budget. Watch how many 9's come off that requirement.

If you have to use MySQL I'd ditch PC hardware and go with some nice Sun 
kit if you haven't already, or maybe an IBM mainframe. Sun's Ex8xx line 
should let you do just about anything without taking it down (like 
changing the memory while it's running). Then I'd get a bunch of them. 
Then I'd recode the application to handle the multiple writes to 
multiple servers and keep everything atomic, then test the hell out of 
it. There are a lot of issues to consider in there, and you probably want 
someone with a graduate degree in computer science to look over the 
design for you (anything this critical and I get someone smarter than 
me to double-check my designs and implementations). It may be best to 
just build it into the driver so the apps are consistent.

On the other hand, if you have all this money, look at some of the 
commercial solutions. This is probably heresy on this list, but hey, it's 
about the best solution for the needs, right? Sybase or DB2 would be my 
first choices depending on the hardware platform (Sun or mainframe). Those 
systems are set up to handle failover of the master server. I know for 
Sun you want to be looking at Sun Clustering technology, a nice SAN and 
a couple of nice servers. You write to one server, but when it fails the 
backup server starts accepting the write operations as if it were the 
master. There's a general rule in software engineering that says if 
you can buy 80% of what you want, you're better off doing that than trying 
to engineer 100%.

Think about the networking: two data paths everywhere there's one. Two 
switches, and two NICs for each interface, each going to a different 
switch.

Depending on where your clients are, you need to look at your 
datacenter. Is your database server feeding data to clients outside your 
building? If so, you probably want a few servers in a few different 
datacenters. At least something like one on the east coast and one on 
the west coast in the US, or the equivalent in your country, both of 
which have different uplinks to the Internet. Get portable IP addresses 
and do your own BGP. That way, if a WAN link fails, the IP addresses will 
show up on the other WAN link even though it's from a different provider.

This is just a quick rundown of immediate issues in a 24x7x365 setup; it's 
not exhaustive. Think about every cable, every cord, every component, 
from a processor to a memory chip, and think about what happens when you 
pull it out or unplug it. Then make it redundant.

--
Michael Conlen
Rick Franchuk wrote:

Hi guys,

 I've been doing some digging around and found some information about 
master/slave database duplication, but it always sees to focus on increasing 
query performance by spreading the db out.

 My situation is that there's a database which must absolutely, guaranteedly 
be operational 24x7x365 always forever. It must survive and still be 
operational through power failures, machine locks, and any other manner of 
scheduled or unscheduled downtime short of a bomb dropping on the co-lo.

 This would be relatively easy to do if the system was purely read-only: I'd 
simply duplicate my data across numerous machines and pull queries from them, 
perhaps on the other side of a load balancer to make no one machine have to be 
too painfully hit.

 However, this system is write-heavy (at least 50%, with periods of time 
reaching 80% or more). Therefore, I need to be able to do a store to one of 
the servers, and have that store propogate to the other machines (with 
appropriate software design to compensate for propogation delays and 
insert-order neutrality).

 Has anyone done this with two (or more, if possible!) machines? Is it 
possible to do at the present time?

 





identifying foreign keys

2003-06-13 Thread Michael Conlen
I'm working on a system that will allow me to perform operations on a 
table using a web form, and I would like to know if a table has a 
foreign key; however, I don't seem to see this information listed in the 
output of any of the show commands (or describe). Any idea how I could 
identify which columns are foreign keys and what they reference, without 
having to tell the code? I use 'describe table' to get most of the 
information I need.

Oh yes, MySQL 4.0.12 on FreeBSD using InnoDB tables.

Thanks much.

--
Michael Conlen

