RE: [PHP] RE: non-auto increment question

2009-02-26 Thread Gary W. Smith
> Being rather new to all this, I understood from the MySQL manual that
> the auto_increment value is to be used immediately after an insertion, not
> intermittently. My application is for administrators (the site owner &
> designates) to update the database from an administration directory,
> accessed by user/password login... so there's really very little
> possibility of 2 people accessing at the same time.
> By using MAX + 1 I keep the id number in the $idIn and can reuse it in
> other INSERTS
> --

The statement is confusing at best.  For the casual user, auto_increment
is the way to go.  I say "for the casual user", meaning typical users like
you and me.  Basically, if you do an insert, a unique value is assigned at
the time of the insert.  As mentioned, there are ways to get this value
back in the return.

Now, the reason I say it's for the casual user is that if you are using
triggers, you can do things before this value is used, and then the
statement above is correct.  But you are not going to be using
triggers...

So, put an auto_increment on the key field and find one of the 2^16
examples of how this works with PHP.
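For illustration only (the thread never shows the code): a minimal sketch of the insert-then-fetch-the-key pattern, using Python's sqlite3 as a stand-in for MySQL. Here `lastrowid` plays the role of PHP's mysql_insert_id() / SQL's LAST_INSERT_ID(); the table and column names are hypothetical.

```python
import sqlite3

# In-memory stand-in for the MySQL tables from the thread
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT)")
conn.execute("CREATE TABLE author (book_id INTEGER, name TEXT)")

# The database assigns the unique id at insert time; no MAX(id)+1 needed
cur = conn.execute("INSERT INTO book (title) VALUES (?)", ("First Book",))
new_id = cur.lastrowid   # SQLite's analogue of PHP's mysql_insert_id()
print(new_id)            # -> 1

# The generated id can then be reused in follow-up INSERTs
conn.execute("INSERT INTO author (book_id, name) VALUES (?, ?)",
             (new_id, "Some Author"))
```

The point of the pattern: because the server hands the key back, two concurrent inserts can never see the same value, unlike the MAX + 1 approach.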

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: non-auto increment question

2009-02-25 Thread Gary W. Smith
Not sure that this is the problem, BUT you should probably alias the name of 
the column, as in "SELECT MAX(id) AS id FROM book".  But you don't want 
"MAX(id) AS id" here, rather "MAX(id) + 1 AS id".  With that you can then just 
return the final value.  Also, if you don't want to alias the value, you can 
use $row[0] to get it by ordinal position.
 
As for not wanting to use auto-increment: you can run into a race condition 
where two people insert at the same time and end up with the same generated 
id.
 
Hope that helps.
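A sketch of both retrieval styles described above, with SQLite standing in for MySQL (fetchone() here maps onto the thread's mysql_fetch_array(); the MAX(id) + 1 race caveat still applies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER, title TEXT)")
conn.executemany("INSERT INTO book VALUES (?, ?)",
                 [(1, "A"), (2, "B"), (3, "C")])

# Compute MAX(id) + 1 inside the query and alias it...
row = conn.execute("SELECT MAX(id) + 1 AS id FROM book").fetchone()
next_id_by_ordinal = row[0]          # ...or grab it by ordinal position

conn.row_factory = sqlite3.Row       # dict-style rows, like mysql_fetch_array()
row = conn.execute("SELECT MAX(id) + 1 AS id FROM book").fetchone()
next_id_by_name = row["id"]          # fetch by the alias instead
print(next_id_by_name)               # -> 4
```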
 



From: PJ [mailto:af.gour...@videotron.ca]
Sent: Wed 2/25/2009 2:01 PM
To: MySql; php-gene...@lists.php.net
Subject: non-auto increment question



I want to insert a new table entry 1 number higher than the highest in
the field (id). I cannot use auto-increment.
And I want to show the value of the field to be added in an input field
on the web page:
if (isset($_REQUEST["AddNewBooksRequest"])) {
$SQL = "SELECT MAX(id) FROM book";
$result = mysql_query($sql, $db);
$bookCount = mysql_num_rows($result);
for ($i=0; $i < $bookCount; $i++) {
$row = mysql_fetch_array($result);
$idIN= $row["id"]+1;
}
$idIN= $_POST["idIN"];
$titleIN= $_POST["titleIN"];

...snip...


";
?>


What am I doing wrong? (The query works and returns the right number, but
what do I have to do to add 1 to that number, display it in the input
field on the page, and post it to the table?)

--

Phil Jourdan --- p...@ptahhotep.com
   http://www.ptahhotep.com  
   http://www.chiccantine.com  







RE: Upgrade story / request for insight

2009-02-25 Thread Gary W. Smith
Jerry, 
 
To touch a little more on Claudio's statement: you are trying to compare 
monkeys and trucks when you talk about MySQL on these two different OSes.  
Microsoft is a different beast when it comes to the install.
 
What caught my attention, though, is that you are running MySQL 4.x on CentOS.  
This means that you are probably running an older version of CentOS, as 5.x 
ships with MySQL 5.0 (I believe).  You might want to set up a similar 
environment with the same OS and do a db upgrade on that (without your actual 
data) and see if everything works first.  You might find some lib issues with 
the older CentOS.
 
Gary



From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Wed 2/25/2009 12:50 PM
To: Jerry Schwartz
Cc: MySql
Subject: Re: Upgrade story / request for insight



Hi Jerry,
this probably does not help you very much, and excuse me in advance for
that, but there is little use in having a development/preproduction system
on a different architecture: none of the issues you faced on Windows
(probably around service installation) will show up on a CentOS box.
An upgrade in particular, which involves the filesystem and service
installation, is quite different between Windows and Linux.
From a 'service' point of view (MySQL server) there will be no difference
for any client in accessing a Windows or a Linux box, but from a
maintenance point of view you are facing problems peculiar to the
platform; on Windows, MySQL is installed as a service, so you should check
the Windows services as well.
In any case, the strategy I have always used for migration is to install
the new version and export/import the data.  This is good because you have
two parallel servers up and can compare and test both of them, provided
they use different 'sockets', that is, different PORTs if just using the
TCP/IP connection method.

Cheers

Claudio Nanni




2009/2/25 Jerry Schwartz 

> My ultimate goal is to upgrade a production server (MySQL 4.1.22 on CentOS)
> to a modern 5.1 release. My development system is a Windows Vista x86
> machine, and although the process is not that similar I decided to try an
> upgrade there. (I've never done one.) I figured this would give me some
> insight as to whether or not our code would break.
>
>
>
> The upgrade from 5.0.45 to 5.1.31 was a horror show! I downloaded the
> 5.1.31
> msi package, and ran the wizard. The Windows notes seemed to say that for
> this upgrade I didn't need to uninstall the old one, and that might have
> been a mistake. In any case, the wizard attempted to install 5.1.31, but
> after it asked me if I wanted to configure an instance it just disappeared.
> I ran the instance configuration wizard by hand, and it showed two
> different
> server versions. The older one was apparently still running. I tried
> shutting it down; I tried deleting it with the sc command, which (after a
> reboot) did make it go away; but the instance configuration wizard still
> listed it. In fact, it still listed it after I renamed the MySQL 5.0
> directory.
>
>
>
> The 5.1 server would attempt to start, but would fall over dead
> immediately.
>
>
>
> I uninstalled 5.0, and that made no difference. I uninstalled 5.1, and when
> I reinstalled it I got the same basic behavior.
>
>
>
> Eventually I went through the registry and wiped out every reference to
> MySQL that I could find. After a reboot and one last installation of 5.1,
> things started to work right. From there on I was able to run mysql_upgrade
> and get myself back on the air.
>
>
>
> Can anyone guess where I went astray?
>
>
>
>
>
> Regards,
>
>
>
> Jerry Schwartz
>
> The Infoshop by Global Information Incorporated
>
> 195 Farmington Ave.
>
> Farmington, CT 06032
>
>
>
> 860.674.8796 / FAX: 860.674.8341
>
>
>
> www.the-infoshop.com
>
> www.giiexpress.com
>
> www.etudes-marche.com
>
>
>
>




INNODB and Max Processors

2009-01-30 Thread Gary W. Smith
A few weeks back I was reading an article that said that InnoDB doesn't take 
advantage of servers using more than 4 processors.  I think I also received 
this as a reply some time ago on the same subject.
 
I was wondering if this is indeed true.  We are using 5.1.30 and wanted to 
pick up a new dual quad-core with 32GB.  Before we make the purchase we just 
want to make sure the database will be able to take advantage of it.  
Otherwise we will go for the dual-core at a higher clock speed.
 
This will support hundreds of connections per second and some complicated 
queries.  Overall the data will be less than 50GB, so we are looking at more 
RAM in the hope that it will support both application- and OS-level caching.
 
Any advice would be greatly appreciated.
 
Gary


RE: Compare DATETIME to DATE

2008-12-31 Thread Gary W. Smith
Truncate the time part of the datetime field when doing the compare:
 
AND DATE_FORMAT(customer.created_dt, '%Y-%m-%d') BETWEEN '2008-12-30' 
AND '2008-12-30'

Should work, though probably not the most efficient, since wrapping the 
column in a function keeps the server from using an index on it.  The other 
option would be to take the end date + 1 day, minus 1 second.  That's an even 
bigger hack, but it would probably be more efficient than converting all of 
the dates on the fly if you have a large number of records to process.
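A sketch of the index-friendly alternative, a half-open range that never touches the column itself (SQLite here as a stand-in for MySQL; table and column names follow the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (created_dt TEXT)")
conn.executemany("INSERT INTO customer VALUES (?)",
                 [("2008-12-29 23:59:59",),
                  ("2008-12-30 00:00:00",),
                  ("2008-12-30 18:30:00",),
                  ("2008-12-31 00:00:00",)])

# Half-open range: >= start of the day, < start of the next day.
# The column is left untouched, so an index on created_dt stays usable.
count = conn.execute(
    "SELECT COUNT(*) FROM customer "
    "WHERE created_dt >= '2008-12-30 00:00:00' "
    "  AND created_dt <  '2008-12-31 00:00:00'").fetchone()[0]
print(count)   # -> 2
```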
 
 


From: Johnny Withers [mailto:joh...@pixelated.net]
Sent: Wed 12/31/2008 10:13 AM
To: MySQL General List
Subject: Compare DATETIME to DATE



Hi,
I don't quite understand (or even know) what the "proper" way to compare a
DATETIME column to a given DATE value is. I've used various methods but I'd
like to know if there's a better way to compare these values.

Right now I have a query with this in the WHERE clause (customer.created_dt
is a DATETIME):

AND CAST(customer.created_dt AS DATE) BETWEEN '2008-12-30' AND '2008-12-30'

This was working (MySQL on Win32) before I moved the database to MySQL on
RHEL 64-bit (5.0.45-log).

Should that work?

I've also done this:

AND customer.created_dt BETWEEN '2008-12-30 00:00:00' AND '2008-12-30
23:59:59'

That works on both servers, but I really don't want to have to put the time
in there (unless that's the way you are supposed to do this).

I've thought about using DATE_FORMAT... not sure about that either.


-
Johnny Withers
601.209.4985
joh...@pixelated.net




RE: too many connections

2008-09-19 Thread Gary W. Smith
Gail, 
 
I know the list has already recommended allowing more connections, but the 
bigger question is what is sucking them all up.  Even with 1000 connections, 
something like Apache can only use as many connections as it has processes 
(times the number of connections used within each process).  
 
As a quick workaround, increase the connections, but for a long-term solution 
you really need to find out what the problem is, not how to work around it.
 
Gary
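To see where the connections are going before (or after) raising the limit, a few standard server commands help; run these from an administrative session:

```sql
-- Configured ceiling, and the high-water mark since startup
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';

-- Who is connected right now, from which host, and what each thread is doing
SHOW FULL PROCESSLIST;
```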
 

 


From: Kinney, Gail [mailto:[EMAIL PROTECTED]
Sent: Fri 9/19/2008 8:33 AM
To: 'mysql@lists.mysql.com'
Subject: too many connections



Hello,  We have MySQL 4.0.14 and have just gotten an error: too many 
connections.  We can't connect to our site using MySQL admin.  Please help.

Gail Kinney
Webmaster UC Denver
[EMAIL PROTECTED]





Failed auth logging

2008-08-22 Thread Gary W. Smith
Hello, 
 
I've been looking through the documentation/list and haven't found anything 
directly on this subject.  It's possible that I'm just not looking in the right 
place.
 
I would like to log all failed authentications to the server.  It would be nice 
to be able to log the attempted user name, host, date/time.  Is there anything 
like this already in MySQL?
 
Gary
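One hedged pointer: the closest built-in I'm aware of in 5.x servers is raising log_warnings above 1, which writes access-denied errors (attempted user, host, timestamp) to the error log rather than to a dedicated log. A sketch of the option-file change; the log path is illustrative:

```ini
[mysqld]
# log_warnings > 1 adds "Access denied for user ..." events,
# plus aborted connections, to the error log
log_warnings = 2
log-error    = /var/log/mysql/mysqld.err
```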


RE: Column level replication q?

2008-02-21 Thread Gary W. Smith
Michael, 
 
It seemed to replicate just fine, at least in test (5.1.22-RC).  I believe 
we are using statement-level replication, which might be firing the trigger 
on the slave node.  I noticed that the triggers themselves also replicated 
to the slaves, which is something I didn't expect.
 
Gary



From: Michael Dykman [mailto:[EMAIL PROTECTED]
Sent: Thu 2/21/2008 8:06 AM
To: mysql@lists.mysql.com
Cc: Gary W. Smith; Baron Schwartz
Subject: Re: Column level replication q?



On Wed, Feb 20, 2008 at 10:48 AM, Gary W. Smith <[EMAIL PROTECTED]> wrote:
> Well, I created the lookup table, created my two triggers (as nothing is ever 
> updated, just added or removed) and did a bulk one time load and it seems to 
> work.  Now I just need to replicate that to the other server.  I'll have to 
> find my easy button and press it a few times.
>
>  In fact, on the other end, we don't even need to have the same table names.  
> So we can replicate it just like this with no problem.
>
>  Thanks for the links,
>

I am curious: I thought that data written by triggers was not
replicated, the action that fires the trigger being replicated instead.  I
seem to recall that several approaches to logically replicating triggers
have been tried; where are we at these days?


--
 - michael dykman
 - [EMAIL PROTECTED]

 - All models are wrong. Some models are useful.




RE: Column level replication q?

2008-02-20 Thread Gary W. Smith
Well, I created the lookup table, created my two triggers (as nothing is ever 
updated, just added or removed) and did a bulk one time load and it seems to 
work.  Now I just need to replicate that to the other server.  I'll have to 
find my easy button and press it a few times.
 
In fact, on the other end, we don't even need to have the same table names.  So 
we can replicate it just like this with no problem.
 
Thanks for the links,
 
Gary



From: Nanni Claudio [mailto:[EMAIL PROTECTED]
Sent: Wed 2/20/2008 1:19 AM
To: Gary W. Smith; mysql@lists.mysql.com
Subject: R: Column level replication q?



Of course I am talking about a materialized view, did you try?
Here some useful links:

http://forums.mysql.com/read.php?100,21746,21746#msg-21746

and from the reply in the same forum topic 
by Waheed Noor  09/10/2007 01:39PM you get this link: 
http://www.shinguz.ch/MySQL/mysql_mv.html


Aloha!
Claudio




-Messaggio originale-
Da: Gary W. Smith [mailto:[EMAIL PROTECTED]
Inviato: mercoledì 20 febbraio 2008 1.18
A: mysql@lists.mysql.com
Oggetto: Column level replication q?

We need to replicate a table to a third party.  The information in the table is 
pretty much public, with the exception of one column.  Is it possible to 
replicate all of the tables with the exception of one column?

What I was thinking was to replicate it to a 2nd machine that will limit it 
to the tables we care about, and then expose that slave machine as a secondary 
master. 

We are looking to do something like this

MasterA -> SlaveA/MasterB -> SlaveC

MasterA tableA (our machine)
field1
field2
field3
field4

SlaveA/MasterB (our machine)
field1
field2
field4

SlaveC (their machine)
field1
field2
field4

We know we can limit the tables which they can use (which is why we have 
introduced SlaveA/MasterB), so they will only be able to pull the tables we 
make available to them.

Our current method is database dumps, but this has become impractical due to 
size.  Replication in testing works for what we want to do; we just have a 
problem with a single field.

Any ideas on how to make this work? 

Gary Wayne Smith


"Questo messaggio ed ogni suo allegato sono confidenziali e possono essere 
riservati o, comunque, protetti dall'essere diffusi. Se il ricevente non é il 
destinatario diretto del presente messaggio, é pregato di contattare 
l'originario mittente e di cancellare questo messaggio ed ogni suo allegato dal 
sistema di posta. Se il ricevente non é il destinatario diretto del presente 
messaggio, sono vietati l'uso, la riproduzione e la stampa di questo messaggio 
e di ogni suo allegato, nonché la diffusione del loro contenuto a qualsiasi 
altro soggetto"
*
"This message and any attachment are confidential and may be privileged or 
otherwise protected from disclosure. If you are not the intended recipient, 
please contact the sender and delete this message and any attachment from your 
system. If you are not the intended recipient you must not use, copy or print 
this message or attachment or disclose the contents to any other person."




RE: Column level replication q?

2008-02-20 Thread Gary W. Smith
Claudio, 
 
I didn't think about that approach either.  The triggers will be much simpler 
to implement on the primary servers, then pull it over with a table rewrite to 
the intermediate server, then allow that to replicate out just fine.  I'll play 
around with it a little.  The table has millions of rows, but the primary data 
in the table that I really care about is relationship keys, which should be 
small if we put just that data into a intermediate table.
 
Thanks, 
 
Gary Wayne Smith



From: Nanni Claudio [mailto:[EMAIL PROTECTED]
Sent: Wed 2/20/2008 1:19 AM
To: Gary W. Smith; mysql@lists.mysql.com
Subject: R: Column level replication q?



Of course I am talking about a materialized view, did you try?
Here some useful links:

http://forums.mysql.com/read.php?100,21746,21746#msg-21746

and from the reply in the same forum topic 
by Waheed Noor  09/10/2007 01:39PM you get this link: 
http://www.shinguz.ch/MySQL/mysql_mv.html


Aloha!
Claudio




-Messaggio originale-
Da: Gary W. Smith [mailto:[EMAIL PROTECTED]
Inviato: mercoledì 20 febbraio 2008 1.18
A: mysql@lists.mysql.com
Oggetto: Column level replication q?

We need to replicate a table to a third party.  The information in the table is 
pretty much public, with the exception of one column.  Is it possible to 
replicate all of the tables with the exception of one column?

What I was thinking was to replicate it to a 2nd machine that will limit it 
to the tables we care about, and then expose that slave machine as a secondary 
master. 

We are looking to do something like this

MasterA -> SlaveA/MasterB -> SlaveC

MasterA tableA (our machine)
field1
field2
field3
field4

SlaveA/MasterB (our machine)
field1
field2
field4

SlaveC (their machine)
field1
field2
field4

We know we can limit the tables which they can use (which is why we have 
introduced SlaveA/MasterB), so they will only be able to pull the tables we 
make available to them.

Our current method is database dumps, but this has become impractical due to 
size.  Replication in testing works for what we want to do; we just have a 
problem with a single field.

Any ideas on how to make this work? 

Gary Wayne Smith


"Questo messaggio ed ogni suo allegato sono confidenziali e possono essere 
riservati o, comunque, protetti dall'essere diffusi. Se il ricevente non é il 
destinatario diretto del presente messaggio, é pregato di contattare 
l'originario mittente e di cancellare questo messaggio ed ogni suo allegato dal 
sistema di posta. Se il ricevente non é il destinatario diretto del presente 
messaggio, sono vietati l'uso, la riproduzione e la stampa di questo messaggio 
e di ogni suo allegato, nonché la diffusione del loro contenuto a qualsiasi 
altro soggetto"
*
"This message and any attachment are confidential and may be privileged or 
otherwise protected from disclosure. If you are not the intended recipient, 
please contact the sender and delete this message and any attachment from your 
system. If you are not the intended recipient you must not use, copy or print 
this message or attachment or disclose the contents to any other person."




RE: Column level replication q?

2008-02-19 Thread Gary W. Smith
And that is a hack at best, but it does give me some ideas.  I really only 
need two fields out of that table anyhow, so I might actually migrate the 
columns that I care about into a new table and update the corresponding SQL 
statements that I use to update them (i.e. split the source data).



From: [EMAIL PROTECTED] on behalf of Baron Schwartz
Sent: Tue 2/19/2008 5:15 PM
To: Gary W. Smith
Cc: mysql@lists.mysql.com
Subject: Re: Column level replication q?



Hi,


This isn't "natively" supported.  You can hack it with replication to
a table that has a trigger, which will then insert all but one column
into another table, which you can replicate on to the final
destination.  But I'm scared of such hacks for anything that matters
:-)




Column level replication q?

2008-02-19 Thread Gary W. Smith
We need to replicate a table to a third party.  The information in the table is 
pretty much public, with the exception of one column.  Is it possible to 
replicate all of the tables with the exception of one column?
 
What I was thinking was to replicate it to a 2nd machine that will limit it 
to the tables we care about, and then expose that slave machine as a secondary 
master.  
 
We are looking to do something like this
 
MasterA -> SlaveA/MasterB -> SlaveC
 
MasterA tableA (our machine)
field1
field2
field3
field4
 
SlaveA/MasterB (our machine)
field1
field2
field4
 
SlaveC (their machine)
field1
field2
field4
 
We know we can limit the tables which they can use (which is why we have 
introduced SlaveA/MasterB), so they will only be able to pull the tables we 
make available to them.
 
Our current method is database dumps, but this has become impractical due to 
size.  Replication in testing works for what we want to do; we just have a 
problem with a single field.
 
Any ideas on how to make this work?  
 
Gary Wayne Smith


RE: Replication and changing engine type

2007-12-06 Thread Gary W. Smith
> You can set the default storage engine on each of the servers and then
> don't declare it explicitly in any CREATE TABLE statements.

This seems like the most viable option.  Since almost all of the remote
tables are created with InnoDB, it should work fine.  I do have one table
that isn't, but we will convert that before we slave the data to this
box.

The big problem is that the existing dataset is more than 10GB (across
several tables/databases).  So the other suggestion, to just recreate the
tables without specifying the engine type, isn't practical at this time
(but if done at initial design time, we would have been fine).
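A sketch of that option, assuming 5.x option files on both sides: give each server its own default and keep ENGINE out of the CREATE TABLE statements, so each box falls back to its local default when the statement replicates:

```ini
# master's my.cnf
[mysqld]
default-storage-engine = InnoDB

# reporting slave's my.cnf
[mysqld]
default-storage-engine = MyISAM
```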




Replication and changing engine type

2007-12-06 Thread Gary W. Smith
We have a master/master environment that has tables in InnoDB.  We want to 
set up a slave that will be used for reporting and some other stuff as well.  
We want to use MyISAM as the engine on the slaved server.
 
Since all of the table creations are also part of the replication, is it 
possible to override the table creates and force them to use a different 
engine?  I have read a few articles on implementing the blackhole engine (for 
intermediate replication), which would be useful for us when we set up 
replication to multiple sites, but this leads to the same question of how to 
change the engine (which isn't explained in the sample articles I've read).
 
Any help would be greatly appreciated.
 
Gary Wayne Smith
 
 
 


RE: Lengtht of TEXT data types

2007-09-02 Thread Gary W. Smith
> Hello list
> 
> I have doubt on TEXT data types... Checking my notes I see these ones:
> 
> TINYTEXT/TINYBLOB (2^8) 255 chars
> TEXT/BLOB (2^16) 64K chars
> MEDIUMTEXT/MEDIUMBLOB (2^24) 16M chars
> LONGTEXT/LONGBLOB (2^32) chars
> 
> Well, my doubt consist on this... are these FIXED lengths for the text
> fields
> or they have variable length that may grow up to that as maximum?
> 
> I require to store messages of variable length from 1 to 1024
> characters, so
> TINYTEXT is too short and TEXT is too large, so I want to know if it
> they
> have variable length analog to the VARCHAR type and the client program
> only
> ensures the messages to have 1024 length...

They are variable length.  I believe they are allocated in blocks (disk
chunks), but I could be wrong on that part.  

But in your case, you are safe using TEXT/BLOB.  If you only store 1
byte, it only takes 1 byte (plus a couple of bytes of length overhead for
the text field).  This is all covered somewhere on the MySQL site.  Do a
Google search on "mysql data type size" and you should find some additional
information.




RE: Synchronizing two MySQL servers over slow network

2007-08-25 Thread Gary W. Smith
> for generating Statutory reports. Also cluster can not be a solution
as
> it
> requires min. 100 MB network.

Says who?

But clustering won't help.  You are looking for active/active, which
could be accomplished, but this could lead to conflicts if people
try to edit the same record.  

You've also failed to mention the speed you are talking about.  My home
office is a replication point for our large database at our CO.  I do
this over cable.

You could try active/active, assuming you work out some type of
conflict-resolution plan.  If each site will generally be editing its own
data, combined for reporting only, then active/active should be fine, even
if you are talking about 20KB/s.
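If active/active is tried, one standard piece of such a conflict-resolution plan (available since MySQL 5.0) is interleaving the auto-increment sequences so the two sites can never hand out the same key; the values shown assume a two-site setup:

```ini
# Head-office my.cnf
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 1   # generates ids 1, 3, 5, ...

# Branch-office my.cnf
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2   # generates ids 2, 4, 6, ...
```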



> Can we generate scripts on windows to sync them manually?
> Thanks
> CPK
> 
> On 8/25/07, Craig Huffstetler <[EMAIL PROTECTED]> wrote:
> >
> > I would probably recommend replication. It's not that bad to setup
> and
> > once it catches up on the slave then it will continue to be an easy
> sync in
> > the future as long as both are running. How big is the database you
> wish to
> > synchronize? What connection are both servers on?
> >
> > Is there anyway possible to disable to VPN between the two D.B.
> servers?
> > Perhaps a firewall in between is in use and you can simply setup
> access
> > rules for both to talk to each other? The VPN tunnel is most likely
a
> big
> > slow down.
> >
> > On 8/25/07, C K <[EMAIL PROTECTED]> wrote:
> > >
> > > Hi,
> > > I have a  problem as below-
> > > We have a MySQL server for our ERP database. Now we have to
> implement
> > > the
> > > ERP for Head office. HO is away from the current setup and
> connection
> > > between the two is through VPN at slow speed. How can we
> synchronize the
> > > two
> > > MySQL servers?
> > > Replication,
> > > Cluster, or manually?
> > > Thanks for your replies.
> > > CPK
> > >
> > > --
> > > Keep your Environment clean and green.
> > >
> >
> >
> 
> 
> --
> Keep your Environment clean and green.




RE: Stored process accessing tables in different database

2007-03-02 Thread Gary W. Smith
> Your SELECT statement:
>   SELECT
> (s.server_disk_space * s.server_load * s.server_ram),
> s.server_fqdn,
> s.server_url,
> s.server_id
>   INTO
>  L_server_load,
>  L_server_fqdn,
>  L_server_url,
>  L_server_id
>   FROM clientdev.servers s
>   WHERE s.active = 1
> AND s.available = 1
>   ORDER BY
> (s.server_disk_space * s.server_load * s.server_ram)
>   LIMIT 1;
> 
> Don't forget the semicolon after LIMIT 1
> 
> Give it a try !!!

I'll give it a shot.  To work around the problem I used a cursor.  I don't know 
which is more efficient; I suspect the inline SELECT statement.  I thought I 
had tried doing a SELECT fields INTO fields without success, but I don't 
remember what the final product looked like.

Thanks, 

Gary


RE: Best update strategy with MyISAM

2007-03-02 Thread Gary W. Smith
Mike, 

Below is a message written a few months back, tailing a discussion on
why MyISAM is and is not faster than InnoDB.  Basically, my understanding
is that if you have multiple simultaneous updates/inserts, stay away
from MyISAM.  

We switched from MyISAM to InnoDB because of the concurrency issue.
Sometimes we might have upwards of 500 inserts/s (though the current
configuration is causing a backlog and delays).  When these updates were
done via MyISAM it was really slow.  With InnoDB it was much faster (by
a significant factor, I believe).

Also, I have yet to find any documentation saying that if InnoDB doesn't
fit into RAM you will see a performance penalty (then again, I haven't
been looking around that much for it).  Can anyone else chime in on
this? 


** Quoting:

From: Axel Schwenke Date:   Sunday, October 22, 2006
 
There are *no* transactions with MyISAM. If you need transactions, 
go InnoDB. 

MyISAM always locks whole tables. That is: reads and writes are 
mutually exclusive. Concurrent reads are fine, concurrent writes 
are sometimes possible. If you expect concurrent reads and writes 
on your tables -> go InnoDB. OTOH: MyISAM table locks are really, 
really fast. As long as you do not need hundreds of writes per 
second (along with reads on the same table) you can still use 
MyISAM. Reads and writes will be serialized internally, but you 
won't notice. 

Again: performance isn't the only - not even the most important - 
criterion for choosing a storage engine.
-- 
Axel Schwenke, Senior Software Developer, MySQL AB 


> -Original Message-
> From: mos [mailto:[EMAIL PROTECTED]
> Sent: Friday, March 02, 2007 7:39 AM
> To: mysql@lists.mysql.com
> Subject: Best update strategy with MyISAM
> 
> I will have an application that will have 1000 20 row selects executed
> every second. The application will also be updating individual rows
> from
> this table at a rate of 100 rows/second. I prefer to use MyISAM
instead
> of
> InnoDb because eventually the table will exceed the amount of memory
on
> the
> machine and InnoDb is too slow if the table doesn't fit into memory.
> 
> With MyISAM:
> 1) Will the updates block the Select statements from executing? If so,
> is
> there a way around it? I don't want to stop the Select's from
executing
> by
> having them wait for an update lock to complete.
> 2) Is it better to update a secondary table and then use a Select with
> a
> join to display the results? Or will it matter?
> 
> TIA
> 
> Mike
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]





RE: Replication performance questions

2007-03-02 Thread Gary W. Smith
> Inserts are of the form (updates are analogous):
> 
> insert into dns_records (zone, host, data, ... )
>   values ('domain.com', 'www', '1.2.3.4', ... );
> 
> Queries are of the form:
> 
> select ttl, type, mx_priority, case
>   when lower(type)='txt' then
>   concat('\"', data, '\"')
>   when lower(type) = 'soa' then
>   concat_ws(' ', data, resp_person, serial, refresh,
retry,
> expire, minimum)
> else data end from dns_records where
>   zone = 'domain.com' and host = 'www';
> 
> We've fixed a few data formatting issues that made the select queries
> slow under certain circumstances, but we're still running into
> occasional performance problems running the inserts/updates.
> 
> There are no joins, subqueries, transactions, or any of the usual muck
> that complicates a performance issue.

Pdns?

Anyway, did you enable the slow query logging?  That still might give
you an idea if something is running slow.  But I also forgot to ask
earlier: what is running slow, the inserts, or the selects during the
inserts?




RE: Replication performance questions

2007-03-01 Thread Gary W. Smith
> The table is MyISAM.  I searched on google a bit for info on slow
> updates with MyISAM and didn't really hit it on the nose.  Can I ask
> you to elaborate?
> 

In /etc/my.cnf try adding:

long_query_time = 1
log-slow-queries = /var/lib/mysql/mysql-slow.log

Restart and then watch the file.  If anything pops up, do an EXPLAIN
on the SQL statement and see what might be slowing it down.  This helped
me when I was debugging a join between two tables with a WHERE clause
between the two on a non-indexed field (with 100k rows in one, 30k rows
in the other).  It wasn't too slow, but I was able to turn the 5+ second
response into around 0.2 seconds on one of our test servers.  The dev
environment where I'm testing my 50M-row database is a P4 HT
workstation running under a Xen DomU (with 1GB of RAM in the DomU).  At
first I thought all my sluggish responses were because of hardware, but
lo and behold, it was just in need of some good indexes.  With that
said, don't go index-crazy.

If that fails and nothing else seems to help, add lots of RAM and CPUs
:).  

Hope that helps.
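The log-then-EXPLAIN loop above can be sketched end to end; SQLite's EXPLAIN QUERY PLAN stands in for MySQL's EXPLAIN here, and the table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, str(i)) for i in range(1000)])

# Before indexing: the planner has to scan the whole table
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 500").fetchall()[0][3]

conn.execute("CREATE INDEX idx_a ON t(a)")

# After indexing: the same query becomes an index search
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 500").fetchall()[0][3]
print(before)   # e.g. 'SCAN t'
print(after)    # e.g. 'SEARCH t USING INDEX idx_a (a=?)'
```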






Stored process accessing tables in different database

2007-03-01 Thread Gary W. Smith
I'm having trouble with a stored proc.  The problem is that the first
table is in a different database.  Here is the test stored proc below.

Where I run this I get "ERROR 1327 (42000): Undeclared variable: s".  I
have tried it by declaring "clientdev.servers s" and just by using
"clientdev.servers", specifying a qualifier on each statement of
"servers." and also "clientdev.servers."  This is being created in the
dwhdev database (clientdev is replicated from a different server).

DELIMITER *@@@*

DROP PROCEDURE IF EXISTS sp_selectdwhserver *@@@*
CREATE [EMAIL PROTECTED] PROCEDURE sp_selectdwhserver(
  OUT P_server_id INT,
  OUT P_server_load INT, 
  OUT P_server_fqdn VARCHAR(200),
  OUT P_server_url VARCHAR(200)
)
BEGIN
  DECLARE L_server_load INT DEFAULT 0;
  DECLARE L_server_id INT DEFAULT 0;
  DECLARE L_server_fqdn VARCHAR(200);
  DECLARE L_server_url VARCHAR(200);

  SELECT 
(s.server_disk_space * s.server_load * s.server_ram) INTO
l_server_load, 
s.server_fqdn INTO L_server_fqdn,
s.server_url INTO L_server_url,
s.server_id INTO L_server_id
  FROM clientdev.servers s
  WHERE s.active = 1
AND s.available = 1
  ORDER BY
(s.server_disk_space * s.server_load * s.server_ram) 
  LIMIT 1

  SET P_server_id := L_server_id;
  SET P_server_load := L_server_load;
  SET P_server_fqdn := L_server_fqdn;
  SET P_server_url := L_server_url;
  
END
*@@@*

DELIMITER ;

Just as a side note, there is more going on before the end of the
procedure (thus the locally declared vars) but I haven't managed to get
this to work just yet.  Also, in production, the DEFINER will be an
account that has SELECT permissions on the proper table in the proper
database (as well as EXECUTE in the dwhdev database).


Any ideas?
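The likely culprit, for what it's worth: MySQL allows only one INTO clause
per SELECT, placed after the whole select list, and the statement above is
also missing its terminating semicolon after LIMIT 1.  A corrected sketch
of the query:

```sql
  SELECT
    (s.server_disk_space * s.server_load * s.server_ram),
    s.server_fqdn,
    s.server_url,
    s.server_id
  INTO L_server_load, L_server_fqdn, L_server_url, L_server_id
  FROM clientdev.servers s
  WHERE s.active = 1
    AND s.available = 1
  ORDER BY (s.server_disk_space * s.server_load * s.server_ram)
  LIMIT 1;
```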




RE: Replication performance questions

2007-03-01 Thread Gary W. Smith
> Also, what type of database are you using?  INNODB?  MyISAM?  If you
> are
> running MyISAM then things can get slow on updates.

Sorry, I missed where you said you were using MyISAM.  




RE: Replication performance questions

2007-03-01 Thread Gary W. Smith
I could be wrong BUT...

> 1) Does increasing the number of replication slaves increase query
> latency on the master?  We're considering tiering the replication if
> it might help - replicate the master to two slaves, each of which
> replicates to ten clients.

The slaves should only be pulling from the log file, not querying the
master data directly.  But yes, I guess it could cause an additional load
on the server if there are many, many slaves.  Still, with < 10,000 updates
a day (that is about 7 per minute), this shouldn't be much of a load at all.
 
> 2) Is there a chance that the insert latency is coming from the fact
> that the table is growing so long?  At a certain point, even with
> indexes, I imagine that the engine is going to have to do some linear
> searching.

Well, back to answer 1.  Replication is about logs, not querying the
data.  

You mentioned updates, but what about querying the data?  Do you run a
lot of queries against the data on the master server?  We have a
database with 50M rows in it and we have a complicated replication
strategy for the readers just so we can take 99% of the load off the
master.  We have a slaved database just to run reports from (actually
we have a load balanced cluster of them).  The master receives inserts
at about 20 records/s.

Also, what type of database are you using?  INNODB?  MyISAM?  If you are
running MyISAM then things can get slow on updates.





RE: Breaking Up Tables

2007-03-01 Thread Gary W. Smith
> I have an unusual problem. My current production server's OS is
corrupt
> and I'm building a new one. However, due to the corruption, I can't
> move files out of it. I have a "back door" through Zope, but I can
only
> move files of a certain size (I don't know how large ;). I need to
back
> up a certain database. All the "weight" of the database is in one
> table. Doing a mysqldump of that table results in a file too big to
> transfer. Is there a way to break up that table, do various dumps, and
> then reassemble it?

Sure, there are a couple of utilities for splitting and reassembling large
files.

Try doing a google for "man split"; the pieces can be reassembled on the
other side with plain "cat" (e.g. cat xa* > dump.sql).






Another stored proc question

2007-02-28 Thread Gary W. Smith
This is a follow-up to the earlier stored proc question.

I have a stored proc, with user level permissions for execute.  If I
drop the stored proc and then create it again, the user level execute
permissions go away.  What is the proper way to edit/alter a store proc
without losing these permissions?


Given this is the create syntax used:
DELIMITER $$
CREATE [EMAIL PROTECTED] PROCEDURE sp_testlogin(
  P_user_name VARCHAR(32),
  P_password VARCHAR(32)
)
SQL SECURITY DEFINER 
BEGIN
  SELECT * FROM users WHERE user_name = P_user_name ;
END$$
DELIMITER ;

I would think that I could use ALTER in this way:
DELIMITER $$
ALTER PROCEDURE sp_testlogin(
  P_user_name VARCHAR(32),
  P_password VARCHAR(32)
)
SQL SECURITY DEFINER 
BEGIN
  SELECT user_id, last_login FROM users WHERE user_name = P_user_name ;
END$$
DELIMITER ;

But I receive:
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right
syntax to use near '(

Can someone provide me an example of SP modification using alter (or
whatever method works).
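For context: MySQL's ALTER PROCEDURE can only change characteristics such
as COMMENT or SQL SECURITY, not the parameter list or body, so redefining
a procedure means DROP plus CREATE followed by re-granting.  Roughly (the
user name below is hypothetical):

```sql
DROP PROCEDURE IF EXISTS sp_testlogin;

-- ... re-issue the full CREATE PROCEDURE statement here ...

-- EXECUTE grants are tied to the procedure object, so restore them
GRANT EXECUTE ON PROCEDURE testdb.sp_testlogin TO 'someuser'@'localhost';
```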

Thanks, 

Gary




RE: Stored proc permissions question

2007-02-28 Thread Gary W. Smith
> Better add "SQL SECURITY DEFINER" to it.

I noticed that it works with and without this.  I have added it to the
procedure.

Another quick question though.  Since I have added the end user that
will execute the procedure it works fine, until I drop the procedure and
recreate it; then I have to re-add the execute permission for that user.
Is this normal behavior?

I know that I should probably be using alter instead but I was just
wondering.




RE: Stored proc permissions question

2007-02-28 Thread Gary W. Smith
> The next question is what permissions do I need to give
> [EMAIL PROTECTED] to just access that one procedure?

The sound of that thump thump thump is my head hitting the wall.

There is like a single small line in the GRANT section of the how-to
that mentions "TABLE|PROCEDURE|FUNCTION" but none of the samples use
this.  

GRANT EXECUTE ON PROCEDURE testdb.sp_testlogin TO [EMAIL PROTECTED];
 ^

The mysql team might want to add a sample like this to the documentation
so people can see how to fine tune the permissions for SP, etc.

Everything seems to be working fine now.

Here is what I did:
Created user svspexec with select, insert, update, delete on testdb
Created procedure with definer [EMAIL PROTECTED]
Added execute procedure for testdb.sp_testlogin to user
[EMAIL PROTECTED]

I can now execute the store proc and get the expected result back
without having to have select access to the table.

I did find many people saying that I need select access to mysql.proc
for the svreader account (the account to execute the sp) but it seems to
work fine without it.  Is this a bug that was fixed, or do I have an open
permission issue on my box somehow?




RE: Stored proc permissions question

2007-02-28 Thread Gary W. Smith
> Do you know about the "SQL SECURITY { DEFINER | INVOKER }" options to
> CREATE PROCEDURE? With them it should be possible to
> * deny selects to users on the tables
> * allow selects to users to this procedure
> * having the procedure itself working with the rights of root/definer

I did not, but now I do.  

So in this case, I have two users.  User 1, svreader, has limited select
access to the database (specifically, only tables that prevent disclosure
of sensitive information).  User 2, svreaderauth, has full select access
to the database.

If I'm understanding the docs correctly (which I'm probably not), I should
create the procedure with [EMAIL PROTECTED]:

CREATE [EMAIL PROTECTED] PROCEDURE sp_testlogin(
  P_user_name VARCHAR(32),
  P_password VARCHAR(32)
)
BEGIN
  SELECT user_id, user_fullname, date_last_login FROM users WHERE
  user_name = P_user_name AND user_password = P_password;
END

Is this correct?

The next question is what permissions do I need to give
[EMAIL PROTECTED] to just access that one procedure?








Stored proc permissions question

2007-02-28 Thread Gary W. Smith
I have a set of tables that contain sensitive user information.  I need
to use this data for validation BUT I don't want the end user to have
access to read this data.  In MSSQL I used to be able to create a stored
proc to do the work (even though the user didn't have access to the
table).  I was wondering if there is a way to do this in MySQL.

In a nutshell, this table contains user account information, including
the encoded password.  I want to be able to pass a username and password
to the stored proc and have it return the valid user ID (or no result set
if none is found).

Basically, here is the stored proc.  It works, but only for root.  Is it
possible to allow a user to execute this without having SELECT access to
the users table?  If so, how?


CREATE [EMAIL PROTECTED] PROCEDURE `sp_testlogin`(
  P_user_name VARCHAR(32),
  P_password VARCHAR(32)
)
BEGIN
  SELECT user_id, user_fullname, date_last_login FROM users WHERE
user_name = P_user_name AND user_password = P_password;
END




RE: struggling with select query

2007-02-27 Thread Gary W. Smith
> select distinct
> TraderPersonalInfo.TraderID,PM2.PlatformID,PM4.PlatformID
> from  TraderPersonalInfo,Locations,Platforms PF2,Platforms PF4,
> PlatformMap PM2,PlatformMap PM4
> where (TraderPersonalInfo.TraderID = PM2.TraderID)
> and   (PM2.PlatformID = PF2.PlatformID)
> and   PM2.PlatformID = 2
> and   (TraderPersonalInfo.TraderID = PM4.TraderID)
> and   (PM4.PlatformID = PF4.PlatformID)
> and   PM4.PlatformID = 4
> and   PM2.TraderID=Pm4.TraderID;
> 
> Give it a try !!!
> 
> select distinct TraderPersonalInfo.TraderID,PlatformMap.PlatformID from
> TraderPersonalInfo,Locations,PlatformMap,Platforms where
> (TraderPersonalInfo.TraderID = PlatformMap.TraderID) and
> (PlatformMap.PlatformID = Platforms.PlatformID) and
> PlatformMap.PlatformID =
> 2 or PlatformMap.PlatformID = 4;
> 

Instead of wrapping it twice, why not just use () around the OR statement?  I do 
have a question on this, though: isn't using the JOIN syntax faster than joining 
tables through the WHERE clause?  That would allow him to use a simple OR 
condition at the end.  You also have Locations in there with no reference 
whatsoever.  That causes redundant work for the SQL engine, since it produces a 
large result set (the result set times the number of records in Locations) 
before it is pruned by DISTINCT.

select distinct TraderPersonalInfo.TraderID,PlatformMap.PlatformID from
TraderPersonalInfo,Locations,PlatformMap,Platforms where
(TraderPersonalInfo.TraderID = PlatformMap.TraderID) and
(PlatformMap.PlatformID = Platforms.PlatformID) and

(
PlatformMap.PlatformID = 2 or PlatformMap.PlatformID = 4
)
;

And why not:

SELECT DISTINCT 
TraderPersonalInfo.TraderID,
PlatformMap.PlatformID 
FROM
TraderPersonalInfo INNER JOIN PlatformMap
ON TraderPersonalInfo.TraderID = PlatformMap.TraderID
INNER JOIN Platforms 
ON PlatformMap.PlatformID = Platforms.PlatformID
WHERE
PlatformMap.PlatformID = 2 OR PlatformMap.PlatformID = 4;






RE: Disable DNS reverse lookup for auth

2007-02-22 Thread Gary W. Smith
> We have 4 development servers that are all configured pretty much the
> same way.  We have an admin account on each server using "%" for the
> hostname.  This works on 3 of the 4 servers.  The 4th server seems to
> do a lookup and since the client machine (i.e. my workstation) isn't
> specifically specified, access is denied.
> 
> I know I can disable the resolution and use IPs instead, but is there a
> way to enforce "%"?  It seems that even though I have "%" in place it
> still rejects because of the reverse DNS.

I found the source of the problem.  Apparently it was a service stuck
between the chair and the keyboard...

While playing around with development replication I removed some of the
log files and one of the test databases manually, and I guess I also
removed the mysql directory.  When the service was started it recreated
them, as expected, but since they were new the login account was gone.  




Disable DNS reverse lookup for auth

2007-02-22 Thread Gary W. Smith
We have 4 development servers that are all configured pretty much the
same way.  We have an admin account on each server using "%" for the
hostname.  This works on 3 of the 4 servers.  The 4th server seems to do
a lookup and since the client machine (i.e. my workstation) isn't
specifically specified, access is denied.

I know I can disable the resolution and use IPs instead, but is there a
way to enforce "%"?  It seems that even though I have "%" in place it
still rejects because of the reverse DNS.

Gary




RE: SQL Translation

2007-02-15 Thread Gary W. Smith
> > Select *
> > from  group_mstr gm,group_payers gp
> > where  gm.practice_id = '1'
> > and gp.location_id = '2'
> > and gp.practice_id =* gm.practice_id
> > and gp.group_id =* gm.group_id
> > order by gp.payer_id
> 
> I bet =* is shorthand for an outer join (not sure if it's left or
> right).  You should be able to do the same in mysql with

Isn't that the Oracle syntax for join?  I didn't think that was
supported in SQL 2000.




RE: InnoDB fixed file size, how much is left?

2007-02-15 Thread Gary W. Smith
> For what you described... you will not get a fixed size...
> 
> If you have set file_per_table flag in my.cnf you might want to know
> that the .ibd files in the database directory are by default
> auto-extending... so those files WILL grow... along with your data...
> 
> The shared tablespaces that you talked about (10 * 100MB) are still
> used by InnoDB for transactions and foreign keys reference (at least
> these two things)...
> Although I don't get why would you use 10 files of 100MB... why not 20
> of 50MB... unless they are on different disks and partitions... I
> don't understand...
> 
> I would personally go with at most 2 files around 500MB... keeping in
> mind that you have file_per_table on !!!
> 
> How big transactions are you expecting... how many clients are you
> expecting ?
> Answers to these questions can help you tweak the server...

For the shared tables I'm just playing around with the idea.  In
prod I would like to have a couple GB of transactional space.  The
transactions will be small but there will probably be 200+/s.  I expect
that the transaction logs will be on a separate disk instance (probably
a raid 5 disk) in production.

I realized after the fact that I had used file_per_table.  When I
figured that out I was able to see the sizes, of course.

In dev we're running about 1000/s (or at least that's the max insert
rate on the xen instances we have -- not a really fast machine).

 




RE: InnoDB fixed file size, how much is left?

2007-02-14 Thread Gary W. Smith
> -Original Message-
> From: Gary W. Smith [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, February 14, 2007 8:01 PM
> To: mysql@lists.mysql.com
> Subject: InnoDB fixed file size, how much is left?
> 
> I'm working on migrating a bunch of MyISAM tables over to InnoDB.  For
> development we want to use a fixed amount of space.  So I have specified
> 10 100MB files in my.cnf.  I started replicating data over but what I
> can't tell is how much space I have left.  Running show innodb status\G
> shows a lot of stuff but I can't find anything that clearly says "X"
> bytes stored or how much space might be left.
> 
> Is there anything in particular I'm missing?  Actually, better question:
> where should I be looking?
> 
> Gary

Maybe I should think a little less hard on the problem.  I put
innodb_file_per_table in the config, so the 10 100MB files are currently
unused.
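For reference, the two configurations being mixed up here look roughly
like this in my.cnf (file names and sizes are illustrative):

```ini
# Fixed shared tablespace: two of the ten 100MB files shown, no autoextend
innodb_data_file_path = ibdata1:100M;ibdata2:100M

# Per-table tablespaces: each table gets its own auto-extending .ibd file,
# and the shared files above then hold little more than undo/metadata
innodb_file_per_table
```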




InnoDB fixed file size, how much is left?

2007-02-14 Thread Gary W. Smith
I'm working on migrating a bunch of MyISAM tables over to InnoDB.  For
development we want to use a fixed amount of space.  So I have specified
10 100MB files in my.cnf.  I started replicating data over but what I
can't tell is how much space I have left.  Running show innodb status\G
shows a lot of stuff but I can't find anything that clearly says "X"
bytes stored or how much space might be left.

Is there anything in particular I'm missing?  Actually, better question:
where should I be looking?

Gary 




RE: stored proc question.

2007-02-14 Thread Gary W. Smith
> I have a string ("word word bob jack") that I want to pass to the
> stored proc as a single entity, split it in the stored proc, and do a
> specific action for each word.  Any advice on how to do this?  There is
> more going on than just that single word, so multiple calls aren't
> practical, and the string will have N+1 elements, so setting up a number
> of parameters is impractical (since the word count may be as many as
> 1000).
> 

I was able to create a loop based on the INSTR() function and WHILE.  
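A minimal sketch of that INSTR/WHILE pattern (the procedure name and the
per-word action are invented for illustration):

```sql
DELIMITER $$

CREATE PROCEDURE sp_process_words(P_words TEXT)
BEGIN
  DECLARE L_word VARCHAR(64);

  WHILE LENGTH(P_words) > 0 DO
    -- Everything up to the first space is the current word
    SET L_word = SUBSTRING_INDEX(P_words, ' ', 1);

    -- ... per-word action goes here, e.g. an INSERT using L_word ...

    -- Chop the processed word (and its delimiter) off the front
    IF INSTR(P_words, ' ') > 0 THEN
      SET P_words = SUBSTRING(P_words, INSTR(P_words, ' ') + 1);
    ELSE
      SET P_words = '';
    END IF;
  END WHILE;
END$$

DELIMITER ;
```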






stored proc question.

2007-02-14 Thread Gary W. Smith
I have a string ("word word bob jack") that I want to pass to the
stored proc as a single entity, split it in the stored proc, and do a
specific action for each word.  Any advice on how to do this?  There is
more going on than just that single word, so multiple calls aren't
practical, and the string will have N+1 elements, so setting up a number
of parameters is impractical (since the word count may be as many as
1000).

Like a foreach statement?








RE: Master -> Slave/Master -> Slave problem.

2007-02-14 Thread Gary W. Smith
> Hi,
> 
> on server B you need to set log-slave-updates to pass statements from
A
> over B to D.
> 
> BTW: I can recommend you to use replicate-wild-do-table=db_name.*
> instead of replicate-do-db, otherwise statements with db prefix before
> tables won't replicate over B to D.  RTM please
> 
> 
> Filip
> 

Filip, 

Someone sent me the log-slave-updates offlist and I managed to get it
working.  I will definitely explore the replicate-wild option.  This is
purely a test environment at this time.  
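Concretely, the fix is one extra line in the slave/master host's my.cnf
(sketched against Server B's config shown in the original post below):

```ini
# Server B: write statements received from the master into B's own binlog
# so that B's own slaves can pick them up
log-bin=/datastore/mysql-log/repl
log-slave-updates
```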

Thanks, 

Gary 




RE: Master -> Slave/Master -> Slave problem.

2007-02-14 Thread Gary W. Smith

> You should have:
> log-slave-updates
> 
> on the slave/master host(s)
> 
> 
> Best regards,
> Irek Slonina

Irek, 

Thanks for the follow up.  Someone else has mentioned it but I just noticed 
that it was offlist.


Master -> Slave/Master -> Slave problem.

2007-02-13 Thread Gary W. Smith
I'm working with two distinct databases on four different boxes.
Database A on Server A needs to be present everywhere (Servers B, C and
D).  Database B needs to be present on Server C.

So I set up replication from Server A to Server B and Server D, and then I
set up replication from Server B to Server C (to include the tables that
were replicated from Server A).  

So when a change is made in Database A I see it roll over to B and C.
But nothing replicated to B ever gets re-replicated to D.  It would be
optimal if I could slave from two distinct masters, but from my reading
this doesn't seem possible.  

Server A:

log-bin=/datastore/mysql-log/repl
binlog-do-db=clients
server-id=1

Server B:

log-bin=/datastore/mysql-log/repl
binlog-do-db=clients
binlog-do-db=datastore

server-id=21
master-host=testdba.local
master-user=testdbareader
master-password=passworda
master-connect-retry=60
replicate-do-db=clients

Server C:

server-id=22
master-host=testdba.local
master-user=testdbareader
master-password=passworda
master-connect-retry=60
replicate-do-db=clients

Server D:

server-id=24
master-host=testdbb.local
master-user=testdbbreader
master-password=passwordb
master-connect-retry=60
replicate-do-db=clients
replicate-do-db=data

What can I do to fix this?

Gary 




Capacity/Load question

2007-02-10 Thread Gary W. Smith
Hello, 

Just looking for a little feedback before deciding on a data engine path
and configuration.

We're working on an application that is expected to have a high growth
rate over the coming years.  We're at the stage of designing the backend
databases to maximize performance while keeping costs at a minimum (I
know, everyone's heard this one before) -- the database already exists,
but in a small controlled environment.

Here is the overall gist of the database.  We have a couple of tables that
will be updated infrequently but read from 100-200 times per second.  We
also have a couple of tables that will be updated up to a couple hundred
times per second.  The tables that are read may only have 50K rows in
them; the tables that are written to will continue to grow (expecting 50M
rows per year).

Here is my basic idea on the overall design on the system (based upon
how the app will work).

The app has 3 distinct parts to it.  The first part (A) will primarily
do lookups (50K rows - Client table).  The second part (B) will take
transactional information and insert it into a database (50M rows -
Transaction table).  The third part (C) of the app will be used to query
the transactional data.

I was thinking of creating a master database that will hold the basic
information needed across the three processes.  Since parts A, B and C
will all need to access the Client tables, I thought that maybe I should
create a master database for Client Table.  We will call this ServerA.
From there I figure we can create replicated slave ServerA-Nodes that
all of the processes can read from in a load balanced scenario (using
IPVSADM).  This is the easy part.

Part B is write-heavy and part C is read-heavy, so I figure we can use a
similar scenario: have a single large cluster for part B's writes
(including the use of table partitions) on ServerB, and then create
replicated slave ServerB-nodes that all of the part C processes would
read from.  Replication has to be near real time.  That is, we have a
contractual agreement to report the incoming processed data within 30
minutes.


General table information
Client table, multiple tables, 1:many
Table A, 500 bytes ~200 read/s (50K records)
Table B, Text, < 3000 bytes, rarely read (mostly contact info)
Table C, 250 bytes medium ~1 read/s (150K records)
Transaction table, multiple tables, each 1:1
Table A, 400 bytes ~200 insert/s (peak 500 insert/s, low 10
insert/s) (50M records)
Table B, 200 bytes ~50 insert/s (peak 200 insert/s, low 10
insert/s) (15M records)
Table C, Text, < 2000 bytes ~200 insert/s (peak 500 insert/s,
low 10 insert/s) (50M records)

So, in my end design I'm thinking two master clusters, one for Client
tables, one for Transaction tables, and slave everything else (into two
distinct groups).

Anyone see any downside to this?  Any better suggestions?

Also, I've been looking at some data partitioning schemes.  Would this
add any performance benefit in the long run (by allowing me to put the
different files on different drive arrays in the future)?
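As a hedged example of the kind of range scheme meant here (MySQL 5.1+
syntax; the table and column names are invented):

```sql
CREATE TABLE transactions (
  id BIGINT NOT NULL AUTO_INCREMENT,
  created DATE NOT NULL,
  payload VARCHAR(400),
  PRIMARY KEY (id, created)          -- must include the partitioning column
) ENGINE=InnoDB
PARTITION BY RANGE (YEAR(created)) (
  PARTITION p2007 VALUES LESS THAN (2008),
  PARTITION p2008 VALUES LESS THAN (2009),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```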

For the hardware clusters I'm looking to use dual dual-core AMDs with 8GB
RAM and RAID 5; for the slaves, single dual-core AMDs with 4GB RAM and
RAID 5.  All with GB NICs.

Any feedback would be greatly appreciated.

Gary
