Re: Data Warehouse on MySQL questions

2008-04-03 Thread Gary Richardson
I've built several datamarts using perl and MySQL. The largest ones
have been up to about 30GB, so I'm not quite on your scale.

for #1, I have an etl_id in the fact table so I can track back any
particular ETL job. I typically make it a dimension and include date,
time, software version, etc. That doesn't help so much if you're
messing up your dimension tables, but I haven't typically run into
that problem based on the designs I've used.
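A sketch of that layout, with hypothetical table and column names -- the point is that a failed load can be backed out by etl_id:

```sql
-- Hypothetical sketch of an ETL audit dimension keyed into the fact table.
CREATE TABLE dim_etl (
    etl_id           INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    run_date         DATE,
    run_time         TIME,
    software_version VARCHAR(20)
);

CREATE TABLE fact_sales (
    date_key    INT,
    product_key INT,
    amount      DECIMAL(12,2),
    etl_id      INT  -- references dim_etl; a failed job's rows can be
                     -- removed with: DELETE FROM fact_sales WHERE etl_id = ?
);
```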

For #2, I haven't built anything big enough for it to be a concern yet.

Also, LOAD DATA INFILE is your friend :)
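A minimal example (the path, table, and column list are hypothetical):

```sql
-- Bulk-load a tab-delimited extract in one pass instead of row-by-row inserts.
LOAD DATA INFILE '/tmp/fact_extract.tsv'
INTO TABLE fact_sales
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(etl_id, date_key, product_key, amount);
```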

On Thu, Apr 3, 2008 at 11:28 AM, Dre [EMAIL PROTECTED] wrote:
 Hey folks,

  I'm currently deciding whether to build a decent sized (around 300-500GB,
 although honestly, I've got little to base that on at the moment) data
 warehouse in postgreSQL or MySQL.  I've developed several in MS SQL and
 postgreSQL, but the client is comfortable with MySQL, and I'd prefer to use
 that as the platform since it will be less painful for them to manage when
 I'm gone.  I'm hoping that someone with experience building a warehouse on
 MySQL will be able to answer two outstanding questions I have:

  1) Several sources seem to suggest MyISAM is a good choice for data
 warehousing, but due to my lack of experience in a transaction-less world,
 this makes me a little nervous.  How do you handle data inconsistency
 problems when ETL jobs fail?  (For the record, I don't use a separate tool
 for the ETL; I usually use perl/shell scripts to interact with the file
 system, and pl/pgsql or transact-sql once the data is loaded into the
 staging database.  For each file that is loaded, I'll identify steps that
 must be posted together, and wrap them in a transaction in the ETL job.)  I
 can see doing something like manually cleaning out the necessary tables
 before you re-run, but that seems a bit messy to me.  Anyone figure out a
 better approach?

  2) Isn't the lack of bitmap indexes a problem in the warehouse? Most FKs in
 the fact tables will be low cardinality columns; queries that didn't use
 date would be very slow on large fact tables (MS SQL had this problem).  Has
 anyone run into this with MySQL?

  Many thanks in advance!

  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]






pre-create innodb tablespace

2007-02-22 Thread Gary Richardson

Hi,

Can I pre-create innodb tablespace using something like dd (or any other
better tool)?

I have a server that is getting low on innodb table space and I want to add
15GB or so, but I want to minimize downtime. The server is a bit slow and I
estimate it will take around 10-20 minutes or so. I was hoping to 'dd' the
additional table files, add the configuration directives to my.cnf and
restart the server. This would mean my downtime would be whatever time it
takes to restart the server.

Thanks.
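For reference, the pre-allocation being proposed looks like this (file name and size are hypothetical, and whether InnoDB will accept a file it didn't initialize itself is exactly the open question):

```shell
# Pre-create a fixed-size, zero-filled data file before restarting mysqld.
# 16MB here for illustration; a 2GB file would be bs=1M count=2048.
dd if=/dev/zero of=ibdata2 bs=1M count=16
```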


Re: OLAP for MySQL or an Aggregrate Table

2006-10-12 Thread Gary Richardson

You'd have to use another table. I don't believe MySQL views will persist
your 'moving average' values -- a view re-runs its query each time.

If you're using 5.1, you can automate the select/insert with an event --
it's a cron-like tool built into MySQL.

If you have a datetime field in either of the tables that represents the
'action' time of each piece of data, you could do it as:

select
DATE(datetime_field) AS period,
AVG(A),
MIN(A),
MAX(A),
AVG(B),
MIN(B),
MAX(B)
from table1
left join table2
on table1.field = table2.field
GROUP BY period
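A sketch of wiring that query to a 5.1 event (all names hypothetical; agg_table would need a primary key on period for REPLACE to overwrite existing rows):

```sql
SET GLOBAL event_scheduler = ON;

-- Refresh the aggregate table once an hour via the 5.1 event scheduler.
CREATE EVENT refresh_agg
ON SCHEDULE EVERY 1 HOUR
DO
  REPLACE INTO agg_table
  SELECT DATE(datetime_field) AS period,
         AVG(A), MIN(A), MAX(A),
         AVG(B), MIN(B), MAX(B)
  FROM table1
  LEFT JOIN table2 ON table1.field = table2.field
  GROUP BY period;
```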

On 10/11/06, Ow Mun Heng [EMAIL PROTECTED] wrote:


Anyone here using OLAP, running on MySQL? Any open-source OLAP query
tools? How does one go about doing OLAP? Is there any documentation for
MySQL anywhere? (I'm currently looking at Pentaho and Mondrian etc., but
the pre-configured demo didn't work as it should, meaning it doesn't run
for X reasons.)

What about doing views? e.g.: I take a snapshot of the data every hour,
and plug it into a view or another table and use that, e.g.:

select
Avg(A),
Min(A),
Max(A),
AVG(B),
min(B),
max(B)
from table1
left join table2
on table1.field = table2.field
etc..
etc..

Will that work? Then when I query, I query this newly created aggregrate
table and I'm thinking of something along these lines.

DATE | Month | Day | Hour | AVG  | Min | Max | some other field
2006 |       |     |      | 10   | 0   | 100 | AAA
2006 | Oct   |     |      | 10.5 | 1   | 101 | BBB
2006 | Oct   | 10  |      | 10.2 | 3   | 98  | CCC

etc..etc..

Will something like this work for a so-called moving average?









Re: Can I delete old host-bin.xxxxx files in order to free up some space?

2006-06-03 Thread Gary Richardson

Yes you can. Be sure not to delete the one the database is currently writing
to.

Are you replicating your database? Are you using them for point in time
restores?

If you're replicating, make sure your replicas have all caught up on the
files you're deleting.
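For reference, a sketch of the statements involved in 4.1+ (the log file name is hypothetical):

```sql
-- See which binary log the server is currently writing:
SHOW MASTER STATUS;
-- Delete all binary logs up to, but not including, the named file:
PURGE MASTER LOGS TO 'host-bin.000123';
-- Or delete by date:
PURGE MASTER LOGS BEFORE '2006-05-01 00:00:00';
```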

On 6/3/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:


I have not been monitoring my databases, and now I am using 100% of my disk space.

thank you,
Raymond




Re: Can I delete old host-bin.xxxxx files in order to free up some space?

2006-06-03 Thread Gary Richardson

I believe the command is PURGE MASTER LOGS, or something like that.

On 6/3/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:



-Original Message-
From: Gary Richardson [mailto:[EMAIL PROTECTED]
Sent: Saturday, June 03, 2006 14:10
To: Jacob, Raymond A Jr
Cc: mysql@lists.mysql.com
Subject: Re: Can I delete old host-bin.x files in order to free up
some space?


Yes you can. Be sure not to delete the one the database is currently
writing to.

Are you replicating your database?
[Jacob, Raymond A Jr] No.
  Are you using them for point in time restores?
[Jacob, Raymond A Jr] No.
If you're replicating, make sure your replicas have all caught up on the
files you're deleting.

[Jacob, Raymond A Jr] To remove the excess binary update logs and start
again, should I run the SQL command:
RESET MASTER?
Or, if I don't need to reconstruct or restore a table, may I just delete
them or comment log-bin out
of my.cnf and restart mysql?

thank you,
Raymond



Re: customer id - made the worst possible way

2006-05-09 Thread Gary Richardson

I would drop the primary key off your cust_id, add a new field like
customer_id, and relink using the old key.

I'd probably do this by creating a new table and doing an INSERT INTO ... SELECT
FROM to populate the auto-increment, then renaming the table. From there, add
the FKs to your other tables and update those records. I'd probably keep
the old cust_id in your primary customer table for tracking purposes.
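A sketch of that sequence (table and column names are hypothetical):

```sql
-- New table with a surrogate auto-increment key; keep the old cust_id.
CREATE TABLE customers_new (
    customer_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    cust_id     VARCHAR(50) NOT NULL,
    name        VARCHAR(100)
);

INSERT INTO customers_new (cust_id, name)
SELECT cust_id, name FROM customers;

RENAME TABLE customers TO customers_old, customers_new TO customers;

-- Relink a child table through the old key (repeat per child table):
ALTER TABLE orders ADD COLUMN customer_id INT;
UPDATE orders o JOIN customers c ON o.cust_id = c.cust_id
SET o.customer_id = c.customer_id;
```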

On 5/9/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


hi to all,
I have to redo a web site of one company and the structure of the current
db is a little mess.
one of them is the customer id number. Right now, the customer table uses as
primary key a cust_id column, varchar(50) PRIMARY KEY (no auto increment). I
really have no idea why the previous developer made cust_id with the letter C at
the beginning of a number, where the number is made from a date (mdyHis) ?!?!

What do you suggest to do:
1. take off letter C and keep the numbers, change cust_id to integer NOT
NULL, add one customer with number 2000 and then apply auto_increment?
2. replace current Cxx with INT numbers and replace the cust_id in
every other table where cust_id is foreign key?
3. something else?

Thanks for any help!

-afan






Re: customer id - made the worst possible way

2006-05-09 Thread Gary Richardson

Yeah, pretty much, but I would keep cust_id around and start over with a
true auto-increment from 1.

On 5/9/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


and this would be, in other words, the solution 2, right?



 If you really want to change the customer ID, then you can always copy
the
 entire table to another table with a primary key set.  Then simply
 reference that primary key field and forget the prior one.

 --
 Steve - Web Applications Developer
 http://www.sdwebsystems.com


 On Tue, May 9, 2006 9:33 am, [EMAIL PROTECTED] said:
 hi to all,
 I have to redo a web site of one company and the structure of the
 current
 db is a little mess.
 one of them is customer id number. right now, customer table use as
 primary key cust_id column varchar(50) PRIMARY KEY (no auto increment).
 I
 really have no idea why previous developer made cust_id with letter C
on
 the beginning of a number, and the number is made from date, (mdyHis)
 ?!?!

 What do you suggest to do:
 1. take off letter C and keep the numbers, change cust_id to integer
NOT
 NULL, add one customer with number 2000 and then apply
 auto_increment?
 2. replace current Cxx with INT numbers and replace the cust_id
 in
 every other table where cust_id is foreign key?
 3. something else?

 Thanks for any help!

 -afan










Re: INNODB database size

2006-04-25 Thread Gary Richardson
Look at your my.cnf for a configuration directive called
'innodb_data_file_path'. This is where you configure the files for the
InnoDB tablespace. The last one probably has auto-grow (the ':autoextend'
suffix) set. My guess is that every time it complains, it has just added
8MB to that file. If you remove the autoextend clause and add another 2GB
file, it should be fine. You'll want to keep an eye on it though, so you
can add more 2GB files in the future. That is, unless you keep autoextend
on the last file.

The mysql online docs have lots of good info on this.
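A hypothetical my.cnf sketch of the change -- drop the auto-grow suffix from the old last file and append a fresh fixed-size file:

```ini
# Before (last file auto-grows):
# innodb_data_file_path = ibdata1:2G;ibdata2:2G:autoextend
# After (fixed sizes, new 2GB file appended). Note: the sizes listed for
# existing files must match their actual on-disk sizes.
innodb_data_file_path = ibdata1:2G;ibdata2:2G;ibdata3:2G
```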

On 4/25/06, Todd Smith [EMAIL PROTECTED] wrote:

 Hello

 I have inherited an INNODB database. I am new to MySQL and may not be
 describing my problem correctly so any suggestions or questions are
 welcome.
 I have data files named ib_data_00 through ib_data_14 all of which are
 2.0G.
 I also have ib_data_15 which is 26G.  I am receiving errors saying that
 the
 innodb space is full.  I know that the space isn't full because it is
 still
 working.  I am wondering how I can get a true size of the space used.  I
 would like to get back into the 2G segments. Any suggestions.

 Todd






Re: Care and feeding of MySQL tables (aka Preventative Maintenance via Cron)

2006-03-10 Thread Gary Richardson
If you use the 5.1 tree, you should be able to use the new EVENT stuff
to accomplish this:

http://dev.mysql.com/doc/refman/5.1/en/events.html

CREATE EVENT check_table
ON SCHEDULE EVERY 1 WEEK STARTS '2006-03-10 19:00:00'
DO CALL your_check_table_procedure_here();

Or something like that..

On 3/10/06, René Fournier [EMAIL PROTECTED] wrote:
 Just curious, what cron jobs do you schedule for automatically
 checking/repairing/analyzing/optimizing MyISAM tables?

 I have been doing this periodically, manually for a while, but I
 figured cron is the way to go.

 Also, what parameters are you using? I am thinking of running the
 following cron jobs:

 mysqlcheck --all-databases --auto-repair (daily)
 mysqlcheck --all-databases --analyze (weekly)
 mysqlcheck --all-databases --optimize (weekly)

 Good idea? Bad? Or just plain ugly?

 ...Rene







Re: Need help configuring INNODB (Customer is ready to sue)

2006-02-09 Thread Gary Richardson
What are the problems you've been experiencing? Did you convert all tables?
How big is the database?

On 2/9/06, Shaun Adams [EMAIL PROTECTED] wrote:

 I have a customer who has been in production for a few weeks now having
 converted from MyISM to INNODB.  We have been experiencing a few problems
 with our application.  With that said, could you take a look at the info
 below from my show INNODB status and let me know if you see any problems
 based off of the my.cnf configuration.  Should I be worried about the
 free
 buffers being at 0?  Also, if you have any suggestions for modifying
 my.cnf,
 please let me know.

 Server is a Dell 1850, Dual P4 Processor with HT, RHEL4 64-bit, 4GB
 Memory.
 The only thing we have running on the server is MySQL v 4.0.26 (upgrading
 is
 not an option right now).  The web is on another server.  Our programmers
 are working on examining the code and queries.



 
 060209 21:52:39 INNODB MONITOR OUTPUT
 
 Per second averages calculated from the last 58 seconds
 --
 SEMAPHORES
 --
 OS WAIT ARRAY INFO: reservation count 32909, signal count 32897
 Mutex spin waits 22210, rounds 96667, OS waits 3131
 RW-shared spins 59775, OS waits 29579; RW-excl spins 122, OS waits 80
 
 TRANSACTIONS
 
 Trx id counter 0 12549
 Purge done for trx's n:o  0 12265 undo n:o  0 0
 Total number of lock structs in row lock hash table 0
 LIST OF TRANSACTIONS FOR EACH SESSION:
 ---TRANSACTION 0 0, not started, process no 7200, OS thread id 1147570528
 MySQL thread id 40, query id 32699 localhost root
 show INNODB status
 
 FILE I/O
 
 I/O thread 0 state: waiting for i/o request (insert buffer thread)
 I/O thread 1 state: waiting for i/o request (log thread)
 I/O thread 2 state: waiting for i/o request (read thread)
 I/O thread 3 state: waiting for i/o request (write thread)
 Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
 Pending flushes (fsync) log: 0; buffer pool: 0
 10041 OS file reads, 98838 OS file writes, 28068 OS fsyncs
 0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
 -
 INSERT BUFFER AND ADAPTIVE HASH INDEX
 -
 Ibuf for space 0: size 1, free list len 0, seg size 2,
 65 inserts, 65 merged recs, 63 merges
 Hash table size 4980539, used cells 3352510, node heap has 8217 buffer(s)
 0.00 hash searches/s, 0.00 non-hash searches/s
 ---
 LOG
 ---
 Log sequence number 1 2288366733
 Log flushed up to   1 2288366733
 Last checkpoint at  1 2288366733
 0 pending log writes, 0 pending chkp writes
 11189 log i/o's done, 0.00 log i/o's/second
 --
 BUFFER POOL AND MEMORY
 --
 Total memory allocated 2761264906; in additional pool allocated 6054656
 Buffer pool size   153600
 Free buffers   0
 Database pages 145383
 Modified db pages  0
 Pending reads 0
 Pending writes: LRU 0, flush list 0, single page 0
 Pages read 49486, created 458759, written 822494
 0.00 reads/s, 0.00 creates/s, 0.00 writes/s
 No buffer pool page gets since the last printout
 --
 ROW OPERATIONS
 --
 0 queries inside InnoDB, 0 queries in queue
 Main thread process no. 7200, id 1147169120, state: waiting for server
 activity
 Number of rows inserted 19473761, updated 1063, deleted 0, read 6117388
 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
 




 
 MY. CNF
 
 # MySQL Server Instance Configuration File

 [mysqld]
 port=3306
 socket=/tmp/mysql.sock

 basedir=/usr/local/mysql
 datadir=/var/lib/mysql

 default-character-set=latin1
 lower_case_table_names=1

 server-id=101
 #log-slow-queries
 #long_query_time=1
 #log

 max_allowed_packet=128M
 skip-bdb

 max_connections=100
 thread_concurrency=10
 open_files_limit=512
 table_cache=512
 thread_cache=20
 query_cache_size=64M

 # replication
 #log-bin=/var/lib/mysql/binlog
 #server-id=90
 #sync_binlog=1

 # thread buffers
 read_buffer_size=128K
 read_rnd_buffer_size=256K
 sort_buffer_size=512K
 join_buffer_size=128K

 # myisam
 myisam_sort_buffer_size=205M
 key_buffer=384M

 # innodb
 innodb_additional_mem_pool_size=8M
 innodb_flush_log_at_trx_commit=2
 innodb_log_buffer_size=8M
 innodb_buffer_pool_size=2400M
 innodb_log_file_size=512M
 innodb_data_file_path=ibdata1:2G:autoextend
 innodb_flush_method=O_DIRECT
 innodb_thread_concurrency=32

 #*** Other Options ***
 [mysqldump]
 quick
 set-variable = max_allowed_packet=16M

 [mysql]
 no-auto-rehash



 
 SHOW STATUS
 

 +--++
 | Variable_name| Value  |
 

Re: Histogram from tables.

2006-01-13 Thread Gary Richardson
I would typically do this in code, but you could also create a stored
procedure that does something like:

- create a temporary table
- populate it with all possible values, with a count field set to 0
- run a REPLACE INTO temp_table ... with your query
- dump the contents of the temp table

You may also want to keep an empty copy of the temp table with all the 0
values if you need to run this query frequently -- it may be easier to
copy the data than to recreate it each run.
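A sketch of that procedure's body (table and column names hypothetical, matching the pics example below):

```sql
-- 1) Temp table keyed by (year, quarter) so REPLACE overwrites the zero rows.
CREATE TEMPORARY TABLE hist (
    yr  INT NOT NULL,
    qtr INT NOT NULL,
    cnt INT NOT NULL DEFAULT 0,
    PRIMARY KEY (yr, qtr)
);

-- 2) Seed every period with a zero count (one row per year/quarter).
INSERT INTO hist (yr, qtr)
VALUES (2001,1),(2001,2),(2001,3),(2001,4),
       (2002,1),(2002,2),(2002,3),(2002,4);

-- 3) Overwrite the periods that actually have data.
REPLACE INTO hist
SELECT YEAR(date), QUARTER(date), COUNT(0)
FROM pics
WHERE date >= '2001-01-01' AND date < '2005-01-01'
GROUP BY YEAR(date), QUARTER(date);

-- 4) Dump the full histogram, zeros included.
SELECT * FROM hist ORDER BY yr, qtr;
```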

On 1/13/06, Mike Martin [EMAIL PROTECTED] wrote:
 I have a large table of filenames and creation dates from which I want
 to produce a histogram.

 SELECT year(date), quarter(date), count(0) FROM pics
   WHERE date(date) > '2000' AND date(date) < '2005'
   GROUP BY year(date), quarter(date)

 Gets me close, but to simplify plotting, I want to include rows for
 the dates where no files were created.

 I get this:
 ++---+--+
 | year(date) | quarter(date) | count(0) |
 ++---+--+
 |   2001 | 3 |   34 |
 |   2002 | 1 |2 |
 |   2002 | 4 |1 |
 |   2003 | 2 |1 |
 |   2003 | 3 |1 |
 |   2003 | 4 |3 |
 |   2004 | 1 |1 |
 |   2004 | 2 |1 |
 |   2004 | 3 |5 |
 |   2004 | 4 |1 |
 ++---+--+

 I want this:
 ++---+--+
 | year(date) | quarter(date) | count(0) |
 ++---+--+
 |   2001 | 1 |0 |
 |   2001 | 2 |0 |
 |   2001 | 3 |   34 |
 |   2001 | 4 |0 |
 |   2002 | 1 |2 |
 |   2002 | 2 |0 |
 |   2002 | 3 |0 |
 |   2002 | 4 |1 |
 |   2003 | 1 |0 |
 |   2003 | 2 |1 |
 |   2003 | 3 |1 |
 |   2003 | 4 |3 |
 |   2004 | 1 |1 |
 |   2004 | 2 |1 |
 |   2004 | 3 |5 |
 |   2004 | 4 |1 |
 ++---+--+

 Thanks in advance for your help!

 MikeMartin







Re: mysql 5 - disk bound - fixed

2006-01-11 Thread Gary Richardson
CREATE TABLE blah LIKE old_table
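For the weekly rotation asked about below, that pairs with an atomic rename; a sketch with hypothetical names:

```sql
-- Create an empty table with the same schema (4.1+), then swap it in
-- atomically so imports never see a missing table.
CREATE TABLE table1_new LIKE table1;
RENAME TABLE table1 TO table1_old, table1_new TO table1;
```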

On 1/11/06, George Law [EMAIL PROTECTED] wrote:
 Hi All,



 Just another follow up on my emails.



 What this came down to was sheer number of records and indices in my
 tables causing it to take several minutes to insert 10-20K records via
 load data ...



 table1 has probably 15 million rows, 60 fields per row, 1 index

 table2 has 7 million rows, 33 fields per row, 5 indices

 This is approx 2 months worth of data





 2006-01-11 06:37:11 : begin import into table1
 2006-01-11 06:43:14: end import into table1 records (17315) deleted :
 (0) skipped  (0) warnings:(0)



 2006-01-11 06:43:42 : begin import into table2
 2006-01-11 06:48:19 : import into table2 records: (16197) deleted:(0)
 skipped: (0) warnings:(0)







 This morning, I created 2 new tables, renamed the original tables and
 rotated the new tables in.



 2006-01-11 08:46:16 : begin import into table1
 2006-01-11 08:46:17: end import into table1 records (18853) deleted :
 (0) skipped (0) warnings:(0)



 2006-01-11 08:46:52 : begin import into table2
 2006-01-11 08:46:55 : import into table2 records: (17615) deleted:(0)
 skipped: (0) warnings:(0)



 This is mysql 5.0.18, running on a HP Proliant, dual P4 3 GHZ, 4 GB RAM,
 running Suse 9.3, so I do not believe it is hardware related





 I have to work on an automatic way to rotate these tables every week.
 Is there an easy way with SQL to create a new table based on the schema
 of an existing table?

 --

 Thanks!



 George Law











Re: Upgrading to 5.0.15

2005-12-28 Thread Gary Richardson
We moved directly from 4.0.20 to 5.0.16. Worked like a charm. I had a
script that went through and optimized all tables with keys on
text/varchar and char fields.

We're also slowly ALTERing innodb tables to get them into the new
compact format.


On 12/28/05, Gleb Paharenko [EMAIL PROTECTED] wrote:
 Hello.



 Manual recommends to perform an upgrade step by step, so I suggest you
 move to 4.1 first. When you said 'tablespace', did you mean that you're
 using InnoDB tables? I'm not sure about them; check the change logs to
 find out any incompatible changes. In the case of MyISAM, very often it
 is enough just to copy files to the directory of the new server and
 repair indexes.









 Tripp Bishop wrote:

  Howdy all,

  I've got a MySQL 4.0.24 database that I'd like to upgrade to 5.0.15.
  Is it possible to back up the tablespace file in the mysql data
  directory, install the new database, and then move the backed-up files
  into the new installation's data directory? Are we stuck running a
  mysqldump script? That would take hours and we'd like to avoid it if at
  all possible. If not, we'll deal with it, but it would be nice to just
  move the files.

  Thanks,

  Tripp

  __
  Yahoo! for Good - Make a difference this year.
  http://brand.yahoo.com/cybergivingweek2005/

 



 --
 For technical support contracts, goto https://order.mysql.com/?ref=ensita
 This email is sponsored by Ensita.NET http://www.ensita.net/
__  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
  / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
 /_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
___/   www.mysql.com










Re: [OT-ish] Hardware for MySQL server

2005-12-13 Thread Gary Richardson
I don't have any experience with dual core yet (my first dual dual
core box is scheduled to arrive this week!!). I don't think I'd opt
for a dual core in place of 2 single cores. I'm hoping (expecting?) to
see an advantage in 2 DC over 2 SC.

As far as SCSI over SATA goes, I exclusively use SATA. It meets my
workload needs and I always run in redundant arrays. I like using
3ware cards -- the management interface is the most sensible I've
come across. I've also learned to get chassis with hot-swap bays only
-- you lose more drives than you'd think, and it definitely reduces
downtime.

SCSI will perform better than SATA, but it will also cost a lot more.
If you're going to run a non-redundant array, go SCSI -- they do tend
to stand up better over time.

I'd also think about your environment -- what do the other systems
use? What are the other admins comfortable with? If everything else is
SCSI, it probably means you have a bunch of spare SCSI drives sitting
around and a supplier that gives you a deal.

You still haven't outlined what your application's requirements are, so
it's hard to say for sure.

On 12/13/05, James Harvard [EMAIL PROTECTED] wrote:
 Thanks for all the feedback on this.

 Is there any received wisdom on whether 1 dual core processor is better than 
 2 'normal' processors?

 Also, is there any advantage to SCSI over SATA?

 TIA,
 James Harvard







Re: [OT-ish] Hardware for MySQL server

2005-12-12 Thread Gary Richardson
It doesn't seem like a mission-critical app, but I would seriously
consider using redundant RAID (i.e., not RAID 0, but 1 or 5). Nothing ruins
your day quite like losing a non-redundant drive, even if you have
good backups.

Also, what sort of workload are you looking at? How responsive does it
need to be? Is your data well indexed? What table types are you using?
What is the total size of the data?

Spindles do play a factor, but for 6GB of data you're not looking at
orders of magnitudes of performance. Smart indexing will make the
biggest difference, then RAM.

It all really depends -- if your queries are run once or twice a day to
generate reports, it is probably okay for them to take 5 or 10
minutes. If you run the queries repeatedly and need a response in under
10 seconds, you'll have different requirements.

You should take a look at Supermicro chassis
(http://www.supermicro.com). They have various 1U systems that take 4
drives. I'm sure there is a UK distributor.

Thanks.

On 12/12/05, James Harvard [EMAIL PROTECTED] wrote:
 [Apologies for my first post here being semi-off-topic!]

 I normally deploy apps I develop (MySQL with Lasso web middleware) with an
 ISP, so I have no experience of choosing hardware configurations or sourcing
 them.

 My current client's application involves a very large amount of data which I 
 have split into a number of tables. These tables (data files) are currently 
 between several hundred MB and 2 GB each for 6 1/2 years data, and will grow. 
 However, tables are not updated in normal use (we can take the app off-line 
 for updates) - so the data is pretty much read-only.

 From my reading of the manual it seems that disc seek speed is the limiting 
 factor once tables get so large that the data and indices cannot be cached in 
 RAM. So I believe that the best hardware setup for a dedicated MySQL server 
 would include two fast discs striped (RAID 0) for the databases and a third 
 separate disc for the operating system.

 Does this sound right? (Also thoughts on SCSI versus SATA?)

 Second question:

 The chap who will probably administer the servers seems to prefer buying
 Dell, but AFAIK Dell don't do any 1U servers that would support 3 drives. Can
 anyone recommend any server brands available in the UK, or UK-based companies
 that will build servers, supporting 3 discs (2 RAID + 1 for the OS)?

 Many thanks,
 James Harvard







Re: MySQL 5 is 25% slower then 4.1

2005-12-05 Thread Gary Richardson
Are you doing single insert statements, multiple insert statements or
LOAD DATA INFILE statements?

On 12/4/05, Chenzhou Cui [EMAIL PROTECTED] wrote:
 Dear  MySQL fans,

 I have a 260 GB huge file with 1045175762 rows. Two weeks ago, I wrote a
 Java program to read the huge plain text file into MySQL 4.1.12.  300
 million of rows could be loaded in one day.

 Last Thursday, I updated the MySQL to 5.0.16 and then run the same
 program. Only 225 million of rows can be loaded in one day.

 In additional to the version difference, the MySQL 4.1.12 was compiled
 from SRPM packages from RedHat Updates, while the MySQL 5.0.16 is
 installed directly using the MySQL binary RPM packages for Redhat AS 4.

 Any information and suggestion are welcome.

 Regards,
 Chenzhou CUI








Re: MySQL 5 is 25% slower then 4.1

2005-12-05 Thread Gary Richardson
Also, if the file looks anything like a CSV file, I recommend using
LOAD DATA INFILE

http://dev.mysql.com/doc/refman/4.1/en/load-data.html

You'll probably load that data in half to a quarter of the time.

On 11/30/05, Daniel Kasak [EMAIL PROTECTED] wrote:
 Chenzhou Cui wrote:

  I didn't use multiple insert statements and LOAD DATA INFILE, but
  only insert into statement. The Java program reads one line from the
  source file, and then execute an insert statement.

 I can't comment on the speed of 5.0.x vs 4.1.x, but I can suggest that
 you optimize this 'insert' process.

 Instead of issuing a new insert command per record, how about using
 placeholders and binding values? I don't know any Java, but from Perl
 you can do this. Actually, I'm not certain whether MySQL supports this
 yet - you'd have to check your server version and drivers.

 Or you could do something like collect a series of records - say 100
 records at a time, and issue an insert query that includes all of them, eg:

 insert into SomeTable ( field_1, field_2, field_3 ) values
 ( 3, 56, 45 ),
 ( 45, 3456, 345 ),
 ( 345, 76, 345 ),
 ( 345, 45, 546 )

 This is a lot faster than issuing separate inserts per record. You don't
 want to include *too* many records at once - there's a maximum packet
 size that you can't exceed.

 --
 Daniel Kasak
 IT Developer
 NUS Consulting Group
 Level 5, 77 Pacific Highway
 North Sydney, NSW, Australia 2060
 T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
 email: [EMAIL PROTECTED]
 website: http://www.nusconsulting.com.au







Re: 5.0.15-5.0.16 upgrade dependency failure

2005-11-28 Thread Gary Richardson
Whaaa? I did no such thing. My post was on topic.

On 11/28/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Marcus Bointon [EMAIL PROTECTED] wrote on 11/28/2005 04:45:54
 AM:

 
  On 28 Nov 2005, at 00:41, [EMAIL PROTECTED] wrote:
 
   The only times MySQL server interact directly with each other are:
   a) during replication
   b) NDB cluster operations
   c) FEDERATED tables
  
   Options b) and c) are available only in v5.0+. If what you want to do
   doesn't fall into one of those three categories, you will need to
   open a
   second connection in your code and send a second query to the second
   server.  There is no other way around it.
 
  Errr, were you answering some other question?
 
  Marcus
  --
  Marcus Bointon
  Synchromedia Limited: Putting you in the picture
  [EMAIL PROTECTED] | http://www.synchromedia.co.uk
 

 I blame it on Gary for hijacking your original (unrelated) thread. Sorry
 for the cross-post. I will watch out next time.

 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine






Re: 5.0.15-5.0.16 upgrade dependency failure

2005-11-27 Thread Gary Richardson
I've also experienced this. The 5.0.16 RPMs for RHEL3 worked fine on a
RHEL3 box.

As for the RHEL4, I even tried recompiling the source RPM's and still
got the dependency error! I ended up using the generic RPM's as those
installed fine.

For me, it's a moot point anyway as 5.0.16 isn't stable enough for my
app -- it crashes on subselects of the same table. I downgraded to
5.0.15.

On 11/27/05, Marcus Bointon [EMAIL PROTECTED] wrote:
 I have a happily working 5.0.15 installation on RHEL4, installed from
 the standard RPMs. I'm upgrading using rpm -Uvh to the new 5.0.16
 release. -Server, -client and -devel packages install ok, but when I
 attempt to install -shared, I get a dependency failure on zlib. Of
 course I have zlib installed (lots of other things depend on it), and
 rpm says it is there, and it's up to date. I've downgraded to 5.0.15
 again (as nothing works in PHP without -shared installed) and that
 installs without problems. What's changed in .16 to create this
 seemingly broken dependency?

 Marcus
 --
 Marcus Bointon
 Synchromedia Limited: Putting you in the picture
 [EMAIL PROTECTED] | http://www.synchromedia.co.uk








Re: asking opinion about hosting database and webserver on the same server

2005-11-11 Thread Gary Richardson
It depends on what heavy traffic is and what your machine is.. If it's
an E10K, then you can probably put them both on one machine for most
traffic loads :)

Your database will typically perform better if you have your DB on a
separate machine.

Without knowing more about the code running the site, the database
schema, the machines and the traffic load, I would probably design
your setup so that you can start scaling out easily -- go from one to
two, and scale from there.

On 11/11/05, Bing Du [EMAIL PROTECTED] wrote:
 Hello all,

 Should MySQL and Apache be hosted on separate servers or it's ok they
 being on one server?  We're building a compound that includes about 10
 websites, all their contents are hosted in MySQL.  There will be frequent
 database updates and the webserver will have to handle heavy traffic.

 Thanks in advance for sharing your opinion and experience or pointers!

 Bing







Re: RAID/MySQL configuration question

2005-08-25 Thread Gary Richardson
My guess is that the RAID has nothing to do with it -- it seems very unlikely.

In any case, if you want top performance out of your raid, you may
want to change things up. You'd get better performance if you didn't
use RAID5. Use RAID1 or RAID10 for your data drives. RAID5 is slower
than these other methods.

Based on the information you've given, I'm assuming a few things:

1) your raid controller supports RAID10
2) you have an even number of drives.

If this is the case, I would recreate the raid as a RAID10 (pair up
your drives and then create a stripe out of the pairs). Then you can
feel free to allocate space to whatever partition structure you need.

On 8/25/05, Curious George [EMAIL PROTECTED] wrote:
 G'morning all!
 
 (Using Red Hat Linux Enterprise 4.1)
 I have a Dell PowerEdge 2800 with a PERC 4 RAID controller.  The RAID
 controller has one RAID 1 mirror and one RAID 5 stripe volume created.
  We installed most of the OS stuff on the RAID 1 set and the
 /usr/local and /var on the RAID 5 set (since the Red Hat MySQL rpm
 puts the data directory under /var ).
 
 I'd like to know if there are any better ways to configure this (I can
 repartition and reinstall the OS, if necessary).
 
 background
 I'm having problems with a Tomcat application (OSP - ePortfolios) that
 uses a lot of disk space for uploaded files (under Tomcat directory
 which I install under /usr/local). Not sure how large the MySQL
 database will grow to be. I installed the Red Hat MySQL rpm, but not
 sure if it is RAID-aware and considering compiling MySQL from source (
 --with raid ?). Or my problem may be with the MySQL Connector/J driver
 (which would be a question for the mysql-java list).
 
 The application builds and installs fine with no errors, but Tomcat
 only works for the static directories (i.e. /jsp-examples ) and not
 with the application that interacts with MySQL.
 
 I've installed this application successfully on an identical non-raid
 system. The only difference between the two machines is that the
 problem child is RAID (configured as above) and the java sdk version
 changed from _08 to _09.
 /background
 
 1) Best way to configure the RAID/partitions for best MySQL performance?
 2) Is MySQL RAID-aware if not compiled: - - with raid? (unsure if the
 Red Hat rpm used that)
 3) Is there a way to tell if a problem is specifically related to the
 MySQL Connector/J driver or a problem connecting to MySQL? (probably
 should direct that one to the mysql-java list, eh?)
 
 Thanks in advance for any help. This is the first RAID machine I've
 ever worked with.
 : \
 
 Darren Addy
 University of Nebraska at Kearney
 
 





Re: MySQL in CentOS? try before?

2005-08-04 Thread Gary Richardson
We've been using CentOS more and more. I recently built a small prototype 
data warehouse on CentOS (but I used MySQL 5 RPMs instead of the stock ones).

I've had no problems on CentOS that I didn't also have on RHEL. I also have a 
few similar RHEL systems, so when a problem occurs on a CentOS box, I 
reproduce it on a RHEL box and get support that way.

On 8/4/05, KH [EMAIL PROTECTED] wrote:
 
 Hi all,
 I just came across Centos (Community Enterprise Operating System -
 http://www.centos.org/) It came with mysql as well. Hence my question
 here, does anyone guru here using this OS before for production ?
 Success story?
 
 Cheers
 KH
 
 
 



Re: Separate disk for logs, DRBD ...

2005-07-18 Thread Gary Richardson
Make sure your disks are all redundant -- get two of each and mirror
them. You'll thank yourself when a drive dies.

If the database server has any uptime requirements, I recommend going
hotswap for everything -- you'll thank yourself again when you can
swap the drive out during core business hours instead of coming in at
2:00AM to do it..

On 7/17/05, Sinang, Danny [EMAIL PROTECTED] wrote:
 Dear All,
 
 Am planning on making MySQL write its data files to disk1 and log files
 to disk2.
 
 My questions are :
 
 1. I know I can put the connections, slow, query, and InnoDB logs on
 disk2.
 
 Is it also possible (and advisable) to put the binary logs with them
 ?
 
 2. If disk2 is slower than disk1 ( like when disk1 is 15k RPM while
 disk2 is 10k RPM ), will it slow down any data-related operations ?
 
 3. I'm thinking of using DRBD to replicate changes on one MySQL Master
 server to another box. Does anyone here have a similar setup ?
 
 I plan on buying 2 identical servers with 3 disk each - 1 for the
 OS, the other for Data, and the last one for Logs.
 
 If the Logs disk crashes, will MySQL be able to write logs to the
 Logs disk on the 2nd server via DRBD ?
 
 
 Regards,
 Danny
 





Re: email notification

2005-05-31 Thread Gary Richardson
I don't think there is an SMTP API built into MySQL. A trigger could
insert a record into another table, and a cron job could send mails
based on the contents of that table.
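A rough sketch of that queue-table approach, using MySQL 5.0 trigger syntax (the table and column names here are made up for illustration):

```sql
-- hypothetical queue table that a cron job polls and drains
CREATE TABLE mail_queue (
    id      INT AUTO_INCREMENT PRIMARY KEY,
    message VARCHAR(255),
    created DATETIME
);

-- fire on every insert into the watched table
CREATE TRIGGER notify_insert AFTER INSERT ON watched_table
FOR EACH ROW
    INSERT INTO mail_queue (message, created)
    VALUES (CONCAT('new row ', NEW.id), NOW());
```

The cron job would then SELECT and DELETE rows from mail_queue and hand them to sendmail or similar.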

On 5/31/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 
 If we get triggers in mysql 5, couldn't you setup something like this?
 
 --ja
 
 On Tue, 31 May 2005, Philip Denno wrote:
 
  I doubt you would find this feature in any database. You would have to
  implement at the application code level. Basically whenever you insert
  into a table have the application framework send an e-mail.
 
  The log4j package provides this kind of functionality.
 
  See http://logging.apache.org/
 
  And look for information on the SMTP appender.
 
  Cheers,
  Philip.
 
  -Original Message-
  From: Jayson [mailto:[EMAIL PROTECTED]
  Sent: May 31, 2005 6:59 AM
  To: mysql@lists.mysql.com
  Subject: email notification
 
 
  I'm looking for a feature in mysql where it will email me if there are
  any changes in a particular database or table.
 
 
 
 --
 
 
 





Re: recovering a 17G dump

2005-05-20 Thread Gary Richardson
If it's an option, buy more RAM and more disks..

Is it a 17gb table or 17gb of data spread across several tables? If
it's across several tables, you won't have as much trouble rebuilding
the indexes.

Another option is to build another machine with a bunch of ram and a
RAID1 or RAID10 (SATA or SCSI). Import it there and copy the data
files up to the server.

On 5/20/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 I doubt there is much to do. Your hardware is the bottleneck I would
 think. You will completely kill the server regenerating the indexes
 afterwards as well, so if the idea is to get is up and running fast, it
 may not be so fast once you get it up.
 
 -Original Message-
 From: Mikel - [mailto:[EMAIL PROTECTED]
 Sent: Friday, May 20, 2005 11:05 AM
 To: mysql@lists.mysql.com
 Subject: recovering a 17G dump
 
 Hi list,
 
 I have a 17G dump from  my DB, any suggestions to recover that dump
 faster,
 any variables to tune up?... I don't have an accurate binary backup, so I
 have to restore it from my 17G text file.
 
 I remove the indexes, foreign keys, I will create them after I've
 recovered
 all the data.
 I have an innodb storage, mysql ver. 3.23-58, 80G HD, 1G RAM on a
 white-box
 linux distribution.
 
 Thanks in advance for your suggestions
 
 Greetings
 
 
 
 





Re: How to backup and restore database

2005-05-05 Thread Gary Richardson
try mysqldump -keq SDN > SDN.sql

Then you can import it on another instance using mysql -u root
new_instance < SDN.sql

On 5/5/05, zlf [EMAIL PROTECTED] wrote:
 Hi all,
 I have installed a MySQL5.0 instance on Windows. And then created a
 database( named 'SDN' ). Now I want to move this database to another
 MySQL5.0 envirnment on Linux.
 How can I make it. Thx
 
 zlf
 
 





Re: Get a Random Row on a HUGE db

2005-04-26 Thread Gary Richardson
Why don't you generate a random integer in your code and select for an
article? If there is no article there, do it again. Even if you have
to call it 50 times it may be faster than doing a full scan on the
table.

It may not work so well if there are lots of gaps in your autoincrement.

In perl (don't know about PHP), you could pass your MAX(article_id) to
rand() to limit the upper bound of the random number generated. You may
need to call int() on the result, as rand() returns a float.
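The retry loop described above is easy to sketch. Here it is in Python (the list's own examples are perl/PHP, but the idea is identical), with a plain dict standing in for the table and its auto-increment gaps:

```python
import random

def random_row(rows_by_id, max_id, rng=random.Random(42)):
    """Pick a random existing row by retrying random ids.

    rows_by_id: dict of auto-increment id -> row (gaps allowed).
    max_id: the current MAX(article_id).
    """
    while True:
        # int(rand($max)) + 1 in the perl terms used above
        candidate = rng.randint(1, max_id)
        # stands in for: SELECT ... WHERE article_id = <candidate>
        if candidate in rows_by_id:
            return rows_by_id[candidate]

# sparse "table": most auto-increment ids have been deleted
table = {i: "article %d" % i for i in (3, 17, 42, 99)}
row = random_row(table, 100)
```

Each probe is an indexed point lookup, so even a few dozen retries beat one full-table ORDER BY RAND() scan; the worst case degrades only as the id space gets sparser.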

On 4/26/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 
 The difference between using a 40mb table and a 4mb table with the same
 traffic was a 70 server load versus a .9 server load.  So it was the amount
 of data that I was selecting that was choking this feature.
 
 
 -
 
 [EMAIL PROTECTED] wrote:
 
 Thanks for that, I implemented it in my Random code.  Same problem: that
 select * portion is just a nightmare.  Remember I'm selecting 38mb of data
 when I do that.
 
 What I want to do is jump to a Valid random row.  Now If I didn't delete
 content often that would be easy grab the last autoincremented row_id and
 get a random number between 1 and End  Jump to that row to create the link.
 Very fast. Zero load
 
 So what I am trying is this.
 
 $last_row = "SELECT from firebase_content LAST_INSERT_ID()";
 $last_row_query = $dbi->query($last_row);
 $last_row_result = $row->id;
 
 But what I am seeing is this:
 
 Object id #9
 
 and not the number that is in the database.
 
 What am I sending to this variable that is wrong?
 
 
 
 [snip]
 I am wanting to display a random page from my site, But I have over
 12,000 articles right now and we add over 150 per day.  What I wound up
 doing was a Virtual DOS attack on my own server because the 40 mb db was
 being loaded to many times.
 
 I have tons of memory and a Dell Dual Xeon 2.8 gig.
 
 Can someone think up a better way of doing this?  I wish Mysql would
 just bring me back 1 valid random row  It could be used in so many ways
 it should just be a part of MySql anyway.
 
 <?php
 ini_set("display_errors", '1');
 header("Pragma: private");
 header("Cache-Control: post-check=0, pre-check=0, false");
 header("Cache-Control: no-cache, must-revalidate");
 require_once("firebase.conf.php");
 $dbi = new DBI(DB_URL);
 $stmt = "Select * from firebase_content ORDER BY RAND(NOW()) LIMIT 1";
 $result = $dbi->query($stmt);
 while($row = $result->fetchRow())
 {
  $title = $row->title;
  $cate = $row->category;
  $get = "Select cat_url from firebase_categories where
 cat_name='$cate'";
  $now = $dbi->query($get);
  $rows = $now->fetchRow();
  $url = $rows->cat_url;
  $link = $url . $title;
 }
 header("Location: http://www.prnewsnow.com/$link");
 exit;
 /* Pseudo-code that I am trying to create to relieve server stress.
 function randomRow(table, column) {
 var maxRow = query("SELECT MAX($column) AS maxID FROM $table");
 var randomID;
 var randomRow;
 do {
 randomID = randRange(1, maxRow.maxID);
 randomRow = query("SELECT * FROM $table WHERE $column = $randomID");
 } while (randomRow.recordCount == 0); return randomRow;
 }
 */
 ?>
 [/snip]
 
 Try this ...
 SELECT * FROM foo ORDER BY RAND(NOW()) LIMIT 1;
 
 12000 rows is not huge at all, so this should be pretty quick
 
 
 
 
 
 
 
 
 
 





Re: server params

2005-04-23 Thread Gary Richardson
A quick thing to check is whether DNS lookups are timing out on connect.
If you don't have DNS on your local subnet, the server may be doing
slow reverse lookups.
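If reverse lookups do turn out to be the culprit, the usual fix is to disable them in my.cnf (note this assumes your GRANTs are by IP address, since hostname-based grants stop matching):

```ini
# my.cnf -- stop the server resolving client IPs on connect
[mysqld]
skip-name-resolve
```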

Otherwise, you should definitely profile your code.

On 4/21/05, Brent Baisley [EMAIL PROTECTED] wrote:
 For starters, compare timings between running a query between the
 machines and one local to the machine. Use the command line client, not
 your web server. See if there is a significant time difference. If
 there is, there is probably something wrong on your network. If not,
 then you should check your web server for bottlenecks.
 Of course, network slowdowns could come up if the machines are on
 separate switches, one or both are not running full duplex, the switch
 isn't set to full-duplex, network errors (i.e. due to damaged or poor
 quality cables). And naturally, checking that you are not running at
 the full capacity of your network.
 
 Bottom line, put in timings in your code to find out where the
 bottleneck is. Monitor your machines for cpu, memory and I/O usage
 levels. You need to find where the bottleneck is before you start
 playing with server variables.
 
 
 On Apr 21, 2005, at 3:47 PM, Rob Brooks wrote:
 
  Hello, we have a web based application where the mysql server runs on a
  separate box from the code.  The 2 boxes are on the same subnet so
  there
  really shouldn't be any latency issues(emphasis on shouldn't .) but
  we're
  trying to track down some timeout glitches in the application.  I was
  wondering what things I might look at as far as server variable
  settings
  that might be pertinent to having the mysql server on a different box
  from
  the code.
 
 
 
  server 4.0.2 on darwin6.8
 
 
 
  Thx
 
  Rob
 
 
 --
 Brent Baisley
 Systems Architect
 Landover Associates, Inc.
 Search  Advisory Services for Advanced Technology Environments
 p: 212.759.6400/800.759.0577
 
 





Re: InnoDB Performance

2005-04-12 Thread Gary Richardson
 I've got IDE hdd. Is there simple way to check, if it's overloaded?
 Would RAID1 help? (I don't know if in raid1 there are parralel reads or
 maybe one disk is only a mirror)

If it's IDE, probably not. Moving the database to a different
subsystem would give more IO. You can use iostat to check the io on
the disk, but you kinda need iostats to compare against.


 But if it would be too high, server would use swap, what makes more io
 calls. :(

Yup, that's why you'd need more ram.

 700mb of innodb tables (where phpbb_posts_text contains 17 records =
 116mb and phpbb_search_wordmatch contains about 500 records = 500mb,
 rest is smaller) and 200mb of myisam tables

I think you also have to include indexes in your memory usage.. I'd
probably up the RAM, especially if your webserver is running on the
same box.

 As I wrote in reply to kernel's message, I've changed settings to:
 
 innodb_data_file_path = ibdata1:128M:autoextend
 innodb_buffer_pool_size=150M
 innodb_additional_mem_pool_size = 50M

Increasing your file size to 128M doesn't really help. You should
probably set it to something like:

innodb_data_file_path = ibdata1:1G;ibdata2:1G;ibdata3:1G:autoextend

This will create three 1 gig tablespace files. Your data is around 1G, so
this should give you some headroom. InnoDB uses more space for
transactions and such than just the byte size of your rows * number of
rows. The autoextend allows the last file to grow if/when you run out
of space. I prefer to add files when I start running out of room
instead of letting it autogrow.




Re: Making Slave a Master

2005-03-31 Thread Gary Richardson
I haven't done it in a nice way and I haven't done it in a long time,
but you can do this.

In the past, I've done the following:

On the slave:

1) stop the server
2) comment out all the lines in my.cnf that refer to the machine as
being a slave -- you still need your binary log directives though.
You'll also need to nuke all of your binary logs on the slave, along
with the status files.
3) start the server and test some updates -- you want to make sure
your binary logs are working

At this point, you're all set. I'm sure there is a way to do this with
a CHANGE MASTER TO command.

When you want to turn your master back on, you can use a CHANGE MASTER
TO on the master machine. This will make it a slave to your slave
server. Once you're all back in sync, you need to disable replication
and run the CHANGE MASTER TO command on the slave server.
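A promotion along those lines might look like the sketch below; hostnames, credentials, and the binlog file/position are placeholders (the real values come from SHOW MASTER STATUS on the server you are pointing at):

```sql
-- on the old slave, stop applying changes from the dead master
STOP SLAVE;
RESET SLAVE;

-- later, on the old master, re-sync it as a slave of the promoted box
CHANGE MASTER TO
    MASTER_HOST = 'promoted-slave.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'binlog.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;
```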

On Thu, 31 Mar 2005 16:02:55 -0700, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 
 We have one master and one slave database and use the slave for reads.
 If for some reason our master goes down,
 we would like to make our slave the master and use it for both writes
 and reads and then switch to the original configuration
 when the master is up, which includes updating the master copy. Limited
 downtime/locking of the second database is OK.
 
 Is this something that is easy to do or recommended? If so, what steps
 we need to go through or where can I find isome nformation regarding
 this? If not, what other approachs are there (assuming we only have two
 machines w/ above configuration).
 
 Thanks,
 
 -JF
 
 





Re: speed of 3.23 vs 4.1 for logging server

2005-03-29 Thread Gary Richardson
The RPM's from mysql.com should work fine on FC3. The source RPMs
should build as well. I would say it shouldn't take that much effort.

I don't know about performance issues, but I always figured that you
can tune InnoDB with a bit more control than myisam. For inserts, you
could probably have a larger memory pool and a longer time between
changelog commits, but you should probably benchmark. You'd want to
use the newer versions for that, I'd assume.

Plus, with MyISAM, your odd SELECT queries will lock the tables,
preventing inserts. If your queries run for more than 15 seconds, that
may affect a logging server..

just some thoughts..


On Tue, 29 Mar 2005 11:48:56 -0800, Florin Andrei
[EMAIL PROTECTED] wrote:
 Fedora Core 3, which is the Linux distribution that i'm using, is still
 shipping with 3.23.58
 The beta (test) version of Fedora 4 has mysql-4.1.10. However, FC4 is
 scheduled to go live on June 6th. But i need a MySQL server now.
 
 This server will be mostly used for logging (think: syslog logging to
 SQL), so most of the time will just receive INSERTs on a permanent basis
 from a couple of sources, to a few tables (just a handful, all of them
 in two databases) that will be rotated periodically (all tables will be
 append-only - when they're too big, they're just rotated away and the
 very old ones are deleted when disk usage hits a threshold).
 Every now and then, a user or two will perform searches through the
 logs.
 
 Is there a big performance difference between 3.23 and 4.1 in such a
 case?
 I prefer to just use whatever's offered in the current distribution
 because i don't have much time to spend tweaking the system (upgrade
 MySQL, recompile PHP, verify SELinux policies, etc.), but if there is a
 really big performance difference, i may do the effort to upgrade MySQL
 to the one offered in FC4-test.
 
 --
 Florin Andrei
 
 http://florin.myip.org/
 
 





Re: DBI mysql question.

2005-03-16 Thread Gary Richardson
Hey,

The placeholders (?) are safely escaped by the DBI library -- your
query that gets submitted to the server actually looks like:

CREATE TABLE IF NOT EXISTS CS_ 12569 (
ID_NO MEDIUMINT,
NAME VARCHAR(30),
TYPE CHAR(1)
)

I think it puts the space in, but it might actually be quoting it.

I would do something like:

$dbh->do(sprintf("CREATE TABLE IF NOT EXISTS CS_%d (
ID_NO MEDIUMINT,
NAME VARCHAR(30),
TYPE CHAR(1)
)", $T_NO));
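The same distinction (placeholders for values, plain string formatting for identifiers) can be demonstrated with Python's sqlite3 module, which uses the same ? placeholder style; the names here are illustrative:

```python
import sqlite3

def create_cs_table(conn, t_no):
    # Placeholders (?) are only legal for *values*; identifiers such as
    # table names must be built into the SQL string, so validate first.
    name = "CS_%d" % int(t_no)          # int() rejects non-numeric input
    conn.execute(
        "CREATE TABLE IF NOT EXISTS %s ("
        " ID_NO INTEGER, NAME VARCHAR(30), TYPE CHAR(1))" % name)
    return name

conn = sqlite3.connect(":memory:")
table = create_cs_table(conn, 12569)
# values, on the other hand, go through ? placeholders as usual
conn.execute("INSERT INTO %s VALUES (?, ?, ?)" % table, (1, "test", "A"))
```

The driver quotes whatever a placeholder binds, which is exactly why `CS_?` came out as `CS_ '12569'`-style broken SQL in the error above.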

out.

On Wed, 16 Mar 2005 10:10:39 -0800 (PST), Richard Reina
[EMAIL PROTECTED] wrote:
 Dear MySQL Developers and Enthusiasts,
 
 when I run these lines of code :
 
 my $T_NO = 12569;
 use DBI;
 my $dbh =
 DBI->connect("DBI:mysql:database=carr_search;192.168.0.1","user","password");
 
 my $q = "CREATE TABLE IF NOT EXISTS CS_? (
 ID_NO MEDIUMINT,
 NAME VARCHAR(30),
 TYPE CHAR(1)
 )";
 my $sth = $dbh->prepare($q);
 $sth->execute($T_NO);
 
 from w/in a program I get:
 
 DBD:mysql::st execute failed: You have an error in
 your SQL syntax near '12569 (
 ID_NO MEDIUMINT,
 NAME VARCHAR(30),
 TYP' at line 2 at ./carr_s.pl line 36.
 
 However if I cut and paste the exact same code and
 make it it's own program then execute it, it works
 perfectly. Can anyone tell me what's happening and how
 I can fix it?
 
 Thanks,
 
 Richard
 
 





Re: safe way of replication?

2005-03-09 Thread Gary Richardson
I would probably not replicate the mysql database in your case.

We run a similar setup. One problem we have is people connecting to
the wrong database server and applying updates (for example, they
think they are in the master, but it's really the slave and they
perform an update or an insert). As you can guess, it causes problems
with data integrity. In order to get around this, we remove
insert,update,delete from our users on the slave..

out.


On Wed, 9 Mar 2005 17:49:10 +0200, Chris Knipe [EMAIL PROTECTED] wrote:
 Lo all,
 
 Just wondering... Would the below be considered a safe way to do
 replication...
 
 MasterBD:  One Database (most Critical)
 SlaveDB: Replicating all databases from MasterBD as well as hosting allot of
 other 3rd party, or customer DBs.
 
 Basically, the MasterBD holds a single critical database for our company
 (i.e. main database).  The slave DB then becomes a slave for the MasterDB
 (one way replication), but also hosts allot of other not so critical
 databases?
 
 So far, this seems to be working, but I am getting a couple of errors or
 problems in regards to the mysql table for user authentication...  My guess
 is that I more than likely just don't need to replicate that DB from the
 master.
 
 Would this be considered safe??  Or should I look at a third database server
 for deployment?
 
 --
 Chris.
 
 





Re: RAID, MySQL and SATA - benchmarks

2005-03-09 Thread Gary Richardson
I found the article very interesting. It seems they couldn't trash
3ware cards enough.

We swear by 3ware cards -- other than the PCIX riser card issue, we
haven't a single problem with them. Our production database server is
running off of a RAID1 for the OS and a RAID10 for the data and every
time we are doing schema maintenance or database migration, we are
blown away by the speed of the machine.

BTW, never use Western Digital Raptor drives. Some people may
remember me posting about them around October. Out of 6 drives in the
machine, we've had 10 failures. We're currently replacing the drives
with Seagates. We'll take the 3K RPM hit for peace of mind.


On Wed, 9 Mar 2005 17:04:54 +1100, Richard Dale [EMAIL PROTECTED] wrote:
 Tweakers.net has completed a comparison of 9 serial ATA RAID 0/1/5/10
 controllers at:
 http://www.tweakers.net/reviews/557
 
 There is a specific section on MySQL performance in the section:
 http://www.tweakers.net/reviews/557/25
 
 Just thought these articles would be of interest to some (it's interesting
 to see the difference between single drive operations and multiple drive
 operations - up to 12 drives, with the different RAID levels).
 
 Here's my rough speed comparison based upon eyeballing the graphs.  Some
 controllers were better than others so this represents a rough average of
 the entire set of controllers:
 
 Single drive -  1.0
 RAID 1 - 2 disks - 1.4
 RAID5 - 3 disks -  1.7
 RAID5 - 4 disks -  2.0
 RAID10 - 4 disks - 2.0
 RAID5 - 6 disks - 2.3
 RAID5 - 8 disks -  2.4
 RAID5 - 10 disks - 2.9
 RAID5 - 12 disks - 3.1
 
 The article also highlighted the difference between the reliable
 write-through mode and the write-back mode.  In write-through mode,
 performance is degraded by approximately 50%.  Clearly if you want
 reliability, a controller with a battery backup is highly recommended.
 
 On the issue of SCSI versus SATA performance, it would appear that SCSI
 still performs somewhat better (about 20% more transactions but the test
 was comparing 15K RPM SCSI drives to 10K RPM SATA drives) but the reduced
 cost of SATA drives allows you to add more drives to achieve the same
 performance levels at lesser cost.  With Serial ATA II drives around the
 corner (with Native Command Queueing) then I think we'll find SATA will take
 a much bigger lead in database performance.
 
 Really nice work from tweakers.net - would have been interesting to see the
 Linux performance too though.
 
 Best regards,
 Richard Dale.
 Norgate Investor Services
 - Premium quality Stock, Futures and Foreign Exchange Data for
   markets in Australia, Asia, Canada, Europe, UK  USA -
 www.premiumdata.net
 
 





Re: CygWin and MySQL???

2005-03-06 Thread Gary Richardson
Hey,

Those RPMs are compiled for a Linux architecture. Cygwin is the GNU
tools (plus a few others) compiled under Windows to simulate a Linux
environment. It can't actually execute those binaries.

You have three options:

1) Install the Windows binary. You won't be able to compile things
like DBD::mysql under Cygwin if you go this route.

2) Get the SRC RPM and build and install it. I don't know if this will
actually work under Cygwin.

3) Compile from source under Cygwin.

If I were you, I'd probably install the Windows binary version. If you
need to access it from Cygwin, get the source and compile it under
windows. Before you configure it, run ./configure --help. There is an
option for client-only, or no-server or something like that. It will
give you the mysql client utilities plus a shared library for
compiling all of your linux tools against.

out.

On Sun, 6 Mar 2005 08:44:55 -, Christopher Malton
[EMAIL PROTECTED] wrote:
  
  
 
 Whenever I try to install the RPM for Linux i386 on CygWin it fails with an
 error saying: 
 
   
 
 Error: MySql-??-4.1.10-0.i386.rpm: V3 DSA signature: BAD, key ID
 5072e1f5 
 
 Error: MySql-??-4.1.10-0.i386.rpm cannot be installed 
 
   
 
 Where ?? is: devel, chared-compat or client 
 
   
 
 What do I do??? 
 
   
 
 Should I build it from source? 
 
   
 
 Chris 
 
 
 





Re: Questions regarding INNODB

2005-03-04 Thread Gary Richardson
You could configure per-table tablespaces. I suppose that would be a
little more trackable.


On Fri, 04 Mar 2005 16:38:20 -0600, Michael Satterwhite
[EMAIL PROTECTED] wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 I just found out that the hosting company we're using has disabled
 INNODB support in their MySQL configuration. I'm a bit upset that they
 don't document crippling the server anywhere, but that's neither here
 nor there.
 
 When I asked them about this, I got the following reply:
 
 InnoDB doesn't provide a mechanism to track disk quota and isn't
 suitable for a shared server environment - structures etc all reside in
 a shared area.
 
 How many of you agree or disagree with their reply? Do you know hosting
 companies that *DO* provide INNODB support? If so, obviously they don't
 consider innodb to be unusable in a shared environment. How much
 validity is there to their excuse for not providing the support? Other
 comments welcome as well.
 
 tia
 - ---Michael
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.0 (GNU/Linux)
 Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org
 
 iD8DBQFCKONcjeziQOokQnARAoQoAJ9RiwoxNaTUo9FOQyEwk2pRwpkamACfeoSA
 vFJLop1ByO2Aj1vMs5RDZHI=
 =YXbN
 -END PGP SIGNATURE-
 
 





Re: Switching to InnoDB turns out disappointing

2005-03-01 Thread Gary Richardson
What have you actually done to 'tune' the server? How are you doing
the inserts?

InnoDB uses transactions. If each row is committed as its own
transaction (the default, with autocommit on), the copy will take a
lot longer.

I assume you're doing your copying as an INSERT INTO $new_table SELECT
* FROM $old_table. Try wrapping that in a
  BEGIN; 
  INSERT INTO $new_table SELECT * FROM $old_table; 
  COMMIT;
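Equivalently, you can disable autocommit for the session. A sketch ($new_table and $old_table are the placeholders from the message above, not real names):

```sql
-- Same effect as the explicit BEGIN/COMMIT above:
SET AUTOCOMMIT = 0;
INSERT INTO $new_table SELECT * FROM $old_table;
COMMIT;
SET AUTOCOMMIT = 1;
```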

How do you have your table space configured?

Just some random thoughts..

On Tue, 1 Mar 2005 17:24:32 -0600, Alfredo Cole [EMAIL PROTECTED] wrote:
 Hi:
 
 I have switched from MyISAM tables to InnoDB, using MySQL 4.1.10 under SuSE
 8.2.
 
 My application, an ERP system developed in-house, uses 70 tables, the largest
 one holding a little over one million rows. To assist when changing table
 structures, we developed a software that creates a new table for each of the
 70 tables, one at a time, using the new structure, copies all of the records
 from the old table to the new one, drops the old one and renames the new one.
 
 Using MyISAM tables, this process takes 10 minutes using a two Xeon 2.4 Ghz
 server, with 4 Gb RAM and SCSI RAID 5 disks. The same system takes 2 1/2
 hours using InnoDB tables with the same configuration. We have followed the
 guidelines for tuning the server, and still, we find this to be excessive.
 Can somebody point to some docs, guidelines or web sites we can consult to
 improve InnoDB's performance? It seems inserting many rows decreases
 performance significantly.
 
 Thank you and regards.
 
 --
 Alfredo J. Cole
 Grupo ACyC
 www.acyc.com - www.clshonduras.com - SolCom - www.acycdomains.com
 
 





Re: Switching to InnoDB turns out disappointing

2005-03-01 Thread Gary Richardson
InnoDB is a very different platform from MyISAM.

 innodb_data_file_path = ibdata1:10M:autoextend

How big is your data? You need to set innodb_data_file_path with
enough space for it up front. Right now, your file is autoextending
constantly. I would either turn on per-table tablespaces, or
pre-create your tablespace. My preference is to pre-create:

innodb_data_file_path=ibdata1:1G;ibdata2:1G;ibdata3:1G:autoextend

That will create 3 GB of tablespace. If you need more than 3 GB of
space, it will autogrow the last file. Note that restarting the server
after this change will probably invalidate your existing data files.
It'll also probably take a while to start as it has to create those files.

 set-variable = innodb_buffer_pool_size=192M

If the server is only for MySQL and primarily InnoDB, you should set
this much higher. On your 512MB laptop, you'd probably want to go to
384MB if it's only for InnoDB. On your 4GB machine, I'd probably set
it to 2G-3G. Benchmarking will help determine the right number
based on your number of concurrent connections.
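Pulling those suggestions together, a hypothetical my.cnf fragment for a dedicated 4GB InnoDB server might look like this. The log settings are my own assumption, not from this thread, and resizing logs or tablespaces on an existing install needs care:

```ini
[mysqld]
innodb_data_file_path   = ibdata1:1G;ibdata2:1G;ibdata3:1G:autoextend
innodb_buffer_pool_size = 2G
# assumptions below -- tune by benchmarking:
innodb_log_file_size           = 256M  # larger logs smooth bulk inserts
innodb_flush_log_at_trx_commit = 1     # 2 trades durability for speed
```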

 I am using the syntax as you describe it. In my notebook, with 512M RAM, it
 takes 4 hours to complete.

InnoDB uses a lot more disk IO, for me anyway. Overall, I see a huge
concurrency jump, and the increased IO cost is well worth it.

 The top command says mysqld is using about 8% of CPU, so it must be a disk
 problem. Funny thing is, it did not show when the tables were MyISAM.

Try the settings above -- it'll probably make a difference..




Re: Geologic Time

2005-02-28 Thread Gary Richardson
If you want to represent 290 million years as an integer (290,000,000):

- An UNSIGNED INT can store values up to 4,294,967,295
- An UNSIGNED BIGINT can store values up to 18,446,744,073,709,551,615

In your schema, I'd use start_period and end_period integer columns
instead of a varchar. Integers are easier to sort and do math on.

Alternatively, you could factor out 1,000,000 from your dates and use
a float: 290.00 would represent 290 mya, while 0.01 represents 10,000
years ago. Just make sure there is enough precision on your float.
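In schema terms, a minimal sketch of the integer approach (table and column names are made up):

```sql
CREATE TABLE geologic_period (
    name         VARCHAR(64) NOT NULL,
    start_period BIGINT UNSIGNED NOT NULL,  -- years before present, e.g. 345000000
    end_period   BIGINT UNSIGNED NOT NULL   -- e.g. 250000000
);

-- Sorting and arithmetic then work directly:
SELECT name, start_period - end_period AS duration_years
FROM geologic_period
ORDER BY start_period DESC;
```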

On Mon, 28 Feb 2005 11:16:55 -0800 (PST), David Blomstrom
[EMAIL PROTECTED] wrote:
 I'm working on a geologic time database and want to
 ask a question about geologic time. Can/should you
 apply MySQL's date function to geologic time?
 
 In other words, if I create a field for the number of
 years ago a certain geologic period began or ended -
 say 260 million years ago - could I designate that
 field Date?
 
 I'll probably just designate a varchar field and enter
 data like this:
 
 345-250
 
 Then I can add million years ago or mya in my PHP
 script. Some scientists also use the term BP (Before
 Present, I think).
 
 Another thing I have to deal with is units of
 thousands. For example, the Pleistocene Epoch (Ice
 Age) ended about 10,000 years ago.
 
 I just wondered if anyone had any suggestions for
 dealing with geologic time. Thanks.
 
 __
 Do You Yahoo!?
 Tired of spam?  Yahoo! Mail has the best spam protection around
 http://mail.yahoo.com
 
 





Re: How to plan the Tablespace in a huge mysql?

2005-02-16 Thread Gary Richardson
I've always figured that if your tablespaces are tight enough to
autoextend, you're going to take a performance hit. InnoDB uses the
tablespace for transaction/rollback information -- if you have a large
transaction going through, it will be slowed down by an autoextend
operation. Plus, once you're there, you're probably going to be
autoextending forever. I can't find anything 'official' that says
this, so I could be talking straight out of my butt.

If you're talking about autoextending files on a 1.5TB database,
you're going to have 50GB files anyway. I'd sooner control it than let
it grow wildly.


On Wed, 16 Feb 2005 19:44:56 +0800, proace [EMAIL PROTECTED] wrote:
 I expect the data size is no more then 1.5TB.
 
 Why don't you like to let tablespace auto grow?
 Is it performace issue or not?
 
 If I create ten innodb_data_file and each size of innodb_data_file is 50G,
 dose some issues must be take care?
 Because the 50G is really very big for a file, I never do it.
 
 Regards,
 proace
 
 
 On Tue, 15 Feb 2005 07:27:42 -0800, Gary Richardson
 [EMAIL PROTECTED] wrote:
  My preference is to use innodb_data_file. If everything is InnoDB, I
  would probably create 25G or 50G files until you've created enough to
  hold all the data plus enough for growth. Do you know specifically how
  big the data is?
 
  I don't like to let my table space autogrow, so I have monitors
  watching the free innodb space. If it gets tight, I manually add more
  space.
 
 
  On Tue, 15 Feb 2005 20:25:36 +0800, proace Tsai [EMAIL PROTECTED] wrote:
   Hello:
  
   The mysql server is estimated to be as follows,
   1. two servers, one is master and the other is slaves (replication)
   2. two databases in mysql
   3. 513 tables in each database
   4. about 300 rows in each table
   5. about 2T disk space for each server using SAN Storage
   6. backup database periodically
  
   The running environment is follows,
   Server: Dual Intel Xeon 3.2G with 4G DDR2 Memory.
   OS: FreeBSD 5.3-RELEASE
   MySQL: 4.1 branch
   Operation: 70 ~ 80% operation is query (select statement)
  
   According to the above terms,
   how to plan the Tablespace in the mysql server?
   Using raw devices for the tablespace or innodb_data_file?
   ( How many Tablespace do I create? )
   or using innodb_data_file with innodb_file_per_table?
  
   Regards,
   proace.
  





Re: key on the month portion of a date field

2005-02-09 Thread Gary Richardson
On Wed, 9 Feb 2005 15:17:36 -0200, Gabriel B. [EMAIL PROTECTED] wrote:

 It is not too slow this way since i started the WHERE with lots of
 checks that cuts down to a medium of 200 rows that actualy gets to
 this check, but i feel uncorfotable to not use a index.

As far as I know, MySQL uses at most one index per table in a query.
If your WHERE clauses are already eliminating rows efficiently, I'm
assuming you're already using an index.
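Since MySQL of this era has no functional indexes, one workaround (my suggestion, not from the thread; table and column names are made up) is to store the month in its own indexed column and keep it in sync on insert/update:

```sql
ALTER TABLE events
    ADD COLUMN event_month TINYINT UNSIGNED,
    ADD INDEX idx_event_month (event_month);

-- Backfill once, then maintain on every insert/update:
UPDATE events SET event_month = MONTH(event_date);

-- Filter with the indexed column instead of MONTH(event_date):
SELECT * FROM events WHERE event_month = 2;
```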




Re: Slave integrity.

2005-01-07 Thread Gary Richardson
Hey,

I have some code that does a bunch of tests. I keep saying I'm going
to release it, but I don't get around to it.

My code does the following:

- checks that the schema matches
- check that keys match
- check that foreign keys match
- check that row counts match
- checksums the whole table in 5000-row increments to verify the data
matches (a lot faster than comparing row by row, but my code requires 
that each table has a single numeric primary key, like INT(11))
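One chunk's check might look like this (a hypothetical sketch, not the actual code; CRC32() requires MySQL 4.1+, and table/column names are made up). Run the same query on master and slave and compare the results:

```sql
-- Checksum one 5000-row window of the primary key:
SELECT COUNT(*) AS row_count,
       SUM(CRC32(CONCAT_WS('#', id, col1, col2))) AS chunk_checksum
FROM some_table
WHERE id BETWEEN 1 AND 5000;
```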

If you're interested, you could be an early beta tester :)


On Fri, 7 Jan 2005 05:16:49 -0500 (EST), RV Tec [EMAIL PROTECTED] wrote:
 Folks,
 
 I'm doing replication (one master, two slaves) with MySQL 4.0.23. I'm
 really happy with it. Although, every once in a while, the slaves get
 desync'ed, one at a time (the DB is 6GB large, and has 140 tables).
 
 I can't explain why this desync happens, but this wouldn't be a problem if
 I had a way (smart and trustable) to verify the integrity of the slave.
 Does anyone know how I can accomplish this?
 
 Today I'm doing a thousand queries on both servers (master/slave) to
 compare the results, but this keeps my servers unavailable temporarily
 (have to flush tables with read lock).
 
 Any thoughts on this?
 
 Thanks a lot!
 
 Best regards,
 RV Tec
 
 





Re: MySQL support for AMD64

2004-12-01 Thread Gary Richardson
Hey,

I haven't used AMD64s, but we're running our production on a machine
with a 3ware 9508 running RAID 10 on RHES 3. It's a beautiful setup.
The 3ware cards are an excellent choice, but as other posts say, use
RAID 10. If possible, put your InnoDB logs onto a separate array as
well.

out.


On Tue, 30 Nov 2004 13:23:05 -0600, Lynn Bender [EMAIL PROTECTED] wrote:
 I just received a box with the following specs:
 
 Dual AMD64
 8G ram
 Two 3ware 2.4 terabyte RAID 5 arrays.
 
 My company has been using Redhat for most of its production machines.
 
 1. Does anyone have any success/horror stories running MySQL 4.0.x
 on RHES 3/ AMD64?
 
 2. Does anyone have alternate recommendations for running MySQL
 databases in the terabyte range on AMD64?
 
 Thanks
 Lynn Bender
 
 
 
 
 





Autocommit and kill?

2004-11-23 Thread Gary Richardson
Hey guys,

We just had a small incident. I was hoping for confirmation on why it happened.

A person updated a large table. They were trying to update one record,
but due to keyboard fumbling sent the UPDATE without a WHERE
clause. Left unchecked, it would have made a lot of records in the
database identical.

The table has around 6 million rows and is InnoDB. The command was
entered from the standard command line client.

The query ran for under 60 seconds before it was killed. As we were
prepping to restore the table from backups, someone did a GROUP BY to
find out how many rows got fragged. It turns out the command didn't
affect the table at all.

I have two guesses as to why this happened:

1) For InnoDB tables, an update query implicitly runs BEGIN;
$query; COMMIT;, and killing the query prevented the commit from happening.

2) The first 60 seconds of the query were spent preparing or sorting
or something.

Most likely I'm thinking it was #1, but I just wanted to see if that's
actually the case.

Thanks.




Re: Auto-Increment Starting Point? (Multimaster Replication Question)

2004-11-19 Thread Gary Richardson
The binary logs used for replication record the auto-increment value
to use before each insert query, so the value generated on the master
is always replicated faithfully. If a row already exists with that
auto-increment value on the slave, my guess is that replication will
die with an error.

I'm not too sure which version of MySQL introduced the feature.
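For what it's worth, MySQL 5.0.2 later added server variables aimed at exactly this multimaster collision problem (they do not exist in the 4.x versions discussed here):

```ini
# Server A
auto_increment_increment = 2   # step between generated values
auto_increment_offset    = 1   # A generates 1, 3, 5, ...

# Server B
auto_increment_increment = 2
auto_increment_offset    = 2   # B generates 2, 4, 6, ...
```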

On Thu, 18 Nov 2004 13:35:08 -0800, Robinson, Eric
[EMAIL PROTECTED] wrote:
 When you set a field to auto-increment, can you tell it where to start?
 
 I'm trying to set up multimaster replication, but I'm worried about
 auto-increment collisions.
 
 Q: If server A starts auto-incrementing at 0, and server B starts
 auto-incrementing at some point higher than the maximum number of
 records, would that allow replication without auto-increment collisions?
 
 Q2: Assuming you can tell it where to start auto-incrementing, what
 happens after the following sequence:
 
 1. Johnny inserts record 1000 at server A.
 
 2. Server A receives record number 5000 from server B via replication.
 
 3. Mary needs to insert a new record. Does server A number the new
 record 1001 or 5001?
 
 --
 
 Eric Robinson
 





Re: Low-end SATA vs. SCSI

2004-11-12 Thread Gary Richardson
If you are talking about the WD Raptors -- stay away. Out of the 6 we
used, 3 failed. A few Google searches will turn up the same story from
other users.

On the other hand, they do fly. RAID 10 them on a 3ware 9500 and
you'll be amazed.

On Fri, 12 Nov 2004 13:06:10 -0800, Larry Lowry [EMAIL PROTECTED] wrote:
 For cost reasons I use SATA.  Does the machine already
 have a SCSI card in it? If so I would use SCSI.  If not
 I would give one of the newer 10k SATA drives a spin.
 
 Larry
 
 
 
 
 - Original Message -
 From: Fagyal Csongor [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Friday, November 12, 2004 12:03 PM
 Subject: Low-end SATA vs. SCSI
 
  Hi List,
 
  I am putting in a separate disk for our MySQL (4.1.7) server. I have
  some MyISAM, some InnoDB tables. Lots of reads, lots of writes (mostly
  atomic ones, insert/update one row), a few million rows per table,
  approx. 100-400 queries per second.
 
  What would you say is better (with respect to performance): a small SCSI
  disk (say 18G, 10kRPM) or a bigger SATA (say 120G, 7200RPM)?
 
  Thank you for your feeback,
  - Csongor
 
 
 
 
 
 
 





Re: Alter table only on the master?

2004-11-09 Thread Gary Richardson
Hey,

As I found out on this list, you want to SET SQL_LOG_BIN=0 in your
session. The user making the changes needs to have the SUPER privilege.

http://dev.mysql.com/doc/mysql/en/SET_SQL_LOG_BIN.html
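A sketch of the session (the ALTER is the example from the question below; the setting affects only the current connection):

```sql
SET SQL_LOG_BIN = 0;
ALTER TABLE my_table TYPE = MyISAM;  -- not written to the binary log
SET SQL_LOG_BIN = 1;                 -- resume logging for this session
```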

On Tue, 9 Nov 2004 20:35:22 +0100, harm [EMAIL PROTECTED] wrote:
 Hello,
 
 I want to convert a table from innodb to myisam on the _master only_, not
 on the slaves. Is there a secret 'do this alter table only on the
 master' command, or do I really have to do some 
 voodoo-replication-queries-skipping
 on the slaves?
 
 Thanks,
 Harmen
 
 --
The Moon is Waning Crescent (10% of Full)
 
 





Re: Best Practices

2004-11-09 Thread Gary Richardson
Have you thought about locking the reporting tables against writes
while they're being refreshed? That would eliminate the dirty reads.

If you are using InnoDB for the reporting tables, you could wrap the
update operation in a transaction, which accomplishes the same thing.

You could also use replication to move the load to another server altogether.

On Tue, 9 Nov 2004 16:06:18 -0800 (PST), Michael Haggerty
[EMAIL PROTECTED] wrote:
 I am working on a data warehousing solution involving
 mysql and have a question about best practices. We are
 standardized on mysql 4.1, and this is for a rather
 picky client.
 
 We have a relational transaction database that stores
 the results of customer calls and a dimensional
 reporting database used as a data mart by several
 applications. Each night, we run a process that
 aggregates the number of calls, the subjects of each
 call, and various other data to populate the reporting
 database. We would like to move to a real time
 solution, and are struggling with the best way to
 implment it.
 
 What we are considering is a solution where we mirror
 the transactional database and repopulate key tables
 in the reporting database every minute or few minutes.
 I am loathe to do this, mainly because it would add to
 our server load and could possibly lead to 'dirty
 reads' (i.e. where one table in the reporting database
 is populated with fresh data but others are not). At
 the same time, the client is demanding we implement
 something.
 
 Does anyone have any war stories or suggestions for
 how to accomplish this?
 
 Thank You,
 M
 
 





Re: Building new db linux box, Hardware questions

2004-11-02 Thread Gary Richardson
 1) would it be better to go brandnew with a single processor or like a quad
 p4 that's a year or two old.

Depends on how your application runs. BTW, I don't think they made
quad P4s -- you can't run P4 chips in SMP; they must be Xeons.
 
 2) I am going to running raid 5, so I assume that I should run scsi drives?

I'd recommend getting one of the 3ware 9500 raid controllers and
getting SATA drives. You'll save yourself a bundle of money. Although,
don't get the WD Raptor drives -- they have a very high failure rate.
You can also then run in RAID 10, which, in most cases, is faster than
RAID 5.

 3) my database is not really huge, any tricks on how to determine how much
 ram I need?

There is a formula somewhere. It's roughly: global buffers +
(per-thread buffers * max client count) + OS disk cache. You'd have to
look up the individual buffer sizes to get an exact number.
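As a rough worked illustration (every number here is an assumption, not a recommendation):

```text
global buffers:   key_buffer 256M + innodb_buffer_pool 1G        = ~1.25G
per-connection:   (sort_buffer 2M + read_buffer 1M) * 100 conns  = ~300M
OS disk cache:    whatever is left over (~450M on a 2GB box)
```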

out.




Re: optimizing database

2004-10-22 Thread Gary Richardson
Hey,

I'm interested. I currently run a few websites with tables of this
size and larger.

Thanks.


On Thu, 21 Oct 2004 15:19:16 -0700 (PDT), Razor Fish [EMAIL PROTECTED] wrote:
 i need consulting help for optimizing a database with
 1.2 million records to handle 3-4 million hits a day.
 
 if interested please contact me off the list for
 details. im willing to pay for your time.
 
 ___
 Do you Yahoo!?
 Declare Yourself - Register online to vote today!
 http://vote.yahoo.com
 
 





Re: mysql and innoDB

2004-10-21 Thread Gary Richardson
Is there any output from the command? Any errors?

Try SHOW CREATE TABLE x -- it should show you the TYPE= at the end.
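One likely explanation: if InnoDB support is unavailable (compiled out or the server was started with skip-innodb), the ALTER quietly leaves the table as MyISAM. A sketch of how to check (SHOW WARNINGS needs MySQL 4.1+; the original poster is on 4.0.15):

```sql
SHOW CREATE TABLE x;                 -- look for TYPE=InnoDB (or TYPE=MyISAM) at the end
ALTER TABLE x TYPE = InnoDB;         -- may silently fall back if InnoDB is disabled
SHOW WARNINGS;                       -- 4.1+: reveals why the engine was substituted
SHOW VARIABLES LIKE 'have_innodb';   -- should say YES
```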

out.


On Thu, 21 Oct 2004 11:10:10 +0200, Stefan Gnann
[EMAIL PROTECTED] wrote:
 Hi all,
 
 I have a mysql database 4.0.15 on a suse linux 9.x system running.
 Now we have to use the features of InnoDB tables (rollback, a.s.o.).
 Up to now we use the standard table type MyISAM.
 Now I want to change the tabel type with the command ALTER TABLE x TYPE
 = InnoDB.
 The command doesn't show any effect !!!
 
 What can I do to change the table type?
 What did I have to do to change the table type??
 
 best regards
 
Steve
 
 





Ignore a single query in replication

2004-10-21 Thread Gary Richardson
Hey,

Is there a way to tell the slave to not execute a query without
ignoring tables or databases?

There are a bunch of queries that run on the master for statistical
purposes; they don't use temp tables and generate large amounts of
data. These queries don't need to run on the slaves and in fact slow
them down quite a bit.

I've tried hunting around the online docs, but I can't seem to find
anything. For some reason I thought there was some sort of comment
that I could put in front of my query to accomplish this.

Thanks.




Re: Ignore a single query in replication

2004-10-21 Thread Gary Richardson
 If the queries modify tables that are being replicated, then how would
 the slave remain
 in sync with the master if it didn't replicate them?

These are essentially temporary tables that aren't declared as such --
they typically take a long time to derive (30 minutes to an hour) and
are used for multiple queries before being dropped.




Re: Ignore a single query in replication

2004-10-21 Thread Gary Richardson
 If this is the case you can create the table in a separate database, and
 in your mysql configuration tell the binary logging to exclude that
 database.  Then anything in that specific database won't get replicated,
 I believe you can only do this exclusion on the database level, not per
 table.  But you can perform all your queries across databases just fine.

Yeah, I thought about that. We're currently creating the tables in the
test db. I'm just worried about someone trying to update/insert/delete
data back in the main database based on a query against the temporary
data; in that case the source data wouldn't exist on the replica.

I suppose I could create a database called test_not_replicated or
something similar to help discourage people from doing that.

Thanks.




Re: Ignore a single query in replication

2004-10-21 Thread Gary Richardson
This was exactly what I was looking for :) Too bad you need to be
SUPER to do it :(

Thanks.


On Thu, 21 Oct 2004 12:44:11 -0700, Sanjeev Sagar
[EMAIL PROTECTED] wrote:
 Try SET SQL_LOG_BIN=0 before you run your queires on master. This will be valid for 
 that connection only.
 
 
 
 
 -Original Message-
 From: Gary Richardson [mailto:[EMAIL PROTECTED]
 Sent: Thu 10/21/2004 11:24 AM
 To: Mysql General (E-mail)
 Subject: Ignore a single query in replication
 
 Hey,
 
 Is there a way to tell the slave to not execute a query without
 ignoring tables or databases?
 
 There are a bunch of queries that happen on the master for statistical
 purposes that don't use temp tables and generate large amounts of
 data. These queries don't need to run on the slaves and in fact slow
 it down quite a bit.
 
 I've tried hunting around the online docs, but I can't seem to find
 anything. For some reason I thought there was some sort of comment
 that I could put infront of my query to accomplish this.
 
 Thanks.
 
 
 





Re: mysqldump and binary logs

2004-10-20 Thread Gary Richardson
If you are using a replica, you can grab the binary log position from
SHOW SLAVE STATUS or SHOW MASTER STATUS. I'm not sure there is an
equivalent call for non-replicated servers.

BTW, if you aren't locking everything you're dumping, you'll probably
never get a consistent snapshot to replay your binary logs against.

out.

On Wed, 20 Oct 2004 11:37:59 -0400, Tucker, Gabriel
[EMAIL PROTECTED] wrote:
 Hi All
 
 I have databases that are writing binary logs that I back up daily using mysqldump.
 
 I was wondering what is the best way to determine which binary logs have been 
 created after the mysqldump, in the situation where I might do some restore that 
 will involve the logs.
 
 Thanks
 Gabe
 
 
 There are no problems, only solutions.
 
 Gabe Tucker
 Bloomberg LP
 (609) 750 6668 - P
 (646) 268 5681 - F
 
 
 
 





Re: B-tree index question

2004-10-20 Thread Gary Richardson
If you are using MyISAM tables, have you thought about using MERGE
tables instead? You could partition your table into several smaller
tables. I don't know how the performance would be on a billion-record
table, but from my understanding it would shrink each underlying index.

http://dev.mysql.com/doc/mysql/en/MERGE.html
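A minimal sketch of the idea (MySQL 4.x syntax; the component tables must have identical definitions, and all names here are made up):

```sql
CREATE TABLE log_a (id INT NOT NULL, msg VARCHAR(255), KEY (id)) TYPE=MyISAM;
CREATE TABLE log_b (id INT NOT NULL, msg VARCHAR(255), KEY (id)) TYPE=MyISAM;

-- The MERGE table presents log_a and log_b as one table;
-- new rows go into the last table in the UNION list.
CREATE TABLE log_all (id INT NOT NULL, msg VARCHAR(255), KEY (id))
    TYPE=MERGE UNION=(log_a, log_b) INSERT_METHOD=LAST;
```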

On Wed, 20 Oct 2004 11:09:43 +0100, Phil Bitis [EMAIL PROTECTED] wrote:
 Thanks for the informative reply Sergei,
 
 We're actually just using an INT field at the moment, we were going to move
 over to BIGINT when we start using 64-bit MySQL (soon).
 Do you know where I should look for information on writing our own table
 handler?
 
 Thanks,
 -Phil




Re: Char to Varchar on Innodb

2004-10-20 Thread Gary Richardson
I'm not too worried about it myself, but I can see how less magic in a
database server is better.

Without actually trying it: if I mix CHARs and VARCHARs in a table,
does SHOW CREATE TABLE reflect the internal conversion?

If you wanted to reduce fragmentation in an InnoDB table, wouldn't all
the CHAR fields need to be packed to the left? For example, you'd still
get fragmentation if your columns went CHAR, VARCHAR, CHAR -- unless
the engine underneath reorders them automagically.

Thanks.

On Wed, 20 Oct 2004 11:29:30 +0300, Heikki Tuuri
[EMAIL PROTECTED] wrote:
 Gary,
 
 those 'silent column specification changes' affect all table types. That
 feature has caused lots of user questions over years. It is in the TODO to
 remove that feature from MySQL, because it is not standards compliant. In
 InnoDB, a reason to use a CHAR column in some cases is to reduce
 fragmentation if there are lots of updates to that column. A CHAR column
 takes a fixed space. Silent column specification changes in many cases
 defeat this optimization which would otherwise be available to users.
 
 Best regards,
 
 Heikki Tuuri
 Innobase Oy
 Foreign keys, transactions, and row level locking for MySQL
 InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
 tables
 http://www.innodb.com/order.php
 
 Order MySQL technical support from https://order.mysql.com/
 
 
 
 
 Hey,
 
 From http://dev.mysql.com/doc/mysql/en/Silent_column_changes.html:
 
 snip
 If any column in a table has a variable length, the entire row becomes
 variable-length as a result. Therefore, if a table contains any
 variable-length columns (VARCHAR, TEXT, or BLOB), all CHAR columns
 longer than three characters are changed to VARCHAR columns. This
 doesn't affect how you use the columns in any way; in MySQL, VARCHAR
 is just a different way to store characters. MySQL performs this
 conversion because it saves space and makes table operations faster.
 See section 15 MySQL Storage Engines and Table Types.
 /snip
 
 Does this affect all table types? I'm curious if this is happening on
 my InnoDB tables as well. No problems, just curiosity..
 
 Thanks.
 
 
 





Char to Varchar on Innodb

2004-10-19 Thread Gary Richardson
Hey,

From http://dev.mysql.com/doc/mysql/en/Silent_column_changes.html:

snip
If any column in a table has a variable length, the entire row becomes
variable-length as a result. Therefore, if a table contains any
variable-length columns (VARCHAR, TEXT, or BLOB), all CHAR columns
longer than three characters are changed to VARCHAR columns. This
doesn't affect how you use the columns in any way; in MySQL, VARCHAR
is just a different way to store characters. MySQL performs this
conversion because it saves space and makes table operations faster.
See section 15 MySQL Storage Engines and Table Types.
/snip

Does this affect all table types? I'm curious if this is happening on
my InnoDB tables as well. No problems, just curiosity..

Thanks.




Re: Number of Rows in DB.

2004-10-14 Thread Gary Richardson
Note that the row counts SHOW TABLE STATUS reports for InnoDB are
educated guesses -- InnoDB has to do a full table scan to get an exact count.

On Thu, 14 Oct 2004 10:19:19 -0400, Michael Stassen
[EMAIL PROTECTED] wrote:
 Sujay,
 
 The output of
 
DESCRIBE table_name;
 
 does not include number of rows.  Perhaps you meant
 
SHOW TABLE STATUS LIKE 'table_name';
 
 or simply
 
SHOW TABLE STATUS;
 
 as Michael suggested, to get output for each and every table.  If so, it is
 important to note that the row count for InnoDB tables is an approximation,
 rather than an accurate count, so it won't help here.




Re: cross database joins performance hit?

2004-10-13 Thread Gary Richardson
I've worked on projects before where the schema was split up into
separate databases. Joins across databases are fine, but performance can
hit you in another place.

If you use something like perl's Apache::DBI, you will increase the
number of open connections to your server -- assuming your application
holds a connection to each database.


On Tue, 12 Oct 2004 17:43:11 -0700, Jason [EMAIL PROTECTED] wrote:
 I've tried to find references to if there are any design flaws with using
 multiple databases or not however was unable to locate anything (but I was
 told by a previous co-worker that there were performance hits).
 
 Are there any performance hits or design flaws by separating a large
 database into separate databases then cross-database joining the tables?
 (these tables have anywhere between 1m and 5m+ rows)
 
 Thanks for any pointers in advance. If theres a major design flaw to
 splitting a database up into separate databases I'd like to know before
 getting to far down the path on this project :-) Thanks much for any
 answers!
 





Re: Some basic and advanced replication questions

2004-10-13 Thread Gary Richardson
 (e) If you had just a one way master-slave relationship and you delete
 data on the slave, that is very bad.  The slave is now not the same as
 the master, the record will not be re-inserted, if you go to update the
 record on the master, when the command gets replicated to the slave an
 error will be generated and replication will halt, awaiting you to
 manually fix it.  However, if you have set up a master/slave pair where
 replication goes both ways as described in (a) or a ring as described in
 (c) then you have no issue as all servers will be kept consistent.

Does anyone have scripts for checking the integrity of their slave
servers? I've been writing some pretty in-depth stuff in perl and I
plan on releasing it eventually. It's pretty ugly right now, even for
perl, but I will eventually clean it up.

Some of the tests I do are:

- make sure the same tables exist on the master and slaves
- compare the schema (columns defs, keys, table types)
- check that the row counts match
- randomly check a percentage of rows in each table for large tables
- for 'small' tables perform the checksumming as found in Sys Admin
Mag's Taming the Distributed Database Problem: A Case Study Using
MySQL (http://samag.com/articles/2004/0408/)

Has anyone already gone through this effort?
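For the 'small' tables case, the checksum comparison boils down to hashing row contents the same way on the master and each slave and comparing the digests. A minimal perl sketch of that idea -- the `table_checksum` helper is hypothetical, and fetching the rows (e.g. via DBI's `fetchall_arrayref` with an ORDER BY on the primary key) is left out:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5;

# Hypothetical helper: compute a digest over a table's rows, where each
# row is an arrayref of column values fetched in a deterministic order.
# Equal digests on master and slave => the tables (almost certainly) match.
sub table_checksum {
    my ($rows) = @_;
    my $md5 = Digest::MD5->new;
    for my $row (@$rows) {
        # NULLs must hash differently from empty strings
        $md5->add(join("\x00", map { defined $_ ? $_ : "\\N" } @$row), "\n");
    }
    return $md5->hexdigest;
}
```

Comparing the digest from the master against each slave flags mismatched tables without shipping the full row data across the network.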




Re: connecting via TCP/IP

2004-10-09 Thread Gary Richardson
Did you insert into the permission tables or did you use a GRANT
statement? You may need to flush your permissions (FLUSH PRIVILEGES).


On Sat, 9 Oct 2004 20:57:29 +0200, roland [EMAIL PROTECTED] wrote:
 Hello again,
 
 referring to my previous question, I forgot to mention that xxx.xxx.xxx.xxx is
 an IP number.
 
 using mysql -h xxx.xxx.xxx.xxx -u user -puserpassword
 
 I would expect to be able to achieve a connection.
 
 The user and userpassword I am supplying were inserted with GRANT statements TO
 'user'@'%' as well as to localhost.
 
 I can ping the server so the network connection is fine etc.
 
 Do I need some option to be able to supply the host as an IP address to the
 mysql call?
 
 Thanks again.
 
 Regards.
 
 Roland
 
 --
 -I have not failed. I've just found 10,000 ways that won't work.
 
 -(Thomas Alva Edison 1847-1931)
 
 





Restarting Replication from Backup

2004-10-08 Thread Gary Richardson
Hey guys,

I'm running a master/slave setup with v4.0.20. There are a handful
of databases being replicated. For backups, we stop replication on the
slave and pull a text dump using mysqldump. I also record a 'SHOW
SLAVE STATUS' from the time of the backup.

My replica server crashed last night. It looks like it had something
to do with the disk cache as the replica was trying to replay already
committed transactions (lots of duplicate record errors).

After running an integrity check on the servers, the row counts are
out of sync for far more large tables than I care to manually fix. I'm
thinking of:

1) deleting all the data on the replica
2) pulling a backup from a few days ago and re-importing it with
replication disabled on the replica (i.e., comment out all replication
configuration directives).
3) artificially recreating master-info-file using the information from
'SHOW SLAVE STATUS'
4) restart the replica with replication turned back on

With MySQL's two phase replication, will the IO thread automatically
figure out what file to start downloading and where to resume?

Thanks.




Re: Restarting Replication from Backup

2004-10-08 Thread Gary Richardson
Hey,

The perl script that does the backup issues a SLAVE STOP just before
it starts dumping. It also grabs SHOW SLAVE STATUS, which has a bunch
of file positions and I'm pretty sure it's everything that is in the
master.info file.

The backup I'd be pulling is going to be at least a day old, so it
will be out of sync and reseting the master will not help.

Thanks.

On Fri, 8 Oct 2004 13:24:00 -0400, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
  
 Gary, 
  
 We go through the process of removing the slave DB and restoring from
 backup on a fairly regular basis (due to testing new functionality of our
 app). My first question would be about your backups - how are you doing
 them? 
  
  If you are doing a filesystem backup (tarring the entire mysql data
 directory and replication files, for instance) then your master.info and
 relay log will have the information on where to start replication
 from the master. This is what we are doing. 
  
 I'm not 100% certain about using a mysqldump type program, but I suspect
 that you would need to reset the master logs after the backup to tell the
 slave to basically start from line 1. I don't know how you would ensure that
 the master would reset at the very last command that was backed up on the
 slave; perhaps someone using this type of slave/backup scenario could share
 some knowledge on the correct procedure. 
  
  
 Regards, 
  
 Scott Tanner 
 Systems Administrator 
 Rowe/AMi 
   
  
  
  





Re: How to extract Particular Tables from a Dump file taken with mysqldump

2004-10-08 Thread Gary Richardson
The dump files are just MySQL CREATE TABLE and INSERT statements. You
can use a perl script to read the dump file line by line and switch
output files whenever it hits a 'CREATE TABLE tablename' line.
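A minimal version of such a splitter might look like this (assuming the dump quotes table names with backticks, as mysqldump does; the `split_dump` name is made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Split a mysqldump text dump into one file per table: each time a
# CREATE TABLE line is seen, subsequent lines go to <tablename>.sql
# in $dir. Returns the sorted list of files written.
sub split_dump {
    my ($in_fh, $dir) = @_;
    my ($out, %written);
    while (my $line = <$in_fh>) {
        if ($line =~ /^CREATE TABLE `?([^`\s(]+)`?/) {
            close $out if $out;
            open $out, '>', "$dir/$1.sql" or die "can't write $1.sql: $!";
            $written{"$1.sql"} = 1;
        }
        print {$out} $line if $out;   # lines before the first CREATE are skipped
    }
    close $out if $out;
    return sort keys %written;
}

# e.g.: open my $fh, '<', 'dump.sql' or die; split_dump($fh, '.');
```

You can then restore a single table with something like `mysql dbname < tablename.sql`.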


On Fri, 8 Oct 2004 23:14:07 +0530, Buchibabu [EMAIL PROTECTED] wrote:
 Hi,
 
 Please let me know how to extract a few tables from a dump file taken with
 mysqldump. I know how to extract the whole file, but I would like to do it
 with mysql.
 
 Thanks,
 Buchibabu





Re: Restarting Replication from Backup

2004-10-08 Thread Gary Richardson
Hey guys,

I think I have this figured out. I'm just doing some testing. If I
manually recreate the master.info file, it sort of works, but I get
the following error from the IO thread:

Error reading relay log event: slave SQL thread aborted because of I/O error

So, instead of manually creating the replication index and info files,
just delete them. Then edit your my.cnf and make your master
connection info invalid in some way -- use a bogus host or change the
username or password. Basically, you want to be able to start your
server with replication configured but not actually replicating.

From there, start your server, issue a SLAVE STOP and then a CHANGE
MASTER TO statement to fix it all. This lets you point back at the real
master server and specify the log file and position you were
working from.
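Concretely, that sequence might look like the following (hostnames, credentials, file names, and positions here are made up for illustration):

```ini
# my.cnf on the replica: a bogus master host lets mysqld start with
# replication configured but unable to connect
[mysqld]
master-host     = bogus.invalid
master-user     = repl
master-password = secret
```

```sql
-- once the server is up:
SLAVE STOP;
CHANGE MASTER TO
    MASTER_HOST='real-master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='master-bin.012',  -- from the recorded SHOW SLAVE STATUS
    MASTER_LOG_POS=4861;
SLAVE START;
```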

This looks like it works -- I'm still using the corrupt data so I'm
getting duplicate queries, but the slave seems to start and run.

Thanks.




Re: Restarting Replication from Backup

2004-10-08 Thread Gary Richardson
Yeah, that's exactly what I figured out. We do record the SHOW SLAVE
STATUS settings before each backup.

I find we need individual tables restored far more frequently than
whole databases. It's much easier using mysqldump and perl to dump
each table in text to its own file. This is especially true when you
are mixing table types or using InnoDB with table files.


On Sat, 9 Oct 2004 00:48:37 +0200, Mikael Fridh [EMAIL PROTECTED] wrote:
 
 
 On Friday 08 October 2004 19.01, Gary Richardson wrote:
  Hey guys,
 
  I'm running a master/slave setup with v4.0.20. There are a hand full
  of databases being replicated. For backups, we stop replication on the
  slave and pull a text dump using mysqldump. I also record a 'SHOW
  SLAVE STATUS' from the time of the backup.
 
  My replica server crashed last night. It looks like it had something
  to do with the disk cache as the replica was trying to replay already
  committed transactions (lots of duplicate record errors).
 
  With MySQL's two phase replication, will the IO thread automatically
  figure out what file to start downloading and where to resume?
 
 Nothing is automagic but if you did
 1. stop slave;
 2. show slave status; and recorded the info
 before you did your backup you would restore the backup data and start
 replication after jumping to the correct position on the slave with the
 following:
 CHANGE MASTER TO
 MASTER_LOG_FILE='george-bin.5528', -- Relay_Master_Log_File
 MASTER_LOG_POS=290303997; -- Exec_master_log_pos
 
 this command will purge any relay logs and reset the slave thread to the
 requested position.
 
 It can be a good idea to always have skip-slave-start set in your .cnf file on
 the slaves.
 
 consider just copying/tar'ing the mysql datadir - raw file backup and restore
 are more efficient than mysqldumps.
 
 --
  ___
 |K  | Ongame e-Solutions AB
 | /\| Mikael Fridh / Technical Operations
 |_\/| tel: +46 18 606 538 / fax: +46 18 694 411
 
 
 

