I've built several datamarts using perl and MySQL. The largest ones
have been up to about 30GB, so I'm not quite on your scale.
for #1, I have an etl_id in the fact table so I can track back any
particular ETL job. I typically make it a dimension and include date,
time, software version, etc. That
Hi,
Can I pre-create innodb tablespace using something like dd (or any other
better tool)?
I have a server that is getting low on innodb table space and I want to add
15GB or so, but I want to minimize downtime. The server is a bit slow and I
estimate it will take around 10-20 minutes or so. I w
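Something along these lines is what I had in mind (path and size are examples -- 16MB here for illustration, count=15360 would give ~15GB; I haven't verified whether InnoDB will accept a pre-created file without re-initializing it):

```shell
# Pre-allocate a file by writing zeros; scale count up for real use.
dd if=/dev/zero of=/tmp/ibdata2 bs=1M count=16
ls -l /tmp/ibdata2
```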
You'd have to use another table. I don't believe mysql views will keep your
'moving average' values.
If you're using 5.1, you can automate the select/insert with an event --
it's a cron like tool built into mysql.
If you have a datetime field in either of the tables that represents the
'action'
I believe the command is PURGE LOGS or something like that.
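If memory serves (check the docs for your version), the shape of the statement is something like:

```sql
-- Drops binary logs up to, but not including, the named file.
-- The filename is just an example -- check SHOW MASTER LOGS first,
-- and make sure no replica still needs the files you're removing.
PURGE MASTER LOGS TO 'mysql-bin.000042';
```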
On 6/3/06, Jacob, Raymond A Jr <[EMAIL PROTECTED]> wrote:
-Original Message-
From: Gary Richardson [mailto:[EMAIL PROTECTED]
Sent: Saturday, June 03, 2006 14:10
To: Jacob, Raymond A Jr
Cc: mysql@lists.mysql.com
Subje
Yes you can. Be sure not to delete the one the database is currently writing
to.
Are you replicating your database? Are you using them for point in time
restores?
If you're replicating, make sure your replicas have all caught up on the
files you're deleting.
On 6/3/06, Jacob, Raymond A Jr <[EMA
Yeah, pretty much, but I would keep cust_id around and start over with a
true autoincrement from 1.
On 5/9/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
and this would be, in other words, the solution 2, right?
> If you really want to change the customer ID, then you can always copy
the
>
I would drop the primary key off of your cust_id and add a new field like
customer_id and relink using the old fk.
I'd probably do this by creating a new table and doing an INSERT INTO SELECT
FROM to populate the autoincrement and rename the table.. From there, add
the fk's to your other tables a
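Roughly, with made-up table and column names, I mean something like:

```sql
-- customers_new gets a fresh AUTO_INCREMENT customer_id; the old
-- cust_id is kept around so the child tables can be relinked.
CREATE TABLE customers_new (
  customer_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  cust_id     INT UNSIGNED NOT NULL,
  name        VARCHAR(100)
);

INSERT INTO customers_new (cust_id, name)
  SELECT cust_id, name FROM customers ORDER BY cust_id;

RENAME TABLE customers TO customers_old, customers_new TO customers;
-- From here, UPDATE each child table's foreign key by joining on cust_id.
```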
Look at your my.cnf for a configuration directive called
'innodb_data_file_path'. This is where you configure the files for the
innodb table space. The last one is probably an auto-grow. My guess is that
every time it complains, it's just added 8MB to the file. If you remove the
auto-grow (and I ca
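For reference, the directive looks something like this (file names and sizes are examples):

```ini
[mysqld]
# Fixed-size first file, plus a second file that starts at 10MB and
# auto-grows (in 8MB increments by default) as the tablespace fills.
innodb_data_file_path = ibdata1:2G;ibdata2:10M:autoextend
```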
If you use the 5.1 tree, you should be able to use the new EVENT stuff
to accomplish this:
http://dev.mysql.com/doc/refman/5.1/en/events.html
CREATE EVENT check_table ON SCHEDULE EVERY 1 WEEK
STARTS '2006-03-10 19:00:00' DO CALL your_check_table_procedure_here();
Or something like that..
On 3/10/06, René
What are the problems you've been experiencing? Did you convert all tables?
How big is the database?
On 2/9/06, Shaun Adams <[EMAIL PROTECTED]> wrote:
>
> I have a customer who has been in production for a few weeks now having
> converted from MyISM to INNODB. We have been experiencing a few prob
I would typically do this in code, but you could also create a stored
procedure that does something like:
- create a temporary table
- populate it with all possible values with a count field set to 0
- run an REPLACE INTO temp_table
- dump the contents of the temp table
You may also want to just
CREATE TABLE blah LIKE old_table
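As a rough SQL sketch of those steps (the table and column names are invented):

```sql
-- 1) temp table seeded with every possible value and a zero count
CREATE TEMPORARY TABLE value_counts (
  val INT NOT NULL PRIMARY KEY,
  cnt INT NOT NULL DEFAULT 0
);
INSERT INTO value_counts (val) SELECT v FROM all_possible_values;

-- 2) overwrite the rows that actually occur with their real counts;
--    values with no rows keep their zero
REPLACE INTO value_counts
  SELECT val, COUNT(*) FROM observations GROUP BY val;

-- 3) dump the result
SELECT * FROM value_counts ORDER BY val;
```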
On 1/11/06, George Law <[EMAIL PROTECTED]> wrote:
> Hi All,
>
>
>
> Just another follow up on my emails.
>
>
>
> What this came down to was sheer number of records and indices in my
> tables causing it to take several minutes to insert 10-20K records via
> "load da
upgrading between versions. One possible problem you could be facing
is a change in how whitespace is handled in indexes.
On 12/30/05, Christian Meisinger <[EMAIL PROTECTED]> wrote:
> Gary Richardson wrote:
> > We moved directly from 4.0.20 to 5.0.16. Worked like a charm. I had a
>
We moved directly from 4.0.20 to 5.0.16. Worked like a charm. I had a
script that went through and optimized all tables with keys on
text/varchar and char fields.
We're also slowly ALTERing innodb tables to get them into the new
compact format.
On 12/28/05, Gleb Paharenko <[EMAIL PROTECTED]> wro
I don't have any experience with dual core yet (my first dual dual
core box is scheduled to arrive this week!!). I don't think I'd opt
for a dual core in place of 2 single cores. I'm hoping (expecting?) to
see an advantage in 2 DC over 2 SC.
As far as SCSI over SATA goes, I exclusively use SATA. I
It doesn't seem like a mission critical app, but I would seriously
consider using redundant RAID (ie, not 0, but 1 or 5). Nothing ruins
your day quite like losing a non-redundant drive, even if you have
good backups.
Also, what sort of workload are you looking at? How responsive does it
need to be
Also, if the file looks anything like a CSV file, I recommend using
LOAD DATA INFILE
http://dev.mysql.com/doc/refman/4.1/en/load-data.html
You'll probably load that data in half to a quarter of the time.
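A typical invocation, assuming a comma-separated file (the path and terminators are examples -- adjust them to your format):

```sql
LOAD DATA INFILE '/tmp/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```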
On 11/30/05, Daniel Kasak <[EMAIL PROTECTED]> wrote:
> Chenzhou Cui wrote:
>
> > I didn't us
Are you doing single insert statements, multiple insert statements or
LOAD DATA INFILE statements?
On 12/4/05, Chenzhou Cui <[EMAIL PROTECTED]> wrote:
> Dear MySQL fans,
>
> I have a 260 GB huge file with 1045175762 rows. Two weeks ago, I wrote a
> Java program to read the huge plain text file in
Whaaa? I did no such thing. My post was on topic.
On 11/28/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Marcus Bointon <[EMAIL PROTECTED]> wrote on 11/28/2005 04:45:54
> AM:
>
> >
> > On 28 Nov 2005, at 00:41, [EMAIL PROTECTED] wrote:
> >
> > > The only times MySQL server interact directly w
I've also experienced this. The 5.0.16 RPMs for RHEL3 worked fine on a
RHEL3 box.
As for the RHEL4, I even tried recompiling the source RPM's and still
got the dependency error! I ended up using the generic RPM's as those
installed fine.
For me, it's a moot point anyway as 5.0.16 isn't stable eno
It depends on what heavy traffic is and what your machine is.. If it's
an E10K, then you can probably put them both on one machine for most
traffic loads :)
Your database will typically perform better if you have your DB on a
separate machine.
Without knowing more about the code running the site,
My guess is that the RAID has nothing to do with it -- it seems very unlikely.
In any case, if you want top performance out of your raid, you may
want to change things up. You'd get better performance if you didn't
use RAID5. Use RAID1 or RAID10 for your data drives. RAID5 is slower
than these oth
We've been using CentOS more and more. I recently built a small prototype
datawarehouse with CentOS (but I used MySQL 5 RPM's instead of the stock).
I've had no problems that I didn't have with RHEL on CentOS. I also have a
few similar RHEL systems, so when a problem occurs on a CentOS box, I
r
Make sure your disks are all redundant -- get two of each and mirror
them. You'll thank yourself when a drive dies.
If the database server has any uptime requirements, I recommend going
hotswap for everything -- you'll thank yourself again when you can
swap the drive out during core business hours
I don't think there is an SMTP API built into MySQL. A trigger could
insert a record into another table and a crontab could send mails
based on the contents of this table..
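A rough sketch of that setup (table and trigger names invented; triggers need MySQL 5.0+):

```sql
-- The trigger only queues a row; a cron job polls outgoing_mail
-- and does the actual SMTP delivery.
CREATE TABLE outgoing_mail (
  id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_id INT UNSIGNED NOT NULL,
  queued  DATETIME NOT NULL
);

CREATE TRIGGER queue_mail AFTER INSERT ON orders
FOR EACH ROW
  INSERT INTO outgoing_mail (user_id, queued) VALUES (NEW.user_id, NOW());
```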
On 5/31/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> If we get triggers in mysql 5, couldn't you setup something lik
If it's an option, buy more RAM and more disks..
Is it a 17gb table or 17gb of data spread across several tables? If
it's across several tables, you won't have as much trouble rebuilding
the indexes.
Another option is to build another machine with a bunch of ram and a
RAID1 or RAID10 (SATA or SCS
try mysqldump -keq SDN > SDN.sql
Then you can import it on another instance using mysql -u root
new_instance < SDN.sql
On 5/5/05, zlf <[EMAIL PROTECTED]> wrote:
> Hi all,
> I have installed a MySQL5.0 instance on Windows. And then created a
> database( named 'SDN' ). Now I want to move this
Why don't you generate a random integer in your code and select for an
article? If there is no article there, do it again. Even if you have
to call it 50 times it may be faster than doing a full scan on the
table.
It may not work so well if there are lots of gaps in your autoincrement.
In perl (d
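The idea, sketched in Python rather than perl (the function names are stand-ins for your DB calls, not anything real):

```python
import random

def random_article(fetch_by_id, max_id, attempts=50):
    """Pick a random id and retry on gaps instead of scanning the table.

    fetch_by_id stands in for your DB lookup (a SELECT ... WHERE id = ?
    wrapped in a function); max_id is the current max autoincrement value.
    """
    for _ in range(attempts):
        row = fetch_by_id(random.randint(1, max_id))
        if row is not None:
            return row
    return None  # too many gaps; fall back to another strategy

# Toy usage: a "table" with gaps in its ids.
articles = {1: "a", 3: "b", 7: "c"}
print(random_article(articles.get, 7))
```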
A quick thing to check is whether DNS lookups are timing out on connect.
If you don't have DNS on your local subnet, the server may be doing
reverse lookups..
Otherwise, you should definitely profile your code.
On 4/21/05, Brent Baisley <[EMAIL PROTECTED]> wrote:
> For starters, compare timings betw
> I've got IDE hdd. Is there simple way to check, if it's overloaded?
> Would RAID1 help? (I don't know if in raid1 there are parralel reads or
> maybe one disk is only a mirror)
If it's IDE, probably not. Moving the database to a different
subsystem would give more IO. You can use iostat to check
I haven't done it in a nice way and I haven't done it in a long time,
but you can do this.
In the past, I've done the following:
On the slave:
1) stop the server
2) comment out all the lines in my.cnf that refer to the machine as
being a slave -- you still need your binary log directives though.
The RPM's from mysql.com should work fine on FC3. The source RPMs
should build as well. I would say it shouldn't take that much effort.
I don't know about performance issues, but I always figured that you
can tune InnoDB with a bit more control than myisam. For inserts, you
could probably have a l
Hey,
The placeholders (?) are safely escaped by the DBI library -- your
query that gets submitted to the server actually looks like:
CREATE TABLE IF NOT EXISTS CS_ 12569 (
ID_NO MEDIUMINT,
NAME VARCHAR(30),
TYPE CHAR(1)
)
I think it puts the space in, but it might actually be quoting it.
I woul
I found the article very interesting. It seems they couldn't trash
3ware cards enough.
We swear by 3ware cards -- other than the PCIX riser card issue, we
haven't had a single problem with them. Our production database server is
running off of a RAID1 for the OS and a RAID10 for the data and every
tim
I would probably not replicate the mysql database in your case.
We run a similar setup. One problem we have is people connecting to
the wrong database server and applying updates (for example, they
think they are in the master, but it's really the slave and they
perform an update or an insert). As y
Hey,
Those RPMS are compiled for a linux architecture. Cygwin is the GNU
tools (plus a few others) compiled under windows to simulate a linux
environment. It can't actually execute those binaries.
You have three options:
1) Install the Windows binary. You won't be able to compile things
like DBD
You could configure per-table tablespaces.. I suppose that would be a
little more trackable..
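That is, something like this in my.cnf (requires 4.1 or later, if I remember right):

```ini
[mysqld]
# Each new InnoDB table gets its own .ibd file, so per-table disk
# usage is visible right in the filesystem.
innodb_file_per_table
```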
On Fri, 04 Mar 2005 16:38:20 -0600, Michael Satterwhite
<[EMAIL PROTECTED]> wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> I just found out that the hosting company we're using has disabl
InnoDB is a very different platform from MyISAM.
> innodb_data_file_path = ibdata1:10M:autoextend
How big is your data? You need to set your innodb_data_file_path to
have enough space for this. Right now, your file is autoextending
constantly. I would either turn on per-table tablespaces, or
pre-
What have you actually done to 'tune' the server? How are you doing
the inserts?
InnoDB uses transactions. If you are doing each row as a single
transaction (the default), it would probably take a lot longer.
I assume you're doing your copying as a INSERT INTO $new_table SELECT
* FROM $old_table.
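If that's the case, wrapping the copy in one explicit transaction should help a lot (table names are placeholders):

```sql
SET AUTOCOMMIT = 0;
INSERT INTO new_table SELECT * FROM old_table;  -- one big transaction
COMMIT;
```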
If you want to represent 290 million years as an integer (290,000,000):
- An UNSIGNED INT can store 4,294,967,295
- An UNSIGNED BIGINT can store 18,446,744,073,709,551,615
In your schema, I'd use a start_period and end_period instead of a
varchar. It's easier to sort and do math on.
You could fac
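A quick sanity check on those numbers:

```python
# 290 million years fits in an unsigned 32-bit INT with headroom.
UNSIGNED_INT_MAX = 2**32 - 1      # 4,294,967,295
UNSIGNED_BIGINT_MAX = 2**64 - 1   # 18,446,744,073,709,551,615

print(290_000_000 <= UNSIGNED_INT_MAX)  # prints True
```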
eate ten innodb_data_file and each size of innodb_data_file is 50G,
> dose some issues must be take care?
> Because the 50G is really very big for a file, I never do it.
>
> Regards,
> proace
>
>
> On Tue, 15 Feb 2005 07:27:42 -0800, Gary Richardson
> <[EMAIL PROTECTED]&g
On Wed, 9 Feb 2005 15:17:36 -0200, Gabriel B. <[EMAIL PROTECTED]> wrote:
> It is not too slow this way since i started the WHERE with lots of
> checks that cuts down to a medium of 200 rows that actualy gets to
> this check, but i feel uncorfotable to not use a index.
Isn't there a limit of 1 ind
Hey,
I have some code that does a bunch of tests. I keep saying I'm going
to release it, but I don't get around to it.
My code does the following:
- checks that the schema matches
- check that keys match
- check that foreign keys match
- check that row counts match
- checksums the whole table in
Hey,
I haven't used AMD64's, but we're running our production on a machine
with a 3ware 9508 running RAID10 with RHES 3. It's a beautiful setup.
The 3ware cards are an excellent choice, but as other posts say, use
RAID 10. If possible put your InnoDB logs onto a separate array as
well.
out.
On
Hey guys,
We just had a small incident. I was hoping for confirmation on why it happened.
A person updated a large table. They were trying to update one record
but due to keyboard fumbling sent the update without a where
statement. Left unchecked, it would have made a lot of records in the
databa
The binary logs used for replication set the value used for
autoincrementing before each insert query. The number on the master
will always be replicated properly. If a row already exists with the
autoincrement value, my guess is that replication will die with an
error.
I'm not too sure which vers
If you are talking about the WD Raptor's -- stay away. Out of 6 we
used, 3 failed. Do a few googles and you'll hear the same from other
users.
On the other hand, they do fly. RAID10 them on a 3ware 9500 and
you'll be amazed.
On Fri, 12 Nov 2004 13:06:10 -0800, Larry Lowry <[EMAIL PROTECTED]>
Have you thought about locking the reporting database for write? You
could eliminate the dirty reads.
If you are using InnoDB on the reporting tables, you could use a
transaction for the update operation. That would accomplish the same
thing.
You could use replication to move the load to another
Hey,
As I found out on the list, you want to SET SQL_LOG_BIN=0 in your
session. The user making the changes needs have the SUPER privilege.
http://dev.mysql.com/doc/mysql/en/SET_SQL_LOG_BIN.html
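In the session doing the conversion, something like this (the table name is an example):

```sql
SET SQL_LOG_BIN = 0;   -- needs SUPER; affects this session only
ALTER TABLE my_table TYPE = InnoDB;
SET SQL_LOG_BIN = 1;   -- back to normal logging
```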
On Tue, 9 Nov 2004 20:35:22 +0100, harm <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I want to convert a
> 1) would it be better to go brandnew with a single processor or like a quad
> p4 that's a year or two old.
Depends on how your application runs. BTW, I don't think they made
quad p4's. You can't run p4 chips in SMP -- they must be Xeon's.
>
> 2) I am going to running raid 5, so I assume that I
Hey,
I'm interested. I currently run a few websites with tables of this
size and larger.
Thanks.
On Thu, 21 Oct 2004 15:19:16 -0700 (PDT), Razor Fish <[EMAIL PROTECTED]> wrote:
> i need consulting help for optimizing a database with
> 1.2 million records to handle 3-4 million hits a day.
>
> i
nection only.
>
>
>
>
> -Original Message-----
> From: Gary Richardson [mailto:[EMAIL PROTECTED]
> Sent: Thu 10/21/2004 11:24 AM
> To: Mysql General (E-mail)
> Subject: Ignore a single query in replication
>
> Hey,
>
> Is there a way to tell the slave to not exe
> If this is the case you can create the table in a separate database, and
> in your mysql configuration tell the binary logging to exclude that
> database. Then anything in that specific database won't get replicated,
> I believe you can only do this exclusion on the database level, not per
> tab
> If the queries modify tables that are being replicated, then how would
> the slave remain
> in sync with the master if it didn't replicate them?
These are essentially temporary tables that aren't defined as such --
they typically take a long time to derive (30 minutes to an hour) and
are used fo
Hey,
Is there a way to tell the slave to not execute a query without
ignoring tables or databases?
There are a bunch of queries that happen on the master for statistical
purposes that don't use temp tables and generate large amounts of
data. These queries don't need to run on the slaves and in fa
Is there any output from the command? Any errors?
Try SHOW CREATE TABLE x. It should show you the TYPE= at the end.
out.
On Thu, 21 Oct 2004 11:10:10 +0200, Stefan Gnann
<[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I have a mysql database 4.0.15 on a suse linux 9.x system running.
> Now we hav
I'm not too worried about it myself, but I can see how less magic on a
database server is better.
Without actually trying it, if I mix CHAR's and VARCHAR's in a table,
does SHOW CREATE TABLE reflect the internal conversion?
If you wanted to reduce fragmentation in an Innodb table, wouldn't all
th
If you are using MyISAM tables, have you thought about using MERGE
tables instead? You could partition your table into several smaller
tables. I don't know how the performance would be on a billion record
table, but from my understanding it would shrink your index down.
http://dev.mysql.com/doc/my
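A minimal sketch with invented names (4.0-era TYPE= syntax):

```sql
-- Two identically-defined MyISAM tables unioned under one MERGE table.
CREATE TABLE log_2004a (id INT NOT NULL, msg VARCHAR(255)) TYPE=MyISAM;
CREATE TABLE log_2004b (id INT NOT NULL, msg VARCHAR(255)) TYPE=MyISAM;

CREATE TABLE log_all (id INT NOT NULL, msg VARCHAR(255))
  TYPE=MERGE UNION=(log_2004a, log_2004b) INSERT_METHOD=LAST;
```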
If you are using a replica, you can grab the information from SHOW
SLAVE STATUS or SHOW MASTER STATUS. I'm not too sure if there is a
call for non-replicated database servers for this.
BTW, if you aren't locking everything you're dumping, you'll probably
never get a consistent state if you want to
Hey,
From http://dev.mysql.com/doc/mysql/en/Silent_column_changes.html:
If any column in a table has a variable length, the entire row becomes
variable-length as a result. Therefore, if a table contains any
variable-length columns (VARCHAR, TEXT, or BLOB), all CHAR columns
longer than three cha
Note that SHOW TABLE STATUS counts for innodb are educated guesses --
innodb has to do a table scan to get the actual count.
On Thu, 14 Oct 2004 10:19:19 -0400, Michael Stassen
<[EMAIL PROTECTED]> wrote:
> Sujay,
>
> The output of
>
>DESCRIBE table_name;
>
> does not include number of rows.
> (e) If you had just a one way master->slave relationship and you delete
> data on the slave, that is very bad. The slave is now not the same as
> the master, the record will not be re-inserted, if you go to update the
> record on the master, when the command gets replicated to the slave an
> err
I've worked on projects before where splitting up the schema into
databases was used. Joins across DB's are fine, but there is another
place that the performance can hit you.
If you use something like perl's Apache::DBI, you will increase the
number of open connections to your database. That's ass
Did you insert into the permission tables or did you use a GRANT
statement? You may need to flush your permissions (FLUSH PRIVILEGES).
On Sat, 9 Oct 2004 20:57:29 +0200, roland <[EMAIL PROTECTED]> wrote:
> Hello again,
>
> refering to my previous question I forgot to mention that xxx.xxx.xxx.xxx
especially true when you
are mixing table types or using InnoDB with table files.
On Sat, 9 Oct 2004 00:48:37 +0200, Mikael Fridh <[EMAIL PROTECTED]> wrote:
>
>
> On Friday 08 October 2004 19.01, Gary Richardson wrote:
> > Hey guys,
> >
> > I'm running a master
Hey guys,
I think I have this figured out. I'm just doing some testing. If I
manually recreate the master.info file, it sort of works, but I get
the following error from the IO thread:
Error reading relay log event: slave SQL thread aborted because of I/O error
So, instead of manually creating t
The dump files are just mysql CREATE TABLE and INSERT statements. You
can use a perl script to read the dump file line by line and switch
output files when it hits a 'CREATE TABLE '
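In Python rather than perl, the idea looks roughly like this (the CREATE TABLE parsing assumes one common dump style and is deliberately crude):

```python
import io

def split_dump(lines, open_out):
    """Stream dump lines; start a new output file at each CREATE TABLE.

    lines: iterable of dump lines; open_out(name) returns a writable file.
    """
    out = open_out("preamble")
    for line in lines:
        if line.startswith("CREATE TABLE "):
            # crude parse of `CREATE TABLE `name` (` -- adjust for your dump
            name = line.split()[2].strip("`(")
            out = open_out(name)
        out.write(line)

# Toy usage with in-memory "files".
files = {}
def open_out(name):
    return files.setdefault(name, io.StringIO())

dump = ["-- header\n", "CREATE TABLE `a` (\n", "  id INT\n", ");\n",
        "CREATE TABLE `b` (\n", "  id INT\n", ");\n"]
split_dump(dump, open_out)
print(sorted(files))  # prints ['a', 'b', 'preamble']
```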
On Fri, 8 Oct 2004 23:14:07 +0530, Buchibabu <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Please let me know how to extra
edure.
>
>
> Regards,
>
> Scott Tanner
> Systems Administrator
> Rowe/AMi
>
>
>
>
> "Gary Richardson" <[EMAIL PROTECTED]>
>
> 10/08/2004 01:01 PM
> Please respond to Gary Richardson
>
> To:
Hey guys,
I'm running a master/slave setup with v4.0.20. There are a hand full
of databases being replicated. For backups, we stop replication on the
slave and pull a text dump using mysqldump. I also record a 'SHOW
SLAVE STATUS' from the time of the backup.
My replica server crashed last night.