Hi,
I've been reading the documentation all day, but I'm still confused
and unsure what to do.
We have two databases which are important for our work, so both are backed up
hourly. Now I noticed that each database has a mixture of MyISAM and
InnoDB tables. A backup of this mix does
On 22.08.2014 19:40, Lentes, Bernd wrote:
XtraBackup can handle both InnoDB and MyISAM in
a consistent way while minimizing lock time on
MyISAM tables ...
http://www.percona.com/doc/percona-xtrabackup/2.1/
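A minimal invocation sketch for that tool (the backup user and target path are assumptions, not from the thread), using the innobackupex wrapper that ships with XtraBackup 2.1:

```shell
# Back up a running server into a timestamped directory under /backups/;
# InnoDB tables are copied without locking, MyISAM tables only under a
# short FLUSH TABLES lock at the end.
innobackupex --user=backup --password=secret /backups/

# Apply the InnoDB log captured during the copy so the backup is consistent
# and ready to restore:
innobackupex --apply-log /backups/2014-08-22_19-40-01/
```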
--
Hartmut Holzgraefe, Principal Support Engineer (EMEA)
SkySQL - The MariaDB Company | http://www.skysql.com/
Hello,
Just one more suggestion for doing full backups of large databases:
- A dedicated slave (either a physical machine, or a disk cabinet using iSCSI
connections from a machine with just a bunch of RAM, etc.)
- Get the slave delayed by a certain time (e.g. 1 hour, 2 hours... depends on
your workload) using
simpler if the replica for
backups and the replica for failovers are the same thing.
Peace
Karen
On 02.11.2012, at 0:55, Manuel Arostegui wrote:
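For the delayed-slave part, two common mechanisms (a sketch; which one applies depends on your server version):

```shell
# Before MySQL 5.6: keep the slave one hour behind with Percona Toolkit
pt-slave-delay --delay 1h --interval 15s localhost

# MySQL 5.6 and later: the server can delay replication natively
mysql -e "STOP SLAVE; CHANGE MASTER TO MASTER_DELAY = 3600; START SLAVE;"
```

The delay gives you a window to stop the slave before a dropped table or bad UPDATE replicates, on top of its use as a consistent snapshot source.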
Hi All
I am busy investigating some options for backing up
MySQL databases when they get quite large.
When using MySQL Enterprise, there is the option to use
MySQL Enterprise Backup, as it is part of the Enterprise license.
However, when using the GA
On 01/11/2012 11.28, Machiel Richards - Gmail wrote:
[...]
On 01.11.2012 11:28, Machiel Richards - Gmail wrote:
Using mysqldump and restores on an 80-100GB database seems a bit impractical,
as the restore times get quite long, as do the backup times.
* setup a master/slave configuration
* stop the slave
* rsync the raw datadir to
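The stop-and-rsync step above might look like this (destination host and paths are assumptions; a cold copy is the safe choice for a mixed MyISAM/InnoDB datadir):

```shell
# On the slave: stop replication, then shut the server down so the raw
# datadir is quiescent before copying
mysql -e "STOP SLAVE;"
mysqladmin shutdown

# Copy the raw datadir off-box, preserving permissions and timestamps
rsync -a /var/lib/mysql/ backuphost:/backups/mysql-$(date +%F)/

# Bring the server back and let the slave catch up with the master
mysqld_safe &
mysql -e "START SLAVE;"
```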
On 01.11.2012 16:36, Singer Wang wrote:
On Thu, Nov 1, 2012 at 11:34 AM, Rick James rja...@yahoo-inc.com wrote:
Full backup:
* Xtrabackup (Backup: slight impact on source; more if you have MyISAM
(as mentioned))
* Slave (Backup: zero impact on
Assuming you're not doing dumb stuff like innodb_flush_log_at_trx_commit=0 or
2, etc., you should be fine. We have been using the trio: flush tables with
read lock, xfs_freeze, snapshot for months now without any issues. And we
test the backups (we load the backup into staging once a day, and dev once
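The trio described above, sketched for an XFS filesystem on LVM (device and mount-point names are assumptions, and the lock-holding step is simplified: in practice you keep one client session open for the lock rather than backgrounding a SLEEP):

```shell
# 1. Quiesce MySQL writes; the read lock is held for as long as this
#    session stays open (SLEEP keeps it alive while we snapshot)
mysql -e "FLUSH TABLES WITH READ LOCK; SELECT SLEEP(60);" &

# 2. Freeze the filesystem and take the snapshot while the lock is held
xfs_freeze -f /var/lib/mysql
lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql
xfs_freeze -u /var/lib/mysql

# 3. When the lock session ends the lock is released; mount the snapshot
#    read-only elsewhere and copy it off at leisure.
```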
good luck
I would call snapshots on a running system much dumber
than innodb_flush_log_at_trx_commit = 2 on systems with
100% stable power; instead you waste IOPS on shared storage
On 01.11.2012 16:45, Singer Wang wrote:
[mailto:machiel.richa...@gmail.com]
Sent: Thursday, November 01, 2012 8:54 AM
To: Reindl Harald; mysql@lists.mysql.com
Subject: Re: Mysql backup for large databases
Well, the biggest problem we have to answer for the clients is the
following:
1. A backup method that doesn't take long and doesn't
to connect to it.
You are correct. Running on 3307. (from settings.php) But even
logging into that instance was not showing me the drupal
databases - so I was stumped! I would guess that was because the
3307 instance was using the system-wide my.cnf instead of the
drupal my.cnf. I could
configfile should hold the necessary data to connect to it.
You are correct. Running on 3307. (from settings.php) But even
loging into that instance was not showing me the drupal
databases - so I was stumped! I would guess that was because the
3307 instance was using the system-wide
The installer you used created a second instance of mysql. Your drupal
config file should hold the necessary data to connect to it.
On 01.11.2012 01:54, Tim Johnson wrote:
* Reindl Harald h.rei...@thelounge.net [121031 08:12]:
you MUST NOT use localhost if you want to connect to
a different mysqld port, because localhost means the unix socket
mysql -h 127.0.0.1 --port=3307 -u username -p
I get access denied when I do that.
* 小刀 13488684...@163.com [121029 18:43]:
You can check /etc/my.cnf and find the parameter for the data_dir
I'm sorry. I don't understand.
I might try cp -p -r, but need a second opinion.
BTW: No need to CC me.
thanks
--
Tim
tim at tee jay forty nine dot com or akwebsoft dot com
Is the question how to dump the drupal database? Then here is one
option: you can look at the details mentioned in the
default/settings.php file and use mysql to export the data. Or, the other
option is to install a backup/restore module in Drupal, so you can just
log in through drupal and
On 30.10.2012 17:17, Hassan Schroeder wrote:
On Tue, Oct 30, 2012 at 9:12 AM, Tim Johnson t...@akwebsoft.com wrote:
I might try cp -p -r, but need a second opinion.
If you want to dump one or more databases, presumably you know
the name(s); just use the mysqldump utility. Copying
* Reindl Harald h.rei...@thelounge.net [121030 08:25]:
if they are MyISAM tables it is the best way, because all data
is in a folder with the database name; never in my life have I
used a dump to migrate mysql databases, having started with
MySQL 3.x many years ago and moved them between Linux
* Tim Johnson t...@akwebsoft.com [121030 08:37]:
I remain as dumb as ever, but I hope I have made myself clearer
regards
To elaborate further : See this entry from mysql --help
Default options are read from the
mysqldump does not recognize the drupal databases!
Example :
linus:prj tim$ mysqldump event -hlocalhost -utim -pXX event.sql
mysqldump: Got error: 1049: Unknown database 'event' when
selecting the database
... snip ...
Your syntax is inverted. Put the name of the database:
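The corrected invocation would look like this (options first, database name after them, output redirected rather than passed as a trailing argument; pointing at 127.0.0.1:3307 explicitly, since that is where the drupal instance lives in this thread):

```shell
# mysqldump writes SQL to stdout; a trailing "event.sql" would be parsed
# as a table name, so redirect instead:
mysqldump -h 127.0.0.1 -P 3307 -u tim -p event > event.sql
```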
* Reindl Harald h.rei...@thelounge.net [121030 08:49]:
The drupal mysql datafiles are located at
/Applications/drupal-7.15-0/mysql/data
as opposed to /opt/local/var/db/mysql5 for
'customary' mysql.
this crap is outside your mysqldata
I don't know what you mean by crap. Sorry.
* Kishore Vaishnav kish...@railsfactory.org [121030 08:25]:
...snip
Or, the other option is to install a backup/restore module in
Drupal, so you can just log in through drupal and take a
dump of the existing database if you have admin credentials.
OK. Use drupal directly? That makes sense.
* Ananda Kumar anan...@gmail.com [121030 09:48]:
why don't you create a softlink
From /opt/local/var/db/mysql5/ to /opt/local/var/db/mysql5/ ???
I can try that, but I am doing things to MySQL that I have never
done before and I am reluctant to risk clobbering a complex
development environment
OP :
* Tim Johnson t...@akwebsoft.com [121029 16:28]:
I've recently installed drupal 7.15 on Mac OS X 10.7.
I want to back up the mysql data for drupal.
However, I can't locate those databases and tables using MySQL
server or PHPMyAdmin, even if I start mysql on port 3307.
The drupal
You can check /etc/my.cnf and find the parameter for the data_dir
At 2012-10-30 08:24:14, Tim Johnson t...@akwebsoft.com wrote:
I've recently installed drupal 7.15 on Mac OS X 10.7.
I want to back up the mysql data for drupal.
However, I can't locate those databases and tables using MySQL
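Rather than guessing at config files, the running server will report its own data directory; a sketch against the second instance on port 3307:

```shell
# Ask the server directly where its datadir and socket live:
mysql -h 127.0.0.1 -P 3307 -u root -p -e "SELECT @@datadir, @@socket;"

# Or list which option files a client/server reads, and in what order:
mysql --help --verbose | grep -A 1 "Default options"
```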
hi every one
Is there any performance difference between maintaining separate ibdata
files for each and every table instead of having one single tablespace for
all databases, for the InnoDB Storage Engine?
please let me know the difference.
--
3murthy
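For reference, the setting in question is innodb_file_per_table. The main trade-off: per-table .ibd files can be shrunk and reclaimed (OPTIMIZE TABLE, ALTER) and backed up selectively, while the single shared ibdata file never shrinks. A my.cnf sketch (shown appended to a local copy; merge into your real config):

```shell
# Hypothetical my.cnf fragment. Note it only affects tables created, or
# rebuilt with ALTER TABLE ... ENGINE=InnoDB, after it is enabled;
# existing tables stay in the shared tablespace until rebuilt.
cat >> my.cnf <<'EOF'
[mysqld]
innodb_file_per_table = 1
EOF
```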
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
Hi, I am new to mysql and have a question regarding how to read/import
existing mysql databases (i.e. binaries, not sql dumps) copied from
somewhere.
What I now have is a whole mysql directory that contains all the binary files
(version 5.0.27 for 32-bit Linux) copied from somewhere, including
On 27.02.2012 09:05, Pengkui Luo wrote:
However, the show databases; command did not give me the foo database;
and the use foo; command
Today I needed to split a mysqldump -A into its several databases.
I didn't have access to the original source, so I only had the text file to
work with.
It was a webhosting server dump, so there were a LOT of databases...
I split the file with this little script I made:
file=myqdl dump file
nextTable
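The poster's script was cut off above, but the same job can be sketched in a few lines of awk, keying on the "-- Current Database: `name`" marker lines that mysqldump --all-databases emits before each database's section (header lines before the first marker are skipped):

```shell
# Sample input standing in for the real webhosting dump:
printf -- '-- Current Database: `alpha`\nCREATE TABLE t1 (id INT);\n-- Current Database: `beta`\nCREATE TABLE t2 (id INT);\n' > full_dump.sql

# Split into one .sql file per database, named after the database:
awk '/^-- Current Database: `/ {
         name = $0
         gsub(/^-- Current Database: `|`$/, "", name)   # keep only the db name
         out = name ".sql"                              # e.g. alpha.sql
     }
     out { print > out }' full_dump.sql
```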
Hi,
I have a linux box running mysql plus phpmyadmin which has tables
getting monthly data, and when a new month starts a new table is
created. I want to store only 2 years of data, so when a new month
starts I need to drop the table which holds the data from 2
years earlier. So to
On Fri, Nov 18, 2011 at 1:32 PM, a bv vbavbal...@gmail.com wrote:
Hi unknown,
Have a look at database information_schema.TABLES:
SELECT * FROM information_schema.TABLES WHERE TABLE_SCHEMA='database';
As long as your MySQL version is >= 5.1, you don't need a cron script;
you can use the MySQL event scheduler: create a stored procedure that will
run each month. You'll
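A sketch of that scheduler approach, assuming the monthly tables are named like data_YYYYMM (the naming scheme is an assumption; the thread does not give it). Requires MySQL >= 5.1 with the scheduler enabled (SET GLOBAL event_scheduler = ON):

```shell
mysql <<'EOF'
DELIMITER //
CREATE EVENT drop_two_year_old_table
ON SCHEDULE EVERY 1 MONTH
DO
BEGIN
  -- Build and run "DROP TABLE data_YYYYMM" for the month two years back
  SET @sql = CONCAT('DROP TABLE IF EXISTS data_',
                    DATE_FORMAT(NOW() - INTERVAL 2 YEAR, '%Y%m'));
  PREPARE s FROM @sql;
  EXECUTE s;
  DEALLOCATE PREPARE s;
END//
DELIMITER ;
EOF
```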
Please help calculate the total revenue from the two tables below. The first
table holds the name of each item and its price. The second table holds the
goods sold. I want to make a monthly recapitulation of my total income (total
only). I've tried but always failed.
TABLE A (item name
2011/06/17 14:09 +0700, HaidarPesebe
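A sketch of the monthly-total query, with assumed table and column names since the original post was cut off (table_a: item name and price; table_b: items sold, with a sale date and quantity):

```shell
mysql <<'EOF'
SELECT DATE_FORMAT(b.sale_date, '%Y-%m') AS month,
       SUM(a.price * b.quantity)         AS total_income
FROM   table_b AS b
JOIN   table_a AS a ON a.item_name = b.item_name
GROUP  BY month;
EOF
```

The join prices each sold row from table_a, and grouping by the formatted date gives one total per month.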
This makes me think that I can easily move the FILE and THEME tables to
different databases if I use a distributed transaction in the above. I
understand that on a very large scale to insist
Hi All
I am hoping someone has had this before, as this one is baffling me
entirely.
We did a MySQL database move from one machine to another last
night.
The O/S versions are the same and so is the database version
(5.1.22).
The database was installed and configured
copy the /etc/init.d/mysql file from your old machine to the new one and try
the start/stop.
regards
anandkl
On Wed, Dec 8, 2010 at 2:21 PM, Machiel Richards machi...@rdc.co.za wrote:
That's a very Debian-specific issue. The credentials for the
debian-sys-maint user are randomly generated at install, and stored in
/etc/mysql/debian.cnf. Either copy the file from the old to the new machine,
or update the user's password on the new machine to the one in the file.
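Both fixes sketched as commands (Debian/Ubuntu packaging only; hostname is a placeholder, and the PASSWORD() syntax is the MySQL 5.x form current at the time of this thread):

```shell
# Option 1: copy the credentials file from the old machine
scp old-host:/etc/mysql/debian.cnf /etc/mysql/debian.cnf

# Option 2: reset the user's password on the new machine to match the
# password already stored in its own debian.cnf
pw=$(awk -F' *= *' '/^password/ {print $2; exit}' /etc/mysql/debian.cnf)
mysql -u root -p -e "SET PASSWORD FOR 'debian-sys-maint'@'localhost' = PASSWORD('$pw');"
```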
On Wed, Dec
Hi Johan
Would the server require a restart after this or not?
Machiel
-Original Message-
From: Johan De Meersman vegiv...@tuxera.be
To: Machiel Richards machi...@rdc.co.za
Cc: mysql mailing list mysql@lists.mysql.com
Subject: Re: Moving of databases from one server to another
Date
by this.
That user is used mostly for system-based maintenance: table checks at
startup, clean shutdown and package upgrade operations, plus during the
install of some other packages to create and initialize their necessary
databases.
--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt
Hi Johan
Thank you for the advice... Problem resolved.
Regards
Machiel
-Original Message-
From: Johan De Meersman vegiv...@tuxera.be
To: Machiel Richards machi...@rdc.co.za
Cc: mysql mailing list mysql@lists.mysql.com
Subject: Re: Moving of databases from one server to another
Date
over a couple of
web servers all connecting to the same database) using apache and jdk.
2 MySQL databases running as Master/Slave replication with all
reads and writes going to the master and the slave being used for data
exports and failover if required.
The websites are rather busy
this
morning to research possible load balancing options for MySQL databases.
What is currently running is a website (balanced over a couple of
web servers all connecting to the same database) using apache and jdk.
2 MySQL databases running as Master/Slave replication with all
reads
: Johan De Meersman vegiv...@tuxera.be
To: Machiel Richards machi...@rdc.co.za
Cc: mysql mailing list mysql@lists.mysql.com
Subject: Re: Replication on MySQL databases
Date: Thu, 4 Nov 2010 10:21:11 +0100
If your sites are busy with *writes*, you're kind of stuck. Replication
means that every write
Classic scenario where MMM will be your best bet. Check out
http://mysql-mmm.org for more information. Set up two masters and 2 or
more slaves for full High Availability. It scales extremely well if
your application is read-heavy (which most applications are).
If you need help implementing this, I
I may have missed what you are trying to do here. NoSQL is really a bad name
and should really be renamed to NoREL instead. NoSQL implementations are not
used just because of limitations of traditional RDBMS when it comes to sheer
traffic volume, they are also used because they scale horizontally
On Sun, Sep 12, 2010 at 9:45 PM, Kiss Dániel n...@dinagon.com wrote:
The offset + increment thingy is good if you know in advance that you'll
have a limited number of servers. But if you have no idea whether you will
have 2, 20, or 200 servers in your array in the future, you just can't
pick an
This is actually more for failover scenarios where databases are spread in
multiple locations with unreliable internet connections. But you want to
keep every single location working even when they are cut off from the other
databases. The primary purpose is not load distribution.
On Mon, Sep 13
To use two fields for primary and foreign keys is not the most
convenient, to say the least. :)
I am just wondering if anyone has any better idea to fulfill the
requirements (small index size, dynamically increasable number of databases
in the array, incremental-like IDs are optimal for the MySQL
I think you'd be better off with a star topology, but MySQL unfortunately
only allows ring types. This is gonna require some good thinking on your
part :-)
-Original Message-
From: Kiss Dániel [mailto:n...@dinagon.com]
Sent: Sunday, September 12, 2010 1:47 PM
To: mysql@lists.mysql.com; replicat...@lists.mysql.com
Subject: Unique ID's across multiple databases
Hi,
I'm designing a master-to-master replication architecture.
I wonder what
-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
Meersman
Sent: Monday, September 13, 2010 7:27 AM
To: Kiss Dániel
Cc: Max Schubert; mysql@lists.mysql.com; replicat...@lists.mysql.com
Subject: Re: Unique ID's across multiple databases
Hmm
-Original Message-
From: Kiss Dániel [mailto:n...@dinagon.com]
Sent: Monday, September 13, 2010 11:49 AM
To: Jerry Schwartz
Cc: Johan De Meersman; Max Schubert; mysql@lists.mysql.com;
replicat...@lists.mysql.com
Subject: Re: Unique ID's across multiple databases
Well, not exactly.
I do
Well, not exactly.
I do not own all the databases. Some of them are placed at customers, some
of them are at my data warehouse. So, neither NAS nor Fibre Channel is a
solution
From: Kiss Dániel [mailto:n...@dinagon.com]
Sent: Monday, September 13, 2010 3:17 PM
To: Jerry Schwartz
Cc: Johan De Meersman; Max Schubert; mysql@lists.mysql.com;
replicat...@lists.mysql.com
Subject: Re: Unique ID's across multiple databases
Well, that would be the plan, yes. :-)
Anyway, I'll
http://www.mysqlperformanceblog.com/2007/03/13/to-uuid-or-not-to-uuid/
Is this UUID issue unique to MySQL, or are there other RDBMSs that handle
it better (Postgres, Oracle, SQL Server, etc.)?
I too have a need for a unique identifier that will mesh with other
databases periodically. So that a user in one
with other databases periodically. So that a user in one local DB/server
will get migrated to a master DB which in turn will sync up with remote
sites so that all sites will have all users in them each night (for
example). Having a mapping of UUID to local ID seems one way, but I feel
On Mon, Sep 13, 2010 at 8:59 PM, Johnny Withers joh...@pixelated.net wrote:
This sounds like a good job for a 'NoSQL' system. Maybe?
I can't help but blink at that. How exactly is NoSQL going to fix issues
that are related to topology, not inherent SQL limitations? Which
particular
Hi,
I'm designing a master-to-master replication architecture.
I wonder what the best way is to make sure both databases generate unique
row IDs, so there won't be ID conflicts when replicating in both directions.
I read on forums about the pros and cons of using UUIDs, and also about
setting the *auto
You can maintain your own sequence tables a la postgres if you use
transactions to ensure atomicity, though that doesn't help across databases
(I suspect the same is true in postgres).
FWIW my auto_increment_offset value is usually the same as my server ID.
auto_increment_increment also reduces
Server offset + increment works really well, is simple, and well
documented and reliable - not sure why you would want to re-invent
something that works so well :).
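Concretely, for two masters the scheme looks like this (values are illustrative; in practice you would set them in my.cnf so they survive restarts):

```shell
# On master 1: AUTO_INCREMENT generates 1, 3, 5, ...
mysql -e "SET GLOBAL auto_increment_increment = 2; SET GLOBAL auto_increment_offset = 1;"

# On master 2: AUTO_INCREMENT generates 2, 4, 6, ...
mysql -e "SET GLOBAL auto_increment_increment = 2; SET GLOBAL auto_increment_offset = 2;"
```

With increment equal to the number of masters and a distinct offset per master, the id streams are disjoint, so rows can replicate in both directions without key collisions. The catch raised in the thread stands: the increment must be chosen for the maximum number of masters you will ever have.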
You may be right. I'm not arguing that offset + increment doesn't work.
I'm just wondering if it's the optimal solution when you do not know how
many servers you will have in your array in the future. In my view, the
offset + increment thingy is good if you know in advance that you'll have a
Hi all, I have upgraded a few test boxes and everything seems to work fine,
BUT I wanted to verify with the gurus that my syntax is correct so as to
avoid any future problems ;-)
The purpose is to dump all databases and users / user privileges from our
4.1.20 server and import it into our
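One common shape for this (a sketch, not a verdict on the poster's actual syntax; note that mysqldump 4.1 has no --routines or --events options, so there is little to get wrong beyond the basics):

```shell
# On the 4.1.20 server: one file with every database, including the
# `mysql` schema that carries the users and their privileges
mysqldump --all-databases --opt -u root -p > all_databases.sql

# On the target server: load it, then reload the imported grant tables
mysql -u root -p < all_databases.sql
mysql -u root -p -e "FLUSH PRIVILEGES;"
```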
Devart
Email: i...@devart.com
Web: http://www.devart.com
FOR IMMEDIATE RELEASE
CONTACT INFORMATION:
Julia Samarska
jul...@devart.com
27-Jul-2010
More Tools to Work with MySQL Databases in Visual Studio Provided by dbForge
Fusion!
Devart today releases dbForge Fusion for MySQL
binary snapshot of InnoDB tables
only after shutting down the MySQL Server.
... If you are replicating only certain databases then make sure you copy
only those files that relate to those tables. (For InnoDB, all tables in
all databases are stored in the shared tablespace files, unless you have
Devart
Email: i...@devart.com
Web: http://www.devart.com
FOR IMMEDIATE RELEASE
CONTACT INFORMATION:
Julia Samarska
jul...@devart.com
12-Jul-10
More Tools to Work with MySQL Databases Provided by dbForge Studio!
With dbForge Studio for MySQL, Devart continues its initiative to produce
Hi,
we have different LAMP systems, and recently I started to put some
mysql databases on one new master server. (RedHat, Fedora, MySQL 4.x -
5.0.xx)
I did this by exporting some databases with mysqldump and importing them
on the new server.
Now I'd like to add a slave mysqlserver, and so I
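A sketch of the usual way to seed a slave from such a master (host and user names are placeholders; assumes log-bin and a unique server-id are already configured on the master):

```shell
# On the master: a dump that records the binlog coordinates in the file
# (--single-transaction is only non-blocking for InnoDB tables)
mysqldump --all-databases --master-data=1 --single-transaction -u root -p > seed.sql

# On the slave: load the dump; --master-data embedded the matching
# CHANGE MASTER TO ... MASTER_LOG_FILE/MASTER_LOG_POS statement in it
mysql -u root -p < seed.sql
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
  MASTER_USER='repl', MASTER_PASSWORD='...'; START SLAVE;"
```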
On Mon, Jun 7, 2010 at 11:59 PM, Götz Reinicke - IT-Koordinator
goetz.reini...@filmakademie.de wrote:
[...]
MySQL 4.x is EOL. I strongly suggest
MySQL University: MySQL Column Databases
http://forge.mysql.com/wiki/MySQL_Column_Databases
This Thursday (March 4th, 15:00 UTC - slightly later than usual), Robin
Schumacher will present MySQL Column Databases. If you're doing data
warehousing with your databases this is a must-attend, but it's
On 17/12/2009 17:46, mos wrote:
Load Data ... is still going to be much faster.
Mike
Hiya
If you are on Linux and using LVM, look at mylvmbackup.
HTH
Brent Clark
Madison Kelly wrote:
Hi all,
I've got a fairly large set of databases I'm backing up each Friday.
The dump takes about 12.5h to finish, generating a ~172 GB file. When
I try to load it though, *after* manually dumping the old databases,
it takes 1.5~2 days to load the same databases. I am
From: Baron Schwartz
Sent: Monday, December 14, 2009 22:57
To: Lukas C. C. Hempel
Cc: mysql@lists.mysql.com
Subject: Re: InnoDB Corrupted databases (innodb_force_recovery not working)
Lukas,
If you can't get innodb_force_recovery to work, then you might have to try
to recover the data
for you to scp the database from one machine to another.
Regards,
Gavin Towey
-Original Message-
From: Madison Kelly [mailto:li...@alteeve.com]
Sent: Wednesday, December 16, 2009 12:56 PM
To: mysql@lists.mysql.com
Subject: Importing large databases faster
as long
as it takes for you to scp the database from one machine to another.
Regards,
Gavin Towey
Thanks! Will the Maatkit script work on a simple --all-databases dump?
As for the copy, it's a temporary thing. This is just being done weekly
while we test out the new server. Once it's live
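Beyond Maatkit's parallel restore, a few server-side knobs commonly cut multi-day loads down (a sketch; the dump filename is a placeholder and some statements need SUPER):

```shell
# Relax durability for the duration of the load only:
mysql -u root -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"

# Feed the dump through one session with binary logging off, so the
# restore itself isn't re-written into the binlogs:
( echo "SET SESSION sql_log_bin = 0;"; cat big_dump.sql ) | mysql -u root -p

# Restore full durability afterwards:
mysql -u root -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1;"
```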