Re: backup of databases which have a mix of MyISAM- and InnoDB-tables

2014-08-22 Thread Hartmut Holzgraefe
XtraBackup can handle both InnoDB and MyISAM in a consistent way while minimizing lock time on MyISAM tables ... http://www.percona.com/doc/percona-xtrabackup/2.1/ -- Hartmut Holzgraefe, Principal Support Engineer (EMEA) SkySQL - The MariaDB Company | http://www.skysql.com/
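For readers new to the tool, a minimal sketch of a Percona XtraBackup 2.1 run follows; the paths and credentials are placeholders, and the options actually needed depend on your setup (see the manual linked above).

    # take the backup (writes a timestamped directory under /backups/)
    innobackupex --user=backup --password=secret /backups/

    # "prepare" the copy so the InnoDB logs are applied and it is consistent
    innobackupex --apply-log /backups/2014-08-22_12-00-00/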

Re: backup of databases which have a mix of MyISAM- and InnoDB-tables

2014-08-22 Thread Reindl Harald
Am 22.08.2014 um 19:40 schrieb Lentes, Bernd: > I've already been reading the documentation the whole day, but am still confused > and unsure what to do. > > We have two databases which are important for our work. So both are stored > hourly. Now I recognized that each datab

backup of databases which have a mix of MyISAM- and InnoDB-tables

2014-08-22 Thread Lentes, Bernd
Hi, I've already been reading the documentation the whole day, but am still confused and unsure what to do. We have two databases which are important for our work. So both are stored hourly. Now I recognized that each database has a mixture of MyISAM- and InnoDB-tables. A backup of this mix

Re: Mysql backup for large databases

2012-11-02 Thread Karen Abgarian
simpler if the replica for backups and replica for failovers are the same thing. Peace Karen On 02.11.2012, at 0:55, Manuel Arostegui wrote: > Hello, > > Just one more suggestion to do full backups in large databases: > > - Dedicated slave (either physical machine, a disk cabi

Re: Mysql backup for large databases

2012-11-02 Thread Manuel Arostegui
Hello, Just one more suggestion to do full backups in large databases: - Dedicated slave (either physical machine, a disk cabinet using iscsi connections from a machine just with a bunch of RAM etc) - Get the slave delayed a certain time (ie: 1 hour, 2 hours...depends on your workload) using
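Manuel's delay mechanism is cut off above; as a hedged sketch, on MySQL 5.6+ the delay can be set natively, while older servers typically use Percona's pt-slave-delay for the same effect (host names and the one-hour delay are just examples).

    # MySQL 5.6+: keep the backup slave one hour behind the master
    mysql -e "STOP SLAVE; CHANGE MASTER TO MASTER_DELAY = 3600; START SLAVE"

    # pre-5.6 alternative (Percona Toolkit):
    pt-slave-delay --delay 1h --interval 60s h=localhost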

Re: Mysql backup for large databases

2012-11-01 Thread Karen Abgarian
Sent via my BlackBerry from Vodacom - let your email find you! > > -Original Message- > From: Reindl Harald > Date: Thu, 01 Nov 2012 16:49:45 > To: mysql@lists.mysql.com > Subject: Re: Mysql backup for large databases > > good luck

RE: Mysql backup for large databases

2012-11-01 Thread Rick James
machiel.richa...@gmail.com [mailto:machiel.richa...@gmail.com] > Sent: Thursday, November 01, 2012 8:54 AM > To: Reindl Harald; mysql@lists.mysql.com > Subject: Re: Mysql backup for large databases > > Well, the biggest problem we have to answer for the clients is the > followin

Re: Mysql backup for large databases

2012-11-01 Thread Reindl Harald
Sent via my BlackBerry from Vodacom - let your email find you! > > -Original Message- > From: Reindl Harald > Date: Thu, 01 Nov 2012 16:49:45 > To: mysql@lists.mysql.com > Subject: Re: Mysql backup for large databases > > good luck >

Re: Mysql backup for large databases

2012-11-01 Thread machiel . richards
for large databases good luck i would call snapshots on a running system much more dumb than "innodb_flush_log_at_trx_commit = 2" on systems with 100% stable power instead waste IOPS on shared storages Am 01.11.2012 16:45, schrieb Singer Wang: > Assuming you'

Re: Mysql backup for large databases

2012-11-01 Thread Reindl Harald
good luck i would call snapshots on a running system much more dumb than "innodb_flush_log_at_trx_commit = 2" on systems with 100% stable power instead waste IOPS on shared storages Am 01.11.2012 16:45, schrieb Singer Wang: > Assuming you're not doing dumb stuff like innodb_flush_log_at_tx=0 or 2

Re: Mysql backup for large databases

2012-11-01 Thread Singer Wang
Assuming you're not doing dumb stuff like innodb_flush_log_at_tx=0 or 2 and etc, you should be fine. We have been using the trio: flush tables with read lock, xfs_freeze, snapshot for months now without any issues. And we test the backups (we load the backup into a staging once a day, and dev once
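A rough shell sketch of the trio Singer describes; volume and mount names are placeholders, error handling is omitted, and the key constraint is that one client session must hold the read lock while the filesystem is frozen and the snapshot is taken (the client's built-in "system" command is one way to do that; mylvmbackup wraps essentially the same sequence).

    mysql <<'EOF'
    FLUSH TABLES WITH READ LOCK;
    system xfs_freeze -f /var/lib/mysql
    system lvcreate --snapshot --size 10G --name mysqlsnap /dev/vg0/mysqldata
    system xfs_freeze -u /var/lib/mysql
    UNLOCK TABLES;
    EOF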

Re: Mysql backup for large databases

2012-11-01 Thread Reindl Harald
Am 01.11.2012 16:36, schrieb Singer Wang: > On Thu, Nov 1, 2012 at 11:34 AM, Rick James > wrote: > > Full backup: > * Xtrabackup (Backup: slight impact on source; more if you have MyISAM > (as mentioned)) > * Slave (Backup: zero impact on Master -- once

Re: Mysql backup for large databases

2012-11-01 Thread Singer Wang
e (5.6.x?), you can disconnect a partition from a > table and move it to another table; this will greatly speed up "archiving". > > > -Original Message- > > From: Reindl Harald [mailto:h.rei...@thelounge.net] > > Sent: Thursday, November 01, 2012 4:47 AM > > To:

RE: Mysql backup for large databases

2012-11-01 Thread Rick James
t] > Sent: Thursday, November 01, 2012 4:47 AM > To: mysql@lists.mysql.com > Subject: Re: Mysql backup for large databases > > > > Am 01.11.2012 11:28, schrieb Machiel Richards - Gmail: > > Using mysqldump and restores on an 80-100GB database seems a bit > > un

Re: Mysql backup for large databases

2012-11-01 Thread Reindl Harald
Am 01.11.2012 11:28, schrieb Machiel Richards - Gmail: > Using mysqldump and restores on an 80-100GB database seems a bit unpractical > as the restore times seems to > get quite long as well as the backup times. * setup a master/slave configuration * stop the slave * rsync the raw datadir to wh
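Reindl's steps are truncated above; one conservative reading of them, with host names and paths as placeholders, looks roughly like this (shutting mysqld down on the slave keeps the copied InnoDB files consistent):

    # on the backup slave
    mysql -e "STOP SLAVE"
    mysqladmin shutdown
    rsync -a /var/lib/mysql/ backuphost:/backups/mysql-$(date +%F)/
    service mysql start      # replication resumes and the slave catches up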

Re: Mysql backup for large databases

2012-11-01 Thread Radoulov, Dimitre
On 01/11/2012 11.28, Machiel Richards - Gmail wrote: [...] I am busy investigating some options relating to the backup for MySQL databases when they get quite large. When using the MySQL enterprise, there is the option to use the MySQL enterprise backup as it is part of the

Mysql backup for large databases

2012-11-01 Thread Machiel Richards - Gmail
Hi All I am busy investigating some options relating to the backup for MySQL databases when they get quite large. When using the MySQL enterprise, there is the option to use the MySQL enterprise backup as it is part of the Enterprise license. However, when using the GA

Re: Dumping drupal databases

2012-10-31 Thread Tim Johnson
* Reindl Harald [121031 17:22]: > > > Am 01.11.2012 01:54, schrieb Tim Johnson: > > * Reindl Harald [121031 08:12]: > >> you MUST NOT use "localhost" if you want to connect to > >> a different mysqld-port because "localhost" is unix-socket > >> > >> mysql -h 127.0.0.1 --port=3307 -u -p > > I

Re: Dumping drupal databases

2012-10-31 Thread Reindl Harald
Am 01.11.2012 01:54, schrieb Tim Johnson: > * Reindl Harald [121031 08:12]: >> you MUST NOT use "localhost" if you want to connect to >> a different mysqld-port because "localhost" is unix-socket >> >> mysql -h 127.0.0.1 --port=3307 -u -p > I get "access denied" when I do that. > thanks d

Re: Dumping drupal databases

2012-10-31 Thread Tim Johnson
hat whatever > >> installer you used created a second instance of mysql. Your drupal > >> configfile should hold the necessary data to connect to it. > > > > You are correct. Running on 3307. (from settings.php) But even > > logging into that instance was not

Re: Dumping drupal databases

2012-10-31 Thread Reindl Harald
nstance of mysql. Your drupal >> configfile should hold the necessary data to connect to it. > > You are correct. Running on 3307. (from settings.php) But even > logging into that instance was not showing me the drupal > databases - so I was stumped! I would guess that was because

Re: Dumping drupal databases

2012-10-31 Thread Tim Johnson
* Johan De Meersman [121031 07:10]: > Tim Johnson wrote: > > >* Ananda Kumar [121030 09:48]: > >> why dont u create a softlink > > From /opt/local/var/db/mysql5/ to /opt/local/var/db/mysql5/ ??? > > > > I can try that, but I am doing things to MySQL that I have never > > done before and I am re

Re: Dumping drupal databases

2012-10-31 Thread Tim Johnson
ata to connect to it. You are correct. Running on 3307. (from settings.php) But even logging into that instance was not showing me the drupal databases - so I was stumped! I would guess that was because the 3307 instance was using the system-wide my.cnf instead of the drupal my.cnf. I c

Re: Dumping drupal databases

2012-10-30 Thread Tim Johnson
* Tim Johnson [121030 10:40]: > OP : > * Tim Johnson [121029 16:28]: <...snip> > > I want to back up the mysql data for drupal. > > However, I can't locate those databases and tables using MySQL > > server or PHPMyAdmin, even if I start mysql on port 3307.

Re: Dumping drupal databases

2012-10-30 Thread Tim Johnson
OP : * Tim Johnson [121029 16:28]: > I've recently installed drupal 7.15 on Mac OS X 10.7. > > I want to back up the mysql data for drupal. > However, I can't locate those databases and tables using MySQL > server or PHPMyAdmin, even if I start mysql on port 3307. >

Re: Dumping drupal databases

2012-10-30 Thread Tim Johnson
* Hassan Schroeder [121030 08:25]: > On Tue, Oct 30, 2012 at 9:12 AM, Tim Johnson wrote: > > > I might try cp -p -r, but need a second opinion. > > If you want to dump one or more databases, presumably you know > the name(s); just use the mysqldump utility. Copying files

Re: Dumping drupal databases

2012-10-30 Thread Tim Johnson
* Ananda Kumar [121030 09:48]: > why dont u create a softlink From /opt/local/var/db/mysql5/ to /opt/local/var/db/mysql5/ ??? I can try that, but I am doing things to MySQL that I have never done before and I am reluctant to risk clobbering a complex development environment that has nothing t

Re: Dumping drupal databases

2012-10-30 Thread Tim Johnson
* Kishore Vaishnav [121030 08:25]: <...snip> > Or the other option is to install a module in Drupal, "Backup & > Restore": you can just log in through Drupal and take a > dump of the existing database if you have admin credentials. OK. Use drupal directly? That makes sense. I see from htt

Re: Dumping drupal databases

2012-10-30 Thread Ananda Kumar
why dont u create a softlink On Tue, Oct 30, 2012 at 11:05 PM, Tim Johnson wrote: > * Reindl Harald [121030 08:49]: > > >The drupal mysql datafiles are located at > > > /Applications/drupal-7.15-0/mysql/data > > > > > > as opposed to /opt/local/var/db/mysql5 for > > > 'customary' mysql. > > > >

Re: Dumping drupal databases

2012-10-30 Thread Tim Johnson
* Reindl Harald [121030 08:49]: > >The drupal mysql datafiles are located at > > /Applications/drupal-7.15-0/mysql/data > > > > as opposed to /opt/local/var/db/mysql5 for > > 'customary' mysql. > > this crap is outside your mysqldata I don't know what you mean by "crap". Sorry. Actually I do .

Re: Dumping drupal databases

2012-10-30 Thread Tim Johnson
, so I will try > > again: > > > > * > > mysqldump does not recognize the drupal databases! > > Example : > > linus:prj tim$ mysqldump event -hlocalhost -utim > > -pXX > event.sql > > mysqldump: G

Re: Dumping drupal databases

2012-10-30 Thread Shawn Green
recognize the drupal databases! Example : linus:prj tim$ mysqldump event -hlocalhost -utim -pXX > event.sql mysqldump: Got error: 1049: Unknown database 'event' when selecting the database ... snip ... Your syntax is inverted. Put the name of the da
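Spelled out, Shawn's correction is that mysqldump takes its options first and the database name after them; combined with the earlier advice to connect over TCP to the drupal instance on port 3307 rather than through the localhost socket, the command would look roughly like this (credentials as in the thread):

    mysqldump -h 127.0.0.1 --port=3307 -u tim -p event > event.sql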

Re: Dumping drupal databases

2012-10-30 Thread Tim Johnson
* Tim Johnson [121030 08:37]: > * > I remain as dumb as ever, but I hope I have made myself clearer > regards To elaborate further : See this entry from mysql --help """ Default options are read from the following files in

Re: Dumping drupal databases

2012-10-30 Thread Reindl Harald
Am 30.10.2012 17:34, schrieb Tim Johnson: > * Reindl Harald [121030 08:25]: >> if they are MyISAM tables it is the best way because all data >> are in a folder with the database name - i never in my life >> used a dump to migrate mysql-databases while started with >> M

Re: Dumping drupal databases

2012-10-30 Thread Tim Johnson
* Reindl Harald [121030 08:25]: > if they are MyISAM tables it is the best way because all data > are in a folder with the database name - i never in my life > used a dump to migrate mysql-databases while started with > MySQL-3.x long years ago and moved them between Linux, > Windows

Re: Dumping drupal databases

2012-10-30 Thread Reindl Harald
Am 30.10.2012 17:17, schrieb Hassan Schroeder: > On Tue, Oct 30, 2012 at 9:12 AM, Tim Johnson wrote: > >> I might try cp -p -r, but need a second opinion. > > If you want to dump one or more databases, presumably you know > the name(s); just use the mysqldump utility.

Re: Dumping drupal databases

2012-10-30 Thread Kishore Vaishnav
If the question is "how to dump the drupal database?", then here is one of the options. You can look at the details mentioned in the default/settings.php file and use mysql to export the data. Or the other option is to install a module in Drupal, "Backup & Restore": you can just log in through Drupal

Re: Dumping drupal databases

2012-10-30 Thread Tim Johnson
* 小刀 <13488684...@163.com> [121029 18:43]: > You can check the /etc/my.cnf and find the parameter for the data_dir I'm sorry. I don't understand. I might try cp -p -r, but need a second opinion. BTW: No need to CC me. thanks -- Tim tim at tee jay forty nine dot com or akwebsoft dot com h

Re:Dumping drupal databases

2012-10-29 Thread 小刀
You can check the /etc/my.cnf and find the parameter for the data_dir At 2012-10-30 08:24:14,"Tim Johnson" wrote: >I've recently installed drupal 7.15 on Mac OS X 10.7. > >I want to back up the mysql data for drupal. >However, I can't locate those databases a

Dumping drupal databases

2012-10-29 Thread Tim Johnson
I've recently installed drupal 7.15 on Mac OS X 10.7. I want to back up the mysql data for drupal. However, I can't locate those databases and tables using MySQL server or PHPMyAdmin, even if I start mysql on port 3307. The drupal mysql datafiles are located at /Applications/drupal-7.

Re: Is there any performance difference, maintaining separate ibdata files for each and every table instead of having one single ibdata file for all databases.

2012-06-14 Thread Prabhat Kumar
separate > > ibdata files for each and every table instead of having one single ibdata file > > for all databases. > > > > hi every one > > > > Is there any performance difference, maintaining separate ibdata > > files for each and every table instead of havin

RE: Is there any performance difference, maintaining separate ibdata files for each and every table instead of having one single ibdata file for all databases.

2012-06-14 Thread Rick James
any performance difference, maintaining separate > ibdata files for each and every table instead of having one single ibdata file > for all databases. > > hi every one > > Is there any performance difference, maintaining separate ibdata > files for each and every table instead

Is there any performance difference, maintaining separate ibdata files for each and every table instead of having one single ibdata file for all databases.

2012-05-15 Thread Pothanaboyina Trimurthy
hi every one Is there any performance difference, maintaining separate ibdata files for each and every table instead of having one single ibdata file for all databases, for the InnoDB Storage Engine? please let me know the difference. -- 3murthy
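For reference, the behaviour being asked about is controlled by the innodb_file_per_table variable; a quick way to check it, and to switch it for tables created from now on, is sketched below (on older servers the variable is not dynamic and has to be set in my.cnf followed by a restart; existing tables stay in the shared tablespace either way).

    mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table'"
    mysql -e "SET GLOBAL innodb_file_per_table = 1"    # also add it under [mysqld] in my.cnf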

Re: how to read existing databases copied from somewhere else

2012-02-27 Thread Reindl Harald
Am 27.02.2012 09:05, schrieb Pengkui Luo: > Hi, I am new to mysql and have a question regarding how to read/import > existing mysql databases (i.e. binaries, not sql dumps) copied from > somewhere. > > However, the "show databases;" command did not give me the "foo

how to read existing databases copied from somewhere else

2012-02-27 Thread Pengkui Luo
Hi, I am new to mysql and have a question regarding how to read/import existing mysql databases (i.e. binaries, not sql dumps) copied from somewhere. What I now have is a whole mysql directory that contains all binary files (version 5.0.27 for 32bit Linux) copied from somewhere, including

How to split a mysqldump file of multiple databases to singlefile databases... SOLVED with script

2012-02-20 Thread Andrés Tello
Today I needed to split a mysqldump -A into its several databases. I didn't have access to the original source, so I only had the text file to work with. It was a webhosting server dump, so there was a LOT of databases... I split the file with this little script I made: file= nextTable="
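Andrés' script itself is cut off above; purely as a hedged sketch of the same idea, one can key on the "-- Current Database:" marker lines that mysqldump --all-databases writes and let awk route everything that follows into a per-database file (the dump file name is a placeholder):

    awk '/^-- Current Database: `/ {
           if (out) close(out)
           db = $4; gsub(/`/, "", db); out = db ".sql"
         }
         out { print > out }' full_dump.sql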

Re: Script need for dropping databases

2011-11-18 Thread Nuno Tavares
Hi , Have a look at database information_schema.TABLES: SELECT * FROM information_schema.TABLES WHERE TABLE_SCHEMA=''; As long as your MySQL version is >= 5.1, you don't need a cron script, you can use the MySQL scheduler, create a stored procedure that will run each month. You'll need to use pr
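A hedged sketch of Nuno's suggestion, assuming the monthly tables follow a hypothetical naming pattern like data_YYYYMM in a schema called mydb (adjust both to taste); it uses the event scheduler plus a prepared statement, since DROP TABLE cannot take a variable name directly, and SET GLOBAL event_scheduler needs the SUPER privilege:

    mysql mydb <<'EOF'
    DELIMITER //
    CREATE PROCEDURE drop_expired_month()
    BEGIN
      -- the table that is now 24 months old, e.g. data_200911
      SET @t := CONCAT('data_', DATE_FORMAT(NOW() - INTERVAL 24 MONTH, '%Y%m'));
      SET @s := CONCAT('DROP TABLE IF EXISTS ', @t);
      PREPARE stmt FROM @s; EXECUTE stmt; DEALLOCATE PREPARE stmt;
    END //
    CREATE EVENT purge_old_month ON SCHEDULE EVERY 1 MONTH DO CALL drop_expired_month() //
    DELIMITER ;
    SET GLOBAL event_scheduler = ON;
    EOF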

Re: Script need for dropping databases

2011-11-18 Thread Mohan L
On Fri, Nov 18, 2011 at 1:32 PM, a bv wrote: > Hi, > I have a linux box running mysql plus phpmyadmin which has tables > getting monthly data and when a new month starts a new table is > created. I want to store only 2 years of data so when a new month > starts i need to drop the table which b

Script need for dropping databases

2011-11-18 Thread a bv
Hi, I have a linux box running mysql plus phpmyadmin which has tables getting monthly data and when a new month starts a new table is created. I want to store only 2 years of data so when a new month starts i need to drop the table which became the data container of 2 years previous data. So to

Re: calculate the total revenue from the two databases

2011-06-17 Thread Hal�sz S�ndor
>>>> 2011/06/17 14:09 +0700, HaidarPesebe >>>> Please help calculate the total revenue from the two databases below. The first database is the name of the item and price. The second database is the goods sold. I will make a recapitulation of every month to my total in

calculate the total revenue from the two databases

2011-06-17 Thread HaidarPesebe
Please help calculate the total revenue from the two databases below. The first database is the name of the item and price. The second database is the goods sold. I will make a recapitulation of every month to my total income (total only). I've tried but always failed. TABLE A (item nam
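The table definitions are cut off above, so purely as a hedged illustration with made-up table and column names (table a holding item and price, table b holding item, qty and sale_date), a per-month total would look roughly like:

    mysql shopdb -e "
      SELECT DATE_FORMAT(b.sale_date, '%Y-%m') AS month,
             SUM(a.price * b.qty)              AS total_income
      FROM   b JOIN a ON a.item = b.item
      GROUP  BY month"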

managing a foreign key constraint over two databases

2011-02-21 Thread James Smith
ACK /* otherwise */ COMMIT - This makes me think that I can easily move the FILE and THEME tables to different databases if I use a distributed transaction in the above. I understand that on a very larg

Re: Moving of databases from one server to another

2010-12-08 Thread Machiel Richards
Hi Johan Thank you for the advice... Problem resolved. Regards Machiel -Original Message- From: Johan De Meersman To: Machiel Richards Cc: mysql mailing list Subject: Re: Moving of databases from one server to another Date: Wed, 8 Dec 2010 11:15:52 +0100 On Wed, Dec 8, 2010 at

Re: Moving of databases from one server to another

2010-12-08 Thread Johan De Meersman
this. That user is used mostly for system-based maintenance: table checks at startup, clean shutdown and package upgrade operations, plus during the install of some other packages to create and initialize their necessary databases. -- Bier met grenadyn Is als mosterd by den wyn Sy die'

Re: Moving of databases from one server to another

2010-12-08 Thread Machiel Richards
Hi Johan Would the server require a restart after this or not? Machiel -Original Message- From: Johan De Meersman To: Machiel Richards Cc: mysql mailing list Subject: Re: Moving of databases from one server to another Date: Wed, 8 Dec 2010 11:02:55 +0100 That's a very D

Re: Moving of databases from one server to another

2010-12-08 Thread Johan De Meersman
That's a very Debian-specific issue. The credentials for the debian-sys-maint user are randomly generated at install, and stored in /etc/mysql/debian.cnf. Either copy the file from the old to the new machine, or update the user's password on the new machine to the one in the file. On Wed, Dec 8,
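Concretely, either of the two fixes Johan describes can be done like this on the new machine; the password value is whatever the old host's /etc/mysql/debian.cnf contains (host name and password are placeholders):

    # option 1: carry the credentials file over
    scp oldhost:/etc/mysql/debian.cnf /etc/mysql/debian.cnf

    # option 2: make the new server match the file already on the new host
    mysql -e "SET PASSWORD FOR 'debian-sys-maint'@'localhost' = PASSWORD('value-from-debian.cnf')"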

Re: Moving of databases from one server to another

2010-12-08 Thread Ananda Kumar
copy the /etc/init.d/mysql file from your old m/c to the new and try the start/stop. regards anandkl On Wed, Dec 8, 2010 at 2:21 PM, Machiel Richards wrote: > HI All > >I am hoping someone has had this before as this one is baffling me > entirely. > >We did a MySQL database move from on

Moving of databases from one server to another

2010-12-08 Thread Machiel Richards
Hi All I am hoping someone has had this before as this one is baffling me entirely. We did a MySQL database move from one machine to another one last night. The O/S versions are the same and so is the database version (5.1.22). The database was installed and configured

Re: Replication on MySQL databases

2010-11-04 Thread Walter Heck
Classic scenario where MMM will be your best bet. Check out http://mysql-mmm.org for more information. Set up two masters and two or more slaves for full High Availability. It scales extremely well if your application is read-heavy (which most applications are). If you need help implementing this, I

Re: Replication on MySQL databases

2010-11-04 Thread Machiel Richards
: Johan De Meersman To: Machiel Richards Cc: mysql mailing list Subject: Re: Replication on MySQL databases Date: Thu, 4 Nov 2010 10:21:11 +0100 If your sites are busy with *writes*, you're kind of stuck. Replication means that every write that happens on one side, also MUST happen on the other

Re: Replication on MySQL databases

2010-11-04 Thread Johan De Meersman
in the next hour and was requested this > morning to research possible load balancing options for MySQL database. > > > What is currently running is a website (balanced over a couple of > web servers all connecting to the same database) using apache and jdk. > > 2 MySQL

Replication on MySQL databases

2010-11-04 Thread Machiel Richards
over a couple of web servers all connecting to the same database) using apache and jdk. 2 MySQL databases running as Master/Slave replication with all reads and writes going to the master and the slave being used for data exports and failover if required. The websites are rather busy

Re: Unique ID's across multiple databases

2010-09-14 Thread Johnny Withers
I may have missed what you are trying to do here. NoSQL is really a bad name and should really be renamed to NoREL instead. NoSQL implementations are not used just because of limitations of traditional RDBMS when it comes to sheer traffic volume, they are also used because they scale horizontally v

Re: Unique ID's across multiple databases

2010-09-13 Thread Johan De Meersman
On Mon, Sep 13, 2010 at 8:59 PM, Johnny Withers wrote: > > This sounds like a good job for a 'NoSQL' system. Maybe? > I can't help but blink at that. How exactly is NoSQL going to fix issues that are related to topology, not inherent SQL limitations ? Which particular incarnation of NoSQL are you

RE: Unique ID's across multiple databases

2010-09-13 Thread Wm Mussatto
better (Postgres, Oracle, SQL Server, etc?) > > I too have a need for a unique identifier that will "mesh" with other > databases periodically. So that a user in one "local" DB/server will get > migrated to a master DB which in turn will sync up with remote sites so

RE: Unique ID's across multiple databases

2010-09-13 Thread Daevid Vincent
is a good summary about the issues: > http://www.mysqlperformanceblog.com/2007/03/13/to-uuid-or-not-to-uuid/ Is this UUID issue unique to mySQL or are there other RDBMS's that handle it better (Postgres, Oracle, SQL Server, etc?) I too have a need for a unique identifier that will "mesh"

RE: Unique ID's across multiple databases

2010-09-13 Thread Jerry Schwartz
From: Kiss Dániel [mailto:n...@dinagon.com] Sent: Monday, September 13, 2010 3:17 PM To: Jerry Schwartz Cc: Johan De Meersman; Max Schubert; mysql@lists.mysql.com; replicat...@lists.mysql.com Subject: Re: Unique ID's across multiple databases Well, that would be the plan, yes. :-) Anyway,

Re: Unique ID's across multiple databases

2010-09-13 Thread Kiss Dániel
M > >To: Jerry Schwartz > >Cc: Johan De Meersman; Max Schubert; mysql@lists.mysql.com; > >replicat...@lists.mysql.com > >Subject: Re: Unique ID's across multiple databases > > > >Well, not exactly. > > > >I do not own all the databases. Some o

Re: Unique ID's across multiple databases

2010-09-13 Thread Johnny Withers
artz > >Cc: Johan De Meersman; Max Schubert; mysql@lists.mysql.com; > >replicat...@lists.mysql.com > >Subject: Re: Unique ID's across multiple databases > > > >Well, not exactly. > > > >I do not own all the databases. Some of them are placed at customers

RE: Unique ID's across multiple databases

2010-09-13 Thread Jerry Schwartz
>-Original Message- >From: Kiss Dániel [mailto:n...@dinagon.com] >Sent: Monday, September 13, 2010 11:49 AM >To: Jerry Schwartz >Cc: Johan De Meersman; Max Schubert; mysql@lists.mysql.com; >replicat...@lists.mysql.com >Subject: Re: Unique ID's across multiple data

Re: Unique ID's across multiple databases

2010-09-13 Thread Kiss Dániel
Well, not exactly. I do not own all the databases. Some of them are placed at customers, some of them are at my data warehouse. So, neither NAS nor Fibre Channel is a solution in this case. On Mon, Sep 13, 2010 at 4:30 PM, Jerry Schwartz wrote: > >-Original Message- > >

RE: Unique ID's across multiple databases

2010-09-13 Thread Jerry Schwartz
>-Original Message- >From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De >Meersman >Sent: Monday, September 13, 2010 7:27 AM >To: Kiss Dániel >Cc: Max Schubert; mysql@lists.mysql.com; replicat...@lists.mysql.com >Subject: Re: Unique ID's

RE: Unique ID's across multiple databases

2010-09-13 Thread Jerry Schwartz
>-Original Message- >From: Kiss Dániel [mailto:n...@dinagon.com] >Sent: Sunday, September 12, 2010 1:47 PM >To: mysql@lists.mysql.com; replicat...@lists.mysql.com >Subject: Unique ID's across multiple databases > >Hi, > >I'm designing a master-to-maste

Re: Unique ID's across multiple databases

2010-09-13 Thread Kiss Dániel
; >> One bad connection will break the chain, though, so in effect you'll be > >> multiplying the disconnecting rate... > >> > >> I think you'd be better of with a star topology, but MySQL unfortunately > >> only allows ring-types. This is gonna req

Re: Unique ID's across multiple databases

2010-09-13 Thread Kiss Dániel
pairs. To use two fields for primary and foreign keys is not the most convenient to say the least. :) I am just wondering if anyone has any better idea to fulfill the requirements (small index size, dynamically increasable number of databases in the array, incremental-like ID's are optimal for

Re: Unique ID's across multiple databases

2010-09-13 Thread Fish Kungfu
ySQL unfortunately >> only allows ring-types. This is gonna require some good thinking on your >> part :-) >> >> On Mon, Sep 13, 2010 at 12:28 PM, Kiss Dániel wrote: >> >> > This is actually more for failover scenarios where databases are spread >> in &

Re: Unique ID's across multiple databases

2010-09-13 Thread Fish Kungfu
od thinking on your > part :-) > > On Mon, Sep 13, 2010 at 12:28 PM, Kiss Dániel wrote: > > > This is actually more for failover scenarios where databases are spread > in > > multiple locations with unreliable internet connections. But you want to > > keep e

Re: Unique ID's across multiple databases

2010-09-13 Thread Johan De Meersman
e good thinking on your part :-) On Mon, Sep 13, 2010 at 12:28 PM, Kiss Dániel wrote: > This is actually more for failover scenarios where databases are spread in > multiple locations with unreliable internet connections. But you want to > keep every single location working even when t

Re: Unique ID's across multiple databases

2010-09-13 Thread Kiss Dániel
This is actually more for failover scenarios where databases are spread in multiple locations with unreliable internet connections. But you want to keep every single location working even when they are cut off from the other databases. The primary purpose is not load distribution. On Mon, Sep 13

Re: Unique ID's across multiple databases

2010-09-13 Thread Johan De Meersman
On Sun, Sep 12, 2010 at 9:45 PM, Kiss Dániel wrote: > offset + increment thingy is good if you know in advance that you'll have a > limited number of servers. But if you have no idea that you will have 2, > 20, > or 200 servers in your array in the future, you just can't pick an optimal > What b

Re: Unique ID's across multiple databases

2010-09-12 Thread Kiss Dániel
You may be right. I'm not arguing that offset + increment is working. I'm just wondering if that's the optimal solution when you do not know how many servers you will have in your array in the future. In my view, the offset + increment thingy is good if you know in advance that you'll have a limit

Re: Unique ID's across multiple databases

2010-09-12 Thread Max Schubert
Server offset + increment works really well, is simple, and well documented and reliable - not sure why you would want to re-invent something that works so well :).
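For completeness, the scheme Max refers to is just two server variables; a sketch for a two-node master-master pair follows (the values are illustrative, and if the array may grow it is common to pick an increment larger than the current node count, e.g. 10, to leave room). Make the same settings permanent under [mysqld] in my.cnf.

    # node 1
    mysql -e "SET GLOBAL auto_increment_increment = 2; SET GLOBAL auto_increment_offset = 1"

    # node 2
    mysql -e "SET GLOBAL auto_increment_increment = 2; SET GLOBAL auto_increment_offset = 2"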

Re: Unique ID's across multiple databases

2010-09-12 Thread Marcus Bointon
, even if > using BIGINT's. You can maintain your own sequence tables a la postgres if you use transactions to ensure atomicity, though that doesn't help across databases (I suspect the same is true in postgres). FWIW my auto_increment_offset value is usually the same as my server ID

Unique ID's across multiple databases

2010-09-12 Thread Kiss Dániel
Hi, I'm designing a master-to-master replication architecture. I wonder what the best way is to make sure both databases generate unique row ID's, so there won't be ID conflicts when replicating in both directions. I read on forums about the pros and cons of using UUID's

Is this the right export / import command for all databases and users?

2010-08-18 Thread Nunzio Daveri
Hi all, I have upgraded a few test boxes and everything seems to work fine BUT I wanted to verify with the gurus if my syntax is correct so as to avoid any future problems ;-) The purpose is to dump all databases and users / user privileges from our 4.1.20 server and import it into our

More Tools to Work with MySQL Databases in Visual Studio Provided by dbForge Fusion!

2010-07-27 Thread Julia Samarska
Devart Email: i...@devart.com Web: http://www.devart.com FOR IMMEDIATE RELEASE CONTACT INFORMATION: Julia Samarska jul...@devart.com 27-Jul-2010 More Tools to Work with MySQL Databases in Visual Studio Provided by dbForge Fusion! Devart today releases dbForge Fusion for MySQL

Re: how to setup replication - MySQL 5.0.x - Migration and new databases

2010-07-14 Thread Michael Dykman
Otherwise, you can obtain a reliable binary snapshot of InnoDB tables > only after shutting down the MySQL Server. ... If you are replicating only certain databases then make sure you copy > only those files that relate to those tables. (For InnoDB, all tables in > all databases are st

Re: how to setup replication - MySQL 5.0.x - Migration and new databases

2010-07-13 Thread lejeczek
in a reliable binary snapshot of InnoDB tables only after shutting down the MySQL Server. ... If you are replicating only certain databases then make sure you copy only those files that relate to those tables. (For InnoDB, all tables in all databases are stored in the shared tablespace files,

More Tools to Work with MySQL Databases Provided by dbForge Studio!

2010-07-12 Thread Julia Samarska
Devart Email: i...@devart.com Web: http://www.devart.com FOR IMMEDIATE RELEASE CONTACT INFORMATION: Julia Samarska jul...@devart.com 12-Jul-10 More Tools to Work with MySQL Databases Provided by dbForge Studio! With dbForge Studio for MySQL, Devart continues its initiative to produce

Re: how to setup replication - MySQL 5.0.x - Migration and new databases

2010-06-10 Thread Joerg Bruehe
Hi all! Götz Reinicke - IT-Koordinator wrote: > Am 08.06.10 12:05, schrieb Rob Wultsch: >> On Mon, Jun 7, 2010 at 11:59 PM, Götz Reinicke - IT-Koordinator >> wrote: >>> Hi, >>> >>> we do have different LAMP systems and recently I started to put some >

Re: how to setup replication - MySQL 5.0.x - Migration and new databases

2010-06-10 Thread Götz Reinicke - IT-Koordinator
Am 08.06.10 12:05, schrieb Rob Wultsch: > On Mon, Jun 7, 2010 at 11:59 PM, Götz Reinicke - IT-Koordinator > wrote: >> Hi, >> >> we do have different LAMP systems and recently I started to put some >> mysql databases on one, new master server. (RedHat, Fedora, MySQL

Re: how to setup replication - MySQL 5.0.x - Migration and new databases

2010-06-08 Thread Rob Wultsch
On Mon, Jun 7, 2010 at 11:59 PM, Götz Reinicke - IT-Koordinator wrote: > Hi, > > we do have different LAMP systems and recently I started to put some > mysql databases on one, new master server. (RedHat, Fredora, MySQL 4.x - > 5.0.xx) MySQL 4.X is EOL. I strongly suggest not u

how to setup replication - MySQL 5.0.x - Migration and new databases

2010-06-07 Thread Götz Reinicke - IT-Koordinator
Hi, we do have different LAMP systems and recently I started to put some mysql databases on one, new master server. (RedHat, Fedora, MySQL 4.x - 5.0.xx) I did this by exporting some databases with mysqldump and importing them on the new server. Now I'd like to add a slave mysql server and
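A hedged outline of the usual dump-based way to seed such a slave; host names, credentials and the binlog coordinates are placeholders (the real coordinates are in the comment that --master-data=2 writes near the top of the dump), and --single-transaction only gives a consistent view for InnoDB tables, so MyISAM-heavy servers would use --lock-all-tables instead.

    # on the master: dump everything with the binlog position recorded
    mysqldump --all-databases --master-data=2 --single-transaction > seed.sql

    # on the new slave: load it, then point the slave at the master
    mysql < seed.sql
    mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
              MASTER_USER='repl', MASTER_PASSWORD='secret',
              MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=107;
              START SLAVE"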

MySQL University session on March 4: MySQL Column Databases

2010-03-02 Thread Stefan Hinz
MySQL University: MySQL Column Databases http://forge.mysql.com/wiki/MySQL_Column_Databases This Thursday (March 4th, 15:00 UTC - slightly later than usual), Robin Schumacher will present MySQL Column Databases. If you're doing data warehousing with your databases this is a must-attend, but

Re: Importing large databases faster

2009-12-18 Thread Brent Clark
On 17/12/2009 17:46, mos wrote: "Load Data ..." is still going to be much faster. Mike Hiya If you are on Linux and using LVM, look at mylvmbackup. HTH Brent Clark
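mos's "Load Data" point, quoted above, in concrete shell form as a hedged sketch: dump to tab-delimited files (one .sql schema file plus one .txt data file per table, written by the server, so the directory must be writable by mysqld), then reload the data with mysqlimport, which drives LOAD DATA INFILE and can run several tables in parallel. Paths, database name and thread count are placeholders.

    mysqldump --tab=/backups/mydb --single-transaction mydb

    cat /backups/mydb/*.sql | mysql mydb            # recreate the tables
    mysqlimport --use-threads=4 mydb /backups/mydb/*.txt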

Re: Importing large databases faster

2009-12-17 Thread mos
At 03:59 AM 12/17/2009, you wrote: Madison Kelly wrote: Hi all, I've got a fairly large set of databases I'm backing up each Friday. The dump takes about 12.5h to finish, generating a ~172 GB file. When I try to load it though, *after* manually dumping the old databases, it takes

Re: Importing large databases faster

2009-12-17 Thread Jay Ess
Madison Kelly wrote: Hi all, I've got a fairly large set of databases I'm backing up each Friday. The dump takes about 12.5h to finish, generating a ~172 GB file. When I try to load it though, *after* manually dumping the old databases, it takes 1.5~2 days to load the same datab

Re: Importing large databases faster

2009-12-16 Thread Shawn Green
Madison Kelly wrote: Hi all, I've got a fairly large set of databases I'm backing up each Friday. The dump takes about 12.5h to finish, generating a ~172 GB file. When I try to load it though, *after* manually dumping the old databases, it takes 1.5~2 days to load the same datab

RE: Importing large databases faster

2009-12-16 Thread Gavin Towey
n Towey Cc: mysql@lists.mysql.com Subject: Re: Importing large databases faster Gavin Towey wrote: > There are scripts out there such as the Maatkit mk-parallel-dump/restore that > can speed up this process by running in parallel. > > However if you're doing this every week on that l

Re: Importing large databases faster

2009-12-16 Thread Madison Kelly
then only take as long as it takes for you to scp the database from one machine to another. Regards, Gavin Towey Thanks! Will the Maatkit script work on a simple --all-databases dump? As for the copy, it's a temporary thing. This is just being done weekly while we test out the new se
