Hi,
It is not very surprising that the database cannot recover from a Time Machine
backup. This generally applies to any software that is running at the moment
the backup is taken. InnoDB is especially sensitive to taking what is
called a 'dirty' backup because it has a cache. You
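The point above can be sidestepped with a logical dump taken while the server runs. A minimal sketch, assuming the database is called mydb and the backup path is a placeholder:

```shell
# A consistent logical backup of InnoDB tables on a live server.
# --single-transaction reads from a consistent snapshot without locking
# the tables; a file-level copy such as Time Machine cannot do this,
# because InnoDB's cached pages never hit disk in a consistent state.
mysqldump --single-transaction --routines --triggers mydb > /backups/mydb.sql
```

The dump can then be restored with the mysql client; unlike a dirty file copy, it is always internally consistent.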
A touch of realism: we are all dying. For some, it may take a while,
hopefully.
On 04.12.2012, at 9:53, Tim Pownall wrote:
MySQL is used by just about every web host and is one of the most common
database servers around the world. I do not have any intent to stop using
MySQL unless they
, Dec 4, 2012 at 12:50 PM, Singer Wang w...@singerwang.com wrote:
Lol! Good point Karen!
On Tue, Dec 4, 2012 at 1:02 PM, Karen Abgarian a...@apple.com wrote:
A touch of realism: we are all dying. For some, it may take a while,
hopefully.
On 04.12.2012, at 9:53, Tim Pownall wrote
Hello,
Well, you have just invented what is known as index organized tables. The
MyISAM engine does not implement those.
If it did, it would have to deal with quite a few circumstances unique to IOTs.
One such circumstance is the degradation
of efficiency as record length increases,
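For contrast, InnoDB does organize rows this way: the table is stored in the leaf pages of the primary-key B-tree, which is essentially an index-organized table. A sketch, with made-up table and column names:

```sql
-- InnoDB keeps the full row in the PRIMARY KEY B-tree (a clustered
-- index), so the table IS the index -- an index-organized table.
CREATE TABLE orders (
  order_id    BIGINT NOT NULL,
  customer_id INT NOT NULL,
  payload     VARCHAR(2000),   -- long rows bloat the clustered B-tree,
                               -- the efficiency drawback mentioned above
  PRIMARY KEY (order_id)
) ENGINE=InnoDB;
```

MyISAM, by comparison, stores rows in a heap file and keeps all indexes separate.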
It is always fun to watch people get into a conflict about something silly and
unimportant...
On 18.11.2012, at 18:13, Reindl Harald wrote:
Am 19.11.2012 02:07, schrieb Tianyin Xu:
You are saying as long as admins are careful, there's no misconfiguration?
But why misconfigurations
it.
And lastly, but probably most important: test your backups periodically!
Hope this helps
Manuel.
2012/11/1 Karen Abgarian a...@apple.com
Hi,
For doing backups on the primary database, I know nothing better than having your
tables in InnoDB and using Innobackup (or MySQL Enterprise Backup). This,
however, still has the possibility of hanging, as it uses FLUSH TABLES WITH
READ LOCK for taking backups of MyISAM tables. One
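The workflow above can be sketched with Percona XtraBackup, a free tool in the same family as MySQL Enterprise Backup (directory paths are placeholders; older releases used the innobackupex wrapper instead):

```shell
# Hot physical backup: InnoDB data is copied without locks, but MyISAM
# tables are still copied under FLUSH TABLES WITH READ LOCK, so the
# brief global lock discussed above can still occur.
xtrabackup --backup  --target-dir=/backups/full

# Apply the redo log so the copied datafiles are consistent.
xtrabackup --prepare --target-dir=/backups/full
```

The prepared directory can then be copied back into the datadir to restore.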
I try to figure out what is going on by observing the stats with SHOW STATUS. There are
some reads, writes, etc., that tell something about what is going on.
Looking just at the file sizes is unlikely to tell much about the
progress.
If there is a better way to monitor this progress, I would
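As a concrete version of the SHOW STATUS approach above, sampling a few counters twice and comparing the deltas shows whether the server is actually making progress:

```sql
-- Run these, wait an interval, run them again; growing deltas mean work
-- is being done even when file sizes look static.
SHOW GLOBAL STATUS LIKE 'Innodb_rows_%';       -- rows read/inserted/updated/deleted
SHOW GLOBAL STATUS LIKE 'Handler_write';       -- row writes at the storage-engine layer
SHOW GLOBAL STATUS LIKE 'Innodb_data_written'; -- bytes physically written by InnoDB
```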
Hi,
If MyISAM tables were being written directly to disk, the MyISAM tables would
be so slow that nobody would ever use them. That's the cornerstone of their
performance: the writes do not wait for the physical I/O to complete!
On May 8, 2012, at 3:07 AM, Johan De Meersman wrote:
Hi,
A couple of cents on this.
There isn't really a million block writes. The record gets added to the
block, but the block gets modified in the OS cache if we assume MyISAM tables, and in the
InnoDB buffer if we assume InnoDB tables. In both cases, the actual writing
does not take place and does
should note that random indexes, such as
GUIDs, MD5s, etc, tend to
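The point about random keys can be illustrated: a sequential key appends to the hot rightmost index page, while a random key like a GUID scatters inserts across the whole index, defeating the buffering just described. A sketch with illustrative table names:

```sql
-- Sequential key: inserts land on the rightmost page of the PRIMARY
-- KEY B-tree, which stays cached; the GUID is kept as a plain column.
CREATE TABLE events_seq (
  id   BIGINT AUTO_INCREMENT PRIMARY KEY,
  uuid CHAR(36) NOT NULL,
  UNIQUE KEY (uuid)
) ENGINE=InnoDB;

-- Random key: each insert may touch a different page, so the working
-- set becomes the entire index rather than its right edge.
CREATE TABLE events_rand (
  uuid CHAR(36) PRIMARY KEY
) ENGINE=InnoDB;
```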
-----Original Message-----
From: Karen Abgarian [mailto:a...@apple.com]
Sent: Monday, May 07, 2012 10:31 AM
To: mysql@lists.mysql.com
Subject: Re: Re: Why is creating indexes faster after inserting
massive data rows
not use this approach you described, which is complicated.
I agree with Johan De Meersman.
From: Karen Abgarian a...@apple.com
To: mysql@lists.mysql.com
Sent: Tuesday, May 8, 2012, 1:30 AM
Subject: Re: Re: Why is creating indexes faster after inserting massive data
I vote 1) yes 2) no
It could be a result of the app developer's convenience to just wrap anything
they submit to the database in a transaction. Selects are not transactions, but
autocommit/commit does no harm. That might be the thinking.
On 09.04.2012, at 11:38, Rozeboom, Kay [DAS] wrote:
We
will be impacted if that node is down.
….
I ran out of time. But on these subjects, the chatting could go on for
years. You may want to explain more clearly what you are trying to do. It could
make the discussion more focused.
Peace,
Karen Abgarian.
On Mar 29, 2012, at 6:23 PM, Wes Modes wrote
Hello,
Unless I misunderstood the task, the exclusive lock would be one way to solve
it. What you want to do is have both parent and children start their
activities by locking the table in exclusive mode and then performing their
operations. The parent and children will then all
The original problem is traditionally resolved by partitioning the tables by
historical range and creating a database job/event to drop older partitions and
add the new ones. Depending on the environment, some might prefer shell
scripts to do essentially the same.
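The partition-and-drop approach described above can be sketched as follows (table, partition names, and retention are illustrative, and the event scheduler must be enabled):

```sql
-- Range-partition by month so old data can be dropped in O(1)
-- instead of with a slow DELETE.
CREATE TABLE log_entries (
  id      BIGINT NOT NULL,
  created DATE   NOT NULL
)
PARTITION BY RANGE (TO_DAYS(created)) (
  PARTITION p201201 VALUES LESS THAN (TO_DAYS('2012-02-01')),
  PARTITION p201202 VALUES LESS THAN (TO_DAYS('2012-03-01'))
);

-- A database job to retire the oldest partition. In practice the
-- partition name would be computed dynamically; a fixed name is shown
-- here only to keep the sketch short. Requires event_scheduler=ON.
CREATE EVENT purge_old_logs
  ON SCHEDULE EVERY 1 MONTH
DO
  ALTER TABLE log_entries DROP PARTITION p201201;
```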
On Jan 27, 2012, at 3:08
My two cents are this.
This is the kind of problems they invented transactions for. If you find
yourself doing this on non-transactional tables, you will need to use a lot of
error checking, triggers, application checks and post-cleanups to make it work
somehow. Likely, every once in a
Hola,
At first look, it looks like one of the following will work:
1. Use MySQL's equivalent of a MERGE statement (that is, INSERT ... ON DUPLICATE KEY UPDATE).
What really happens is that when two transactions execute a SELECT followed by an
INSERT, there is no way to hold off the SELECT. The natural instinct
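Option 1 above collapses the racy SELECT-then-INSERT into one atomic statement. A sketch, assuming a counters table with a unique key on name:

```sql
-- If the key already exists the row is updated; otherwise it is
-- inserted. No window remains between the check and the write.
INSERT INTO counters (name, hits)
VALUES ('page_home', 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;
```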
Even more stuff inline there
Actually, the gas tank is a good analogy.
There is limited volume in a vehicle which must contain the tank. In this
analogy, the vehicle must have space for not just fuel but passengers, cargo,
engine, transmission, etc. The fact that the tank may grow
Hi,
I have a support case with MySQL opened on this subject. Here is what we were
able to come up with.
1. Create the table with the primary key and unique key constraints defined
but no secondary indexes.
2. Bump up InnoDB logs to 2M and especially memory to the highest there can
be.
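The two steps above can be sketched like this (table, column, and file names are made up; the configuration values depend entirely on the hardware):

```sql
-- 1. Only the constraints that must hold DURING the load: the primary
--    key and the unique key. No secondary indexes yet.
CREATE TABLE facts (
  id  BIGINT PRIMARY KEY,
  nk  VARCHAR(64) NOT NULL,
  val INT,
  UNIQUE KEY (nk)
) ENGINE=InnoDB;

-- 2. (my.cnf) enlarge the redo logs and, especially, memory:
--      innodb_log_file_size    = <large>
--      innodb_buffer_pool_size = <most of available RAM>

-- 3. Load, then build secondary indexes in one sorted pass, which is
--    the whole point of deferring them.
LOAD DATA INFILE '/tmp/facts.csv' INTO TABLE facts;
ALTER TABLE facts ADD INDEX idx_val (val);
```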
Hi inline there.
On 30.11.2011, at 0:16, Reindl Harald wrote:
Most people do not expect a gas tank to shrink once the
gas is consumed...right?
WHO THE FUCK is comparing computers with a gas tank?
Well, I do. I even managed to do it without using foul language.
Forgot to say.
On 29.11.2011, at 5:21, Reindl Harald wrote:
ibdata1 NEVER gets smaller; this is normal and a huge problem
in your case. Only if you were using innodb_file_per_table, which
is NOT the default, would the space be retired after dropping tables.
Why is this dumb innodb_file_per_table=0 the default since
Hi... there is stuff inline there.
The logic behind this is probably that without innodb_file_per_table=1
and with several large ibdata files, the space IS freed up when one does
optimize table or drop table. The space is freed up inside the database
files and can be reused.
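To make the distinction concrete: with per-table tablespaces enabled, dropping or rebuilding a table returns space to the filesystem rather than only inside ibdata1. A sketch (the setting applies to tables created after it is set):

```ini
# my.cnf -- each table gets its own .ibd file; DROP TABLE and
# OPTIMIZE TABLE then shrink disk usage instead of merely freeing
# space for reuse inside the shared ibdata files.
[mysqld]
innodb_file_per_table = 1
```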
well,
On Nov 29, 2011, at 11:50 AM, Claudio Nanni wrote:
This is not to say that MySQL could not have more of the file management
features. For example, the ability to add or remove datafiles on the fly and
the
ability to detach tablespaces as collections of tables.
That's where MySQL (read
Hi... and some more stuff inline.
Well, I would not base my database design on luck and playing. There
should be good awareness
of what the features do and what would be the plan to deal with file
allocations should the database
grow, shrink or somerset
if you are working many
Log sequence in the future means that, for whatever reason, the update in the
data pages
happened but the update in InnoDB's log didn't. InnoDB by itself,
without backups, is not
protected against media failures, and this happens to be just that.
Innodb_force_recovery is
not really a
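For context on the setting just mentioned: innodb_force_recovery is a salvage tool, not a repair. A sketch of its use:

```ini
# my.cnf -- temporary, for data extraction only. Start at 1 and raise
# only as far as needed to get the server up; at level 4 and above
# InnoDB may permanently damage data. Dump the tables, then rebuild
# the instance from the dump.
[mysqld]
innodb_force_recovery = 1
```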
Hello, comments inline. Regards, Karen.
I checked in the meantime, and it does not make a truly consistent backup
of MyISAM - it locks all tables - yes, ALL tables - and then copies the
files. Given that MyISAM doesn't support transactions, that means that any
transactions (that
Hi! Inline, again.
On Jun 9, 2011, at 4:58 AM, Johan De Meersman wrote:
True, but I have never seen an application that checks for inconsistency in
its tables. Making sure all users have stopped using the app ensures no
in-flight transactions, and then you have a consistent database -
Why, if they shut down the slave, it will be quite consistent. Only this
technique is not exactly 21st-century; it is more like 30 years old.
Placing locks is about the same as shutting it down.
On Mar 22, 2011, at 6:01 AM, Johan De Meersman wrote:
You are assuming that the
Hi,
A statement like 'I need to back up a 5T database' is not a backup strategy.
It is an intention. There are some specifics that have to be determined to work
out a strategy. Going from there, the backup solution can be chosen. The
examples of questions one typically asks when
X.J. Wang wrote:
Also, very important but often not asked:
1) What's my budget?
On Mon, Mar 21, 2011 at 14:24, Karen Abgarian a...@apple.com wrote:
Hi,
A statement like 'I need to back up a 5T database' is not a backup
strategy. It is an intention. There are some specifics that have
, 0x390F10A00, 230912, 0x0F69B4007BB2BC58) = 0
/1: kaio(AIOWRITE, 261, 0x391024A00, 91648, 0x0F6D3A007BB2BEE8) = 0
Thx
On Fri, Mar 18, 2011 at 6:00 AM, Karen Abgarian a...@apple.com wrote:
Hi,
For the actual question, I agree with the points Johan mentioned. MySQL,
to my knowledge, does
Hi,
For the actual question, I agree with the points Johan mentioned. MySQL, to
my knowledge, does not have an option to use raw devices for binary logs. Even
if it had it, it would not have the benefits Chao is seeking. There is indeed
a tradeoff between losing transactions and
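The tradeoff between losing transactions and throughput mentioned above is controlled by two settings. A sketch of the fully durable configuration:

```ini
# my.cnf -- safest (and slowest) durability settings.
# sync_binlog=1 fsyncs the binary log at every commit; 0 leaves
# flushing to the OS and can lose the last transactions on a crash.
# innodb_flush_log_at_trx_commit=1 does the same for the InnoDB
# redo log; 2 or 0 trade durability for speed.
[mysqld]
sync_binlog                    = 1
innodb_flush_log_at_trx_commit = 1
```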
Interestingly, this page does not say anything about MySQL Enterprise
Backups.
On Mar 15, 2011, at 8:48 AM, a.sm...@ukgrid.net wrote:
Hi,
there is a lot of info on different backup methods here:
http://dev.mysql.com/doc/refman/5.1/en/backup-methods.html
For example, for incremental