XtraBackup can handle both InnoDB and MyISAM in
a consistent way while minimizing lock time on
MyISAM tables ...
http://www.percona.com/doc/percona-xtrabackup/2.1/
--
Hartmut Holzgraefe, Principal Support Engineer (EMEA)
SkySQL - The MariaDB Company | http://www.skysql.com/
--
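For context, a typical Percona XtraBackup 2.1 run goes through the innobackupex wrapper; the user, password and target directory below are placeholder assumptions, not values from the post, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Sketch of an XtraBackup 2.1 backup cycle; credentials and paths are
# placeholder assumptions. Commands are echoed, not run.
BACKUP_DIR=/var/backups/mysql
CMD="innobackupex --user=root --password=XXXX $BACKUP_DIR"
echo "$CMD"
# The copied files are only consistent after the redo log is applied:
echo "innobackupex --apply-log $BACKUP_DIR/TIMESTAMP_DIR"
```

The second, --apply-log pass is what makes the backup usable; until then the copied files reflect an in-flux server.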
> ... each database has a mixture of MyISAM- and
> InnoDB-tables. A backup of this mix does not seem to be easy. Until now it
> was dumped using "mysqldump --opt -u root --databases mausdb ...". What I
> understand until now is that --opt is not necessary because it is default. It
> incl
Hi,
I've already been reading the documentation the whole day, but I'm still
confused and unsure what to do.
We have two databases which are important for our work, so both are backed up
hourly. Now I noticed that each database has a mixture of MyISAM and
InnoDB tables. A backup of this mix does not seem to be easy.
*please* don't use reply-all on mailing-lists
the list by definition distributes your message
On 30.06.2014 13:14, Antonio Fernández Pérez wrote:
> Thanks for your reply. Theoretically, fragmented tables don't offer the best
> performance with the InnoDB engine;
> is that correct or not?
practical
Hi Johan,
Thanks for your reply. Theoretically, fragmented tables don't offer the best
performance with the InnoDB engine; is that correct or not?
I don't know whether this is a problem; it's a doubt/question for me. I'm not
sure if it's atypical behaviour.
Thanks in advance.
Regards,
Antonio.
- Original Message -
> From: "Antonio Fernández Pérez"
> Subject: Re: Optimizing InnoDB tables
>
> I would like to know, if possible, why some fragmented tables appear
> fragmented again after executing an ANALYZE TABLE command on them.
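Worth noting here: ANALYZE TABLE only refreshes index statistics and never reclaims space, so Data_free is unchanged afterwards; a rebuild is what actually defragments. A minimal sketch, with placeholder database and table names:

```shell
#!/bin/sh
# ANALYZE TABLE refreshes index statistics only; it never rebuilds the table,
# so Data_free stays as it was. OPTIMIZE TABLE does rebuild it (for InnoDB it
# is mapped to a table copy). Names below are placeholders.
SQL='ANALYZE TABLE mydb.mytable;  -- statistics only
OPTIMIZE TABLE mydb.mytable; -- rebuild: this is what defragments'
echo "$SQL"
# Real use (not run here):  mysql -u root -p -e "$SQL"
```

Also note that a small, stable amount of free space reappearing after a rebuild is normal for InnoDB and not worth chasing.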
Simple question
Hello Antonio,
On 6/27/2014 9:31 AM, Antonio Fernández Pérez wrote:
Hi Reindl,
Thanks for your attention.
Following up on the previous mail, I have checked my MySQL configuration:
innodb_file_per_table is enabled, so I think this parameter does not
directly affect fragmented tables in InnoDB (in this case).
I would like to know, if possible, why the tables appear fragmented again
after executing an ANALYZE TABLE command.
On 27.06.2014 09:48, Antonio Fernández Pérez wrote:
> Thanks for your reply. I have checked the link and my configuration.
> innodb_file_per_table is enabled and a set of files appears in the data
> directory for each table.
>
> Any ideas?
ideas for what?
* which files don't get shrunk (ls -lha)
Hi Andre,
Thanks for your reply. I have checked the link and my configuration.
innodb_file_per_table is enabled and a set of files appears in the data
directory for each table.
Any ideas?
Thanks in advance.
Regards,
Antonio.
Have a look at this:
https://rtcamp.com/tutorials/mysql/enable-innodb-file-per-table/
--
Andre Matos
andrema...@mineirinho.org
On Jun 25, 2014, at 2:22 AM, Antonio Fernández Pérez
wrote:
> Hi again,
>
> I have enabled innodb_file_per_table (its value is ON).
> I don't have clear what I sho
- Original Message -
> From: "Antonio Fernández Pérez"
> Subject: Re: Optimizing InnoDB tables
>
> I have enabled innodb_file_per_table (its value is ON).
> It's not clear to me what I should do ...
Then all new tables will be created in their own tablespace files.
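One detail the replies circle around: enabling innodb_file_per_table only affects tables created afterwards; existing tables stay inside the shared ibdata1 until each one is rebuilt. A sketch that generates the rebuild statements ("mydb" is a placeholder database):

```shell
#!/bin/sh
# Once innodb_file_per_table is ON, existing tables stay in the shared
# ibdata1 until rebuilt. This emits one rebuild statement per table name
# read on stdin; "mydb" in the commented command is a placeholder.
gen_rebuilds() {
  while read -r t; do
    printf 'ALTER TABLE `%s` ENGINE=InnoDB;\n' "$t"
  done
}
# Real use (not run here):
#   mysql -NBe 'SHOW TABLES' mydb | gen_rebuilds | mysql mydb
printf 'orders\nusers\n' | gen_rebuilds
```

Note that ibdata1 itself never shrinks, even after the rebuilds; the moved tables simply stop growing it.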
Hi again,
I have enabled innodb_file_per_table (its value is ON).
It's not clear to me what I should do ...
Thanks in advance.
Regards,
Antonio.
Hello Reindl,
On 6/24/2014 3:29 PM, Reindl Harald wrote:
Am 24.06.2014 21:07, schrieb shawn l.green:
It makes a huge difference if the tables you are trying to optimize have their
own tablespace files or if they live
inside the common tablespace.
http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
Am 24.06.2014 21:07, schrieb shawn l.green:
> It makes a huge difference if the tables you are trying to optimize have
> their own tablespace files or if they live
> inside the common tablespace.
>
> http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
Hello Antonio,
On 6/24/2014 7:03 AM, Antonio Fernández Pérez wrote:
Hi list,
I was trying to optimize the InnoDB tables. I have executed the following query
to detect which tables are fragmented.
SELECT TABLE_SCHEMA,TABLE_NAME
FROM TABLES WHERE TABLE_SCHEMA NOT IN ("information_schema","mysql") AND
Data_free > 0
Hi Wagner,
I'm running
Percona Server for MySQL 5.5.30, 64-bit. No, I haven't tried to execute
ALTER TABLE (does ANALYZE TABLE do that for InnoDB tables, or not?).
Thanks in advance.
Regards,
Antonio.
Hi Antonio, how are you?
What's the mysql version you're running? Have you tried to ALTER TABLE x
ENGINE=InnoDB?
-- WB, MySQL Oracle ACE
> On 24/06/2014, at 08:03, Antonio Fernández Pérez
> wrote:
>
> Hi list,
>
> I was trying to optimize the InnoDB tables. I
Hi list,
I was trying to optimize the InnoDB tables. I have executed the following query
to detect which tables are fragmented.
SELECT TABLE_SCHEMA,TABLE_NAME
FROM TABLES WHERE TABLE_SCHEMA NOT IN ("information_schema","mysql") AND
Data_free > 0
After that, I have
Hi,
Thanks for your replies.
In our case, we can't implement a NoSQL solution. That would require modifying
and checking our whole application and all services (including FreeRADIUS,
which I'm not sure is compatible).
Andrew, I have heard about people who have a lot more data than I do. I
know that MySQL su
What kind of queries is this table serving? 8GB is not a huge amount of
data at all and IMO it's not enough to warrant sharding.
On Thu, May 15, 2014 at 1:26 PM, Antonio Fernández Pérez <
antoniofernan...@fabergroup.es> wrote:
>
>
>
> Hi,
>
> I have in my server database some tables that ar
2014-05-19 11:49 GMT+02:00 Johan De Meersman :
>
> - Original Message -
> > From: "Manuel Arostegui"
> > Subject: Re: Big innodb tables, how can I work with them?
> >
> > noSQL/table sharding/partitioning/archiving.
>
> I keep wondering how
- Original Message -
> From: "Manuel Arostegui"
> Subject: Re: Big innodb tables, how can I work with them?
>
> noSQL/table sharding/partitioning/archiving.
I keep wondering how people believe that NoSQL solutions magically don't need
RAM to work. Nearly
2014-05-15 14:26 GMT+02:00 Antonio Fernández Pérez <
antoniofernan...@fabergroup.es>:
>
>
>
> Hi,
>
> I have in my server database some tables that are too big and produce
> some slow queries, even with correct indexes created.
>
> For my application, it's necessary to have all the data be
Am 15.05.2014 14:26, schrieb Antonio Fernández Pérez:
> I have in my server database some tables that are too big and produce
> some slow queries, even with correct indexes created.
>
> For my application, it's necessary to have all the data because we make an
> authentication process with RA
Hi,
I have in my server database some tables that are too big and produce
some slow queries, even with correct indexes created.
For my application, it's necessary to have all the data because we perform an
authentication process with RADIUS users (AAA protocol) to determine if one
user can
fragmentation=$(($datafree * 100 / $datalength))
echo "$database.$name is $fragmentation% fragmented."
mysql -u "$username" -p"$password" -NBe "OPTIMIZE TABLE $name;" "$database"
fi
done
done
I have run it and it reports that several of my InnoDB ta
> ... read name engine version rowformat rows avgrowlength datalength
> maxdatalength indexlength datafree autoincrement createtime updatetime checktime
> collation checksum createoptions comment ; do if [ "$datafree" -gt 0 ]
> ; then fragmentation=$(($datafree * 100 / $datalength)) echo
> "$database.$name is $fragmentation% fragmented."
> mysql -u "$username" -p"$password" -NBe
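Pulling the quoted fragments together, a cleaned-up reconstruction of that script looks roughly like the following; the credentials and the exact loop wiring are assumptions filled in from the visible pieces, and the mysql calls are left commented out so the sketch runs anywhere:

```shell
#!/bin/sh
# Reconstruction sketch: walk SHOW TABLE STATUS output and OPTIMIZE any
# table with Data_free > 0. Credentials are placeholders; the mysql calls
# are commented out so only the arithmetic actually runs here.
username=root; password=XXXX                 # placeholders
frag_pct() { echo $(( $1 * 100 / $2 )); }    # args: datafree datalength
# mysql -u "$username" -p"$password" -NBe 'SHOW TABLE STATUS' "$database" |
# while read name engine version rowformat rows avgrowlength datalength \
#            maxdatalength indexlength datafree autoincrement createtime \
#            updatetime checktime collation checksum createoptions comment; do
#   if [ "$datafree" -gt 0 ]; then
#     echo "$database.$name is $(frag_pct "$datafree" "$datalength")% fragmented."
#     mysql -u "$username" -p"$password" -NBe "OPTIMIZE TABLE $name;" "$database"
#   fi
# done
frag_pct 200 800    # prints 25: 200 bytes free in an 800-byte data file
```

The percentage is just Data_free relative to Data_length, so a small table with a little free space can report alarmingly high fragmentation without it mattering.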
> Subject: RE: problems with INNODB tables
>
> Thanks for your answer. I read http://mysql.rjweb.org/doc.php/memory
> where it tells you to do one thing if using MyISAM tables and another
> if using InnoDB tables. We are using both. Any suggestions?
> Thanks for any help.
...
What is the database used for?
On Wed, Apr 25, 2012 at 5:14 AM, Malka Cymbalista <
malki.cymbali...@weizmann.ac.il> wrote:
> Thanks for your answer. I read http://mysql.rjweb.org/doc.php/memory where it
> tells you to do one thing if using MyISAM tables and another if
> using InnoDB tables.
Thanks for your answer. I read http://mysql.rjweb.org/doc.php/memory where it
tells you to do one thing if using MyISAM tables and another if using InnoDB
tables. We are using both. Any suggestions?
Thanks for any help.
Malki Cymbalista
Webmaster, Weizmann Institute of Science
malki.cymbali
ject: Re: problems with INNODB tables
>
> Weird, I use InnoDB a lot with no issues; I even bravely kill the mysql
> process with pkill -9 -f mysql
>
> I suppose it's the way Drupal is programmed.
> PHP opens and closes database connections each time a webpage with db
> access is
> ... maybe the processes are not finishing
> normally and are just hanging around.
>
> 3. The machine is a web server and in the last few months we are
> moving over to drupal 7 to build our sites and Drupal 7 requires INNODB
> tables. Sometimes, when we restart MySQL using the commands
On 15.03.2012 17:31, Malka Cymbalista wrote:
> We are running MySQL version 5.0.45 on a Linux machine. Most of our tables
> are MyISAM but we have recently installed Drupal 7 and Drupal 7 requires
> InnoDB tables. Every now and then when we restart MySQL using the commands
>
On 25.11.2011 14:20, Machiel Richards - Gmail wrote:
> Just a quick question relating to the use of transactions on innodb tables.
>
> We are doing some archiving on some innodb tables, however there seems to be
> some issues somewhere in the
> process with data not being upda
Hi All
Just a quick question relating to the use of transactions on
innodb tables.
We are doing some archiving on some innodb tables, however
there seems to be some issues somewhere in the process with data not
being updated accordingly.
We would like to make use
On Mon, Jan 24, 2011 at 6:43 PM, Gavin Towey wrote:
> If you show the EXPLAIN SELECT .. output, and the table structure, someone
> will be able to give a more definite answer.
>
>
Thanks for the reply Gavin. I actually did place this info in my very first
message on this thread, along with my bas
server doing simple inner join of two InnoDB
tables
On Mon, Jan 24, 2011 at 2:20 PM, Kendall Gifford wrote:
>
>
> On Mon, Jan 24, 2011 at 3:40 AM, Joerg Bruehe wrote:
>
>> Hi everybody!
>>
>>
>> Shawn Green (MySQL) wrote:
>> > On 1/21/2011 14:21, Kend
ryone, I've got a database on an old Fedora Core 4 server
>> >> running
>> >> MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
>> >> has
>> >> just two (InnoDB) tables:
>> >>
>> >> messages (app
(mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
> >> has
> >> just two (InnoDB) tables:
> >>
> >> messages (approx 2.5 million records)
> >> recipients (approx 6.5 million records)
> >>
> >> [[ ... see the original post
Hi everybody!
Shawn Green (MySQL) wrote:
> On 1/21/2011 14:21, Kendall Gifford wrote:
>> Hello everyone, I've got a database on an old Fedora Core 4 server
>> running
>> MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
>> has
>> ju
The database in question has
>> just two (InnoDB) tables:
>>
>> messages (approx 2.5 million records)
>> recipients (approx 6.5 million records)
>>
>> These track information about email messages. Each message "has many"
>> recipient records. The s
On 1/21/2011 14:21, Kendall Gifford wrote:
Hello everyone, I've got a database on an old Fedora Core 4 server running
MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
just two (InnoDB) tables:
messages (approx 2.5 million records)
recipients (approx 6.5 million re
4 server running
> MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
> just two (InnoDB) tables:
>
> messages (approx 2.5 million records)
> recipients (approx 6.5 million records)
>
> These track information about email messages. Each message "
Hello everyone, I've got a database on an old Fedora Core 4 server running
MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
just two (InnoDB) tables:
messages (approx 2.5 million records)
recipients (approx 6.5 million records)
These track information about
> The problem is I don't have any command line access, just direct MySQL
> access to the database tables.
>
>
What's wrong with mysqldump?
--
bEsT rEgArDs| "Confidence is what you have before you
tomasz dereszynski | understand the problem." -- Woody Allen
Quoting Tompkins Neil :
The problem is I don't have any command line access, just direct MySQL
access to the database tables.
I don't know XtraBackup, but if that's not an option you can just use
mysqldump. It can be run from a remote server against your DB server,
just using MySQL network access.
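The remote-dump idea in that reply can be sketched as follows; the host, user and database names are placeholders, and the command is echoed rather than executed:

```shell
#!/bin/sh
# Sketch: with only MySQL network access, the dump can run from any remote
# machine that can reach the server. Host, user and db names are placeholders.
CMD="mysqldump -h db.example.com -u neil -p --single-transaction mydb"
echo "$CMD > mydb.sql"
# --single-transaction gives a consistent snapshot of the InnoDB tables
# without holding locks for the duration of the dump.
```

This fits the shared-hosting case exactly: no shell on the server is needed, only a MySQL account allowed to connect remotely.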
>> Would really appreciate some help or suggestions on this please, if anyone
>> can assist ?
>>
>> Regards
>> Neil
>>
>> -- Forwarded message --
>> From: Tompkins Neil
>> Date: Tue, Oct 12, 2010 at 5:45 PM
>> Subject: Bac
, 2010 at 5:45 PM
> Subject: Backing up the InnoDB tables
> To: "[MySQL]"
>
>
> Hi
>
> On a shared MySQL server with access just to my own database, what are the
> recommended backup methods and strategies for the InnoDB tables?
>
> Cheers
> Neil
>
--
Thanks
Suresh Kuna
MySQL DBA
Would really appreciate some help or suggestions on this please, if anyone
can assist ?
Regards
Neil
-- Forwarded message --
From: Tompkins Neil
Date: Tue, Oct 12, 2010 at 5:45 PM
Subject: Backing up the InnoDB tables
To: "[MySQL]"
Hi
On a shared MySQL server w
Hi
On a shared MySQL server with access just to my own database, what are the
recommended backup methods and strategies for the InnoDB tables?
Cheers
Neil
The book “High Performance MySQL” states the following about using LVM
snapshots with innodb tables: “All innodb files (InnoDB tablespace files
and InnoDB transaction logs) must be on a single logical volume
(partition).” Here is portion of a df command performed on one of our
hosts:
/dev
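The single-volume requirement from "High Performance MySQL" can be checked mechanically: every InnoDB file (ibdata*, *.ibd and the ib_logfile* transaction logs) must resolve to the same device. A sketch; the real paths would be the datadir and innodb_log_group_home_dir, and "/" is used twice here only so the sketch runs anywhere:

```shell
#!/bin/sh
# For a consistent LVM snapshot, the InnoDB data files and transaction logs
# must live on one logical volume. Substitute the real datadir and
# innodb_log_group_home_dir for the two placeholder paths below.
DATADIR=/
LOGDIR=/
df -P "$DATADIR" "$LOGDIR" | awk 'NR > 1 { print $1 }' | sort -u | wc -l
# prints 1 when both paths resolve to the same device/volume
```

If the count is greater than 1, a snapshot of one volume alone cannot be consistent, which is exactly the situation the df output above is being inspected for.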
... calculate the total size of all InnoDB tables?
> --
> Ryan Schwartz
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>
See Thread at: http://www.techienuggets.com/Detail?tx=48414 Posted on behalf of
a User
I have a MySQL 5.0 InnoDB database that's about 1 GB in size so it's still
pretty tiny. Is there any performance enhancement maintenance that should be
done on the tables? I do a weekly Optimize through the M
hdparm -Tt /dev/sdX ?
Ian Simpson wrote:
That's pretty much what I've been doing to confirm that the drive is running
at 100% bandwidth.
What I'd like is something that just gives the bandwidth of the device
in terms of Mb/s: you can probably work it out using that iostat
command, seeing how mu
On Fri, 2008-06-13 at 17:43 +0530, Alex Arul
> >> Lurthu
> >> >> >> wrote:
> >> >> >> > Please check if the my.cnf configurations to be
> >> >the
> >> >> >> same.
> >
>
>> >> >> > What are your configuration parameters in terms
>> >of
>> >> >> innodh flush log
>> >> >> > trx commit , bin logging, sync binlog and innodb
>> >> >
gt; >> > trx commit , bin logging, sync binlog and innodb
> >> >> unsafe for binlog ?
> >> >> >
> >> >> > If the systems have raid, check if the BBWC is
> >> >>
If the systems have raid, check if the BBWC is
>> >> enabled on the new host
>> >> > and WB is enabled.
>> >> >
>> >> >
>> >> > On Fri,
... Ian Simpson
> >> <[EMAIL PROTECTED]>
> >> > wrote:
> >> > Hi list,
> >> >
> >> > Have a bit of a mystery here that I hope
> >>
>
>> > Have a bit of a mystery here that I hope
>> somebody can help
>> > with.
>> >
>> > I've just got a new server that I'm using as
>>
?
> > >
> > > If the systems have raid, check if the BBWC is
> > enabled on the new host
> > > and WB is enabled.
> > >
> > >
> > > On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson
> > <[EMAIL PROTECTED]>
> > > wrote:
>
... in terms of hardware it's pretty much
> identical, if not
> > slightly
> > superior to an existing server already in
> production use.
> >
> >
>> > wrote:
>> > Hi list,
>> >
>> > Have a bit of a mystery here that I hope somebody can help
>> > with.
>> >
>> > I've just got a new server that I'm using as a dedicated MySQL
... of hardware it's pretty much identical, if not
> > slightly
> > superior to an existing server already in production use.
> >
> > It's having a real struggle processing INSERT statements to
> > InnoDB
> > tables; it
In terms of hardware it's pretty much identical, if not
> slightly
> superior to an existing server already in production use.
>
> It's having a real struggle processing INSERT statements to
> InnoDB
> tables; it'
... identical, if not slightly
> superior to an existing server already in production use.
>
> It's having a real struggle processing INSERT statements to InnoDB
> tables; it's maxing out at around 100 inserts per second, even with very
> simple two column tables (inserts in
... having a real struggle processing INSERT statements to InnoDB
tables; it's maxing out at around 100 inserts per second, even with very
simple two column tables (inserts into MyISAM tables run fine).
Meanwhile, the original server can happily process around 1000
inserts/sec into an id
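One common explanation for a hard ~100 inserts/s ceiling, consistent with the BBWC questions later in this thread: with innodb_flush_log_at_trx_commit=1 and no battery-backed write cache, every autocommitted INSERT waits for a real disk flush, so commit rate is bounded by the drive. A back-of-the-envelope sketch, with the drive speed as an assumption:

```shell
#!/bin/sh
# With innodb_flush_log_at_trx_commit=1 and no write cache, each commit
# costs at least one disk flush. A 7200 rpm drive rotates 120 times/s,
# the right order of magnitude for the ~100 inserts/s observed.
RPM=7200                    # assumed drive speed
echo $(( RPM / 60 ))        # rough upper bound on synchronous commits/s
```

That would also explain why MyISAM inserts "run fine" (no per-commit log flush) and why the older server with a battery-backed controller sustains ~1000/s.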
LOCK TABLES `members_orders_items` WRITE;
INSERT INTO `members_orders_items` VALUES (137,750,'54.00',25,45); //<--
Here should be an error ?
UNLOCK TABLES;
Thank you for any kind help !!
Matt.
--
View this message in context:
http://www.nabble.com/InnoDB-tables-but-no-FK-constraints-tp17364156p17364156.html
Sent from the MySQL
-
> >From: Sebastian Mendel [mailto:[EMAIL PROTECTED]
> >Sent: Wednesday, April 23, 2008 9:27 AM
> >To: Dobromir Velev
> >Cc: mysql@lists.mysql.com
> >Subject: Re: Symlink InnoDB tables without stoping MySQL
> >
> >Dobromir Velev schrieb:
> >> Hi,
>
-Original Message-
>From: Sebastian Mendel [mailto:[EMAIL PROTECTED]
>Sent: Wednesday, April 23, 2008 9:27 AM
>To: Dobromir Velev
>Cc: mysql@lists.mysql.com
>Subject: Re: Symlink InnoDB tables without stoping MySQL
>
>Dobromir Velev schrieb:
>> Hi,
>> What I'm try
Hi,
Thanks for pointing it out - I just found the following commands.
ALTER TABLE tbl_name DISCARD TABLESPACE;
ALTER TABLE tbl_name IMPORT TABLESPACE;
I will test it and let you know if it works
Thanks
Dobromir Velev
On Wednesday 23 April 2008 16:27, Sebastian Mendel wrote:
> Dobromir Velev
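The two statements just found slot around the file move like this; the table name is the poster's "test" example, the target path is a placeholder, and note that in this MySQL generation IMPORT TABLESPACE expects an .ibd produced by the same server with the same table definition:

```shell
#!/bin/sh
# Sequence sketch for relocating one .ibd file. The target path is a
# placeholder; IMPORT TABLESPACE here assumes an .ibd created by this
# same server and table definition. Steps are echoed, not executed.
SQL_DISCARD='ALTER TABLE test DISCARD TABLESPACE;'
SQL_IMPORT='ALTER TABLE test IMPORT TABLESPACE;'
echo "$SQL_DISCARD"
echo '# mv /var/lib/mysql/test/test.ibd /disk2/test.ibd'
echo '# ln -s /disk2/test.ibd /var/lib/mysql/test/test.ibd'
echo "$SQL_IMPORT"
```

Between DISCARD and IMPORT the table is unusable, so this is best done in a maintenance window.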
Dobromir Velev schrieb:
Hi,
What I'm trying to do is to create a new InnoDB table on a different disk and
symlink it to an existing database.
I have innodb_file_per_table turned on and here is how I tried to do it
mysql> \u test
mysql> create table test (...) ENGINE = 'InnoDB';
mysql>\q
move the test.ibd file to the ...
Hi,
What I'm trying to do is to create a new InnoDB table on a different disk and
symlink it to an existing database.
I have innodb_file_per_table turned on and here is how I tried to do it
mysql> \u test
mysql> create table test (...) ENGINE = 'InnoDB';
mysql>\q
move the test.ibd file to the
I have a question regarding the innodb_file_per_table configuration
option. We currently do not have this enabled, so our ibdata1 file is huge.
Is it recommended that we have this configured to store the tables in
their own files? What are the performance implications of doing this,
especial
"Error Code : 1214
The used table type doesn't support FULLTEXT indexes"
So, what is the deal? Am I missing something?
And if I can't use boolean searches on InnoDB tables with MySQL 5.0.18,
then WHEN will I be able to?
In the mean time, what is the best way to generate this e
Check out this thread:
http://www.sitepoint.com/forums/showpost.php?p=3357628&postcount=2
2007/7/17, [EMAIL PROTECTED] <[EMAIL PROTECTED]>:
Hello,
we have a MySQL DBMS with a lot of databases. Most of them are using
MyISAM tables but three databases use InnoDB and MyISAM tables.
What is the b
Hello,
we have a MySQL DBMS with a lot of databases. Most of them use MyISAM
tables but three databases use InnoDB and MyISAM tables.
What is the best method to get a consistent ONLINE backup of both table types?
Thanks,
Spiker
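Two mysqldump modes cover the cases in this thread; the database names below are placeholders. --single-transaction is only consistent for the InnoDB tables, so a mixed InnoDB+MyISAM set needs global locking (or a tool like XtraBackup, as suggested elsewhere in this archive):

```shell
#!/bin/sh
# Database names are placeholders; commands are echoed, not executed.
INNODB_ONLY="mysqldump --single-transaction --databases shop"
MIXED="mysqldump --lock-all-tables --databases shop forum"
echo "$INNODB_ONLY"   # consistent snapshot, no locks, InnoDB tables only
echo "$MIXED"         # global read lock held for the whole dump
```

The trade-off is exactly the one the question raises: consistency across both engines costs lock time, which is why MyISAM in the mix makes online backups awkward.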
Do you have to do something special with InnoDB tables to accept
various character sets like accented, European characters? Using the
default, these accented characters come out as garbage.
Hi Dusan,
You replied to a forum post of mine on mysql.com yeah? ;)
I have tried adjusting the max_allowed_packet on both the server and
client. Both are set to 1G now (apparently the highest value
accepted) even though each row is no larger than 100M at very most.
I am thinking this may h
-Original Message-
From: Dušan Pavlica [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 19, 2007 5:08 AM
To: Hartleigh Burton
Cc: MySql
Subject: {Spam?} Re: mysqldump problem with large innodb tables...
Try to look for Lost connection error in MySQL manual and it can give your some
hints like http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html
Try to look for Lost connection error in MySQL manual and it can give
your some hints like
http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html
Dusan
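The packet-too-large page boils down to one variable that must be raised on both the server and the client doing the dump; 1G is the protocol's hard ceiling, matching the value the poster later reports. A sketch (statements echoed, not executed):

```shell
#!/bin/sh
# max_allowed_packet caps a single row/query packet; it must be big enough
# on the server AND on the dumping client. 1G is the hard maximum.
LIMIT=$(( 1024 * 1024 * 1024 ))       # 1G in bytes
echo "SET GLOBAL max_allowed_packet = $LIMIT;"
echo "mysqldump --max_allowed_packet=1G ..."   # client side of the same limit
```

Raising only the server side is the classic mistake: the dump then still dies on the first row larger than the client's own limit.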
Hartleigh Burton wrote:
Hi All,
I have a database which is currently at ~10GB in its test phase. It
contains uncompressed audio
My backups use mysqldump, but they have always just worked. I would suggest you
try to make a minimal test case that can reproduce the problem and submit it as
a bug report, if possible.
I'm not familiar with the error message off-hand, but the InnoDB manual is large
and complete, so I'm sure
Ok... this error has just started popping up in my .err log file...
070618 14:31:10 InnoDB: ERROR: the age of the last checkpoint is
237821842,
InnoDB: which exceeds the log group capacity 237813351.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
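The two numbers in the error above tell the story: the redo generated since the last checkpoint (the "age") outgrew the combined redo log capacity, so the logs are too small for this BLOB-heavy workload. A sizing sketch, with the new sizes as illustrative assumptions:

```shell
#!/bin/sh
# The age of the last checkpoint must stay below the combined log capacity;
# here it did not, so the logs need to grow. Numbers from the error message.
AGE=237821842
CAPACITY=237813351
[ "$AGE" -gt "$CAPACITY" ] && echo "log group too small"
# Fix sketch: raise the combined size well past the age, e.g. two 256M files
# (set innodb_log_file_size in my.cnf, shut down cleanly, move the old
# ib_logfile* files away, restart so the server recreates them):
echo "$(( 2 * 256 ))M combined"
```

The clean-shutdown-then-remove-logs dance matters in this server generation: changing innodb_log_file_size without recreating the files leaves InnoDB refusing to start.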
I'm out of ideas right now. I don't actually use mysqldump that much and have
never had this happen. Hopefully someone else on the mailing list can help, or
perhaps you can try #mysql on Freenode IRC.
Baron
Hartleigh Burton wrote:
No there is no indication of that at all. The server service
No there is no indication of that at all. The server service appears
to be in perfect order, does not drop/restart and my other
applications continue to function without any interruption.
It appears as if the mysqldump connection to the server is
interrupted or maybe there is something in r
Is there any indication that the mysqldump crash is killing the server and
causing it to restart? For example, "ready for connections" notifications just
after you try a mysqldump?
Hartleigh Burton wrote:
Hmm, no, there are no new errors in there. Nothing out of the ordinary,
that's for sure. Just notifications that MySQL has started and is accepting
connections etc.
Hmm, no, there are no new errors in there. Nothing out of the
ordinary, that's for sure. Just notifications that MySQL has started
and is accepting connections etc. :|
On 18/06/2007, at 11:06 AM, Baron Schwartz wrote:
How about in c:\Program Files\MySQL\MySQL Server 5.0\data
\.err?
Cheers
How about in c:\Program Files\MySQL\MySQL Server 5.0\data\.err?
Cheers
Baron
Hartleigh Burton wrote:
Hi Baron,
There are no MySQL errors in the event viewer.
On 18/06/2007, at 10:36 AM, Baron Schwartz wrote:
Hi Hartleigh,
Hartleigh Burton wrote:
Hi All,
I have a database which is currentl
Hi Baron,
There are no MySQL errors in the event viewer.
On 18/06/2007, at 10:36 AM, Baron Schwartz wrote:
Hi Hartleigh,
Hartleigh Burton wrote:
Hi All,
I have a database which is currently at ~10GB in its test phase.
It contains uncompressed audio and is expected to reach 1.5TB
in
Hi Hartleigh,
Hartleigh Burton wrote:
Hi All,
I have a database which is currently at ~10GB in its test phase. It
contains uncompressed audio and is expected to reach 1.5TB in no time
at all. I am just running some backup tests and I have been having lots
of problems creating an accurate backup.
Hi All,
I have a database which is currently at ~10GB in its test phase. It
contains uncompressed audio and is expected to reach 1.5TB in no
time at all. I am just running some backup tests and I have been
having lots of problems creating an accurate backup.
I have tried both MySQL
> Does this option only affect MyISAM performance, or does it also affect
> performance of operations on InnoDB tables?
key_buffer_size has nothing to do with InnoDB tables.
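That answer is the key point for mixed-engine tuning: key_buffer_size caches MyISAM index blocks only, and the InnoDB counterpart is innodb_buffer_pool_size, which caches both data and index pages. A sketch; the sizes are illustrative, not recommendations:

```shell
#!/bin/sh
# key_buffer_size: MyISAM index cache only. innodb_buffer_pool_size: the
# InnoDB cache for data and index pages. Sizes below are illustrative.
KEY_BUF=$(( 256 * 1024 * 1024 ))     # 256M for MyISAM indexes
echo "SET GLOBAL key_buffer_size = $KEY_BUF;"
echo "innodb_buffer_pool_size = 1G   # my.cnf; not changeable at runtime here"
```

On a server running both engines the two caches compete for the same RAM, so sizing one without the other is how "do one thing for MyISAM and another for InnoDB" advice goes wrong.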
... does this option only affect MyISAM performance, or does it also affect
performance of operations on InnoDB tables?
On 2007-02-19 abhishek jain wrote:
> I want to copy the data files of an InnoDB database. Is it possible? I
> mean, can I just copy the data files the way we do for MyISAM tables?
If you mean for a daily backup while the server is running: No!
You often end up with corrupted tables doing that
Hi,
I want to copy the data files of an InnoDB database. Is it possible? I
mean, can I just copy the data files the way we do for MyISAM tables?
Thanks,
Abhishek jain
- Original Message -
From: "Vitaliy Okulov" <[EMAIL PROTECTED]>
To:
Sent: Monday, January 22, 2007 7:27 PM
Subject: low-priority-updates and innodb tables
> Hello, mysql.
>
> Hi all.
> I want to ask about low-priority-updates and innodb tables. Does
> l
Hello, mysql.
Hi all.
I want to ask about low-priority-updates and InnoDB tables. Does
low-priority-updates=1 affect the priority of SELECT or UPDATE queries on
InnoDB-type tables?
--
Best regards,
Vitaliy mailto:[EMAIL PROTECTED]
Heikki
thanks for filing that report. You can close it again.
I had a look at the create-table statements for these 3 tables.
As it turns out, the person who initially created those tables had a
create statement like "create table ... comment='InnoDB free: 6144 kB'"
for some tables.
All my
Dominik,
I have now filed:
http://bugs.mysql.com/bug.php?id=23211
about this. Is there any pattern that could explain why the double print
is only in those 3 tables? What values does it print for the tables
where the printout is wrong, and what values does it print for ok tables?
Best regar