simply saw table_open_cache
using the full value of 3000 and once again the same behaviour.
When running "Show open tables" though, we find that the amount of open
tables is only 283 and a count of all tables returned that there is only 381 tables on
the server.
However when
Hi all
I am hoping this mail finds all well.
I have a question about mysql open tables and open_table_cache which I do
not seem to be finding the answer to on the internet.
We are busy investigating an issue on a server where we have erratic
behaviour.
During the times
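A quick way to compare the cache setting against actual usage (a minimal
sketch; the variable and status names are standard MySQL, but adjust the
schema filter to your server):

  SHOW VARIABLES LIKE 'table_open_cache';
  SHOW GLOBAL STATUS LIKE 'Open%tables';   -- Open_tables vs. Opened_tables
  SELECT COUNT(*) FROM information_schema.TABLES
   WHERE TABLE_SCHEMA NOT IN ('information_schema','performance_schema','mysql');

A steadily climbing Opened_tables counter is the usual sign of cache churn.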
Hello Mogens,
On 8/18/2018 2:32 PM, Mogens Melander wrote:
Guys,
I think I remember this from way back.
You could ask for a lock, and get an OK if it is safe.
Something like, if there is pending transactions, on your target tables,
you would get a NO.
But then again. I could be wrong, and
Guys,
I think I remember this from way back.
You could ask for a lock, and get an OK if it is safe.
Something like, if there is pending transactions, on your target tables,
you would get a NO.
But then again. I could be wrong, and Shawn is the authority on this.
On 2018-08-18 23:59, shawn
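What Mogens describes sounds like user-level (advisory) locks; a sketch,
where the lock name 'maint' is arbitrary:

  SELECT GET_LOCK('maint', 0);      -- returns 1 if acquired, 0 if not
  -- ... do the work ...
  SELECT RELEASE_LOCK('maint');

Note that GET_LOCK() only takes a named advisory lock; it does not by itself
check for pending transactions on particular tables.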
Hello Jeff,
On 8/13/2018 12:05 PM, j...@lxvi.net wrote:
Hello, I have read through several pages of the reference manual, and
I've seen several instances where it is stated that LOCK TABLES (and
UNLOCK TABLES) is not allowed in a stored procedure, but so far, I
haven't found an expl
Hello, I have read through several pages of the reference manual, and
I've seen several instances where it is stated that LOCK TABLES (and
UNLOCK TABLES) is not allowed in a stored procedure, but so far, I
haven't found an explanation as to *why* that is. Could someone please
enlighten m
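The restriction is easy to demonstrate; the server rejects the statement when
the procedure is created (a sketch with a hypothetical table t; use a custom
DELIMITER in the mysql client):

  CREATE PROCEDURE p()
  BEGIN
    LOCK TABLES t WRITE;
    UNLOCK TABLES;
  END;
  -- ERROR 1314 (0A000): LOCK is not allowed in stored procedures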
Hi Martin,
On 4/12/2016 07:23, Martin Mueller wrote:
I abandoned a MySQL 5.22 database that quit suddenly and that I wasn't able to
start up again. The data directory consists of a mix of ISAM and Inno tables.
I was able to copy the ISAM tables into a new 5.6 version, and they work
Hi Martin
On 4/12/2016 07:23, Martin Mueller wrote:
I abandoned a MySQL 5.22 database that quit suddenly and that I wasn't able to
start up again. The data directory consists of a mix of ISAM and Inno tables.
I was able to copy the ISAM tables into a new 5.6 version, and they work.
I
On 12/3/2016 14:23, Martin Mueller wrote:
I abandoned a MySQL 5.22 database
There's been 5.0, 5.1, 5.4 (briefly), 5.5, 5.6 and now 5.7. No 5.22.
that quit suddenly and that I wasn't able to start up again. The data
directory consists of a mix of ISAM and Inno tables.
You mean M
On 03.12.2016 at 21:23, Martin Mueller wrote:
In my case, I can reproduce Time machine backups of data directories at varying
times. At one point I was able to replace the non-working installation with an
earlier installation, but then it failed unpredictably.
Are the Inno tables on Time
I abandoned a MySQL 5.22 database that quit suddenly and that I wasn't able to
start up again. The data directory consists of a mix of ISAM and Inno tables.
I was able to copy the ISAM tables into a new 5.6 version, and they work.
I understand that INNO tables are different because different
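For completeness: since the target here is 5.6, transportable tablespaces are
the supported way to move individual InnoDB tables, provided
innodb_file_per_table is on (a sketch with a hypothetical table t):

  -- on the source server:
  FLUSH TABLES t FOR EXPORT;   -- quiesces t; copy t.ibd and t.cfg away now
  UNLOCK TABLES;
  -- on the destination, after creating t with the identical definition:
  ALTER TABLE t DISCARD TABLESPACE;
  -- place the copied .ibd/.cfg files in the database directory, then:
  ALTER TABLE t IMPORT TABLESPACE;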
I am regularly using indices on medium-big tables (1000 to > 5
entries), and even on temporary tables (which I use a lot) in joins
(EXPLAIN SELECT is your friend).
But I'd never thought indices were needed for small tables (100-200
entries). I recently found they are useful too,
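EXPLAIN makes the effect visible; without an index the small lookup table is
scanned once per joined row (a sketch with hypothetical tables):

  EXPLAIN SELECT b.id
    FROM big_table b JOIN small_lookup s ON s.code = b.code;
  -- if s shows type=ALL, an index can pay off even at 100-200 rows:
  ALTER TABLE small_lookup ADD INDEX (code);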
r we are discussing (TDE) is
between the storage engine and the physical file (the data
in the file is encrypted), then any technique for doing
safe file-level backups will preserve the encryption.
Examples:
cold backups (copying off the files after stopping the
daemon)
FTWRL + wait for backg
ngine and the physical file (the data in the file is
encrypted), then any technique for doing safe file-level backups will
preserve the encryption.
Examples:
cold backups (copying off the files after stopping the daemon)
FTWRL + wait for background threads to complete their queues + file
sy
On 03.03.2016 at 16:40, lejeczek wrote:
how to backup in a way that this in-database-encryption will be taken
advantage of?
does any of present backup solutions can do it?
many thanks
think once again what "transparent encryption" means
the most effective backup is anyway running slave con
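The FTWRL variant mentioned above, as a sketch (the copy step is whatever
file-level mechanism you trust, e.g. an LVM snapshot):

  FLUSH TABLES WITH READ LOCK;   -- blocks writes server-wide
  -- take the file-level copy/snapshot of the data directory here
  UNLOCK TABLES;

Because the encryption sits between the storage engine and the file, the
copied files remain encrypted.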
On 02/03/16 00:51, shawn l.green wrote:
On 3/1/2016 6:26 PM, lejeczek wrote:
On 29/02/16 21:35, shawn l.green wrote:
On 2/29/2016 3:13 PM, Reindl Harald wrote:
On 29.02.2016 at 20:54, Gary Smith wrote:
On 29/02/2016 19:50, Reindl Harald wrote:
cryptsetup/luks can achieve that way b
On 3/1/2016 6:26 PM, lejeczek wrote:
On 29/02/16 21:35, shawn l.green wrote:
On 2/29/2016 3:13 PM, Reindl Harald wrote:
On 29.02.2016 at 20:54, Gary Smith wrote:
On 29/02/2016 19:50, Reindl Harald wrote:
cryptsetup/luks can achieve that way better
Only to a degree.
no - not only
On 29/02/16 21:35, shawn l.green wrote:
On 2/29/2016 3:13 PM, Reindl Harald wrote:
On 29.02.2016 at 20:54, Gary Smith wrote:
On 29/02/2016 19:50, Reindl Harald wrote:
cryptsetup/luks can achieve that way better
Only to a degree.
no - not only to a degree - when the question is "not
On 2/29/2016 3:13 PM, Reindl Harald wrote:
On 29.02.2016 at 20:54, Gary Smith wrote:
On 29/02/2016 19:50, Reindl Harald wrote:
cryptsetup/luks can achieve that way better
Only to a degree.
no - not only to a degree - when the question is "not store anything
unencrypted on the disk" th
On 29.02.2016 at 20:54, Gary Smith wrote:
On 29/02/2016 19:50, Reindl Harald wrote:
cryptsetup/luks can achieve that way better
Only to a degree.
no - not only to a degree - when the question is "not store anything
unencrypted on the disk" there is no degree, but or if
Once the disk is
On 29/02/2016 19:54, Gary Smith wrote:
However, if TDE is employed, then you've got another significant
obstacle to overcome: The data is only encrypted (aiui) once it's in
memory.
Apologies, that should read "unencrypted (aiui) once it's in memory"
Gary
On 29/02/2016 19:50, Reindl Harald wrote:
cryptsetup/luks can achieve that way better
Only to a degree. Once the disk is unencrypted, you've got access to the
filesystem. If you've got physical access to the machine, then anything
which gives you console access gives you (potentially) access
(internally in mysql) tables and php will be fine?
If not, would it be easy to re-code php to work with this new,
internal encryption?
Starting with MySQL 5.7.11, there is transparent data encryption (TDE)
for InnoDB tables. If you use that, it is, as the name suggests,
transparent for PHP. See also
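For reference, the 5.7.11 feature is enabled per table and needs a keyring
plugin loaded plus innodb_file_per_table (a sketch):

  CREATE TABLE secrets (id INT PRIMARY KEY, payload TEXT) ENCRYPTION='Y';
  ALTER TABLE existing_table ENCRYPTION='Y';   -- encrypt an existing table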
Hi Reindl,
On 2/29/2016 2:16 PM, Reindl Harald wrote:
On 29.02.2016 at 20:07, Jesper Wisborg Krogh wrote:
Hi Lejeczek,
On 1/03/2016 00:31, lejeczek wrote:
hi everybody
a novice type of question - having a php + mysql, can one just encrypt
(internally in mysql) tables and php will be fine
On 29.02.2016 at 20:07, Jesper Wisborg Krogh wrote:
Hi Lejeczek,
On 1/03/2016 00:31, lejeczek wrote:
hi everybody
a novice type of question - having a php + mysql, can one just encrypt
(internally in mysql) tables and php will be fine?
If not, would it be easy to re-code php to work with
Hi Lejeczek,
On 1/03/2016 00:31, lejeczek wrote:
hi everybody
a novice type of question - having a php + mysql, can one just encrypt
(internally in mysql) tables and php will be fine?
If not, would it be easy to re-code php to work with this new,
internal encryption?
Starting with MySQL
On 29/02/16 14:38, Steven Siebert wrote:
Simple answer is no. What are you trying to accomplish?
I was hoping that with this new feature (Google's), where
MySQL itself internally uses keys to encrypt/decrypt tables
or tablespaces, I could just secure data, simply.
Chances are I don't
Simple answer is no. What are you trying to accomplish?
S
On Mon, Feb 29, 2016 at 8:31 AM, lejeczek wrote:
> hi everybody
>
> a novice type of question - having a php + mysql, can one just encrypt
> (internally in mysql) tables and php will be fine?
> If not, would it be easy to
hi everybody
a novice type of question - having PHP + MySQL, can one
just encrypt (internally in MySQL) tables and PHP will be fine?
If not, would it be easy to re-code PHP to work with this
new, internal encryption?
thanks.
> From: Martin Mueller
>
> I moved the data directory of a MySQL installation from one computer to
> another. This works for MyISAM tables. Unfortunately I inadvertently
> created some INNO tables, and it doesn't seem to work.
Oh dear. Hope you have a backup.
I fought w
On 20.08.2015 at 23:06, Martin Mueller wrote:
I moved the data directory of a MySQL installation from one computer to
another. This works for MyISAM tables. Unfortunately I inadvertently
created some INNO tables, and it doesn't seem to work.
The show tables command accurately list
I moved the data directory of a MySQL installation from one computer to
another. This works for MyISAM tables. Unfortunately I inadvertently
created some INNO tables, and it doesn't seem to work.
The show tables command accurately lists the following tables from a
longer list
pos
The problem was here:
---TRANSACTION 154B1E00, ACTIVE 265942 sec rollback
mysql tables in use 1, locked 1
ROLLING BACK 297751 lock struct(s), heap size 35387832, 74438247 row
lock(s), undo log entries 66688203
MySQL thread id 37, OS thread handle 0x7f11bc4b9700, query id 110 localhost
pau query
o < 0
> > History list length 1136
> > LIST OF TRANSACTIONS FOR EACH SESSION:
> > ---TRANSACTION 0, not started
> > MySQL thread id 67, OS thread handle 0x7f11bc426700, query id 244
> > localhost pau
> > SHOW ENGINE INNODB STATUS
> > ---TRANSACTION 154B1E00, ACTIVE 26
OS thread handle 0x7f11bc426700, query id 244
> localhost pau
> SHOW ENGINE INNODB STATUS
> ---TRANSACTION 154B1E00, ACTIVE 265942 sec rollback
> mysql tables in use 1, locked 1
> ROLLING BACK 297751 lock struct(s), heap size 35387832, 74438247 row
> lock(s), undo log entri
e for trx's n:o < 154B1E0A undo n:o < 0
History list length 1136
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 67, OS thread handle 0x7f11bc426700, query id 244 localhost
pau
SHOW ENGINE INNODB STATUS
---TRANSACTION 154B1E00, ACTIVE 265942 sec rollback
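The progress of such a rollback can also be watched without parsing the
status output (a sketch; INNODB_TRX is available from 5.1's InnoDB Plugin
onward):

  SELECT trx_id, trx_state, trx_started, trx_rows_modified
    FROM information_schema.INNODB_TRX;

The shrinking "undo log entries" count in SHOW ENGINE INNODB STATUS is the
rollback making progress; restarting mysqld does not skip that work.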
On Sun, May 17, 2015 at 02:01:57PM +0530, Pothanaboyina Trimurthy wrote:
Guys, can I implore you to post to a mailing list using its address in To:
field and not CC:ing it? You are constantly breaking out of my filters.
--
vag·a·bond adjective \ˈva-gə-ˌbänd\
a : of, relating to, or characteri
get lock on that table.
> >>
> >> Cheers,
> >> Adarsh Sharma
> >>
> >>
> >> On Sat, 16 May 2015 at 23:34 Pau Marc Muñoz Torres
> >> wrote:
> >>
> >>> Hello everybody,
> >>>
> >>> I have a b
>> Cheers,
>> Adarsh Sharma
>>
>>
>> On Sat, 16 May 2015 at 23:34 Pau Marc Muñoz Torres
>> wrote:
>>
>>> Hello everybody,
>>>
>>> I have a big table in my SQL server and I want to delete it; it also
>>> has
>>>
"delete" commands but I
>> eventually get a time out. What can I do with it? Is there any method
>> to delete tables quickly?
>>
>> I know that drop and delete are not equivalent but I want to get rid of
>> all
>> information inside
>>
>>
>
> I have a big table in my SQL server and I want to delete it; it also has
> some indexes. I tried the "drop table" and "delete" commands but I
> eventually get a time out. What can I do with it? Is there any method
> to delete tables quickly?
>
> I know t
erver and I want to delete it; it also has
> some indexes. I tried the "drop table" and "delete" commands but I
> eventually get a time out. What can I do with it? Is there any method
> to delete tables quickly?
>
> I know that drop and delete are not equivalen
Hello everybody,
I have a big table in my SQL server and I want to delete it; it also has
some indexes. I tried the "drop table" and "delete" commands but I
eventually get a time out. What can I do with it? Is there any method
to delete tables quickly?
I know that dro
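A few options, as a sketch (big_table is hypothetical): if every row should
go, TRUNCATE or DROP avoids row-by-row work; if only part of the data must
go, batching keeps each statement short:

  TRUNCATE TABLE big_table;    -- empties the table, keeps the definition
  DROP TABLE big_table;        -- removes the table and its indexes entirely
  -- partial delete, repeated until it affects 0 rows:
  DELETE FROM big_table WHERE expired = 1 LIMIT 10000;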
I just upgraded a server from 5.1 to 5.5. Our tables are all MyISAM. I
have a python script that inserts rows into a table; in 5.1 it worked
fine. In 5.5 it's failing with 'Lock wait timeout exceeded'. I googled
this, and it seems that all the cases of people getting that were with
Inn
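Since 5.5 changed the default storage engine to InnoDB, it is worth verifying
that the tables really are still MyISAM (a sketch):

  SELECT TABLE_NAME, ENGINE
    FROM information_schema.TABLES
   WHERE TABLE_SCHEMA = DATABASE();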
XtraBackup can handle both InnoDB and MyISAM in
a consistent way while minimizing lock time on
MyISAM tables ...
http://www.percona.com/doc/percona-xtrabackup/2.1/
--
Hartmut Holzgraefe, Principal Support Engineer (EMEA)
SkySQL - The MariaDB Company | http://www.skysql.com/
ase has a mixture of MyISAM- and
> InnoDB-tables. A backup of this mix does not seem to be easy. Until now it
> was dumped using "mysqldump --opt -u root --databases mausdb ...". What I
> understand until now is that --opt is not necessary because it is default. It
> incl
Hi,
I've already been reading the documentation the whole day, but I'm still
confused and unsure what to do.
We have two databases which are important for our work. So both are stored
hourly. Now I recognized that each database has a mixture of MyISAM- and
InnoDB-tables. A backup of this mix
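For a mixed-engine dump the usual trade-off looks like this (a sketch, using
the mausdb name from the quoted command):

  mysqldump --single-transaction --databases mausdb   # consistent for InnoDB only
  mysqldump --lock-all-tables --databases mausdb      # consistent for both, blocks writes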
*please* don't use reply-all on mailing-lists
the list by definition distributes your message
On 30.06.2014 13:14, Antonio Fernández Pérez wrote:
> Thanks for your reply. Theoretically, fragmented tables do not offer the best
> performance with the InnoDB engine,
> that
Hi Johan,
Thanks for your reply. Theoretically, fragmented tables do not offer the best
performance with the InnoDB engine, is that correct or not?
I don't know if it is a problem or not; it is a doubt/question for me. I'm not
sure if it is atypical behaviour.
Thanks in advance.
Regards,
Antonio.
- Original Message -
> From: "Antonio Fernández Pérez"
> Subject: Re: Optimizing InnoDB tables
>
> I would like to know, if possible, why, after executing an ANALYZE TABLE
> command on some fragmented table, it appears fragmented again.
Simple question
Hello Antonio,
On 6/27/2014 9:31 AM, Antonio Fernández Pérez wrote:
Hi Reindl,
Thanks for your attention.
Following the previous mail, I have checked my MySQL configuration and
innodb_file_per_table is enabled, so I think that this parameter does not
directly affect fragmented tabl
Hi Reindl,
Thanks for your attention.
Following the previous mail, I have checked my MySQL configuration and
innodb_file_per_table is enabled, so I think that this parameter does not
directly affect fragmented tables in InnoDB (in this case).
I would like to know, if possible, why
On 27.06.2014 09:48, Antonio Fernández Pérez wrote:
> Thanks for your reply. I have checked the link and my configuration.
> Innodb_file_per_table is enabled and in the data directory a set of files
> appears for each table.
>
> Any ideas?
ideas for what?
* which files don't get shrunk (ls -lha)
Hi Andre,
Thanks for your reply. I have checked the link and my configuration.
Innodb_file_per_table is enabled and in the data directory a set of files
appears for each table.
Any ideas?
Thanks in advance.
Regards,
Antonio.
Have a look at this:
https://rtcamp.com/tutorials/mysql/enable-innodb-file-per-table/
--
Andre Matos
andrema...@mineirinho.org
On Jun 25, 2014, at 2:22 AM, Antonio Fernández Pérez
wrote:
> Hi again,
>
> I have enabled innodb_file_per_table (its value is ON).
> I'm not clear on what I sho
- Original Message -
> From: "Antonio Fernández Pérez"
> Subject: Re: Optimizing InnoDB tables
>
> I have enabled innodb_file_per_table (its value is ON).
> I'm not clear on what I should do ...
Then all new tables will be created in their own tablesp
Hi again,
I have enabled innodb_file_per_table (its value is ON).
I'm not clear on what I should do ...
Thanks in advance.
Regards,
Antonio.
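As the reply quoted above says, the setting only affects newly created
tables; an existing table moves into its own .ibd file when it is rebuilt
(a sketch, hypothetical table t):

  ALTER TABLE t ENGINE=InnoDB;   -- rebuilds t into its own tablespace file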
Hello Reindl,
On 6/24/2014 3:29 PM, Reindl Harald wrote:
On 24.06.2014 21:07, shawn l.green wrote:
It makes a huge difference if the tables you are trying to optimize have their
own tablespace files or if they live
inside the common tablespace.
http://dev.mysql.com/doc/refman/5.5/en
On 24.06.2014 21:07, shawn l.green wrote:
> It makes a huge difference if the tables you are trying to optimize have
> their own tablespace files or if they live
> inside the common tablespace.
>
> http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb
Hello Antonio,
On 6/24/2014 7:03 AM, Antonio Fernández Pérez wrote:
Hi list,
I was trying to optimize the InnoDB tables. I have executed the following query
to detect which tables are fragmented.
SELECT TABLE_SCHEMA,TABLE_NAME
FROM TABLES WHERE TABLE_SCHEMA NOT IN ("information_s
Hi Wagner,
I'm running
MySQL Percona Server 5.5.30 64-bit. No, I haven't tried to execute
ALTER TABLE (does ANALYZE do that with InnoDB tables, or not?).
Thanks in advance.
Regards,
Antonio.
Hi Antonio, how are you?
What's the mysql version you're running? Have you tried to ALTER TABLE x
ENGINE=InnoDB?
-- WB, MySQL Oracle ACE
> On 24/06/2014, at 08:03, Antonio Fernández Pérez
> wrote:
>
> Hi list,
>
> I was trying to optimize the InnoDB tables. I
Hi list,
I was trying to optimize the InnoDB tables. I have executed the following query
to detect which tables are fragmented.
SELECT TABLE_SCHEMA,TABLE_NAME
FROM TABLES WHERE TABLE_SCHEMA NOT IN ("information_schema","mysql") AND
Data_free > 0
After that, I have
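As written, that query only runs with information_schema as the default
database; fully qualified it would be (a sketch):

  SELECT TABLE_SCHEMA, TABLE_NAME, Data_free
    FROM information_schema.TABLES
   WHERE TABLE_SCHEMA NOT IN ('information_schema','mysql')
     AND Data_free > 0;

Keep in mind that Data_free is a rough fragmentation signal at best,
particularly for tables living in the shared tablespace.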
know that MySQL supports this amount, but in this case, and thinking of the
future, I have this problem with my architecture: how can I grow in
database servers without deleting rows from the tables.
I have checked slow queries and now there aren't any.
These tables are serving queries from FreeRADIUS
What kind of queries is this table serving? 8GB is not a huge amount of
data at all and IMO it's not enough to warrant sharding.
On Thu, May 15, 2014 at 1:26 PM, Antonio Fernández Pérez <
antoniofernan...@fabergroup.es> wrote:
>
>
>
> Hi,
>
> I have in my ser
2014-05-19 11:49 GMT+02:00 Johan De Meersman :
>
> - Original Message -
> > From: "Manuel Arostegui"
> > Subject: Re: Big innodb tables, how can I work with them?
> >
> > noSQL/table sharding/partitioning/archiving.
>
> I keep wondering how
- Original Message -
> From: "Manuel Arostegui"
> Subject: Re: Big innodb tables, how can I work with them?
>
> noSQL/table sharding/partitioning/archiving.
I keep wondering how people believe that NoSQL solutions magically don't need
RAM to work. Nearly
2014-05-15 14:26 GMT+02:00 Antonio Fernández Pérez <
antoniofernan...@fabergroup.es>:
>
>
>
> Hi,
>
> I have in my server database some tables that are too big and produce
> some slow queries, even with correct indexes created.
>
> For my application, it
Hello Antonio,
On 5/16/2014 9:49 AM, Antonio Fernández Pérez wrote:
Hi,
I write to the list because I need your advice.
I'm working with a database with some tables that have a lot of rows, for
example I have a table with 8GB of data.
How you design your tables can have a huge impa
- Original Message -
> From: "Antonio Fernández Pérez"
> Subject: Advices for work with big tables
>
> Hi,
>
> I write to the list because I need your advice.
>
> I'm working with a database with some tables that have a lot of rows, for
>
On 16.05.2014 15:49, Antonio Fernández Pérez wrote:
> I write to the list because I need your advice.
>
> I'm working with a database with some tables that have a lot of rows, for
> example I have a table with 8GB of data.
>
> How can I do to have a fluid job with thi
Hi,
I write to the list because I need your advice.
I'm working with a database with some tables that have a lot of rows, for
example I have a table with 8GB of data.
What can I do to work fluidly with this table?
My server works with a disk cabinet, and I think that sharding and partiti
On 15.05.2014 14:26, Antonio Fernández Pérez wrote:
> I have in my server database some tables that are too big and produce
> some slow queries, even with correct indexes created.
>
> For my application, it's necessary to have all the data because we make an
> authent
Hi,
I have in my server database some tables that are too big and produce
some slow queries, even with correct indexes created.
For my application, it's necessary to have all the data because we make an
authentication process with RADIUS users (AAA protocol) to determine if one
use
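If the accounting data is append-mostly, range partitioning by date is one
common way to keep growth manageable and purges cheap (a sketch with a
hypothetical table; the partitioning column must be part of every unique key):

  ALTER TABLE radius_acct
  PARTITION BY RANGE (TO_DAYS(start_time)) (
    PARTITION p2013 VALUES LESS THAN (TO_DAYS('2014-01-01')),
    PARTITION p2014 VALUES LESS THAN (TO_DAYS('2015-01-01')),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
  );
  ALTER TABLE radius_acct DROP PARTITION p2013;   -- drops old rows instantly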
* h...@tbbs.net [140407 23:09]:
> 2014/04/07 08:02 -0800, Tim Johnson
> 2)mysqldump forces all database names to lower case in the "CREATE
> DATABASE" statement. I know, one shouldn't use upper case in
> database names, but :) tell that to my clients.
>
> Why not? That is
2014/04/07 08:02 -0800, Tim Johnson
2)mysqldump forces all database names to lower case in the "CREATE
DATABASE" statement. I know, one shouldn't use upper case in
database names, but :) tell that to my clients.
Why not? That is not mentioned in the section devoted to mapp
mysql on
> >>it. I have already set up a mysql user on the ubuntu OS.
> >>
> >>In the past I have used mysqldump with just the --all-databases
> >>option to transfer data across different linux partitions.
> >>
> >>I'm wondering if I
ldump with just the --all-databases
option to transfer data across different linux partitions.
I'm wondering if I should explicitly exclude some of the tables from
the mysql database. If so, which? perhaps mysql.user?
thoughts? Opinions?
thanks
I should add the following:
1)the only
l-databases
> option to transfer data across different linux partitions.
>
> I'm wondering if I should explicitly exclude some of the tables from
> the mysql database. If so, which? perhaps mysql.user?
>
> thoughts? Opinions?
> thanks
I should add the following:
1)the
linux partitions.
I'm wondering if I should explicitly exclude some of the tables from
the mysql database. If so, which? perhaps mysql.user?
thoughts? Opinions?
thanks
--
Tim
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com
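mysqldump can skip individual tables with --ignore-table, one db.table per
occurrence (a sketch):

  mysqldump --all-databases --ignore-table=mysql.user > all.sql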
ilto:shawn.l.gr...@oracle.com]
Sent: Friday, March 21, 2014 3:34 PM
To: mysql@lists.mysql.com
Subject: Re: Locking a Database (not tables) x
Hi David.
On 3/21/2014 1:42 PM, David Lerer wrote:
Frequently, we import a production dump that contains only 1 or 2 databases into one of our QA instances
that con
1, 2014 3:34 PM
To: mysql@lists.mysql.com
Subject: Re: Locking a Database (not tables) x
Hi David.
On 3/21/2014 1:42 PM, David Lerer wrote:
> Frequently, we import a production dump that contains only 1 or 2 databases
> into one of our QA instances that contains many more databases. (i.e.
Perhaps enabling read_only, followed by an import as a super user, will do what
you want.
On Mar 22, 2014, at 12:26 AM, Manuel Arostegui wrote:
> 2014-03-21 18:42 GMT+01:00 David Lerer :
>
>> Frequently, we import a production dump that contains only 1 or 2
>> databases into one of our QA inst
2014-03-21 18:42 GMT+01:00 David Lerer :
> Frequently, we import a production dump that contains only 1 or 2
> databases into one of our QA instances that contains many more databases.
> (i.e. "database" being a "schema" or a "catalogue).
> At the beginning of the import script, we first drop all
s wondering whether there is a
better approach.
If you start with a DROP DATABASE that will pretty much ensure
that nobody gets back into it.
Then re-create your tables in a new DB (yyy)
As a last set of steps do
CREATE DATABASE
RENAME TABLE yyy.table1 to .table1,
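A sketch of the two approaches raised in this thread (xxx/yyy are placeholder
schema names):

  SET GLOBAL read_only = ON;    -- blocks writes from non-SUPER users
  -- run the import as a SUPER user, then:
  SET GLOBAL read_only = OFF;

  -- or the drop/reload/rename flow:
  DROP DATABASE xxx;
  -- load the dump into yyy, then:
  CREATE DATABASE xxx;
  RENAME TABLE yyy.table1 TO xxx.table1, yyy.table2 TO xxx.table2;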
From: Wayne Leutwyler [mailto:wleut...@columbus.rr.com]
Sent: Friday, March 21, 2014 2:12 PM
To: David Lerer
Subject: Re: Locking a Database (not tables) x
You could set max_connections = 0; then kill off any remaining connections. Do
your data load and then set your max_connections back to what it was prior.
Frequently, we import a production dump that contains only 1 or 2 databases
into one of our QA instances that contains many more databases. (i.e.
"database" being a "schema" or a "catalogue).
At the beginning of the import script, we first drop all objects in the QA
database so that it will be a
Frequently, we import a production dump that contains only 1 or 2 databases
into one of our QA instances that contains many more databases. (i.e.
"database" being a "schema" or a "catalogue).
At the beginning of the import script, we first drop all objects in the QA
database so that it will be a
Parallel Universe* now features Parallel Network Query (Distributed Query)
which joins tables from multiple servers in the network with unprecedented
speed.
Parallel Network Query may also be used to speed up slow server by
distributing tables of the query to multiple servers for processing which
I got an "interesting" problem with creation of indexes on MyISAM
tables in MySQL 5.6.15 and MySQL 5.6.14 running on FreeBSD 8.4 for float
columns - I am not able to create indexes on these columns
Indexes on all other columns work just fine
The problem occurred while I was loading data
--databases, methinks.
- Original Message -
> From: "Daevid Vincent"
> To: mysql@lists.mysql.com
> Sent: Thursday, 21 November, 2013 10:44:39 PM
> Subject: How do I mysqldump different database tables to the same .sql file?
>
> I'm working on some co
21, 2013 1:59 PM
> To: MySql
> Subject: Re: How do I mysqldump different database tables to the same .sql
> file?
>
> There is a good reason that the USE database is not output in those
dumps..
> it would make the tool very difficult to use for moving data around.
>
> If I
id Vincent wrote:
> I'm working on some code where I am trying to merge two customer accounts
> (we get people signing up under different usernames, emails, or just create
> a new account sometimes). I want to test it, and so I need a way to restore
> the data in the particula
I'm working on some code where I am trying to merge two customer accounts
(we get people signing up under different usernames, emails, or just create
a new account sometimes). I want to test it, and so I need a way to restore
the data in the particular tables. Taking a dump of all the DB
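The --databases form does exactly this: it emits CREATE DATABASE/USE
statements for each schema, so one .sql file restores them all (a sketch with
hypothetical names):

  mysqldump --databases accounts_a accounts_b > merge_test.sql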
2.6.32-279.14.1.el6.x86_64
I have created the checksum table and tried to use --ignore-tables-regex to
exclude some tables from checking.
pt-table-checksum --chunk-size-limit= --nocheck-plan --replicate-check
--ignore-tables-regex=^test.s_.*_tmp$
--ignore-tables=test.catalogsearch_fulltext
Hello Neil,
On 8/24/2013 5:21 AM, Neil Tompkins wrote:
I have the following four MySQL tables
Region: RegionId
City: CityId, RegionId
Hotel: HotelId, CityId
HotelRegion: HotelId, RegionId
I'm struggling to write an UPDATE statement to update the City table's
RegionId field from d
I have the following four MySQL tables
Region: RegionId
City: CityId, RegionId
Hotel: HotelId, CityId
HotelRegion: HotelId, RegionId
I'm struggling to write an UPDATE statement to update the City table's
RegionId field from data in the HotelRegion table.
Basically, how can I update the
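A multi-table UPDATE can join through Hotel to carry the RegionId across
(a sketch; it assumes each city takes the region of its hotels, so cities
whose hotels disagree should be checked first):

  UPDATE City c
    JOIN Hotel h        ON h.CityId   = c.CityId
    JOIN HotelRegion hr ON hr.HotelId = h.HotelId
     SET c.RegionId = hr.RegionId;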
T() and HAVING to do the filtering.
> -Original Message-
> From: Daevid Vincent [mailto:dae...@daevid.com]
> Sent: Monday, July 08, 2013 7:57 PM
> To: mysql@lists.mysql.com
> Subject: RE: Need query to determine different column definitions across
> tables
>
>
> -Original Message-
> From: Daevid Vincent [mailto:dae...@daevid.com]
> Sent: Monday, July 08, 2013 2:11 PM
> To: mysql@lists.mysql.com
> Subject: Need query to determine different column definitions across
tables
>
> I'm noticing that across our several
to:dae...@daevid.com]
> Sent: Monday, July 08, 2013 2:11 PM
> To: mysql@lists.mysql.com
> Subject: Need query to determine different column definitions across
> tables
>
> I'm noticing that, across our several databases and hundreds of tables,
> column definitions are
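information_schema.COLUMNS reduces this to one query: group by column name
and keep the names with more than one distinct definition (a sketch):

  SELECT COLUMN_NAME,
         GROUP_CONCAT(DISTINCT COLUMN_TYPE) AS definitions,
         COUNT(DISTINCT COLUMN_TYPE) AS variants
    FROM information_schema.COLUMNS
   WHERE TABLE_SCHEMA NOT IN ('information_schema','mysql','performance_schema')
   GROUP BY COLUMN_NAME
  HAVING variants > 1;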