On 13/03/16 14:52, Richard wrote:
> Date: Wednesday, March 09, 2016 14:38:45 +
> From: lejeczek
>
> hi everybody
>
> I imagine this is a theoretical rather than a practical question,
> albeit I don't have much practice, so I hope the experts could comment.
> The logical view of the procedure is: mysqldump && truncate - what are
> the ch
Ah, ok, if I understand correctly, within this context every record in the
one table _should_ have a unique identifier. Please verify this is the
case, though; if, for example, the primary key is an auto-increment, what I'm
going to suggest is not good and Really Bad Things will, not may, happen.
If
Totally with you, I had to get up and wash my hands after writing such
filth =)
On 29/02/2016 16:32, Steven Siebert wrote:
At risk of giving you too much rope to hang yourself: if you use
mysqldump to dump the database, if you use the --replace flag you'll
convert all INSERT statements to REPLACE, which when you merge will
update or insert the record, effectively "merging" the data.
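A minimal sketch of that kind of dump, with made-up names throughout (mydb, events, mydb_aggregate); --no-create-info is added so the restore does not drop and recreate the target table:
  mysqldump --replace --no-create-info mydb events > events.sql
  mysql mydb_aggregate < events.sql
Each REPLACE in the file then updates an existing row with the same primary key or inserts a new one.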
On 29/02/16 16:32, Steven Siebert wrote:
What level of control do you have on the remote end that is
collecting/dumping the data? Can you specify the command/arguments on how
to dump? Is it possible to turn on binary logging and manually ship the
logs rather than shipping the dump, effectively manually doing replication?
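A rough sketch of that approach, with made-up file names and paths; the my.cnf lines enable the binary log on the remote server, and mysqlbinlog later replays a shipped log file against the central one:
  [mysqld]
  log-bin   = mysql-bin
  server-id = 2

  # on the receiving side, after copying a log file over:
  mysqlbinlog mysql-bin.000001 | mysql -u root -p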
I agree with o
- Original Message -
> From: "lejeczek"
> Subject: Re: dump, drop database then merge/aggregate
>
> today both databases are mirrored/identical;
> tonight the awkward end will dump, then remove all the data, then
> collect some and, again, dump then remove,
> and
On 28/02/16 20:50, lejeczek wrote:
fellow users, hopefully you experts too, could help...
...me to understand how, and what should be the best
practice to dump a database, then drop it and merge the
dumps.
What I'd like to do is something probably many have done,
and I wonder how it's done best.
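As a bare-bones sketch of the dump-then-clear step being asked about (names are invented, and this ignores anything written between the two commands, which is part of the risk the replies warn about):
  mysqldump --single-transaction mydb events > events-$(date +%F).sql \
    && mysql -e 'TRUNCATE TABLE mydb.events'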
OPTIMIZE TABLE sometimes helps, ymmv.
http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html
/ Carsten
Nico Sabbi wrote:
Hi,
I noticed that over the months the dump of my databases (subject to
frequent modification, but not to significant growth in
size) gets progressively slower
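For reference, the suggestion above in its simplest form (table and database names are placeholders); mysqlcheck exposes the same operation from the shell:
  -- from the mysql client:
  OPTIMIZE TABLE mytable;
  # or, for every table in a database, from the shell:
  mysqlcheck --optimize mydb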
>-Original Message-
>From: Jerry Schwartz [mailto:jschwa...@the-infoshop.com]
>Sent: Wednesday, October 07, 2009 2:15 PM
>To: 'John Oliver'; mysql@lists.mysql.com
>Subject: RE: Dump / restore rows in table?
>
[JS] I should have mentioned that you can do this
Are you just trying to copy a subset of one table into another? If so, simply
do this:
CREATE TABLE new_one SELECT * FROM old_one LIMIT 1000,5000;
That will create a table with the same columns, but no keys or such. If you
want to copy the key structure, it will take you two commands:
CREATE T
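The truncated second example was presumably along these lines (keeping the hypothetical old_one/new_one names); CREATE TABLE ... LIKE copies the column and key definitions, and the INSERT then copies the rows:
  CREATE TABLE new_one LIKE old_one;
  INSERT INTO new_one SELECT * FROM old_one LIMIT 1000, 5000;
Without an ORDER BY, which 5000 rows fall inside that LIMIT window is not guaranteed.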
In the last episode (Oct 07), John Oliver said:
> I did try to find out how to do this in the manual, but "row" and
> "table" occur so many times...
>
> I want to dump a certain number of rows from one table, and then restore
> them to another database. I'm guessing I'd try "mysqldump -u root
> -
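One possible shape for this, with invented names and an invented WHERE condition; --no-create-info keeps the dump from recreating the table in the target database:
  mysqldump -u root -p --no-create-info --where="id BETWEEN 1000 AND 5000" db1 mytable > rows.sql
  mysql -u root -p db2 < rows.sql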
נור דאוד wrote:
Hello list,
I have a problem dumping a database. The problem is that the database uses the Swedish charset (historical; the hosting provider didn't have all the sets). The data itself is Arabic (windows-1256), and although I have no idea how it is stored inside the database's files, t
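Offered only as a guess at what might help here: a common workaround for this kind of legacy-charset mismatch is to pin the dump's connection character set to latin1 so the stored bytes pass through untranslated (database name is a placeholder):
  mysqldump --default-character-set=latin1 mydb > mydb.sql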
Tim, it's a gnarly problem that most DBAs struggle with in some form or
another, whether using MySQL or another database package.
If you're using only MyISAM tables, MySQL's free, included 'mysqlhotcopy'
script might work for you, as it's generally a bit faster than mysqldump in
my experience. I
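For context, mysqlhotcopy is invoked roughly like this (database name and target directory are placeholders); it copies the MyISAM table files directly, so it has to run on the same host as the server:
  mysqlhotcopy mydb /path/to/backup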
In the last episode (Feb 01), Jim C. said:
> Is it possible to dump to the old MySQL 4.x format? There are some
> conversion tools I would like to use and they don't support 5.0 yet.
mysqldump --compatible=mysql40 ; see the mysqldump manpage for all the
options.
--
Dan Nelson
In the last episode (May 22), Alain Roger said:
> I would like to make a dump of my database.
> However, I have some stored procedures and they are not exported to the *.txt
> file I've created via mysqldump.
>
> So, how can I dump stored procedures?
Add the -R flag to mysqldump:
-R, --routines
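For example (database name made up):
  mysqldump -u root -p --routines mydb > mydb.sql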
Hello.
Not enough information to draw a conclusion. Could you please provide the
versions of the MySQL Server and the mysqldump utility, and include the
command-line options for mysqldump? Check whether --skip-quote-names helps you. See:
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
wangxu wrote:
> When
-Original Message-
From: Luiz Rafael Culik Guimaraes [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 24 January 2006 11:40 AM
To: MYSQL List; Paul DuBois
Subject: Re: Dump
Paul
Dear Friends
What are the best options to dump an entire database on Linux (with
creation of databases and tables) without dumping the index creation
sentences?
What is an "index creation sentence"?
Do you mean that you want the dump to include the CREATE TABLE statements,
but for thos
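As far as I know there is no mysqldump switch that drops only the index definitions, so one workaround (an assumption, not something stated in the thread) is to split structure and data and trim the KEY lines from the structure file by hand:
  mysqldump --no-data mydb > schema.sql        # CREATE TABLE statements only
  mysqldump --no-create-info mydb > data.sql   # INSERT statements only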
Hello.
If you have such a big database, maybe you should think about the
--tab option of mysqldump:
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
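A sketch of what that looks like (path and names invented); --tab writes a .sql file with the table definition and a .txt file with tab-separated data for each table, and the directory has to be writable by the server process because the data files come from SELECT ... INTO OUTFILE:
  mysqldump --tab=/var/tmp/mydb-dump mydb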
Tom Brown wrote:
> is it possible to do a mysql dump to more than 1 file? We will shortly
> be needing to dump a db that will be
The output of mysqldump is standard output, not a file. You can pipe
it into another program, or redirect the output to a file, but
mysqldump does not make a file. Therefore, there is no option in
mysqldump to make more than 1 file.
How is your database stored on disk? The documentation Edwi
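As an illustration of the piping idea (names and chunk size invented), compressing the stream and splitting it into 1 GB pieces, then reassembling it for a restore:
  mysqldump mydb | gzip | split -b 1024m - mydb.sql.gz.
  cat mydb.sql.gz.* | gunzip | mysql mydb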
At 3:56 PM + 11/21/05, Tom Brown wrote:
is it possible to do a mysql dump to more than 1 file? We will
shortly be needing to dump a db that will be in excess of 50GB, so we
will encounter file size issues.
This is on 4.1.x and RHEL 4.
Probably the best approach - knowing nothing about your db
mysqldump has a "where" condition; you may have to segment your data and dump
it into different files.
mysqldump --where="date BETWEEN 'dateStart' AND 'dateFinish'" (for
example)
See full documentation at:
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
Regards!!
Edwin Cruz
-Me
> >Isn't this what the --hex-blob option to mysqldump is for?
>
> There is no such option to mysqldump in version 4.1.11.
>
From the manual:
--hex-blob
Dump binary string columns using hexadecimal notation (for example,
'abc' becomes 0x616263). The affected columns are BINARY, VARBINARY,
and BLOB.
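For example (database name made up):
  mysqldump --hex-blob mydb > mydb.sql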
> We did a mysqldump to produce a transport file
> from version 3 of mysql to insert the data into version 4 of mysql.
> The encoded numbers were munged, presumably because they were
> binary data in the dump.
Isn't this what the --hex-blob option to mysqldump is for?
Csongor,
- Original Message -
From: "Fagyal Csongor" <[EMAIL PROTECTED]>
To: "Heikki Tuuri" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, September 13, 2004 3:36 PM
Subject: Re: Dump question: transactions v
Hi Heikki,
Csongor,
in InnoDB, it is better to use
SELECT ... FOR UPDATE
to lock the result set of a SELECT.
Thank you, I think I will go with this one.
Csongor,
in InnoDB, it is better to use
SELECT ... FOR UPDATE
to lock the result set of a SELECT.
A plain SELECT in InnoDB is a consistent, non-locking read that reads a
snapshot of the database at an earlier time. It does not lock anything.
Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys
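A small sketch of the pattern Heikki describes, with an invented table; the FOR UPDATE read keeps the matching row locked until the COMMIT:
  START TRANSACTION;
  SELECT qty FROM stock WHERE item_id = 42 FOR UPDATE;
  UPDATE stock SET qty = qty - 1 WHERE item_id = 42;
  COMMIT;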
Maybe MyISAM is still a better choice for this use...?
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary
column (or three columns in your case) in a multiple-column index. In
this case, the generated value for the AUTO_INCREMENT column is
calculated as MAX(auto_increment_colu
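A sketch of that feature with invented columns; with MyISAM, the id value starts over from 1 within each distinct grp prefix:
  CREATE TABLE seq_demo (
    grp  INT NOT NULL,
    id   INT NOT NULL AUTO_INCREMENT,
    note VARCHAR(100),
    PRIMARY KEY (grp, id)
  ) ENGINE=MyISAM;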
You may need to set lower_case_table_names = 1 to turn off the case sensitivity
of table names on the Unix system, since Windows is not case sensitive.
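In my.cnf terms, that suggestion would look something like this:
  [mysqld]
  lower_case_table_names = 1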
-Original Message-
From: Ben David, Tomer
To: [EMAIL PROTECTED]
Sent: 7/27/04 5:56 AM
Subject: dump case sensitive windows
Hi
I'm using mysqldump in windows
If you enter the command...
mysqldump --help
you'll find a long listing of qualifiers that you can use with this, one
of which is -w (or --where=) which allows you to specify what you want
dumped.
HTH,
Ron
Look at the output of mysqldump --help
-w, --where=name        Dump only selected records;
-t, --no-create-info
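Putting those two together, a hedged example of dumping only the rows a query would match, without the CREATE TABLE statement (table name and condition are invented):
  mysqldump -t -w "order_date >= '2004-01-01'" mydb orders > subset.sql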
-Original Message-
From: James
To: Mysql List
Sent: 6/16/04 8:56 AM
Subject: dump data based on query
Hello,
I'm trying to get my brain around this problem. I have a database that
Thanks Brian,
but I'm on Red Hat Enterprise server, so I wouldn't be able to use the
features of FreeBSD, and restarting the server is not a solution for me
as I need to be able to perform hot backups.
One of my solutions was to dump the database to a text file. This works
fine for small tables but
On Tue, Jun 15, 2004 at 10:14:54AM +0400, Vinay wrote:
> Hi,
> I'd just like to know what is the best way to dump the databases to flat
> files, as some of the tables on my system are up to 1.4G and the whole
> database size is over 2.5G. I'm trying to set up a cron job that will
> back up the table
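One possible shape for such a cron job, purely as a sketch (schedule, paths and options are assumptions, and credentials are expected to come from the invoking user's ~/.my.cnf); note that % has to be escaped inside a crontab entry:
  30 2 * * * mysqldump --all-databases | gzip > /backup/all-$(date +\%F).sql.gz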
John,
Wednesday, November 20, 2002, 8:32:35 AM, you wrote:
JC> After I dump the database what is the command to import it into another
JC> computer?
mysql -h host_name -u user_name -p [database_name] < dump_file.sql
JC> Do I have to create a new database and then import all tables
JC> and data or will it create th
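On the second question, a hedged note: a dump made with --databases (or --all-databases) already contains the CREATE DATABASE statement; otherwise the target database has to exist before the import, e.g.:
  mysqladmin -h host_name -u user_name -p create database_name
  mysql -h host_name -u user_name -p database_name < dump_file.sql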
TextPad ( http://www.textpad.com ) also handles Unix/Win (for table/db
dumps, etc) without a problem and is my own personal choice. Includes syntax
highlighting if you want that
kind of thing.
Emacs on Windows is also good, but it can be a bear to configure and get
running the way you might want
Also, if you want the real deal, go get emacs, either at
http://xemacs.org
or http://www.gnu.org
It's the best (no flames here), but has a pretty steep learning curve.
-- Original Message --
From: "Steve Brazill" <[EMAIL PROTECTED]>
Reply-To: "Steve Brazil
Don't forget that "WinVI" exists and can be found (and downloaded) at:
http://www.winvi.de/en/
And for those who like the 'code warrior' or 'slickedit' type of stuff
(where different types of 'statements' are highlighted in different colors),
there's "Code Genie" available at:
http://www.c
i guess that is confusing to me because it says "query".
i'm really stuck though -see my recent 'please help' post to the mysql list.
thanks.
> thank you.
> that's sort of what i thought.
>
> now for the silly question:
> i know how to do it with 'monitor' (CLI), but where/how do you
> "import" using phpMyAdmin, please? (this i'd love to know :) i looked
> through the manual but didn't see anything for "import" db.
Click a database on
That is a .sql file renamed to a .dump file, there's no difference. Import
it with phpMyAdmin or mysql -u username -ppassword dbname < whatever.dump
i had some others that i browsed into phpmyadmin and they seemed to
work, but others, like this one, showed a Query error -but that
"method" of "imported population" was probably wrong anyway, i guess.
thanks, here's the file:
# MySQL dump 6.8
#
# Host: localhost    Database: netsloth
#
What format are they in? Please paste a few sample lines.
> hello,
>
> i have some files from a Cd that are .dump files, how do i get these
> db into mysql? (is it possible using phpMyAdmin).
>
> again, these are ".dump" files, not ".sql" files.