Re: Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-11-06 Thread Roberta Jaskólski
 2014/11/04 11:04 -0800, Jan Steinman 
I was not suggesting it for dates. The OP appeared to have a well-defined set 
of strings in a VARCHAR field — THAT is what I suggested ENUMs for!



What is the update frequency of those VARCHARs? If you're adding them often — 
or if you need to occasionally change their value — I'd use another table with 
a reference.

If they're immutable and new ones are not added often, there's no design cost 
at all to using ENUMs. I'd argue there's a higher maintenance cost to NOT using 
them! 

Ugh--I missed the discussion shift from the DATEs to the VARCHAR labels ... and 
now I wholeheartedly agree with you.

As for the DATEs, I still suspect that for performance TIMESTAMP may be 
slightly better than DATE.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-11-06 Thread Zbigniew
2014-11-06 21:49 GMT+01:00, Roberta Jaskólski h...@tbbs.net:

 Ugh--I missed the discussion shift from the DATEs to the VARCHAR labels ...
 and now I wholeheartedly agree with you.

 As for the DATEs, I yet suspect that for performance maybe TIMESTAMP is
 slightly better than DATE.

Well, what I'm interested in - and what my original question was
about - is a SIGNIFICANT difference. If everything I can count on is
just slight improvements (by which I mean differences that can be
detected only by benchmarks, but not really during database
operation), then it actually doesn't make much sense, it seems.
-- 
Zbig




Re: Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-11-04 Thread Johan De Meersman


- Original Message -
 From: Zbigniew zbigniew2...@gmail.com
 Subject: Using INTEGER instead of VARCHAR/DATE - is this a way to faster 
 access?

 What about using ENUMs? They have nearly the performance of INTEGERs, but
 you don't have to maintain a string mapping in your programming logic.
 
 But are you able to estimate, what boost can i notice? 5% - or 50%,
 or maybe even 500%?

I'll give you an interesting reason to switch to ENUM (or SMALLINT, if so 
inclined): your data fields will be smaller. That not only means more records 
in a page (which might be negligible), but more importantly, it'll make the 
index on that field smaller, meaning a) more of it will remain in memory and 
b) lookups on it will be marginally faster, too.

I have no hard data on how it'll impact index performance (your dataset is 
yours to benchmark), but per million records (and you were talking about 
several million), each byte saved is a megabyte of memory that can be used for 
other purposes, like data cache, which will speed up other things, too.
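To put rough numbers on that, a sketch (not from the thread - the table and 
label names are invented; the per-type sizes are the documented storage sizes):

```sql
-- Assumed table shaped like the OP's. A VARCHAR(40) label costs
-- 1 length byte + the string bytes in every row and index entry;
-- an ENUM with up to 255 members costs 1 byte, a SMALLINT 2 bytes.
ALTER TABLE events
  MODIFY label ENUM('alpha','beta','gamma') NOT NULL;

-- Compare the data/index footprint before and after the change:
SHOW TABLE STATUS LIKE 'events';
```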


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-11-04 Thread Jan Steinman
 From: (Halász Sándor) h...@tbbs.net
 
 2014/10/29 20:56 +0100, Zbigniew 
 if instead of textual labels I'll use SMALLINT as simple integer code for 
 each different label (and in my program a translation table)? 
 
 This, in effect, is ENUM...

Well, not really! With INTEGERs, your referential integrity is external to 
MySQL, and has to be managed.

 ... and maybe is useful if the set of dates is well determined...

I was not suggesting it for dates. The OP appeared to have a well-defined set 
of strings in a VARCHAR field — THAT is what I suggested ENUMs for!

 There is a design cost in using ENUM: If you find that your first set of 
 dates is too small, later, with ALTER TABLE, you have to change the type.

Again, the suggestion for ENUM was to replace a constrained set of VARCHARs, 
and yet, you raise a valid point.

What is the update frequency of those VARCHARs? If you're adding them often — 
or if you need to occasionally change their value — I'd use another table with 
a reference.

If they're immutable and new ones are not added often, there's no design cost 
at all to using ENUMs. I'd argue there's a higher maintenance cost to NOT using 
them!
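The two designs being weighed here can be sketched side by side (a minimal 
illustration; the table and label names are invented, not from the thread, and 
the FOREIGN KEY is only enforced on InnoDB):

```sql
-- Option A: ENUM - for a fixed set of values that rarely changes.
CREATE TABLE events_enum (
  info  VARCHAR(80) NOT NULL,
  label ENUM('alpha','beta','gamma') NOT NULL,
  d     DATE NOT NULL,
  KEY (label, d)
);

-- Option B: reference table - values can be added or renamed
-- without an ALTER TABLE on the big table.
CREATE TABLE labels (
  label_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name     VARCHAR(40) NOT NULL UNIQUE
);
CREATE TABLE events_ref (
  info     VARCHAR(80) NOT NULL,
  label_id SMALLINT UNSIGNED NOT NULL,
  d        DATE NOT NULL,
  KEY (label_id, d),
  FOREIGN KEY (label_id) REFERENCES labels (label_id)
);
```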

 Jan Steinman, EcoReality Co-op 





Re: Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-11-03 Thread Halász Sándor
 2014/11/02 13:19 +0100, Zbigniew 
So you guys (Jan and hsv) suggest, that switching from DATE to more
numeric data type may not be necessary, but using ENUM instead of
VARCHAR can be real performance gain, right?

But are you able to estimate, what boost can i notice? 5% - or 50%,
or maybe even 500%? 

_Maybe_! but I will not venture to measure it. I doubt that it would be big. I 
just looked ENUM up: it allows 65535 distinct values, which sounds like 16 
bits, usually two bytes, and numeric operators would be used.

 2014/11/02 11:19 -0800, Jan Steinman 
I would hope that the query optimizer converts '2014-11-02' to the three-bytes 
internal representation of DATE before doing the query, in which case, DATE 
should actually be a tiny bit faster than TIMESTAMP. 

That is doubtful. In the processors that I know, one built-in numeric operation 
is enough for either 2 bytes of ENUM or 4 bytes of TIMESTAMP, but three are 
needed for DATE. In any case, the C-overhead, since MySQL is implemented in C, 
overwhelms the cost of any single comparison. The equality comparison is at 
least simpler than an ordered comparison.

 2014/10/29 20:56 +0100, Zbigniew 
if instead of textual labels I'll use SMALLINT as simple integer code for 
each different label (and in my program a translation table)? 

This, in effect, is ENUM, and maybe is useful if the set of dates is well 
determined. If you use ENUM at least the overhead of translation is built in 
into MySQL, and, one hopes, costs less than doing it for oneself.

There is a design cost in using ENUM: If you find that your first set of dates 
is too small, later, with ALTER TABLE, you have to change the type. If you add 
the new string to the end, there is, maybe, no great cost to the adding, but 
adding in the middle is costly. And if every new date-string is added to the 
end, the order will, maybe, end up random. The same applies to your 
translation table.
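A sketch of that design cost (values invented): appending to the end of an 
ENUM list is the cheap case, while inserting into the middle renumbers the 
stored values and forces a rebuild.

```sql
CREATE TABLE t (label ENUM('alpha','beta','gamma') NOT NULL);

-- Appending at the end: cheap (a metadata-only change in recent versions).
ALTER TABLE t MODIFY label ENUM('alpha','beta','gamma','delta') NOT NULL;

-- Inserting in the middle: every row must be rewritten, because the
-- stored 1-based codes of 'beta' and 'gamma' shift.
ALTER TABLE t MODIFY label ENUM('alpha','delta','beta','gamma') NOT NULL;
```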

I suggested TIMESTAMP because I suspect that one built-in comparison (after the 
optimizer is done with it) is enough, and it allows the appearance of real 
dates. (If the processor fetches 32 bits at a time (nowadays 64 or more is 
likely) then a 32-bit type is fetched as swiftly as any other. Both shorter and 
longer types take longer.) The more I debate this, the better I like TIMESTAMP
for your problem.





Re: Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-11-02 Thread Zbigniew
2014-10-31 5:29 GMT+01:00, Jan Steinman j...@ecoreality.org:

 What about using ENUMs? They have nearly the performance of INTEGERs, but
 you don't have to maintain a string mapping in your programming logic.

So you guys (Jan and hsv) suggest that switching from DATE to a more
numeric data type may not be necessary, but using ENUM instead of
VARCHAR can be a real performance gain, right?

But are you able to estimate what boost I can notice? 5% - or 50%,
or maybe even 500%?
-- 
Zbig




Re: Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-11-02 Thread Jan Steinman
 From: Zbigniew zbigniew2...@gmail.com
 
 switching from DATE to more numeric data type may not be necessary...

I would hope that the query optimizer converts '2014-11-02' to the three-byte 
internal representation of DATE before doing the query, in which case DATE 
should actually be a tiny bit faster than TIMESTAMP.

 using ENUM instead of VARCHAR can be real performance gain, right?

Not just in performance, but it appears to simply be The Right Thing To Do(TM) 
in your case. (Codd Rule #10: referential integrity.)

Consider an insert into a day-of-week column (for instance) that somehow got 
'Sudnay' in the VARCHAR field instead of 'Sunday'. Using an ENUM eliminates the 
possibility of a typo at a more fundamental level than your programming logic. 
If you do a massive insert with 'Sudnay' in the ENUM field, the entire 
transaction will fail - which is really what you want, rather than having to 
track down bad data after the fact, no?

If it REALLY is one value out of a known set, it SHOULD be either an ENUM, or a 
reference to another table. Use the latter technique if you need to add new 
values very often.
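A minimal sketch of the failure Jan describes (the day-of-week table is 
invented):

```sql
SET SESSION sql_mode = 'STRICT_ALL_TABLES';

CREATE TABLE visits (
  dow ENUM('Sunday','Monday','Tuesday','Wednesday',
           'Thursday','Friday','Saturday') NOT NULL
);

-- The typo is rejected at insert time, aborting the whole statement:
INSERT INTO visits (dow) VALUES ('Sudnay');   -- ERROR 1265: Data truncated
-- Without strict mode, MySQL would instead store the special empty
-- string '' (code 0) and raise only a warning.
```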

 But are you able to estimate, what boost can i notice? 5% - or 50%, or 
 maybe even 500%?

Very hard to say. That's like asking, "If I eat well and get enough exercise, 
will I live 5% or 50% or 500% longer?" Probably more like 5%, but it may FEEL 
like 500%! :-)

If the value is constrained to a set, having it as an ENUM (or reference to 
another table) will save you grief in many other ways besides boosting 
performance.

 Private enterprise, indeed, became too private. It became privileged 
enterprise, not private enterprise. -- Franklin Delano Roosevelt
 Jan Steinman, EcoReality Co-op 





Re: Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-10-30 Thread hsv
 2014/10/29 20:56 +0100, Zbigniew 
Now to the point: considering, that the second column shall contain
about 100-200 different labels - so in average many of such labels
can be repeated one million times (or even more) - will it speed-up
the selection done with something like ...WHERE label='xyz' AND
date='2013-02-25'  (and maybe the insertion as well?), if instead of
textual labels I'll use SMALLINT as simple integer code for each
different label (and in my program a translation table)?

If so - will I have further performance gain, if instead of
human-readable DATE for the third column I'll use Unix time put into
INTEGER type column, not DATE-type? 

I really cannot answer your real question, but say only that DATE and other 
time types are numeric, although mostly constructed.
DATE takes three bytes, with range '1000-01-01' to '9999-12-31'.
TIMESTAMP (which has special features that one can suppress) is a four-byte 
integer that is a 31-bit Unix timestamp, with range '1970-01-01 00:00:01' UTC 
to '2038-01-19 03:14:07' UTC.

Maybe TIMESTAMP, which doubtless uses the underlying integer mechanism for 
comparison, is best for you. Consider also the functions UNIX_TIMESTAMP and 
FROM_UNIXTIME.

The time zone is not involved in DATE, but it is involved in the rest aforesaid.
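A short sketch of those conversion functions (the literal values are 
arbitrary):

```sql
-- Date/time value -> seconds since the Unix epoch
-- (interpreted in the session time zone):
SELECT UNIX_TIMESTAMP('2013-02-25 00:00:00');

-- Epoch seconds -> a DATETIME-formatted value:
SELECT FROM_UNIXTIME(1361750400);

-- A TIMESTAMP column stores UTC internally and converts on the way in
-- and out using the session time zone; DATE involves no zone at all.
```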





Re: Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-10-30 Thread Jan Steinman
 From: Zbigniew zbigniew2...@gmail.com
 
 Now to the point: considering, that the second column shall contain
 about 100-200 different labels - so in average many of such labels
 can be repeated one million times (or even more) 

What about using ENUMs?

They have essentially the performance of INTEGERs, but you don't have to 
maintain a string mapping in your programming logic.

 Yes'm, old friends is always best, 'less you can catch a new one that's 
fit to make an old one out of. -- Sarah Jewett
 Jan Steinman, EcoReality Co-op 





Re: Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-10-30 Thread Jan Steinman
 From: Zbigniew zbigniew2...@gmail.com
 
 Now to the point: considering, that the second column shall contain
 about 100-200 different labels - so in average many of such labels
 can be repeated one million times (or even more) 

What about using ENUMs? They have nearly the performance of INTEGERs, but you 
don't have to maintain a string mapping in your programming logic.

 Yes'm, old friends is always best, 'less you can catch a new one that's 
fit to make an old one out of. -- Sarah Jewett
 Jan Steinman, EcoReality Co-op 





Using INTEGER instead of VARCHAR/DATE - is this a way to faster access?

2014-10-29 Thread Zbigniew
I'm going to establish a large database (ISAM) of simple structure -
say, a table with only 3 columns:

- VARCHAR(80)
- VARCHAR(40)
- DATE

The number of rows can be quite large, about a hundred million or so.
The first column, containing the actual information, will hold unique
values, unlike the other two - but those two shall be used for
data selection (and I'll index them).

Now to the point: considering that the second column shall contain
about 100-200 different labels - so on average each label can be
repeated a million times (or even more) - will it speed up
selection done with something like ...WHERE label='xyz' AND
date='2013-02-25' (and maybe insertion as well?) if, instead of
textual labels, I use SMALLINT as a simple integer code for each
different label (and a translation table in my program)?

If so - will I have a further performance gain if, instead of a
human-readable DATE for the third column, I use Unix time put into
an INTEGER-type column, not a DATE type?

And the final question: even if so - is it worth it? I mean: will the
supposed performance gain be significant (e.g. 2-3 times faster
selection) - and not, say, just 5% faster (only detectable by
using benchmarking tools)?
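Whether the gain is 5% or 2-3x can really only be answered by benchmarking 
both layouts on the actual data; a hedged sketch (table names invented):

```sql
-- Variant 1: labels stored as text.
CREATE TABLE t_text  (info VARCHAR(80), label VARCHAR(40), d DATE,
                      KEY (label, d));
-- Variant 2: labels stored as integer codes.
CREATE TABLE t_coded (info VARCHAR(80), label_id SMALLINT UNSIGNED, d DATE,
                      KEY (label_id, d));

-- Load identical data into both, then time the same selection
-- (SQL_NO_CACHE keeps the query cache from masking the difference):
SELECT SQL_NO_CACHE COUNT(*) FROM t_text
 WHERE label = 'xyz' AND d = '2013-02-25';
SELECT SQL_NO_CACHE COUNT(*) FROM t_coded
 WHERE label_id = 42 AND d = '2013-02-25';
```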

Thanks in advance for your opinions.
-- 
Zbig




ALTER TABLE - correct way of adding columns?

2013-07-08 Thread Rafał Radecki
Hi All.

I would like to change the layout of my production database: I would like
to add a column to an existing table. As I understand it, before the ALTER
TABLE statement all access to the database should be denied/ended, then the
ALTER TABLE should be performed, and then users/applications should be able to
use the database once again.

My table is quite small: ~4MB data & indexes.

So is ALTER TABLE desirable on a database/table that is running/open to
clients, or should it be run when all access to the database/table is
forbidden?

Best regards,
Rafal Radecki.


Re: ALTER TABLE - correct way of adding columns?

2013-07-08 Thread Johan De Meersman
- Original Message -
 From: Rafał Radecki radecki.ra...@gmail.com
 
 I would like to change the layout of my production database, I would
 like to add a column to an existing table. As I think before the ALTER
 TABLE statement all access to the database should be denied/ended, then the
 ALTER TABLE should be performed, and then user/applications should be able
 to use the database once again.

Quite right, but ALTER TABLE will acquire a full table lock all by itself - and 
in 5.6 it's actually getting pretty smart about whether or not that's needed. 
If it does take a lock, any clients trying to access the table will simply 
wait for the lock to release, just as happens with other locking queries.

The pt-online-schema-change tool you found is a big help if you need to do 
long-running changes but want to keep the table online during the operation.

 My table is quite small: ~4MB data & indexes.

In this case, however, I wouldn't expect the change to take more than a few 
seconds; so if your clients won't need to be updated because of the new 
layout, you could probably just run it and nobody will notice. Given that 
this is production, though, it is good practice to inform the (human) clients 
about your changes beforehand :-)
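For the cautious, MySQL 5.6's online DDL lets you state your locking 
expectations explicitly, so the statement errors out instead of silently 
blocking (the column name is invented):

```sql
-- Fails with an error if an in-place, non-locking ALTER is not
-- possible for this change, rather than taking a full table lock:
ALTER TABLE mytable
  ADD COLUMN new_col INT NULL,
  ALGORITHM=INPLACE, LOCK=NONE;
```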


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: ALTER TABLE - correct way of adding columns?

2013-07-08 Thread Rafał Radecki
Have you used pt-online-schema-change from
http://www.percona.com/doc/percona-toolkit/2.0/pt-online-schema-change.html ?
What do you think about this tool?

Best regards,
Rafal Radecki.


2013/7/8 Rafał Radecki radecki.ra...@gmail.com

 Hi All.

 I would like to change the layout of my production database, I would like
 to add a column to an existing table. As I think before the ALTER TABLE
 statement all access to the database should be denied/ended, then the ALTER
 TABLE should be performed, and then user/applications should be able to use
 the database once again.

 My table is quite small: ~4MB data & indexes.

 So is the ALTER TABLE on a running/opened to clients database/table
 desirable or should it be run when all access to the database/table is
 forbidden?

 Best regards,
 Rafal Radecki.



Re: best way to copy a innodb table

2013-07-02 Thread Arjun
Well, the easy way to chunk the inserts is by use of LIMIT. Here is what I 
used for one of my projects:

Insert ignore into t1 (f1, f2, f3)
Select f1, f2, f3 from t2 limit 1000000, 1000000

This inserts 1M records at a time, starting from the 1M-th record in t2, and 
you can keep incrementing this offset as you progress. This will help in 
monitoring the table inserts and at the same time move chunks of records from 
the source table.

Enjoy! 





Re: best way to copy a innodb table

2013-07-02 Thread Michael Dykman
Another technique to avoid impact on the source database is to create your
target as MyISAM, pump your records into that (no ACID overhead) and at the
end:

ALTER TABLE mytable ENGINE=InnoDB;

The ALTER can take a while, but it will impose no strain on the source server
at all.


On Tue, Jul 2, 2013 at 3:48 AM, Arjun na...@yahoo.com wrote:

 Well, the easy way to chunk the inserts is by use of limit. Here is what I
 used for one of my projects:

 Insert ignore into t1 (f1, f2, f3)
 Select f1, f2, f3 from t2 limit 1000000, 1000000

 Inserts 1M records at a time starting from 1M th record in t2 and you can
 keep
 incrementing this offset as you progress. This will help in monitoring the
 table inserts and at the same time move chunks of records from source
 table.

 Enjoy!






-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: best way to copy a innodb table

2013-07-02 Thread Perrin Harkins
On Thu, Dec 1, 2011 at 5:54 PM, Miguel Angel Nieto miguel.ni...@percona.com
 wrote:

 You should check pt-archiver.


+1.  It works very well for this type of job.

- Perrin


RE: best way to copy a innodb table

2013-07-02 Thread Rick James
The particular example given here is unsafe and slow.

* Without an ORDER BY, you are not guaranteed that the chunks will be distinct.

* If there are any INSERTs/DELETEs between chunk copies, you will get 
dups/missing rows for two reasons:  the inserted/deleted rows, and the OFFSET 
is not quite right.

* OFFSET requires walking over the skipped rows.  As you get farther into the 
table, this takes longer.  That is, you have an ORDER(N**2) operation, not what 
could be ORDER(N).

* If replication is involved, 1M rows is a lot -- there will be noticeable 
delays where other replication activity is stalled.

If you have an AUTO_INCREMENT PRIMARY KEY, then using WHERE id > 1000000 AND 
id <= 2000000 is a better approach -- Order(N), and chunks guaranteed to be 
distinct.  Still, it is not immune from INSERTs/DELETEs.  Replication is fixed 
by decreasing chunk size (and by avoiding OFFSET).
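The keyset approach above, spelled out (a sketch; it assumes the t1/t2 tables 
from earlier in the thread and an AUTO_INCREMENT PRIMARY KEY `id` on t2):

```sql
-- Each chunk is an index range seek on the PRIMARY KEY, so the cost
-- per chunk stays constant instead of growing with the OFFSET:
INSERT IGNORE INTO t1 (f1, f2, f3)
SELECT f1, f2, f3
  FROM t2
 WHERE id > 1000000 AND id <= 2000000;

-- Next pass: advance both boundaries by the chunk size
-- (2000000 / 3000000), and repeat until past MAX(id).
```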

 -Original Message-
 From: Arjun [mailto:na...@yahoo.com]
 Sent: Tuesday, July 02, 2013 12:48 AM
 To: mysql@lists.mysql.com
 Subject: Re: best way to copy a innodb table
 
 Well, the easy way to chunk the inserts is by use of limit. Here is what I
 used for one of my projects:
 
  Insert ignore into t1 (f1, f2, f3)
  Select f1, f2, f3 from t2 limit 1000000, 1000000
 
 Inserts 1M records at a time starting from 1M th record in t2 and you can
 keep incrementing this offset as you progress. This will help in
 monitoring the table inserts and at the same time move chunks of records
 from source table.
 
 Enjoy!
 
 





Re: The most elegant/efficient way to pull this SQL query

2012-04-12 Thread Halász Sándor
 2012/04/11 17:51 -0500, Peter Brawley 
select b.peopleID, concat('(',p.fname,')'), b.stateID, concat('(',s.state,')')
from bridge b
join people p on b.peopleID=p.peopleID
join state s on b.stateID=s.stateID;

Since the names are the same in the tables, it works to use USING, too, and 
you are relieved of the burden of an alias:

from bridge
join people USING(peopleID)
join state USING(stateID)

If the fields peopleID and stateID are the only field names in common, 
NATURAL JOIN also works.

from bridge
NATURAL join people
NATURAL join state
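Putting the three spellings side by side with the thread's schema 
(bridge/people/state):

```sql
-- Explicit ON:
SELECT b.peopleID, p.FName, b.stateID, s.State
FROM bridge b
JOIN people p ON b.peopleID = p.peopleID
JOIN state  s ON b.stateID  = s.stateID;

-- USING, when the join columns share a name:
SELECT peopleID, FName, stateID, State
FROM bridge
JOIN people USING (peopleID)
JOIN state  USING (stateID);

-- NATURAL JOIN joins on *every* column name the tables share, so it
-- silently breaks if an unrelated common column (say, a created_at
-- in both tables) appears later:
SELECT peopleID, FName, stateID, State
FROM bridge
NATURAL JOIN people
NATURAL JOIN state;
```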





Re: The most elegant/efficient way to pull this SQL query

2012-04-12 Thread Haluk Karamete
My initial goal was to write a very convenient PHP function that displays a
table view based on arguments that are super simple to write - without
requiring the developer to type in (or know) the ins and outs of joins,
natural joins, etc.

Something like this:

function showtable($dbh,$table,$fields){
//get the $fields argument & parse it out to come up
//with one of the SQL statements you guys are discussing.
 $SQL = BuildSQL($table,$fields);
//and then use this SQL to output the table
}

The magic would be happening in the $fields argument.

The example in my original question was like this.
I'm repeating it for convenience purposes.

<quote>
I've got this relational mySQL table that ties peopleIDs from the people
table to the states IDs

peopleID___stateID
1__1
2__4
3__5


people table is like this;

___peopleID_FName_
___1joe
___2bob___
___3charlie_


and finally the state table goes like this;

___stateID_State___
___1___california
___2___new york
___3___washington__
___4___texas___
___5___florida__


What's the most straightforward way to achieve the following view with one
SQL statement?


peopleID__stateID_
1_(joe)___1__(california)_
2_(bob)___4__(texas)__
3_(charlie)___5__(florida)

</quote>

if the final table ( PeopleAndStates ) view I want were to be as follows;

peopleID__stateID_
1_1___
2_4___
3_5___

Then I would have called the function like this;

showtable($dbh,$myTable,"peopleID","stateID")

But if I want to get, the following view instead;

peopleID__stateID_
1_(joe)___1___
2_(bob)___4___
3_(charlie)___5___

I would like to be able to call my function as follows;

showtable($dbh,$PeopleAndStates,"peopleID(PeopleTable.PeopleID >> FName)","stateID")



To mean the following;

When you are outputting the peopleID, provide the corresponding FName
field from the PeopleTable where the peopleID there is equal to the peopleID
you are outputting.

What I was seeking from you guys was to find out the most simplistic SQL
statement, so that when I parse the area with (PeopleTable.PeopleID >>
FName), I can extract the pieces and place them in the final SQL.

I'm not sure if you all get the idea of how such a function makes debugging
super easy.

Once you write the parser, you can deploy it over many different cases, such
as

ShowRecord($dbh,$table,$fields,"where peopleID>5","limit 100") and so on.

So, the simpler the SQL, the easier the transition from the starting slate,
which is really no different than

SELECT peopleID(PeopleTable.PeopleID >> FName),stateID from PeopleAndStates





2012/4/12 Halász Sándor h...@tbbs.net

  2012/04/11 17:51 -0500, Peter Brawley 
 select b.peopleID, concat('(',p.fname,,')'), b.stateID,
 concat('(',s.state,')')
 from bridge b
 join people p on b.peopleID=p.peopleID
 join state s on b.stateID=s.stateID;
 
 Since the names are the same in the tables, it works to use USING, too,
 and you are relieved of the burden of an alias:

 from bridge
 join people USING(peopleID)
 join state USING(stateID)

 If the fields peopleId and stateID are the only field names in common,
 NATURAL JOIN also works.

 from bridge
 NATURAL join people
 NATURAL join state






Re: The most elegant/efficient way to pull this SQL query

2012-04-12 Thread Halász Sándor
; 2012/04/12 11:56 -0700, Haluk Karamete 
My initial goal was to write a very convenient php function that display a 
table view based on arguments that are super simple to write - without 
requiring the developer to type-in ( or know ) the ins and outs of joins, 
natural joins etc.

Something like this 
function showtable($dbh,$table,$fields){
//get the $fields argument  parse it out to come up 
//with one of the SQL statements you guys are discussing.
SQL = BuildSQL($table,$fields)
//and then use this SQL to output the table
}

The magic would be happening in the $fields argument. 



What's the most straightforward way to achieve the following view with one SQL 
statement?


peopleID__stateID_
1_(joe)___1__(california)_
2_(bob)___4__(texas)__
3_(charlie)___5__(florida)

/quote

if the final table ( PeopleAndStates ) view I want were to be as follows;

peopleID__stateID_
1_1___
2_4___
3_5___

Then I would have called the function like this;

showtable($dbh,$myTable,peopleID,stateID) 

But if I want to get, the following view instead;

peopleID__stateID_
1_(joe)___1___
2_(bob)___4___
3_(charlie)___5___

I would like to be able to call my function as follows; 

showtable($dbh,$PeopleAndStates,"peopleID(PeopleTable.PeopleID >> FName)","stateID")



To mean the following; 

When you are outputting the peopleID, provide the corresponding Fname field 
from the PeopleTable where peopleID there is equal to the peopleID you are 
outputting. 

What I was seeking from you guys was to find out the most simplistic SQL 
statement, so that when I parse the area with (PeopleTable.PeopleID >> FName), 
I can extract the pieces and place them in the final SQL.

I'm not sure if you all get the idea of how such a function make debugging 
super easy. 

Once you write the parser, you can deploy it over many different cases such as 

ShowRecord($dbh,$table,$fields,"where peopleID>5","limit 100") and so on.

So, the simpler the SQL, the easier the transition from the starting slate 
which is really no different than 

SELECT peopleID(PeopleTable.PeopleID >> FName),stateID from PeopleAndStates

(note that in MySQL '>>' is only right-shift.)

I fear that for this function in the end you will need 
information_schema.COLUMNS.

Peter Brawley already gave you a good answer for one of your examples (but I 
thus would write it):

select concat(peopleID, ' (',fname,')') AS peopleID, concat(stateID, ' 
(',state,')') AS stateID
from people
join PeopleAndStates USING(peopleID)
join state USING(stateID)

The middle example:

select peopleID, stateID from PeopleAndStates

The last:

select concat(peopleID, ' (',fname,')') AS peopleID, stateID
from people
join PeopleAndStates USING(peopleID)
join state USING(stateID)


I have assumed that you mean to join only on same-named fields with equality; 
if not, JOIN ... ON ... is needed. In any case, you have to keep track of 
whether a result field needs the table name, too. If you use USING, leave the 
table name off that field.





Re: The most elegant/efficient way to pull this SQL query

2012-04-11 Thread Peter Brawley

On 4/11/2012 1:30 PM, Haluk Karamete wrote:

I've got this relational mySQL table that ties peopleIDs from the people
table to the states IDs

peopleID___stateID
1__1
2__4
3__5


people table is like this;

___peopleID_FName_
___1joe
___2bob___
___3charlie_


and finally the state table goes like this;

___stateID_State___
___1___california
___2___new york
___3___washington__
___4___texas___
___5___florida__


What's the most straightforward way to achieve the following view with one
SQL statement?


peopleID__stateID_
1_(joe)___1__(california)_
2_(bob)___4__(texas)__
3_(charlie)___5__(florida)


select b.peopleID, concat('(',p.fname,')'), b.stateID, 
concat('(',s.state,')')

from bridge b
join people p on b.peopleID=p.peopleID
join state s on b.stateID=s.stateID;

PB

-








Re: The most elegant/efficient way to pull this SQL query

2012-04-11 Thread Halász Sándor
; 2012/04/11 11:30 -0700, Haluk Karamete 
I've got this relational mySQL table that ties peopleIDs from the people
table to the states IDs

peopleID___stateID
1__1
2__4
3__5


people table is like this;

___peopleID_FName_
___1joe
___2bob___
___3charlie_


and finally the state table goes like this;

___stateID_State___
___1___california
___2___new york
___3___washington__
___4___texas___
___5___florida__


What's the most straightforward way to achieve the following view with one
SQL statement?


peopleID__stateID_
1_(joe)___1__(california)_
2_(bob)___4__(texas)__
3_(charlie)___5__(florida)

Look at Stephen Tu's original post under the subject "forcing mysql to use 
batched key access (BKA) optimization for joins". That query of his solves a 
problem very much like yours--but uses explicit JOINing, not implicit. He also 
uses an unhappy style of making every field name in the database unique. 
Joining is easier if the fields to be joined on have the same names.





Best way to tune substr,min,max query

2012-02-07 Thread Andrey Dmitriev
I have a query that I need to tune.
Basically, substr a text field, and select the first and last entry.
The table is currently a few million rows big. There are indexes on FromHost
(a text field) and ReceivedAt.
What is the best way to optimize my query?
1) create an index on substr() and the two date columns
2) create some triggering mechanism to just populate this data in a report
3) create table as select

I would strongly prefer 1, or something similar/simple so that I wouldn't
have to maintain triggers in case they break. I don't like 3 because it's
heavy.

Thanks,

select
  substr(rsyslog.SystemEvents.FromHost, 1,
         if(locate(_latin1'.', rsyslog.SystemEvents.FromHost) > 0,
            locate(_latin1'.', rsyslog.SystemEvents.FromHost) - 1,
            length(rsyslog.SystemEvents.FromHost))) AS hostname,
  min(rsyslog.SystemEvents.DeviceReportedTime) AS first_syslog_entry,
  max(rsyslog.SystemEvents.DeviceReportedTime) AS last_syslog_entry
from rsyslog.SystemEvents
group by
  substr(rsyslog.SystemEvents.FromHost, 1,
         if(locate(_latin1'.', rsyslog.SystemEvents.FromHost) > 0,
            locate(_latin1'.', rsyslog.SystemEvents.FromHost) - 1,
            length(rsyslog.SystemEvents.FromHost)))
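Two notes on this, hedged since they go beyond what the thread states: 
SUBSTRING_INDEX(FromHost, '.', 1) expresses the same "up to the first dot, 
else the whole string" logic in one call, and in MySQL 5.7+ (well after this 
2012 post) the expression can be materialized as an indexed generated column - 
option 1 in spirit:

```sql
ALTER TABLE rsyslog.SystemEvents
  ADD COLUMN hostname VARCHAR(255)
    AS (SUBSTRING_INDEX(FromHost, '.', 1)) STORED,
  ADD INDEX idx_hostname (hostname);

-- The report can then group on the indexed column directly:
SELECT hostname,
       MIN(DeviceReportedTime) AS first_syslog_entry,
       MAX(DeviceReportedTime) AS last_syslog_entry
  FROM rsyslog.SystemEvents
 GROUP BY hostname;
```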


Re: the best way compare String

2011-12-05 Thread Halász Sándor
; 2011/12/04 20:25 +0100, Rafael Valenzuela 
In one query,  I am comparing a string with a 'like=%patron%', but is very
slow and expensive .What is the best way to compare a string? like or
regular expression?
The data base is too big table 1TB, any idea? 

Any decent implementation of LIKE '%patron%' is roughly linear, and so is, 
say, LOCATE('patron', ...) in a setting where only 0 or non-0 is tested for. 
If all your LIKE searches look like this one, a string matched against a value 
by putting it between '%'s, maybe you can save time by using LOCATE, unless 
the MySQL developers were clever enough to make a special case of this and use 
the same algorithm for both. (One can always use the Boyer-Moore algorithm for 
LOCATE.)

In any case, if a roughly linear algorithm is used and that is too slow, 
indexing is the only help, and, as Shiva said, you have to know your problem.





Re: the best way compare String

2011-12-05 Thread Basil Daoust
For the best chance of an efficient query you would need to lose the
leading % if possible.
If you can, the query can be optimized using an index.

On Sun, Dec 4, 2011 at 2:50 PM, Reindl Harald h.rei...@thelounge.netwrote:

 you THINK the engine is? WTF?

 fulltext if you need fulltext-search
 please read the manuals i linked!

 Am 04.12.2011 21:20, schrieb Rafael Valenzuela:
  Hi Reindl,
  Thanks for your quick answer, but I think the engine is MyISAM, though I'm
  not sure. What type of index do you recommend: unique, full-text, etc.?
  Thanks a lot
 
  2011/12/4 Reindl Harald h.rei...@thelounge.net
 
 
  Am 04.12.2011 20:25, schrieb Rafael Valenzuela:
  Hi all,
  In one query,  I am comparing a string with a 'like=%patron%', but is
  very
  slow and expensive .What is the best way to compare a string? like or
  regular expression?
  The data base is too big table 1TB, any idea?
  thanks a lot
 
  http://dev.mysql.com/doc/refman/5.5/en/fulltext-search.html
 
   without indexes no fast search is possible in big tables
   if you are using InnoDB you are currently out of luck




Re: the best way compare String

2011-12-04 Thread Rafael Valenzuela
Hi Reindl,
Thanks for your quick answer, but I think the engine is MyISAM, though I'm not
sure. What type of index do you recommend: unique, full-text, etc.?
Thanks a lot

2011/12/4 Reindl Harald h.rei...@thelounge.net



 Am 04.12.2011 20:25, schrieb Rafael Valenzuela:
  Hi all,
  In one query,  I am comparing a string with a 'like=%patron%', but is
 very
  slow and expensive .What is the best way to compare a string? like or
  regular expression?
  The data base is too big table 1TB, any idea?
  thanks a lot

 http://dev.mysql.com/doc/refman/5.5/en/fulltext-search.html

 without indexes no fast search is possible in big tables
 if you are using InnoDB you are currently out of luck




-- 
Mit forever
My Blog http://www.redcloverbi.wordpress.com
My Favorite Web
http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/index.htm
http://www.technologyreview.com/


Re: the best way compare String

2011-12-04 Thread Reindl Harald
you THINK the engine is? WTF?

fulltext if you need fulltext-search
please read the manuals i linked!

Am 04.12.2011 21:20, schrieb Rafael Valenzuela:
 Hi Reindl,
 Thanks for your quick answer, but I think the engine is MyISAM, though I'm not
 sure. What type of index do you recommend: unique, full-text, etc.?
 Thanks a lot
 
 2011/12/4 Reindl Harald h.rei...@thelounge.net


 Am 04.12.2011 20:25, schrieb Rafael Valenzuela:
 Hi all,
 In one query,  I am comparing a string with a 'like=%patron%', but is
 very
 slow and expensive .What is the best way to compare a string? like or
 regular expression?
 The data base is too big table 1TB, any idea?
 thanks a lot

 http://dev.mysql.com/doc/refman/5.5/en/fulltext-search.html

 without indexes no fast search is possible in big tables
 if you are using InnoDB you are currently out of luck
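For reference, the fulltext approach pointed at above looks roughly like this (hedged sketch: on MySQL 5.5 FULLTEXT requires MyISAM, InnoDB gained it in 5.6; the table and column names here are invented):

```sql
-- Sketch: a FULLTEXT index and a MATCH ... AGAINST query.
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);

SELECT id, body
FROM articles
WHERE MATCH(body) AGAINST('patron');
```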





Re: the best way compare String

2011-12-04 Thread Shiva
Rafael,
  Without more details it is hard to say how you can optimize the query.
1. Even though the database is 1TB, what about the table you are running
this query on? Do you have any index on the column? How many chars long is the
column? Etc.
2. Run EXPLAIN and see what the optimizer is telling you: using a key or
not, full scan or not, etc.
3. See whether you can change the LIKE to 'ABCpatron%'. See
http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html - the index can also
be used for LIKE comparisons
(http://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html#operator_like)
if the argument to LIKE is a constant string that does not start with a
wildcard character.
4. You may want to take a subset of the data and try both RegEx and LIKE, and
compare them.
5. Also, see if you can make use of prefix indexing, where only the first N
characters of the column are indexed.
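Point 5 can be sketched as follows (hedged: the table and column names are invented, and a prefix index only helps patterns with no leading wildcard):

```sql
-- Sketch: index only the first 10 characters of the column.
ALTER TABLE big_table ADD INDEX idx_col_prefix (text_col(10));

-- No leading wildcard, so the prefix index can be used:
SELECT * FROM big_table WHERE text_col LIKE 'patron%';
```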

Best,
Shiva


On Sun, Dec 4, 2011 at 11:25 AM, Rafael Valenzuela rav...@gmail.com wrote:

 Hi all,
 In one query,  I am comparing a string with a 'like=%patron%', but is very
 slow and expensive .What is the best way to compare a string? like or
 regular expression?
 The data base is too big table 1TB, any idea?
 thanks a lot
 --
 Mit forever
 My Blog http://www.redcloverbi.wordpress.com
 My Favorite Web
 http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/index.htm
 
 http://www.technologyreview.com/



Re: best way to copy a innodb table

2011-12-01 Thread Claudio Nanni
Sure you can, and you should.

But in case you also update/delete rows from the first table, you have to
set up a trigger to log the changes.
If you are lucky (only inserts) then it's easier.

Cheers

Claudio

2011/12/1 Angela liu yyll2...@yahoo.com

 Hi, folks:


 I have a situation:

 A large innodb table t1 with 45 million rows, need to have a new table t2
 exactly the same as t1, to copy the data from t1 to t2, I have the
 following query:

 create table t2 like t1;

 insert into t2 select * from t1;


 but the above insert may run a long time, and that can interfere with
 performance. Is there a way to chunk the insert into the new table, like
 breaking it down into chunks of 100,000 rows?


 Thanks




-- 
Claudio


Re: best way to copy a innodb table

2011-12-01 Thread Angela liu
How do I break the copy into 100,000-row chunks?

thanks



 From: Claudio Nanni claudio.na...@gmail.com
To: Angela liu yyll2...@yahoo.com 
Cc: mysql@lists.mysql.com mysql@lists.mysql.com 
Sent: Thursday, December 1, 2011 2:24 PM
Subject: Re: best way to copy a innodb table
 
Sure you can, and you should.

But in case you also update/delete rows from the first table, you have to
set up a trigger to log the changes.
If you are lucky (only inserts) then it's easier.

Cheers

Claudio

2011/12/1 Angela liu yyll2...@yahoo.com

 Hi, folks:


 I have a situation:

 A large innodb table t1 with 45 million rows, need to have a new table t2
 exactly the same as t1, to copy the data from t1 to t2, I have the
 following query:

 create table t2 like t1;

 insert into t2 select * from t1;


 but the above insert may run a long time, and that can interfere with
 performance. Is there a way to chunk the insert into the new table, like
 breaking it down into chunks of 100,000 rows?


 Thanks




-- 
Claudio
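To answer the how question directly: one common approach (hedged sketch, assuming t1 has an integer primary key `id`) is to walk the key in fixed-width ranges:

```sql
-- Sketch: copy rows in 100,000-wide primary-key ranges.
-- Assumes t1 has an integer primary key `id`.
INSERT INTO t2 SELECT * FROM t1 WHERE id >= 0      AND id < 100000;
INSERT INTO t2 SELECT * FROM t1 WHERE id >= 100000 AND id < 200000;
-- ...continue (usually driven from a script) until MAX(id) is covered.
```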

Re: best way to copy a innodb table

2011-12-01 Thread Miguel Angel Nieto
Hi,

You should check pt-archiver.

http://www.percona.com/doc/percona-toolkit/pt-archiver.html

The goal is a low-impact, forward-only job to nibble old data out of the table 
without impacting OLTP queries much.

This tool can copy data and insert it into another table in chunks. The 
destination doesn't need to be in the same database or on the same server. Take 
into account that this tool by default deletes data from the source table, so use 
the --no-delete option.

Try it on a testing environment before executing it in production :)

Regards,
-- 
Miguel Ángel Nieto,
Support Engineer, Percona Inc.
http://www.percona.com | http://www.mysqlperformanceblog.com
Email : miguel.ni...@percona.com
Skype : percona.migueln

24/7 Emergency : +1 888 401 3401 ext 911
Training : http://www.percona.com/training/
Support : http://www.percona.com/mysql-support/

Percona Live MySQL Conference April 10-12 Santa Clara
http://www.percona.com/live/mysql-conference-2012/

El 01/12/2011, a las 23:16, Angela liu escribió:

 Hi, folks:
 
 
 I have a situation:
 
 A large innodb table t1 with 45 million rows, need to have a new table t2 
 exactly the same as t1, to copy the data from t1 to t2, I have the following 
 query:
 
 create table t2 like t1;
 
 insert into t2 select * from t1;
 
 
 but the above insert may run a long time, and that can interfere with performance. 
 Is there a way to chunk the insert into the new table, like breaking it down 
 into chunks of 100,000 rows?
 
 
 Thanks





Re: best way to copy a innodb table

2011-12-01 Thread Karen Abgarian
Hi, 

I have a support case with MySQL opened on this subject.   Here is what we were 
able to come up with.  

1.  Create the table with the primary key and unique key constraints defined 
but no secondary indexes. 
2.  Bump up InnoDB logs to 2M and especially memory to the highest there can 
be. 
3.  Disable binary logging, InnoDB sync if running that. 
4.  Split the job into several insert..select statements using some criteria 
dependent on the table. 
5.  Run the load in parallel. 
6.  Create secondary indexes via fast create option. 
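Steps 3 and 6 might look roughly like this (hedged sketch; exact variable scopes depend on the setup, and the table/index names are invented):

```sql
-- Step 3: relax durability and disable binary logging for the load session.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
SET sql_log_bin = 0;   -- session-scoped; requires SUPER

-- Step 6: create secondary indexes after the load (fast index creation).
ALTER TABLE t2 ADD INDEX idx_example (example_col);
```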

My experience is that the inserts run at blazing speed until the table becomes 
roughly as big as the InnoDB buffer pool. 
The inserts then slow down gradually to around 100 inserts/second. The net 
result is that the first 25M records can, say, be loaded 
in an hour, while the remaining 25M records would take something like 24 
hours.

If anyone can shed some light on it that would be great.   I observe no 
bottlenecks anywhere on the OS.  CPU is low, no paging,
no significant disk activity.   

Tx
Karen.




On Dec 1, 2011, at 2:16 PM, Angela liu wrote:

 Hi, folks:
 
 
 I have a situation:
 
 A large innodb table t1 with 45 million rows, need to have a new table t2 
 exactly the same as t1, to copy the data from t1 to t2, I have the following 
 query:
 
 create table t2 like t1;
 
 insert into t2 select * from t1;
 
 
 but the above insert may run a long time, and that can interfere with performance. 
 Is there a way to chunk the insert into the new table, like breaking it down 
 into chunks of 100,000 rows?
 
 
 Thanks





Re: how to optimize mysql in easy way

2011-10-22 Thread mos

At 05:49 AM 10/21/2011, you wrote:

How do I optimize MySQL in an easy, step-by-step way? I know indexing and
mapping; other than that, is there any way?

--
Thanks & Regards,
P.Benaya Paul



Apart from enrolling in Hogwarts, you may be interested in the 
book "High Performance MySQL", 2nd Edition. It can be found in 
several online bookstores.


Mike





how to optimize mysql in easy way

2011-10-21 Thread Benaya Paul
How do I optimize MySQL in an easy, step-by-step way? I know indexing and
mapping; other than that, is there any way?

-- 
Thanks & Regards,
P.Benaya Paul

http://www.codeasearch.com
http://www.iwannasearch.com


Re: how to optimize mysql in easy way

2011-10-21 Thread Johan De Meersman
- Original Message -
 From: Benaya Paul benayap...@gmail.com
 
 How do I optimize MySQL in an easy, step-by-step way? I know indexing and
 mapping; other than that, is there any way?

Forsooth, three paths extend before thee. The right way is to learn about mysql 
so you know what you're doing. The intermediate way is tools like 
mysqltuner.pl, and the easy way is to hire someone who knows what they're doing.



-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




Best Way to store Hierarchical Data in Mysql

2011-08-19 Thread Adarsh Sharma

Dear all,

I researched a lot on storing Hierarchical data in mysql tables.
Below links :

http://scvinodkumar.wordpress.com/2011/01/30/managing-hierarchical-data-in-mysql/
http://www.sitepoint.com/hierarchical-data-database-2/

shows different ways to store.

But is there any standard way of doing this, e.g. for storing Twitter 
tweets or comments on a link, etc.?



Thanks




Re: Best Way to store Hierarchical Data in Mysql

2011-08-19 Thread Adarsh Sharma

Its Ok, it happen sometimes.



Alex Schaft wrote:

Apologies. I hid reply to on the wrong message

On 2011/08/19 12:10 PM, Adarsh Sharma wrote:

Dear all,

I researched a lot on storing Hierarchical data in mysql tables.
Below links :

http://scvinodkumar.wordpress.com/2011/01/30/managing-hierarchical-data-in-mysql/ 


http://www.sitepoint.com/hierarchical-data-database-2/

shows different ways to store.

But is there any standard way of doing this, e.g. for storing Twitter 
tweets or comments on a link, etc.?



Thanks









Re: Best Way to store Hierarchical Data in Mysql

2011-08-19 Thread Rik Wasmus
 I researched a lot on storing Hierarchical data in mysql tables.
 Below links :
 
 http://scvinodkumar.wordpress.com/2011/01/30/managing-hierarchical-data-in-
 mysql/ http://www.sitepoint.com/hierarchical-data-database-2/
 
 shows different ways to store.
 
 But is there any standard way of doing this, e.g. for storing Twitter
 tweets or comments on a link, etc.?

Most used is the adjacency model, but it depends on how you are going to use 
it, and how deep the hierarchical data goes / can go. Also good sources of 
information: 
- http://www.slideshare.net/billkarwin/models-for-hierarchical-data (nice 
concise comparison of 4 alternatives)
- http://www.amazon.com/Hierarchies-Smarties-Kaufmann-Management-
Systems/dp/1558609202/ref=ntt_at_ep_dpt_2
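For the tweets/comments use case, the adjacency-list model is the usual starting point; a minimal sketch (hedged: all names here are invented for illustration):

```sql
-- Sketch of the adjacency-list model for threaded comments.
CREATE TABLE comment (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  parent_id INT UNSIGNED NULL,          -- NULL for a top-level comment
  body      TEXT NOT NULL,
  KEY idx_parent (parent_id)
);

-- Direct replies to comment 42:
SELECT id, body FROM comment WHERE parent_id = 42;
```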
-- 
Regards,

Rik




Re: Best Way to store Hierarchical Data in Mysql

2011-08-19 Thread Peter Brawley

On 8/19/2011 5:10 AM, Adarsh Sharma wrote:

Dear all,

I researched a lot on storing Hierarchical data in mysql tables.
Below links :

http://scvinodkumar.wordpress.com/2011/01/30/managing-hierarchical-data-in-mysql/ 


http://www.sitepoint.com/hierarchical-data-database-2/

shows different ways to store.

But is there any standard way of doing this, e.g. for storing Twitter 
tweets or comments on a link, etc.?
Several ways, see 
http://www.artfulsoftware.com/mysqlbook/sampler/mysqled1ch20.html.


PB

-




Thanks






RE: best way to have a unique key

2011-01-21 Thread Jerry Schwartz
-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
Meersman
Sent: Friday, January 21, 2011 1:22 AM
To: Anthony Pace
Cc: Michael Dykman; mysql.
Subject: Re: best way to have a unique key

I have to say, something similar was my first thought, too - you never
mention uuid in your original post. As already stated, uuid() should be a
Universal Unique IDentifier. It's afaik a random 128-bit number; given the
space to choose from it should be rather unique. I have to admit that I'm
not entirely confident about that myself, either, though: as Pratchett put
it, one-in-a-million chances tend to pop up nine times out of ten.

[JS] A UUID (what Microsoft calls a GUID) is based in part on the MAC address 
of the generating device. Since MAC addresses are supposed to be unique across 
the known universe, so should a UUID.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



The code should have bits for handling duplicate primaries regardless of the
method used to generate it, though, so there's no reason not to do it. Having
two subsequent UUID() calls generate pre-existing numbers seems to me to be
as likely as having Bush return his dirty oil dollars to Iraq.

On Thu, Jan 20, 2011 at 8:10 PM, Anthony Pace 
anthony.p...@utoronto.cawrote:

 Dude, come on.  I know that all primary keys have to be unique; however, I
 was obviously referring to the use of uuid over auto incrementation.

 On 1/20/2011 1:36 PM, Michael Dykman wrote:

 It is axiomatic in the relational model that a primary must be unique.
  This is not a quirk put forth by your current employer.  Neither
 MySQL nor any other RDBMS will allow you to establish a primary key
 that is not unique.

  - michael dykman

 On Thu, Jan 20, 2011 at 1:32 PM, Anthony Paceanthony.p...@utoronto.ca
  wrote:

 Due to certain reasons, the company I am doing business with has decided
 that the primary key, for an orders table, be a unique key; however, I
 don't
 like the possibility of it conflicting if moved to another machine.

 What are some pitfalls of using a unique key, that is generated by a
 server
 side script, rather than by mysql?
 What are the best ways to do this?

 Please keep in mind this variable will also be displayed on the
 customer's
 Receipt, but again, since it's random, it doesn't have to mean anything.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com






 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be




--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel







Re: best way to have a unique key

2011-01-21 Thread Michael Dykman
One of the components of the UUID is drawn from the MAC address of the
server, while in practice this is not true of all systems:

(excerpt from 
http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_uuid)
Currently, the MAC address of an interface is taken into
account only on FreeBSD and Linux. On other operating systems, MySQL
uses a randomly generated 48-bit number.
(end excerpt)

That potentially 48-bit random portion has 281474976710656
possibilities, which makes it far, far more likely that your server is
hit by a meteor during a snowstorm in August while the Dalai Lama is
doing an Elvis impression for the Chinese premier.

 - michael dykman

On Fri, Jan 21, 2011 at 1:22 AM, Johan De Meersman vegiv...@tuxera.be wrote:
 I have to say, something similar was my first thought, too - you never
 mention uuid in your original post. As already stated, uuid() should be a
 Universal Unique IDentifier. It's afaik a random 128-bit number; given the
 space to choose from it should be rather unique. I have to admit that I'm
 not entirely confident about that myself, either, though: as Pratchett put
 it, one-in-a-million chances tend to pop up nine times out of ten.

 The code should have bits for handling duplicate primaries regardless of the
 method used to generate it, though, so there's no reason not to do it. Having
 two subsequent UUID() calls generate pre-existing numbers seems to me to be
 as likely as having Bush return his dirty oil dollars to Iraq.

 On Thu, Jan 20, 2011 at 8:10 PM, Anthony Pace anthony.p...@utoronto.cawrote:

 Dude, come on.  I know that all primary keys have to be unique; however, I
 was obviously referring to the use of uuid over auto incrementation.

 On 1/20/2011 1:36 PM, Michael Dykman wrote:

 It is axiomatic in the relational model that a primary must be unique.
  This is not a quirk put forth by your current employer.  Neither
 MySQL nor any other RDBMS will allow you to establish a primary key
 that is not unique.

  - michael dykman

 On Thu, Jan 20, 2011 at 1:32 PM, Anthony Paceanthony.p...@utoronto.ca
  wrote:

 Due to certain reasons, the company I am doing business with has decided
 that the primary key, for an orders table, be a unique key; however, I
 don't
 like the possibility of it conflicting if moved to another machine.

 What are some pitfalls of using a unique key, that is generated by a
 server
 side script, rather than by mysql?
 What are the best ways to do this?

 Please keep in mind this variable will also be displayed on the
 customer's
 Receipt, but again, since it's random, it doesn't have to mean anything.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com






 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be




 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.




RE: best way to have a unique key

2011-01-21 Thread Jerry Schwartz
-Original Message-
From: Michael Dykman [mailto:mdyk...@gmail.com]
Sent: Friday, January 21, 2011 11:35 AM
To: Johan De Meersman
Cc: Anthony Pace; mysql.
Subject: Re: best way to have a unique key

One of the components of the UUID is drawn from the MAC address of the
server, while in practice this is not true of all systems:

(excerpt from http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-
functions.html#function_uuid)
Currently, the MAC address of an interface is taken into
account only on FreeBSD and Linux. On other operating systems, MySQL
uses a randomly generated 48-bit number.
(end excerpt)

That potentially 48-bit random portion has 281474976710656
possibilities, which makes it far, far more likely that your server is
hit by a meteor during a snowstorm in August while the Dalai Lama is
doing an Elvis impression for the Chinese premier.

[JS] Boy, if I had a nickel for every time I've heard that...

Seriously, though I've written and used enough pseudo-random number generators 
to have a tiny nagging doubt. I didn't realize that MySQL didn't use the MAC 
address on Windows platforms.

Also, it is possible to programmatically change a MAC address (for system 
failover, for example).

If you're not the kind of person who stays up nights worrying about proton 
decay, you're probably justified in shrugging this off. You'll likely be 
collecting your pension before anything nasty happens.


 - michael dykman

On Fri, Jan 21, 2011 at 1:22 AM, Johan De Meersman vegiv...@tuxera.be 
wrote:
 I have to say, something similar was my first thought, too - you never
 mention uuid in your original post. As already stated, uuid() should be a
 Universal Unique IDentifier. It's afaik a random 128-bit number; given the
 space to choose from it should be rather unique. I have to admit that I'm
 not entirely confident about that myself, either, though: as Pratchett put
 it, one-in-a-million chances tend to pop up nine times out of ten.

 The code should have bits for handling duplicate primaries regardless of the
 method used to generate it, though, so there's no reason not to do it. Having
 two subsequent UUID() calls generate pre-existing numbers seems to me to be
 as likely as having Bush return his dirty oil dollars to Iraq.

 On Thu, Jan 20, 2011 at 8:10 PM, Anthony Pace
anthony.p...@utoronto.cawrote:

 Dude, come on.  I know that all primary keys have to be unique; however, I
 was obviously referring to the use of uuid over auto incrementation.

 On 1/20/2011 1:36 PM, Michael Dykman wrote:

 It is axiomatic in the relational model that a primary must be unique.
  This is not a quirk put forth by your current employer.  Neither
 MySQL nor any other RDBMS will allow you to establish a primary key
 that is not unique.

  - michael dykman

 On Thu, Jan 20, 2011 at 1:32 PM, Anthony Paceanthony.p...@utoronto.ca
  wrote:

 Due to certain reasons, the company I am doing business with has decided
 that the primary key, for an orders table, be a unique key; however, I
 don't
 like the possibility of it conflicting if moved to another machine.

 What are some pitfalls of using a unique key, that is generated by a
 server
 side script, rather than by mysql?
 What are the best ways to do this?

 Please keep in mind this variable will also be displayed on the
 customer's
 Receipt, but again, since it's random, it doesn't have to mean anything.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com






 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be




 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




--
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp








Re: best way to have a unique key

2011-01-21 Thread Michael Satterwhite
On Friday, January 21, 2011 09:23:47 am Jerry Schwartz wrote:

 
 [JS] A UUID (what Microsoft calls a GUID) is based in part on the MAC
 address of the generating device. Since MAC addresses are supposed to be
 unique across the known universe, so should a UUID.
 

Not entirely true - and even when true your conclusion doesn't follow.

I'll be the first to admit that we are straining at gnats here. Generating a 
UUID does not require a machine to have an Ethernet card, so the presence of a 
MAC address is not guaranteed. Even if the machine does have an 
Ethernet card, the definition of the UUID suggests, but does not require, 
that it be used (I believe this is discussed in another reply). Even if it is 
used, there is still an (admittedly large) portion of it that is created by 
random number generation. Assuming a reasonably good generator, it is still 
possible for a collision on a single machine.

The question then becomes: how much of a problem is that? The odds are that 
I'll be LONG gone before that happens, but it could happen tomorrow. If it's 
just a matter of "Oops, let's do a rollback and go again", who cares? If it's a 
life-sustaining, mission-critical problem where an atomic war starts if the 
transaction fails ... you might want to at least think about another key. 
Personally, I wouldn't lose sleep over it.

Another reasonable, and faster, source of a unique key would be to have a 
human set a unique server id on each machine generating the key. In that case a 
two-field primary key, with the first part being the server id and the second 
an auto_increment field, should guarantee a unique key, at least until it 
wraps around. This, of course, assumes that the administrator who sets up the 
servers doesn't make a mistake with the server id. Which brings up the "how 
much of a problem would that be" question again.
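One common way to implement that idea in MySQL (hedged sketch; table and column names are invented) is to give each server a disjoint AUTO_INCREMENT sequence via the server variables auto_increment_increment and auto_increment_offset:

```sql
-- Sketch: server 1 generates 1, 11, 21, ...; server 2 generates 2, 12, 22, ...
SET GLOBAL auto_increment_increment = 10;  -- room for up to 10 servers
SET GLOBAL auto_increment_offset    = 1;   -- this server's id (1..10)

CREATE TABLE orders (
  order_id  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  placed_at DATETIME NOT NULL
  -- ... other columns ...
);
```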

Bottom line ... exactly how paranoid are you?

---Michael




best way to have a unique key

2011-01-20 Thread Anthony Pace
Due to certain reasons, the company I am doing business with has decided 
that the primary key, for an orders table, be a unique key; however, I 
don't like the possibility of it conflicting if moved to another machine.


What are some pitfalls of using a unique key, that is generated by a 
server side script, rather than by mysql?

What are the best ways to do this?

Please keep in mind this variable will also be displayed on the 
customer's Receipt, but again, since it's random, it doesn't have to 
mean anything.





Re: best way to have a unique key

2011-01-20 Thread Krishna Chandra Prajapati
uuid()

Krishna

On Fri, Jan 21, 2011 at 12:02 AM, Anthony Pace anthony.p...@utoronto.cawrote:

 Due to certain reasons, the company I am doing business with has decided
 that the primary key, for an orders table, be a unique key; however, I don't
 like the possibility of it conflicting if moved to another machine.

 What are some pitfalls of using a unique key, that is generated by a server
 side script, rather than by mysql?
 What are the best ways to do this?

 Please keep in mind this variable will also be displayed on the customer's
 Receipt, but again, since it's random, it doesn't have to mean anything.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=prajapat...@gmail.com




Re: best way to have a unique key

2011-01-20 Thread Anthony Pace
Dude, come on.  I know that all primary keys have to be unique; however, 
I was obviously referring to the use of uuid over auto incrementation.


On 1/20/2011 1:36 PM, Michael Dykman wrote:

It is axiomatic in the relational model that a primary must be unique.
  This is not a quirk put forth by your current employer.  Neither
MySQL nor any other RDBMS will allow you to establish a primary key
that is not unique.

  - michael dykman

On Thu, Jan 20, 2011 at 1:32 PM, Anthony Paceanthony.p...@utoronto.ca  wrote:

Due to certain reasons, the company I am doing business with has decided
that the primary key, for an orders table, be a unique key; however, I don't
like the possibility of it conflicting if moved to another machine.

What are some pitfalls of using a unique key, that is generated by a server
side script, rather than by mysql?
What are the best ways to do this?

Please keep in mind this variable will also be displayed on the customer's
Receipt, but again, since it's random, it doesn't have to mean anything.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com











Re: best way to have a unique key

2011-01-20 Thread Anthony Pace
I know of uuid(); my problem is that there can be conflicts when copying 
the DB to a different machine, or when working with sections of the DB on 
different machines for load balancing.


On 1/20/2011 1:44 PM, Krishna Chandra Prajapati wrote:

  Please keep in mind this variable will also be displayed on the customer's
  Receipt, but again, since it's random, it doesn't have to mean anything.






Re: best way to have a unique key

2011-01-20 Thread Anthony Pace
Although I did berate you for your obvious cheek, I will of course 
compliment the acuteness of your response.


On 1/20/2011 2:10 PM, Anthony Pace wrote:
Dude, come on.  I know that all primary keys have to be unique; 
however, I was obviously referring to the use of uuid over auto 
incrementation.


On 1/20/2011 1:36 PM, Michael Dykman wrote:

It is axiomatic in the relational model that a primary key must be unique.
  This is not a quirk put forth by your current employer.  Neither
MySQL nor any other RDBMS will allow you to establish a primary key
that is not unique.

  - michael dykman

On Thu, Jan 20, 2011 at 1:32 PM, Anthony 
Pace anthony.p...@utoronto.ca wrote:
Due to certain reasons, the company I am doing business with has 
decided
that the primary key, for an orders table, be a unique key; however, 
I don't

like the possibility of it conflicting if moved to another machine.

What are some pitfalls of using a unique key, that is generated by a 
server

side script, rather than by mysql?
What are the best ways to do this?

Please keep in mind this variable will also be displayed on the 
customer's
Receipt, but again, since it's random, it doesn't have to mean 
anything.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com














Re: best way to have a unique key

2011-01-20 Thread Luciano Furtado

What conflicts are you expecting? according to the documentation:

A UUID is designed as a number that is globally unique in space and
time. Two calls to UUID() are expected to generate two different
values, even if these calls are performed on two separate computers
that are not connected to each other.

On 11-01-20 14:11, Anthony Pace wrote:
 I know of uuid() my problem is that there can be conflicts when copying
 the DB to a different machine, or working with sections of the db on
 different machines for load balancing.
 
 On 1/20/2011 1:44 PM, Krishna Chandra Prajapati wrote:
   Please keep in mind this variable will also be displayed on the
 customer's
   Receipt, but again, since it's random, it doesn't have to mean
 anything.
 
 






Re: best way to have a unique key

2011-01-20 Thread Michael Dykman
I should have read more carefully..  I apologize for my snap response.

At a guess: as I recall, under M$ SQLServer the typical (only?) form
of unique identifier used is something very UUID-like. My information
might be dated; I was certified as a SQL Server administrator perhaps
12 years ago. I would not be terribly surprised to learn that was their
motivation. I recall, many years ago, having this same, largely
theoretical concern regarding MD5 hashes. Since then experience has
taught me that my worries were unfounded. UUID() should be very safe,
even for very large datasets.

 - michael dykman
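
For reference, a minimal sketch of a UUID-keyed table (table and column names are illustrative, not from the thread; MySQL's UUID() returns the 36-character text form, and 5.x has no DEFAULT UUID(), so the value is supplied at insert time):

```sql
CREATE TABLE orders (
  id      CHAR(36)      NOT NULL PRIMARY KEY,  -- aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
  created TIMESTAMP     NOT NULL DEFAULT CURRENT_TIMESTAMP,
  total   DECIMAL(10,2) NOT NULL
);

-- The server generates the key; it is safe to print on a receipt,
-- since it carries no business meaning.
INSERT INTO orders (id, total) VALUES (UUID(), 19.99);
```

Because the key is generated on whichever server does the insert, copies of the table can be merged across machines without the collisions an AUTO_INCREMENT column would produce.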


On Thu, Jan 20, 2011 at 2:20 PM, Anthony Pace anthony.p...@utoronto.ca wrote:
 Although I did berate you for your obvious cheek, I will of course
 compliment the acuteness of your response.

 On 1/20/2011 2:10 PM, Anthony Pace wrote:

 Dude, come on.  I know that all primary keys have to be unique; however, I
 was obviously referring to the use of uuid over auto incrementation.

 On 1/20/2011 1:36 PM, Michael Dykman wrote:

 It is axiomatic in the relational model that a primary key must be unique.
  This is not a quirk put forth by your current employer.  Neither
 MySQL nor any other RDBMS will allow you to establish a primary key
 that is not unique.

  - michael dykman

 On Thu, Jan 20, 2011 at 1:32 PM, Anthony Pace anthony.p...@utoronto.ca
  wrote:

 Due to certain reasons, the company I am doing business with has decided
 that the primary key, for an orders table, be a unique key; however, I
 don't
 like the possibility of it conflicting if moved to another machine.

 What are some pitfalls of using a unique key, that is generated by a
 server
 side script, rather than by mysql?
 What are the best ways to do this?

 Please keep in mind this variable will also be displayed on the
 customer's
 Receipt, but again, since it's random, it doesn't have to mean anything.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com








 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com





-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.




RE: best way to have a unique key

2011-01-20 Thread Daevid Vincent
http://www.mysqlperformanceblog.com/2007/03/13/to-uuid-or-not-to-uuid/ 

 -Original Message-
 From: Krishna Chandra Prajapati [mailto:prajapat...@gmail.com] 
 Sent: Thursday, January 20, 2011 10:45 AM
 To: Anthony Pace
 Cc: mysql.
 Subject: Re: best way to have a unique key
 
 uuid()
 
 Krishna
 
 On Fri, Jan 21, 2011 at 12:02 AM, Anthony Pace 
 anthony.p...@utoronto.ca wrote:
 
  Due to certain reasons, the company I am doing business 
 with has decided
  that the primary key, for an orders table, be a unique key; 
 however, I don't
  like the possibility of it conflicting if moved to another machine.
 
  What are some pitfalls of using a unique key, that is 
 generated by a server
  side script, rather than by mysql?
  What are the best ways to do this?
 
  Please keep in mind this variable will also be displayed on 
 the customer's
  Receipt, but again, since it's random, it doesn't have to 
 mean anything.
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/mysql?unsub=prajapat...@gmail.com
 
 
 





Re: best way to have a unique key

2011-01-20 Thread Johan De Meersman
I have to say, something similar was my first thought, too - you never
mention uuid in your original post. As already stated, uuid() should be a
Universal Unique IDentifier. It's afaik a random 128-bit number; given the
space to choose from it should be rather unique. I have to admit that I'm
not entirely confident about that myself, either, though: as Pratchett put
it, one-in-a-million chances tend to pop up nine times out of ten.

The code should have bits for handling duplicate primaries regardless of the
method used to generate it, though, so there's no reason not to do it. Having
two subsequent UUID() calls generate pre-existing numbers seems to me about as
likely as Bush returning his dirty oil dollars to Iraq.

On Thu, Jan 20, 2011 at 8:10 PM, Anthony Pace anthony.p...@utoronto.ca wrote:

 Dude, come on.  I know that all primary keys have to be unique; however, I
 was obviously referring to the use of uuid over auto incrementation.

 On 1/20/2011 1:36 PM, Michael Dykman wrote:

  It is axiomatic in the relational model that a primary key must be unique.
  This is not a quirk put forth by your current employer.  Neither
 MySQL nor any other RDBMS will allow you to establish a primary key
 that is not unique.

  - michael dykman

 On Thu, Jan 20, 2011 at 1:32 PM, Anthony Pace anthony.p...@utoronto.ca
  wrote:

 Due to certain reasons, the company I am doing business with has decided
 that the primary key, for an orders table, be a unique key; however, I
 don't
 like the possibility of it conflicting if moved to another machine.

 What are some pitfalls of using a unique key, that is generated by a
 server
 side script, rather than by mysql?
 What are the best ways to do this?

 Please keep in mind this variable will also be displayed on the
 customer's
 Receipt, but again, since it's random, it doesn't have to mean anything.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com






 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be




-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


RE: Any way to change timezone WITHOUT mysqld restart?

2010-10-04 Thread Daevid Vincent
Trust me, I read it. 
 
We had an I18N product at my last company and all our time was stored in
UTC in mySQL and we'd alter it on the fly for each user. This isn't rocket
science. It's done every day in probably many of the sites you visit and
don't even know it.
 
To clarify for you (again):
 

* Per-connection time zones. Each client that connects has its own time zone
setting, given by the session time_zone variable
(http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_time_zone).
Initially, the session variable takes its value from the global time_zone
variable, but the client can change its own time zone with this statement:

mysql> SET time_zone = timezone;

The current session time zone setting affects display and storage of time
values that are zone-sensitive. This includes the values displayed by
functions such as NOW() or CURTIME()
(http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html), and
values stored in and retrieved from TIMESTAMP columns
(http://dev.mysql.com/doc/refman/5.1/en/datetime.html). Values for TIMESTAMP
columns are converted from the current time zone to UTC for storage, and from
UTC to the current time zone for retrieval.

Don't forget to do this stuff too:
http://dev.mysql.com/doc/refman/5.1/en/mysql-tzinfo-to-sql.html

So if it's not clear by now, you store all your dates/times in UTC (convert
them via some script if you didn't start out that way). Then per web page
connection, you read the user's profile TZ (presumably from the user
session object or some other persistent means), execute that SQL statement
above as one of the first things on the page, and FM ensues. All your
properly saved mysql rows will display in the LOCAL timezone instead of
UTC. You ALSO have to set the TZ in PHP, don't forget, or you'll get
wacky discrepancies.

http://php.net/manual/en/function.date-default-timezone-set.php

There's plenty of info on this out there for using PHP & MySQL if that's
what you're using too...

http://www.ferdychristant.com/blog//archive/DOMM-84NEJN
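
The per-connection behaviour described above can be sketched in a single session (illustrative only; named zones assume the time zone tables have been loaded with mysql_tzinfo_to_sql, per the link above):

```sql
-- Store in UTC, render per user: each connection sets its own zone.
SET time_zone = '+00:00';
SELECT NOW();                          -- current time rendered as UTC

SET time_zone = 'America/New_York';    -- named zones need the tz tables loaded
SELECT NOW();                          -- same instant, now rendered locally

-- TIMESTAMP columns are converted to UTC on write and back on read,
-- so each session sees stored values in its own zone automatically.
```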

 

  _  

From: Bryan Cantwell [mailto:bcantw...@firescope.com] 
Sent: Saturday, October 02, 2010 5:18 AM
To: Daevid Vincent
Cc: mysql@lists.mysql.com
Subject: RE: Any way to change timezone WITHOUT mysqld restart?


As a matter of fact I did, the real question is : Did you even read my
email? I said WITHOUT a restart...
The manual states that a restart of mysqld is required. The reason for
the post to such a list is that, on many occasions, users have suggestions
for some workaround that does work in spite of what the manual
says. 

On Fri, 2010-10-01 at 15:42 -0700, Daevid Vincent wrote: 

Did you even look at the manual?



http://lmgtfy.com/?q=mysql+set+timezone



First link.

 



 -Original Message-

 From: Bryan Cantwell [mailto:bcantw...@firescope.com] 

 Sent: Friday, October 01, 2010 10:25 AM

 To: mysql@lists.mysql.com

 Subject: Any way to change timezone WITHOUT mysqld restart?

 

 Any way to change timezone WITHOUT mysqld restart?

 It would be a lifesaver if there were some way for me not to have to

 restart because if mysql restarts then I have to go through a lot of

 other issues with my other apps.

 

 

 






Re: Any way to change timezone WITHOUT mysqld restart?

2010-10-03 Thread Johan De Meersman
I suggest you put your glasses on, then. Getting off that horse might help,
too.

default-time-zone='*timezone*'

  If you have the SUPER privilege
  (http://dev.mysql.com/doc/refman/5.1/en/privileges-provided.html#priv_super),
  you can set the global server time zone value at runtime with
  this statement:
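
The statement itself was dropped from the quote; from the same 5.1 manual page it is presumably:

```sql
-- With the SUPER privilege the global zone can be changed at runtime;
-- new connections pick it up, existing sessions keep their own setting.
SET GLOBAL time_zone = timezone;
```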




On Sat, Oct 2, 2010 at 2:18 PM, Bryan Cantwell bcantw...@firescope.com wrote:

 As a matter of fact I did, the real question is : Did you even read my
 email? I said WITHOUT a restart...
 The manual states that a restart of mysqld is required. The reason
 for the post to such a list is that, on many occasions, users have
 suggestions for some workaround that does work in spite of what
 the manual says.

 On Fri, 2010-10-01 at 15:42 -0700, Daevid Vincent wrote:

  Did you even look at the manual?
 
  http://lmgtfy.com/?q=mysql+set+timezone
 
  First link.
 
 
   -Original Message-
   From: Bryan Cantwell [mailto:bcantw...@firescope.com]
   Sent: Friday, October 01, 2010 10:25 AM
   To: mysql@lists.mysql.com
   Subject: Any way to change timezone WITHOUT mysqld restart?
  
   Any way to change timezone WITHOUT mysqld restart?
   It would be a lifesaver if there were some way for me not to have to
   restart because if mysql restarts then I have to go through a lot of
   other issues with my other apps.
  
  
  
 





-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


RE: Any way to change timezone WITHOUT mysqld restart?

2010-10-02 Thread Bryan Cantwell
As a matter of fact I did, the real question is : Did you even read my
email? I said WITHOUT a restart...
The manual states that a restart of mysqld is required. The reason
for the post to such a list is that, on many occasions, users have
suggestions for some workaround that does work in spite of what
the manual says. 

On Fri, 2010-10-01 at 15:42 -0700, Daevid Vincent wrote:

 Did you even look at the manual?
 
 http://lmgtfy.com/?q=mysql+set+timezone
 
 First link.
  
 
  -Original Message-
  From: Bryan Cantwell [mailto:bcantw...@firescope.com] 
  Sent: Friday, October 01, 2010 10:25 AM
  To: mysql@lists.mysql.com
  Subject: Any way to change timezone WITHOUT mysqld restart?
  
  Any way to change timezone WITHOUT mysqld restart?
  It would be a lifesaver if there were some way for me not to have to
  restart because if mysql restarts then I have to go through a lot of
  other issues with my other apps.
  
  
  
 




Any way to change timezone WITHOUT mysqld restart?

2010-10-01 Thread Bryan Cantwell
Any way to change timezone WITHOUT mysqld restart?
It would be a lifesaver if there were some way for me not to have to
restart because if mysql restarts then I have to go through a lot of
other issues with my other apps.




RE: Any way to change timezone WITHOUT mysqld restart?

2010-10-01 Thread Daevid Vincent
Did you even look at the manual?

http://lmgtfy.com/?q=mysql+set+timezone

First link.
 

 -Original Message-
 From: Bryan Cantwell [mailto:bcantw...@firescope.com] 
 Sent: Friday, October 01, 2010 10:25 AM
 To: mysql@lists.mysql.com
 Subject: Any way to change timezone WITHOUT mysqld restart?
 
 Any way to change timezone WITHOUT mysqld restart?
 It would be a lifesaver if there were some way for me not to have to
 restart because if mysql restarts then I have to go through a lot of
 other issues with my other apps.
 
 
 





Best way to purge old records from a huge table?

2010-06-04 Thread Brian Dunning
Hey all -

I have a table with 12,000,000 records spread over about 6 years. I'm trying to 
delete all but the last 2 years, but no matter how small of a group I try to 
delete at a time, it keeps hanging up the server and I eventually have to 
restart MySQL. The table looks like this:

  `creation` timestamp NOT NULL default CURRENT_TIMESTAMP on update 
CURRENT_TIMESTAMP,
  `lat` double NOT NULL default '0',
  `lon` double NOT NULL default '0',
  `referer` int(12) NOT NULL default '0',
  PRIMARY KEY  (`referer`,`lat`,`lon`),
  KEY `creation` (`creation`,`referer`)

And the query I've been trying looks like this:

delete from tablename where `creation` < '2006-04-01 00:00:00'

...trying to do the oldest 1 month of records at a time. So am I just trying a 
really inefficient query? Is there a better way to do this?



Re: Best way to purge old records from a huge table?

2010-06-04 Thread Ananda Kumar
Don't use a single delete statement.
Use a stored proc, loop through and delete record by record, and commit
every 10k rows. That way your MySQL will not hang and, if you have a
replication setup, the slave also will not lag behind.

regards
anandkl

On Fri, Jun 4, 2010 at 8:40 PM, Brian Dunning br...@briandunning.com wrote:

 Hey all -

 I have a table with 12,000,000 records spread over about 6 years. I'm
 trying to delete all but the last 2 years, but no matter how small of a
 group I try to delete at a time, it keeps hanging up the server and I
 eventually have to restart MySQL. The table looks like this:

  `creation` timestamp NOT NULL default CURRENT_TIMESTAMP on update
 CURRENT_TIMESTAMP,
  `lat` double NOT NULL default '0',
  `lon` double NOT NULL default '0',
  `referer` int(12) NOT NULL default '0',
  PRIMARY KEY  (`referer`,`lat`,`lon`),
  KEY `creation` (`creation`,`referer`)

 And the query I've been trying looks like this:

 delete from tablename where `creation` < '2006-04-01 00:00:00'

 ...trying to do the oldest 1 month of records at a time. So am I just
 trying a really inefficient query? Is there a better way to do this?
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com
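
Ananda's loop can be sketched as a stored procedure, assuming the table from Brian's post (the procedure name, chunk size, and cutoff date are illustrative):

```sql
DELIMITER $$
CREATE PROCEDURE purge_old_rows()
BEGIN
  REPEAT
    -- each chunk is a short statement, so locks stay brief and a
    -- replication slave can catch up between chunks
    DELETE FROM tablename
     WHERE `creation` < '2008-06-04 00:00:00'  -- keep the last 2 years
     LIMIT 10000;
  UNTIL ROW_COUNT() = 0 END REPEAT;
END$$
DELIMITER ;

CALL purge_old_rows();
```

ROW_COUNT() reports the rows affected by the last DELETE, so the loop stops once a chunk comes back empty.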




Re: Best way to purge old records from a huge table?

2010-06-04 Thread Krishna Chandra Prajapati
Hi Brian,

I would suggest you to use mk-archiver (Maatkit Tools) for this activity.

http://www.percona.com/files/presentations/Make_Life_Easier_Maatkit_v2.pdf

Regards,
Krishna

On Fri, Jun 4, 2010 at 8:40 PM, Brian Dunning br...@briandunning.com wrote:

 Hey all -

 I have a table with 12,000,000 records spread over about 6 years. I'm
 trying to delete all but the last 2 years, but no matter how small of a
 group I try to delete at a time, it keeps hanging up the server and I
 eventually have to restart MySQL. The table looks like this:

  `creation` timestamp NOT NULL default CURRENT_TIMESTAMP on update
 CURRENT_TIMESTAMP,
  `lat` double NOT NULL default '0',
  `lon` double NOT NULL default '0',
  `referer` int(12) NOT NULL default '0',
  PRIMARY KEY  (`referer`,`lat`,`lon`),
  KEY `creation` (`creation`,`referer`)

 And the query I've been trying looks like this:

 delete from tablename where `creation` < '2006-04-01 00:00:00'

 ...trying to do the oldest 1 month of records at a time. So am I just
 trying a really inefficient query? Is there a better way to do this?
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=prajapat...@gmail.com




RE: Best way to purge old records from a huge table?

2010-06-04 Thread Martin Gainty

Hi Brian-

 

I think the best way to ensure your dates are using YYYY-MM-DD format is for 
your dml to reference dates with DATE_FORMAT('YYYY-MM-DD','%Y-%m-%d') e.g.

 

mysql> select DEIT_EVENT_SEQUENCE_ID,DEIT_EVENT_STATUS_CODE,DEIT_EVENT_DATE
from DEIT;
+------------------------+------------------------+-----------------+
| DEIT_EVENT_SEQUENCE_ID | DEIT_EVENT_STATUS_CODE | DEIT_EVENT_DATE |
+------------------------+------------------------+-----------------+
|                      1 | 1                      | 2006-09-04      |
|                      2 | 2                      | 2006-09-05      |
|                      3 | 3                      | 2006-09-06      |
+------------------------+------------------------+-----------------+
3 rows in set (0.00 sec)

 

mysql> delete from DEIT where
DEIT_EVENT_DATE < DATE_FORMAT('2006-09-05','%Y-%m-%d');
Query OK, 1 row affected (0.02 sec)

 

--the record is deleted, so let's select to make sure

mysql> select DEIT_EVENT_SEQUENCE_ID,DEIT_EVENT_STATUS_CODE,DEIT_EVENT_DATE
from DEIT;
+------------------------+------------------------+-----------------+
| DEIT_EVENT_SEQUENCE_ID | DEIT_EVENT_STATUS_CODE | DEIT_EVENT_DATE |
+------------------------+------------------------+-----------------+
|                      2 | 2                      | 2006-09-05      |
|                      3 | 3                      | 2006-09-06      |
+------------------------+------------------------+-----------------+
2 rows in set (0.00 sec)


hth

Martin Gainty 
__ 
Disclaimer and confidentiality note

This message is confidential. If you are not the intended recipient, we kindly 
ask to be notified. Any unauthorized forwarding or copying of it is prohibited. 
This message serves only the exchange of information and has no legally binding 
effect. Since e-mails can easily be manipulated, we can accept no liability for 
the content.



 

 From: br...@briandunning.com
 Subject: Best way to purge old records from a huge table?
 Date: Fri, 4 Jun 2010 08:10:07 -0700
 To: mysql@lists.mysql.com
 
 Hey all -
 
 I have a table with 12,000,000 records spread over about 6 years. I'm trying 
 to delete all but the last 2 years, but no matter how small of a group I try 
 to delete at a time, it keeps hanging up the server and I eventually have to 
 restart MySQL. The table looks like this:
 
 `creation` timestamp NOT NULL default CURRENT_TIMESTAMP on update 
 CURRENT_TIMESTAMP,
 `lat` double NOT NULL default '0',
 `lon` double NOT NULL default '0',
 `referer` int(12) NOT NULL default '0',
 PRIMARY KEY (`referer`,`lat`,`lon`),
 KEY `creation` (`creation`,`referer`)
 
 And the query I've been trying looks like this:
 
 delete from tablename where `creation` < '2006-04-01 00:00:00'
 
 ...trying to do the oldest 1 month of records at a time. So am I just trying 
 a really inefficient query? Is there a better way to do this?
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql?unsub=mgai...@hotmail.com
 
  

Re: Best way to purge old records from a huge table?

2010-06-04 Thread Johan De Meersman
I can't help but wonder how this is in any way relevant to the
original question.

On Fri, Jun 4, 2010 at 6:12 PM, Martin Gainty mgai...@hotmail.com wrote:

 Hi Brian-



 I think the best way to ensure your dates are using YYYY-MM-DD format is for 
 your dml to reference dates with DATE_FORMAT('YYYY-MM-DD','%Y-%m-%d') e.g.



 mysql> select DEIT_EVENT_SEQUENCE_ID,DEIT_EVENT_STATUS_CODE,DEIT_EVENT_DATE
 from DEIT;
 +------------------------+------------------------+-----------------+
 | DEIT_EVENT_SEQUENCE_ID | DEIT_EVENT_STATUS_CODE | DEIT_EVENT_DATE |
 +------------------------+------------------------+-----------------+
 |                      1 | 1                      | 2006-09-04      |
 |                      2 | 2                      | 2006-09-05      |
 |                      3 | 3                      | 2006-09-06      |
 +------------------------+------------------------+-----------------+
 3 rows in set (0.00 sec)



 mysql> delete from DEIT where
 DEIT_EVENT_DATE < DATE_FORMAT('2006-09-05','%Y-%m-%d');
 Query OK, 1 row affected (0.02 sec)



 --the record is deleted so lets select to make sure

 mysql> select DEIT_EVENT_SEQUENCE_ID,DEIT_EVENT_STATUS_CODE,DEIT_EVENT_DATE
 from DEIT;
 +------------------------+------------------------+-----------------+
 | DEIT_EVENT_SEQUENCE_ID | DEIT_EVENT_STATUS_CODE | DEIT_EVENT_DATE |
 +------------------------+------------------------+-----------------+
 |                      2 | 2                      | 2006-09-05      |
 |                      3 | 3                      | 2006-09-06      |
 +------------------------+------------------------+-----------------+
 2 rows in set (0.00 sec)


 hth

 Martin Gainty





 From: br...@briandunning.com
 Subject: Best way to purge old records from a huge table?
 Date: Fri, 4 Jun 2010 08:10:07 -0700
 To: mysql@lists.mysql.com

 Hey all -

 I have a table with 12,000,000 records spread over about 6 years. I'm trying 
 to delete all but the last 2 years, but no matter how small of a group I try 
 to delete at a time, it keeps hanging up the server and I eventually have to 
 restart MySQL. The table looks like this:

 `creation` timestamp NOT NULL default CURRENT_TIMESTAMP on update 
 CURRENT_TIMESTAMP,
 `lat` double NOT NULL default '0',
 `lon` double NOT NULL default '0',
 `referer` int(12) NOT NULL default '0',
 PRIMARY KEY (`referer`,`lat`,`lon`),
 KEY `creation` (`creation`,`referer`)

 And the query I've been trying looks like this:

 delete from tablename where `creation` < '2006-04-01 00:00:00'

 ...trying to do the oldest 1 month of records at a time. So am I just trying 
 a really inefficient query? Is there a better way to do this?
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql?unsub=mgai...@hotmail.com





-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




Re: Best way to purge old records from a huge table?

2010-06-04 Thread Shawn Green

Brian Dunning wrote:

Hey all -

I have a table with 12,000,000 records spread over about 6 years. I'm trying to 
delete all but the last 2 years, but no matter how small of a group I try to 
delete at a time, it keeps hanging up the server and I eventually have to 
restart MySQL. The table looks like this:

  `creation` timestamp NOT NULL default CURRENT_TIMESTAMP on update 
CURRENT_TIMESTAMP,
  `lat` double NOT NULL default '0',
  `lon` double NOT NULL default '0',
  `referer` int(12) NOT NULL default '0',
  PRIMARY KEY  (`referer`,`lat`,`lon`),
  KEY `creation` (`creation`,`referer`)

And the query I've been trying looks like this:

delete from tablename where `creation` < '2006-04-01 00:00:00'

...trying to do the oldest 1 month of records at a time. So am I just trying a 
really inefficient query? Is there a better way to do this?
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=sh...@mysql.com



My idea is to create a new table with just the data you want to keep and 
drop the old one. Every batch you delete must update the indexes on the 
existing table; creating a new, smaller batch of data with a fresh set 
of indexes should be much faster than incrementally deflating the 
existing huge set of data.


Once the new table is created, use RENAME TABLE to swap both table 
names, putting the new table into the old one's place and giving the old 
table a name you can work with later.
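
A sketch of that approach using the schema from Brian's post (the cutoff date and the _new/_old names are illustrative):

```sql
-- Copy only the rows to keep into a fresh table, then swap names.
CREATE TABLE tablename_new LIKE tablename;

INSERT INTO tablename_new
SELECT * FROM tablename
 WHERE `creation` >= '2008-06-04 00:00:00';   -- keep the last 2 years

-- One atomic rename; the old data survives as tablename_old until dropped.
RENAME TABLE tablename     TO tablename_old,
             tablename_new TO tablename;
```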


--
Shawn Green
MySQL Principle Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN




Re: better way to backup 50 Gig db?

2010-04-21 Thread Claudio Nanni

Gavin,
Right,
that is also an option, but you can never be 100% sure that everything 
that is in memory is on the disk (buffers etc...),
even if it is definitely good for disaster recovery.
What I meant is that the only way to have a 100% guaranteed consistent 
binary backup is when the database is shut down.
Of course this is almost never an option, unless (tada) you have a slave 
dedicated for that.

One remark on your note:

Just a note though, I noticed someone added replication to a slave as a backup 
option.  I really discourage that.  Replication makes no guarantees that the 
data on your slave is the same as the data on your master.  Unless you're also 
checking consistency, a slave should be treated as a somewhat unreliable copy 
of your data.

While it is true that replication makes no guarantees, if your slave is 
not the same as the master and you rely on that for production, you have 
some problems,
try going to the business and saying our slave (which at least 50% of our 
applications use to read data) is not really in sync, and watch their facial 
expressions!
Believe me, in many production environments the method used for backups 
relies on the slave, not on the master.
It is so useful and important that you should put all your effort 
into having a consistent read-only slave 'dedicated' only to backups, 
with no other client messing with it.


Just my two cents

Claudio


Gavin Towey wrote:

You can make binary backups from the master using filesystem snapshots.  You 
only need to hold a global read lock for a split second.

Regards,
Gavin Towey





Re: better way to backup 50 Gig db?

2010-04-21 Thread Johan De Meersman
How does the below not guarantee me 100% that everything that can be
consistent, is ?

mysql> flush tables with read lock;
unixhost# sync
unixhost# lvm create snapshot
mysql> unlock tables;

I agree that there may be data inconsistencies within MyISAM, as it has no
transactions, but there's also no guarantee that there isn't an application
in the middle of multiple insert statements the moment you shut your
database or your application. You can't magically get full consistency out
of your database if your application hasn't been designed for it.

Shutting systems down for backup may be fine for small setups, but on a
server that hosts multiple critical applications, you just can't do that.
You back up as consistently as you can without downtime, and pick the time
of night with the lowest likelihood of activity for it.
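Johan's four commands can be collapsed into one script. This is a hedged sketch, not a drop-in tool: the volume path /dev/vg0/mysql, the snapshot size and name, the /backup and /mnt paths, and credentials in ~/.my.cnf are all assumptions. The one subtlety is that FLUSH TABLES WITH READ LOCK only lasts while the client session that took it stays open, so the sync and lvcreate must run as mysql client `\!` (system) commands inside that same session. It defaults to a dry run that only prints its plan.

```shell
#!/bin/sh
# Sketch of the flush / sync / snapshot / unlock recipe above.  Assumptions:
# datadir on LVM volume /dev/vg0/mysql, credentials in ~/.my.cnf.
# DRY_RUN=1 (the default here) only prints what would happen.
set -eu
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# The global read lock lasts only as long as the session that takes it,
# so sync and lvcreate execute from inside that session via the mysql
# client's \! (system) command -- the lock is held for a split second.
SNAPSHOT_SQL='FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
\! sync
\! lvcreate --snapshot --size=5G --name=mysql_snap /dev/vg0/mysql
UNLOCK TABLES;'

if [ "$DRY_RUN" = 1 ]; then
    printf 'would pipe to mysql:\n%s\n' "$SNAPSHOT_SQL"
else
    # SHOW MASTER STATUS output lands in this file: the binlog position
    # the snapshot is consistent with.
    printf '%s\n' "$SNAPSHOT_SQL" | mysql > /backup/binlog-position.txt
fi

# Lock released; copy the frozen datadir at leisure, then drop the snapshot.
run mount -o ro /dev/vg0/mysql_snap /mnt/mysql_snap
run tar czf "/backup/mysql-$(date +%F).tar.gz" -C /mnt/mysql_snap .
run umount /mnt/mysql_snap
run lvremove -f /dev/vg0/mysql_snap
```

On restore, start mysqld on the copied datadir and let InnoDB run its crash recovery; the saved binlog position is where binlog replay (or a new slave) starts.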


On Wed, Apr 21, 2010 at 8:12 AM, Claudio Nanni claudio.na...@gmail.com wrote:

 Gavin,
 Right,
 that is also an option, but you are really not sure 100% that everything
 that is on memory is on the disk (buffers etc...)
 also if it is definitely good for a disaster recovery.
 What I meant is that the only way to have a 100% guaranteed consistent
 binary backup is when the database is shut down.
 Of course this is almost never an option, unless (tada) you have a slave
 dedicated for that.
 One remark on your note:


 Just a note though, I noticed someone added replication to a slave as a
 backup option.  I really discourage that.  Replication makes no guarantees
 that the data on your slave is the same as the data on your master.  Unless
 you're also checking consistency, a slave should be treated as a somewhat
 unreliable copy of your data.

 While it is true that replication makes no guarantees, if your slave is not
 the same as the master and you rely on that for production, you have some
 problems,
 try to go to business and say, our slave (which at least 50% of our
 applications use to read data) is not really in sync, watch their facial
 expression!
 Believe me, in many production environments the method used for backups
 relies on the slave, not on the master.
 It is so much useful and important that you should have all your efforts go
 for having a consistent read-only slave 'dedicated' only for backups, no
 other client messing with it.

 Just my two cents

 Claudio



 Gavin Towey wrote:

 You can make binary backups from the master using filesystem snapshots.
  You only need to hold a global read lock for a split second.

 Regards,
 Gavin Towey






-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: better way to backup 50 Gig db?

2010-04-21 Thread Claudio Nanni
Johan,
Is the fact that InnoDB ignores the command FLUSH TABLES WITH READ LOCK;
enough? :)
InnoDB has buffers and activity going on even after you have locked the tables,
and you cannot be sure that its buffers are on disk when you take the snapshot.
Again, you might get lucky and trust InnoDB recovery; what I maintain is that
there is only one 100% guaranteed safe way to take binary backups.

have a look at these, very interesting:
http://forge.mysql.com/w/images/c/c1/MySQL_Backups_using_File_System_Snapshots-2009-02-26.pdf
http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

Cheers

Claudio



2010/4/21 Johan De Meersman vegiv...@tuxera.be

 How does the below not guarantee me 100% that everything that can be
 consistent, is ?

 mysql> flush tables with read lock;
 unixhost# sync
 unixhost# lvm create snapshot
 mysql> unlock tables;

 I agree that there may be data inconsistencies within MyISAM, as it has no
 transactions, but there's also no guarantee that there isn't an application
 in the middle of multiple insert statements the moment you shut your
 database or your application. You can't magically get full consistency out
 of your database if your application hasn't been designed for it.

 Shutting systems down for backup may be fine for small setups, but on a
 server that hosts multiple critical applications, you just can't do that.
 You back up as consistently as you can without downtime, and pick the time
 of night with the lowest likelihood of activity for it.


 On Wed, Apr 21, 2010 at 8:12 AM, Claudio Nanni claudio.na...@gmail.com wrote:

 Gavin,
 Right,
 that is also an option, but you are really not sure 100% that everything
 that is on memory is on the disk (buffers etc...)
 also if it is definitely good for a disaster recovery.
 What I meant is that the only way to have a 100% guaranteed consistent
 binary backup is when the database is shut down.
 Of course this is almost never an option, unless (tada) you have a slave
 dedicated for that.
 One remark on your note:


 Just a note though, I noticed someone added replication to a slave as a
 backup option.  I really discourage that.  Replication makes no guarantees
 that the data on your slave is the same as the data on your master.  Unless
 you're also checking consistency, a slave should be treated as a somewhat
 unreliable copy of your data.

 While it is true that replication makes no guarantees, if your slave is
 not the same as the master and you rely on that for production, you have
 some problems,
 try to go to business and say, our slave (which at least 50% of our
 applications use to read data) is not really in sync, watch their facial
 expression!
 Believe me, in many production environments the method used for backups
 relies on the slave, not on the master.
 It is so much useful and important that you should have all your efforts
 go for having a consistent read-only slave 'dedicated' only for backups, no
 other client messing with it.

 Just my two cents

 Claudio



 Gavin Towey wrote:

 You can make binary backups from the master using filesystem snapshots.
  You only need to hold a global read lock for a split second.

 Regards,
 Gavin Towey






 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




-- 
Claudio


Re: better way to backup 50 Gig db?

2010-04-21 Thread Johan De Meersman
On Wed, Apr 21, 2010 at 10:50 AM, Claudio Nanni claudio.na...@gmail.com wrote:

 Johan,
 Is the fact that InnoDB ignores the command FLUSH TABLES WITH READ LOCK;
 enough? :)

 InnoDB has buffers and activities going on even if you locked the tables
 and you are not sure that its buffers are on the disk when you snapshot.
 Again, you might be lucky and trust in the InnoDB recovery, what I state is
 that there is only one 100% guaranteed safe way to have binary backups.


Aha, that's a tidbit I didn't know :-) It does hold back queries, but the
engine doesn't stop its maintenance threads.

I tend to do a full dump with --single-transaction once a week, and copy the
binlogs on the other days, though. That should suffice, I guess ? :-)


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel
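Johan's weekly-dump-plus-binlogs rotation can be sketched as one cron-able script. Everything concrete here is an assumption: the /backup paths, the mysql-bin.* log naming, credentials in ~/.my.cnf, and the choice of Sunday for the full dump; it defaults to a dry run that only prints commands.

```shell
#!/bin/sh
# Sketch of a weekly full dump plus daily binlog copies.  Assumptions:
# credentials in ~/.my.cnf, binlogs named mysql-bin.* in /var/lib/mysql,
# backups kept under /backup.  DRY_RUN=1 (default) only prints commands.
set -eu
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

if [ "$(date +%u)" = 7 ]; then
    # Sunday: full logical dump.  --single-transaction reads a consistent
    # InnoDB snapshot without blocking writers; --master-data=2 writes the
    # matching binlog coordinates into the dump as a comment, so recovery
    # knows exactly which binlog events come "after" this dump.
    run mysqldump --all-databases --single-transaction --master-data=2 \
        --flush-logs --result-file="/backup/full-$(date +%F).sql"
else
    # Other days: close the current binlog and copy the finished ones.
    run mysqladmin flush-logs
    run rsync -a /var/lib/mysql/mysql-bin.[0-9]* /backup/binlogs/
fi
```

Recovery is then: load the newest full dump, then feed the copied binlogs from the recorded coordinates onward through mysqlbinlog into mysql.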


Re: better way to backup 50 Gig db?

2010-04-21 Thread Claudio Nanni
A nice SQL dump with tables locked is also very good; the problem is that you
will lock the database for the time needed to dump it.
If you do it properly, like:

lock all tables
flush logs
dump
unlock tables

you will know where to start in case of recovery: this dump plus the binary logs
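The four steps above map onto mysqldump options that perform the whole sequence within one session. A hedged sketch (the /backup path is an assumption, and it defaults to a dry run): --lock-all-tables takes the global read lock for the duration of the dump, and --flush-logs rotates the binary log first, so a restore is exactly "load this dump, then replay every binlog created after it".

```shell
#!/bin/sh
# Sketch: the lock / flush logs / dump / unlock sequence done by mysqldump
# itself in a single session.  Assumptions: credentials in ~/.my.cnf,
# dumps kept under /backup.  DRY_RUN=1 (default) only prints the command.
set -eu
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# --lock-all-tables: global read lock held for the whole dump (the server
# is effectively read-only meanwhile).  --flush-logs: rotate the binlog
# first, so logs written after the rotation are exactly "changes since
# this dump".
run mysqldump --all-databases --lock-all-tables --flush-logs \
    --result-file="/backup/locked-full-$(date +%F).sql"
```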

Cheers!


Claudio


2010/4/21 Johan De Meersman vegiv...@tuxera.be



 On Wed, Apr 21, 2010 at 10:50 AM, Claudio Nanni 
  claudio.na...@gmail.com wrote:

 Johan,
 Is the fact that InnoDB ignores the command FLUSH TABLES WITH READ LOCK;
 enough? :)

 InnoDB has buffers and activities going on even if you locked the tables
 and you are not sure that its buffers are on the disk when you snapshot.
 Again, you might be lucky and trust in the InnoDB recovery, what I state
 is that there is only one 100% guaranteed safe way to have binary backups.


 Aha, that's a tidbit I didn't know :-) It does hold back queries, but the
 engine doesn't stop its maintenance threads.

 I tend to do a full dump with --single-transaction once a week, and copy
 the binlogs on the other days, though. That should suffice, I guess ? :-)


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




-- 
Claudio


Re: better way to backup 50 Gig db?

2010-04-21 Thread Claudio Nanni
Hi Ben,
as said, you have to consider that a database's data lives both on disk and
in RAM; in RAM you have transactions, buffers that are asynchronously written
to disk, and so on.

While the datadir of a shut-down database is the FULL dataset, since no
information remains in RAM, the datadir of a running database is never the
FULL dataset.

This is nice:
http://www.percona.com/ppc2009/PPC2009_Life_of_a_dirty_pageInnoDB_disk_IO.pdf

Cheers

Claudio

2010/4/21 Ben Mildren ben.mild...@gmail.com

 LVM Snapshots on InnoDB setup are quite common, the transaction logs have
 to be on the same volume, and you have to pay attention to some of the
 configuration options, but I don't see any reason why this isn't a viable
 alternative.  Why wouldn't you trust InnoDB recovery?  If you can't trust
 that, there's no point in running it at all.. ..I'd hate to be in a
 situation where my server crashed and I couldn't trust the database to come
 back up again.  The key with all this is to test your backups..

 On 21 April 2010 09:50, Claudio Nanni claudio.na...@gmail.com wrote:

 Johan,
 Is the fact that InnoDB ignores the command FLUSH TABLES WITH READ LOCK;
 enough? :)
 InnoDB has buffers and activities going on even if you locked the tables
 and
 you are not sure that its buffers are on the disk when you snapshot.
 Again, you might be lucky and trust in the InnoDB recovery, what I state
 is
 that there is only one 100% guaranteed safe way to have binary backups.

 have a look at these, very interesting:

 http://forge.mysql.com/w/images/c/c1/MySQL_Backups_using_File_System_Snapshots-2009-02-26.pdf

 http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

 Cheers

 Claudio



 2010/4/21 Johan De Meersman vegiv...@tuxera.be

  How does the below not guarantee me 100% that everything that can be
  consistent, is ?
 
  mysql> flush tables with read lock;
  unixhost# sync
  unixhost# lvm create snapshot
  mysql> unlock tables;
 
  I agree that there may be data inconsistencies within MyISAM, as it has
 no
  transactions, but there's also no guarantee that there isn't an
 application
  in the middle of multiple insert statements the moment you shut your
  database or your application. You can't magically get full consistency
 out
  of your database if your application hasn't been designed for it.
 
  Shutting systems down for backup may be fine for small setups, but on a
  server that hosts multiple critical applications, you just can't do
 that.
  You back up as consistently as you can without downtime, and pick the
 time
  of night with the lowest likelihood of activity for it.
 
 
  On Wed, Apr 21, 2010 at 8:12 AM, Claudio Nanni claudio.na...@gmail.com
 wrote:
 
  Gavin,
  Right,
  that is also an option, but you are really not sure 100% that
 everything
  that is on memory is on the disk (buffers etc...)
  also if it is definitely good for a disaster recovery.
  What I meant is that the only way to have a 100% guaranteed consistent
  binary backup is when the database is shut down.
  Of course this is almost never an option, unless (tada) you have a
 slave
  dedicated for that.
  One remark on your note:
 
 
  Just a note though, I noticed someone added replication to a slave as a
  backup option.  I really discourage that.  Replication makes no
 guarantees
  that the data on your slave is the same as the data on your master.
  Unless
  you're also checking consistency, a slave should be treated as a
 somewhat
  unreliable copy of your data.
 
  While it is true that replication makes no guarantees, if your slave is
  not the same as the master and you rely on that for production, you
 have
  some problems,
  try to go to business and say, our slave (which at least 50% of our
  applications use to read data) is not really in sync, watch their
 facial
  expression!
  Believe me, in many production environments the method used for backups
  relies on the slave, not on the master.
  It is so much useful and important that you should have all your
 efforts
  go for having a consistent read-only slave 'dedicated' only for
 backups, no
  other client messing with it.
 
  Just my two cents
 
  Claudio
 
 
 
  Gavin Towey wrote:
 
  You can make binary backups from the master using filesystem snapshots.
   You only need to hold a global read lock for a split second.
 
  Regards,
  Gavin Towey
 
 
 
 
 
 
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 



 --
 Claudio





-- 
Claudio


RE: better way to backup 50 Gig db?

2010-04-21 Thread Jerry Schwartz

-Original Message-
From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Wednesday, April 21, 2010 2:12 AM
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?


[JS] snip

[JS] Unless I've forgotten something from earlier in my career (what day is 
it, anyways?), there are three aspects to this problem:

1. Ensuring that your databases, slave and master individually, are internally 
consistent;
2. Ensuring that your master has captured the latest externally-supplied data; 
and
3. Ensuring that your slave and your master are totally in sync.

#1 is the proper goal for the master. That's the whole point of ACID. For the 
master database, #2 is unattainable. You can buffer as many times, in as many 
ways, and in as many places as you like; there is always going to be the 
**possibility** that some incoming data will be lost. Even if you push the 
problem all the way back to a human user, it will still be possible to lose 
data. If something is possible, it will happen: perhaps not for millennia, but 
more likely as soon as you leave on vacation.

Similarly, #1 is an attainable and necessary goal for a slave; and #2 is just 
as unattainable for a slave as for a master. The only way to guarantee #3 is 
to include the replication somewhere in the ACID transaction. The penalty for 
that is going to be a loss of throughput, possibly a horrendous loss of 
throughput. That is where somebody needs to do a cost/benefit analysis.

Just my two cents

[JS] ... and mine ...

Claudio


Gavin Towey wrote:

You can make binary backups from the master using filesystem snapshots.  You
only need to hold a global read lock for a split second.

Regards,
Gavin Towey










RE: better way to backup 50 Gig db?

2010-04-21 Thread Gavin Towey
Claudio,

So InnoDB may not be consistent between memory and disk at all times, but 
that's actually not important.  What is important is that the files on disk 
are consistent with a specific binlog position.  That's all that is needed for a 
consistent backup, and it can be done with filesystem snapshots.

InnoDB may continue to do background flushing even during a FLUSH TABLES WITH 
READ LOCK, but it always keeps consistency between its log files, the binlog, 
and the tablespaces.

When you load your snapshot back into an instance of mysql, you'll often see it 
go through crash recovery as it applies log records to the tablespace and 
rolls back any transactions that were open when the snapshot was taken, 
but once done your data will be consistent with the binlog position that was 
recorded while the read lock was held.

I do this every day on many servers, both for backups and for creating new 
slaves.  It always works.  Neither of the links you gave contradicts this; in 
fact, they both essentially say it works great, as long as you're aware of the 
caveats.

Regards,
Gavin Towey


-Original Message-
From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Wednesday, April 21, 2010 1:51 AM
To: Johan De Meersman
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Johan,
Is the fact that InnoDB ignores the command FLUSH TABLES WITH READ LOCK;
enough? :)
InnoDB has buffers and activities going on even if you locked the tables and
you are not sure that its buffers are on the disk when you snapshot.
Again, you might be lucky and trust in the InnoDB recovery, what I state is
that there is only one 100% guaranteed safe way to have binary backups.

have a look at these, very interesting:
http://forge.mysql.com/w/images/c/c1/MySQL_Backups_using_File_System_Snapshots-2009-02-26.pdf
http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

Cheers

Claudio



2010/4/21 Johan De Meersman vegiv...@tuxera.be

 How does the below not guarantee me 100% that everything that can be
 consistent, is ?

 mysql> flush tables with read lock;
 unixhost# sync
 unixhost# lvm create snapshot
 mysql> unlock tables;

 I agree that there may be data inconsistencies within MyISAM, as it has no
 transactions, but there's also no guarantee that there isn't an application
 in the middle of multiple insert statements the moment you shut your
 database or your application. You can't magically get full consistency out
 of your database if your application hasn't been designed for it.

 Shutting systems down for backup may be fine for small setups, but on a
 server that hosts multiple critical applications, you just can't do that.
 You back up as consistently as you can without downtime, and pick the time
 of night with the lowest likelihood of activity for it.


 On Wed, Apr 21, 2010 at 8:12 AM, Claudio Nanni claudio.na...@gmail.com wrote:

 Gavin,
 Right,
 that is also an option, but you are really not sure 100% that everything
 that is on memory is on the disk (buffers etc...)
 also if it is definitely good for a disaster recovery.
 What I meant is that the only way to have a 100% guaranteed consistent
 binary backup is when the database is shut down.
 Of course this is almost never an option, unless (tada) you have a slave
 dedicated for that.
 One remark on your note:


 Just a note though, I noticed someone added replication to a slave as a
 backup option.  I really discourage that.  Replication makes no guarantees
 that the data on your slave is the same as the data on your master.  Unless
 you're also checking consistency, a slave should be treated as a somewhat
 unreliable copy of your data.

 While it is true that replication makes no guarantees, if your slave is
 not the same as the master and you rely on that for production, you have
 some problems,
 try to go to business and say, our slave (which at least 50% of our
 applications use to read data) is not really in sync, watch their facial
 expression!
 Believe me, in many production environments the method used for backups
 relies on the slave, not on the master.
 It is so much useful and important that you should have all your efforts
 go for having a consistent read-only slave 'dedicated' only for backups, no
 other client messing with it.

 Just my two cents

 Claudio



 Gavin Towey wrote:

 You can make binary backups from the master using filesystem snapshots.
  You only need to hold a global read lock for a split second.

 Regards,
 Gavin Towey






 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




--
Claudio


Re: better way to backup 50 Gig db?

2010-04-20 Thread Jay Ess

Gavin Towey wrote:

What Shawn said is important.

Better options:
1. Use InnoDB, and then you can make a consistent backup with `mysqldump 
--single-transaction > backup.sql` and keep your db server actively responding 
to requests at the same time.

2. Use something like LVM to create filesystem snapshots which allow you to 
backup your database, while only keeping a read lock on the db for a second or 
so.
  
3. Set up replication and back up the replicated data using any of the 
above methods.







RE: better way to backup 50 Gig db?

2010-04-20 Thread andrew.2.moore
I would also recommend looking into some 3rd party tools.

http://www.percona.com/docs/wiki/percona-xtrabackup:start - Backup Innodb, 
MyISAM and XtraDB engines.
http://www.maatkit.org/ - Packed with useful features, including a parallel 
dump/import.

There are some great features in both products. I will leave you to do your own 
research into the tools, as knowing their features will benefit you.

Best wishes

Andy

From: ext Jay Ess [li...@netrogenic.com]
Sent: 20 April 2010 09:06
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Gavin Towey wrote:
 What Shawn said is important.

 Better options:
 1. Use InnoDB, and then you can make a consistent backup with `mysqldump 
 --single-transaction > backup.sql` and keep your db server actively 
 responding to requests at the same time.

 2. Use something like LVM to create filesystem snapshots which allow you to 
 backup your database, while only keeping a read lock on the db for a second 
 or so.

3. Set up replication and backup the replicated data using any of the
above method.








RE: better way to backup 50 Gig db?

2010-04-20 Thread Gavin Towey
More good ideas from Andrew!

Just a note though, I noticed someone added replication to a slave as a backup 
option.  I really discourage that.  Replication makes no guarantees that the 
data on your slave is the same as the data on your master.  Unless you're also 
checking consistency, a slave should be treated as a somewhat unreliable copy 
of your data.

Regards,
Gavin Towey

-Original Message-
From: andrew.2.mo...@nokia.com [mailto:andrew.2.mo...@nokia.com]
Sent: Tuesday, April 20, 2010 2:08 AM
To: li...@netrogenic.com
Cc: mysql@lists.mysql.com
Subject: RE: better way to backup 50 Gig db?

I would also recommend looking into some 3rd party tools.

http://www.percona.com/docs/wiki/percona-xtrabackup:start - Backup Innodb, 
MyISAM and XtraDB engines.
http://www.maatkit.org/ - Packed with useful features inc a parallel 
dump/import.

There's some great features in both products. I will leave you to do your own 
research into the tools as knowing their features will benefit you.

Best wishes

Andy

From: ext Jay Ess [li...@netrogenic.com]
Sent: 20 April 2010 09:06
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Gavin Towey wrote:
 What Shawn said is important.

 Better options:
 1. Use InnoDB, and then you can make a consistent backup with `mysqldump 
 --single-transaction > backup.sql` and keep your db server actively 
 responding to requests at the same time.

 2. Use something like LVM to create filesystem snapshots which allow you to 
 backup your database, while only keeping a read lock on the db for a second 
 or so.

3. Set up replication and backup the replicated data using any of the
above method.







This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com




Re: better way to backup 50 Gig db?

2010-04-20 Thread Rob Wultsch
On Tue, Apr 20, 2010 at 11:03 AM, Gavin Towey gto...@ffn.com wrote:
 More good ideas from Andrew!

 Just a note though, I noticed someone added replication to a slave as a 
 backup option.  I really discourage that.  Replication makes no guarantees 
 that the data on your slave is the same as the data on your master.  Unless 
 you're also checking consistency, a slave should be treated as a somewhat 
 unreliable copy of your data.

 Regards,
 Gavin Towey


I would like to second this sentiment.  Once you start looking for
data inconsistencies on slaves you will be surprised how often you
find them.
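Maatkit, linked earlier in this thread, ships a tool for exactly this kind of check. A hedged sketch: the host names are placeholders, the flag spelling should be verified against your installed mk-table-checksum version, and it defaults to a dry run. The trick is that the checksum statements run on the master replicate through the binlog, so each slave recomputes the checksums over its own copy of the rows.

```shell
#!/bin/sh
# Sketch: detecting master/slave drift with Maatkit's mk-table-checksum
# (pt-table-checksum in its later Percona Toolkit incarnation).  Host
# names are placeholders; verify flags against your installed version.
# DRY_RUN=1 (default) only prints the commands.
set -eu
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# Checksum every table on the master, storing results in test.checksum;
# the writes replicate, so each slave recomputes this_crc on its own data.
run mk-table-checksum h=master.example.com --replicate test.checksum

# On a slave, any row where this_crc differs from master_crc is drift.
run mysql -h slave.example.com -e \
    "SELECT db, tbl, chunk FROM test.checksum WHERE master_crc <> this_crc"
```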

-- 
Rob Wultsch
wult...@gmail.com




Re: better way to backup 50 Gig db?

2010-04-20 Thread Claudio Nanni
Where is Falcon? (Sorry)

the only way to have a really consistent binary backup is to shut down the
server.
the best way to shut down a server is to have a slave dedicated to backups
that you can shut down at any time.

If you have only the contents of the database folders under [datadir], that is
not enough; you need the full [datadir] to even dream of restoring
your db, unless you use only MyISAM tables, in which case you are luckier.

The bottom line is:  Don't Dream, Prove it. Or it will become a nightmare
sooner or later.


Ciao!

Claudio



2010/4/20 Gavin Towey gto...@ffn.com

 More good ideas from Andrew!

 Just a note though, I noticed someone added replication to a slave as a
 backup option.  I really discourage that.  Replication makes no guarantees
 that the data on your slave is the same as the data on your master.  Unless
 you're also checking consistency, a slave should be treated as a somewhat
 unreliable copy of your data.

 Regards,
 Gavin Towey

 -Original Message-
 From: andrew.2.mo...@nokia.com [mailto:andrew.2.mo...@nokia.com]
 Sent: Tuesday, April 20, 2010 2:08 AM
 To: li...@netrogenic.com
 Cc: mysql@lists.mysql.com
 Subject: RE: better way to backup 50 Gig db?

 I would also recommend looking into some 3rd party tools.

 http://www.percona.com/docs/wiki/percona-xtrabackup:start - Backup Innodb,
 MyISAM and XtraDB engines.
 http://www.maatkit.org/ - Packed with useful features inc a parallel
 dump/import.

 There's some great features in both products. I will leave you to do your
 own research into the tools as knowing their features will benefit you.

 Best wishes

 Andy
 
 From: ext Jay Ess [li...@netrogenic.com]
 Sent: 20 April 2010 09:06
 Cc: mysql@lists.mysql.com
 Subject: Re: better way to backup 50 Gig db?

 Gavin Towey wrote:
  What Shawn said is important.
 
  Better options:
  1. Use InnoDB, and then you can make a consistent backup with `mysqldump
  --single-transaction > backup.sql` and keep your db server actively
 responding to requests at the same time.
 
  2. Use something like LVM to create filesystem snapshots which allow you
 to backup your database, while only keeping a read lock on the db for a
 second or so.
 
 3. Set up replication and backup the replicated data using any of the
 above method.












-- 
Claudio


RE: better way to backup 50 Gig db?

2010-04-20 Thread Gavin Towey
You can make binary backups from the master using filesystem snapshots.  You 
only need to hold a global read lock for a split second.

Regards,
Gavin Towey

From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Tuesday, April 20, 2010 1:19 PM
To: Gavin Towey
Cc: andrew.2.mo...@nokia.com; li...@netrogenic.com; mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Where is Falcon (Sorry)

the only way to have a really consistent binary backup is to shut down the 
server.
the best way to shut down a server is to have a slave dedicated to backups that 
you can shutdown any time.

if you have only the content of the database folders under [datadir] it is not 
enough, you need the full [datadir] to 'dream' to restore
your db, unless you only use MyISAM tables, then you are more lucky.

The bottom line is:  Don't Dream, Prove it. Or it will become a nightmare 
sooner or later.


Ciao!

Claudio



2010/4/20 Gavin Towey gto...@ffn.com
More good ideas from Andrew!

Just a note though, I noticed someone added replication to a slave as a backup 
option.  I really discourage that.  Replication makes no guarantees that the 
data on your slave is the same as the data on your master.  Unless you're also 
checking consistency, a slave should be treated as a somewhat unreliable copy 
of your data.

Regards,
Gavin Towey

-Original Message-
From: andrew.2.mo...@nokia.com [mailto:andrew.2.mo...@nokia.com]
Sent: Tuesday, April 20, 2010 2:08 AM
To: li...@netrogenic.com
Cc: mysql@lists.mysql.com
Subject: RE: better way to backup 50 Gig db?

I would also recommend looking into some 3rd party tools.

http://www.percona.com/docs/wiki/percona-xtrabackup:start - Backup Innodb, 
MyISAM and XtraDB engines.
http://www.maatkit.org/ - Packed with useful features inc a parallel 
dump/import.

There's some great features in both products. I will leave you to do your own 
research into the tools as knowing their features will benefit you.

Best wishes

Andy

From: ext Jay Ess [li...@netrogenic.com]
Sent: 20 April 2010 09:06
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Gavin Towey wrote:
 What Shawn said is important.

 Better options:
 1. Use InnoDB, and then you can make a consistent backup with `mysqldump 
 --single-transaction > backup.sql` and keep your db server actively 
 responding to requests at the same time.

 2. Use something like LVM to create filesystem snapshots which allow you to 
 back up your database, while only keeping a read lock on the db for a second 
 or so.

3. Set up replication and back up the replicated data using any of the
above methods.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=andrew.2.mo...@nokia.com



This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com




--
Claudio



better way to backup 50 Gig db?

2010-04-19 Thread Mitchell Maltenfort
I'm using MySQL to manage data on my computer .

The total data is 50 Gig in MyISAM folders.

As I type, I already have the folder with the myd, frm, etc being
copied offsite.  As I understand it, if this computer dies tomorrow, I
can reinstall MySQL on a new computer, drag over the archive, stick
the folder under data and I'm back in business.

Or am I dreaming?

I'd rather be corrected now than find out the hard way.

Any advice?




Re: better way to backup 50 Gig db?

2010-04-19 Thread Shawn Green

Mitchell Maltenfort wrote:

I'm using MySQL to manage data on my computer .

The total data is 50 Gig in MyISAM folders.

As I type, I already have the folder with the myd, frm, etc being
copied offsite.  As I understand it, if this computer dies tomorrow, I
can reinstall MySQL on a new computer, drag over the archive, stick
the folder under data and I'm back in business.

Or am I dreaming?

I'd rather be corrected now than find out the hard way.

Any advice?



Did you remember to FLUSH (with read lock) those tables before you 
started copying? Alternatively, you could have shut down your MySQL 
instance, too.


If not, then the in-memory and on-disk images of your tables are out of 
sync. You may be copying away data that will appear corrupted after 
restoration.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN




RE: better way to backup 50 Gig db?

2010-04-19 Thread Gavin Towey
What Shawn said is important.

Better options:
1. Use InnoDB, and then you can make a consistent backup with `mysqldump 
--single-transaction > backup.sql` and keep your db server actively responding 
to requests at the same time.

2. Use something like LVM to create filesystem snapshots which allow you to 
back up your database, while only keeping a read lock on the db for a second or 
so.
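Option 1 can be sketched as a nightly job. The database name and output name are invented for illustration, and the dump is guarded so the naming logic runs even on a machine with no server installed:

```shell
#!/bin/sh
# Sketch of option 1 (db name and output path are assumptions).
DB=mydb
OUT="backup_$(date +%Y%m%d).sql.gz"
# Only attempt the dump if a client and a reachable server exist here:
if command -v mysqldump >/dev/null 2>&1 && mysqladmin ping >/dev/null 2>&1; then
  # --single-transaction reads a consistent InnoDB snapshot without locking writers
  mysqldump --single-transaction "$DB" | gzip > "$OUT"
fi
echo "$OUT" > backup_name.txt
```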



-Original Message-
From: Shawn Green [mailto:shawn.l.gr...@oracle.com]
Sent: Monday, April 19, 2010 3:24 PM
To: Mitchell Maltenfort
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Mitchell Maltenfort wrote:
 I'm using MySQL to manage data on my computer .

 The total data is 50 Gig in MyISAM folders.

 As I type, I already have the folder with the myd, frm, etc being
 copied offsite.  As I understand it, if this computer dies tomorrow, I
 can reinstall MySQL on a new computer, drag over the archive, stick
 the folder under data and I'm back in business.

 Or am I dreaming?

 I'd rather be corrected now than find out the hard way.

 Any advice?


Did you remember to FLUSH (with read lock) those tables before you
started copying? Alternatively, you could have shut down your MySQL
instance, too.

If not, then the in-memory and on-disk images of your tables are out of
sync. You may be copying away data that will appear corrupted after
restoration.

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN







Re: Best way to synchronize two database schemas

2010-01-23 Thread Johan De Meersman
As Mr. Withers also indicated, I meant prefixing the *filename* of each
change script with the date, and of course also keeping all changes in a
single (or limited set of) folder(s), so you can easily collect and
sequentially apply all of them when release time comes.

Also, it is preferable that database structure changes are handled by one
person or team, usually the database administrator(s) if you have them.

On Fri, Jan 22, 2010 at 11:12 PM, Daevid Vincent dae...@daevid.com wrote:

  Huh? This makes no sense.

 There is one single file UPDATES.sql that has ALL the changes in it.

 In order to prefix a line, it would need to be commented. It would also
 need to be one single change per line. Highly inefficient.

 This still doesn't solve the fact that different developers checkout or
 update the code on different days/times. How would you automate the fact
 that b/c I did an update today, which specific UPDATE.sql commands need to
 be run since some have already been after the last update. You'd need to
 store a file somewhere with a date stamp.

 Now, we could split each block of SQL commands into separate files and the
 script could check file dates, but you still have to save off the last time
 you updated somewhere.

 Anyways, it's just easier for a developer to mentally keep track of what
 the last SQL they remember running was. And if they forgot, all they have to
 do is check their version of the database schema against what the SQL
 command wants to do. A diff if you will.

 ÐÆ5ÏÐ
  *Light travels faster than sound. This is why some people appear bright
 until you hear them speak.*

  --
 *From:* vegiv...@gmail.com [mailto:vegiv...@gmail.com] *On Behalf Of *Johan
 De Meersman
 *Sent:* Friday, January 22, 2010 1:06 AM
 *To:* Daevid Vincent
 *Cc:* mysql@lists.mysql.com; Price, Randall

 *Subject:* Re: Best way to synchronize two database schemas

Simple: prefix the change files with yyyymmddhhmm formatted timestamps, so
they sort correctly :-)

On Thu, Jan 21, 2010 at 11:31 PM, Daevid Vincent dae...@daevid.com wrote:

 Exactly what Johan said.

 I keep structure like so:

 develo...@mypse /var/www/dart2/UPDATES $ ll
 -rw-rw-rw- 1 developer developer 551097 2009-12-22 23:16
 airports_city_country.sql
 drwxrwxrwx 2 developer developer   4096 2010-01-21 04:51 CVS
 -rw-rw-rw- 1 developer developer   3063 2009-07-15 01:40 fix_airports.php
 -rw-r--r-- 1 developer developer  23414 2010-01-21 03:52
 ps_access_to_mysql.sql
 -rw-rw-rw- 1 developer developer  12259 2010-01-06 05:22 UPDATES.sql

 Any and all changes to DB schema are in the UPDATES/UPDATES.sql file and
 each are commented with the date of the change, who did it, and why. This
 file/dir is part of your repository, so as each developer checks out, they
 would run the appropriate part of the script as well. I've not yet found a
 good (and safe) way to automate this process.

 /* 2009-06-01 [dv] fix the privileges for various users as they were all
 whacked out
  * http://dev.mysql.com/doc/refman/5.0/en/grant.html
  */
 REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'foo'@'10.10.10.%';
 ...

 If you're using Subversion, you may find my Subversion Flagged Update
 script helpful...
 http://daevid.com/content/examples/snippets.php

 I also can vouch for SQLyog. Aside from being the absolute BEST MySQL GUI
 I've ever used, it has a feature to create the schema differences between
 two live databases. I've used it before to get a DEV and PROD server in
 sync, so that I could then implement the above methodology.

  -Original Message-
  From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On
  Behalf Of Johan De Meersman
  Sent: Thursday, January 21, 2010 10:35 AM
  To: Price, Randall
  Cc: mysql@lists.mysql.com
  Subject: Re: Best way to synchronize two database schemas
 
  The best way is to keep track of all individual changes to your
  staging environment, including fire-and-forget style scripts; and
  apply those to your production environment as needed. This is part of
  the process of change management, and generally a very good idea :-)
 
  Lacking that, there are several tools that can generate a differential
  script to do exactly this. I don't really use them, but I seem to
  remember that SQLyog and some expensive but excellent Quest tool could
  do it.
 
  On 1/21/10, Price, Randall randall.pr...@vt.edu wrote:
   I have a two databases, one in a production environment
  (let's call it
   db_prod) and the other in a testing environments (Let's
  call it db_test).
  
   What is the best way to synchronize the database schemas?
  db_test has had a
   few indexes and constraints added to several tables and I
  need to generate a
   MySQL script to apply these changes to db_prod.  So
  basically I want to dump
   the schemas of the two database, compare, and generate the
  necessary script
   to apply to db_prod.
  
   Thanks,
   Randall Price
  
  
 
 
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy

Re: Best way to synchronize two database schemas

2010-01-23 Thread Alexander Kolesen
One of the ways is to keep the db schema under a revision control system and
update it on a schedule:

% crontab -l
0 * * * * mysqldump testdb --no-data > testdb_schema.sql && svn ci -m
"db schema: `date`" > /dev/null

 I have a two databases, one in a production environment (let's call it 
 db_prod) and the other in a testing environments (Let's call it db_test).

 What is the best way to synchronize the database schemas?  db_test has had a 
 few indexes and constraints added to several tables and I need to generate a 
 MySQL script to apply these changes to db_prod.  So basically I want to dump 
 the schemas of the two database, compare, and generate the necessary script 
 to apply to db_prod.

 Thanks,
 Randall Price






Re: Best way to synchronize two database schemas

2010-01-23 Thread fsb
On 1/21/10 12:03 PM, Price, Randall randall.pr...@vt.edu wrote:

 I have a two databases, one in a production environment (let's call it
 db_prod) and the other in a testing environments (Let's call it db_test).
 
 What is the best way to synchronize the database schemas?  db_test has had a
 few indexes and constraints added to several tables and I need to generate a
 MySQL script to apply these changes to db_prod.  So basically I want to dump
 the schemas of the two database, compare, and generate the necessary script to
 apply to db_prod.

I'd use Export structure in phpMyAdmin on both dbs, then run diff on the
two export files, and write a script by hand based on the diff.
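The dump-and-diff idea can be sketched like this. The two schema files are faked with `printf` here; in practice they would come from `mysqldump --no-data` (or phpMyAdmin's structure export) on each database:

```shell
#!/bin/sh
# Stand-in schema dumps; real ones would come from mysqldump --no-data.
printf 'CREATE TABLE t (\n  id INT\n);\n' > prod_schema.sql
printf 'CREATE TABLE t (\n  id INT,\n  KEY idx_id (id)\n);\n' > test_schema.sql
# diff exits non-zero when the files differ, so don't let that abort the job.
diff -u prod_schema.sql test_schema.sql > schema_changes.diff || true
# Lines starting with '+' are what db_prod is missing; they seed the hand-written script.
grep '^+' schema_changes.diff | grep -v '^+++'
```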






Re: Best way to synchronize two database schemas

2010-01-23 Thread Hassan Schroeder
 What is the best way to synchronize the database schemas?  db_test has had a
 few indexes and constraints added to several tables and I need to generate a
 MySQL script to apply these changes to db_prod.

Ruby on Rails comes with tools to dump and load db schemas;
it's trivial to create a skeleton app and point it at your DB if you
want to try it out.

Of course, Rails also uses what are called migrations to make
changes to existing DBs, which keeps you out of this problem in
the first place :-)

I haven't looked, but I wouldn't be surprised if other comparable
(i.e., Rails clone) platforms have similar features.

FWIW,
-- 
Hassan Schroeder  hassan.schroe...@gmail.com
twitter: @hassan




Re: Best way to synchronize two database schemas

2010-01-23 Thread Martijn Tonies

Hi,

Have you checked out our tool Database Workbench yet? It
includes a Schema Compare tool that generates a script.

See www.upscene.com

With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

Database questions? Check the forum:
http://www.databasedevelopmentforum.com



I have a two databases, one in a production environment (let's call it 
db_prod) and the other in a testing environments (Let's call it db_test).


What is the best way to synchronize the database schemas?  db_test has had a 
few indexes and constraints added to several tables and I need to generate a 
MySQL script to apply these changes to db_prod.  So basically I want to dump 
the schemas of the two database, compare, and generate the necessary script 
to apply to db_prod.


Thanks,
Randall Price






Re: Best way to synchronize two database schemas

2010-01-22 Thread Johan De Meersman
Simple: prefix the change files with yyyymmddhhmm formatted timestamps, so
they sort correctly :-)
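A minimal sketch of that naming scheme (the description part of the name is an invented example; the year goes first so a plain lexical sort is also chronological):

```shell
#!/bin/sh
# Build a change-script filename with a sortable timestamp prefix.
STAMP=$(date +%Y%m%d%H%M)                # e.g. 201001220805
NAME="${STAMP}_add_index_on_orders.sql"  # description is an invented example
echo "$NAME" > script_name.txt
# A plain `ls *.sql | sort` now yields the scripts in creation order.
cat script_name.txt
```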

On Thu, Jan 21, 2010 at 11:31 PM, Daevid Vincent dae...@daevid.com wrote:

 Exactly what Johan said.

 I keep structure like so:

 develo...@mypse /var/www/dart2/UPDATES $ ll
 -rw-rw-rw- 1 developer developer 551097 2009-12-22 23:16
 airports_city_country.sql
 drwxrwxrwx 2 developer developer   4096 2010-01-21 04:51 CVS
 -rw-rw-rw- 1 developer developer   3063 2009-07-15 01:40 fix_airports.php
 -rw-r--r-- 1 developer developer  23414 2010-01-21 03:52
 ps_access_to_mysql.sql
 -rw-rw-rw- 1 developer developer  12259 2010-01-06 05:22 UPDATES.sql

 Any and all changes to DB schema are in the UPDATES/UPDATES.sql file and
 each are commented with the date of the change, who did it, and why. This
 file/dir is part of your repository, so as each developer checks out, they
 would run the appropriate part of the script as well. I've not yet found a
 good (and safe) way to automate this process.

 /* 2009-06-01 [dv] fix the privileges for various users as they were all
 whacked out
  * http://dev.mysql.com/doc/refman/5.0/en/grant.html
  */
 REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'foo'@'10.10.10.%';
 ...

 If you're using Subversion, you may find my Subversion Flagged Update
 script helpful...
 http://daevid.com/content/examples/snippets.php

 I also can vouch for SQLyog. Aside from being the absolute BEST MySQL GUI
 I've ever used, it has a feature to create the schema differences between
 two live databases. I've used it before to get a DEV and PROD server in
 sync, so that I could then implement the above methodology.

  -Original Message-
  From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On
  Behalf Of Johan De Meersman
  Sent: Thursday, January 21, 2010 10:35 AM
  To: Price, Randall
  Cc: mysql@lists.mysql.com
  Subject: Re: Best way to synchronize two database schemas
 
  The best way is to keep track of all individual changes to your
  staging environment, including fire-and-forget style scripts; and
  apply those to your production environment as needed. This is part of
  the process of change management, and generally a very good idea :-)
 
  Lacking that, there are several tools that can generate a differential
  script to do exactly this. I don't really use them, but I seem to
  remember that SQLyog and some expensive but excellent Quest tool could
  do it.
 
  On 1/21/10, Price, Randall randall.pr...@vt.edu wrote:
   I have a two databases, one in a production environment
  (let's call it
   db_prod) and the other in a testing environments (Let's
  call it db_test).
  
   What is the best way to synchronize the database schemas?
  db_test has had a
   few indexes and constraints added to several tables and I
  need to generate a
   MySQL script to apply these changes to db_prod.  So
  basically I want to dump
   the schemas of the two database, compare, and generate the
  necessary script
   to apply to db_prod.
  
   Thanks,
   Randall Price
  
  
 
 
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/mysql?unsub=dae...@daevid.com
 


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be




-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


RE: Best way to synchronize two database schemas

2010-01-22 Thread Daevid Vincent
Huh? This makes no sense.
 
There is one single file UPDATES.sql that has ALL the changes in it.
 
In order to prefix a line, it would need to be commented. It would also
need to be one single change per line. Highly inefficient.
 
This still doesn't solve the fact that different developers checkout or
update the code on different days/times. How would you automate the fact
that b/c I did an update today, which specific UPDATE.sql commands need to
be run since some have already been after the last update. You'd need to
store a file somewhere with a date stamp.
 
Now, we could split each block of SQL commands into separate files and the
script could check file dates, but you still have to save off the last time
you updated somewhere.
 
Anyways, it's just easier for a developer to mentally keep track of what
the last SQL they remember running was. And if they forgot, all they have
to do is check their version of the database schema against what the SQL
command wants to do. A diff if you will.
 
ÐÆ5ÏÐ 
Light travels faster than sound. This is why some people appear bright
until you hear them speak.

  _  

From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
Meersman
Sent: Friday, January 22, 2010 1:06 AM
To: Daevid Vincent
Cc: mysql@lists.mysql.com; Price, Randall
Subject: Re: Best way to synchronize two database schemas


Simple: prefix the change files with yyyymmddhhmm formatted timestamps, so
they sort correctly :-)


On Thu, Jan 21, 2010 at 11:31 PM, Daevid Vincent dae...@daevid.com wrote:


Exactly what Johan said.

I keep structure like so:

develo...@mypse /var/www/dart2/UPDATES $ ll
-rw-rw-rw- 1 developer developer 551097 2009-12-22 23:16
airports_city_country.sql
drwxrwxrwx 2 developer developer   4096 2010-01-21 04:51 CVS
-rw-rw-rw- 1 developer developer   3063 2009-07-15 01:40 fix_airports.php
-rw-r--r-- 1 developer developer  23414 2010-01-21 03:52
ps_access_to_mysql.sql
-rw-rw-rw- 1 developer developer  12259 2010-01-06 05:22 UPDATES.sql

Any and all changes to DB schema are in the UPDATES/UPDATES.sql file and
each are commented with the date of the change, who did it, and why. This
file/dir is part of your repository, so as each developer checks out, they
would run the appropriate part of the script as well. I've not yet found a
good (and safe) way to automate this process.

/* 2009-06-01 [dv] fix the privileges for various users as they were all
whacked out
 * http://dev.mysql.com/doc/refman/5.0/en/grant.html
 */
REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'foo'@'10.10.10.%';
...

If you're using Subversion, you may find my Subversion Flagged Update
script helpful...
http://daevid.com/content/examples/snippets.php

I also can vouch for SQLyog. Aside from being the absolute BEST MySQL GUI
I've ever used, it has a feature to create the schema differences between
two live databases. I've used it before to get a DEV and PROD server in
sync, so that I could then implement the above methodology.


 -Original Message-
 From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On
 Behalf Of Johan De Meersman
 Sent: Thursday, January 21, 2010 10:35 AM
 To: Price, Randall
 Cc: mysql@lists.mysql.com
 Subject: Re: Best way to synchronize two database schemas

 The best way is to keep track of all individual changes to your
 staging environment, including fire-and-forget style scripts; and
 apply those to your production environment as needed. This is part of
 the process of change management, and generally a very good idea :-)

 Lacking that, there are several tools that can generate a differential
 script to do exactly this. I don't really use them, but I seem to
 remember that SQLyog and some expensive but excellent Quest tool could
 do it.

 On 1/21/10, Price, Randall randall.pr...@vt.edu wrote:
  I have a two databases, one in a production environment
 (let's call it
  db_prod) and the other in a testing environments (Let's
 call it db_test).
 
  What is the best way to synchronize the database schemas?
 db_test has had a
  few indexes and constraints added to several tables and I
 need to generate a
  MySQL script to apply these changes to db_prod.  So
 basically I want to dump
  the schemas of the two database, compare, and generate the
 necessary script
  to apply to db_prod.
 
  Thanks,
  Randall Price
 
 


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=dae...@daevid.com









-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




Re: Best way to synchronize two database schemas

2010-01-22 Thread Johnny Withers
I handle this by having each developer put the scripts into a folder named:

YYYYMMDD

Where the folder name is the date of the next scheduled release.

Under that folder each script is named YYYYMMDD_NNN_DESCRIPTION.sql

Where YYYYMMDD in this case is the date the script was created and NNN is
the sequence # of the script if there is more than one script created on a
single day.

When release time comes, we simply combine all scripts in that release
folder.

If, in the event, we have to bring a really old database up to date, we
simply combine all scripts in all folders since the last release date on
that database up to now.
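A sketch of that layout and the release-time combine step; the folder and script names are invented examples (dates written with the year included so names sort across years):

```shell
#!/bin/sh
# Sketch of the folder-per-release layout with two sample change scripts.
RELEASE=20100201
mkdir -p "$RELEASE"
printf 'ALTER TABLE t ADD KEY idx_c (c);\n' > "$RELEASE/20100115_001_add_index.sql"
printf 'ALTER TABLE t ADD COLUMN d DATE;\n' > "$RELEASE/20100120_001_add_column.sql"
# Release time: glob expansion is sorted, which here equals chronological order.
cat "$RELEASE"/*.sql > "release_$RELEASE.sql"
wc -l < "release_$RELEASE.sql"
```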

-JW


On Fri, Jan 22, 2010 at 4:12 PM, Daevid Vincent dae...@daevid.com wrote:

 Huh? This makes no sense.

 There is one single file UPDATES.sql that has ALL the changes in it.

 In order to prefix a line, it would need to be commented. It would also
 need to be one single change per line. Highly inefficient.

 This still doesn't solve the fact that different developers checkout or
 update the code on different days/times. How would you automate the fact
 that b/c I did an update today, which specific UPDATE.sql commands need to
 be run since some have already been after the last update. You'd need to
 store a file somewhere with a date stamp.

 Now, we could split each block of SQL commands into separate files and the
 script could check file dates, but you still have to save off the last time
 you updated somewhere.

 Anyways, it's just easier for a developer to mentally keep track of what
 the last SQL they remember running was. And if they forgot, all they have
 to do is check their version of the database schema against what the SQL
 command wants to do. A diff if you will.

 ÐÆ5ÏÐ
 Light travels faster than sound. This is why some people appear bright
 until you hear them speak.

  _

 From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
 Meersman
 Sent: Friday, January 22, 2010 1:06 AM
 To: Daevid Vincent
 Cc: mysql@lists.mysql.com; Price, Randall
 Subject: Re: Best way to synchronize two database schemas


 Simple: prefix the change files with yyyymmddhhmm formatted timestamps, so
 they sort correctly :-)


 On Thu, Jan 21, 2010 at 11:31 PM, Daevid Vincent dae...@daevid.com
 wrote:


 Exactly what Johan said.

 I keep structure like so:

 develo...@mypse /var/www/dart2/UPDATES $ ll
 -rw-rw-rw- 1 developer developer 551097 2009-12-22 23:16
 airports_city_country.sql
 drwxrwxrwx 2 developer developer   4096 2010-01-21 04:51 CVS
 -rw-rw-rw- 1 developer developer   3063 2009-07-15 01:40 fix_airports.php
 -rw-r--r-- 1 developer developer  23414 2010-01-21 03:52
 ps_access_to_mysql.sql
 -rw-rw-rw- 1 developer developer  12259 2010-01-06 05:22 UPDATES.sql

 Any and all changes to DB schema are in the UPDATES/UPDATES.sql file and
 each are commented with the date of the change, who did it, and why. This
 file/dir is part of your repository, so as each developer checks out, they
 would run the appropriate part of the script as well. I've not yet found a
 good (and safe) way to automate this process.

 /* 2009-06-01 [dv] fix the privileges for various users as they were all
 whacked out
  * http://dev.mysql.com/doc/refman/5.0/en/grant.html
  */
 REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'foo'@'10.10.10.%';
 ...

 If you're using Subversion, you may find my Subversion Flagged Update
 script helpful...
 http://daevid.com/content/examples/snippets.php

 I also can vouch for SQLyog. Aside from being the absolute BEST MySQL GUI
 I've ever used, it has a feature to create the schema differences between
 two live databases. I've used it before to get a DEV and PROD server in
 sync, so that I could then implement the above methodology.


  -Original Message-
  From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On
  Behalf Of Johan De Meersman
  Sent: Thursday, January 21, 2010 10:35 AM
  To: Price, Randall
  Cc: mysql@lists.mysql.com
  Subject: Re: Best way to synchronize two database schemas
 
  The best way is to keep track of all individual changes to your
  staging environment, including fire-and-forget style scripts; and
  apply those to your production environment as needed. This is part of
  the process of change management, and generally a very good idea :-)
 
  Lacking that, there are several tools that can generate a differential
  script to do exactly this. I don't really use them, but I seem to
  remember that SQLyog and some expensive but excellent Quest tool could
  do it.
 
  On 1/21/10, Price, Randall randall.pr...@vt.edu wrote:
   I have a two databases, one in a production environment
  (let's call it
   db_prod) and the other in a testing environments (Let's
  call it db_test).
  
   What is the best way to synchronize the database schemas?
  db_test has had a
   few indexes and constraints added to several tables and I
  need to generate a
   MySQL script to apply these changes to db_prod.  So
  basically I want to dump
   the schemas

Best way to synchronize two database schemas

2010-01-21 Thread Price, Randall
I have a two databases, one in a production environment (let's call it db_prod) 
and the other in a testing environments (Let's call it db_test).

What is the best way to synchronize the database schemas?  db_test has had a 
few indexes and constraints added to several tables and I need to generate a 
MySQL script to apply these changes to db_prod.  So basically I want to dump 
the schemas of the two database, compare, and generate the necessary script to 
apply to db_prod.

Thanks,
Randall Price



Re: Best way to synchronize two database schemas

2010-01-21 Thread Johan De Meersman
The best way is to keep track of all individual changes to your
staging environment, including fire-and-forget style scripts; and
apply those to your production environment as needed. This is part of
the process of change management, and generally a very good idea :-)

Lacking that, there are several tools that can generate a differential
script to do exactly this. I don't really use them, but I seem to
remember that SQLyog and some expensive but excellent Quest tool could
do it.

On 1/21/10, Price, Randall randall.pr...@vt.edu wrote:
 I have a two databases, one in a production environment (let's call it
 db_prod) and the other in a testing environments (Let's call it db_test).

 What is the best way to synchronize the database schemas?  db_test has had a
 few indexes and constraints added to several tables and I need to generate a
 MySQL script to apply these changes to db_prod.  So basically I want to dump
 the schemas of the two database, compare, and generate the necessary script
 to apply to db_prod.

 Thanks,
 Randall Price




-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




RE: Best way to synchronize two database schemas

2010-01-21 Thread Daevid Vincent
Exactly what Johan said.

I keep structure like so:

develo...@mypse /var/www/dart2/UPDATES $ ll
-rw-rw-rw- 1 developer developer 551097 2009-12-22 23:16
airports_city_country.sql
drwxrwxrwx 2 developer developer   4096 2010-01-21 04:51 CVS
-rw-rw-rw- 1 developer developer   3063 2009-07-15 01:40 fix_airports.php
-rw-r--r-- 1 developer developer  23414 2010-01-21 03:52
ps_access_to_mysql.sql
-rw-rw-rw- 1 developer developer  12259 2010-01-06 05:22 UPDATES.sql 

Any and all changes to DB schema are in the UPDATES/UPDATES.sql file and
each are commented with the date of the change, who did it, and why. This
file/dir is part of your repository, so as each developer checks out, they
would run the appropriate part of the script as well. I've not yet found a
good (and safe) way to automate this process.

/* 2009-06-01 [dv] fix the privileges for various users as they were all
whacked out
 * http://dev.mysql.com/doc/refman/5.0/en/grant.html
 */
REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'foo'@'10.10.10.%';
...

If you're using Subversion, you may find my Subversion Flagged Update
script helpful...
http://daevid.com/content/examples/snippets.php

I can also vouch for SQLyog. Aside from being the absolute BEST MySQL GUI
I've ever used, it has a feature to generate the schema differences between
two live databases. I've used it before to get a DEV and PROD server in
sync, so that I could then implement the above methodology.
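[Editorial sketch.] The "run the appropriate part of the script" step above can be made safer by recording which updates have already been applied. A minimal sketch, using Python's sqlite3 as a stand-in for MySQL; the schema_updates table and the update names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Track which named updates have already been applied to this schema copy
conn.execute("CREATE TABLE schema_updates (name TEXT PRIMARY KEY)")

# Ordered (name, statement) pairs -- the entries of an UPDATES.sql file
updates = [
    ("2009-06-01_fix_privileges", "CREATE TABLE privileges_fixed (id INT)"),
    ("2010-01-06_add_airports",   "CREATE TABLE airports (code TEXT)"),
]

def apply_pending(conn, updates):
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_updates")}
    for name, stmt in updates:
        if name in applied:
            continue          # already ran against this copy of the schema
        conn.execute(stmt)
        conn.execute("INSERT INTO schema_updates VALUES (?)", (name,))
    conn.commit()

apply_pending(conn, updates)
apply_pending(conn, updates)  # second run is a no-op: nothing is re-applied
```

Each developer's checkout then only ever applies the updates it has not seen, which is exactly the manual bookkeeping the UPDATES.sql comments are doing today.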

 -Original Message-
 From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On 
 Behalf Of Johan De Meersman
 Sent: Thursday, January 21, 2010 10:35 AM
 To: Price, Randall
 Cc: mysql@lists.mysql.com
 Subject: Re: Best way to synchronize two database schemas
 
 The best way is to keep track of all individual changes to your
 staging environment, including fire-and-forget style scripts; and
 apply those to your production environment as needed. This is part of
 the process of change management, and generally a very good idea :-)
 
 Lacking that, there are several tools that can generate a differential
 script to do exactly this. I don't really use them, but I seem to
 remember that SQLyog and some expensive but excellent Quest tool could
 do it.
 
 On 1/21/10, Price, Randall randall.pr...@vt.edu wrote:
  I have two databases, one in a production environment (let's call it
  db_prod) and the other in a testing environment (let's call it db_test).
 
  What is the best way to synchronize the database schemas?  db_test has had a
  few indexes and constraints added to several tables and I need to generate a
  MySQL script to apply these changes to db_prod.  So basically I want to dump
  the schemas of the two databases, compare, and generate the necessary script
  to apply to db_prod.
 
  Thanks,
  Randall Price
 
 
 
 





Re: Is there a better way than this?

2009-12-28 Thread Tim Molter
Thanks for the replies!

Chris, yeah, that's the first thing I tried. The problem though is
that SQL statement also returns Row #2 (x=1, y=25) since y=25 is
associated with both x=1 and x=2. I want it only to return row #3.

As John said, it may not be possible with a simple SQL statement.

My table is used as a mapping table for an M to N relationship similar
as described here:
http://stackoverflow.com/questions/1680855/sql-select-with-mn-relationship

My idea was to get a set of Xs with SELECT X FROM `A` WHERE Y IN (25)
and another set of Xs with SELECT X FROM `A` WHERE Y IN (24),
then return the common elements between the two sets and use THAT set
to query X again.

Like I said, that approach works but I thought there might be a more
elegant way.





 Unless I am missing something, this should work.

 SELECT DISTINCT X FROM `A`
 WHERE Y IN (25)
 AND Y NOT IN (24)

 Chris W


 Tim Molter wrote:

 I'm new to MySQL and I'm looking for some guidance. I have a table A,
 with two columns X and Y with the following data:

 |   X    |    Y    |
    1          24
    1          25
    2          25
    2          26
    3          27

 I want my SQL query to return 2 following this verbose logic: SELECT
 DISTINCT X FROM A WHERE Y equals 25 and Y also does NOT equal 24.

 I came up with the following SQL, which gives me my desired result,
 but is there a better way to do it? Can it be achieved using MINUS or
 UNION somehow?

 BTW, I'm using IN here because I intend to replace the single numbers
 (24 and 25) with arrays that have 0 to N members.

 SELECT DISTINCT X FROM `A`

 WHERE X IN (
 SELECT X FROM `A` WHERE Y IN (25)
 )

 AND X NOT IN (
 SELECT X FROM `A` WHERE Y IN (24)
 )

 Thanks!






-- 
~Tim
http://obscuredclarity.blogspot.com/




Re: Is there a better way than this?

2009-12-28 Thread Michael Dykman
Gavin,

very nice,

 - michael dykman

On Mon, Dec 28, 2009 at 2:16 PM, Gavin Towey gto...@ffn.com wrote:
 No, that won't work; remember that the WHERE clause is applied to each row 
 individually -- if y is 25, then it cannot also be 24 at the same time, so 
 the AND condition has no meaning there.  What you're asking for there is the 
 set of all x that have 25 as a y value, which is 1 and 2.

 You need to use aggregates to create conditions that are meaningful for all x 
 with the same value:

 SELECT x FROM a GROUP BY x HAVING sum(y=25) and not sum(y=24);

 Regards,
 Gavin Towey

 -Original Message-
 From: Chris W [mailto:4rfv...@cox.net]
 Sent: Sunday, December 27, 2009 6:02 PM
 To: Tim Molter
 Cc: mysql@lists.mysql.com
 Subject: Re: Is there a better way than this?

 Unless I am missing something, this should work.

 SELECT DISTINCT X FROM `A`
 WHERE Y IN (25)
 AND Y NOT IN (24)

 Chris W


 Tim Molter wrote:
 I'm new to MySQL and I'm looking for some guidance. I have a table A,
 with two columns X and Y with the following data:

 |   X    |    Y    |
     1          24
     1          25
     2          25
     2          26
     3          27

 I want my SQL query to return 2 following this verbose logic: SELECT
 DISTINCT X FROM A WHERE Y equals 25 and Y also does NOT equal 24.

 I came up with the following SQL, which gives me my desired result,
 but is there a better way to do it? Can it be achieved using MINUS or
 UNION somehow?

 BTW, I'm using IN here because I intend to replace the single numbers
 (24 and 25) with arrays that have 0 to N members.

 SELECT DISTINCT X FROM `A`

 WHERE X IN (
 SELECT X FROM `A` WHERE Y IN (25)
 )

 AND X NOT IN (
 SELECT X FROM `A` WHERE Y IN (24)
 )

 Thanks!





 This message contains confidential information and is intended only for the 
 individual named.  If you are not the named addressee, you are notified that 
 reviewing, disseminating, disclosing, copying or distributing this e-mail is 
 strictly prohibited.  Please notify the sender immediately by e-mail if you 
 have received this e-mail by mistake and delete this e-mail from your system. 
 E-mail transmission cannot be guaranteed to be secure or error-free as 
 information could be intercepted, corrupted, lost, destroyed, arrive late or 
 incomplete, or contain viruses. The sender therefore does not accept 
 liability for any loss or damage caused by viruses or errors or omissions in 
 the contents of this message, which arise as a result of e-mail transmission. 
 [FriendFinder Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, 
 FriendFinder.com






-- 
 - michael dykman
 - mdyk...@gmail.com

May you live every day of your life.
Jonathan Swift

Larry's First Law of Language Redesign: Everyone wants the colon.




RE: Is there a better way than this?

2009-12-28 Thread Gavin Towey
No, that won't work; remember that the WHERE clause is applied to each row 
individually -- if y is 25, then it cannot also be 24 at the same time, so the 
AND condition has no meaning there.  What you're asking for there is the set 
of all x that have 25 as a y value, which is 1 and 2.

You need to use aggregates to create conditions that are meaningful for all x 
with the same value:

SELECT x FROM a GROUP BY x HAVING sum(y=25) and not sum(y=24);

Regards,
Gavin Towey
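[Editorial sketch.] Gavin's aggregate trick is easy to verify. Here it is run against SQLite via Python's sqlite3 (in SQLite, as in MySQL, a comparison like y=25 evaluates to 0 or 1, so SUM counts matching rows per group), using the thread's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x INT, y INT)")
conn.executemany("INSERT INTO a VALUES (?, ?)",
                 [(1, 24), (1, 25), (2, 25), (2, 26), (3, 27)])

# sum(y=25) counts the rows with y=25 in each x group; the HAVING clause
# keeps groups that contain at least one 25 and no 24.
rows = conn.execute(
    "SELECT x FROM a GROUP BY x HAVING sum(y=25) AND NOT sum(y=24)"
).fetchall()
print(rows)  # -> [(2,)]: x=1 is excluded because it also has a y=24 row
```

For the IN-list generalization in the original question, y=25 becomes something like `y IN (25, ...)` inside the SUM.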

-Original Message-
From: Chris W [mailto:4rfv...@cox.net]
Sent: Sunday, December 27, 2009 6:02 PM
To: Tim Molter
Cc: mysql@lists.mysql.com
Subject: Re: Is there a better way than this?

Unless I am missing something, this should work.

SELECT DISTINCT X FROM `A`
WHERE Y IN (25)
AND Y NOT IN (24)

Chris W


Tim Molter wrote:
 I'm new to MySQL and I'm looking for some guidance. I have a table A,
 with two columns X and Y with the following data:

 |   X    |    Y    |
     1          24
     1          25
     2          25
     2          26
     3          27

 I want my SQL query to return 2 following this verbose logic: SELECT
 DISTINCT X FROM A WHERE Y equals 25 and Y also does NOT equal 24.

 I came up with the following SQL, which gives me my desired result,
 but is there a better way to do it? Can it be achieved using MINUS or
 UNION somehow?

 BTW, I'm using IN here because I intend to replace the single numbers
 (24 and 25) with arrays that have 0 to N members.

 SELECT DISTINCT X FROM `A`

 WHERE X IN (
 SELECT X FROM `A` WHERE Y IN (25)
 )

 AND X NOT IN (
 SELECT X FROM `A` WHERE Y IN (24)
 )

 Thanks!









Re: Is there a better way than this?

2009-12-28 Thread DaWiz

This will work:

select distinct X from a as a
where Y in(25)
and
not exists (select X from a as b where a.X = b.X and b.Y in(24))
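[Editorial sketch.] The correlated NOT EXISTS form gives the same answer as the aggregate approach on the thread's data. A quick check via Python's sqlite3 (the outer alias is renamed from the post's `a AS a` to `outer_a` purely for readability):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x INT, y INT)")
conn.executemany("INSERT INTO a VALUES (?, ?)",
                 [(1, 24), (1, 25), (2, 25), (2, 26), (3, 27)])

# Keep x values that have a y=25 row but no y=24 row:
# the subquery is re-evaluated per outer row, correlated on x.
rows = conn.execute("""
    SELECT DISTINCT x FROM a AS outer_a
    WHERE y IN (25)
      AND NOT EXISTS (SELECT 1 FROM a AS b
                      WHERE outer_a.x = b.x AND b.y IN (24))
""").fetchall()
print(rows)  # -> [(2,)]
```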


- Original Message - 
From: Tim Molter tim.mol...@gmail.com

To: mysql@lists.mysql.com
Sent: Sunday, December 27, 2009 4:04 PM
Subject: Is there a better way than this?



I'm new to MySQL and I'm looking for some guidance. I have a table A,
with two columns X and Y with the following data:

|   X    |    Y    |
    1          24
    1          25
    2          25
    2          26
    3          27

I want my SQL query to return 2 following this verbose logic: SELECT
DISTINCT X FROM A WHERE Y equals 25 and Y also does NOT equal 24.

I came up with the following SQL, which gives me my desired result,
but is there a better way to do it? Can it be achieved using MINUS or
UNION somehow?

BTW, I'm using IN here because I intend to replace the single numbers
(24 and 25) with arrays that have 0 to N members.

SELECT DISTINCT X FROM `A`

WHERE X IN (
SELECT X FROM `A` WHERE Y IN (25)
)

AND X NOT IN (
SELECT X FROM `A` WHERE Y IN (24)
)

Thanks!






Is there a better way than this?

2009-12-27 Thread Tim Molter
I'm new to MySQL and I'm looking for some guidance. I have a table A,
with two columns X and Y with the following data:

|   X    |    Y    |
    1          24
    1          25
    2          25
    2          26
    3          27

I want my SQL query to return 2 following this verbose logic: SELECT
DISTINCT X FROM A WHERE Y equals 25 and Y also does NOT equal 24.

I came up with the following SQL, which gives me my desired result,
but is there a better way to do it? Can it be achieved using MINUS or
UNION somehow?

BTW, I'm using IN here because I intend to replace the single numbers
(24 and 25) with arrays that have 0 to N members.

SELECT DISTINCT X FROM `A`

WHERE X IN (
SELECT X FROM `A` WHERE Y IN (25)
)

AND X NOT IN (
SELECT X FROM `A` WHERE Y IN (24)
)

Thanks!




Re: Is there a better way than this?

2009-12-27 Thread John List

On 12/27/2009 06:04 PM, Tim Molter wrote:

I'm new to MySQL and I'm looking for some guidance. I have a table A,
with two columns X and Y with the following data:

|   X    |    Y    |
    1          24
    1          25
    2          25
    2          26
    3          27

I want my SQL query to return 2 following this verbose logic: SELECT
DISTINCT X FROM A WHERE Y equals 25 and Y also does NOT equal 24.

Since y=25 is associated with both x=1 and x=2, there's no way a simple 
select can result in 2.


Perhaps your assignment calls for the count()  of the results?

select count(*) from A where y = 25

Good luck,

John


Re: Is there a better way than this?

2009-12-27 Thread Chris W

Unless I am missing something, this should work.

SELECT DISTINCT X FROM `A`
WHERE Y IN (25)
AND Y NOT IN (24)

Chris W


Tim Molter wrote:

I'm new to MySQL and I'm looking for some guidance. I have a table A,
with two columns X and Y with the following data:

|   X    |    Y    |
    1          24
    1          25
    2          25
    2          26
    3          27

I want my SQL query to return 2 following this verbose logic: SELECT
DISTINCT X FROM A WHERE Y equals 25 and Y also does NOT equal 24.

I came up with the following SQL, which gives me my desired result,
but is there a better way to do it? Can it be achieved using MINUS or
UNION somehow?

BTW, I'm using IN here because I intend to replace the single numbers
(24 and 25) with arrays that have 0 to N members.

SELECT DISTINCT X FROM `A`

WHERE X IN (
SELECT X FROM `A` WHERE Y IN (25)
)

AND X NOT IN (
SELECT X FROM `A` WHERE Y IN (24)
)

Thanks!

  





Best way to purge a table

2009-11-09 Thread Jones, Keven
Hi,

I need to get rid of all the data in one table of my database. The table just
has old data that I no longer need. What is the best way to accomplish this?

If I simply drop the table, what can I do prior to dropping it to ensure
I can recreate the table right after I drop it? I still need the table, just
not the data in it. I'm running out of space.
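[Editorial note: no reply to this question appears in this excerpt. The standard MySQL answer is TRUNCATE TABLE, which empties the table while keeping its definition, so nothing needs to be saved and recreated.] A sketch using Python's sqlite3 as a stand-in (SQLite has no TRUNCATE, so an unqualified DELETE plays its role; the table name is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE old_log (id INTEGER PRIMARY KEY, msg TEXT)")
conn.executemany("INSERT INTO old_log (msg) VALUES (?)",
                 [("a",), ("b",), ("c",)])

# In MySQL: TRUNCATE TABLE old_log;  -- empties the table, keeps the schema.
# SQLite stand-in: an unqualified DELETE.
conn.execute("DELETE FROM old_log")
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM old_log").fetchone()[0]
print(count)  # -> 0 : the table still exists, now empty
```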

Thank you


