InnoDB best practices for ensuring unique tuple where one column can be NULL

2009-04-16 Thread Lev Lvovsky

hello,

assume the following table:

CREATE TABLE t (
id INT UNSIGNED auto_increment PRIMARY KEY,
c1 INT UNSIGNED NOT NULL,
c2 INT UNSIGNED NOT NULL,
c3 INT UNSIGNED,
UNIQUE (c1, c2, c3)
) engine = InnoDB;

Our first issue is that the UNIQUE constraint on (c1,c2,c3) does not  
work in the case that a NULL value for c3 is inserted:

mysql> insert into t (c1, c2, c3) VALUES (1,1,NULL);
Query OK, 1 row affected (0.01 sec)
mysql> insert into t (c1, c2, c3) VALUES (1,1,NULL);
Query OK, 1 row affected (0.01 sec)

Given this behavior, we cannot rely on the UNIQUE constraint to prevent
two otherwise identical rows. In addition to the UNIQUE requirement
above, we *only* want the UNIQUE constraint to be checked when the c3
column has a NULL value, e.g.:


--
insert of (1,1,NULL) and (1,1,NULL): error
insert of (1,1,1) and (1,1,1): ok
--

Clearly the latter case would not be allowed with a UNIQUE(c1,c2,c3)  
constraint.
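
One declarative way to get exactly those semantics is to put the unique
index on a helper column that is 0 when c3 IS NULL and NULL otherwise, so
the index only ever compares the rows where c3 is NULL. This is only a
sketch, not from the original post, and it relies on generated columns,
which require MySQL 5.7+ (on older servers the helper column would have to
be maintained by the application):

CREATE TABLE t (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  c1 INT UNSIGNED NOT NULL,
  c2 INT UNSIGNED NOT NULL,
  c3 INT UNSIGNED,
  -- 0 when c3 IS NULL: at most one such row per (c1, c2).
  -- NULL when c3 has a value: duplicate (c1, c2, c3) rows are allowed,
  -- because NULLs never collide in a unique index.
  c3_is_null TINYINT AS (IF(c3 IS NULL, 0, NULL)) STORED,
  UNIQUE KEY uq_c1_c2_nullcase (c1, c2, c3_is_null)
) ENGINE = InnoDB;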


Attempting to enforce these constraints via triggers is problematic,
because two different clients can insert identical values in separate
transactions; by the time each transaction finishes, its trigger has
already done its validation and found no error.


Is there a standard way to perform this sort of checking?

thanks!
-lev




RE: Document / Image (Blob) archival -- Best Practices

2006-04-20 Thread Tim Lucia
How about Alfresco as a C/DMS?  (http://www.alfresco.com/)  Our CEO read an
article about Boise Cascade using this model and thought it would scale well
for us.

There is also this

http://www.mysql.com/news-and-events/web-seminars/mysql-alfresco.php

Which I didn't know about (or necessarily care ;-) on March 9th.

Right now, some of our DBs are about 35-40Gb, of which half or slightly more
consists of archive blobs.  This archive feature is increasingly popular and
so we're looking to move it out of the main database(s) and onto a separate
server.

Tim

-Original Message-
From: Daniel Kasak [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 18, 2006 11:35 PM
To: Tim Lucia
Cc: mysql@lists.mysql.com
Subject: Re: Document / Image (Blob) archival -- Best Practices

Tim Lucia wrote:

> Hi all,
>
>
> I am considering moving some archival records which largely consist of
blobs
> (PDF Image files) out of an Oracle DB and onto MySQL.  Has anyone done
this
> (not necessarily the Oracle part) who can relate their experience(s)?  I
can
> go with MyISAM or archive storage engine, from the looks of it.  This is
not
> transactional data, although eventually some reporting may be required
> (although since the blob data are things like fax archival documents, I'm
> not sure what can be reported.)
>
>  
>
> Another possible storage model would be to use MySQL as the index but put
> the blobs (.PDFs) into some document management system.  If you've done
> anything like this, I'd like to hear about it as well.
>   

I've done a couple of file-based systems. Briefly:

- separate filename into the actual name and the extension
- insert details into MySQL table
- fetch primary key
- figure out where to store document
- *copy* the document there, with a new filename: PRIMARY_KEY.arch ( 
replace PRIMARY_KEY )
- test that it's there
- delete original

I rename the file to make sure I've got no filename clashes. It also 
stops people from editing archived documents by snooping around your 
network shares ( they don't know how to open a .arch file ).

I also have a limit of 100 files per folder. If you put too many files 
in 1 folder, directory listing slows down a LOT.

I know others have had great success with storing blobs in MySQL tables, 
but I really don't think that's the way to go. Maybe I'm just paranoid, 
but storing things as real files seems safer to me.

I've considered writing an open-source document archiving system, using 
a gtk2-perl GUI and a MySQL backend. It would be trivial to do - as I 
said, I've done a number of special-purpose ones already. If other 
people show an interest I'll have one up over the next couple of days / 
weeks. It will of course be cross-platform.

Dan


-- 
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au





Re: Document / Image (Blob) archival -- Best Practices

2006-04-18 Thread Daniel Kasak

Tim Lucia wrote:


Hi all,


I am considering moving some archival records which largely consist of blobs
(PDF Image files) out of an Oracle DB and onto MySQL.  Has anyone done this
(not necessarily the Oracle part) who can relate their experience(s)?  I can
go with MyISAM or archive storage engine, from the looks of it.  This is not
transactional data, although eventually some reporting may be required
(although since the blob data are things like fax archival documents, I'm
not sure what can be reported.)

 


Another possible storage model would be to use MySQL as the index but put
the blobs (.PDFs) into some document management system.  If you've done
anything like this, I'd like to hear about it as well.
  


I've done a couple of file-based systems. Briefly:

- separate filename into the actual name and the extension
- insert details into MySQL table
- fetch primary key
- figure out where to store document
- *copy* the document there, with a new filename: PRIMARY_KEY.arch ( 
replace PRIMARY_KEY )

- test that it's there
- delete original

I rename the file to make sure I've got no filename clashes. It also 
stops people from editing archived documents by snooping around your 
network shares ( they don't know how to open a .arch file ).


I also have a limit of 100 files per folder. If you put too many files 
in 1 folder, directory listing slows down a LOT.
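
The MySQL side of that workflow is tiny. As a sketch only (the documents
table, its columns, and the 100-files-per-folder fan-out expression are
made up for illustration, not Daniel's actual code):

INSERT INTO documents (orig_name, orig_ext, archived_at)
VALUES ('contract_2006_0417', 'pdf', NOW());

-- the new primary key becomes the archive filename (e.g. 1234.arch),
-- and FLOOR(id / 100) picks a folder so no folder holds more than 100 files
SELECT LAST_INSERT_ID() AS id,
       CONCAT(FLOOR(LAST_INSERT_ID() / 100), '/',
              LAST_INSERT_ID(), '.arch') AS archive_path;

The copy, verify, and delete of the file itself then happens outside the
database, exactly as in the list above.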


I know others have had great success with storing blobs in MySQL tables, 
but I really don't think that's the way to go. Maybe I'm just paranoid, 
but storing things as real files seems safer to me.


I've considered writing an open-source document archiving system, using 
a gtk2-perl GUI and a MySQL backend. It would be trivial to do - as I 
said, I've done a number of special-purpose ones already. If other 
people show an interest I'll have one up over the next couple of days / 
weeks. It will of course be cross-platform.


Dan


--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au




Re: Document / Image (Blob) archival -- Best Practices

2006-04-18 Thread Michael Kruckenberg

Tim,

I did this for a large collection of images, ~1 million images at
around 40 GB total last time I checked (I'm no longer involved in the
project). It worked very well; performance was not horrible compared
to file-based storage. I always feared that MyISAM table getting
corrupted and having to run myisamchk. The backup was a bit
tricky too: we had to have the data replicated elsewhere because it took
over an hour to back up, and we couldn't have the table locked up that
long on production.


There was a good discussion about this on this weblog recently:
http://sheeri.com/archives/39

No experience with a document management system. We talked a few times
about working with a digital library for storage but were never
compelled to go beyond what we had in MySQL.


Mike

On Apr 18, 2006, at 9:48 PM, Tim Lucia wrote:

Hi all,

I am considering moving some archival records which largely consist  
of blobs
(PDF Image files) out of an Oracle DB and onto MySQL.  Has anyone  
done this
(not necessarily the Oracle part) who can relate their experience 
(s)?  I can
go with MyISAM or archive storage engine, from the looks of it.   
This is not

transactional data, although eventually some reporting may be required
(although since the blob data are things like fax archival  
documents, I'm

not sure what can be reported.)

Another possible storage model would be to use MySQL as the index  
but put
the blobs (.PDFs) into some document management system.  If you've  
done

anything like this, I'd like to hear about it as well.

All input appreciated (to the list, please ;-) )

Tim





Document / Image (Blob) archival -- Best Practices

2006-04-18 Thread Tim Lucia
Hi all,

 

I am considering moving some archival records which largely consist of blobs
(PDF Image files) out of an Oracle DB and onto MySQL.  Has anyone done this
(not necessarily the Oracle part) who can relate their experience(s)?  I can
go with MyISAM or archive storage engine, from the looks of it.  This is not
transactional data, although eventually some reporting may be required
(although since the blob data are things like fax archival documents, I'm
not sure what can be reported.)

 

Another possible storage model would be to use MySQL as the index but put
the blobs (.PDFs) into some document management system.  If you've done
anything like this, I'd like to hear about it as well.

 

All input appreciated (to the list, please ;-) )

 

Tim

 



Re: Best practices

2006-04-12 Thread Shawn Green
Answers intermingled below

--- Bruno B B Magalhães <[EMAIL PROTECTED]> wrote:

> Hi guys I need some help with two things...
> 
> I have the following table:
> 
> CREATE TABLE `telephones` (
>`contact_id` int(20) unsigned NOT NULL default '0',
>`telephone_id` int(20) unsigned NOT NULL default '0',
>`telephone_country_code` char(5) NOT NULL default '',
>`telephone_area_code` char(5) NOT NULL default '',
>`telephone_number` char(20) NOT NULL default '',
>`telephone_extension` char(5) NOT NULL default '',
>`telephone_primary` int(1) unsigned NOT NULL default '0',
>`telephone_type_id` int(1) unsigned NOT NULL default '0',
>`telephone_inserted` datetime NOT NULL default '0000-00-00 00:00:00',
>`telephone_updated` datetime NOT NULL default '0000-00-00 00:00:00',
>`telephone_deleted` datetime NOT NULL default '0000-00-00 00:00:00'
> ) ENGINE=MyISAM DEFAULT CHARSET=latin1
> 
> As you can see I have a column called telephone_deleted, and I was  
> thinking about instead of deleting a record I would change the  
> telephone_delete with a valid date time when it happened. With that I
>  
> think it would avoid loosing records as a mistake, as it would just  
> need to reset the delete date and time. But my question is it  
> scalable? Any other ideas, as I am using "IS NOT NULL" to find the  
> records that haven't been deleted.
> 

What do you mean by "scalable"?  If you mean "can I use an index to
locate records that have or have not been deleted" the answer would be
yes.
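
For illustration (a sketch, not from the original posts; the WHERE values
are invented), the flag-style delete, restore, and filter against that
`telephones` table could look like this, with the zero date acting as the
"not deleted" marker:

ALTER TABLE telephones ADD INDEX idx_deleted (telephone_deleted);

-- soft-delete one phone record
UPDATE telephones
   SET telephone_deleted = NOW()
 WHERE contact_id = 42 AND telephone_id = 7;

-- restore it
UPDATE telephones
   SET telephone_deleted = '0000-00-00 00:00:00'
 WHERE contact_id = 42 AND telephone_id = 7;

-- fetch only the live (not deleted) records
SELECT * FROM telephones
 WHERE telephone_deleted = '0000-00-00 00:00:00';

Note that with the NOT NULL zero-date default shown above, the filter is an
equality test against the zero date rather than an IS NULL check.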

> Another thing is how can I build a statistical analisys of  
> telephones, for example xx% belongs to country_code X and another xx%
>  
> belongs to country_code Y, but here's the trick part: I would like it
>  
> fetch it in a date range, for example what was the evolution between 
> 
> date X and date Y... I have this working now with a cronjob  
> performing a logging operation in a table like this which stores all 
> 
> statistics regarding every entity in the system:
> 
> CREATE TABLE `flx_contacts_stats` (
>`stat_date` date NOT NULL default '0000-00-00',
>`stat_entity` char(64) NOT NULL default '',
>`stat_key` char(128) NOT NULL default '0',
>`stat_value` int(10) unsigned NOT NULL default '0',
>KEY `stat_date` (`stat_date`,`stat_entity`)
> ) ENGINE=MyISAM DEFAULT CHARSET=latin1
> 
> For example with this kind of data:
> 2005-12-04; phone_countrycodes; 55; 63
> 2005-12-04; phone_areacodes; 473, 32; 1
> 2005-12-04; phone_areacodes; 53, 32; 1
> 2005-12-04; phone_areacodes; 54, 32; 1
> 2005-12-04; phone_areacodes; 11, 55; 1
> 2005-12-04; phone_areacodes; 21, 55; 62
> 
> How can I do this on the fly without using any generic table to store
>  
> stats? I suspect that storing stats this way is not practical in  
> terms of portability and that's not definitely a good practice. Or  
> this kind of data is necessarily stored separated?
> 

If the data in the reports is truly static, then you GAIN performance
by only making your server compute it once. Storing the results of
statistical analysis is a commonly used practice when it comes to data
warehousing and OLAP system design. In your case, it especially makes
sense from a performance standpoint to query the smaller
`flx_contacts_stats` table rather than to recompute those values every
time you need them.
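
For comparison, the on-the-fly version of the country-code breakdown is
certainly possible; it just rescans `telephones` on every run, which is
exactly the cost you avoid by storing the results. A sketch (the date
range and the use of `telephone_inserted` as the range column are
assumptions on my part):

SELECT telephone_country_code,
       COUNT(*) AS phones,
       ROUND(COUNT(*) * 100 / totals.total_phones, 1) AS pct
  FROM telephones
 CROSS JOIN (SELECT COUNT(*) AS total_phones
               FROM telephones
              WHERE telephone_inserted >= '2005-12-01'
                AND telephone_inserted <  '2006-01-01') AS totals
 WHERE telephone_inserted >= '2005-12-01'
   AND telephone_inserted <  '2006-01-01'
 GROUP BY telephone_country_code, totals.total_phones;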

> 
> Thanks in advance for any kind of advice in this matter.
> 
> Regards,
> Bruno B B Magalhaes
> 

Best Wishes!
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine




Best practices

2006-04-11 Thread Bruno B B Magalhães

Hi guys I need some help with two things...

I have the following table:

CREATE TABLE `telephones` (
  `contact_id` int(20) unsigned NOT NULL default '0',
  `telephone_id` int(20) unsigned NOT NULL default '0',
  `telephone_country_code` char(5) NOT NULL default '',
  `telephone_area_code` char(5) NOT NULL default '',
  `telephone_number` char(20) NOT NULL default '',
  `telephone_extension` char(5) NOT NULL default '',
  `telephone_primary` int(1) unsigned NOT NULL default '0',
  `telephone_type_id` int(1) unsigned NOT NULL default '0',
  `telephone_inserted` datetime NOT NULL default '0000-00-00 00:00:00',
  `telephone_updated` datetime NOT NULL default '0000-00-00 00:00:00',
  `telephone_deleted` datetime NOT NULL default '0000-00-00 00:00:00'
) ENGINE=MyISAM DEFAULT CHARSET=latin1

As you can see I have a column called telephone_deleted, and I was
thinking that instead of deleting a record I would set
telephone_deleted to a valid date and time when the deletion happened. I
think that would avoid losing records by mistake, as restoring would just
mean resetting the delete date and time. But my question is: is it
scalable? Any other ideas? I am using "IS NOT NULL" to find the
records that haven't been deleted.


Another thing: how can I build a statistical analysis of the
telephones, for example xx% belong to country_code X and another xx%
belong to country_code Y? Here's the tricky part: I would like to
fetch it for a date range, for example what the evolution was between
date X and date Y... I have this working now with a cron job
performing a logging operation into a table like the one below, which
stores all statistics regarding every entity in the system:


CREATE TABLE `flx_contacts_stats` (
  `stat_date` date NOT NULL default '0000-00-00',
  `stat_entity` char(64) NOT NULL default '',
  `stat_key` char(128) NOT NULL default '0',
  `stat_value` int(10) unsigned NOT NULL default '0',
  KEY `stat_date` (`stat_date`,`stat_entity`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

For example with this kind of data:
2005-12-04; phone_countrycodes; 55; 63
2005-12-04; phone_areacodes; 473, 32; 1
2005-12-04; phone_areacodes; 53, 32; 1
2005-12-04; phone_areacodes; 54, 32; 1
2005-12-04; phone_areacodes; 11, 55; 1
2005-12-04; phone_areacodes; 21, 55; 62

How can I do this on the fly without using any generic table to store
stats? I suspect that storing stats this way is not practical in
terms of portability and is definitely not good practice. Or does this
kind of data necessarily have to be stored separately?



Thanks in advance for any kind of advice in this matter.

Regards,
Bruno B B Magalhaes




Re: best practices for finding duplicate chunks

2005-08-14 Thread Alexey Polyakov
You can modify the algorithm I proposed to find groups of records that
are likely to have duplicate chunks. Simply record only a fraction of the
hashes, something like: only if md5(concat(word1,word2,...,word20))%32=0.
Disk usage for this table will be maybe 60 bytes per record; if your
average word is 8 bytes (counting whitespace), then the disk space you'll
need is about 25% of the data size.
After groups of records are found, you can do brute-force indexing to
find the duplicate chunks.

On 8/15/05, Gerald Taylor <[EMAIL PROTECTED]> wrote:
> Thanks for your answer.  It would certainly work provided having
> enough disk space to do that.  I thought something like
> that but was hoping I can leverage fulltext  and just
> record the fulltext result between a each record
> and each other record. Then I can group all records that
> highly correlate and maybe do a much smaller scale version of
> the brute force indexing thing that you are proposing, i.e. only
> do it on a group of records that we already know  have a high
> correlation, ie a high probability of sharing a chunk in common
>   Then when done I can throw away that data
> and do another group.  What do you think?   Processing cycles I have
> but easy disk space I don't.

-- 
Alexey Polyakov




Re: best practices for finding duplicate chunks

2005-08-14 Thread Gerald Taylor

Thanks for your answer.  It would certainly work provided I have
enough disk space to do that.  I thought of something like
that, but was hoping I could leverage fulltext and just
record the fulltext result between each record
and each other record. Then I could group all records that
highly correlate and maybe do a much smaller-scale version of
the brute force indexing thing that you are proposing, i.e. only
do it on a group of records that we already know have a high
correlation, i.e. a high probability of sharing a chunk in common.
Then when done I can throw away that data
and do another group.  What do you think?   Processing cycles I have,
but easy disk space I don't.

Alexey Polyakov wrote:

There's no easy way to do it I think. But if spending a few hours
(days?) programming is ok with you I'd suggest something like this:
1) create a table (let's call it hashes) with three columns: hash,
doc_id, pos_id (doc_id is an identifier for records from the table with
big text chunks)
2) retrieve a record from the big table. Calculate a hash value for the
concatenated first 20 words of its text. Insert this value into the
hash/doc_id table, with 1 as the value of pos_id. Calculate the hash for
the concatenated 20 words starting from the 2nd word of the text, and
also insert it into the hash/doc_id table (2 as the value of pos_id).
Repeat until you reach the end of the text.
3) Repeat 2) for all records of the big table
4) Now you have all the data needed for identifying those duplicate chunks.
select hash, count(doc_id) as c from hashes group by hash having c > 1;
will return all hashes for 20-word chunks that are found in 2 or more documents
select doc_id from hashes where hash=some_value; 
will return documents that contain this chunk.

select h1.pos_id, h2.pos_id from hashes h1, hashes h2 where
h1.doc_id=doc1 and h2.doc_id=doc2 and h1.hash=h2.hash order by
h1.pos_id;
will return word positions for duplicate text in two documents.
For example last query returns:
156 587
157 588
...
193 624
It means that you can take words 156-212 from doc1, insert them into the
subchunks table, and replace words 156-212 in doc1 and words 587-643
in doc2 with a marker.


Yeah, it looks ugly, and will take a lot of space for temporary data.
But in the end you'll have all duplicate chunks of 20+ words properly
identified.
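
For concreteness, the hashes table described above might be declared
something like this (a sketch; the exact types, engine, and indexes are my
own guesses, not Alexey's):

CREATE TABLE hashes (
  hash   CHAR(32)     NOT NULL,  -- md5 of the 20 concatenated words
  doc_id INT UNSIGNED NOT NULL,  -- id of the row in the big TEXT table
  pos_id INT UNSIGNED NOT NULL,  -- 1-based word offset of the window
  KEY idx_hash (hash),
  KEY idx_doc (doc_id, pos_id)
) ENGINE = MyISAM;

The index on hash is what makes the GROUP BY and the self-join above
usable on a table this large.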

On 8/14/05, Gerald Taylor <[EMAIL PROTECTED]> wrote:


I just revived a database that was in a version 3.23 server and moved it
to a 4.1   There are big fields of TEXT based data.  They have a way of
compressing the amount of TEXT data by identifying common subchunks and
putting them in a "subchunk" table and replacing them with a marker
inside the main text that will pull in that subchunk whenever the parent
chunk is requested.  This subchunking seems to have been done kind of
ad hoc, because I've noticed the database still has quite a bit of
duplicated chunks from one record to another.  The client does not want
to buy another drive to store data (even tho he really should for
other reasons anyway but who cares what I think) , so he wants it
compressed, and oh well I look on it as an opportunity for some
housecleaning.  Now that we have 4.1 what is the best practice for
automated looking for common subchunks, factoring them out, and then
replacing the original parent text with itself with the chunk cut out
and a marker inserted.  The hard part is finding them, ovbiously.  The
rest is easy.





best practices for finding duplicate chunks

2005-08-14 Thread Gerald Taylor
I just revived a database that was in a version 3.23 server and moved it
to a 4.1.  There are big fields of TEXT based data.  They have a way of
compressing the amount of TEXT data by identifying common subchunks and
putting them in a "subchunk" table and replacing them with a marker
inside the main text that will pull in that subchunk whenever the parent
chunk is requested.  This subchunking seems to have been done kind of
ad hoc, because I've noticed the database still has quite a bit of
duplicated chunks from one record to another.  The client does not want
to buy another drive to store data (even though he really should for
other reasons anyway, but who cares what I think), so he wants it
compressed, and oh well, I look on it as an opportunity for some
housecleaning.  Now that we have 4.1, what is the best practice for
automatically looking for common subchunks, factoring them out, and then
replacing the original parent text with itself with the chunk cut out
and a marker inserted?  The hard part is finding them, obviously.  The
rest is easy.






Re: Best practices for deleting and restoring records - moving vs flagging

2005-08-12 Thread Arno Coetzee

Bastian Balthazar Bux wrote:


We need to track the modification to the records too so the route has
been to keep them all in a different, specular databases.

If the "real" table look like this:

CREATE TABLE `users` (
 `id` int(11) NOT NULL auto_increment,
 `ts` timestamp NOT NULL
  default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
 `username` varchar(32) default NULL,
 `password` varchar(32) default NULL,
 PRIMARY KEY  (`id`)
);

The backup one look like this:

CREATE TABLE `users` (
 `del__id` int(11) NOT NULL auto_increment,
 `del__ts` timestamp NOT NULL
  default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
 `del__flag` char(1) default 'D',
 `del__note` mediumtext,
 `id` int(11) NOT NULL,
 `ts` datetime NOT NULL default '0000-00-00 00:00:00',
 `username` varchar(32) default NULL,
 `password` varchar(32) default NULL,
 PRIMARY KEY  (`del__id`)
);

That is the first one with "del__*" fields added but all indexes removed.

Having the same name and a similar schema for the two tables makes it
easier to take a snapshot of the database at a defined point in time.

Useful with small, not very often changing databases.

 


Hi Bastian,

Just a thought: rather stay away from auto_increment PKs and generate
your own PKs instead.

I have run into trouble a couple of times using auto_increment when I
made backups and restored the data again: the PKs changed, and I had
records in other tables still referencing the old PKs.


--
Arno Coetzee
Developer
Flash Media Group
Office : 2712 342 7595
Mobile : 2782 693 6180





Re: Best practices for deleting and restoring records - moving vs flagging

2005-08-12 Thread Bastian Balthazar Bux
Saqib Ali wrote:
> Hello All,
> 
> What are best practices for deleting records in a DB. We need the
> ability to restore the records.
> 
> Two obvious choices are:
> 
> 1) Flag them deleted or undeleted
> 2) Move the deleted records to seperate table for deleted records.
> 
> We have a  complex schema. However the the records that need to be
> deleted and restored reside in 2 different tables (Table1 and Table2).
> 
> Table2 uses the primary key of the Table1 as the Foriegn key. The
> Primary key for Table1 is auto-generated. This make the restoring with
> the same primary key impossible, if we move deleted data to a
> different table. However if we just flag the record as deleted the
> restoring is quite easy.

Sorry, I don't understand this. Why is it impossible?
If the PK is auto-generated by MySQL it will have progressive numbers,
and it's always possible to force a lower, non-existent number into the PK.

> 
> Any thoughts/ideas ?
> 

We need to track modifications to the records too, so the route has
been to keep them all in a separate, mirrored database.

If the "real" table looks like this:

CREATE TABLE `users` (
  `id` int(11) NOT NULL auto_increment,
  `ts` timestamp NOT NULL
   default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
  `username` varchar(32) default NULL,
  `password` varchar(32) default NULL,
  PRIMARY KEY  (`id`)
);

The backup one looks like this:

CREATE TABLE `users` (
  `del__id` int(11) NOT NULL auto_increment,
  `del__ts` timestamp NOT NULL
   default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
  `del__flag` char(1) default 'D',
  `del__note` mediumtext,
  `id` int(11) NOT NULL,
  `ts` datetime NOT NULL default '0000-00-00 00:00:00',
  `username` varchar(32) default NULL,
  `password` varchar(32) default NULL,
  PRIMARY KEY  (`del__id`)
);

That is the first one with "del__*" fields added but all indexes removed.

Having the same name and a similar schema for the two tables makes it
easier to take a snapshot of the database at a defined point in time.

Useful with small, not very often changing databases.
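
To make the delete/restore flow concrete (a sketch only; the live/backup
database names and the id value are hypothetical), moving a row to the
backup copy and later restoring it with its original key looks like:

-- on delete: copy the row to the backup database, then remove it
INSERT INTO backup_db.users (id, ts, username, password)
SELECT id, ts, username, password FROM live_db.users WHERE id = 42;
DELETE FROM live_db.users WHERE id = 42;

-- on restore: the explicit id overrides auto_increment,
-- so rows in other tables that reference id 42 stay valid
INSERT INTO live_db.users (id, ts, username, password)
SELECT id, ts, username, password FROM backup_db.users WHERE id = 42;

The del__* columns in the backup table are simply left to their defaults
when the row is copied in.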




Re: Best practices for deleting and restoring records - moving vs flagging

2005-08-11 Thread Arno Coetzee

Saqib Ali wrote:


Hello All,

What are best practices for deleting records in a DB. We need the
ability to restore the records.

Two obvious choices are:

1) Flag them deleted or undeleted
2) Move the deleted records to seperate table for deleted records.

We have a  complex schema. However the the records that need to be
deleted and restored reside in 2 different tables (Table1 and Table2).

Table2 uses the primary key of the Table1 as the Foriegn key. The
Primary key for Table1 is auto-generated. This make the restoring with
the same primary key impossible, if we move deleted data to a
different table. However if we just flag the record as deleted the
restoring is quite easy.

Any thoughts/ideas ?

 

There are pros and cons to both ways (as you pointed out with moving the
records to another table).

I always prefer flagging the records. The drawback with flagging the
records is that you might sacrifice some speed (depending on the number of
records in the table). If the table does not grow that fast, most
definitely just flag the records as deleted.


my2c worth

--
Arno Coetzee
Developer
Flash Media Group
Office : 2712 342 7595
Mobile : 2782 693 6180





Re: Best practices for deleting and restoring records - moving vs flagging

2005-08-11 Thread Nuno Pereira

Saqib Ali wrote:

Hello All,

What are best practices for deleting records in a DB. We need the
ability to restore the records.

Two obvious choices are:

1) Flag them deleted or undeleted
2) Move the deleted records to seperate table for deleted records.


The first is the one I like more.
With the first, to mark a record as deleted (or to restore it), you only
have to change one column; with the second, you have to move the row (and
move it again to restore it) from one table to another. Getting the row
out of the normal table to store it in the second could lead to problems.

Implementing the first in an already-developed schema is just adding a
column of type bool (for example) with the default being 'not deleted'.

The second has the further problem that any change in the schema of the
normal table also has to be made in the deleted-values table.



We have a  complex schema. However the the records that need to be
deleted and restored reside in 2 different tables (Table1 and Table2).


This leaves you with two different tables of deleted values.


Table2 uses the primary key of the Table1 as the Foriegn key. The
Primary key for Table1 is auto-generated. This make the restoring with
the same primary key impossible, if we move deleted data to a


If you mean "The Primary key for Table1 is auto-generated" by using 
auto_increment, it is not impossible. You can just copy the entire 
contents of the row.



different table. However if we just flag the record as deleted the
restoring is quite easy.


As I said.


Any thoughts/ideas ?


Just my opinion, and it seems to be the opinion of the Mambo developers,
as they implement deletion of values with restore this way, and they also
have a 'published' column. If they had done it the other way they would
need 4 tables: published_and_not_deleted, published_and_deleted,
not_published_and_not_deleted and not_published_and_deleted.


I would say that the second is very bad.
--
Nuno Pereira




Best practices for deleting and restoring records - moving vs flagging

2005-08-11 Thread Saqib Ali
Hello All,

What are best practices for deleting records in a DB. We need the
ability to restore the records.

Two obvious choices are:

1) Flag them deleted or undeleted
2) Move the deleted records to seperate table for deleted records.

We have a complex schema. However, the records that need to be
deleted and restored reside in 2 different tables (Table1 and Table2).

Table2 uses the primary key of Table1 as the foreign key. The
primary key for Table1 is auto-generated. This makes restoring with
the same primary key impossible if we move deleted data to a
different table. However, if we just flag the record as deleted,
restoring is quite easy.

Any thoughts/ideas ?

-- 
In Peace,
Saqib Ali
http://www.xml-dev.com/blog/
Consensus is good, but informed dictatorship is better.




Re: Best Practices

2004-11-10 Thread Eamon Daly
Starting with 4.0, when you do a LOAD DATA INFILE on the
master, it actually writes the full insert in the binary
log, which the slave then reproduces.
And if any gurus are listening, I /believe/ that setting
max_allowed_packet on the master and slave to the same value
prevents any "Packet too large" problems, but I couldn't
find confirmation in the docs. If I set max_allowed_packet
to, say, 16M on the master, does it write the data from a
LOAD DATA INFILE command in 16M chunks to the binary log?

Eamon Daly

- Original Message - 
From: "Michael Haggerty" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, November 10, 2004 11:27 AM
Subject: Re: Best Practices


Yes, there can be a small lag in data updates, in fact
I believe the lag time will be less than a second
considering our architecture.
We have been considering replication as a solution but
have been hesitant to do so because I have heard there
are problems with data inserted through a LOAD DATA
INFILE command. We regularly import csv data from
spreadsheets from people working offline and have some
pretty sophisticated processes built around this
requirement.
Has anyone run into this problem, and are there any
solutions?
Thanks,
Michael Haggerty
--- Eamon Daly <[EMAIL PROTECTED]> wrote:
Can there be a small lag between servers? If a
second or two
is acceptable, this sounds like a perfect
environment for
replication:
http://dev.mysql.com/doc/mysql/en/Replication.html
Basically, when the master writes something to the
database,
it also logs the transaction to a log file. The
slave simply
reads that log file and executes the same
transaction
locally. The additional load is very very small,
your tables
will all be consistent, and you can index the
reporting
database six ways from Sunday without touching the
master.


Eamon Daly

- Original Message - 
From: "Michael Haggerty" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, November 09, 2004 6:06 PM
Subject: Best Practices

>I am working on a data warehousing solution
involving
> mysql and have a question about best practices. We
are
> standardized on mysql 4.1, and this is for a
rather
> picky client.
>
> We have a relational transaction database that
stores
> the results of customer calls and a dimensional
> reporting database used as a data mart by several
> applications. Each night, we run a process that
> aggregates the number of calls, the subjects of
each
> call, and various other data to populate the
reporting
> database. We would like to move to a real time
> solution, and are struggling with the best way to
> implment it.
>
> What we are considering is a solution where we
mirror
> the transactional database and repopulate key
tables
> in the reporting database every minute or few
minutes.
> I am loathe to do this, mainly because it would
add to
> our server load and could possibly lead to 'dirty
> reads' (i.e. where one table in the reporting
database
> is populated with fresh data but others are not).
At
> the same time, the client is demanding we
implement
> something.
>
> Does anyone have any war stories or suggestions
for
> how to accomplish this?
>
> Thank You,
> M




Re: Best Practices

2004-11-10 Thread Michael Haggerty
Yes, there can be a small lag in data updates, in fact
I believe the lag time will be less than a second
considering our architecture. 

We have been considering replication as a solution but
have been hesitant to do so because I have heard there
are problems with data inserted through a LOAD DATA
INFILE command. We regularly import csv data from
spreadsheets from people working offline and have some
pretty sophisticated processes built around this
requirement.

Has anyone run into this problem, and are there any
solutions?

Thanks,
Michael Haggerty

--- Eamon Daly <[EMAIL PROTECTED]> wrote:

> Can there be a small lag between servers? If a
> second or two
> is acceptable, this sounds like a perfect
> environment for
> replication:
> 
> http://dev.mysql.com/doc/mysql/en/Replication.html
> 
> Basically, when the master writes something to the
> database,
> it also logs the transaction to a log file. The
> slave simply
> reads that log file and executes the same
> transaction
> locally. The additional load is very very small,
> your tables
> will all be consistent, and you can index the
> reporting
> database six ways from Sunday without touching the
> master.
> 
>

> Eamon Daly
> 
> 
> 
> - Original Message - 
> From: "Michael Haggerty" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Tuesday, November 09, 2004 6:06 PM
> Subject: Best Practices
> 
> 
> >I am working on a data warehousing solution
> involving
> > mysql and have a question about best practices. We
> are
> > standardized on mysql 4.1, and this is for a
> rather
> > picky client.
> > 
> > We have a relational transaction database that
> stores
> > the results of customer calls and a dimensional
> > reporting database used as a data mart by several
> > applications. Each night, we run a process that
> > aggregates the number of calls, the subjects of
> each
> > call, and various other data to populate the
> reporting
> > database. We would like to move to a real time
> > solution, and are struggling with the best way to
> > implment it.
> > 
> > What we are considering is a solution where we
> mirror
> > the transactional database and repopulate key
> tables
> > in the reporting database every minute or few
> minutes.
> > I am loathe to do this, mainly because it would
> add to
> > our server load and could possibly lead to 'dirty
> > reads' (i.e. where one table in the reporting
> database
> > is populated with fresh data but others are not).
> At
> > the same time, the client is demanding we
> implement
> > something.
> > 
> > Does anyone have any war stories or suggestions
> for
> > how to accomplish this?
> > 
> > Thank You,
> > M
> 
> 
> 





Re: Best Practices

2004-11-10 Thread Eamon Daly
Can there be a small lag between servers? If a second or two
is acceptable, this sounds like a perfect environment for
replication:
http://dev.mysql.com/doc/mysql/en/Replication.html
Basically, when the master writes something to the database,
it also logs the transaction to a log file. The slave simply
reads that log file and executes the same transaction
locally. The additional load is very very small, your tables
will all be consistent, and you can index the reporting
database six ways from Sunday without touching the master.
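
For reference, the slave-side setup is just a couple of statements once
the master has binary logging enabled (log-bin and a unique server-id in
my.cnf). The host, user, and log coordinates below are placeholders, not
values from this thread:

CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

-- then confirm Slave_IO_Running / Slave_SQL_Running are both Yes
SHOW SLAVE STATUS;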

Eamon Daly

- Original Message - 
From: "Michael Haggerty" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, November 09, 2004 6:06 PM
Subject: Best Practices


I am working on a data warehousing solution involving
mysql and have a question about best practices. We are
standardized on mysql 4.1, and this is for a rather
picky client.
We have a relational transaction database that stores
the results of customer calls and a dimensional
reporting database used as a data mart by several
applications. Each night, we run a process that
aggregates the number of calls, the subjects of each
call, and various other data to populate the reporting
database. We would like to move to a real time
solution, and are struggling with the best way to
implment it.
What we are considering is a solution where we mirror
the transactional database and repopulate key tables
in the reporting database every minute or few minutes.
I am loathe to do this, mainly because it would add to
our server load and could possibly lead to 'dirty
reads' (i.e. where one table in the reporting database
is populated with fresh data but others are not). At
the same time, the client is demanding we implement
something.
Does anyone have any war stories or suggestions for
how to accomplish this?
Thank You,
M



Re: Best Practices

2004-11-10 Thread SGreen
It sounds to me like they want two databases (they probably need to be on 
two separate servers) and that your logging application may need to pull 
double duty. You are being asked to keep an OLTP database in sync with an 
OLAP database in real time. That means that you probably need to commit 
changes to both at the same time OR somehow queue up a list of updates for 
a third-party process (any process that is NOT your application) to come 
back and make the batch changes you need.

I had a similar situation once, thousands of ad clicks per minute had to 
be both logged (OLTP) and aggregated (OLAP) for billing.  The solution we 
used was to build a "raw" log table (only one index) and hit that table 
once every minute or so with an application (not the logging application) 
that first took a snapshot of the records it was going to process, copied 
them into a long-term log, and aggregated them into the OLAP tables. Then 
the raw log was purged of the processed records to keep it small.  We used 
multiple parallel processes to aggregate the raw results. What we ended up 
doing was running 4 aggregating processes, with each process working only 
its section of our client list. That way no two threads could collide on 
processing raw records from the same client.
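
A bare-bones sketch of one pass of that cycle (table names, columns, and
the unique key on the summary table are all hypothetical, not Shawn's
actual schema):

-- 1. decide which raw rows this pass owns
SELECT @cutoff := MAX(id) FROM raw_clicks;

-- 2. copy them into the long-term log
INSERT INTO click_log
SELECT * FROM raw_clicks WHERE id <= @cutoff;

-- 3. fold them into the aggregate (assumes a unique key on client_id + click_date)
INSERT INTO clicks_by_client (client_id, click_date, clicks)
SELECT client_id, DATE(clicked_at), COUNT(*)
  FROM raw_clicks
 WHERE id <= @cutoff
 GROUP BY client_id, DATE(clicked_at)
    ON DUPLICATE KEY UPDATE clicks = clicks + VALUES(clicks);

-- 4. purge the processed rows so the raw table stays small
DELETE FROM raw_clicks WHERE id <= @cutoff;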

I know it sounds rather complex but it was able to keep up with almost 
25 click-throughs per day.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Michael Haggerty <[EMAIL PROTECTED]> wrote on 11/09/2004 07:06:18 PM:

> I am working on a data warehousing solution involving
> mysql and have a question about best practices. We are
> standardized on mysql 4.1, and this is for a rather
> picky client.
> 
> We have a relational transaction database that stores
> the results of customer calls and a dimensional
> reporting database used as a data mart by several
> applications. Each night, we run a process that
> aggregates the number of calls, the subjects of each
> call, and various other data to populate the reporting
> database. We would like to move to a real time
> solution, and are struggling with the best way to
> implment it.
> 
> What we are considering is a solution where we mirror
> the transactional database and repopulate key tables
> in the reporting database every minute or few minutes.
> I am loathe to do this, mainly because it would add to
> our server load and could possibly lead to 'dirty
> reads' (i.e. where one table in the reporting database
> is populated with fresh data but others are not). At
> the same time, the client is demanding we implement
> something.
> 
> Does anyone have any war stories or suggestions for
> how to accomplish this?
> 
> Thank You,
> M
> 
> 
> 
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 


Re: Best Practices

2004-11-10 Thread Karma Dorji
Hello all,
I am using this script and it takes 100% of the processor; can anyone tell
me how to optimize it?


insert into incoming
select s.Date as Datein, s.Time as Timein,
       e.Date as Dateend, e.Time as Timeend,
       s.CallingStationId, s.CalledStationId,
       SEC_TO_TIME(unix_timestamp(concat(e.Date,' ',e.Time)) -
                   unix_timestamp(concat(s.Date,' ',s.Time))) as time
from VOIP s
left join VOIP e
       on (s.CallingStationId = e.CallingStationId
           and s.CalledStationId = e.CalledStationId
           and e.AcctStatusType = 'Stop')
where s.AcctStatusType = 'Start'
  and s.Time < e.Time
  and s.Date = e.Date
  and length(s.CallingStationId) > 8
group by s.Time, s.CallingStationId, s.CalledStationId,
         e.CalledStationId, e.CallingStationId
order by s.Date, s.Time ASC;

Thanks.





Re: Best Practices

2004-11-09 Thread Gary Richardson
Have you thought about locking the reporting database for write? You
could eliminate the dirty reads.

If you are using InnoDB on the reporting tables, you could use a
transaction for the update operation. That would accomplish the same
thing.

You could use replication to move the load to another server altogether.
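
A sketch of what that looks like (the reporting table and the refresh
statement are hypothetical): with InnoDB the refresh is wrapped in a
transaction so readers using consistent reads see either the old or the
new rows, never a mixture, and with MyISAM a table lock gives a similar
guarantee by blocking readers during the refresh:

-- InnoDB reporting table: atomic swap of today's rows
START TRANSACTION;
DELETE FROM report_calls_daily WHERE report_date = CURDATE();
INSERT INTO report_calls_daily (report_date, call_subject, calls)
SELECT CURDATE(), call_subject, COUNT(*)
  FROM calls
 WHERE DATE(call_time) = CURDATE()
 GROUP BY call_subject;
COMMIT;

-- MyISAM alternative: hold a write lock for the duration of the refresh
LOCK TABLES report_calls_daily WRITE, calls READ;
-- ... same DELETE / INSERT ... SELECT as above ...
UNLOCK TABLES;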

On Tue, 9 Nov 2004 16:06:18 -0800 (PST), Michael Haggerty
<[EMAIL PROTECTED]> wrote:
> I am working on a data warehousing solution involving
> mysql and have a question about best practices. We are
> standardized on mysql 4.1, and this is for a rather
> picky client.
> 
> We have a relational transaction database that stores
> the results of customer calls and a dimensional
> reporting database used as a data mart by several
> applications. Each night, we run a process that
> aggregates the number of calls, the subjects of each
> call, and various other data to populate the reporting
> database. We would like to move to a real time
> solution, and are struggling with the best way to
> implment it.
> 
> What we are considering is a solution where we mirror
> the transactional database and repopulate key tables
> in the reporting database every minute or few minutes.
> I am loathe to do this, mainly because it would add to
> our server load and could possibly lead to 'dirty
> reads' (i.e. where one table in the reporting database
> is populated with fresh data but others are not). At
> the same time, the client is demanding we implement
> something.
> 
> Does anyone have any war stories or suggestions for
> how to accomplish this?
> 
> Thank You,
> M
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 
>




Best Practices

2004-11-09 Thread Michael Haggerty
I am working on a data warehousing solution involving
mysql and have a question about best practices. We are
standardized on mysql 4.1, and this is for a rather
picky client.

We have a relational transaction database that stores
the results of customer calls and a dimensional
reporting database used as a data mart by several
applications. Each night, we run a process that
aggregates the number of calls, the subjects of each
call, and various other data to populate the reporting
database. We would like to move to a real time
solution, and are struggling with the best way to
implment it.

What we are considering is a solution where we mirror
the transactional database and repopulate key tables
in the reporting database every minute or few minutes.
I am loathe to do this, mainly because it would add to
our server load and could possibly lead to 'dirty
reads' (i.e. where one table in the reporting database
is populated with fresh data but others are not). At
the same time, the client is demanding we implement
something.

Does anyone have any war stories or suggestions for
how to accomplish this?

Thank You,
M








Re: best-practices backups

2004-02-14 Thread Tbird67ForSale
In a message dated 2/11/2004 2:26:09 PM Eastern Standard Time, 
[EMAIL PROTECTED] writes:
I read this over and over.. I am curious why replication is such high 
finance?? I run it here. The Production system is a high finance machine and the 
replicated box is a old clunker basically.. It doesn't take much for the stuff to 
be replicated over.. The high dollar has queries, this and that. The 
replicated machine is just simply keeping up with the changes.. That's it.

You could do that with just about any decent machine.. I would think.. sure, 
there is going to be the few that load and change data constantly.. But I 
still think that would be ok.. (have to test it). 

Do you guys agree?
Hmmm... not in all cases.  While I'll agree that this would be a
cost-effective method for many MySQL installations, I use MySQL in a data
warehousing environment which typically has few, but extremely large, bulk
updates.  We are in the multi-TB range, so this would not work for us.


Re: best-practices backups

2004-02-14 Thread Tbird67ForSale
In a message dated 2/11/2004 4:44:00 PM Eastern Standard Time, 
[EMAIL PROTECTED] writes:
Hi,

I do just this at the moment - I have a cron job that runs MySQL dump, gzips
the output, and will then ftp the important files to a machine that get's
backed-up to a tape drive.  I also time the dump, and it currently takes
just over 3 minutes which is quite acceptable for what I'm doing.  I'm
thinking about piping the output of mysqldump straight through gzip and then
ftp'd away to ease the disk access too, but that maybe later.

I would still like a best-practices guide though, so that if everything does
go wrong I'm sure that I've got everything I need to reconstruct the system
as swiftly as possible.  I've done some dry runs, but still feel that this
isn't the same as learning from that gleaned by others that may have
actually been faced with disaster in the past!

Thanks,

Mike



Mike,

This is a great topic of interest to me, as I am rolling out MySQL throughout 
our enterprise and naturally, the MS SysAdmin are not comfortable doing 
backups on a Linux box--so I move the dumps to their backup server.

Have you tried to do all of that in one step using SSH?  For example, I often 
transfer big datafiles using the following command:

tar cf - BigUncompressedDataFile | ssh -C [EMAIL PROTECTED] tar xf -

This effectively compresses the data on the fly, without creating a temporary 
tar file; pipes it to the remote host over SSH ( I use -C for SSH compression 
in case any more could be squeezed out) and then uncompresses the file on the 
remote host.  

Seems to me that your process makes perfect sense; I am just lazy and would 
want it all in one command in my cron job.  However, that's just what I use to 
transfer files to a place I want to work on them, in an uncompressed format on 
the remote host... obviously not what you'd do for backups.  I'll mess with 
trying secure copy (scp) to replace the SSH portion above.

Just thinking out loud.  A Backup/Restore Best Practices Guide would be very 
valuable to everyone I should think.  Happy to help develop/host one if anyone 
wants to pitch in ideas.

/T


RE: best-practices backups

2004-02-11 Thread Paul Owen
Don't know whether this is best practice or not, but what I have set up is a
batch file that:

1.  stops the mysql service.
2.  copies and zips the databases to a separate machine.
3.  restarts the service.

I have used the archive files on other machines and they all seem to work
fine; the whole thing simply uses the MS scheduler in Windows.

Might be a help

Paul 

> -Original Message-
> From: Michael McTernan [mailto:[EMAIL PROTECTED]
> Sent: 11 February 2004 21:41
> To: David Brodbeck; Michael Collins
> Cc: [EMAIL PROTECTED]
> Subject: RE: best-practices backups
> 
> Hi,
> 
> I do just this at the moment - I have a cron job that runs MySQL dump, 
> gzips the output, and will then ftp the important files to a machine 
> that get's backed-up to a tape drive.  I also time the dump, and it 
> currently takes just over 3 minutes which is quite acceptable for what 
> I'm doing.  I'm thinking about piping the output of mysqldump straight 
> through gzip and then ftp'd away to ease the disk access too, but that 
> maybe later.
> 
> I would still like a best-practices guide though, so that if 
> everything does go wrong I'm sure that I've got everything I need to 
> reconstruct the system as swiftly as possible.  I've done some dry 
> runs, but still feel that this isn't the same as learning from that 
> gleaned by others that may have actually been faced with disaster in 
> the past!
> 
> Thanks,
> 
> Mike
> 
> > -Original Message-
> > From: David Brodbeck [mailto:[EMAIL PROTECTED]
> > Sent: 11 February 2004 19:27
> > To: 'Michael McTernan'; Michael Collins
> > Cc: [EMAIL PROTECTED]
> > Subject: RE: best-practices backups
> >
> >
> > > > -Original Message-
> > > > From: Michael Collins [mailto:[EMAIL PROTECTED]
> >
> > > > Is there any "best-practices" wisdom on what is the most 
> > > > preferable method of backing up moderately (~10-20,000 record) 
> > > > MySQL 4 databases? A mysql dump to store records as text, the
> > > format provided
> > > > by the BACKUP sql command, or some other method?
> >
> > I think it depends on how long a backup window you can tolerate.
> >
> > On our databases, I use mysqldump to dump to a text file.  
> The backup
> > is piped through gzip to reduce the size on disk.  This has the 
> > advantage of being portable between architectures, robust,
> and human-readable.
> >
> > I also run with update logging turned on, and turn over the
> logfiles
> > after each backup.  This way I can restore the database to
> any point
> > in time by restoring the next earliest backup, then running
> as much of
> > the update log as necessary to get to the desired point.  I
> use a script with the 'find'
> > command to weed out old backups and update logs older than
> a certain
> > number of weeks.  All of this is run nightly by cron entries.
> >
> > The disadvantage of this method is that mysqldump can take
> a while to
> > dump large databases, and the tables are locked during the backup 
> > process.  If this is a problem, you should probably
> investigate mysqlhotcopy.
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> >
> >
> >
> >
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 





RE: best-practices backups

2004-02-11 Thread Michael McTernan
Hi,

I do just this at the moment - I have a cron job that runs mysqldump, gzips
the output, and will then ftp the important files to a machine that gets
backed up to a tape drive.  I also time the dump, and it currently takes
just over 3 minutes, which is quite acceptable for what I'm doing.  I'm
thinking about piping the output of mysqldump straight through gzip and then
ftp'ing it away to ease the disk access too, but that may come later.

I would still like a best-practices guide though, so that if everything does
go wrong I'm sure that I've got everything I need to reconstruct the system
as swiftly as possible.  I've done some dry runs, but still feel that this
isn't the same as learning from that gleaned by others that may have
actually been faced with disaster in the past!

Thanks,

Mike

> -Original Message-
> From: David Brodbeck [mailto:[EMAIL PROTECTED]
> Sent: 11 February 2004 19:27
> To: 'Michael McTernan'; Michael Collins
> Cc: [EMAIL PROTECTED]
> Subject: RE: best-practices backups
>
>
> > > -Original Message-----
> > > From: Michael Collins [mailto:[EMAIL PROTECTED]
>
> > > Is there any "best-practices" wisdom on what is the most preferable
> > > method of backing up moderately (~10-20,000 record) MySQL 4
> > > databases? A mysql dump to store records as text, the
> > format provided
> > > by the BACKUP sql command, or some other method?
>
> I think it depends on how long a backup window you can tolerate.
>
> On our databases, I use mysqldump to dump to a text file.  The backup is
> piped through gzip to reduce the size on disk.  This has the advantage of
> being portable between architectures, robust, and human-readable.
>
> I also run with update logging turned on, and turn over the logfiles after
> each backup.  This way I can restore the database to any point in time by
> restoring the next earliest backup, then running as much of the update log
> as necessary to get to the desired point.  I use a script with the 'find'
> command to weed out old backups and update logs older than a
> certain number
> of weeks.  All of this is run nightly by cron entries.
>
> The disadvantage of this method is that mysqldump can take a while to dump
> large databases, and the tables are locked during the backup process.  If
> this is a problem, you should probably investigate mysqlhotcopy.
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>
>
>






RE: best-practices backups

2004-02-11 Thread David Brodbeck


> -Original Message-
> From: Madscientist [mailto:[EMAIL PROTECTED]

> We use this mechanism, but we do our mysqldumps from a slave 
> so the time doesn't matter.

Excellent idea.

> Interesting side effect: A GZIP of the data files is _huge_. 
> A GZIP of the 
> mysqldump is _tiny_. For our data it seems there is a lot of 
> repetition.

I think the difference is probably that the mysqldump file doesn't contain
any index data.  On some of our tables the index file is bigger than the
actual data file.  And yeah, the dump files (and update logs!) do compress
really well.  The fact that they're text and contain a lot of repeated
commands means they pack down quite small.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: best-practices backups

2004-02-11 Thread Davut Topcan
From: "David Brodbeck" <[EMAIL PROTECTED]>
Sent: Wednesday, February 11, 2004 9:27 PM
> > > -Original Message-
> > > From: Michael Collins [mailto:[EMAIL PROTECTED]
>
> > > Is there any "best-practices" wisdom on what is the most preferable
> > > method of backing up moderately (~10-20,000 record) MySQL 4
> > > databases? A mysql dump to store records as text, the
> > format provided
> > > by the BACKUP sql command, or some other method?
>

-- If your operating system is MS Windows, the solution is to use a batch
file for this.
-- If your operating system is Linux, the solution is to use a shell script,
but pay attention to file permissions on Linux.

Ok..
---
Regards..
Jack Daniel from Turkey
my web : http://portled.nogate.org
---


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: best-practices backups

2004-02-11 Thread David Brodbeck
> > -Original Message-
> > From: Michael Collins [mailto:[EMAIL PROTECTED]

> > Is there any "best-practices" wisdom on what is the most preferable
> > method of backing up moderately (~10-20,000 record) MySQL 4
> > databases? A mysql dump to store records as text, the 
> format provided
> > by the BACKUP sql command, or some other method?

I think it depends on how long a backup window you can tolerate.

On our databases, I use mysqldump to dump to a text file.  The backup is
piped through gzip to reduce the size on disk.  This has the advantage of
being portable between architectures, robust, and human-readable.

I also run with update logging turned on, and turn over the logfiles after
each backup.  This way I can restore the database to any point in time by
restoring the next earliest backup, then running as much of the update log
as necessary to get to the desired point.  I use a script with the 'find'
command to weed out old backups and update logs older than a certain number
of weeks.  All of this is run nightly by cron entries.
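
A hedged sketch of the rotation and cleanup described above (the paths,
credentials and four-week retention are assumptions, not the actual script):

#!/bin/sh
# Dump everything, compress it, then turn the logs over so the next
# update log starts right after this backup.
mysqldump --all-databases --user=backup --password=XXXX \
  | gzip > /var/backups/mysql/dump-`date +%Y%m%d`.sql.gz
mysqladmin --user=backup --password=XXXX flush-logs

# Weed out dumps and update logs older than four weeks (adjust the
# update-log name pattern to whatever --log-update is set to).
find /var/backups/mysql -name 'dump-*.sql.gz' -mtime +28 -exec rm {} \;
find /var/lib/mysql -name 'update-log.*' -mtime +28 -exec rm {} \;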

The disadvantage of this method is that mysqldump can take a while to dump
large databases, and the tables are locked during the backup process.  If
this is a problem, you should probably investigate mysqlhotcopy.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: best-practices backups

2004-02-11 Thread Jeffrey Smelser

> > Is there any "best-practices" wisdom on what is the most preferable
> > method of backing up moderately (~10-20,000 record) MySQL 4
> > databases? A mysql dump to store records as text, the 
> format provided
> > by the BACKUP sql command, or some other method? I am not asking
> > about replication, rotating backups, or remote storage, and I am not
> > concerned about the size of the backup files.  Replication might be
> > the best scenario for some sites but this case is not high finance.

I read this over and over, and I'm curious why replication is considered such high
finance. I run it here: the production system is a high-end machine and the
replicated box is basically an old clunker. It doesn't take much for the changes to
be replicated over; the expensive machine handles the queries and everything else,
while the replicated machine simply keeps up with the changes. That's it.

You could do that with just about any decent machine, I would think. Sure, there
will be a few setups that load and change data constantly, but I still think it
would be OK (you'd have to test it).

Do you guys agree?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: best-practices backups

2004-02-11 Thread Michael McTernan
Hi,

I'd love to see this too.  Even if it was a book that cost £40 to buy, I'd
get a copy.

Hey, maybe someone can recommend a book - I've looked hard and not really
come up with anything better than the MySQL manual, which, while great, is
missing the 'best practices' :(

Thanks,

Mike

> -Original Message-
> From: Michael Collins [mailto:[EMAIL PROTECTED]
> Sent: 05 February 2004 22:56
> To: [EMAIL PROTECTED]
> Subject: best-practices backups
>
>
> Is there any "best-practices" wisdom on what is the most preferable
> method of backing up moderately (~10-20,000 record) MySQL 4
> databases? A mysql dump to store records as text, the format provided
> by the BACKUP sql command, or some other method? I am not asking
> about replication, rotating backups, or remote storage, and I am not
> concerned about the size of the backup files.  Replication might be
> the best scenario for some sites but this case is not high finance.
>
> --
> Michael
> __
> ||| Michael Collins
> ||| Kuwago Inc  mailto:[EMAIL PROTECTED]
> ||| Seattle, WA, USAhttp://michaelcollins.net
>



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: best-practices backups

2004-02-06 Thread Brent Baisley
For databases I usually just make a backup for each day of the month. 
After all, disk space is cheap. So if a month has 31 days, I have 31 
backups. That gives you about 30 days to discover any corruption that 
may have occurred in a database. A crashed database is obvious, but 
corruption usually takes a little while to be noticed, so you want to 
make sure you can go back far enough to get good data. It's probably a 
bit overkill, but it's automated so it's no extra work for me. This is 
on top of the tape backups done for all systems with rotating off site 
tapes.
To avoid extended down time, I also "restore" the latest backup on 
another machine. Then if the main computer crashes, I just change a DNS 
setting (or an IP address if you don't manage your own DNS) to redirect 
everything to the backup server. This is all done with a fairly simple 
shell script.
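
A rough sketch of the day-of-month rotation (paths are made up; the point is
simply that day 05 of this month overwrites day 05 of last month):

#!/bin/sh
DAY=`date +%d`
mysqldump --all-databases | gzip > /var/backups/mysql/daily-$DAY.sql.gz
# On the standby box, a matching job can reload the newest dump, e.g.:
#   gunzip < daily-$DAY.sql.gz | mysql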

On Feb 5, 2004, at 5:55 PM, Michael Collins wrote:

Is there any "best-practices" wisdom on what is the most preferable 
method of backing up moderately (~10-20,000 record) MySQL 4 databases? 
A mysql dump to store records as text, the format provided by the 
BACKUP sql command, or some other method? I am not asking about 
replication, rotating backups, or remote storage, and I am not 
concerned about the size of the backup files.  Replication might be 
the best scenario for some sites but this case is not high finance.

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


best-practices backups

2004-02-05 Thread Michael Collins
Is there any "best-practices" wisdom on what is the most preferable 
method of backing up moderately (~10-20,000 record) MySQL 4 
databases? A mysql dump to store records as text, the format provided 
by the BACKUP sql command, or some other method? I am not asking 
about replication, rotating backups, or remote storage, and I am not 
concerned about the size of the backup files.  Replication might be 
the best scenario for some sites but this case is not high finance.

--
Michael
__
||| Michael Collins
||| Kuwago Inc  mailto:[EMAIL PROTECTED]
||| Seattle, WA, USAhttp://michaelcollins.net
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Is there any documentation of Best Practices/Troubleshooting Guides for Administering MySQL

2004-02-04 Thread David Hodgkinson
On 4 Feb 2004, at 20:32, Dan Muey wrote:

We are implementing three or four MySql servers (as a start)
and I'm writing the Troubleshooting Guide for our operational
staff.  None of these folks have any MySQL experience (and
I'm a newbie myself). I need a pretty basic 'Cheat Sheet' for
troubleshooting common production type problems.
The staff is all very technical - Senior level Oracle DBAs -
I'm going to have to drag them kicking and screaming into the
MySQL world :-)
Thanks in advance.  I'm having fun with this tool, I'm
looking forward to see how it does in production.
It will do awesome, it always has for me anyway!
I'd say the best general guide is the mysql.com website, very
informative and intuitive.


No, Evelyn's request is a good one. I use MySQL day to day
for some very different applications and have little trouble
with it. Others coming to it from so-called "real" database
backgrounds try to make it behave like Oracle and it rebels.
There are design and code considerations that just make life
easier for the programmer and the DBA, as with any database
(ask a Sybase DBA!).
The mod_perl support mailing list, led by Stas Bekman, produced
the "mod_perl guide" with community support that recently led to
an 800+ page O'Reilly book. I'd like to see something like this
for MySQL: for those beyond basic web applications and trying
to make their lives easier.
Um, does this make sense?

--
Dave Hodgkinson
CTO, Rockit Factory Ltd.
http://www.rockitfactory.com/
Web sites for rock bands
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Is there any documentation of Best Practices/Troubleshooting Guides for Administering MySQL

2004-02-04 Thread Dan Muey
> Thanks,

No problem, but please post to the list and not just 
me so we can all learn and share.
Also top posting is bad form, just FYI.

> 
> I have been perusing the web site, but the manuals don't 
> always give reasons WHY you would run something.  For example 

http://www.mysql.com/doc/en/FLUSH.html the why/what/who:

You should use the FLUSH command if you want to clear some 
of the internal caches MySQL uses. To execute FLUSH, you 
must have the RELOAD privilege.

> the flush-tables command.  Why would you run it and what does 
> it do?  There are several references to this command but I 

man mysqladmin
summed it up for me nicely.
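
For what it's worth, the same flush can be issued either from SQL or from the
shell; a minimal example, assuming a root login:

mysql> FLUSH TABLES;
shell> mysqladmin -u root -p flush-tables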

> can't seem to find exactly what it does.  Can I do it any 
> time, is it non-destructive etc.  

If the site and man don't give you the kind of answer you 
seek then post the specific question to this list.

HTH

DMuey

> 
> 
> -Original Message-
> From: Dan Muey [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, February 04, 2004 3:33 PM
> To: Schwartz, Evelyn; [EMAIL PROTECTED]
> Subject: RE: Is there any documentation of Best 
> Practices/Troubleshooting Guides for Administering MySQL
> 
> 
> > We are implementing three or four MySql servers (as a 
> start) and I'm 
> > writing the Troubleshooting Guide for our operational 
> staff.  None of 
> > these folks have any MySQL experience (and I'm a newbie myself). I 
> > need a pretty basic 'Cheat Sheet' for troubleshooting common 
> > production type problems.
> > 
> > The staff is all very technical - Senior level Oracle DBAs 
> - I'm going 
> > to have to drag them kicking and screaming into the MySQL world :-)
> > 
> > Thanks in advance.  I'm having fun with this tool, I'm 
> looking forward 
> > to see how it does in production.
> 
> It will do awesome, it always has for me anyway!
> I'd say the best general guide is the mysql.com website, very 
> informative and intuitive.
> 
> HTH
> 
> DMuey 
> 
> 
> > Evelyn

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Is there any documentation of Best Practices/Troubleshooting Guides for Administering MySQL

2004-02-04 Thread Dan Muey
> We are implementing three or four MySql servers (as a start) 
> and I'm writing the Troubleshooting Guide for our operational 
> staff.  None of these folks have any MySQL experience (and 
> I'm a newbie myself). I need a pretty basic 'Cheat Sheet' for 
> troubleshooting common production type problems.
> 
> The staff is all very technical - Senior level Oracle DBAs - 
> I'm going to have to drag them kicking and screaming into the 
> MySQL world :-)
> 
> Thanks in advance.  I'm having fun with this tool, I'm 
> looking forward to see how it does in production.

It will do awesome, it always has for me anyway!
I'd say the best general guide is the mysql.com website, very 
informative and intuitive.

HTH

DMuey 


> Evelyn
> 
>  
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Is there any documentation of Best Practices/Troubleshooting Guides for Administering MySQL

2004-02-04 Thread Schwartz, Evelyn
We are implementing three or four MySql servers (as a start) and I'm
writing the Troubleshooting Guide for our operational staff.  None of
these folks have any MySQL experience (and I'm a newbie myself). I need
a pretty basic 'Cheat Sheet' for troubleshooting common production type
problems.

The staff is all very technical - Senior level Oracle DBAs - I'm going
to have to drag them kicking and screaming into the MySQL world :-)

Thanks in advance.  I'm having fun with this tool, I'm looking forward
to see how it does in production.

Evelyn

 



Re: Best practices for sharing members database between different portals

2003-12-09 Thread Skippy
On Tue, 09 Dec 2003 15:26:10 -0600 Tariq Murtaza <[EMAIL PROTECTED]>
wrote:
> Please comment on Best Practices for sharing members database between 
> different portals.
> 
> Suppose we have 3 portals running on different networks.
> The assignment is to provide a single login/password for all portals, meaning
> that once logged in to one portal, a user can access the other portals
> without logging in again.
> The constraints are that every portal has a different database structure and
> also different global / session variables.

In one word, webservices, more specifically the nuSoap library (for
PHP). Take the user databases out of the 3 portals and unify them in one
place. Add an interface (webservice server) in front of the user db that
will handle user authentication and session tasks. Add code in the 3
portals that will talk SOAP to the user db interface, thus making the
portals webservice clients.

The beautiful part is, the portals can have completely different
databases, languages, or webservers. They just each need a bunch of
functions that speak SOAP and that implement a common user auth/session
API.

If you're worried about having to connect to the user db for each page a
portal serves, you can cache the session ID locally in each portal's own
database after the authentication. But you'll run into some other issues
this way.

There is, however, one big problem I see here: I don't see how you're going to
convince a browser to remember state information from one site address
and apply it automatically to the other two portals upon login. You
can't set cookies for other domains.

-- 
Skippy - Romanian Web Developers - http://ROWD.ORG

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Best practices for sharing members database between different portals

2003-12-09 Thread Tariq Murtaza
Dear All,

Please comment on Best Practices for sharing members database between 
different portals.

Suppose we have 3 portals running on different networks.
The assignment is to provide a single login/password for all portals, meaning
that once logged in to one portal, a user can access the other portals
without logging in again.
The constraints are that every portal has a different database structure and
also different global / session variables.

Please share your experience and which approach is the best practice.

Regards,

TM


Re: best practices for running 3.23 and 4.016 on the same box?

2003-11-06 Thread Thierno Cissé
Hi Mark,
there is no problem running both MySQL servers, 3.23 and 4.0.16, on the same box.
Just pay attention to the following (particularly the option files):
- configure two my.cnf files; each version must have its own server-specific
options file.
  Note that you cannot put both files in the same place (/etc/).
  Place each of them in its own install directory,
/usr/local/mysql-version_number/data/my.cnf for example.
- use two different PORT, SOCKET, DATADIR and LOG FILE names
- you have already set up a new directory for version 4.0.16, so that's ok.
- prepare two start/stop scripts (with different names in /etc/init.d/rc.d/)
and customize them
  according to the my.cnf variables present in these scripts.

This is sufficient for running the two server versions independently; a rough
sketch of the two option files follows below.
I have the same configuration (on RedHat 8, nevertheless).
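
As a rough illustration only, each server could get an option file along
these lines (the ports, sockets and paths are assumptions; adjust them to
your own layout):

# /usr/local/mysql-3.23/data/my.cnf
[mysqld]
port    = 3306
socket  = /tmp/mysql-3.23.sock
datadir = /usr/local/mysql-3.23/data

# /usr/local/mysql-4.0.16/data/my.cnf
[mysqld]
port    = 3307
socket  = /tmp/mysql-4.0.16.sock
datadir = /usr/local/mysql-4.0.16/data

Each start/stop script then points its server at the right file, for example
with --defaults-file=/usr/local/mysql-4.0.16/data/my.cnf as the first option.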

Regards,
Thierno6C - MySQL 4.0.15

- Original Message - 
From: "Mark Teehan" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, November 06, 2003 8:35 AM
Subject: best practices for running 3.23 and 4.016 on the same box?


> Hi all
> I am planning an upgrade of a very busy 3.23 server to 4.0.16. As we don't
> have a test box, I have to install both releases on the same machine, and
> test 4.0.16 for some time.  For 4.0.16 I will unpack a tar.gz, as I don't
> trust rpm not to clobber at least some of the 3.23 installation. I will
> then set up a new 4016/data directory, and copy each database in for
> testing. My question is: can someone give me some guidelines on the best
> practices for setting this up? How do I make the two installations
> completely independent of each other? Also, is there anything I should know
> about 4.0.16 that's not mentioned in the release notes?
> I am running on redhat 7.2.
>
> TIA!
> Mark, Singapore.
>
>  ERG Group --
>  The contents of this email and any attachments are confidential
>  and may only be read by the intended recipient.
> -
>
>
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



best practices for running 3.23 and 4.016 on the same box?

2003-11-06 Thread Mark Teehan
Hi all
I am planning an upgrade of a very busy 3.23 server to 4.0.16. As we don't
have a test box, I have to install both releases on the same machine, and
test 4.0.16 for some time.  For 4.0.16 I will unpack a tar.gz, as I don't
trust rpm not to clobber at least some of the 3.23 installation. I will
then set up a new 4016/data directory, and copy each database in for
testing. My question is: can someone give me some guidelines on the best
practices for setting this up? How do I make the two installations completely
independent of each other? Also, is there anything I should know about 4.0.16
that's not mentioned in the release notes?
I am running on redhat 7.2.

TIA!
Mark, Singapore.

 ERG Group --
 The contents of this email and any attachments are confidential
 and may only be read by the intended recipient.
-


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Best practices

2003-08-14 Thread Miguel Perez
Hi list:

I would like to know some best practices for managing InnoDB tables.  I
have an ibdata file whose size is 4.5GB, and it increases every day; the
maximum size of the hard disk is about 330GB.  The question is: should I
split this ibdata file into several files so that I can use all of that
space?  If I split it into several files, what is the best size for each file?
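
For reference, the shared tablespace is laid out with innodb_data_file_path
in my.cnf; a hedged sketch only (the first entry must list the existing file
at exactly its current size, and the other sizes here are arbitrary):

[mysqld]
innodb_data_file_path = ibdata1:4500M;ibdata2:2000M:autoextend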

Thanks in advance

Regards

Mikel.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Best Practices for mySQL Backups in Enterprise

2003-06-27 Thread Subhakar Burri
If you can identify the log files to apply, you can issue the following command:
mysqlbinlog log_file_to_apply | mysql -h server-name
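
A sketch of the full point-in-time sequence (the file names are placeholders;
the binary logs are the numbered files the server writes when binary logging
is enabled):

# 1. Restore the last full dump.
gunzip < backup-0800.sql.gz | mysql -h server-name

# 2. Replay the binary logs written since that dump, oldest first.
mysqlbinlog server-bin.042 | mysql -h server-name
mysqlbinlog server-bin.043 | mysql -h server-name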

hth,
SB

-Original Message-
From: Nils Valentin [mailto:[EMAIL PROTECTED]
Sent: Friday, June 27, 2003 5:53 PM
To: gerald_clark; Subhakar Burri
Cc: Keith C. Ivey; [EMAIL PROTECTED]
Subject: Re: Best Practices for mySQL Backups in Enterprise


Hi Gerald,

Do you know of some good information about this? It seems like I need to brush up
a bit on it.

I don't understand how you would do a roll forward for a MySQL table -
especially if the backup is, let's say, from 8 AM and the crash is at 2 PM.

Best regards
Nils Valentin
Tokyo/Japan

On Friday, June 27, 2003 at 22:08, gerald_clark wrote:
> Yes, if you have transaction logging turned on.
> You can edit the transaction log, and run it against the restored database.
>
> Subhakar Burri wrote:
> >Can I roll forward if I do backups using Mysqldump? Say, I did backups
> > using Mysqldump @ 8:00 AM and my instance crashed @ 2:00 PM. I can
> > restore the tables (both Innodb and MyISAM tables) from my 8:00AM backup,
> > but can I roll forward the data that changed after 8:00 AM or do I lose
> > the data after 8:00 AM. Pls clarify... a link to relevant information
> > would be just fine too...
> >
> >Thankx in advance
> >SB

-- 
---
Valentin Nils
Internet Technology

 E-Mail: [EMAIL PROTECTED]
 URL: http://www.knowd.co.jp
 Personal URL: http://www.knowd.co.jp/staff/nils



Re: Best Practices for mySQL Backups in Enterprise

2003-06-27 Thread Nils Valentin
Hi Gerald,

Do you know of some good information about this? It seems like I need to brush up
a bit on it.

I don't understand how you would do a roll forward for a MySQL table -
especially if the backup is, let's say, from 8 AM and the crash is at 2 PM.

Best regards
Nils Valentin
Tokyo/Japan

On Friday, June 27, 2003 at 22:08, gerald_clark wrote:
> Yes, if you have transaction logging turned on.
> You can edit the transaction log, and run it against the restored database.
>
> Subhakar Burri wrote:
> >Can I roll forward if I do backups using Mysqldump? Say, I did backups
> > using Mysqldump @ 8:00 AM and my instance crashed @ 2:00 PM. I can
> > restore the tables (both Innodb and MyISAM tables) from my 8:00AM backup,
> > but can I roll forward the data that changed after 8:00 AM or do I lose
> > the data after 8:00 AM. Pls clarify... a link to relevant information
> > would be just fine too...
> >
> >Thankx in advance
> >SB

-- 
---
Valentin Nils
Internet Technology

 E-Mail: [EMAIL PROTECTED]
 URL: http://www.knowd.co.jp
 Personal URL: http://www.knowd.co.jp/staff/nils


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Best Practices for mySQL Backups in Enterprise

2003-06-27 Thread Subhakar Burri
Jeremy/Gerald

I just tested it with an insert statement on a MyISAM table and an InnoDB table, and I
saw both insert statements in the binary log. So, I can roll forward these
changes, right?
I still don't see how your answer (MyISAM doesn't have transactions) relates to my
initial question of whether I can roll forward the changes to both table types ... Am I
missing something? Please clarify.

SB

-----Original Message-----
From: gerald_clark [mailto:[EMAIL PROTECTED]
Sent: Friday, June 27, 2003 11:39 AM
To: [EMAIL PROTECTED]
Subject: Re: Best Practices for mySQL Backups in Enterprise


Ok, update log.

Jeremy Zawodny wrote:

>On Fri, Jun 27, 2003 at 08:08:40AM -0500, gerald_clark wrote:
>  
>
>>Yes, if you have transaction logging turned on.
>>You can edit the transaction log, and run it against the restored database.
>>
>>
>
>MyISAM doesn't have transactions.
>
>Jeremy
>  
>





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Best Practices for mySQL Backups in Enterprise

2003-06-27 Thread gerald_clark
Ok, update log.

Jeremy Zawodny wrote:

On Fri, Jun 27, 2003 at 08:08:40AM -0500, gerald_clark wrote:
 

Yes, if you have transaction logging turned on.
You can edit the transaction log, and run it against the restored database.
   

MyISAM doesn't have transactions.

Jeremy
 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Best Practices for mySQL Backups in Enterprise

2003-06-27 Thread Jeremy Zawodny
On Fri, Jun 27, 2003 at 08:08:40AM -0500, gerald_clark wrote:
> Yes, if you have transaction logging turned on.
> You can edit the transaction log, and run it against the restored database.

MyISAM doesn't have transactions.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/

MySQL 4.0.13: up 24 days, processed 772,819,180 queries (365/sec. avg)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Best Practices for mySQL Backups in Enterprise

2003-06-27 Thread gerald_clark
Yes, if you have transaction logging turned on.
You can edit the transaction log, and run it against the restored database.
Subhakar Burri wrote:

Can I roll forward if I do backups using Mysqldump? Say, I did backups using
Mysqldump @ 8:00 AM and my instance crashed @ 2:00 PM. I can restore the tables
(both Innodb and MyISAM tables) from my 8:00AM backup, but can I roll forward
the data that changed after 8:00 AM or do I lose the data after 8:00 AM. Pls
clarify... a link to relevant information would be just fine too...

Thankx in advance
SB
 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Best Practices for mySQL Backups in Enterprise

2003-06-26 Thread Nils Valentin
Hi Subhakar,

I would be interested to know what you mean by "roll forward"?

If you have another backup, let's say from 10 AM, you could use that one, but if
you don't have another backup, where do you want to roll forward from?

Am I missing something here?

Best regards

Nils Valentin
Tokyo/Japan


On Friday, June 27, 2003 at 09:03, Subhakar Burri wrote:
> Can I roll forward if I do backups using Mysqldump? Say, I did backups
> using Mysqldump @ 8:00 AM and my instance crashed @ 2:00 PM. I can restore
> the tables (both Innodb and MyISAM tables) from my 8:00AM backup, but can I
> roll forward the data that changed after 8:00 AM or do I lose the data
> after 8:00 AM. Pls clarify... a link to relevant information would be just
> fine too...
>
> Thankx in advance
> SB
>
> -Original Message-
> From: Keith C. Ivey [mailto:[EMAIL PROTECTED]
> Sent: Thursday, June 26, 2003 2:49 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Best Pratices for mySQL Backups in Enterprise
>
> On 26 Jun 2003 at 17:16, Ware Adams wrote:
> > mysqldump creates text files containing insert statements that
> > recreate a table and repopulate it with data.  They are somewhat
> > portable across database servers and human editable if necessary.
> > They take up less space than the original table because they do not
> > contain indices (only the statements that would create the indices).
>
> The dump file will be larger than the MyISAM data file for the
> original table -- especially if you have many non-text columns
> (dates, numbers, ENUM columns, etc.).  In some cases, when you have
> large indexes, the index file will be large enough that it and the
> data file combined will be larger than the dump file, but in some
> cases it won't be.  I wouldn't consider the difference in size,
> whichever way it goes, to be significant in deciding between backup
> methods.
>
> --
> Keith C. Ivey <[EMAIL PROTECTED]>
> Tobacco Documents Online
> http://tobaccodocuments.org
>

-- 
---
Valentin Nils
Internet Technology

 E-Mail: [EMAIL PROTECTED]
 URL: http://www.knowd.co.jp
 Personal URL: http://www.knowd.co.jp/staff/nils


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Best Practices for mySQL Backups in Enterprise

2003-06-26 Thread Subhakar Burri
Can I roll forward if I do backups using Mysqldump? Say, I did backups using Mysqldump 
@ 8:00 AM and my instance crashed @ 2:00 PM. I can restore the tables (both Innodb and 
MyISAM tables) from my 8:00AM backup, but can I roll forward the data that changed 
after 8:00 AM or do I lose the data after 8:00 AM. Pls clarify... a link to relevant 
information would be just fine too...

Thankx in advance
SB

-Original Message-
From: Keith C. Ivey [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 26, 2003 2:49 PM
To: [EMAIL PROTECTED]
Subject: Re: Best Pratices for mySQL Backups in Enterprise


On 26 Jun 2003 at 17:16, Ware Adams wrote:

> mysqldump creates text files containing insert statements that
> recreate a table and repopulate it with data.  They are somewhat
> portable across database servers and human editable if necessary. 
> They take up less space than the original table because they do not
> contain indices (only the statements that would create the indices). 

The dump file will be larger than the MyISAM data file for the 
original table -- especially if you have many non-text columns 
(dates, numbers, ENUM columns, etc.).  In some cases, when you have 
large indexes, the index file will be large enough that it and the 
data file combined will be larger than the dump file, but in some 
cases it won't be.  I wouldn't consider the difference in size, 
whichever way it goes, to be significant in deciding between backup 
methods.

-- 
Keith C. Ivey <[EMAIL PROTECTED]>
Tobacco Documents Online
http://tobaccodocuments.org




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Best Practices

2002-07-18 Thread Håkon Eriksen

Imro STROK <[EMAIL PROTECTED]> writes:

> But I would also like to have some "MySQL Best Practices"  documents
> regarding:
> * Performance & Tuning 
> * Backup & Recovery
> Appreciate if you guys/gals can send me some documents.

Take a look at <http://www.onlamp.com/pub/a/onlamp/2002/07/11/MySQLtips.html>

-- 
 - håkon

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




MySQL Best Practices

2002-07-16 Thread Imro STROK

Hi,

We, StarHub Pte Ltd, decided to use MySQL/Linux for one of our new projects.
Since all of our DBAs are Oracle trained, we need to speed up our MySQL
skills.
Therefore we already installed MySQL (3.23.51) on our Linux server (Red Hat
Linux release 7.2)
to play with this database. Using the book "Teach Yourself MySQL in 21 days"
by Mark Maslakowski
is very helpful for speeding up our MySQL skills.
But I would also like to have some "MySQL Best Practices"  documents
regarding:
*   Performance & Tuning 
*   Backup & Recovery

Appreciate if you guys/gals can send me some documents.

Best Regards,
Imro Strok

StarHub Pte Ltd
Information Technology
Phone: +65 6825 5703
Email  : [EMAIL PROTECTED]
Website: www.starhub.com


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: best practices

2002-01-10 Thread Roger Baklund

* adam nelson
> Management seems like the biggest reason for me.  Just from a time spent
> point of view, I would go with 16 tables instead of 1600.  Not only
> that, I wonder if there would be a big memory hit from having all those
> objects open at once.  Just seems to me that mysql was designed for big
> tables, 1600 tables is really quite a few.

Yes, I would think the size of the tables is the most important issue...
with small data amounts it probably is better to keep it in one db, if
possible... but 1600 x 100M is probably better than 16 x 10G... and it would
make a difference if you use innodb compared to myisam, and of course the
2GB and 4GB and any other OS limits may apply.

There is also a security aspect: if the 100 customers are going to have
access to the same database, they will also be able to see each other's data.

Maybe the 100 customers have telnet access to the server and are responsible
for their own backups? That would be no problem if each had a separate
database. It's no problem to make backup scripts that extract one customer's
data from one big db, but... it's easier not having to do that. :)
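
mysqldump's --where option is one way to script that kind of per-customer
extraction (the database, table and column names here are invented for the
example):

mysqldump --where="customer_id=42" bigdb orders invoices \
  | gzip > customer42.sql.gz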

Maybe the software using the database can be installed on the company
intranet, along with a copy of the database? Maybe a fresh copy could be
downloaded each morning. That would be really easy with myisam tables and
separate databases, while it would require some export script and mysqld
processing for the one-db solution.

Depending on the application using the 16 tables, maybe some tables could be
shared? With mysql you can use multiple databases in a single select,
simplifying this kind of sharing: just make a db called "shared" or
similar, put the shared tables in it, and refer to "shared.tablename" in the
queries. This could reduce required disk space and improve overall
performance.
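
For example, a query in one customer's database can pull in a shared table
simply by qualifying it (the table and column names are made up):

SELECT o.order_id, o.amount, p.name
FROM customer042.orders o, shared.products p
WHERE p.product_id = o.product_id;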

I'm sure there are more things to consider, these just came to mind. :)

--
Roger


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: best practices

2002-01-10 Thread adam nelson

Management seems like the biggest reason for me.  Just from a time spent
point of view, I would go with 16 tables instead of 1600.  Not only
that, I wonder if there would be a big memory hit from having all those
objects open at once.  Just seems to me that mysql was designed for big
tables; 1600 tables is really quite a few.

You do have a point though.  I just don't think it would help that much
(especially if it's using a lot of indexes).  Indexes will slow down
the writes, but that doesn't matter much since it will be at night.


-Original Message-
From: Roger Baklund [mailto:[EMAIL PROTECTED]] 
Sent: Thursday, January 10, 2002 2:04 PM
To: [EMAIL PROTECTED]
Cc: adam nelson
Subject: RE: best practices


* Stephen S Zappardo 
>> a) 1 db with 16 tables b) 100 dbs each with 16 tables
* adam nelson
> Certainly 1 db with 16 tables.  

Why? Normally, bigger means slower...

-- 
Roger





-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: best practices

2002-01-10 Thread Roger Baklund

* Stephen S Zappardo 
>> a) 1 db with 16 tables b) 100 dbs each with 16 tables
* adam nelson
> Certainly 1 db with 16 tables.  

Why? Normally, bigger means slower...

-- 
Roger

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: best practices

2002-01-10 Thread adam nelson

Certainly 1 db with 16 tables.  Since it will be read-only by day and
write-only at night, I would also use MyISAM instead of InnoDB - although I
could be wrong; anyone else?

-Original Message-
From: Stephen S Zappardo [mailto:[EMAIL PROTECTED]] 
Sent: Thursday, January 10, 2002 12:13 PM
To: [EMAIL PROTECTED]
Subject: best practices


Hello,

I'm trying to determine the best way to set up a new mysql install. I have
about 100 clients and each client has the same 16 tables. In a year's time
there will be about 3.6 million total rows spread out between the tables.
The tables will be updated nightly from a legacy system. All other i/o will
be reads only.

In a raid5 configuration, which is a better design as far as access speed:
a) 1 db with 16 tables b) 100 dbs each with 16 tables

Thanks,
steve





-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




best practices

2002-01-10 Thread Stephen S Zappardo

Hello,

I'm trying to determine the best way to set up a new mysql install. I have
about 100 clients and each client has the same 16 tables. In a year's time
there will be about 3.6 million total rows spread out between the tables.
The tables will be updated nightly from a legacy system. All other i/o will
be reads only.

In a raid5 configuration, which is a better design as far as access speed:
a) 1 db with 16 tables b) 100 dbs each with 16 tables

Thanks,
steve


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php