Answers intermingled below
--- Bruno B B Magalhães <[EMAIL PROTECTED]> wrote:
> Hi guys I need some help with two things...
>
> I have the following table:
>
> CREATE TABLE `telephones` (
>`contact_id` int(20) unsigned NOT NULL default '0',
>`telephone_id` int(20) unsigned NOT NUL
You can modify the algorithm I proposed to find groups of records that
are likely to have duplicate chunks. Simply record only a fraction of
the hashes, something like: keep a hash only if
md5(concat(word1,word2,...,word20)) % 32 = 0.
Disk usage for this table will be maybe 60 bytes per record, if your
average word is 8 bytes.
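A minimal sketch of that sampling idea in Python (sliding 20-word windows and the 1/32 sampling rate are taken from the message above; the function name and the use of sliding windows rather than disjoint chunks are my assumptions):

```python
import hashlib

def sampled_shingle_hashes(text, shingle_size=20, modulus=32):
    """Hash every run of `shingle_size` consecutive words and keep
    roughly 1/`modulus` of the hashes, as in md5(concat(...)) % 32 = 0."""
    words = text.split()
    kept = []
    for i in range(len(words) - shingle_size + 1):
        chunk = " ".join(words[i:i + shingle_size])
        h = int(hashlib.md5(chunk.encode("utf-8")).hexdigest(), 16)
        if h % modulus == 0:
            kept.append(h)
    return kept
```

Two records that share a long duplicated chunk will, with high probability, share at least one kept hash, so grouping records by kept hash surfaces the likely duplicates while storing only a small fraction of all hashes.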
Thanks for your answer. It would certainly work provided there is
enough disk space to do that. I thought of something like
that but was hoping I could leverage fulltext and just
record the fulltext score between each pair of records.
Then I can group all records that highly correlate.
Bastian Balthazar Bux wrote:
We need to track modifications to the records too, so the route has
been to keep them all in a separate, mirrored database.
If the "real" table looks like this:
CREATE TABLE `users` (
`id` int(11) NOT NULL auto_increment,
`ts` timestamp NOT NULL
default
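A small runnable sketch of that mirrored-table approach, using Python's `sqlite3` as a stand-in for MySQL (the `users_history` table name, the `update_user` helper, and the copy-before-update step are illustrative assumptions, not from the original message):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
  id   INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT,
  ts   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Mirror table: same columns, plus its own audit key.
CREATE TABLE users_history (
  hist_id INTEGER PRIMARY KEY AUTOINCREMENT,
  id      INTEGER,
  name    TEXT,
  ts      TIMESTAMP
);
""")

def update_user(conn, user_id, new_name):
    # Copy the current row into the history table, then modify it,
    # so every prior version of the record is preserved.
    conn.execute("INSERT INTO users_history (id, name, ts) "
                 "SELECT id, name, ts FROM users WHERE id = ?", (user_id,))
    conn.execute("UPDATE users SET name = ? WHERE id = ?", (new_name, user_id))

conn.execute("INSERT INTO users (name) VALUES ('alice')")
update_user(conn, 1, "alicia")
```

In MySQL the same copy-then-update could be wrapped in a trigger or stored procedure so application code cannot skip the history write.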
Saqib Ali wrote:
> Hello All,
>
> What are best practices for deleting records in a DB. We need the
> ability to restore the records.
>
> Two obvious choices are:
>
> 1) Flag them deleted or undeleted
> 2) Move the deleted records to a separate table for deleted records.
>
> We have a complex sc
Saqib Ali wrote:
Hello All,
What are best practices for deleting records in a DB. We need the
ability to restore the records.
Two obvious choices are:
1) Flag them deleted or undeleted
2) Move the deleted records to a separate table for deleted records.
We have a complex schema. However the t
Saqib Ali wrote:
Hello All,
What are best practices for deleting records in a DB. We need the
ability to restore the records.
Two obvious choices are:
1) Flag them deleted or undeleted
2) Move the deleted records to a separate table for deleted records.
The first is what I like more.
While in
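A minimal sketch of that first option (the deleted/undeleted flag), again using `sqlite3` as a stand-in for MySQL; the table layout and helper names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE records (
    id      INTEGER PRIMARY KEY,
    payload TEXT,
    deleted INTEGER NOT NULL DEFAULT 0  -- 0 = live, 1 = soft-deleted
)""")
conn.executemany("INSERT INTO records (payload) VALUES (?)",
                 [("a",), ("b",), ("c",)])

def soft_delete(conn, rec_id):
    # "Delete" is just setting the flag; the row stays restorable.
    conn.execute("UPDATE records SET deleted = 1 WHERE id = ?", (rec_id,))

def restore(conn, rec_id):
    conn.execute("UPDATE records SET deleted = 0 WHERE id = ?", (rec_id,))

soft_delete(conn, 2)
# Every normal query must now filter on the flag:
live = [r[0] for r in conn.execute(
    "SELECT payload FROM records WHERE deleted = 0 ORDER BY id")]
```

The price of this approach is that every query (and every unique index) has to account for the `deleted` column, which is why some schemas prefer the separate-table option instead.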
-Original Message-
From: "Michael Haggerty" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, November 10, 2004 11:27 AM
Subject: Re: Best Practices
Yes, there can be a small lag in data updates, in fact
I believe the lag time will be less than a second
consi
Yes, there can be a small lag in data updates, in fact
I believe the lag time will be less than a second
considering our architecture.
We have been considering replication as a solution but
have been hesitant to do so because I have heard there
are problems with data inserted through a LOAD DATA
Can there be a small lag between servers? If a second or two
is acceptable, this sounds like a perfect environment for
replication:
http://dev.mysql.com/doc/mysql/en/Replication.html
Basically, when the master writes something to the database,
it also logs the transaction to a log file. The slave s
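The log-then-replay flow described above can be sketched in a few lines of Python, with two `sqlite3` databases standing in for the master and the slave and a plain list standing in for the binary log (all names here are illustrative, not MySQL internals):

```python
import sqlite3

master = sqlite3.connect(":memory:")
slave = sqlite3.connect(":memory:")
binlog = []  # stand-in for the master's binary log file

def master_execute(sql, params=()):
    # The master applies the write locally AND appends it to the log.
    master.execute(sql, params)
    binlog.append((sql, params))

def slave_catch_up(applied_upto):
    # The slave reads any new log entries and replays them in order.
    for sql, params in binlog[applied_upto:]:
        slave.execute(sql, params)
    return len(binlog)  # new replication position

master_execute("CREATE TABLE t (x INTEGER)")
master_execute("INSERT INTO t VALUES (?)", (1,))
pos = slave_catch_up(0)
master_execute("INSERT INTO t VALUES (?)", (2,))
pos = slave_catch_up(pos)
```

The replay is asynchronous, which is exactly where the "small lag" discussed in this thread comes from: the slave is correct up to its replication position, not necessarily up to this instant.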
It sounds to me like they want two databases (they probably need to be on
two separate servers) and that your logging application may need to pull
double duty. You are being asked to keep an OLTP database in sync with an
OLAP database in real time. That means that you probably need to commit
ch
Hello all,
I am using this script and it takes 100% of the CPU; can anyone tell me
how to optimize this:
insert into incoming
select s.Date as Datein, s.Time as Timein, e.Date as Dateend, e.Time as
Timeend, s.CallingStationId, s.CalledStationId,
SEC_TO_TIME(unix_timestamp(concat(e.Date,' ',e
Have you thought about locking the reporting database for write? You
could eliminate the dirty reads.
If you are using InnoDB on the reporting tables, you could use a
transaction for the update operation. That would accomplish the same
thing.
You could use replication to move the load to another
In a message dated 2/11/2004 2:26:09 PM Eastern Standard Time,
[EMAIL PROTECTED] writes:
I read this over and over.. I am curious why replication is such high
finance?? I run it here. The production system is a high-finance machine and the
replicated box is an old clunker, basically. It doesn't t
In a message dated 2/11/2004 4:44:00 PM Eastern Standard Time,
[EMAIL PROTECTED] writes:
Hi,
I do just this at the moment - I have a cron job that runs mysqldump, gzips
the output, and will then ftp the important files to a machine that gets
backed up to a tape drive. I also time the dump, and
whole thing simply uses the MS scheduler in windows.
Might be a help
Paul
> -Original Message-
> From: Michael McTernan [mailto:[EMAIL PROTECTED]
> Sent: 11 February 2004 21:41
> To: David Brodbeck; Michael Collins
> Cc: [EMAIL PROTECTED]
> Subject: RE: best-practices bac
[mailto:[EMAIL PROTECTED]
> Sent: 11 February 2004 19:27
> To: 'Michael McTernan'; Michael Collins
> Cc: [EMAIL PROTECTED]
> Subject: RE: best-practices backups
>
>
> > > -Original Message-
> > > From: Michael Collins [mailto:[EMAIL PROTECTED]
> -Original Message-
> From: Madscientist [mailto:[EMAIL PROTECTED]
> We use this mechanism, but we do our mysqldumps from a slave
> so the time doesn't matter.
Excellent idea.
> Interesting side effect: A GZIP of the data files is _huge_.
> A GZIP of the
> mysqldump is _tiny_. For
From: "David Brodbeck" <[EMAIL PROTECTED]>
Sent: Wednesday, February 11, 2004 9:27 PM
> > > -Original Message-
> > > From: Michael Collins [mailto:[EMAIL PROTECTED]
>
> > > Is there any "best-practices" wisdom on what is the most preferable
> > > method of backing up moderately (~10-20,000
> > -Original Message-
> > From: Michael Collins [mailto:[EMAIL PROTECTED]
> > Is there any "best-practices" wisdom on what is the most preferable
> > method of backing up moderately (~10-20,000 record) MySQL 4
> > databases? A mysql dump to store records as text, the
> format provided
>
> > Is there any "best-practices" wisdom on what is the most preferable
> > method of backing up moderately (~10-20,000 record) MySQL 4
> > databases? A mysql dump to store records as text, the
> format provided
> > by the BACKUP sql command, or some other method? I am not asking
> > about replic
Hi,
I'd love to see this too. Even if it was a book that cost £40 to buy, I'd
get a copy.
Hey, maybe someone can recommend a book - I've looked hard and not really
come up with anything better than the MySQL manual, which while great, is
missing the 'best practices' :(
Thanks,
Mike
> -Ori
For databases I usually just make a backup for each day of the month.
After all, disk space is cheap. So if a month has 31 days, I have 31
backups. That gives you about 30 days to discover any corruption that
may have occurred in a database. A crashed database is obvious, but
corruption usually
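The day-of-month rotation described above needs nothing more than a file name keyed on the day, so each dump overwrites the one from roughly a month earlier. A small sketch (the `mydb` prefix and `.sql.gz` suffix are illustrative):

```python
from datetime import date

def backup_name(day, prefix="mydb"):
    """Backup file name keyed by day of month: at most 31 files exist,
    and each is overwritten roughly a month after it was written."""
    return f"{prefix}-{day:02d}.sql.gz"

# e.g. a dump taken on the 9th replaces last month's mydb-09.sql.gz
today_file = backup_name(date.today().day)
```

This gives the ~30-day window to notice corruption that the message describes, with a fixed, bounded amount of disk.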
On Tue, 09 Dec 2003 15:26:10 -0600 Tariq Murtaza <[EMAIL PROTECTED]>
wrote:
> Please comment on Best Practices for sharing members database between
> different portals.
>
> Suppose we have 3 portals running on different networks.
> Assignment is to make a single Login/Pass for all portals, means
Hi Mark,
there is no problem running both MySQL servers 3.23 and 4.0.16.
Just pay attention (particularly to the options files):
- configure two my.cnf files; each version must have its own server-specific
options file.
Note that you cannot put both files in the same place (/etc/).
Place each of them i
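As a sketch of what the two option files might look like (every path and port below is illustrative, not taken from the original message; the key point is that the two servers must not share a port, socket, or datadir):

```ini
# /etc/my-3.23.cnf
[mysqld]
port    = 3306
socket  = /var/lib/mysql323/mysql.sock
datadir = /var/lib/mysql323

# /etc/my-4.0.cnf
[mysqld]
port    = 3307
socket  = /var/lib/mysql40/mysql.sock
datadir = /var/lib/mysql40
```

Each server is then started pointing at its own file, e.g. with `--defaults-file=/etc/my-4.0.cnf`.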
. Ivey; [EMAIL PROTECTED]
Subject: Re: Best Practices for mySQL Backups in Enterprise
Hi Gerald,
Do you know some good information about it, seems like I need to brush up a
bit on this.
I dont understand how you want to do a roll forward for a MySQL table -
especially if the backup is lets ay
Hi Gerald,
Do you know of some good information about this? It seems like I need to brush up a
bit on it.
I don't understand how you want to do a roll forward for a MySQL table -
especially if the backup is, let's say, from 8 AM and the crash is at 2 PM.
Best regards
Nils Valentin
Tokyo/Japan
June 2003
my initial question of can I roll forward the changes to both table types ... Am I
missing something? Please clarify
missing something? Please clarify
SB
-Original Message-
From: gerald_clark [mailto:[EMAIL PROTECTED]
Sent: Friday, June 27, 2003 11:39 AM
To: [EMAIL PROTECTED]
Subject: Re: Best Practices for mySQL Backups in Enter
Ok, update log.
Jeremy Zawodny wrote:
On Fri, Jun 27, 2003 at 08:08:40AM -0500, gerald_clark wrote:
Yes, if you have transaction logging turned on.
You can edit the transaction log, and run it against the restored database.
MyISAM doesn't have transactions.
Jeremy
--
MySQL General M
On Fri, Jun 27, 2003 at 08:08:40AM -0500, gerald_clark wrote:
> Yes, if you have transaction logging turned on.
> You can edit the transaction log, and run it against the restored database.
MyISAM doesn't have transactions.
Jeremy
--
Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yah
Yes, if you have transaction logging turned on.
You can edit the transaction log, and run it against the restored database.
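In later MySQL versions the standard way to do this replay is `mysqlbinlog` piped into `mysql`, restricted to the window between the dump and the crash. A small helper that builds that pipeline (the log file name and the times are illustrative; `--start-datetime`/`--stop-datetime` are real `mysqlbinlog` options, though not available in the oldest versions discussed in this thread):

```python
import shlex

def roll_forward_cmd(binlogs, start, stop):
    """Build the shell pipeline that replays changes made between the
    dump time and the crash against the restored database."""
    replay = ["mysqlbinlog",
              f"--start-datetime={start}",
              f"--stop-datetime={stop}"] + list(binlogs)
    return " ".join(shlex.quote(a) for a in replay) + " | mysql"

cmd = roll_forward_cmd(["binlog.000007"],
                       "2003-06-27 08:00:00",
                       "2003-06-27 14:00:00")
```

Restoring the 8 AM dump and then running this command recovers the changes made up to 2 PM, which is exactly the roll forward being asked about.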
Subhakar Burri wrote:
Can I roll forward if I do backups using Mysqldump? Say, I did backups using Mysqldump @ 8:00 AM and my instance crashed @ 2:00 PM. I can restore the ta
Hi Subhakar,
I would be interested to know what you mean with roll forward ?
In case you have another backup let's say @10AM you could use this one, but if
you dont have another backup where do you want to do a roll forward from ??
Do I miss something here ??
Best regards
Nils Valentin
Tokyo/
Can I roll forward if I do backups using Mysqldump? Say, I did backups using Mysqldump
@ 8:00 AM and my instance crashed @ 2:00 PM. I can restore the tables (both Innodb and
MyISAM tables) from my 8:00AM backup, but can I roll forward the data that changed
after 8:00 AM or do I lose the data aft
* adam nelson
> Management seems like the biggest reason for me. Just from a time spent
> point of view, I would go with 16 tables instead of 1600. Not only
> that, I wonder if there would be a big memory hit from having all those
> objects open at once. Just seems to me that mysql was designed
Baklund [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 10, 2002 2:04 PM
To: [EMAIL PROTECTED]
Cc: adam nelson
Subject: RE: best practices
* Stephen S Zappardo
>> a) 1 db with 16 tables b) 100 dbs each with 16 tables
* adam nelson
> Certainly 1 db with 16 tables.
Why? Norma
* Stephen S Zappardo
>> a) 1 db with 16 tables b) 100 dbs each with 16 tables
* adam nelson
> Certainly 1 db with 16 tables.
Why? Normally, bigger means slower...
--
Roger
-
Before posting, please check:
http://www.mysql
Certainly 1 db with 16 tables. Since it will be read only and then
write only, I would also use MyISAM instead of InnoDB, although I could
be wrong, anyone else?
-Original Message-
From: Stephen S Zappardo [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 10, 2002 12:13 PM
To: [EMAIL P