Hello Ananda,
yes, the test machine has the same data.
Regards,
Spiker
--
GMX FreeMail: 1 GB mailbox, 5 e-mail addresses, 10 free SMS.
All info and free registration: http://www.gmx.net/de/go/freemail
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
If I'm replicating a master database to a slave (MyISAM tables), but the
slave is busy serving up web pages, how does replication get write access
to the slave's tables if they are always being read? TIA
Mike
Interesting, never tried compressing the data; sounds like that might be
a nice add-on. Do you have any performance numbers you can share? I posted
some performance numbers on one of my implementations some time ago.
I found the thread here:
http://lists.mysql.com/mysql/206337
On Tue, 3 Ju
And while you are at it, you may as well compress the chunks. Your machine
can probably compress/uncompress faster than it can write/read disk. I use
Perl's Compress::Zlib or PHP's equivalent instead of MySQL's function for two
reasons:
* The network traffic is compressed.
* Mysql puts an unnecess
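A minimal sketch of that application-side compression, in Python's `zlib` for illustration (the thread uses Perl/PHP, but the principle is identical): the bytes are compressed before they travel to the server, whereas MySQL's COMPRESS() runs server-side, so the network traffic stays uncompressed. Function names here are my own.

```python
import zlib

def compress_chunk(chunk: bytes) -> bytes:
    # Compressing in the application means the bytes on the wire are
    # already compressed; server-side COMPRESS() cannot give you that.
    return zlib.compress(chunk)

def decompress_chunk(stored: bytes) -> bytes:
    # Plain zlib round-trip; store the result in a BLOB column.
    return zlib.decompress(stored)
```

Whether this beats MySQL's own COMPRESS()/UNCOMPRESS() for you depends on where your CPU headroom is, so measure both.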
Rick is dead-on correct; I call it chunking blob data. There is an
article here on a simple implementation:
http://www.dreamwerx.net/phpforum/?id=1
I've had hundreds of thousands of files in this type of storage before
with no issues.
On Tue, 3 Jul 2007, Rick James wrote:
> I gave up on putt
Guys,
I would like to know if there is a way to have individual databases
under the same instance or server write to separate binary log
files. The idea is to have a separate binary log file for each
database on the same server. The problem I am experiencing is
sorting through the binary log
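As far as I know, MySQL keeps a single binary log per server and cannot write a separate log per database; what it offers is filtering. A my.cnf sketch (option names from the 5.0 manual; the database names are placeholders):

```ini
[mysqld]
log-bin      = mysql-bin
# binlog-do-db filters by the *default* (USE'd) database, so
# cross-database statements may be filtered in surprising ways.
binlog-do-db = db1
binlog-do-db = db2
```

For sorting through an existing log, mysqlbinlog's --database=db_name option extracts the entries for a single database, which may be closer to what you want than separate files.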
Rick James wrote:
> Instead I broke blobs into pieces, inserting them with a sequence number.
Understanding the underlying problem, that still seems like an
unnatural way to store pictures and documents.
> Added benefit: Does not clog up replication while huge single-insert is
> being copied over
I gave up on putting large blobs in Mysql -- too many limits around 16MB.
Instead I broke blobs into pieces, inserting them with a sequence number.
Added benefit: Does not clog up replication while huge single-insert is
being copied over network and reexecuted on slaves.
> -Original Message-
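The chunking Rick describes can be sketched in a few lines of Python. The 1 MiB chunk size and the (seq, chunk) row shape are my assumptions; the thread only specifies that each piece gets a sequence number so the blob can be reassembled in order, with each piece inserted as its own row.

```python
CHUNK_SIZE = 1024 * 1024  # 1 MiB per row keeps each INSERT well under 16MB

def chunk_blob(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a blob into (seq, chunk) pairs, one per future table row."""
    return [(seq, data[pos:pos + chunk_size])
            for seq, pos in enumerate(range(0, len(data), chunk_size))]

def reassemble(rows):
    """Rebuild the blob from (seq, chunk) rows, regardless of row order."""
    return b"".join(chunk for _, chunk in sorted(rows))
```

On the MySQL side each pair would become one row, e.g. in a table keyed on (file_id, seq), so no single statement ships the whole blob through replication.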
Hi.
I've been using mysqlcheck on some very large databases, and I'm running into
a situation where the partition the database files reside on is now getting
too small to handle the mysqlcheck temp files.
I've also checked to see if mysqlcheck has a tmpdir command-line option, and
it doesn't, and I al
Hello,
from the page
http://dev.mysql.com/doc/refman/5.0/en/show-warnings.html
I understand that if I want to look at all the warnings with the command:
show warnings;
then I first have to set a limit bigger than any number of warnings that
could occur, say:
(I know that it might be painfu
Thanks for the leads. I'll double check my indices and check out the
following links.
> http://dev.mysql.com/doc/refman/5.0/en/server-parameters.html
> http://dev.mysql.com/doc/refman/5.0/en/innodb-tuning.html
Please mount your disks using "forcedirectio".
Regards,
Juan
On 7/3/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Hello,
I've a performance problem with our database:
Some select statements take about 20 seconds.
The same statements on an identical test machine take less than 1 second.
Serve
Hey there again,
I suggest you look up a tutorial about database normalisation. Good ones
are often hard to find.
In general, you give all tables that contain data you will be
referencing in other tables a numeric id (INT(11) UNSIGNED) as primary
key, and you use that key as the constraint.
Rich Brant schrieb:
> Is there any way to prevent the temporary and filesort?
>
> SELECT
> t1.sourceID as sourceID,
> count(t1.sourceID) as clicks,
> [...]
> ORDER BY clicks desc, conversions desc;
>
> When using EXPLAIN:
>
> [...] Using where; Using temporary; Using filesort
Now, if I have a location table with id, name, address, phone, fax, etc.,
should I put the id or the name into the tag table?
If the id is used, then how do I look up the name, address, phone, fax,
etc. when I do a select on the tag table?
Thank you for all your help.
T. Hiep
-Original Message-
From: Mo
At 2:45 PM +1000 7/3/07, Daniel Kasak wrote:
On Mon, 2007-07-02 at 21:19 -0700, Ed Lazor wrote:
I have a 400mb database. The first query to tables takes about 90 seconds.
Additional queries take about 5 seconds. I wait a while and run a query
again; it takes about 90 seconds for the first
On 6/29/07, Rich Brant <[EMAIL PROTECTED]> wrote:
Hello all. I'm looking for help with the query below. Is there any way
to prevent the temporary and filesort?
The filesort is caused by either the ORDER BY or the GROUP BY. There
are sections in the manual about how to get it to use indexes for
Hi All,
We have setup replication for our production database. We need to do
monitoring of the slave and master.
I created a user with only "SELECT" privileges, and when I do "show master
status" on the master db, it's saying
"Access denied; you need the SUPER,REPLICATION CLIENT privilege for this
oper
Does your test machine have the same data as your problem database?
Can you also please show the explain plan from both machines?
On 7/3/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Hello,
I've a performance problem with our database:
Some select statements take about 20 seconds.
The
Hello,
I've a performance problem with our database:
Some select statements take about 20 seconds.
The same statements on an identical test machine take less than 1 second.
Server: CPU: 2 x 440 MHz sparcv9
RAM: 2GB
(top: Memory: 2048M real, 931M free, 732M swap in use, 2839M swap fre
I believe VARCHAR(50) means 50 characters, not 50 bytes.
So I usually don't worry about Japanese characters at all when
designing a table schema.
On 7/3/07, Cathy Murphy <[EMAIL PROTECTED]> wrote:
I am limiting text to 50 chars in a MySQL field with VARCHAR(50) (UTF-8
enabled),
but what if the user ent
From my experience with InnoDB:
If the field is indexed, it will use 3 bytes per character. So
VARCHAR(50) = 150 bytes when fully populated (+ 1 for the length =
151 bytes.)
If the field is not indexed, each character will consume between 1 and
3 bytes. So VARCHAR(50) = 51 -> 151 bytes
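The character-vs-byte distinction is easy to check directly. A purely illustrative Python check (the example string is my own): VARCHAR(50) counts characters, while Japanese characters occupy 3 bytes each in UTF-8, which is what drives the storage figures above.

```python
text = "データベース"  # 6 Japanese (katakana) characters

chars = len(text)                   # what VARCHAR(50) limits
octets = len(text.encode("utf-8"))  # what the storage engine holds

assert chars == 6
assert octets == 18  # 3 bytes per character for these code points
```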