| Key_blocks_used | 1687734 |
It won't seem to go over 1687734. I have constantly growing indexed
tables, so it's not a matter of all the keys being in memory. I've
restarted the server, and it still stops at that number. my.cnf has
key_buffer = 3072M, and indeed:
| key_buffer_size
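For what it's worth, Key_blocks_used has a hard ceiling of key_buffer_size divided by the key cache block size (1024 bytes by default), so a plateau below that ceiling usually just means the indexes fit. A sketch of the check (assuming a MySQL version where key_cache_block_size is exposed, i.e. 4.1 and later):

```sql
-- Configured buffer and block size (key_cache_block_size defaults to 1024)
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW VARIABLES LIKE 'key_cache_block_size';

-- Current usage
SHOW STATUS LIKE 'Key_blocks_used';

-- Rough ceiling: key_buffer_size / key_cache_block_size
-- e.g. 3072M / 1024 bytes = 3,145,728 blocks, so a plateau at
-- 1,687,734 blocks suggests the indexes simply fit in the buffer.
```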
I think part of the confusion stems from the dynamic
tables I was creating with Dreamweaver. I thought they
were a necessary part of the equation, when they may
in fact be optional.
Do you mean temporary tables? These are only necessary when there's no way
to solve the problem with a join.
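As an illustration of solving it with a join instead of a temporary table (table and column names here are hypothetical, not from the thread):

```sql
-- Hypothetical schema: customers(id, name), orders(customer_id, total).
-- Rather than materialising the matching rows into a temporary table
-- first, join the two tables directly:
SELECT c.name, SUM(o.total) AS spent
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.name;
```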
--- Jigal van Hemert [EMAIL PROTECTED] wrote:
Great, you've given me a lot of ideas (and examples)
to play with. Thanks!
--
MySQL General
--- Jigal van Hemert [EMAIL PROTECTED] wrote:
If you need to know how to display the resulting
record sets, example 1 on:
http://www.php.net/manual/en/ref.mysql.php
gives you a complete piece of code to print out the
resulting records.
OK, I think this example points out what I'm doing
Jigal van Hemert writes:
Do you mean temporary tables? These are only necessary when there's no way
to solve the problem with a join.
Actually a temporary table can be used with a join to do what is usually
known as a sub-select or sub-query. In this fashion you select the elements
that
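In MySQL versions without sub-query support, the pattern Brad describes might look like this (names hypothetical):

```sql
-- Emulate: SELECT * FROM orders WHERE customer_id IN (SELECT ...)
CREATE TEMPORARY TABLE big_spenders
SELECT customer_id
FROM orders
GROUP BY customer_id
HAVING SUM(total) > 1000;

-- Join against the temporary table instead of a sub-select
SELECT o.*
FROM orders o
JOIN big_spenders b ON b.customer_id = o.customer_id;

DROP TABLE big_spenders;
```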
Aloha!
I want to create a date table which will get about 2 thousand entries per
year and should easily handle 10 years, which means about 20,000 entries.
Is that realistic? Or shall I reorganize my database?
At what point does a table have too many entries and MySQL gets slow?
Thanks for your help,
Karsten,
I wouldn't worry about this - there are plenty of examples where people are
running tables with millions of records without significant performance
problems.
Of course it all depends on the complexity - will you have many tables and
many joins in your queries ?
Just make sure to index
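For a date table of this size, one sketch of what "make sure to index" could look like (column names are hypothetical):

```sql
CREATE TABLE events (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  event_date DATE NOT NULL,
  description VARCHAR(255),
  PRIMARY KEY (id),
  KEY idx_event_date (event_date)  -- lets date range queries use the index
) TYPE=MyISAM;

-- With ~20,000 rows this stays fast even for a full-year range scan:
SELECT * FROM events
WHERE event_date BETWEEN '2004-01-01' AND '2004-12-31';
```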
Well, in fact it is the table that is kind of the heart of it all. And there
surely will be some other tables depending on it, but there probably won't
be serious traffic between them, and complex is really not the right word for
that.
But when others handle millions of entries I will be able to
Hi,
I teach a class using Oracle and MySQL and I store a few things like creating tables
for student labs in SQL files. I have them load these files by typing:
SOURCE filename.sql;
from the MySQL prompt.
A nice feature in Oracle is that you can set a default path where the client looks for
At 14:27 -0500 5/15/04, Josh Trutwin wrote:
Hi,
I teach a class using Oracle and MySQL and I store a few things like
creating tables for student labs in SQL files. I have them load
these files by typing:
SOURCE filename.sql;
from the MySQL prompt.
A nice feature in Oracle is that you can set a
On Sat, 15 May 2004 14:38:52 -0500
Paul DuBois [EMAIL PROTECTED] wrote:
snip
Is there something similar in MySQL?
No.
Dang - oh well. :)
snip
Also, any idea why SOURCE filename.sql; works in MySQL 5.0.0 and \.
filename.sql; does not?
Leave out the trailing semicolon for short-form
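Presumably this is because `\.` is a client short command that takes the rest of the line literally as the file name, semicolon included, while SOURCE tolerates the terminator:

```sql
mysql> \. filename.sql       -- works: no trailing semicolon
mysql> SOURCE filename.sql;  -- also works, per the thread
```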
At 15:05 -0500 5/15/04, Josh Trutwin wrote:
On Sat, 15 May 2004 14:38:52 -0500
Paul DuBois [EMAIL PROTECTED] wrote:
snip
Is there something similar in MySQL?
No.
Dang - oh well. :)
snip
Also, any idea why SOURCE filename.sql; works in MySQL 5.0.0 and \.
filename.sql; does not?
Leave out
Brad Eacker wrote:
Jigal van Hemert writes:
Do you mean temporary tables? These are only necessary when there's no
way
to solve the problem with a join.
Actually a temporary table can be used with a join to do what is
usually
known as a sub-select or sub-query. In this fashion you
Joshua Beall wrote:
Hi All,
Is there a way to automatically optimize a table any time data is changed? I
have a table that only has changes made to it occasionally (on average,
probably 1 row updated per day over a 1-week period), and I would like it to
automatically optimize the table, rather
Daniel Kasak [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Is there any particular reason why you think the table will need
optimizing, or do you just want everything to be super-optimized?
Because when I pull up phpMyAdmin, and it says there is 3,768 bytes of
overhead, I just feel
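For the record, the optimization itself is one statement; a hedged suggestion (not from the thread) is to run it occasionally from a scheduler such as cron rather than after every change, since a few KB of overhead is harmless:

```sql
-- Reclaims unused space and sorts the index pages of a MyISAM table.
-- Fine to run occasionally; unnecessary after every single update.
OPTIMIZE TABLE my_table;  -- table name hypothetical
```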
Hi List,
Recently, a machine running an instance of MySQL crashed and had to be
rebooted. Cause still unknown, but could have been a RAID controller
problem. Since then, the MySQL server has been extremely slow. For
instance, a SELECT that would have taken 1 second can take 70
seconds.
I have a table that is:
CREATE TABLE GPSData (
ID int(10) unsigned NOT NULL auto_increment,
Lat decimal(9,5) default '0.0',
Lon decimal(9,5) default '0.0',
TDate datetime default NULL,
PRIMARY KEY (ID),
UNIQUE KEY ID (ID),
KEY ID_2 (ID)
) TYPE=MyISAM;
When I insert a GPS log
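Since the slowdown began after a crash, one plausible first step (not suggested in the thread itself) is to check the MyISAM table for corruption; it is also worth noting the schema above defines three indexes on ID, two of which the PRIMARY KEY already makes redundant:

```sql
CHECK TABLE GPSData;    -- reports data/index inconsistencies after a crash
REPAIR TABLE GPSData;   -- rebuilds the data file and indexes if damaged
ANALYZE TABLE GPSData;  -- refreshes key distribution statistics

-- Optional cleanup: drop the redundant duplicate indexes on ID
ALTER TABLE GPSData DROP INDEX ID, DROP INDEX ID_2;
```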