Did you see this part of the stack trace?
It is possible that mysqld could use up to key_buffer_size +
(record_buffer + sort_buffer)*max_connections = 182271 K bytes of memory
Hope that's ok, if not, decrease some variables in the equation
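The arithmetic behind that mysqld warning is easy to check yourself. A minimal sketch with hypothetical values (not the poster's actual settings, which aren't shown here):

```python
# Hypothetical my.cnf values, all in KB -- not the original poster's settings
key_buffer_size = 16384
record_buffer = 128
sort_buffer = 512
max_connections = 100

# Worst-case memory, per the formula mysqld prints at startup
worst_case_kb = key_buffer_size + (record_buffer + sort_buffer) * max_connections
print(worst_case_kb)  # 80384
```

Lowering max_connections or the per-connection buffers shrinks the multiplied term, which is usually the dominant one.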
How much memory (MyISAM and InnoDB) are you allocating
me know how it goes.
David
-Original Message-
From: Julien Marchand [mailto:[EMAIL PROTECTED]
Sent: June 22, 2007 12:43 PM
To: David Griffiths
Cc: mysql@lists.mysql.com
Subject: Re : InnoDB: Assertion failure - MySQL keeps crashing
Yes, and
I also saw InnoDB: Out of memory in additional
This really isn't what he's talking about - rather than storing data as
rows and tables, you store as objects and methods.
MySQL does not support this; you can get this sort of functionality
using something like Hibernate (an Object-Relational-Mapping tool),
which is free but has a learning curve.
SQLWays also works, and it does stored procedures, etc, but it's
thousands of dollars (depending on the # of objects in your database).
http://www.ispirer.com/products/
David
Arun Kumar PG wrote:
http://www.mysql.com/products/tools/migration-toolkit/
There is a video presentation as well
I'd given some thought to this a while ago. The only way you are going
to be able to tell if a row changes is to have a date column on every
Oracle table that indicates the last time the data changed.
You'll need some program to start up that knows the last time it ran,
and the current date,
If you can wait for 5.1 (in beta now), you can use partitioning to store
a client on a different database in a different geographical site. You'd
need to partition by region/state (assuming you capture address info).
If you wanted to do any reporting, however, you'd need to set up a data
Very cool - looking forward to reading it with the new replication
options in 5.1
David
Stefan Hinz wrote:
Replication Chapter Revamped
We have completely reworked the replication chapter for the 5.1 manual.
This targets a number of issues, including a major
This is a little dangerous as well; if an exception gets thrown, the
statement doesn't get closed. My sample code is below.
That said, your query looks fine. Add logging to your code to figure out
what's going on. Run your query by hand against your database; I suspect
you have an issue
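The sample code mentioned above was cut off by the archive. A minimal sketch of the try/finally pattern it presumably showed, using Python's stdlib sqlite3 in place of JDBC (table and query are made up for illustration):

```python
import sqlite3

def run_query(conn, sql):
    """Run a query, guaranteeing the cursor is closed even if it throws."""
    cur = conn.cursor()
    try:
        cur.execute(sql)
        return cur.fetchall()
    finally:
        cur.close()  # runs whether or not execute() raised

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
rows = run_query(conn, "SELECT id FROM t")  # [(1,)]

failed = False
try:
    run_query(conn, "SELECT no_such_col FROM t")
except sqlite3.OperationalError:
    failed = True  # the finally block still closed the cursor
```

In JDBC the same shape applies: close the Statement (and ResultSet) in a finally block so a thrown SQLException can't leak the handle.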
Daevid,
That page looks a little misleading.
First, it says it's stored in main memory, not on disk.
Then it says that on server-startup, it finds the largest value in the
table, and initializes it to that. So it is disk-based on startup, and
then resides in memory thereafter.
This doesn't
A java.sql.Date does not have time information - just the day-month-year.
A java.util.Date has date and time information.
If you need date and time, use java.sql.Timestamp.
It's not very pretty moving from one to the other.
David
-Original Message-
From: Feliks Shvartsburd
The cluster engine has been available since the 4.0 tree, I believe. You can
begin using it immediately with 5.0 (which is GA).
David
Logan, David (SST - Adelaide) wrote:
Hi Folks,
Does anybody have any idea when 5.1 may come to General Release? I am
particularly interested in MySQL
Is the table heavily indexed? Indexes cause inserts and deletes (and updates
under certain conditions) to slow down.
Also, since you are deleting approx 2.5% of your rows, your delete-statement must have a where-clause. You might want to do an
explain-plan on it to make sure it's not an
Embarcadero, who in my humble opinion make the best database tools around, have
added MySQL to some of their tools.
Their tools (DBArtisan for database management, and ER Studio for diagramming and change-management) are phenomenal in quality and
features. They are also very very expensive.
Log into the server, and type "use mysql;" (without the quotes).
Look at the user table - that defines what user can connect to the database, the ip addresses they can use, and the password
they must provide.
For example, you could enter,
INSERT INTO user (host, user, password)
values
This isn't a bug, it's a missing feature. The parent query won't use indexes,
just the subquery.
There's been a lot of complaints about how it renders sub queries less than
useful.
I seem to remember that it might be fixed in 5.1, but I'm not a reliable source
of info on this.
David
That's a pretty difficult request:
1) What versions? MySQL 4.0 has no stored procedures, functions, views, subselects. 4.1 added subselects. 5.0 added the rest.
Oracle 8i doesn't support ANSI-style outer joins (they use a proprietary format).
2) Different engines; MySQL supports federated,
Definitely give this tool a try if you haven't - it has some very powerful features - schema diffs (and the ability to create
patches, etc - very powerful if you maintain development, quality-assurance, and production databases).
The ability to sort connections, etc, by machine, by database,
That's not entirely true.
You can have two instances of mysql running on the slave, and dbA connects to
one instance, and dbB connects to the other.
Jeff, when you say, different databases, do you mean that each master has a
single mysql instance, and if you typed on M1,
show databases
for (I
don't think).
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
David Griffiths [EMAIL PROTECTED] wrote on 03/01/2006 04:34:26 PM:
That's not entirely true.
You can have two instances of mysql running on the slave, and dbA
connects to one instance, and dbB
Indexes slow down inserts, updates (if the indexed column is being updated),
and deletes.
If this is a data-refresh, consider dropping the indexes, importing the data,
and then indexing the table.
You haven't mentioned how you are getting the data into the database? Is this a bulk-load?
Your select has two tables, but you don't join them.
Can you provide the table info, and a subset of the data that should be
pulled back by this query?
David
Paul Nowosielski wrote:
Hello,
I'm trying to run a query with an IN statement in MySQL 5. Like so:
SELECT * from encore enc,
Auto-incremented integers (be it bigint, mediumint, etc) are, from a
purist point of view, better than natural primary keys, like part
number etc. Read Practical Issues in Database Management, by Fabian
Pascal. He argues against natural primary keys, because the business
rules that underlie
Is your database connection auto-commit? MyISAM commits everything at
once, where InnoDB you can commit whenever you want. You might want to
commit at the end of your batch.
Also, look at your indexes - indexes make selects fast, but slow down
inserts and deletes, and can slow down updates
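The commit-at-the-end advice can be sketched in a few lines. This uses Python's stdlib sqlite3 as a stand-in for MySQL/InnoDB (table and row counts are invented); the point is one explicit transaction around the whole batch instead of a commit per row:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # true autocommit
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")

# Wrap the whole batch in one transaction and commit once at the end,
# instead of paying a commit (and a disk flush) per row.
conn.execute("BEGIN")
conn.executemany("INSERT INTO orders (qty) VALUES (?)",
                 [(n,) for n in range(1000)])
conn.execute("COMMIT")

count, = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
print(count)  # 1000
```

With a JDBC connection the equivalent is setAutoCommit(false), the batch of inserts, then a single commit().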
MySQL can use the index on one of the columns in a multi-column index,
with caveats.
If this is your index,
UNIQUE KEY `tidadx` (`price_data_ticker`,`price_data_date`)
and you plan to use price_data_date in all your queries, but never
price_data_ticker, then simply reverse the order of the
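The leftmost-prefix rule above is easy to see with an EXPLAIN. A small sketch using sqlite3 (which follows the same prefix rule for composite indexes; the extra price column is invented so the index can't cover the whole query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE price_data
                (price_data_ticker TEXT, price_data_date TEXT, price REAL)""")
conn.execute("""CREATE UNIQUE INDEX tidadx
                ON price_data (price_data_ticker, price_data_date)""")

def plan(sql):
    # Flatten EXPLAIN QUERY PLAN output into one string for inspection
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Filtering on the leading column can use the composite index...
p1 = plan("SELECT * FROM price_data WHERE price_data_ticker = 'IBM'")
# ...but filtering on the trailing column alone falls back to a table scan.
p2 = plan("SELECT * FROM price_data WHERE price_data_date = '2007-06-22'")
```

In MySQL, EXPLAIN on the two queries shows the same asymmetry, which is why the column you always filter on should lead the index.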
Suppose you subscribe to a public email list that offers support on a
free open source database, and you see an email where someone doesn't
really provide nearly enough information to answer, what would you do?
What is the algorithm you are trying to implement to get the query-output?
Roy
Kevin,
From http://www.innodb.com/todo.php:
*In progress:*
Add |FULLTEXT| indexes on InnoDB tables. A sponsor for this project
has been found, and a developer has been hired. Appears probably in
2006.
David.
Kevin Burton wrote:
Anyone know the ETA of having full-text index
No, with the default transaction isolation level, REPEATABLE READ,
that's how it is supposed to work. You've started a transaction in
Window B, so Window B is immune to changes made in Window A until you
finish the transaction in Window B. See the manual for details
smarter about it, only starting a transaction behind the
scenes if you've actually done something that warrants a transaction.
David
Michael Stassen wrote:
David Griffiths wrote:
No, with the default transaction isolation level, REPEATABLE READ,
that's how it is supposed to work. You've
unless you've done a locking-read. I guess
Oracle is just smarter about it, only starting a transaction behind the
scenes if you've actually done something that warrants a transaction.
David
Michael Stassen wrote:
David Griffiths wrote:
No, with the default transaction isolation level
You need to talk to Oracle, or look on an Oracle mailing list. People
here are more concerned about migrating from Oracle to MySQL, rather
than the other way around.
There are probably lots of commercial tools out there that will do it
(and compared to your Oracle licensing costs, they are
I just discovered some weird behaviour with MySQL 4.0 (4.0.24 and
4.0.18) using InnoDB.
If you have two connections to mysql (I use the mysql client), one of
which has autocommit turned on, and the other turned off, a row deleted
from the client with autocommit turned on still shows up in the
Also check out EMS MySQL Manager. I looked at every MySQL tool I could
get my hands on, and ended up buying a license of this.
Main site is here:
http://sqlmanager.net/products/mysql/manager/
Feature list is here:
http://sqlmanager.net/en/products/mysql/manager/features
It also has access
Tony,
You said that you copied the my.cnf file from huge.cnf - not sure what
version you are using (I missed your original post), but the my-huge.cnf
in mysql 4.0.24 is for MyISAM. You should have used
my-innodb-heavy-4G.cnf as the starting point for an InnoDB system. The
my-huge.cnf
http://dev.mysql.com/doc/mysql/en/innodb-configuration.html
You might also want to consider High Performance MySQL. There is lots of
good info in there on setup, tuning, replication, etc.
David
tony wrote:
Hi David,
On Fri, 2005-07-15 at 10:25 -0700, David Griffiths wrote:
Tony
for the link - some very interesting presentations there (wish I
had gone to the conference - next year maybe).
David
Dr. Frank Ullrich wrote:
David,
David Griffiths wrote:
The manual is a little unclear on per-table-tablespaces in 4.1/5.0
(http://dev.mysql.com/doc/mysql/en/multiple
The manual is a little unclear on per-table-tablespaces in 4.1/5.0
(http://dev.mysql.com/doc/mysql/en/multiple-tablespaces.html)
Using per-table-tablespaces ignores the innodb_data_file_path (yes, it
uses it for the ibdata files, but not for the tablespace/data-files for
the individual
Short-answer: use IN instead of =
Long-answer:
Your query is kind of weird. I think you want to use IN:
SELECT memberid, fullname FROM members WHERE memberid IN (select
distinct memberid FROM familymembers)
The equals implies an exact match between the top-level, and the
sub-query, but I
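A runnable version of the suggested query, using the member tables from the original post (sqlite3 stands in for MySQL here, and the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE members (memberid INTEGER, fullname TEXT);
CREATE TABLE familymembers (memberid INTEGER);
INSERT INTO members VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cal');
INSERT INTO familymembers VALUES (1), (3);
""")

# IN matches any memberid returned by the subquery; = would demand
# that the subquery return exactly one row.
rows = conn.execute("""
SELECT memberid, fullname FROM members
WHERE memberid IN (SELECT DISTINCT memberid FROM familymembers)
ORDER BY memberid
""").fetchall()
print(rows)  # [(1, 'Ann'), (3, 'Cal')]
```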
Anandtech has an interesting article
(http://www.anandtech.com/IT/showdoc.aspx?i=2447) on hardware for Linux
database servers.
Some very interesting conclusions:
1) Moving to 64-bit MySQL on a 64-bit Xeon actually decreases
performance by about 12% on average, while an Opteron running 64-bit
I'll post something I heard about when looking into upgrading Oracle 8i
from Windows to Oracle 10g on Linux.
To get more memory for the process, you would enable big memory page,
and then create an in-memory temp file system; you could then allocate
extra memory for a process, and part of
Why not go AMD-64? Dual Opteron, with 8/16/32 gig of RAM? Get a 3ware
SATA drive, and run Gentoo for AMD-64. You can increase your innodb
buffer pool to use almost all that space. You can make your buffer pool
as large as the physical RAM in your machine can support. No 2.5 gig per
process,
This is the weirdest thread I've ever seen. I've never seen so many
seques used in a thread
All we need now is for someone to post a question about configuring
Tomcat to work with Microsoft SQL Server.
To get back to the spirit of the original post, I personally wouldn't use
(4.0.24) because there were a lot of bug fixes as of 4.0.20.
David Griffiths [EMAIL PROTECTED] wrote:
We are running 4.0.20 on two servers (AMD Opteron and Xeon).
Our slave has died twice in the last month with the following error:
Could not parse relay log
We are running 4.0.20 on two servers (AMD Opteron and Xeon).
Our slave has died twice in the last month with the following error:
Could not parse relay log event entry. The possible reasons are: the
master's binary log is corrupted (you can check this by running
'mysqlbinlog' on the binary
This isn't exactly the right spot, but I can't find any info on the
MySQL web site.
Anyone know who maintains the MySQL Migration Tool (or who is developing
it, as it is currently Alpha)?
It does not support Oracle 8i (9i and 10g only), but I've looked through
the source code, and it's not a
I stopped receiving email most of yesterday as well - it's still
catching up today.
I think there was a hiccup in the list.
David
Cummings, Shawn (GNAPs) wrote:
It's possible that Gabe's mail spool is full, and he is not receiving
mail - including our responses.
Gabriel - if you are
Yes, indexes slow down inserts (or updates that change the value of a
column that is indexed).
Also, remember that MySQL only uses one index per table in a query.
So if there are some columns in your table that are indexed, but,
1) Have poor cardinality (number of distinct values - low
Mayuran,
It depends on a bunch of things. What else is running on this server?
Does the distro you use have the ability to take advantage of all 16 gig
(ie if you have apache running, will it be stuck in the same 4 gig as
MySQL, or can it use the memory above the 4 gig limit).
How big is your
Boyd,
You can tell Hibernate not to use outer-joins by setting
hibernate.use_outer_join to false in the hibernate configuration
properties file.
It's an always-never proposition. Of course, you can code your own
queries using the Hibernate Query object to write your own when you know
you do
David Griffiths [EMAIL PROTECTED] wrote:
We have a master-slave setup in production.
The master is running on a dual-Opteron with SuSE 8 SLES.
The slave is running on a dual Xeon with SuSE 9.
Both run MySQL 4.0.20
We recently moved our traffic
We have a master-slave setup in production.
The master is running on a dual-Opteron with SuSE 8 SLES.
The slave is running on a dual Xeon with SuSE 9.
Both run MySQL 4.0.20
We recently moved our traffic database to the machine and started
writing additional traffic (perhaps as much as 600,000
James,
We've had this issue twice (every 4 months) - running on 4.0.20 - due to
an old kernel (we just upgraded the kernel after the last issue).
Do you have a replicated (slave) database? We shut down the master and
then the slave (a few minutes after the master to let all changes
propagate),
Our mail server has been stopping emails to the list.
This is a test message to see if it bounces again.
Please ignore (and accept my apology for generating noise).
David.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL
Wow - cool idea - nice job. Looking forward to playing with it.
David.
Eddy Macnaghten wrote:
Hi all
I have just released a utility (under the GPL) that enables SQL to be
incorporated into UNIX/LINUX shell scripts (easier than using psql or
similar with better integration).
For more information
You should read this page on clusters:
http://dev.mysql.com/doc/mysql/en/MySQL_Cluster_Limitations_in_4.1.html
Replication and clustering is different technology.
Replication works great, but there are no algorithms to prevent the same
row being updated on different servers at the same time (in
| 0 | 6 |
+--+-+-+
So could it be a bug in 4.0.18?
- seb
---
David Griffiths [EMAIL PROTECTED] wrote:
One of our developers came to me yesterday with strange results from a
query. I've created a simple version of the example. I've pasted the
table definitions
locking, and foreign keys for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
http://www.innodb.com/order.php
Order MySQL support from http://www.mysql.com/support/index.html
From: David Griffiths ([EMAIL PROTECTED])
Subject: MySQL Database
One of our developers came to me yesterday with strange results from a
query. I've created a simple version of the example. I've pasted the
table definitions at the bottom if someone really needs to see them.
This is on mysql 4.0.18.
insert into master (col1) values (1), (2);
insert into sub
Sorry - removed some data to make it clearer.
insert into master (col1) values (1), (2);
is correct.
David
Michael Stassen wrote:
Before I think about this, which is it?
insert into master (col1) values (1), (2);
or
insert into master (col1) values (1), (2), (3);
Michael
Having done one of these conversions in the past, I can say that
auto-incremented columns work just fine.
You insert the row, and then make a SELECT last_insert_id() call -
this returns the value of the last auto-increment generated via an
insert for the connection (so some other database
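The last_insert_id() idea, sketched with sqlite3's per-connection equivalent, cursor.lastrowid (table name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE part (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
cur = conn.execute("INSERT INTO part (name) VALUES ('widget')")

# Scoped to this connection, like SELECT last_insert_id() in MySQL --
# inserts on other connections cannot disturb it.
new_id = cur.lastrowid
print(new_id)  # 1
```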
Only static data is allowed as default-values; functions, derived data
etc, are not allowed.
An unfortunate shortcoming.
On the flipside, there is some weird rule that the first timestamp in a
table will be set with the current date/time during an insert if the
column is left out of the insert
Miguel,
No subquery needed
SELECT e.name, c.telephone
FROM employee e
LEFT JOIN contact c ON c.id = e.id
WHERE e.sex = 'F'
LEFT JOIN means there does not have to be a matching contact row to
find an employee row, but if there is a matching row, the data will be
returned.
I just guessed at what
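The LEFT JOIN behaviour described above can be verified directly. A sketch with invented sample rows (sqlite3 in place of MySQL): Mia has no contact row, but the LEFT JOIN still returns her, with NULL for the telephone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INTEGER, name TEXT, sex TEXT);
CREATE TABLE contact (id INTEGER, telephone TEXT);
INSERT INTO employee VALUES (1, 'Eve', 'F'), (2, 'Mia', 'F'), (3, 'Tom', 'M');
INSERT INTO contact VALUES (1, '555-0100');
""")

rows = conn.execute("""
SELECT e.name, c.telephone
FROM employee e
LEFT JOIN contact c ON c.id = e.id
WHERE e.sex = 'F'
ORDER BY e.name
""").fetchall()
print(rows)  # [('Eve', '555-0100'), ('Mia', None)]
```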
It's safe to implement two-way replication (properly called multimaster
replication) in MySQL if each master is guaranteed to only update a
unique subset of that data (ie data that no other master database
changes). Each master database would be able to safely read all the
data, however.
For
We have an Opteron server with 6 gig of RAM.
The issue used to be 4 gig - the max amount of memory a 32-bit processor
could access. With 64-bit processors, the amount of accessible memory
has jumped into the terabyte range.
Pick a distribution that is for the AMD-64 (we use SuSE 8 Enterprise)
I went to do some work on our database last night (dropping large
indexes, which can be time consuming). I checked to ensure that the
backup of that evening had run, but noticed that the size of the backup
was too small compared to previous days (I'm kicking myself for not
emailing the results
We had a similar problem (though not quite as bad).
I re-organized the datafiles (and fixed some indexes, etc) and we got a
vast speed improvement.
I'd suggest you shutdown the database, use mysqldump to take a dump of
the database, move the old datafiles out of the way, fix your my.cnf to
Don't forget to copy the my.cnf file, and make any changes neccesary
(due to different directory/disk structures).
If you are using InnoDB, and can shut down the database, you should just
be able to move the files in
data/database name or var/database name
like Mark said below (InnoDB stores
Also, are the indexes in place when you start your inserts? Constantly
updating those indexes will be slow; try inserting without indexes, and
then building the indexes.
You can also limit the size of your index file by,
1) Making sure all columns are as small as possible (ie MEDIUMINT rather
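The load-then-index pattern, sketched with sqlite3 standing in for MySQL (table and row count are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts INTEGER, msg TEXT)")

# Load the data with no secondary index in place...
conn.executemany("INSERT INTO log VALUES (?, ?)",
                 [(i, "event") for i in range(10000)])

# ...then build the index once, instead of maintaining it row by row.
conn.execute("CREATE INDEX log_ts ON log (ts)")

count, = conn.execute("SELECT COUNT(*) FROM log").fetchone()
print(count)  # 10000
```

One sorted index build is generally much cheaper than ten thousand incremental b-tree updates.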
Oracle cannot shrink datafiles (same idea as InnoDB datafiles) when data
is deleted either.
David
Marc Slemko wrote:
On Tue, 3 Aug 2004 12:42:03 -0400 , David Seltzer [EMAIL PROTECTED] wrote:
Thanks Marc,
Is there really no way to reclaim unused space in an InnoDB table space? If
not, why is
Could it be a network bandwidth issue? Remember, all that data needs to
be transmitted across to the slave. If you are on a 10-megabit network,
it could be the cause. Remember, Ethernet is not exactly an efficient
protocol, and efficiency drops as network traffic increases.
A second machine
be
used to select rows for
SELECT * FROM traffic_boats WHERE stem_base = '';
and an index on (yearmonth, stem_base, day) could be used to select
rows for
SELECT * FROM traffic_boats WHERE yearmonth = 200407;
Michael
David Griffiths wrote:
We have a table with 40 million rows. It has
We have a table with 40 million rows. It has statistics on traffic from
our website. Logs are processed once a night, and the data from those
logs are added.
Our table (traffic_boats, InnoDB) has three columns of interest:
day INT
yearmonth INT
stem_base VARCHAR(100)
There is an index on day,
We just put a new dual-Opteron server into our production environment.
We ordered a Megaraid SCSI card and five 10k drives, and a 3Ware
Escalade SATA card with six 7200 RPM drives (Maxtor) to see which ones
were best.
Our network guy did a bunch of benchmarking on the drives and found that
Yah - Jeremy Z. mentioned that a few days ago as well. Guess I have it
confused with Postgres (we don't run MySQL on Windows, so I don't pay
much attention to it except the performance issues most people seem to
mention, which I guess made me assume Cygwin).
David
Egor Egorov wrote:
David
The download page @ MySQL.com
(http://dev.mysql.com/downloads/mysql/4.0.html) shows a whack of Itanium
binaries, so I would guess it does.
David
Nawal Lodha wrote:
Can someone tell me if MySQL 4.1.3-beta supports 64 Bit Itanium 2
machines with Windows 2003?
Thanks,
Nawal Lodha.
David
Nawal Lodha wrote:
Thanks David. But I think the Itanium binaries are available only for
HP-UX or Linux but not for Windows 2003.
-Original Message-
From: David Griffiths [mailto:[EMAIL PROTECTED]
Sent: Friday, July 02, 2004 12:10 PM
To: [EMAIL PROTECTED]
Subject: Re: 64 Bit Support
What's the definition of the table? IE are you indexing an INT, VARCHAR,
etc?
What's the definition of the index? Is it unique, composite, etc?
What's the storage engine in use? InnoDB? MyISAM?
Can you show the relevant parts of your my.cnf file?
What operating system are you using?
David
These errors could mean a connection timed out, or a mysql-client didn't
properly close the connection, or possibly a network error.
I went to mysql.com and looked in the searchable docs:
http://dev.mysql.com/doc/mysql/en/Communication_errors.html
If |Aborted
));
to only index part of the data?
David
Oropeza Querejeta, Alejandro wrote
Below are the answers
Best Regards
-Original Message-
From: David Griffiths [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 30, 2004 01:29 p.m.
To: [EMAIL PROTECTED]
Subject: Re: Index problem
What's
I've seen job announcements posted on other lists (and I think this one
as well). I think it's relevant, and shouldn't offend anyone.
David
Michael Halligan wrote:
Greetings.
My company has an immediate opening in SF for a Sr. Mysql/DB
architect. I was wondering if this would be the appropriate
David Griffiths wrote:
We are in the process of setting up a new MySQL server. It's a
dual-Opteron (Tyan Thunder K8S motherboard) with 6 gig of DDR333 RAM
(registered) and an LSI SCSI card with 6 SCSI drives (5 in a RAID-5
array, with one hot-spare) running SuSE Enterprise 8.1 (64-bit).
I loaded all
We are in the process of setting up a new MySQL server. It's a
dual-Opteron (Tyan Thunder K8S motherboard) with 6 gig of DDR333 RAM
(registered) and an LSI SCSI card with 6 SCSI drives (5 in a RAID-5
array, with one hot-spare) running SuSE Enterprise 8.1 (64-bit).
I loaded all our data (about
better choice for
the optimizer to use (remember, MySQL can only use one index per table
per query, so it has to pick the most efficient one), and should result
in faster results.
Hope that provides some insight for anyone interested.
David.
David Griffiths wrote:
We have a somewhat large database
God doesn't use MySQL (I think he leans towards Postgres - he needs
views and triggers).
But if you are interested in MySQL, Paul DuBois's book, MySQL, Second
Edition is a great reference. If you need more insight into performance
tuning, then Jeremy Zawodny and Derek Balling's book, High
We have a somewhat large database, with a snapshot of the data that we
import into the database. Normally, we create a database, create the
tables, import the data with LOAD DATA, add the indexes, and then the
foreign keys (we are using InnoDB, 4.0.18). We call this the
load-data-method.
Our Opteron server should be arriving today, so I can't provide a whole
lot of insight.
First, what version are you using? 4.0.20? The 64-bit or 32-bit version?
Knowing the version might help. If this is not a production machine, you
might want to try using the version from the MySQL website
I think he's talking about download MySQL, and verifying the binary
using MD5.
There is a link right above the downloads for verification of the binary.
http://dev.mysql.com/doc/mysql/en/Verifying_Package_Integrity.html
The contents of the page @ that URL suggest using GNU Privacy Guard.
David
Louie,
The inner-join is just joining the two tables, shopcart and items.
I think iip is an alias for items, but the alias would normally come
after the name of the items table. Another way to write the query is
(assuming iip is an alias for items),
SELECT * FROM shopcart, items iip
WHERE
-Original Message-
From: David Griffiths
Sent: Friday, May 28, 2004 6:05 PM
Subject: Specifying an index length and the default value
The length of indexes on varchar and char indexes can be specified at
index creation.
What is the default length of an index if no length is provided?
The default is to index
The length of indexes on varchar and char indexes can be specified at
index creation.
What is the default length of an index if no length is provided?
The High Performance MySQL book hints that the index-length used is
specific for each entry in the indexed column (ie an index on a column
MySQL really should throw an exception/error rather than just quietly
trim your data and accept it. When your data is critical, and your
business depends on it, you can't have bad data quietly going into the
database.
David.
Mike Johnson wrote:
From: Jeff McKeon [mailto:[EMAIL PROTECTED]
to
make sure their SQL syntax is correct.
The database has all sorts of constraints that can be applied to your
data model. They should all have the same behaviour when violated.
David
Mike Johnson wrote:
From: David Griffiths [mailto:[EMAIL PROTECTED]
MySQL really should throw an exception
David Brodbeck wrote:
The client software ought to be range-checking the data before sending
it to
the database. If the client isn't even doing that kind of minimal-effort
check, how likely is it to be checking for exceptions?
That's not to say that an error or exception is a bad idea, but
The client software ought to be range-checking the data before sending it to
the database. If the client isn't even doing that kind of minimal-effort
check, how likely is it to be checking for exceptions?
Not sure what you code in, but in Java, you *HAVE* to catch
SQLExceptions (or throw
Michael Stassen wrote:
This comes up frequently. MySQL's behavior is explained in the manual
http://dev.mysql.com/doc/mysql/en/constraint_NOT_NULL.html. It begins:
That's interesting, and I guess one just has to accept it as part of the
mysql philosophy. I don't agree, as I don't like the
Assuming you insert 100,000 rows, you also have to consider that any
indexes on the table will need to be re-analyzed to fix the statistics.
Also, the inserts will be slower due to any indexes.
You didn't mention the table-type (storage-engine) you were planning on
using, but if you use InnoDB,
You didn't mention the table-type (storage-engine) you were planning on
using, but if you use InnoDB, and do it all in one transaction (ie turn
off auto-commit and commit just once at the end), then any failure
during the mass-insert will cause a rollback, so you don't have to worry
about the
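The single-transaction rollback behaviour, sketched with sqlite3 standing in for InnoDB (the duplicate-key failure is contrived for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit off the table
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

conn.execute("BEGIN")
try:
    # Third row duplicates a primary key, so the batch fails partway through
    conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (2,)])
    conn.execute("COMMIT")
except sqlite3.IntegrityError:
    conn.execute("ROLLBACK")  # failure mid-batch undoes everything

count, = conn.execute("SELECT COUNT(*) FROM t").fetchone()
print(count)  # 0 -- the two rows inserted before the failure are gone too
```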
Jacob Elder wrote:
Thanks to everyone who helped me with this. I settled on breaking it down
into area code, exchange and subscriber. This is one of the most generous
lists I've ever had the pleasure of begging for help on.
I don't want to deter you from making the changes above, but it's
Is that the only table in your MySQL installation?
MyISAM primary keys are put in a b-tree index, which is cached by MySQL
in memory in the key_buffer_size parameter. What is it set to on your
system (the my.cnf file, probably in /etc or /var)?
The second time you run it, the index is
for the
row-locking, etc).
David
Jacob Elder wrote:
On Tue 18 May 02004 at 12:26:41PM -0700, David Griffiths wrote:
Is that the only table in your MySQL installation?
Yes, and no one has access to it yet but me.
MyISAM primary keys are put in a b-tree index, which is cached by MySQL
Sorry - haven't had a chance to respond till now.
So restore == apply-log, but one works on any computer, and the
other
only works on the computer that it's node locked to.
--apply-log also works on any computer regardless of the hostname or the
license expiration date.
I'm running