I am creating an index on a very large InnoDB table, and it is taking a
very long time to complete, which I understand is a limitation of
InnoDB.
The problem is that another application has been issuing queries
against the table and those queries have never timed out. So now I
can't kill the
I have the same problem. I have an open bug report here:
http://bugs.mysql.com/?id=4761
I can't figure out why my management server cannot see my ndb
servers. Perhaps I should try taking down the secondary interface on
all the servers?
On Tue, 17 Aug 2004 15:54:29 -0400, Wheeler, Alex [EMAIL
Are you swapping? Check vmstat and iostat
Are your queries optimized, are they doing full table scans? Enable
the slow query log.
On Mon, 16 Aug 2004 14:48:35 +0200, Fahr o Mavastradioz
[EMAIL PROTECTED] wrote:
Hello people,
I'm currently configuring a quad CPU system as a standalone
, or should I be thinking about some
other way to do this?
Justin Swanhart
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
You can put multiple renames in one statement, and the entire rename
will be atomic.
I create summary tables from some of my data, and I periodically
refresh them. When refreshing them I create new tables to replace the
old ones.
Then I do:
RENAME TABLE current_table TO old_table, new_table TO current_table;
FYI, the atomicity of rename and using it to swap tables is discussed
in the manual here:
http://dev.mysql.com/doc/mysql/en/RENAME_TABLE.html
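The full refresh-and-swap cycle might look like this (table and column names are illustrative):

```sql
-- Build the replacement summary table off to the side
CREATE TABLE summary_new
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id;

-- Swap it in atomically: both renames happen as one operation,
-- so readers never see a missing table
RENAME TABLE summary TO summary_old, summary_new TO summary;

DROP TABLE summary_old;
```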
Justin
On Tue, 10 Aug 2004 13:50:34 -0700, Haitao Jiang [EMAIL PROTECTED] wrote:
It would be great if there is a swap table command that is atomic.
Mayuran Yogarajah [EMAIL PROTECTED] wrote:
Justin Swanhart wrote:
I'm curious about this part.
My healthcheck logic will make sure that server A doesn't appear to be
back up to the load balancer until it has caught up with server B, or
vice versa.
How do you accomplish this?
Oracle has a procedure called DBMS_APPLICATION_INFO.SET_APPLICATION_INFO
that allows you to specify up to 64k of additional information about
the current connection. It doesn't have any way to specify this
information at connect time, though.
The data can be accessed in Oracle through the
Because not doing so violates the SQL standard. Allowing you to
include non-aggregated columns in the SELECT list is a non-standard
MySQL extension to the SQL language. You will get an error in other
products, such as Oracle, where you will get an "xxx is not a GROUP BY
expression" error.
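For example, with an illustrative schema, standard SQL rejects the first query below because dept_name is neither aggregated nor listed in the GROUP BY, while MySQL accepts it:

```sql
-- Rejected by Oracle and other standard-compliant databases,
-- accepted by MySQL (which returns an arbitrary dept_name per group):
SELECT dept_id, dept_name, COUNT(*)
FROM employees
GROUP BY dept_id;

-- Standard-compliant version: every non-aggregated column is grouped
SELECT dept_id, dept_name, COUNT(*)
FROM employees
GROUP BY dept_id, dept_name;
```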
There must be an even number of replicas because the cluster mirrors
data between two machines; it doesn't do three-way mirroring. An
even number of nodes is required because each pair of data nodes
becomes a node group.
If you have three machines, you could create two NDB processes on each
: Justin Swanhart [mailto:[EMAIL PROTECTED]
Sent: Sunday, August 01, 2004 11:53 AM
To: [EMAIL PROTECTED]
Subject: Re: a question/issue...
In general, it is probably a bad idea to inherit database connections
from a parent in a fork()'ed child process.
What is your reasoning behind not permitting
TOP is a Microsoft SQL Server extension.
MySQL uses the LIMIT clause.
For instance, the following is a rather typical top-10 SQL query:
select some_column, sum(another_column) total
from some_table
group by some_column
order by total desc
LIMIT 10
On Wed, 28 Jul 2004 14:39:11 -0400, Kamal Ahmed
Create a separate table called member_interests or something similar.
Store one member_id and one interest_id (or whatever you have your PKs
named) in each row.
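A sketch of that join table (column and table names assumed):

```sql
-- One row per member/interest pair; the composite primary key
-- prevents the same interest being stored twice for a member
CREATE TABLE member_interests (
  member_id   INT NOT NULL,
  interest_id INT NOT NULL,
  PRIMARY KEY (member_id, interest_id)
);

INSERT INTO member_interests VALUES (1, 10), (1, 11), (2, 10);
```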
This is similar to an order entry system, which typically has one
table for order_headers and one for order_detail. The order_header
You probably want to pick up a good SQL book. MySQL by Paul DuBois is
a really good one.
http://www.amazon.com/exec/obidos/tg/detail/-/0735712123/qid=1090786499/sr=8-2/ref=pd_ka_2/102-0741496-3072118?v=glances=booksn=507846
You want to use the WHERE clause of the select statement.
SELECT
Version 4.0 doesn't support subqueries.
In any event, your query would return all rows from
channels as long as there are any rows in users, which
I doubt is what you intended.
--- nambi c [EMAIL PROTECTED] wrote:
Hi,
My server version : 4.0.18-max-nt
I have created 2 tables 'channels'
MySQL doesn't guarantee that there will be no gaps in sequence values.
Assignment of the id is always atomic because InnoDB uses an AUTO_INC
lock that lasts for the duration of the insert, not the life of the
transaction.
Let's say your highest order number is 10:
transaction begins for client 1
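Presumably the scenario continues along these lines (an assumed sketch): client 1 is assigned id 11, then rolls back; the value is consumed but never reused, leaving a gap:

```sql
-- highest order number is 10
BEGIN;                                  -- client 1
INSERT INTO orders (item) VALUES ('a'); -- assigned id 11
ROLLBACK;                               -- 11 is consumed, not returned

BEGIN;                                  -- client 2
INSERT INTO orders (item) VALUES ('b'); -- assigned id 12
COMMIT;                                 -- ids are now 10, 12: a gap
```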
While the extension on the file isn't set in stone, many people add
the .dmp extension to files created with the Oracle exp utility.
You can use a third party utility called nxtract
(http://www.elmbronze.co.uk/nxtract/index.htm) to convert exp files
into tab delimited files. The eval version only
I don't see how using a multi value insert would be
any faster than the insert between the tables. It
would certainly be faster than one insert statement
per row, but I don't think it would be faster than
insert ... select ...
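For reference, the two forms being compared (table and column names are illustrative):

```sql
-- One multi-row (extended syntax) INSERT, driven from the client:
INSERT INTO history (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z');

-- Versus copying directly between tables, entirely server-side,
-- with no round trip through the client:
INSERT INTO history (a, b)
SELECT a, b FROM temp_load;
```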
The only reason I suggested an extended syntax insert
earlier was
--- matt ryan [EMAIL PROTECTED] wrote:
Do you ever delete from this table?
The temp table is truncated before the EBCDIC file is
loaded.
I meant the history table :)
Have you removed the unnecessary duplicate key on
the first column of your primary key?
Have not touched the DIC index yet, I
You do have ByteOrder: Big in the .ini file for the
sparc database servers, right?
--- Alexander Haubold [EMAIL PROTECTED] wrote:
Hi everyone,
Just to follow up on my previous post regarding
Cluster on Sparc/Solaris 9:
On an x86 Solaris 9 machine that was set up similar
to the Sparc
a few suggestions...
Your slow inserts could be a concurrency issue. If
lots of users are doing selects, those could be
interfering with your inserts, especially if they use
an ODBC/JDBC app that locks the table for the entire
read. JDBC reads do that when you tell it to stream
the contents of
INSERT IGNORE simply doesn't insert the row if it
would create a duplicate. Without the IGNORE clause,
the query would generate an error instead of silently
ignoring the insert.
--- [EMAIL PROTECTED] wrote:
That's the whole point. Eliminate your indexes and
your load
I've used it, with Oracle, but Oracle's index
searches are better: hit the best one first, then 2nd
best, then 3rd. But I really don't want to go to
Oracle, it's too complicated for my techs.
Oracle rarely performs index merges, but it does have
the ability to do so, which MySQL lacks.
Query
Does access_no contain actual textual data, or is it
simply a key like a category or an integer? If you
don't need to do a fulltext search against access_no
then there is no reason to include it in your fulltext
index. You should most likely have a separate index
for access_no in that case.
The
Creating a key will make that query execute very fast,
but if that is the only reason for the key you are
going to be trading quite a lot of space for the speed
of one query.
How often are you going to run this query? If you
have 324 million rows, then that index is going to
consume somewhere
Indexes can generate vast amounts of random i/o.
Because of rotational latency, random i/o can really
slow you down, especially if you are using IDE or SATA
disks because they can't do tagged queueing like SCSI
disks can.
If you have the budget for it, I would consider
getting some solid state
Having that many instances on one box is going to be a
management nightmare. I can only imagine the recovery
scenarios should you have a hardware problem. Perhaps
you may want to think about writing your metric data
to a local mysql instance then pulling the data from
each instance into the
You are doing an implicit group by of first, last as
well as your explicit group by of email.
So you could have two records with the same e-mail
address generate two records with your group by
Justin Time [EMAIL PROTECTED]
Justin Credible [EMAIL PROTECTED] --DUPE--
Case differences
Where is the implicit group? The ORDER BY
shouldn't affect how things
are grouped. On MySQL 4.0.17:
Wow. I wasn't aware. Is that expected behavior?
Other databases (Oracle) generate an error when you
include columns in the select list that aren't in a
group by, or they do an implicit
An option would be a log reader program that uses
Oracle Log Miner to only show committed transactions
from the redo logs. You could then replay the SQL
that is being executed on the Oracle box on the MySQL
server, as long as the tables are defined the same.
9i has an enhanced log miner that can
rpm -qa | grep mysql will show you what MySQL
packages you have installed.
You probably have both 3.x and 4.x packages installed,
and assuming you don't have a 3.x database you want to
preserve, I would suggest uninstalling the 3.x package
with rpm --erase name_of_3.x_package
To determine
Keep in mind that if you create a fulltext index on multiple
fields, then all of those fields must be searched at
once. You can't index product_name, product_desc and
product_category, for instance, and then search only the
product_name field using MATCH.
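In other words, a MATCH clause has to name exactly the columns of one fulltext index (a sketch; names are illustrative):

```sql
CREATE FULLTEXT INDEX ft_all
  ON products (product_name, product_desc, product_category);

-- Works: the column list matches the index definition exactly
SELECT * FROM products
WHERE MATCH (product_name, product_desc, product_category)
      AGAINST ('widget');

-- Fails: there is no fulltext index on product_name alone
SELECT * FROM products
WHERE MATCH (product_name) AGAINST ('widget');
```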
If you want to bypass this (and many other
A beta takes as long as a beta takes. That is really
the nature of beta testing. As for an approximate
timeline, I've heard various quotes, but most people
seem to think somewhere late third quarter that the
release will be marked stable.
4.1.3 is really quite stable and you should have very
Does that mean the max. string that can be indexed
and therefore searched on is 500 chars? What exactly
is this limitation?
I may have been wrong on this limit. I know I read
about it somewhere, but I can't seem to find out where
at the moment. Since the fulltext index is maintained
as a
I highly recommend simply using ext3 for your Linux
setup. The 1 or 2 percent performance benefit that
you may get from raw partitions is far outweighed by
the complexity of backing up the raw data.
either way:
First I would suggest you read the Linux RAID howto:
If you are using 4.1 you could try:
SELECT DISTINCT id, title
FROM
  (SELECT p.id, p.title
     FROM product p
     JOIN e_prod ep ON ep.product = p.id
     JOIN story s ON s.id = ep.story AND s.status = 9 AND s.type = 14
    WHERE p.platform_id = 5 AND p.genre_id = 23282
    ORDER BY s.post_date DESC
  ) AS t
LIMIT 10
Do you have a hosts.MYD, or a hosts.frm file?
If you do, and there is no .MYI file, perhaps the
older version is just ignoring the table and not
making it available while the newer version errors
out.
If those files exist, try removing them from the data
directory (move them somewhere else) then
No, that isn't possible using MySQL.
Try linking PHP with older client libraries (9.0.1,
8.1.5, etc) instead of the newer 9.2 libraries and see
if that fixes your problem with PHP. You can download
them from otn.oracle.com
swany
--- Alonso, Claudio Fabian
[EMAIL PROTECTED] wrote:
Hello
LOCK TABLE active_table WRITE, history_table WRITE;
#assuming the columns in the tables are exactly
#the same
insert into history_table
select * from active_table;
delete from active_table;
UNLOCK TABLES;
if the columns aren't the same between the tables then
you need to do something like
--- Asif Iqbal [EMAIL PROTECTED] wrote:
gerald_clark wrote:
What about getting a bigger drive?
I guess that would be my only option eh?
If any of your data can be considered history data
that is never modified, you could create compressed
MyISAM tables for that data, removing it from the
Create a unique index on each column that you don't
want to be duplicated.
create UNIQUE index table_u1 on table(some_column)
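With that unique index in place, INSERT IGNORE gives the insert-only-if-absent behavior being asked about (a sketch; names are illustrative):

```sql
CREATE UNIQUE INDEX table_u1 ON some_table (some_column);

-- Silently skipped if a row with some_column = 'foo' already exists;
-- without IGNORE, the duplicate would raise an error instead
INSERT IGNORE INTO some_table (some_column) VALUES ('foo');
```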
--- John Mistler [EMAIL PROTECTED] wrote:
Is there a way to do an INSERT on a table only if no
row already exists with
the same info for one or more of the columns as
You can do it one of two ways.. Either you can do a
self join like the following:
select t1.userid
from answers t1,
answers t2
where t1.qid = 5
and lower(t1.answer)='student'
and t2.qid = 6
and lower(t2.answer) like 'edu%'
and t1.userid = t2.userid
or you can use a union
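The UNION alternative might look something like this (a sketch; it requires 4.1 for the derived table, and finds userids appearing in both sets by counting):

```sql
SELECT userid
FROM (
  SELECT userid FROM answers
   WHERE qid = 5 AND LOWER(answer) = 'student'
  UNION ALL
  SELECT userid FROM answers
   WHERE qid = 6 AND LOWER(answer) LIKE 'edu%'
) AS both_sets
GROUP BY userid
HAVING COUNT(*) = 2;
```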
--- John Mistler [EMAIL PROTECTED] wrote:
Is there a SELECT statement, if I have a table with
50 columns, to select
every column EXCEPT the last one? Can I do this
without typing the name of
all 49 columns?
If so, then what if I want to exclude the last TWO
columns?
Thanks,
John