Hello Nunzio,
Instead of dropping an index, you can disable the indexes, get the work
done, and re-enable them.
If you are OK with this, then run the below as a shell script:
MUSER=username
MPASS=password
DATABASE=dbname
for db in $DATABASE
do
    echo "starting disabling indexes for database -- $db"
    for table in $(mysql -u"$MUSER" -p"$MPASS" -Nse "SHOW TABLES" "$db")
    do
        mysql -u"$MUSER" -p"$MPASS" -e "ALTER TABLE \`$table\` DISABLE KEYS" "$db"
    done
done
On 09/08/2010 18:33, Norman Khine wrote:
Hello, I have a table called checkout; this has a column called products
which contains Python dictionary data, like
http://pastie.org/1082137
{products: [{productId: 123, productName: APPLE,
productPrice: 2.34, productUrl:
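The pasted snippet above is cut off, but if the products column holds serialized text, a JSON round-trip is one way to read it back in Python. This is a minimal sketch; the key names mirror the snippet above, and storing the value via json rather than a raw Python repr is an assumption:

```python
import json

# Serialize a products structure (keys mirror the snippet above) so it can
# be stored in a text column, then parse it back after reading the row.
row_value = json.dumps({
    "products": [
        {"productId": 123, "productName": "APPLE", "productPrice": 2.34}
    ]
})

data = json.loads(row_value)   # parse the column value back into a dict
first = data["products"][0]
print(first["productName"], first["productPrice"])
```

json.loads will reject the unquoted-key style shown in the paste, so whichever code writes the column would need to emit real JSON for this to work.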
Manasi,
Your table structure doesn't show that the column TestID is unique. I believe
what Michael also suggested was that unless this column contains unique values,
you never know which row your procedure is reading.
I hope I'm making myself understood.
Regards,
Nitin
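Nitin's point about non-unique values can be demonstrated with SQLite standing in for MySQL. This is a small sketch; the TestID column name comes from the discussion above, the table name and values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_table (TestID INTEGER, val TEXT)")

# Two rows share TestID = 1, so a lookup by TestID alone is ambiguous.
conn.execute("INSERT INTO test_table VALUES (1, 'first'), (1, 'second')")
row = conn.execute(
    "SELECT val FROM test_table WHERE TestID = 1 LIMIT 1").fetchone()
print(row)  # which of the two rows you get is not guaranteed by SQL

# A unique index removes the ambiguity: the duplicate insert now fails.
conn.execute("DELETE FROM test_table")
conn.execute("CREATE UNIQUE INDEX idx_testid ON test_table (TestID)")
conn.execute("INSERT INTO test_table VALUES (1, 'only')")
try:
    conn.execute("INSERT INTO test_table VALUES (1, 'dup')")
except sqlite3.IntegrityError as exc:
    print("duplicate rejected:", exc)
```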
Hi,
This is probably very simple for someone who has encountered the problem before
but I'm struggling to find out how I can prevent the MySQL error log from being
flushed by the FLUSH LOGS command. This command is executed as part of the
database backup every night and simply moves the old error log
At 01:06 AM 8/10/2010, you wrote:
Hello Nunzio,
Instead of dropping an index, you can disable the indexes, get the work
done, and re-enable them.
Disabling keys will NOT disable Primary or Unique keys. They will still be
active.
Mike
Thanks for the feedback. What I am trying to do is two things:
1. Remove all indexes to make the database smaller to copy and move to another
prod box. Currently my indexes are in the double-digit GBs! Yikes ;-)
2. Remove all indexes so I can find out which ones are needed, then tell mysql
to
This should give you a good starting point (not tested):
SELECT DISTINCT CONCAT('ALTER TABLE ', TABLE_NAME, ' DROP INDEX ',
    CONSTRAINT_NAME, ';')
FROM information_schema.key_column_usage
WHERE TABLE_SCHEMA = 'mydatabase';
- md
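The statements that query emits can also be assembled client-side. A hedged sketch in Python: the rows list stands in for a real fetch from information_schema.key_column_usage, and the table and index names are invented for illustration:

```python
# Build ALTER TABLE ... DROP INDEX statements the same way the CONCAT()
# in the query above does. The sample rows are placeholders for what a
# real fetch from information_schema.key_column_usage would return.
rows = [
    ("dbt_Fruit", "idx_name"),
    ("dbt_Fruit", "idx_price"),
]

statements = [
    f"ALTER TABLE {table} DROP INDEX {constraint};"
    for table, constraint in rows
]
for stmt in statements:
    print(stmt)
```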
On Tue, Aug 10, 2010 at 10:43 AM, Nunzio Daveri
Hello Michael, thanks for the one-liner. I ran it BUT I started to get errors
after I ran it the first time; this is what I got the 2nd time I ran it (the
first time the query returned 63 rows, the 2nd time 9). I ran it
twice to make sure it got rid of the indexes. I verified the
It's not a complete solution and will need some tweaking. You
might have to run the PRIMARY KEYs separately from the rest.
- michael dykman
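Handling PRIMARY separately matters because MySQL spells that drop differently: `ALTER TABLE ... DROP PRIMARY KEY`, not `DROP INDEX PRIMARY`. A sketch of the branching, with a hypothetical table and index name:

```python
def drop_statement(table, constraint):
    """MySQL names the primary key's index 'PRIMARY', and dropping it
    uses DROP PRIMARY KEY rather than DROP INDEX."""
    if constraint == "PRIMARY":
        return f"ALTER TABLE {table} DROP PRIMARY KEY;"
    return f"ALTER TABLE {table} DROP INDEX {constraint};"

print(drop_statement("dbt_Fruit", "PRIMARY"))
print(drop_statement("dbt_Fruit", "idx_name"))
```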
On Tue, Aug 10, 2010 at 4:43 PM, Nunzio Daveri nunziodav...@yahoo.com wrote:
Hello Michael, thanks for the one liner. I ran it BUT I started to get
I'm running a set of queries that look like this:
===
SET @PUBID = (SELECT pub_id FROM pub WHERE pub_code = 'DC');
DROP TEMPORARY TABLE IF EXISTS feed_new;
CREATE TEMPORARY TABLE feed_new (
new_title VARCHAR(255), INDEX (new_title)
);
INSERT INTO feed_new
VALUES
Hi Michael and all, OK so I did some digging around and I still can't find why
I can't drop the last few indexes.
mysql> SELECT COUNT(1) FROM INFORMATION_SCHEMA.STATISTICS WHERE table_schema =
'db_Market' AND table_name = 'dbt_Fruit' AND index_name = 'PRIMARY';
+----------+
| COUNT(1) |
+----------+
auto_increment is only allowed on primary-keyed columns. I expect it
is not allowing you to drop the primary key because that column has
the auto_increment attribute. Drop that manually, and the primary key
should be able to let go.
- md
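The order md describes, removing auto_increment first and the primary key second, comes down to two ALTER statements. A sketch only: the column name id and type INT are assumptions, not taken from the thread:

```python
# The two-step order: the auto_increment attribute must be removed
# before MySQL will allow the primary key to be dropped.
# Column name "id" and type INT are assumed for illustration.
table = "dbt_Fruit"
steps = [
    f"ALTER TABLE {table} MODIFY id INT NOT NULL;",  # drops auto_increment
    f"ALTER TABLE {table} DROP PRIMARY KEY;",        # now permitted
]
for stmt in steps:
    print(stmt)
```

MODIFY must restate the column's full definition, so the real type would need to match SHOW CREATE TABLE output.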
On Tue, Aug 10, 2010 at 5:58 PM, Nunzio Daveri
Can you create a second, indexed column in your feed_new temp table that
includes the title without the year appended? That might allow you to get
by with a single pass through the larger prod table and avoid reading rows
from the feed_new table.
-Travis
-----Original Message-----
From: Jerry