The problem was here:
---TRANSACTION 154B1E00, ACTIVE 265942 sec rollback
mysql tables in use 1, locked 1
ROLLING BACK 297751 lock struct(s), heap size 35387832, 74438247 row lock(s), undo log entries 66688203
MySQL thread id 37, OS thread handle 0x7f11bc4b9700, query id 110 localhost pau query en
Your DROP INDEX needs a table rebuild and has blocked the DROP TABLE. In
this case, you have to drop the table rather than dropping indexes or deleting data.
On Sun, May 17, 2015 at 7:11 AM, Pau Marc Muñoz Torres
wrote:
> i solved the problem by rebooting my computer. i just drop the table in
> secon
I solved the problem by rebooting my computer. After that, dropping the table
took just seconds.
thanks
Pau Marc Muñoz Torres
skype: pau_marc
http://www.linkedin.com/in/paumarc
http://www.researchgate.net/profile/Pau_Marc_Torres3/info/
2015-05-17 12:00 GMT+02:00 Pau Marc Muñoz Torres :
> this is the innodb out
This is the InnoDB output.
I tried to kill the process using KILL, KILL QUERY, and KILL CONNECTION, but it
didn't work. What can I do?
Thanks
150517 11:50:46 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 3 seconds
-----------------
BACKGROUND THREAD
On Sun, May 17, 2015 at 02:01:57PM +0530, Pothanaboyina Trimurthy wrote:
Guys, can I implore you to post to a mailing list using its address in To:
field and not CC:ing it? You are constantly breaking out of my filters.
--
vag·a·bond adjective \ˈva-gə-ˌbänd\
a : of, relating to, or characteri
Hi Pau,
Before killing those connections, first check the undo log entries in the
engine InnoDB status. If there are too many undo log entries, it will
take some time to clean up those entries. If you forcefully kill those
connections, there is a good chance of crashing the DB instance.
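A minimal way to check those undo log entries before deciding anything (standard commands; the exact output and columns vary by server version):

SHOW ENGINE INNODB STATUS\G
-- look under TRANSACTIONS for lines like
--   ROLLING BACK ... undo log entries NNNNN

-- on 5.5+ the same information is also available from information_schema:
SELECT trx_mysql_thread_id, trx_state, trx_rows_modified
FROM information_schema.INNODB_TRX;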
On 17 May
Hi Pau,
This is the reason your drop commands are taking so much time: they are in a
waiting state. The purpose of the delete command is also quite surprising to
me. I would say kill all the pids (37, 58, 59, 66) and just drop
the table (it will delete everything). Please take a backup if needed.
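A sketch of that suggestion (the table name below is a placeholder, since the thread never shows it; make sure the backup exists first):

KILL 37; KILL 58; KILL 59; KILL 66;
DROP TABLE my_big_table;

Note that killing session 37 still forces InnoDB to roll back its open transaction, which can itself take a long time, as the rollback shown earlier in this thread illustrates.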
this is my process list
+----+------+------+----+---------+------+-------+------+
| Id | User | Host | db | Command | Time | State | Info |
+----+------+------+----+---------+------+-------+------+
Hi Pau,
Ideally a drop table should not take that much time; you have to check whether
your command is executing or waiting. Maybe you are not
able to get a lock on that table.
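One way to check that (a hedged sketch; the state string is what 5.5+ servers typically report for blocked DDL):

SHOW FULL PROCESSLIST;
-- a DROP TABLE stuck behind another session usually shows
-- State: Waiting for table metadata lock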
Cheers,
Adarsh Sharma
On Sat, 16 May 2015 at 23:34 Pau Marc Muñoz Torres
wrote:
> Hello every body
>
> i hav
Hi Pau,
Would you please paste the timeout error? If you want to get rid of a
table, then the recommendation is to drop the table during non-peak hours.
Thanks
Suresh Kuna
On Sat, May 16, 2015 at 2:00 PM, Pau Marc Muñoz Torres
wrote:
> Hello every body
>
> i have a big table in my sql server and
Hello everybody,
I have a big table in my MySQL server and I want to delete it; it also has
some indexes. I tried the "drop table" and "delete" commands but I
eventually get a timeout. What can I do with it? Is there any method
to delete tables quickly?
I know that drop and delete are not equivalent
Hello Antonio,
On 5/16/2014 9:49 AM, Antonio Fernández Pérez wrote:
Hi,
I write to the list because I need your advices.
I'm working with a database with some tables that have a lot of rows, for
example I have a table with 8GB of data.
How you design your tables can have a huge impact on pe
- Original Message -
> From: "Antonio Fernández Pérez"
> Subject: Advices for work with big tables
>
> Hi,
>
> I write to the list because I need your advices.
>
> I'm working with a database with some tables that have a lot of rows, for
On 16.05.2014 at 15:49, Antonio Fernández Pérez wrote:
> I write to the list because I need your advices.
>
> I'm working with a database with some tables that have a lot of rows, for
> example I have a table with 8GB of data.
>
> How can I do to have a fluid job with this table?
>
> My server w
Hi,
I am writing to the list because I need your advice.
I'm working with a database with some tables that have a lot of rows; for
example, I have a table with 8GB of data.
What can I do to work fluidly with this table?
My server works with a disk cabinet, and I think that sharding and partitioning
ar
- Original Message -
> From: "mos"
>
> If you could use MyISAM tables then you could use Merge Tables and
Ick, merge tables :-) If your version is recent enough (Isn't 4.whatever long
out of support anyway?) you're much better off using partitioning - it's
engine-agnostic and has a lot
If you could use MyISAM tables then you could use Merge Tables and
create a table for each day (or whatever period you are collecting
data for). Then when it is time to get rid of the old data, drop the
oldest table (T2001 or T10 for 10 days ago) and create a new
empty table for the new day
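A sketch of how the partitioning alternative mentioned above maps onto the per-day rotation just described (table name, column, and dates are made up for illustration):

CREATE TABLE events (
  id      BIGINT NOT NULL,
  created DATE   NOT NULL
) PARTITION BY RANGE (TO_DAYS(created)) (
  PARTITION p20111103 VALUES LESS THAN (TO_DAYS('2011-11-04')),
  PARTITION p20111104 VALUES LESS THAN (TO_DAYS('2011-11-05'))
);

-- getting rid of the oldest day is then a metadata operation, not a row-by-row delete
ALTER TABLE events DROP PARTITION p20111103;
-- and the next day is added with
ALTER TABLE events ADD PARTITION
  (PARTITION p20111105 VALUES LESS THAN (TO_DAYS('2011-11-06')));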
Excellent point... replication makes many things trickier.
On 11/4/11 9:54 AM, Derek Downey wrote:
Be careful deleting with limit. If you're replicating, you're not guaranteed
the same order
> of those you've deleted.
Perhaps a better way to delete in smaller chunks is to increase the id valu
Be careful deleting with limit. If you're replicating, you're not guaranteed
the same order of those you've deleted.
Perhaps a better way to delete in smaller chunks is to increase the id value:
DELETE FROM my_big_table WHERE id > 5000;
DELETE FROM my_big_table WHERE id > 4000;
etc.
-- Derek
On
I've had some luck in the past under similar restrictions deleting in
chunks:
delete from my_big_table where id > 2474 limit 1000
But really, the best way is to buy some more disk space and use the
new table method
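If it helps, the chunked delete above wrapped in a loop (a sketch only; assumes the `id` column is indexed, otherwise every chunk still scans the table):

DELIMITER //
CREATE PROCEDURE purge_big_table()
BEGIN
  REPEAT
    DELETE FROM my_big_table WHERE id > 2474 LIMIT 1000;
  UNTIL ROW_COUNT() = 0 END REPEAT;
END//
DELIMITER ;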
On 11/4/11 1:44 AM, Adarsh Sharma wrote:
Thanks Anand,
Ananda Kumar wrote:
W
- Original Message -
> From: "Reindl Harald"
>
> well, I guess you have to sit it out and add the key;
> wrong table design: having an id column without a key, or
> something weird in the application not using the primary
> key for such operations
For high-volume insert-only tables the lack of a ke
PLEASE do not top-post after you got a reply
at the bottom of your quote.
Sorry, but I cannot help you with your application.
If it for whatever reason uses the field 'id' in a WHERE statement,
and your table has no key on this column, your table design is
wrong and you have to add the key.
yes this
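A sketch of the fix being described, using the `metadata` table that appears later in this thread (verify the column first; adding the index on a 70 GB table will itself take a while):

ALTER TABLE metadata ADD INDEX idx_id (id);
-- with an index on `id`, DELETE FROM metadata WHERE id > 2474 can locate the
-- rows through the index instead of scanning every row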
CREATE PROCEDURE qrtz_purge()
BEGIN
  DECLARE l_id BIGINT(20);
  DECLARE NO_DATA INT DEFAULT 0;
  DECLARE LST_CUR CURSOR FOR SELECT id FROM table_name WHERE id > 123;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;
  OPEN LST_CUR;
  SET NO_DATA = 0;
  FETCH LST_CUR INTO l_id;
  -- the posted message breaks off here; the rest (an assumption) is the usual
  -- cursor loop: delete the fetched row until the NOT FOUND handler fires
  WHILE NO_DATA = 0 DO
    DELETE FROM table_name WHERE id = l_id;
    FETCH LST_CUR INTO l_id;
  END WHILE;
  CLOSE LST_CUR;
END
On 04.11.2011 at 08:22, Adarsh Sharma wrote:
> delete from metadata where id>2474;
> but it takes hours to complete.
>
> CREATE TABLE `metadata` (
> `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
> `id` bigint(20) DEFAULT NULL,
> `url` varchar(800) DEFAULT NULL,
> `meta_field` varchar(200) DEF
Thanks Anand,
Ananda Kumar wrote:
Why don't you create a new table where id < 2474,
rename the original table to "_old" and the new table to the actual table
name.
I need to delete rows from 5 tables, each > 50 GB, and I don't have
sufficient space to store extra data.
My application loads 2 GB dat
Why don't you create a new table where id < 2474,
rename the original table to "_old" and the new table to the actual table name.
Or
you need to write a stored proc to loop through the rows and delete, which will
be faster.
Doing just a simple "delete" statement to delete huge amounts of data will take
ages.
re
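A sketch of the copy-and-rename approach for the `metadata` table from the original post (hedged: it temporarily needs space for the kept rows, which the poster says is tight, so take the backup first):

CREATE TABLE metadata_new LIKE metadata;
INSERT INTO metadata_new SELECT * FROM metadata WHERE id < 2474;
RENAME TABLE metadata TO metadata_old, metadata_new TO metadata;
DROP TABLE metadata_old;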
Dear all,
Today I need to delete some records in > 70 GB tables.
I have 4 tables in a MySQL database.
My delete command is:
delete from metadata where id > 2474;
but it takes hours to complete.
One of my table structures is:
CREATE TABLE `metadata` (
`meta_id` bigint(20) NOT NULL AUTO_IN
Hello list,
I'm currently developing a newsletter tool allowing customers to send
their newsletters to their clients and get all kinds of statistics. For
each customer of ours, I need to save up to five different lists of
newsletter recipients with their email addresses and some other stuffs
To: Marvin Wright
Cc: mysql@lists.mysql.com
Subject: Re: Deletes on big tables
Marvin Wright wrote:
>I have 3 tables where I keep cache records, the structures are
>something like
>
>
>TableA is a 1 to many on TableB which is a 1 to many on TableC
>
>To give you
Subject: Re: Deletes on big tables
> 2.If I could split the tables up into smaller tables would this
> help ? My dilemma here is that I can split the data, the data would
> be in different tables but on the same hardware, the same number of
> deletes would still have to happen so would it actua
Marvin Wright wrote:
I have 3 tables where I keep cache records, the structures are something
like
TableA is a 1 to many on TableB which is a 1 to many on TableC
To give you an idea of size, TableA has 8,686,769 rows, TableB has
5,6322,236 rows and TableC has 1,089,635,551 rows.
My expir
2. If I could split the tables up into smaller tables would this
help ? My dilemma here is that I can split the data, the data would be
in different tables but on the same hardware, the same number of deletes
would still have to happen so would it actually make any difference ?
No idea a
Hi,
This is a bit of a long mail, so apologies in advance; I've tried to
give as much information as possible that I think might be useful
regarding my problem.
I have 3 tables where I keep cache records, the structures are something
like
TableA TableB TableC
Id
Konstantin Yotov <[EMAIL PROTECTED]> wrote:
> mysqlcheck -r is very slow when repairing big tables
> (over 4GB of data - a repair takes 1h 40m). Is there any
> config option to speed it up?
Check your hard drive setup and throughput; it's not related to MySQL.
Hello! :)
mysqlcheck -r is very slow when repairing big tables
(over 4GB of data - a repair takes 1h 40m). Is there any
config option to speed it up?
Regards: Kosyo
.p_id.
-- This takes some work, but it might speed up other queries, if you
frequently
need to select all of the children for a particular parent.
HTH
Bill
> Date: Thu, 22 Jan 2004 20:09:42 +0100
> From: Benjamin PERNOT
> To: [EMAIL PROTECTED]
> Subject: JOIN 10 times quicker than
Here is my problem:
I have 2 tables, a parent table and a child table. The parent table has got 113
rows, the child table has got 3 000 000 rows.
parent:
+------+------+
| p_id | name |
+------+------+
| 1    | A    |
| 2    | B    |
| ...  | ...  |
| 112  | C    |
+------+------+
Sent: Tuesday, 15 July 2003 17:06
To: [EMAIL PROTECTED]
Subject: Re: Managing big tables
Thanks for your efforts; in Rudy's last mail he confirms what I have
feared: the I/O overhead of reading the
extremely long rows most probably causes these longer query times. I
think we will have to
the MySQL RAID option won't really be
an option. Perhaps the only way to get these damned big tables smaller in
a reasonable time will be a solution
via merge tables (if this does not slow down our production queries on
that table too much).
alex
re analyze
could be handy.
-Original Message-
From: Veysel Harun Sahin [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 15 July 2003 16:24
To: Rudy Metzger
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Managing big tables
Sorry rudy, but I can not understand what you tr
-Original Message-
From: Veysel Harun Sahin [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 15 July 2003 15:22
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Managing big tables
http://www.mysql.com/doc/en/Data_size.html
[EMAIL PROTECTED] wrote:
Hello,
i've got a little
thing completely
different.
Cheers
/rudy
-Original Message-
From: Veysel Harun Sahin [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 15 July 2003 15:22
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Managing big tables
http://www.mysql.com/doc/en/Data_size.html
[
ete records from one your
and optimize this here only).
Cheerio
/rudy
-Original Message-
From: Alexander Schulz [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 15 July 2003 14:35
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Managing big tables
Hello,
i've got a little problem,
we
http://www.mysql.com/doc/en/Data_size.html
[EMAIL PROTECTED] wrote:
Hello,
i've got a little problem,
we're using MySQL with two big tables (one has 90 million rows (60 GB on
HD), the other contains nearly 200,000,000 rows (130 GB on HD).
Now we want to delete some rows from these tabl
time??
Simon
-Original Message-
From: Alexander Schulz [mailto:[EMAIL PROTECTED]
Sent: 15 July 2003 13:35
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Managing big tables
Hello,
i've got a little problem,
we're using mysql with two big tables (one has 90 Mio. Rows (60 Gb on
Hello,
I've got a little problem.
We're using MySQL with two big tables (one has 90 million rows (60 GB on
HD), the other contains nearly 200,000,000 rows (130 GB on HD)).
Now we want to delete some rows from these tables to free disk space. It
seems that MySQL frees the hard-disk space which
w
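If the cut-off sentence above is about deleted rows not shrinking the files on disk, the usual answer for MyISAM tables of that era (a hedged sketch; the table and column names are placeholders) is to rebuild the table after the delete:

DELETE FROM big_table WHERE created < '2003-01-01';
OPTIMIZE TABLE big_table;
-- OPTIMIZE rewrites the data and index files, returning the space freed by
-- the deletes to the filesystem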
tr. 6, 12169 Berlin (Germany)
> Telefon: +49 30 7970948-0 Fax: +49 30 7970948-3
>
>
> - Original Message -
> From: "Qunfeng Dong" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Monday, December 16,
Sent: Monday, December 16, 2002 6:42 PM
Subject: How can I speed up the Left Join on big tables?
> Hi,
>
> A simple left join on two big table took 5 mins to
> finish.
>
> Here is the "explain"
> mysql>
Hi,
A simple left join on two big tables took 5 minutes to
finish.
Here is the "explain":
mysql> explain select count(*) from newSequence s left
join newSequence_Homolog h on s.Seq_ID = h.Seq_ID;
+---++---+-+-+--+-+-+
| table | type
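A common first thing to try for a join like this (a hedged suggestion; the full EXPLAIN output is cut off above, so the index may already exist) is to index the join column on the joined table:

ALTER TABLE newSequence_Homolog ADD INDEX idx_seq_id (Seq_ID);
-- then re-check the plan
EXPLAIN SELECT COUNT(*) FROM newSequence s
LEFT JOIN newSequence_Homolog h ON s.Seq_ID = h.Seq_ID;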
I've upgraded to MySQL 3.23.49; do I still need to pass the big-tables option
on the command line when starting MySQL?
Thanks,
Ian
Hello,
I work with a MySQL server version 3.23 and MyISAM tables where records are
inserted in real time, between 10 and 50 records per second, until the table
reaches 10 million records, at which point another table is started.
I already have a PRIMARY KEY on this table and the current index "size" is
aro
Hi everybody, your help please.
I am hitting the well-known problem with MySQL 3.22 on a Red Hat 6.1 Linux platform.
Database error: Invalid SQL: select ..blabla
MySQL Error: 1114 (The table 'SQL54_0' is full)
Session halted.
I put the option big-tables in all the following groups in t
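For reference, the session-variable form of the same big-tables setting the message mentions (hedged; names as documented for 3.22/3.23, with the option itself going in the [mysqld] group of my.cnf):

SET SQL_BIG_TABLES = 1;
-- forces implicit temporary tables like 'SQL54_0' onto disk instead of the
-- size-limited in-memory format, avoiding error 1114 for large intermediate
-- results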
Hi Everyone
I know this type of question has been answered quite frequently so
please accept my apologies if this is "old hat" and just annoying. I
have spent a good few hours searching past discussions and the 'net
but I still haven't been successful so I'm posting my question here.
I am buildi
Hi,
I got some tips and tricks from some users about big tables.
- Using MySQL 3.23.33
- Red Hat Linux 6.0, kernel 2.2.5
What it seems I must do is get the "with-raid" option working
if I am going to solve my problem; MERGE will be too complicated
to do when I have 11 different client programs working towa
Roger,
On Sat, Mar 10, 2001 at 12:52:54AM +0100, Roger Westin wrote:
> Thanks Steve,
> but right now I can't change the filesystem, so I need to go for the
> with-raid option.
> I read the manual and recompiled my MySQL, but I still can't get bigger
> files than 2GB. According to the manual it sha
>From: Steve Ruby <[EMAIL PROTECTED]>
>To: Roger Westin <[EMAIL PROTECTED]>
>CC: [EMAIL PROTECTED]
>Subject: Re: Big Tables
>Date: Fri, 09 Mar 2001 11:07:11 -0700
>
>Roger Westin wrote:
> >
> > Hi,
> > Have a problem with big tables,
> > Cant get them o
Roger Westin wrote:
>
> Hi,
> Have a problem with big tables,
> Cant get them over 2GB
> Using mysql 3.23.33
> and ReadHat Linux 6.0 Kernel: 2.2.5
>
> Need to biuld a table atleast 70Gb so Anyone?
>
switch to a file system that supports very large files
The prob
Hi,
I have a problem with big tables:
I can't get them over 2GB,
using MySQL 3.23.33
and Red Hat Linux 6.0, kernel 2.2.5.
I need to build a table of at least 70GB, so... anyone?
/roger
On Sun, Jan 14, 2001 at 09:36:00AM +, Martin Thoma wrote:
> Hi there,
>
> sorry, I'm a newbee and I didn't found what to do in the manual.
>
> I heard in the maillinglist something about big tables. I want to
> make a database with about 2-3 GB. What do I have t
Hi there,
Sorry, I'm a newbie and I didn't find what to do in the manual.
I heard on the mailing list something about big tables. I want to make a
database of about 2-3 GB. What do I have to do / set / pay attention to
BEFORE I start?
Thanks in advance
: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: 12 January 2001 00:11
Subject: Problem with big tables.
>PS: I think we have solved the bug with table_cache > 256; I have
>patched bdd for this and if it works we will make a new release
>ASAP!
---
Hi!
>>>>> "Peter" == Peter Zaitsev <[EMAIL PROTECTED]> writes:
Peter> Hello mysql,
Peter> I've recently tried to do a test of mysqld with big tables (>4GB) on
Peter> linux 2.4 mysql 3.23.30
Peter> 1) The strange thing is even without setti
Hello mysql,
I've recently tried to do a test of mysqld with big tables (>4GB) on
Linux 2.4, MySQL 3.23.30.
1) The strange thing is that even without setting up max_rows I got a
table to grow to more than 4GB with an insert test. Is this possible?
(Maybe you use special pointers fo
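For reference, the usual way at the time to let a MyISAM table grow past 4GB was to enlarge its row pointers via table options (a hedged sketch; the table name and numbers are only illustrative):

ALTER TABLE big_table MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 100;
-- MySQL sizes the internal row pointer from MAX_ROWS * AVG_ROW_LENGTH, so
-- larger values here raise the table's maximum data file size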