Yes, I killed the query several times, but no way: the server kept hogging
disk space, and not even a shutdown worked!
Thanks!
Claudio



2009/2/27 Brent Baisley <brentt...@gmail.com>

> MySQL can handle large tables no problem; it's large queries that it
> has issues with. Couldn't you just kill the query instead of killing
> MySQL? Use SHOW PROCESSLIST to get the query id, then KILL it. You may
> already know that.
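
For reference, the kill sequence Brent describes looks like this from a mysql client session (the id 42 below is an example value, not from the thread; use whatever Id SHOW PROCESSLIST reports for the runaway query):

```sql
-- List running threads; note the Id of the long-running query
SHOW PROCESSLIST;

-- Kill that thread's connection entirely
KILL 42;

-- On MySQL 5.0+ you can instead kill only the running statement,
-- leaving the connection open:
KILL QUERY 42;
```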
>
> Brent
>
> On Fri, Feb 27, 2009 at 5:25 PM, Claudio Nanni <claudio.na...@gmail.com>
> wrote:
> > Great Brent, that helps a lot!
> > It is very good to know your experience.
> > I will speak to the developers and see if there is an opportunity to
> > apply the 'divide et impera' principle!
> > I am sorry to say MySQL is a little out of control when dealing with
> > huge tables; it is the first time I have had to kill the MySQL daemon
> > a couple of times.
> >
> > Thanks again Brent
> >
> > Claudio
> >
> >
> >
> > Brent Baisley wrote:
> >>
> >> I've used a similar setup and hit up to 400 million with 5-7 million
> >> records being added and deleted per day. Processing the table like you
> >> mention gave me the exact same result. The query was just too big for
> >> MySQL to handle. If you can break down your query into multiple
> >> smaller queries, it will run much quicker. I went from 7-8 hours down
> >> to 10 minutes. I broke my query into increments based on date/time and
> >> merged the results.
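
As a sketch of that incremental approach (the table and column names here are hypothetical, not from the thread): instead of one scan over the full range, run one query per date increment and merge the results, e.g. into a temporary summary table:

```sql
-- Hypothetical example: aggregate a huge log table one month at a time
CREATE TEMPORARY TABLE report_tmp (day DATE, hits BIGINT);

INSERT INTO report_tmp
SELECT DATE(created_at), COUNT(*)
FROM access_log
WHERE created_at >= '2009-01-01' AND created_at < '2009-02-01'
GROUP BY DATE(created_at);

INSERT INTO report_tmp
SELECT DATE(created_at), COUNT(*)
FROM access_log
WHERE created_at >= '2009-02-01' AND created_at < '2009-03-01'
GROUP BY DATE(created_at);

-- ...one INSERT per increment, then read the merged result:
SELECT * FROM report_tmp ORDER BY day;
```

Each increment touches a much smaller slice of the index, which is what makes the total run so much faster than one monolithic query.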
> >> I also switched to using MERGE tables so I could create a much narrower
> >> set of tables to query on (i.e. current_month). Of course, this means
> >> splitting your table into separate tables based on a certain criterion.
> >> Basically, divide and conquer.
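
A MERGE table over per-month MyISAM tables (again with hypothetical names) would look roughly like this; the underlying tables must all have identical definitions:

```sql
-- Hypothetical monthly tables, all MyISAM with the same structure
CREATE TABLE log_2009_01 (
  id INT NOT NULL PRIMARY KEY,
  created_at DATETIME
) ENGINE=MyISAM;

CREATE TABLE log_2009_02 LIKE log_2009_01;

-- A "current_month"-style table covering only the tables you need to query;
-- new rows inserted here go into the last table in the UNION list
CREATE TABLE log_recent (
  id INT NOT NULL,
  created_at DATETIME
) ENGINE=MERGE UNION=(log_2009_01, log_2009_02) INSERT_METHOD=LAST;
```

Queries against log_recent then scan only the listed tables instead of the whole history.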
> >>
> >> Hope that helps.
> >>
> >> Brent Baisley
> >>
> >> On Fri, Feb 27, 2009 at 4:42 PM, Claudio Nanni <claudio.na...@gmail.com>
> >> wrote:
> >>
> >>>
> >>> Hi,
> >>> I have one 15GB table with 250 million records and just the primary key;
> >>> it is a very simple table, but when a report is run (query) it just
> >>> takes hours, and sometimes the application hangs.
> >>> I was trying to play a little with indexes and tuning (there are no
> >>> great indexes to be built, though), but every time I try to ALTER TABLE
> >>> to add indexes, it just hogs the disk space and takes hours trying to
> >>> build the indexes in several passes (.TMD files). It is a real pain,
> >>> since I cannot even kill the mysql process; I had to kill the server,
> >>> which corrupted the table, and then stop/start and repair the table.
> >>> Does anybody experience problems in managing a simple MyISAM table with
> >>> 250 million records and a primary key?
> >>> I also tried duplicating the table, adding indexes, and inserting into
> >>> it (also using InnoDB for the new table), but it really takes ages
> >>> every time. And I had to move the 'tmpdir' to the data partition
> >>> because it was filling / to 100%.
> >>>
> >>> MySQL is 5.0.x on 64-bit RHEL 5 with 16GB RAM and NAS storage.
> >>>
> >>> Any hints on how to manage big tables?
> >>>
> >>> Thanks
> >>>
> >>> Claudio Nanni
> >>>
> >>>
> >>
> >>
> >
> >
>
