Re: LARGE operation stuck. What now?

2005-09-08 Thread Joseph Cochran
Thank you. This is a very promising answer. I don't know that we want to 
drop the table if we don't have to, but knowing that we can restart the DB 
without the rollback operation is a boon! We could certainly do a mysqldump 
of just that table (which works fine, we continue to run nightly backups), 
then restore it if we do need to drop it. 

-- Joe

On 9/7/05, Gleb Paharenko <[EMAIL PROTECTED]> wrote:
> 
> Hello.
> 
> 
> 
> Here is a possible way to force the rollback (you can kill the mysqld
> process and set innodb_force_recovery to 3 to bring the database up
> without the rollback, then DROP the table that is causing the runaway
> rollback):
> 
> http://dev.mysql.com/doc/mysql/en/forcing-recovery.html
> 
> 
> Joseph Cochran <[EMAIL PROTECTED]> wrote:
> 
> >Thanks for the questions, hopefully this will help: InnoDB, yes. It's
> >version 4.1.11, not replicated.
> >
> >I am familiar with KILL. It is definitely something I CAN do, but not
> >necessarily something I SHOULD do at this point in time. Usually when
> >you kill a process while it's running, it will roll back the transaction
> >before releasing the process, which often takes as long as the commit:
> >I'd rather not kill it and have it rolling back for two weeks if I can
> >help it.
> >
> >Thanks!
> 
> 
> --
> For technical support contracts, go to https://order.mysql.com/?ref=ensita
> This email is sponsored by Ensita.NET http://www.ensita.net/
> Gleb Paharenko, MySQL AB / Ensita.NET
> www.mysql.com
> 
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
> 
>
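Concretely, the sequence Gleb links to might look like the fragment below: the option is set temporarily in my.cnf, the server restarted, the offending table dropped, and the option removed again before a normal restart. This is a sketch under the assumptions of the thread; the exact level to use is documented on the linked page.

```ini
# my.cnf fragment (temporary): start InnoDB without running the
# rollback; remove this line again once the offending table is dropped.
[mysqld]
innodb_force_recovery = 3
```

With the server up in that mode, `DROP TABLE big_table;` (table name hypothetical) removes the table behind the runaway rollback, and the mysqldump taken earlier can be reloaded after restarting without the option.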


Re: LARGE operation stuck. What now?

2005-09-06 Thread Joseph Cochran
Thanks for the questions, hopefully this will help: InnoDB, yes. It's 
version 4.1.11, not replicated.

I am familiar with KILL. It is definitely something I CAN do, but not 
necessarily something I SHOULD do at this point in time. Usually when you 
kill a process while it's running, it will roll back the transaction before 
releasing the process, which often takes as long as the commit: I'd rather 
not kill it and have it rolling back for two weeks if I can help it.

Thanks!

-- Joe

On 9/6/05, Kevin Burton <[EMAIL PROTECTED]> wrote:
> 
> INNODB I assume?
> 
> Replicated environment?
> 
> What version of mysql?
> 
> See KILL in the SQL manual... if you do a SHOW PROCESSLIST you can get
> the connection id and you might be able to kill it.
> 
> I believe that it's safe to do a KILL on a DELETE but any decision you 
> make here is your own...
> 
> That's a LOT of data...
> 
> Also... if the kill works you could still delete in the future but put a 
> LIMIT on the delete clause. This way you can determine how long your 
> deletes will take. 
> 
> Kevin
> 
> On 9/6/05, Joseph Cochran <[EMAIL PROTECTED]> wrote:
> > 
> > Here's the situation. I have a table of about 900 million rows, 
> > consisting
> > of a bigint and an int in each row. There's an index on the bigint. The
> > table is referenced in a SELECT in other DB operations roughly 15 or 20 
> > times per day.
> > 
> > We're under tight deadlines and some operations on the table were 
> > getting
> > too slow, so two weeks ago I initiated a DELETE from the table that 
> > should
> > remove about 600 million of the rows, based on values in the int column. 
> > 
> > 
> > It's still running.
> > 
> > In the interim, it appears that the transaction has finished (doing a 
> > select
> > will yield rows that should be there while rows that should not are 
> > indeed
> > excluded), but the transaction is still "updating" in the processlist 
> > and 
> > hasn't returned to the prompt that issued it. Further, MySQL appears to 
> > be
> > thrashing the disk, running about 250 transactions per second of about 
> > 20-25
> > KB per transaction. The end result of this is that any DB operation we 
> > run 
> > that's small enough to be in memory is unaffected, but anything that 
> > starts
> > swapping to the disk is slow.
> > 
> > We have changed our workflow and I've changed the code so that the 
> > affected
> > table isn't hurting us directly anymore, but now it's two weeks on and 
> > very 
> > soon our deadline will have passed and I'll need to resolve this if the
> > transaction doesn't finish.
> > 
> > At this point I don't care what I should or should not have done, or 
> > what
> > went wrong to cause this sort of hang. I need to know what my options 
> > are 
> > when we have time to work on the server without impacting other people's
> > deadlines, which means I need a few educated guesses on what's going on 
> > with
> > the DB. What will happen if I stop the transaction from my client? If I 
> > kill 
> > the process? What will happen if we reboot the server? What the heck is
> > MySQL DOING that's thrashing the disk anyway? Why is it not using a 
> > higher
> > throughput to the disk?
> > 
> > We're okay for now, but we're kind of without a net in that backups take 
> > 16 
> > hours to run because of the slow disk access. So we have something, it's
> > just not very optimal.
> > 
> > But at some point in the next few weeks I'm going to have to take some 
> > sort
> > of action if this thing doesn't commit, and I don't want it to be 
> > offline 
> > for several days undoing whatever it's done.
> > 
> > Any advice would be most appreciated.
> > 
> > -- Joe
> > 
> > 
> 
> 
> -- 
> Kevin A. Burton, Location - San Francisco, CA
> AIM/YIM - sfburtonator, Web - http://www.feedblog.org/
> GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 
>
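Kevin's two suggestions, KILL and a LIMIT-bounded delete, might look like the following sketch (the table name, column name, and connection id are hypothetical):

```sql
-- Find the connection id of the runaway statement, then kill it:
SHOW PROCESSLIST;
KILL 1234;  -- hypothetical id taken from the processlist output

-- For future purges, delete in bounded batches instead of one huge
-- transaction; rerun the statement until it affects zero rows:
DELETE FROM big_table WHERE int_col = 0 LIMIT 10000;
```

Each batch commits on its own, so an interrupted purge rolls back at most one batch rather than two weeks of work.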


LARGE operation stuck. What now?

2005-09-06 Thread Joseph Cochran
Here's the situation. I have a table of about 900 million rows, consisting 
of a bigint and an int in each row. There's an index on the bigint. The 
table is referenced in a SELECT in other DB operations roughly 15 or 20 
times per day. 

We're under tight deadlines and some operations on the table were getting 
too slow, so two weeks ago I initiated a DELETE from the table that should 
remove about 600 million of the rows, based on values in the int column. 

It's still running.

In the interim, it appears that the transaction has finished (doing a select 
will yield rows that should be there while rows that should not are indeed 
excluded), but the transaction is still "updating" in the processlist and 
hasn't returned to the prompt that issued it. Further, MySQL appears to be 
thrashing the disk, running about 250 transactions per second of about 20-25 
KB per transaction. The end result of this is that any DB operation we run 
that's small enough to be in memory is unaffected, but anything that starts 
swapping to the disk is slow. 

We have changed our workflow and I've changed the code so that the affected 
table isn't hurting us directly anymore, but now it's two weeks on and very 
soon our deadline will have passed and I'll need to resolve this if the 
transaction doesn't finish. 

At this point I don't care what I should or should not have done, or what 
went wrong to cause this sort of hang. I need to know what my options are 
when we have time to work on the server without impacting other people's 
deadlines, which means I need a few educated guesses on what's going on with 
the DB. What will happen if I stop the transaction from my client? If I kill 
the process? What will happen if we reboot the server? What the heck is 
MySQL DOING that's thrashing the disk anyway? Why is it not using a higher 
throughput to the disk?

We're okay for now, but we're kind of without a net in that backups take 16 
hours to run because of the slow disk access. So we have something, it's 
just not very optimal. 

But at some point in the next few weeks I'm going to have to take some sort 
of action if this thing doesn't commit, and I don't want it to be offline 
for several days undoing whatever it's done.

Any advice would be most appreciated.

-- Joe


Re: TimeZone

2005-08-08 Thread Joseph Cochran
Some countries have multiple timezones, so it is not sufficient to
know the country code in order to get the timezone. If they have
previously posted the timezone, however, then it should be possible to
store that information in a cookie on the client machine that your web
layer can retrieve. If you want to permanently tie a timezone to a
user (assuming that this is an internal system or other system to
which your users authenticate -- if it is a public website you're
going to have to use cookies), simply include an extra column in the
user's record storing the user's offset from GMT in hours (so the US
east coast would be -5), save all of your data in GMT, and apply that
offset via datetime functions either in the query or in your web layer.
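That scheme might look like the following sketch (table and column names
are hypothetical):

```sql
-- Hypothetical schema: all timestamps stored in GMT, plus a per-user
-- offset column:
ALTER TABLE users ADD COLUMN tz_offset_hours INT NOT NULL DEFAULT 0;

-- Render a stored GMT timestamp in the user's local time
-- (e.g. tz_offset_hours = -5 for the US east coast):
SELECT DATE_ADD(r.posted_at, INTERVAL u.tz_offset_hours HOUR) AS local_time
  FROM requests r
  JOIN users u ON u.id = r.user_id;
```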

-- Joe

On 8/8/05, KH <[EMAIL PROTECTED]> wrote:
> Hi,
> There is a request from management: when a user browses a request (via
> the web), the database should return the request's timestamp in the
> timezone the user previously posted from. How do I do that? Do I need to
> store all country codes together with their timezones in the MySQL
> database? Is there any way to find a full list of country codes together
> with their timezones?
> 
> 
> Cheers
> KH
> 
> 




Re: Slow queries, why?

2005-05-04 Thread Joseph Cochran
Thanks! Explain and InnoDB monitor were exactly what I needed to
diagnose and fix the problem! In case you were curious, the issue was
that the statement I was expecting to run was not the statement that
was running, but the first hundred and some-odd characters in both
were the same. Using the monitor I was able to see that the wrong
thing was running.

Some SELECTs are still taking longer than they should, but I have some
new tools at my disposal, which makes me very happy.

-- Joe

On 5/4/05, Gleb Paharenko <[EMAIL PROTECTED]> wrote:
> Hello.
> 
> > We're running MySQL 4.11 on a machine with 2GB memory, the table is
> > InnoDB with a compound primary key, and additional indexes on all rows
> > with searchable options in the API. Any generic advice or admin tools
> > would be great.
> 
> Use EXPLAIN to determine how efficient your indexes are. Using a lot of
> keys could slow down INSERT operations but speed up SELECTs.
> InnoDB monitors might be helpful in your case as well. See:
> 
>   http://dev.mysql.com/doc/mysql/en/explain.html
>   http://dev.mysql.com/doc/mysql/en/innodb-monitor.html
> 
> Joseph Cochran <[EMAIL PROTECTED]> wrote:
> 
> > So here's my situation: we have a database that has a table of about 5
> > million rows. To put a new row into the table, I do an INSERT ...
> > SELECT, pulling data from one row in the table to seed the data for
> > the new row. When there are no active connections to the DB other than
> > the one making the INSERT, it runs like a charm. But during normal
> > daytime operation, when we run around 50 connections (most sleeping at
> > any one time), it takes up to two minutes to do, and ends up locking
> > any other inserts or updates against that table for the entire time.
> >
> > I'll get into more specifics if they're required, but I wanted to ask
> > in general if MySQL has tools to diagnose this, or if anyone has had
> > general situations like this. In SQL Server (which is where I have
> > most of my experience) I could use the trace tool and the Query
> > Analyzer to tell what the execution plan for the query was and thus
> > what's stalling it (an index gone bad, a weird locking situation,
> > etc).
> >
> > We're running MySQL 4.11 on a machine with 2GB memory, the table is
> > InnoDB with a compound primary key, and additional indexes on all rows
> > with searchable options in the API. Any generic advice or admin tools
> > would be great.
> >
> > -- Joe
> 
> 




Slow queries, why?

2005-05-03 Thread Joseph Cochran
So here's my situation: we have a database that has a table of about 5
million rows. To put a new row into the table, I do an INSERT ...
SELECT, pulling data from one row in the table to seed the data for
the new row. When there are no active connections to the DB other than
the one making the INSERT, it runs like a charm. But during normal
daytime operation, when we run around 50 connections (most sleeping at
any one time), it takes up to two minutes to do, and ends up locking
any other inserts or updates against that table for the entire time.

I'll get into more specifics if they're required, but I wanted to ask
in general if MySQL has tools to diagnose this, or if anyone has had
general situations like this. In SQL Server (which is where I have
most of my experience) I could use the trace tool and the Query
Analyzer to tell what the execution plan for the query was and thus
what's stalling it (an index gone bad, a weird locking situation,
etc).

We're running MySQL 4.11 on a machine with 2GB memory, the table is
InnoDB with a compound primary key, and additional indexes on all rows
with searchable options in the API. Any generic advice or admin tools
would be great.
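As a starting point, the two MySQL-side tools that correspond to SQL Server's Query Analyzer and trace are EXPLAIN and the InnoDB monitor. A sketch of a first diagnostic pass (table and column names are hypothetical):

```sql
-- Check the execution plan of the SELECT half of the INSERT ... SELECT;
-- in 4.1, EXPLAIN works on SELECT statements:
EXPLAIN SELECT * FROM widgets WHERE widget_id = 12345;

-- Dump the InnoDB monitor output (active transactions, lock waits,
-- buffer pool activity) to look for the lock causing the stall:
SHOW INNODB STATUS;
```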

-- Joe




MySQL and FreeBSD instability

2005-04-22 Thread Joseph Cochran
We've been using mysql version 4.7 on a FreeBSD 4.10 machine for the
past few months, and while we've been putting a lot of load onto the
DB, it's been stable. This past weekend, our sysadmin updated FreeBSD
from 4.10 to 4.11, at which point mysql began having serious problems.
The DB itself is fine, but the process in FreeBSD instead of spending
CPU time ends up hung in a state of "biord", during which NO database
operations can take place. From what I've been able to tell, this is a
blocking I/O state, but what we can't figure out is why it's suddenly
doing this, and how to get it to stop.

Other than standard start-stop-reboot and so on, here are several
remedies we've tried: rollback to BSD 4.10. Blocking state remains.
Upgrade mysql to latest release (also 4.11): Blocking state goes from
90% of the time to 50% or so, but remains. The next step we're going
to try is to switch out hardware to a BSD 5 machine and hope that
solves the issue. But we've had no luck trying to figure out what's
happening. Other states we see are "ffsfsn" and the occasional
"sbwait".

Has anyone else had a similar issue, and if so how did you address it?

Thanks!

-- Joe




Re: Why "VARCHAR" TO "CHAR" automatically when the length less than 4.

2004-08-23 Thread Joseph Cochran
On Mon, 23 Aug 2004 15:25:21 +0200, Martijn Tonies <[EMAIL PROTECTED]> wrote:
> > > I have a questions about "varchar columns" change to "CHAR columns"
> > > automatically.

> > See http://dev.mysql.com/doc/mysql/en/Silent_column_changes.html
 
> In addition to that - it doesn't really matter as the CHAR datatype
> isn't properly implemented in MySQL and behaves the same as
> the VARCHAR datatype.

I don't believe that is correct. We recently had a very large disk
space problem that was solved when we discovered that a column that
should have been varchar was instead char, and was taking up excess
storage because of it (which a datatype acting like varchar wouldn't
do).
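The storage difference is easy to see in a sketch (table and column names
are hypothetical): in MySQL 4.1, a CHAR(N) column always occupies N bytes
per row, space-padded, while a VARCHAR column stores only the data plus a
length byte.

```sql
-- Hypothetical demo table: CHAR(200) occupies 200 bytes in every row;
-- VARCHAR(200) occupies string length + 1 byte.
CREATE TABLE pad_demo (
  fixed   CHAR(200),
  varying VARCHAR(200)
);

-- 'abc' costs 200 bytes in `fixed` but only 4 bytes (3 data + 1
-- length) in `varying` -- the excess storage described above.
INSERT INTO pad_demo VALUES ('abc', 'abc');
```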

-- Joe
