Re: NOW() is stuck...

2013-06-27 Thread Eric Bergen
>>> returning
>>>>> the same
>>>>> value:
>>>>>
>>>>> mysql> select now();
>>>>> +---------------------+
>>>>> | now()               |
>>>>> +---------------------+
>>>>> | 2013-06-26 02:27:14 |
>>>>> +---------------------+
>>>>>
>>>>> The system variable "timestamp" also has that same time value stored in
>>>>> it. How
>>>>> can we kick this loose so that the values are more current with real
>>>>> time? (it is
>>>>> currently 3:08PM here, despite our MySQL instance thinking it's 2am.)
>>>>> The
>>>>> system
>>>>> time on the machine is correct:
>>>>>
>>>>> $ date
>>>>> Wed Jun 26 15:08:56 PDT 2013
>>>>>
>>>>>
>>>>> This is MySQL 5.1.46 running on solaris2.10.
>>>>>
>>>>> Any ideas short of restarting the MySQL engine? I'm willing to do that,
>>>>> but would much
>>>>> rather wait and not do it in the middle of the day.
>>>>>
>>>>> Thanks,
>>>>> Andy
>>>>>
>>>>>
>>>>> --
>>>>> Andy Wallace
>>>>> iHOUSEweb, Inc.
>>>>> awall...@ihouseweb.com
>>>>> (866) 645-7700 ext 219
>>>>> --
>>>>> "Sometimes it pays to stay in bed on Monday, rather than spending the
>>>>> rest of the week debugging Monday's code."
>>>>> - Christopher Thompson
>>>>>
>>>>> --
>>>>> MySQL General Mailing List
>>>>> For list archives: http://lists.mysql.com/mysql
>>>>> To unsubscribe:http://lists.mysql.com/mysql
>>>>>
>>>>
>>>>
>>>> 
>>>>
>>>>
>>>
>>> --
>>> Andy Wallace
>>> iHOUSEweb, Inc.
>>> awall...@ihouseweb.com
>>> (866) 645-7700 ext 219
>>> --
>>> "Sometimes it pays to stay in bed on Monday, rather than spending the
>>> rest of the week debugging Monday's code."
>>> - Christopher Thompson
>>>
>>> --
>>> MySQL General Mailing List
>>> For list archives: http://lists.mysql.com/mysql
>>> To unsubscribe:http://lists.mysql.com/mysql
>>>
>>
>>
>> 
>>
>>
>
> --
> Andy Wallace
> iHOUSEweb, Inc.
> awall...@ihouseweb.com
> (866) 645-7700 ext 219
> --
> "Sometimes it pays to stay in bed on Monday, rather than spending the rest
> of the week debugging Monday's code."
> - Christopher Thompson
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: NOW() is stuck...

2013-06-26 Thread Eric Bergen
This is the expected behavior if you set the timestamp variable in
your session. This is the same mechanism that replication uses to
execute transactions on the slave with the correct time. Setting
timestamp back to default or reopening your connection will fix it.

MariaDB [(none)]> set timestamp=1372296737;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> select now(); select sleep(5); select now();
+---------------------+
| now()               |
+---------------------+
| 2013-06-26 21:32:17 |
+---------------------+
1 row in set (0.00 sec)

+----------+
| sleep(5) |
+----------+
|        0 |
+----------+
1 row in set (5.00 sec)

+---------------------+
| now()               |
+---------------------+
| 2013-06-26 21:32:17 |
+---------------------+
1 row in set (0.00 sec)

MariaDB [(none)]> set timestamp=default;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> select now();
+---------------------+
| now()               |
+---------------------+
| 2013-06-26 21:33:53 |
+---------------------+
1 row in set (0.00 sec)

MariaDB [(none)]> select now();
+---------------------+
| now()               |
+---------------------+
| 2013-06-26 21:33:54 |
+---------------------+
1 row in set (0.00 sec)

On Wed, Jun 26, 2013 at 4:18 PM, John Meyer
 wrote:
> Well, if you want to get unstuck in time, maybe you need to call Billy
> Pilgrim ;-)
>
> Andy Wallace wrote:
>>
>> We've been having some issues with one of our MySQL servers lately, and
>> currently
>> the dang thing is "stuck". For at least the last hour, NOW() is returning
>> the same
>> value:
>>
>> mysql> select now();
>> +---------------------+
>> | now()               |
>> +---------------------+
>> | 2013-06-26 02:27:14 |
>> +---------------------+
>>
>> The system variable "timestamp" also has that same time value stored in
>> it. How
>> can we kick this loose so that the values are more current with real time?
>> (it is
>> currently 3:08PM here, despite our MySQL instance thinking it's 2am.) The
>> system
>> time on the machine is correct:
>>
>> $ date
>> Wed Jun 26 15:08:56 PDT 2013
>>
>>
>> This is MySQL 5.1.46 running on solaris2.10.
>>
>> Any ideas short of restarting the MySQL engine? I'm willing to do that,
>> but would much
>> rather wait and not do it in the middle of the day.
>>
>> Thanks,
>> Andy
>>
>>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Are There Slow Queries that Don't Show in the Slow Query Logs?

2013-05-31 Thread Robinson, Eric
> -Original Message-
> From: Vikas Shukla [mailto:myfriendvi...@gmail.com] 
> Sent: Thursday, May 30, 2013 7:19 PM
> To: Robinson, Eric; mysql@lists.mysql.com
> Subject: RE: Are There Slow Queries that Don't Show in the 
> Slow Query Logs?
> 
> Hi,
> 
> No, it does not represent the time from request to response, nor does it
> include the time spent waiting for locks to be released.
> The slow query log consists of SQL statements that took more 
> than long_query_time seconds to EXECUTE. The default value of 
> long_query_time is 10.
> The time to acquire the initial locks is not counted as 
> execution time.
> mysqld writes a statement to the slow query log after it has 
> been executed and after all locks have been released, so log 
> order might differ from execution order.
> 
> Let's take an example: if a query is received at 10:00 and waits until
> 10:05, then starts executing at 10:05:00 and completes at 10:05:24
> (HH:MM:SS), it took 24 seconds to execute, so only 24 seconds is counted.
> So if long_query_time is 10 (the default), the query would be written to
> the slow query log because it took more than 10 seconds to execute.
> 
> Sent from my Windows Phone From: Robinson, Eric
> Sent: 31-05-2013 03:48
> To: mysql@lists.mysql.com
> Subject: Are There Slow Queries that Don't Show in the Slow 
> Query Logs?
> As everyone knows, with MyISAM, queries and inserts can lock 
> tables and force other queries to wait in a queue. When that 
> happens, does the time shown in the slow query logs represent 
> the whole time from when the server received the request to 
> when the response was sent to the client? Or is the time a 
> query spends waiting for a table lock to be released omitted 
> from what is recorded in the slow query logs?
> 
> --
> Eric Robinson
> 
> 

Very good answer, Vikas. Thank you for the clarification!
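For reference, a minimal way to check and adjust that threshold on a running
server (a sketch; long_query_time is dynamic on 5.1 and later, and 5 seconds
is just an example value):

mysql> SHOW VARIABLES LIKE 'long_query_time';
mysql> SET GLOBAL long_query_time = 5;    -- new connections
mysql> SET SESSION long_query_time = 5;   -- the current session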

--Eric





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: [>Suspected Spam<][Characteristics] Re: NET START MYSQL QUESTION?

2013-05-12 Thread Robinson, Eric
> why not answer the question another user made hours ago?
> under which account do you try to start mysqld?
> 

Agreed. Chances are good that if he goes into the Windows Services control 
panel and gets the properties of the mysql service, he will find that it is 
configured to start under a Windows account other than the System account, and 
that the Windows account in question either does not have the required 
privileges or it is locked. I'd probably just change it to start under the 
System account.
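For what it's worth, a sketch of making that change from an elevated command
prompt, assuming the service is registered under the default name "MySQL"
(it may differ): query the current start account, point the service at the
System account, then start it.

C:\> sc qc MySQL
C:\> sc config MySQL obj= LocalSystem
C:\> net start MySQL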

--Eric


 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: [>Suspected Spam<][Characteristics] RE: Slow Response -- What Does This Sound Like to You?

2013-05-10 Thread Robinson, Eric
.`actionflag` as "actionflag"  from 
(select 
telenc_loghist.actionflag,telenc_loghist.encounterid,telenc_loghist.moddate 
from telenc_loghist
 inner join enc on 
enc.encounterid=telenc_loghist.encounterid where actionflag in(4) and enc.date 
between '2011-01-01' and '2013-05-07') `Q2`) `Addressed_Query` on 
`Created_Query`.`encounterid` = `Addressed_Query`.`encounterid` where  NOT 
`Addressed_Query`.`moddate` is null) `Time_Difference_Query` on 
`Query1`.`Encounter_ID` = `Time_Difference_Query`.`encounterid` where 
`Query1`.`Appointment_Provider_ID` in (9118, 9119, 60922, 9116, 47495) and 
`Query1`.`Facility_Name` in ('Fremont Family Care') and 
`Query1`.`Appointment_Date` between cast(cast('2011-01-01' as date) as date) 
and cast(cast('2013-05-07' as date) as date) and 
`Query1`.`Appointment_Provider_ID` = 60922;



Also, the query cache size is currently set to 64MB. Do you really think 
turning it off is a good idea?

The my.cnf file follows...

[mysqld_safe]
timezone=America/Chicago
[mysqld]
#-optimization
skip-locking
skip-innodb
skip-bdb
query_cache_type = 1
tmp_table_size=1M
wait_timeout=2048
interactive_timeout=2048
table_cache=1024
query_cache_limit=1M
thread_concurrency = 4
key_buffer = 256M
query_cache_size=64M
max_allowed_packet = 1M
sort_buffer_size = 512K
read_buffer_size = 512K
myisam_sort_buffer_size = 128M
thread_cache_size=40
max_connections=500
max_tmp_tables=32
lower_case_table_names=1
#-turn on query logging
#log=/ha01_mysql/site150/mysql/query.log
log_slow_queries=/ha01_mysql/site150/mysql/slow_query.log
long_query_time=3
#-make this server a replication master
#log-bin = /ha01_mysql/site150/mysql/binlogs/
expire_logs_days=5
server-id=99
#replicate-do-db=mobiledoc_150


I'm thinking that our best solution may be to re-enable the slave (currently 
disabled) and point the nasty query at it.

By the way, we have considered switching to InnoDB, but in our environment the
advantages of MyISAM have usually outweighed the benefits of switching.

--Eric







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Slow Response -- What Does This Sound Like to You?

2013-05-09 Thread Robinson, Eric
> -Original Message-
> From: Wm Mussatto [mailto:mussa...@csz.com] 
> Sent: Thursday, May 09, 2013 3:50 PM
> To: Robinson, Eric
> Cc: Rick James; mysql@lists.mysql.com
> Subject: RE: Slow Response -- What Does This Sound Like to You?
> 
> On Thu, May 9, 2013 15:25, Robinson, Eric wrote:
> >>
> >> > -Original Message-
> >> > From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
> >> > Sent: Thursday, May 09, 2013 1:58 PM
> >> > To: mysql@lists.mysql.com
> >> > Subject: Slow Response -- What Does This Sound Like to You?
> >> >
> >> > We have a situation where users complain that the system
> >> periodically
> >> > freezes for 30-90 seconds. We check the slow query logs and
> >> find that
> >> > one user issued a complex query that did indeed take 30-90
> >> seconds to
> >> > complete. However, NO slow queries are recorded for the other 50 
> >> > users, before, during, or after the freeze. Note that 
> the complex 
> >> > query in question always shows: "Lock_time: 0".
> >> >
> >> > Q: What conditions could cause a single query to lock up a
> >> database for
> >> > a while for all users (even though it shows "lock time: 
> 0")  but no 
> >> > other slow queries would show in the logs for any other
> >> users who are
> >> > hitting the database at the same time?
> >> >
> >> > OS: RHEL3 x64
> >> > CPU: 8 x 2.9GHz Xeon
> >> > RAM: 32GB
> >> > Disk: RAID 5 (6 x 512GB SSD)
> >> > MySQL: 5.0.95 x64
> >> > Engine: MyISAM
> >> >
> >
> >
> >> MyISAM?  Or InnoDB?
> >> Lock_time perhaps applies only to table locks on MyISAM.
> >>
> >> SHOW ENGINE InnoDB STATUS;
> >> You may find some deadlocks.
> >>
> >> Is Replication involved?
> >>
> >> Anyone doing an ALTER?
> >
> >
> >
> > MyISAM, no replication involved, and nobody is altering the 
> database. 
> > This happens whenever people run certain reports.
> >
> >
> > --Eric
> This may be a dumb question, but have you verified that the 
> applications do not issue a "Lock TABLES ..."? Either the big 
> one or one of the others.
> 

I have not verified this, but it should be easy to find out. Hopefully that is 
not the case as it is a canned application and we don't have access to the code.
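One way to check for that without touching the application code (a sketch;
seeing other users' threads requires the PROCESS privilege) is to watch for
held table locks and for threads sitting in the "Locked" state while the
reports run:

mysql> SHOW OPEN TABLES WHERE In_use > 0;
mysql> SHOW FULL PROCESSLIST;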

--Eric






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Slow Response -- What Does This Sound Like to You?

2013-05-09 Thread Robinson, Eric
> 
> > -Original Message-
> > From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
> > Sent: Thursday, May 09, 2013 1:58 PM
> > To: mysql@lists.mysql.com
> > Subject: Slow Response -- What Does This Sound Like to You?
> > 
> > We have a situation where users complain that the system 
> periodically 
> > freezes for 30-90 seconds. We check the slow query logs and 
> find that 
> > one user issued a complex query that did indeed take 30-90 
> seconds to 
> > complete. However, NO slow queries are recorded for the other 50 
> > users, before, during, or after the freeze. Note that the complex 
> > query in question always shows: "Lock_time: 0".
> > 
> > Q: What conditions could cause a single query to lock up a 
> database for 
> > a while for all users (even though it shows "lock time: 0")  but no 
> > other slow queries would show in the logs for any other 
> users who are 
> > hitting the database at the same time?
> > 
> > OS: RHEL3 x64
> > CPU: 8 x 2.9GHz Xeon
> > RAM: 32GB
> > Disk: RAID 5 (6 x 512GB SSD)
> > MySQL: 5.0.95 x64
> > Engine: MyISAM
> > 


> MyISAM?  Or InnoDB?
> Lock_time perhaps applies only to table locks on MyISAM.
> 
> SHOW ENGINE InnoDB STATUS;
> You may find some deadlocks.
> 
> Is Replication involved?
> 
> Anyone doing an ALTER?



MyISAM, no replication involved, and nobody is altering the database. This 
happens whenever people run certain reports.
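Another quick check along the same lines (a sketch; these counters are
server-wide and cumulative since startup):

mysql> SHOW GLOBAL STATUS LIKE 'Table_locks%';

A Table_locks_waited value that climbs sharply while the reports run would
point at MyISAM table-lock contention.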


--Eric








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Which test suites are valid for --extern option in mysql-test-run?

2012-11-26 Thread Eric Bergen
Those tests are typically used by people developing the mysqld server.
The packages distributed have already had the tests run on them. I'm
not sure how much value it will provide unless you're modifying
mysqld.

I don't have a great answer but my best guess is that extern can't be
used with tests that start or stop mysqld servers like the replication
tests.

On Sat, Nov 24, 2012 at 12:16 AM, Tianyin Xu  wrote:
> Hi, all,
>
> I wanna test my running MySQL server. According to the README in the
> "mysql-test" directory, "If you want to use an already running MySQL server
> for specific tests, use the --extern option to mysql-test-run".
>
> For example, mysql-test-run --extern alias
>
> But the README also pointed out that "some tests cannot run with an
> external server".
>
> My question is how to know which test suites in "t" directory are valid for
> --extern option? Without knowing this, I cannot judge whether a test fails
> is because it's not supported or "real" problems.
>
> Thanks a lot!
> Tianyin
>
> --
> Tianyin XU,
> http://cseweb.ucsd.edu/~tixu/



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Covering Index without the data file (myisam)

2012-11-24 Thread Eric Bergen
MyISAM can't do this but innodb can. If you change to an innodb table
and define your index as the primary key then row data is clustered
with the primary key. This means there is no additional storage
overhead for the primary key because it is just the row data. This
will break down if you define secondary indexes though because the
primary key is copied for each secondary index.

http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html
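A minimal sketch of the idea (table and column names are invented): with
InnoDB the primary key index is the row storage, so there is no separate
data file to keep.

CREATE TABLE pairs (
  a INT NOT NULL,
  b INT NOT NULL,
  PRIMARY KEY (a, b),       -- the rows live in this index
  KEY reverse_idx (b, a)    -- secondary index; InnoDB appends any missing PK columns to it
) ENGINE=InnoDB;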

On Sat, Nov 24, 2012 at 1:15 PM, Reindl Harald  wrote:
>
>
> Am 24.11.2012 22:02, schrieb Hank:
>> Hello everyone,
>>
>>  I know this is a longshot, but is there any way to eliminate the MYD
>> file for a table that has a full covering index? The index is larger
>> than the datafile, since it contains all the records in the datafile,
>> plus a second reverse index.  It seems redundant to have to also store
>> a MYD file
>
> simple answer: NO
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Why configuration directives are all case sensitive?

2012-11-24 Thread Eric Bergen
Yes, the missing piece of the puzzle is that configuration files are
turned into command line arguments internally. Programs will parse
configuration files and place them at the beginning of the array for
command line arguments. They are made case sensitive because they are
turned into command line arguments. So the basic process is read all
the configuration files in order of precedence into the beginning of
the command line array. Then send this array to the command line
handling code. This is also how argument precedence works. As the
array is walked through options will override previous options so
things passed on the command line are at the end and override options
in the config file.
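A small illustration of that precedence, with invented values: options read
from my.cnf land at the front of the internal argument array, and anything
given on the real command line comes after them and wins.

# /etc/my.cnf
[mysqld]
port = 3306

$ mysqld --port=3307    # the server ends up listening on 3307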

On Fri, Nov 23, 2012 at 10:08 PM, Tianyin Xu  wrote:
> Dear Eric,
>
> Thanks a lot for the explanation of argument directives! The concerns are
> very considerate.
>
> Actually, what I'm curious about is the configuration directives in the
> configuration file, i.e., my.cnf. To my experience, MySQL is the very few
> software who treats these directives in a case sensitive manner.
>
> The directives in the configuration file is different from the arguments,
> because we don't have something like --port and -P. So, a directive like
> "Port" clearly indicates that the user specifies "port" (exactly as you
> said). And this's the reason that most apps do not differentiate them.
>
> Could you please explain about the directives in the configuration file?
>
> Thanks a lot!
>
> Happy Thanksgiving!!
> Tianyin
>
>
>
> On Fri, Nov 23, 2012 at 4:20 PM, Eric Bergen  wrote:
>>
>> Anger and OS religious arguments aside, the real answer is that this is just how
>> the option parsing code works. It doesn't always have to make sense.
>> There are short and long args to programs. For example on the mysql
>> client there is --port or -P and --password or -p. The short options
>> have to be case sensitive because -P and -p mean different things. The
>> short options are case sensitive so the long options may as well be.
>> It keeps things simpler. Who wants to write --Port when --port means
>> not hitting the shift key?
>>
>> There are a few exceptions to this. The option comparison treats _ and
>> - as the same. I realize that isn't case but it just shows the
>> matching isn't exact. So --show_warnings is valid. On the other side
>> of the equal sign comparisons for true, on, false, and off are done
>> case insensitive. So --show_warnings=FaLse is valid but
>> --show_warningS=TruE isn't.
>>
>> If you want to be even more confused consider that mysql allows
>> partial argument names. You don't have to type out the full long arg
>> as long as you type enough that it only matches one option. For
>> example mysql --so is enough to mean socket but mysql --s isn't
>> because it can't be distinguished from 'show' variables. This gets
>> confusing with things like b. mysql --b is batch mode. So is mysql -B
>> but mysql -b is no beep. Confused yet?
>>
>> On Sun, Nov 18, 2012 at 6:42 PM, Tianyin Xu  wrote:
>> > On Sun, Nov 18, 2012 at 6:13 PM, Reindl Harald
>> > wrote:
>> >
>> >>
>> >>
>> >> Am 19.11.2012 02:07, schrieb Tianyin Xu:
>> >> > You are saying as long as admins are careful, there's no
>> >> misconfiguration?
>> >> > But why misconfigurations are so pervasive?
>> >> > Simply because the admins are not careful enough?
>> >>
>> >> yes
>> >>
>> >>
>> > That means not only I'm dummy, and that's means you should take care the
>> > system configuration design if many people are careless.
>> >
>> >
>> >
>> >> > I apologize for my lack of respect. I don't know what's your stuff,
>> >> > but
>> >> > I guess they'll be more popular if you make them more friendly.
>> >>
>> >> it does not need to be more popular
>> >> it is better not to be too popular but working clean and safe
>> >>
>> >> careless working these days means usually also not care
>> >> about security which is not acceptable htese days and i
>> >> know a lot of crap out there which is more popluar like
>> >> my work but with crappy quality and terrible insecure
>> >>
>> >> see all this CMS sytems out there writing hundrets of
>> >> warnings each request with error_reporting E_STRICT
>> >> while my whole source code runs clean i know who is right
>> >>
>

Re: Why configuration directives are all case sensitive?

2012-11-23 Thread Eric Bergen
Anger and OS religious arguments aside, the real answer is that this is just how
the option parsing code works. It doesn't always have to make sense.
There are short and long args to programs. For example on the mysql
client there is --port or -P and --password or -p. The short options
have to be case sensitive because -P and -p mean different things. The
short options are case sensitive so the long options may as well be.
It keeps things simpler. Who wants to write --Port when --port means
not hitting the shift key?

There are a few exceptions to this. The option comparison treats _ and
- as the same. I realize that isn't case but it just shows the
matching isn't exact. So --show_warnings is valid. On the other side
of the equal sign comparisons for true, on, false, and off are done
case insensitive. So --show_warnings=FaLse is valid but
--show_warningS=TruE isn't.

If you want to be even more confused consider that mysql allows
partial argument names. You don't have to type out the full long arg
as long as you type enough that it only matches one option. For
example mysql --so is enough to mean socket but mysql --s isn't
because it can't be distinguished from 'show' variables. This gets
confusing with things like b. mysql --b is batch mode. So is mysql -B
but mysql -b is no beep. Confused yet?

On Sun, Nov 18, 2012 at 6:42 PM, Tianyin Xu  wrote:
> On Sun, Nov 18, 2012 at 6:13 PM, Reindl Harald wrote:
>
>>
>>
>> Am 19.11.2012 02:07, schrieb Tianyin Xu:
>> > You are saying as long as admins are careful, there's no
>> misconfiguration?
>> > But why misconfigurations are so pervasive?
>> > Simply because the admins are not careful enough?
>>
>> yes
>>
>>
> That means not only I'm dummy, and that's means you should take care the
> system configuration design if many people are careless.
>
>
>
>> > I apologize for my lack of respect. I don't know what's your stuff, but
>> > I guess they'll be more popular if you make them more friendly.
>>
>> it does not need to be more popular
>> it is better not to be too popular but working clean and safe
>>
>> careless working these days means usually also not care
>> about security which is not acceptable htese days and i
>> know a lot of crap out there which is more popluar like
>> my work but with crappy quality and terrible insecure
>>
>> see all this CMS sytems out there writing hundrets of
>> warnings each request with error_reporting E_STRICT
>> while my whole source code runs clean i know who is right
>>
>> really:
>> if you find it useful to complain why a configuration is
>> case-sensitive instead accept it and correct your fault
>> you are doing the wrong job
>>
>>
> I'm complaining nothing. I just curious about the configuration and want to
> know you developers' thinking. I apologize if I gave you the impression of
> complaining by asking questions.
>
> Basically, I'm new to MySQL and find MySQL really take care about lots of
> things to give users an easy job. For example, the unit, the enumeration
> options, all are case insensitive -- "512K" and "512k" means the same size,
> "MIXED" and "mixed" means the same option, etc. Having such impression,
> when MySQL tells me 'Port' is unknown, it did take me some time to figure
> it out. Maybe simply because all the other servers I used like PostgreSQL,
> httpd, etc are case insensitive. That's the whole story, and that's why I
> ask on the forum, being curious about the reason.
>
> It's fine that you told me it's simply because you developers assume nobody
> "came to the idea write options
> not EXACTLY like they are in the documentation", so you simply do not want
> to do it. But I do not buy this, because MySQL developers do take care a
> lot of things (as unit and options I mentioned).
>
> T
>
>
> --
> Tianyin XU,
> http://cseweb.ucsd.edu/~tixu/



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: get a partial dump

2012-10-23 Thread Eric Bergen
mysqldump has a --where argument that allows you to pass in a where clause.
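For example (database, table and column names are hypothetical):

$ mysqldump --where="customer_id = 42" shopdb orders > partial.sql

Each table needs its own run or its own --where condition, since mysqldump
does not follow foreign keys to pull in related rows.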

On Thursday, October 18, 2012, Stefan Kuhn wrote:

> Hi everybody,
> I want to get insert statements from a mysql database for a number of rows
> of
> a table, specified by a where clause. I also need the inserts for the
> datasets linked via foreign keys in other tables. So I need a sort of
> partial
> dump, a bit like mysqldump, but restricted to a (small) set of data. This
> needs to be done in a Java program, using mysql via jdbc.
> Does anybody know an easy way to do this? Of course I could build the
> statements in the java code, but I thought mysql might offer that or at
> least
> parts of it. Does anybody have an idea?
> Thanks,
> Stefan
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>
>

-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net


Re: Can I measure the use of index?

2012-10-15 Thread Eric Bergen
For the record mariadb also has table and index statistics. Including
statistics on temporary tables.

On Mon, Oct 15, 2012 at 8:34 PM, Lixun Peng  wrote:
> Hi,
>
> If you are using Percona Server, you can use this query:
>
> SELECT DISTINCT s.table_schema,
> s.table_name,
> s.index_name
> FROM   information_schema.statistics `s`
>LEFT JOIN information_schema.index_statistics indxs
>  ON ( s.table_schema = indxs.table_schema
>   AND s.table_name = indxs.table_name
>   AND s.index_name = indxs.index_name )
> WHERE  indxs.table_schema IS NULL
>AND s.index_name NOT IN ( 'PRIMARY' ) ;
>
> It will display all indexes that not use.
>
> or this query:
>
> SELECT table_name,
>index_name,
>SUM(rows_read)
> FROM   information_schema.index_statistics
> GROUP  BY table_name,
>   index_name;
>
> you can get the all indexes are using.
>
> On Tue, Oct 16, 2012 at 12:44 AM, Perrin Harkins  wrote:
>
>> On Mon, Oct 15, 2012 at 11:24 AM, Carlos Eduardo Caldi
>>  wrote:
>> > Somebody knows how can I log or measure the index use ?
>>
>> http://www.percona.com/doc/percona-toolkit/2.1/pt-index-usage.html
>>
>> - Perrin
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql
>>
>>
>
>
> --
> Senior MySQL Developer @ Taobao.com
>
> Mobile Phone: +86 18658156856 (Hangzhou)
> Gtalk: penglixun(at)gmail.com
> Twitter: http://www.twitter.com/plinux
> Blog: http://www.penglixun.com



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Managing ODBC on Linux? What Happened to myodbc3i?

2012-04-17 Thread Robinson, Eric
We usually manage ODBC data sources on Linux using the myodbc3i utility that 
came with mysql-connector-odbc-3.51.30-1.rhel5.i386.rpm. When we upgraded to 
mysql-connector-odbc-5.1.10-1.rhel5.i386.rpm, it removed the myodbc3i utility. 
What GUI tool is now recommended for managing ODBC connections on Linux?

--
Eric Robinson











Re: Replication rings/maatkit (was: HA & Scalability w MySQL + SAN + VMWare)

2012-04-02 Thread Eric Bergen
I suggest buying a copy of High Performance MySQL 3rd edition that
just came out. It has a whole chapter on HA that answers a lot of
these questions.

On Mon, Apr 2, 2012 at 6:14 PM, Wes Modes  wrote:
>
>> Replication rings are possible but you must design your application to
>> take special care to NOT update the same row in multiple nodes of the
>> ring at the same time. This is even harder to design and code for than
>> splitting writes/reads to master/slaves.
>>
>> Also the loss of one node of a replication ring is not as easy to
>> recover from as simply promoting one slave to become the new master of
>> a replication tree (demoting the recovered former-master to become yet
>> another slave) as there may be pending events in the relay logs of the
>> lost node that have not yet been relayed to the downstream node.
>>
>> I may not have every answer, but I have seen nearly every kind of
>> failure.  Everyone else is encouraged to add their views to the
>> discussion.
>>
>
> Has anyone used maatkit or Percona to setup circular replication?  How
> does it affect this system's reliability and robustness?  Do the tools
> help to deal with fail over?
>
> W.
>
> --
> Wes Modes
> Systems Designer, Developer, and Administrator
> University Library ITS
> University of California, Santa Cruz
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: 4 minute slow on select count(*) from table - myisam type

2011-10-03 Thread Eric Bergen
Can you run show processlist in another connection while the select
count(*) query is running and say what the state column is?

On Mon, Oct 3, 2011 at 7:00 AM, Joey L  wrote:
> this is not a real query on the site - it is just a way i am measuring
> performance on mysql - I do not know if it is such a great way to test.
> Looking for a better way to get a performance read on my site...do you have
> any ?? besides just viewing pages on it.
> thanks
> mjh
>
>
> On Mon, Oct 3, 2011 at 9:58 AM, Andrés Tello  wrote:
>
>> have you tried
>>
>> select count(yourindex) instead of select count(*) ?
>>
>>
>> On Mon, Oct 3, 2011 at 7:53 AM, Joey L  wrote:
>>
>>> Thanks for the input -
>>> 1. I will wait 48 hours and see what happens.
>>> 2. can you tell me what are some performance tests I can do to help me
>>> better tune my server ?
>>> 3. I am concerned about this table : | w6h8a_sh404sef_urls
>>> |
>>> MyISAM |      10 | Dynamic    | 8908402 |            174 |  1551178184 |
>>>  281474976710655 |   2410850304 |         0 |        8908777 | 2011-09-22
>>> 11:16:03 | 2011-10-02 21:17:20 | 2011-10-02 10:12:04 | utf8_general_ci   |
>>>  NULL |                |                                   |
>>> what can I do to make it run faster - i did not write the code...but need
>>> to
>>> optimize server to handle this table when it gets larger.  It is used for
>>> url re-writes - so it has a lot of urls.
>>> thanks
>>> mjh
>>>
>>> On Mon, Oct 3, 2011 at 12:38 AM, Bruce Ferrell >> >wrote:
>>>
>>> >
>>> > The meaning is:
>>> >
>>> > increase max_connections
>>> > reduce wait_timeout
>>> > -- 28800 is wait 8 hours before closing out dead connections
>>> > same for interactive_timeout
>>> >
>>> >
>>> > increase key_buffer_size (> 7.8G) increase join_buffer_size
>>> > -- This keeps mysql from having to run to disk constantly for keys
>>> > -- Key buffer size / total MyISAM indexes: 256.0M/7.8G
>>> > -- You have a key buffer of 256M and 7.8G of keys
>>> >
>>> > join_buffer_size (> 128.0K, or always use indexes with joins)
>>> > Joins performed without indexes: 23576 of 744k queries.
>>> > -- You probably want to look at the slow query log.  Generalize the
>>> queries
>>> > and the do an explain on the query.  I have seen instances where a query
>>> I
>>> > thought was using an index wasn't and I had to re-write... with help
>>> from
>>> > this list :-)  Thanks gang!
>>> >
>>> >
>>> > increase tmp_table_size (> 16M)
>>> > increase max_heap_table_size (> 16M)
>>> > -- When making adjustments, make tmp_table_size/max_heap_table_size
>>> equal
>>> >
>>> > increase table_cache ( > 1k )
>>> > -- Table cache hit rate: 7% (1K open / 14K opened)
>>> > -- Increase table_cache gradually to avoid file descriptor limits
>>> >
>>> > All of the aside, you need to let this run for at least 24 hours. I
>>> > prefer 48 hours.  The first line says mysql has only been running 9
>>> > hours.   You can reset the timeouts interactivly by entering at the
>>> > mysql prompt:
>>> >
>>> > set global wait_timeout=
>>> >
>>> > You can do the same for the interactive_timeout.
>>> >
>>> > Setting these values too low will cause long running queries to abort
>>> >
>>> >
>>> > On 10/02/2011 07:02 PM, Joey L wrote:
>>> > > Variables to adjust:
>>> > > >     max_connections (> 100)
>>> > > >     wait_timeout (< 28800)
>>> > > >     interactive_timeout (< 28800)
>>> > > >     key_buffer_size (> 7.8G)
>>> > > >     join_buffer_size (> 128.0K, or always use indexes with joins)
>>> > > >     tmp_table_size (> 16M)
>>> > > >     max_heap_table_size (> 16M)
>>> > > >     table_cache (> 1024)
>>> >
>>> >
>>> > --
>>> > MySQL General Mailing List
>>> > For list archives: http://lists.mysql.com/mysql
>>> > To unsubscribe:    http://lists.mysql.com/mysql?unsub=mjh2...@gmail.com
>>> >
>>> >
>>>
>>
>>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Restore only one database or one table

2011-05-19 Thread Eric Bergen
The mysql command line has the -o option to only execute queries for
the default database. This can be used to restore one database from a
dump file that contains many. For specific tables you can setup a
restore user that only has permissions on the tables you want to
restore then use the -f flag to continue on error. Only use this in
emergencies though.
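A sketch of both variants, with invented database, user and file names:

# Restore only "shopdb" from a dump that contains many databases
$ mysql --one-database shopdb < full_dump.sql

# Restore specific tables: a user granted rights only on those tables,
# plus --force so statements against other tables fail and are skipped
$ mysql --force -u restore_user -p shopdb < full_dump.sql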

On Thursday, May 19, 2011, Adarsh Sharma  wrote:
> Johan De Meersman wrote:
>
> - Original Message -
>
>
> From: "Suresh Kuna" 
>
> Try to take a tab separated dump, so you can restore what ever you
> want in terms of tables or databases.
>
>
>
> Uhh. I'm a bit fuzzy today, but I really don't see how a tab-separated dump 
> will help split off tables or databases :-)
>
>
> To answer the original question, though; the technically accurate answer is 
> "yes, you can". It's made "easy" because mysqldump conveniently dumps 
> database-per database and table-per table. It's a bugger to do, however, 
> because if you take a monolithic dump you need to open the whole dumpfile in 
> a text editor and copy the data you want to another file or straight to the 
> MySQL commandline. Good luck with your 250G backup :-)
>
> You can use sed or awk to look for markers and split the file up that way. 
> You'd be much better off in the future to dump database-per-database, and if 
> you think you need it table-per-table. 's Not all that hard, just script to 
> loop over the output of show databases and show tables. Probably plenty of 
> scripts on the internet that do exactly that.
>
> Compressing your dumps is a good idea, too - the output is a text file, so 
> bzip2 will probably compress that a factor 10 or better. Simply use bzcat to 
> pipe the file back into the MySQL client to restore.
>
>
>
>
> That's pretty nice and what I expected to hear.
>
> I will let you know after some practical implementation.
>
> Thanks & best Regards,
> Adarsh Sharma
>
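A sketch of the per-database dump loop suggested above (assumes credentials
are available, e.g. via ~/.my.cnf, and skips information_schema):

$ for db in $(mysql -N -B -e 'SHOW DATABASES' | grep -v '^information_schema$'); do
      mysqldump "$db" | bzip2 > "$db.sql.bz2"
  done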

-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: InnoDB and rsync

2011-01-31 Thread Eric Bergen
I skimmed over this thread and I think I can help clarify the innodb,
rsync, and lvm situation.

The basic issue with just running rsync on the files under a running
mysqld is that rsync copies different parts of the files at different
points in time. mysqld can keep writing to regions of a file before and
after rsync has read them, so the restored copy mixes old and new state
and can be corrupted. This is the worst case scenario.

The next best thing is to run a FLUSH TABLES WITH READ LOCK (global
read lock) and maintain
the lock while the rsync is running. This works fine for myisam tables
because all file activity to
myisam tables is stopped while a global read lock is held.

This isn't guaranteed to work with innodb. Internally to mysql flush
tables with read lock only stops
queries from acquiring write locks which let them modify tables. This
won't make innodb hold still
for a few different reasons.

First a select query in innodb can actually modify data files. A
select on a page with unmerged
records from the insert buffer will cause innodb to merge those
records before making the page
available for selects. This causes some disk i/o. If this happens
while rsync is running the resulting
backup can be corrupted. So even while holding a global read lock and
only running selects innodb
can write data.

The master thread(s) perform background tasks such as flushing dirty
pages, merging the insert buffer
and purging old records whenever innodb feels like there is spare i/o
capacity. These threads don't
know how to hold still during a global read lock and can corrupt a
backup if it were taken with rsync.

The safest way to create a backup without using something like
XtraBackup is to get a snapshot at
the filesystem level or below that at the block device level. This is
effectively what LVM does. When
you create a LVM snapshot it freezes that device at a single point in
time. When you restore the backup
innodb goes through its recovery procedure as if the power went out.

Some possible solutions to this were discussed on the internals list a
few years ago. I'm not sure what
has been implemented since then. The list thread is here:
http://lists.mysql.com/internals/35527
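A rough sketch of that snapshot approach, with invented volume names, sizes
and paths:

$ lvcreate --snapshot --size 5G --name mysql_snap /dev/vg0/mysql_data
$ mount /dev/vg0/mysql_snap /mnt/mysql_snap
$ rsync -a /mnt/mysql_snap/ backuphost:/backups/mysql/
$ umount /mnt/mysql_snap
$ lvremove -f /dev/vg0/mysql_snap

On restore, InnoDB runs crash recovery on the copied files, exactly as
described above.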

-Eric

On Fri, Jan 28, 2011 at 1:59 PM, Robinson, Eric  wrote:
>> * flush tables
>> * rsync while mysqld is running
>> * stop mysqld
>> * second rsync
>>
>
> Unless we can verify 100% that there is a safe way to do it without
> shutting down MySQL, then I'm sure the approach you described above is
> the one we will end up with. Thanks for your input.
>
> --
> Eric Robinson
>
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-28 Thread Robinson, Eric
> * flush tables
> * rsync while mysqld is running
> * stop mysqld
> * second rsync
> 

Unless we can verify 100% that there is a safe way to do it without
shutting down MySQL, then I'm sure the approach you described above is
the one we will end up with. Thanks for your input.
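A sketch of that sequence, with hypothetical paths and service commands:

$ mysql -e 'FLUSH TABLES'
$ rsync -a /var/lib/mysql/ backuphost:/backups/mysql/   # long first pass, mysqld still running
$ /etc/init.d/mysql stop
$ rsync -a /var/lib/mysql/ backuphost:/backups/mysql/   # short second pass on a quiet datadir
$ /etc/init.d/mysql start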

--
Eric Robinson




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-28 Thread Robinson, Eric
> And you will build your business on a hotcopy with external 
> tools because you do not trust replication? laughable!
> 
> Do what you want, but dont come back and cry if all goes down 
> You were told in which way you can use rsync with minimum 
> downtime or that replication can be used to stop only slaves 
> for a short time and you are acting like a child "mama i will 
> do what i said the whole time"


Our current model has been working well since 2006. We will be careful
to verify the reliability of any proposed changes. 

Have a great day!

--
Eric Robinson





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-28 Thread Robinson, Eric
> In one extreme instance, having a few terabytes of data 
> across several instances (on distinct hosts), I was required 
> to do a full-refactoring data migration with an absolute 
> limitation on allowable downtime.
> Among the technique which I used (and I can't take credit for this
> one) was to use rsync on the live server for innodb files 
> (this phase took a very long time, but did not interfere with 
> operations). The result of this phase was, as you would 
> expect, a set a seriously broken files which were notheless 
> very similar to the correct files.
> When that phase was complete, I shut the server down and did 
> another rsync.  It required perhaps a minute or 2, but the 
> result was 100% clean innodb data files which satisfied my 
> downtime limitations.
> 
>  FLUSH TABLES WITH READ LOCK might suffice if all 
> transactions are completed/rolled-back but I would stil 
> advise that you scan SHOW ENGINE INNODB STATUS but I would 
> carefully experiment with that.
> 

You just described almost the exact procedure that I described at the
beginning of this thread, except I use MyISAM so my question was whether
the same technique could work with InnoDB. It sounds like it very well
could if combined with SHOW ENGINE INNODB STATUS. I will definitely test
it to be sure.

> As for maat-kit, don't let the disclaimers discourage you.  
> If you read the disclaimers carefully on any product (at 
> least those released with the benefit(?) of legal advice), 
> you would have a hard time trusting any of it with your 
> enterprise.  The maat-kit team (and Baron Schwartz in 
> particular) and quite simply the *best* MySQL engineering 
> team out there, with the possible exception of the vendor.  I 
> would not hesitate to trust them with my data.
> 

I will definitely look at it again. Thanks.

--Eric





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-28 Thread Robinson, Eric
> You need to quiesce the InnoDb background threads. One 
> technique is mentioned here:
> http://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html


Just refreshing this topic a bit. Can anyone confirm that FLUSH TABLES
WITH READ LOCK is sufficient to quiesce the InnoDB background threads
per Shawn's message above? 

--
Eric Robinson




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-26 Thread Robinson, Eric
 

> > You need to quiesce the InnoDb background threads. One technique is
> > mentioned here:
> > http://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html
> >
> > Look for the section talking about "clean" backups.
>
> Now we're talkin. I'll check it out.
>
>


I read that section but it is not at all clear (1) how one
quiesces the
InnoDB background threads, or (2) if there is a way to keep them
quiesced while the backup is in progress.



From what I see there:


You can make a clean backup .ibd file using the following method:

1.  Stop all activity from the mysqld server and commit all transactions.

2.  Wait until SHOW ENGINE INNODB STATUS shows that there are no active
transactions in the database, and the main thread status of InnoDB is
"Waiting for server activity". Then you can make a copy of the .ibd file.
 

> I would assume that "flush tables with read lock" would work 
> for 1. and then you wait for 2. to happen. Probably shouldn't 
> take very long, especially in the quiet moments.
 
 
 
Can anyone confirm that FLUSH TABLES WITH READ LOCK is sufficient to
"quiesce" the InnoDB background threads?  When Googling this, I'm seeing
posts saying that even after a flush & lock, InnoDB keeps making changes
to certain files (metadata?) and that's why it is never really safe
(according to them) to copy the files from a running instance of MySQL.
Call me persistent (or just annoyingly thick-headed) but I am not fully
satisfied with their explanations.
 
 
--Eric
 
 




RE: InnoDB and rsync

2011-01-25 Thread Robinson, Eric
> > You need to quiesce the InnoDb background threads. One technique is 
> > mentioned here:
> > http://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html
> > 
> > Look for the section talking about "clean" backups.
> 
> Now we're talkin. I'll check it out.
> 
> 

I read that section but it is not at all clear (1) how one quiesces the
InnoDB background threads, or (2) if there is a way to keep them
quiesced while the backup is in progress.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-25 Thread Robinson, Eric
> > I'm starting to worry that you may be right. I know FLUSH 
> TABLES WITH 
> > READ LOCK does not work as expected with InnoDB, but is 
> there really 
> > no way to put InnoDB into a state where all changes have 
> been flushed 
> > to disk and it is safe to rsync the directory? Is stopping 
> the service 
> > really the only way? (And even if I stop the service, is 
> rsync totally 
> > safe with InnoDB?)
> >
> 
> You need to quiesce the InnoDb background threads. One 
> technique is mentioned here:
> http://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html
> 
> Look for the section talking about "clean" backups.

Now we're talkin. I'll check it out.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-25 Thread Robinson, Eric
> On 1/25/2011 10:45, Robinson, Eric wrote:
> >>> There is a very good reason: it is the phenomenon of row 
> drift. The 
> >>> master and slave can appear to be in good sync, but often 
> it is not 
> >>> actually the case.
> >>
> >> ... sounds interesting; have you got any document explaining this 
> >> phenomenon? AFAIK, the things that (silently) break 
> replication are:
> >> - non-deterministic functions in statement-based replication
> >> - hand-made updates on the slave db
> >> is this enough to justify a *daily* resync?!
> >
> >
> > I'm definitely no expert on this. All I know is that we used to
> > frequently experience situations where queries to the slaves would
> > return different recordsets than the same queries to the 
> masters. Yet by
> > all other indications the servers were in sync. All the replication
> > threads were running and the row counts were identical, but 
> the data in
> > the rows was sometimes different. I asked about this in the 
> list and the
> > answers I got back were that the phenomenon was called row 
> drift and was
> > fairly well known and not always easy (or sometimes even 
> possible) to
> > eliminate because of bad programming practices in some off-the-shelf
> > applications. At that time, the consensus in the list was 
> that it was
> > not safe to trust replication slaves for backup purposes. 
> That's when I
> > came up with the idea of doing an rsync every night, which creates a
> > slave that is 100% reliable for using as a backup source and also
> > eliminates problems with row-drift. Since we started using that
> > technique, we don't get calls from users complaining that 
> their reports
> > are showing bogus totals and such.
> >
> 
> I suspect that your queries were not as deterministic as you thought 
> they were. Do you have a sample of a query that produced different 
> results between the master and the slave? We shouldn't need 
> the results, 
> just the query.
> 


Sorry, no. The software is a canned medical application so we cannot
easily inspect the queries that could have been causing the problem.
Even though we could capture them in various ways (sniffer, proxy, query
logs) it would not be easy to isolate the culprits out of the tens of
thousands issued every day. And it was a year or more ago. We have not
had the problem since we started rsyncing. :-)

 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-25 Thread Robinson, Eric
> 240 mysql-instances on 3 physical hosts?
> what crazy setup is this please?


Processors average 90% idle, peaks are low, iowait is low, the system is
not swapping, response time is good, and our users are happy all around
the country. What is crazy about that? 


> The whole world can work with replication-slaves and you are 
> the only one who installing an endless count of 
> mysql-services instead a hand of large instances 


I don't know how the rest of the world does it, but we have been doing
it like this since 2006 and it has worked great and we have never
regretted having multiple instances of mysql. In fact, it is really
great because we can maintain each customer's service individually, stop
and start mysql without affecting other customers, turn query logs on
and off for each customer, customize performance parameters, and so on.
I can maintain a customer's database right in the middle of a production
day and the other customers won't even notice! It has been great being
able to do all these things.  

> outside mysql would ever think of backup a running db-server
> 

Then you're just not Googling very well. :-)

--Eric



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-25 Thread Robinson, Eric
> 240 mysql-servers?
> why there is no consolidation?

I said 240+ mysql *instances*, not servers. It's actually just 3
physical servers (not counting standby cluster nodes).

> > just need a way to make the same thing work with InnoDB.
> 
> this is simply impossible

That is very unfortunate.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-25 Thread Robinson, Eric
> nobody cares if they are binary replica as long 
> as the data is consistent and ident

Like I said, I'm no expert on this, but my approach seems like the only
way to be 100% sure that the data on the slave is in fact
consistent and identical to the data on the master.

> so start another slave on the machine with his own socket for 
> backups

You say that like it doesn't mean a huge amount of additional work,
expense, and complexity. We currently have 240+ master MySQL instances
and are adding them at a rate of several per week.

Based on everything you've said so far, I still prefer my solution. I
just need a way to make the same thing work with InnoDB.

--
Eric Robinson



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-25 Thread Robinson, Eric
> > There is a very good reason: it is the phenomenon of row drift. The 
> > master and slave can appear to be in good sync, but often it is not 
> > actually the case.
> 
> ... sounds interesting; have you got any document explaining 
> this phenomenon? AFAIK, the things that (silently) break 
> replication are:
> - non-deterministic functions in statement-based replication
> - hand-made updates on the slave db
> is this enough to justify a *daily* resync?!


I'm definitely no expert on this. All I know is that we used to
frequently experience situations where queries to the slaves would
return different recordsets than the same queries to the masters. Yet by
all other indications the servers were in sync. All the replication
threads were running and the row counts were identical, but the data in
the rows was sometimes different. I asked about this in the list and the
answers I got back were that the phenomenon was called row drift and was
fairly well known and not always easy (or sometimes even possible) to
eliminate because of bad programming practices in some off-the-shelf
applications. At that time, the consensus in the list was that it was
not safe to trust replication slaves for backup purposes. That's when I
came up with the idea of doing an rsync every night, which creates a
slave that is 100% reliable for using as a backup source and also
eliminates problems with row-drift. Since we started using that
technique, we don't get calls from users complaining that their reports
are showing bogus totals and such. 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-25 Thread Robinson, Eric
> Why don't you use a Maatkit solution like mk-checksum to 
> ensure that your slaves have identical data with the master?

I looked at Maatkit a year or so ago. It looked pretty interesting, but
then I started reading the disclaimers carefully and they scared the
bejeepers out of me. Warnings about data corruption and whatnot. I'll
check it out again. (I was actually looking for it this morning but
could not remember the name, so thanks for the reminder.)

--
Eric Robinson



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: InnoDB and rsync

2011-01-25 Thread Robinson, Eric
> your whole solution is crippled because why in the world are 
> you killing your slaves and reinit them without any reason daily?

There is a very good reason: it is the phenomenon of row drift. The
master and slave can appear to be in good sync, but often it is not
actually the case. For this reason, most people agree that it is not
safe to rely on the slave server as the source for your backups. My
solution efficiently corrects row drift and makes sure the slaves are
100% binary replicas of the masters, which can then be trusted as backup
sources. The whole thing is very fast and there is no downtime for
users, who can continue to work 24x7. I fail to see how this is
"crippled."

> why in the world do you not leave the slaves in peace and 
> backup them to another media because you can stop them as 
> long you want for a consistent backup and after starting the 
> last changes from the master are applied

See my comment above. (But also we cannot stop them as long as we want
because the slaves are used for running reports. Using my approach, each
slave is down for about 30 seconds. The masters are not brought down at
all.)

> If you understand how innodb works you will see that this is 
> not possible by design

I'm starting to worry that you may be right. I know FLUSH TABLES WITH
READ LOCK does not work as expected with InnoDB, but is there really no
way to put InnoDB into a state where all changes have been flushed to
disk and it is safe to rsync the directory? Is stopping the service
really the only way? (And even if I stop the service, is rsync totally
safe with InnoDB?)
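
For what it's worth, a minimal sketch of the stop-and-copy variant (my own
reading, not something lifted from the manual; paths are examples): do a slow
shutdown so InnoDB finishes its purge and flushing, then copy the whole
datadir, shared tablespace and log files included.

mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"   # make the next shutdown a full flush/purge
service mysql stop
rsync -a /var/lib/mysql/ slave:/var/lib/mysql/    # ibdata*, ib_logfile*, *.frm, *.ibd -- everything
service mysql start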

--
Eric Robinson




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



InnoDB and rsync

2011-01-24 Thread Robinson, Eric
Is there a way to safely backup an InnoDB database using rsync? 

Right now we have a very efficient and reliable way to backup 240+
separate instances of MySQL with MyISAM tables. The databases range in
size from .5GB to 16GB. During this time, users can still access the
system, so our customers can work 24x7. In the process, we also refresh
240+ slave instances with a perfect byte-for-byte replica of the master
databases. 

The whole thing takes about 30 minutes. 

Here's how we do it. 

Late at night when the number of users on the system is low, we do the
following for each of the 240+ instances of MySQL...

1. Shut down the slave and remove all replication-related log files.

2. Perform an rsync of the master's data directory to the slave. Users
may be making changes to tables during this rsync.

3. Issue a FLUSH TABLES WITH READ LOCK on the master followed by a RESET
MASTER.

4. Perform a second rsync of the data directory from the master to the
slave to copy any user changes that happened during step 2. This usually
completes in a few seconds, often less than 1. If any users were trying
to insert records at this exact moment, their application may appear to
pause very briefly.

5. Start the slave.

When I'm all done, I have 240+ slave servers in perfect sync with their
masters, each having a 100% identical binary copy of its master's
database. Since these copies are truly identical, they can be used for a
second layer of backup to other media.  

Like I said, the whole process takes about 30 minutes because the rsync
algorithm only copies the byte-level changes.
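
For reference, steps 1-5 boiled down to a rough per-instance sketch (host
names, paths and sockets are invented, and error handling is left out):

#!/bin/sh
SLAVE=slave-host                      # hypothetical slave host
DATADIR=/var/lib/mysql/customer42     # hypothetical per-instance datadir

# 1. shut down the slave instance and clear its old replication state
ssh $SLAVE "mysqladmin --socket=/var/run/customer42.sock shutdown; \
            rm -f $DATADIR/master.info $DATADIR/*relay*"

# 2. first rsync while the master keeps serving traffic; this moves the bulk
rsync -a --delete $DATADIR/ $SLAVE:$DATADIR/

# 3. in a mysql session on the master that stays open through step 4:
#        FLUSH TABLES WITH READ LOCK; RESET MASTER;
# 4. second rsync; only blocks changed since step 2 move, so it is quick
rsync -a --delete $DATADIR/ $SLAVE:$DATADIR/
#    then, back in the locked session: UNLOCK TABLES;

# 5. bring the slave instance back up
ssh $SLAVE "mysqld_safe --defaults-file=/etc/mysql/customer42.cnf &"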

IS THERE ANY WAY TO SET UP SOMETHING THIS EASY AND EFFICIENT USING
INNODB?

I've been reading about InnoDB hot copy and other approaches, but none
of them seem to work as well as the approach I have worked out with
MyISAM. Unfortunately, my software wants to force us to switch to
InnoDB, so I'm really stuck. If we have to switch to InnoDB and we
cannot come up with a method for doing fast, rsync-style backups, it
will probably mean huge, costly, and unnecessary changes to our
infrastructure.

Any help will be GREATLY appreciated.

--
Eric Robinson



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Is is possible to update a column based on a REGEXP on another column?

2011-01-22 Thread Eric Bergen
There isn't a built in way but you can try 
http://www.mysqludf.org/lib_mysqludf_preg/

I would go with the php/perl script if this is a one time thing.

-Eric
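
If the bracketed value always looks like the examples below, plain string
functions may be enough for a one-off update -- a sketch, with the table and
new column names made up since the post doesn't give them:

-- grab whatever sits between the last '[' and the following ']'
UPDATE user_table
   SET bracket_value = SUBSTRING_INDEX(SUBSTRING_INDEX(username, '[', -1), ']', 1)
 WHERE username LIKE '%[%]%';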

On Jan 21, 2011, at 11:31 AM, Phil  wrote:

> I have a table which contains a username column which may be constructed
> something like
> 
> somename[A] or [DDD]someothername
> 
> The A or DDD can be anything at all.
> 
> I've added a new column to the table to which I'd like to populate with the
> value within the square brackets.
> 
> I could write something in perl or php to run through each and update them
> but was wondering if there is a way to do this within mysql itself?  The
> regexp only returns a boolean so I can't see how to use that.
> 
> Regards
> 
> Phil
> 
> 
> -- 
> Distributed Computing stats
> http://stats.free-dc.org

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Innodb table space questions

2011-01-17 Thread Eric Bergen
reply inline

On Mon, Jan 17, 2011 at 9:30 PM, Angela liu  wrote:
> Folks :
>
> two questions:
>
> 1. can Innodb create per database table space , not per table table space?

No. The only available options are creating a global tablespace which
can be many files or a file per table.

> 2. can we store table on specific tablespace like Oracle or DB2 when creating 
> table?

You can only choose to store a table in its own tablespace or in the
global one.
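
A small sketch of the two choices (stock option and statements, nothing
instance-specific):

# my.cnf -- new InnoDB tables get their own .ibd file instead of going
# into the shared ibdata tablespace
[mysqld]
innodb_file_per_table

-- tables created (or rebuilt) after that setting is on get their own file
CREATE TABLE t (id INT PRIMARY KEY) ENGINE=InnoDB;
ALTER TABLE existing_table ENGINE=InnoDB;   -- rebuilds an existing table into its own .ibd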

> Many thanks.
>
>
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: export result from select statement

2011-01-08 Thread Eric Bergen
select into outfile is the correct way. What do you mean by doesn't
work? Does it give you an error?
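
If the goal is a real CSV rather than the default tab-separated output,
something along these lines usually works (column and table names are
placeholders) -- note the file is written on the *server* host, the MySQL
user needs the FILE privilege, and the statement fails if the file already
exists:

SELECT col1, col2, col3
  INTO OUTFILE '/tmp/result.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  FROM some_table
 WHERE some_condition;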

On Sat, Jan 8, 2011 at 3:04 PM, LAMP  wrote:
> Hi guys,
> I wonder how to store to csv or txt file result from SELECT query? not a
> whole table nor database. Just results from SELECT query.
> Usually I use MySQL Query Browser and Export feature, but in this case I
> don't have access with MySQL Query Browser. Though, I have a command line
> access.
> I found on one place something like
> #SELECT 2+2 into outfile '/path/to/specific/directory/test.csv';
> Though, it doesn't work?!?
>
> Thanks.
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: mysqldumpslow

2010-12-18 Thread Eric Bergen
I don't think so. mysqldumpslow parses the log file on the local server.
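
The usual workaround (sketched with a made-up host and paths) is to pull the
slow log over and run the script locally:

scp dbhost:/var/lib/mysql/slow-query.log /tmp/slow-query.log
mysqldumpslow -s t -t 20 /tmp/slow-query.log   # top 20 queries sorted by total time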

On Fri, Dec 17, 2010 at 3:01 AM, partha sarathy  wrote:
> Is it possible to run mysqldumpslow on a remote host?
>
> -Partha
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: dynamic alias?

2010-08-27 Thread Eric Bloomquist
Thanks, PREPARE certainly has some potential.

Unfortunately, when I try to use that sort of thing inside of a Cognos
report, it complains that "At least one expression in the Select clause
is missing the AS clause to make it a proper alias."

(Sorry I didn't include the Cognos context -- it didn't occur to me that
it'd be relevant.)

Thanks,
Eric
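
For the archives, roughly what the PREPARE route looks like -- table and
column names are placeholders, and whether Cognos will accept the result is
another matter entirely:

-- build the column alias at run time, then execute the assembled statement
SET @sql = CONCAT(
    'SELECT provider_name, facility_name, COUNT(*) AS `Appts on ',
    DATE_FORMAT(CURDATE() + INTERVAL 1 DAY, '%c/%e/%Y'),
    '` FROM appointments',
    ' WHERE appt_date = CURDATE() + INTERVAL 1 DAY',
    ' GROUP BY provider_name, facility_name');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;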


-Original Message-
From: Peter Brawley [mailto:peter.braw...@earthlink.net] 
Sent: Friday, August 27, 2010 12:57 PM
To: mysql@lists.mysql.com
Subject: Re: dynamic alias?

  On 8/27/2010 11:16 AM, Eric Bloomquist wrote:
> Hi all,
>
> I'm wondering if it's possible to have a dynamic alias in MySQL.  The
> result I'm looking for is essentially:
See the manual page for PREPARE.

PB

-
> +-----------------+--------------------------+--------------------+
> | Provider Name   | Facility Name            | Appts on 8/28/2010 | <==
> +-----------------+--------------------------+--------------------+
> | Mildred Ratched | Oregon State Hospital    |                 12 |
> | Henry Jekyll    | London Internal Medicine |                  3 |
> | ...             |                          |                    |
>
> Where "Appts on 8/28/2010" instead includes whatever tomorrow's date
is
> when the query is run.
>
> Is it possible to do something like this in a MySQL query?
>
> Thanks,
> Eric
>
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/mysql?unsub=eric_bloomqu...@cooley-dickinson.org


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



dynamic alias?

2010-08-27 Thread Eric Bloomquist
Hi all,

I'm wondering if it's possible to have a dynamic alias in MySQL.  The
result I'm looking for is essentially:

+-----------------+--------------------------+--------------------+
| Provider Name   | Facility Name            | Appts on 8/28/2010 | <==
+-----------------+--------------------------+--------------------+
| Mildred Ratched | Oregon State Hospital    |                 12 |
| Henry Jekyll    | London Internal Medicine |                  3 |
| ...             |                          |                    |

Where "Appts on 8/28/2010" instead includes whatever tomorrow's date is
when the query is run.

Is it possible to do something like this in a MySQL query?

Thanks,
Eric


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow ALTER TABLE on 70M row InnoDB table

2010-08-21 Thread Eric Bergen
Most alter table operations in 5.0 will rebuild the entire table. The
best thing to increase for alter table speed in innodb is the buffer
pool. For more details on how innodb handles alter table see
http://ebergen.net/wordpress/2007/05/07/how-alter-table-locks-tables-and-handles-transactions/
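
A hedged example of the knob in question -- the right number depends entirely
on how much RAM the box can spare, 4G is just a placeholder:

[mysqld]
# a bigger buffer pool keeps more of the table and its indexes in memory
# while ALTER TABLE rebuilds them
innodb_buffer_pool_size = 4G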

On Wednesday, August 18, 2010, Xn Nooby  wrote:
> From what I have read, ALTER TABLE to add an index causes the entire
> table to be duplicated, so wouldn't my ALTER TABLE command be
> duplicating the work done by the SELECT command?
>
>
>
> On Wed, Aug 18, 2010 at 4:50 PM, mos  wrote:
>> At 02:52 PM 8/18/2010, Xn Nooby wrote:
>>>
>>> Below is a generic version of the code I am trying.  It does copy the
>>> rows very quickly, but I will have to test to see how quickly the
>>> indices are built.  Is the below code what you were suggesting?  I had
>>> a little trouble dropping and later adding the primary index, but I
>>> think I got it figured out.
>>>
>>> Below I basically do this:
>>>  make the_table_clone from the the_table
>>>  drop the indices on the_table_clone
>>>  copy the row from the_table to the_table_clone
>>>  add the indices back to the_table_clone
>>>
>>> If this runs fast enough, I will then drop the_table, and rename
>>> the_table_clone to the_table
>>>
>>>
>>> USE the_database;
>>>
>>>
>>> DROP TABLE IF EXISTS the_table_clone;
>>>
>>> CREATE TABLE the_table_clone LIKE the_table;
>>
>> Or you can try something like:
>>
>> create table the_table_clone engine=innodb select * from the_table limit 0;
>> This will create the same table structure but not the indexes so you don't
>> have to drop the indexes below.
>>
>>
>>
>>> # drop minor indices on clone
>>>
>>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col2_col3;
>>>
>>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col4_col6;
>>>
>>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col5_col2;
>>>
>>>
>>> # drop primary index on clone
>>>
>>> ALTER TABLE the_table_clone CHANGE id id INT UNSIGNED;
>>
>> You still need the statement above to change the autoinc to integer if you
>> use my Create Table... statement above.
>>
>>
>>
>>> ALTER TABLE the_table_clone DROP PRIMARY KEY;
>>>
>>>
>>> # add 2 new columns to clone
>>>
>>> ALTER TABLE the_table_clone ADD flag_field int(11) DEFAULT '0', ADD
>>> price_amount float DEFAULT '0';
>>>
>>>
>>> # copy rows
>>>
>>> INSERT the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT
>>> 0,1000;
>>>
>>> #INSERT the_table_clone SELECT *, NULL, NULL FROM the_table;
>>
>> Why do you have two insert statements? If you are inserting a group of
>> records at a time then you need a limit statement on each, and increment the
>> offset by the number of rows that have been added.
>>
>> I would explicitly specify the column list for both the Insert and the
>> Select to make sure they match up. There is no point going through all this
>> if it inserts the data into the wrong columns!
>> Check the data before creating the indexes to make sure the same number of
>> rows have been copied over and the data is in the correct columns.
>>
>>
>>
>>> # Add back indices in one command (for max speed)
>>>
>>> ALTER TABLE the_table_clone \
>>>  ADD INDEX IX_the_table_on_col2_col3 (col2,col3),\
>>>  ADD INDEX IX_the_table_on_col4_col6 (col4,col6),\
>>>  ADD INDEX IX_the_table_on_col5_col2 (col5,col2),\
>>>  MODIFY id INT SIGNED AUTO_INCREMENT,\
>>>  ADD PRIMARY KEY(col1);
>>
>> Correct.
>>
>> Mike
>>
>>
>>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>

-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Recommended swap partition size

2010-04-19 Thread Eric Bergen
Google oom_adj and oom_score. You can control which process is most
likely to be killed.
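
For example (the exact knob depends on the kernel; oom_adj is the older
interface, oom_score_adj the newer one, and this assumes a single mysqld):

# make the running mysqld the last thing the OOM killer will pick
echo -17 > /proc/$(pidof mysqld)/oom_adj             # older kernels: -17 disables OOM kill
# echo -1000 > /proc/$(pidof mysqld)/oom_score_adj   # newer kernels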

On Mon, Apr 19, 2010 at 12:53 AM, Johan De Meersman  wrote:
>
>
> On Sun, Apr 18, 2010 at 9:04 PM, Eric Bergen  wrote:
>>
>> Usually I prefer to have linux kill processes rather than excessively
>> swapping. I've worked on machines before that have swapped so badly
>
> I guess you never had the OOM killer randomly shooting down your SSH daemon
> on a machine hundred of miles away, then :-)
>
>
> --
> Bier met grenadyn
> Is als mosterd by den wyn
> Sy die't drinkt, is eene kwezel
> Hy die't drinkt, is ras een ezel
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: mysqld_safe

2010-04-18 Thread Eric Bergen
It's distributed as part of mysql. The script is responsible for
restarting mysqld if it exits with a non zero return code such as when
it crashes.

On Mon, Mar 29, 2010 at 2:40 PM, Colin Streicher
 wrote:
> Yeah, its just a shell script that acts as a wrapper around the mysql 
> processes on debian systems(
> maybe others, I'm not sure)
> You can read it at 'less /usr/bin/mysqld_safe'
>
> Colin
>
> On March 29, 2010 11:51:36 am Glyn Astill wrote:
>> --- On Mon, 29/3/10, Brown, Charles  wrote:
>> > Hello All.  when I issued this
>> > command: ps -e | grep,  I noticed that mysqld_safe was
>> > up running in my system.
>> > My question is:  what is mysqld_safe and why was it
>> > running. Please help.
>> >
>> >
>> > [sp...@naxbmisq03 ~]$ ps -e | grep -i mysql
>> > 11989 ?        00:00:00 mysqld_safe
>> > 12025 ?        13:28:39 mysqld
>>
>> My understanding is that mysqld_safe is the process used to start mysqld
>>  and handle any runtime errors, crashes etc.
>>
>
> --
> Anyone who has had a bull by the tail knows five or six more things
> than someone who hasn't.
>                -- Mark Twain
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Multiple table engine

2010-04-18 Thread Eric Bergen
This can become a problem when using replication. For example if you do:

begin;
insert into innodb_table;
insert into myisam_table;
insert into innodb_table;
rollback;

The innodb rows won't be replicated but the myisam row will.  There is
more info at:
http://dev.mysql.com/doc/refman/5.0/en/innodb-and-mysql-replication.html

On Thu, Apr 8, 2010 at 4:02 AM, Jigal van Hemert  wrote:
> Tompkins Neil wrote:
>>
>> Just looking for some confirmation that under a single database - I assume
>> it is perfectly normal to have both MyISAM and InnoDB engines for
>> different
>> tables ?  Is there anything I need to be aware of ?
>
> In most case no problems. MySQL can mix engines without problems.
> Every engine uses it's own specific buffers, so if your database becomes big
> and memory becomes an issue (large buffers needed) it might be handy to use
> only one engine (so you can set the buffers for the other engine(s) to a
> very low number).
> On a daily basis I use databases with mixed MyISAM and InnoDB tables.
>
> --
> Jigal van Hemert.
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Recommended swap partition size

2010-04-18 Thread Eric Bergen
The impact of swap activity on performance is dependent on the rate at
which things are being swapped and the speed of swapping.  A few pages
per second probably won't kill things but in this case it was swapping
hundreds of pages per second which killed performance. Disks are much
slower than ram.

In my environment mysqld_safe will fail over the cluster if it detects
mysqld has crashed, so I prefer the quick crash and failover to the
long period of slow response time. Many operators prefer the long slow
response time; I don't, but it's a religious debate.

On Sun, Apr 18, 2010 at 12:13 PM, Rob Wultsch  wrote:
> On Sun, Apr 18, 2010 at 12:04 PM, Eric Bergen  wrote:
>> Linux will normally swap out a few pages of rarely used memory so it's
>> a good idea to have some swap around. 2G seems excessive though.
>> Usually I prefer to have linux kill processes rather than excessively
>> swapping. I've worked on machines before that have swapped so badly
>> that it took minutes just to ssh to them. This is effectively a
>> failure scenario that can last for a lot longer than it takes to
>> restart/failover mysqld. For apache it means the clients will see
>> errors until the load balancer health check drops the server out of
>> rotation. The best solution in all cases is to keep an eye on swap
>> in/out and memory usage so neither the crash nor the excessive
>> swapping becomes a problem.
>>
>
>
> Umm, you were probably horribly over-utilized on IO. Swapping by itself
> will not kill performance. I have some boxes where mysql has leaked a
> metric crap ton of memory and swapping is ok. The leaked memory is
> swapped out and sits out in swap. Every now and again I create more
> swap to keep the server happy.
>
> Swapping is often preferable to crash with unplanned downtime.
>
> Note that innodb_flush_method can implact this...
>
>
> --
> Rob Wultsch
> wult...@gmail.com
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Recommended swap partition size

2010-04-18 Thread Eric Bergen
Linux will normally swap out a few pages of rarely used memory so it's
a good idea to have some swap around. 2G seems excessive though.
Usually I prefer to have linux kill processes rather than excessively
swapping. I've worked on machines before that have swapped so badly
that it took minutes just to ssh to them. This is effectively a
failure scenario that can last for a lot longer than it takes to
restart/failover mysqld. For apache it means the clients will see
errors until the load balancer health check drops the server out of
rotation. The best solution in all cases is to keep an eye on swap
in/out and memory usage so neither the crash nor the excessive
swapping becomes a problem.

On Wed, Apr 14, 2010 at 3:06 AM, Glyn Astill  wrote:
> --- On Wed, 14/4/10, Dan Nelson  wrote:
>
>> Hammerman said:
>> > My organization has a dedicated MySQL server. The
>> system has 32Gb of
>> > memory, and is running CentOS 5.3.  The default
>> engine will be InnoDB.
>> > Does anyone know how much space should be dedicated to
>> swap?
>>
>> I say zero swap, or if for some reason you NEED swap (for
>> crashdumps maybe,
>> but I didn't think Linux supported that), no more than
>> 2GB.  With that much
>> RAM, you don't ever want to be in the state where the OS
>> decides to page out
>> 8GB of memory (for example) to swap.  We have a few
>> Oracle servers with
>> between 32 and 48 GB of memory and they all live just fine
>> without swap.
>>
>
> But surely better to have a server that is paging out and has slowed to a 
> crawl than one where the oom killer starts killing off your processes, with 
> no swap I'd be turning overcommit off.
>
>
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: How to corrupt a database please???

2010-04-18 Thread Eric Bergen
A couple of good tests are:

1. Corrupt a relay log. For this you can stop the sql thread, cat
/dev/urandom over the newest relay log, start the sql thread and watch
it fail.
2. Change the innodb_log_file_size in my.cnf without going through the
proper procedure to remove the old log files. In 5.0 this will cause
incorrect information in frm file errors for queries which will take a
little bit of work to track down.
3. Cat some random data over MyISAM files and run CHECK TABLE so
mysql marks them as crashed.
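
Test 1 spelled out a little, with an invented relay log name -- obviously only
on a throwaway slave:

mysql -e "STOP SLAVE SQL_THREAD;"
# overwrite the newest relay log with 1MB of junk (file name is an example)
dd if=/dev/urandom of=/var/lib/mysql/relay-bin.000007 bs=1024 count=1024 conv=notrunc
mysql -e "START SLAVE SQL_THREAD;"
mysql -e "SHOW SLAVE STATUS\G" | grep -i error    # the relay log read failure shows up here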

On Sun, Apr 18, 2010 at 11:13 AM, Rob Wultsch  wrote:
> On Sun, Apr 18, 2010 at 11:07 AM, Jim Lyons  wrote:
>> You can remove the innodb logs and/or the innodb data file.  You can also
>> remove some of the individual .idb files (if you're using file-per-table
>> option).
>>
>
> He originally asked about how to provide a training excise about
> repairing a db. How the hell do you repair from not having data files?
> For that matter the recovery from lacking log files (and assuming a
> crashed state) is imho ugly as hell.
>
> --
> Rob Wultsch
> wult...@gmail.com
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)

2009-12-09 Thread Robinson, Eric
Hi Baron,

> I'm the primary author of Maatkit.  

Awkward... :-)

> What can I say -- you could go buy a commercial off-the-shelf tool 
> and believe the song and dance they feed you about the tool 
> being perfect.  

There's not a single commercial software solution in our toolbox. We're
big fans of CentOS, LVS, heartbeat, ldirectord, tomcat, MySQL, Xen,
pureFTP, and more. We've been happy with the performance and reliability
of all of our FOSS tools. I'm definitely not a Kool-aid drinker when it
comes to commercial product marketing.

> At least with Maatkit, you get transparency.  We make a concerted 
> effort to update the RISKS section of each tool with each release, so
there 
> is full disclosure.

Fair enough, but I still found the warnings a little too scary. A more
complete explanation of the exact nature of the bugs and the exact
circumstances under which I should be concerned about triggering them
would have increased my comfort level.  

> I think Maatkit is by far the best solution for live master-slave sync

> in most real-world situations.

We'll give it another look.

--
Eric Robinson




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)

2009-12-04 Thread Robinson, Eric
>> I would never have any confidence that the replication 
>> is solid enough to use the slave server for backup purposes.

> I agree completely there.  That's the other reason I like filesystem 
> snapshots is that it allows you to take a backup from 
> the master relatively painlessly.

I've thought of using snapshots. Offhand, can't remember the reason that
I decided they would not work for us. It'll come to me... 

--
Eric Robinson



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)

2009-12-04 Thread Robinson, Eric
> I would say that it's very important to know why data 
> is getting out of sync between your master and slave. 

Ultimately, I agree. But since it's a canned application, getting to
that point might be hard, and once it is resolved, new issues might
arise. I would never have any confidence that the replication is solid
enough to use the slave server for backup purposes. (Which, by the way,
is the real reason I'm doing this. In the middle of the night, when
there are few users on the system, I want to backup the slave, but first
I want to make sure I have a 100% reliable copy of the data.)

> There are ways to resync data that don't involve all 
> this as well:  Maatkit has some tools

I've looked with great interest at Maatkit, but their tools are replete
with warnings about dangers, bugs, and crashes. They certainly do not
inspire confidence. 

--
Eric Robinson 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)

2009-12-04 Thread Robinson, Eric
> (1) innodb? 

It's an off-the-shelf application that uses MyISAM tables. It is
possible to convert to innodb, but I have not been sold on innodb in
terms of its  performance characteristics for this particular
application. Maybe I've been reading the wrong stuff. Do you have
general thoughts on the differences with respect to performance?

> (2) why delete slave logs when you can 
> restart the slave with --skip-slave and 
> then use CHANGE MASTER TO?

Well... I guess mainly because I didn't know about that option! I
thought I needed to "fake out" mysql on this, but it sounds like I can
just do 'flush tables with read lock;reset master;' on the master and
'change master to...;' on the slave. So cool. Thanks for the input!
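
For anyone following along, the re-attach step sketched out -- host, user and
password are placeholders, and the file/position are whatever SHOW MASTER
STATUS reports right after the RESET MASTER:

-- on the master, from the session holding the lock:
FLUSH TABLES WITH READ LOCK;
RESET MASTER;
SHOW MASTER STATUS;    -- note File and Position

-- on the slave, instead of deleting master.info by hand:
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST     = 'master-host',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',   -- the File just reported
    MASTER_LOG_POS  = 98;                   -- the Position just reported
START SLAVE;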

--
Eric Robinson



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)

2009-12-04 Thread Robinson, Eric
 
Let's face it, sometimes the master and slave get out of sync, even when
'show slave status' and 'show master status' indicate that all is well.
And sometimes it is not feasible to wait until after production hours to
resync them. We've been working on a method to do an emergency
hot-resync during production hours with little or no user downtime. What
do you guys think of this approach? It's only for Linux, though...

1. Shut down the slave and remove its replication logs (master.info and
*relay* files).

2. Do an initial rsync of the master to the slave. Using rsync's
bit-differential algorithm, this quickly copies most of the changed data
and can be safely be done against a live database. This initial rsync is
done before the next step to minimize the time during which the tables
will be read-locked.

3. Do a 'flush tables with read lock;reset master' on the master server.
At this point, user apps may freeze briefly during inserts or updates. 

4. Do a second rsync, which goes very fast because very little data has
changed between steps 2 and 3. 

5. Unlock the master tables.

6. Restart the slave.

When you're done, you have a 100% binary duplicate of the master
database on the slave, with no worries that some queries got missed
somewhere. The master was never stopped and users were not severely
impacted. (Mileage may vary, of course.) 

We've tried this a few times and it has seemed to work well in most
cases. We had one case where the slave SQL thread did not want to
restart afterwards and we had to do the whole thing again, only we
stopped the master the second time. Not yet sure what that was all
about, but I think it may have been a race issue of some kind. We're
still exploring it.

Anyway, comments would be appreciated.

--
Eric Robinson



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Rename Database - Why Would This Not Work?

2009-11-23 Thread Robinson, Eric
>RENAME TABLE
>   olddb.table1 TO newdb.table1,
>   olddb.table2 TO newdb.table2

>put the whole list in here, the whole statement will be applied to the
system atomically 

The database has 1200+ tables, so your approach seems like more work to
me. As it is, all I'm doing is:

service mysql stop
mv olddb newdb
service mysql start
mysqlcheck -o newdb  
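
If downtime ever becomes a problem, the 1200+ RENAME TABLE clauses can be
generated instead of typed -- a rough sketch, untested against 4.1, and newdb
must already exist:

# build one big RENAME TABLE statement from the table list and feed it back in
mysql -N -e "SHOW TABLES FROM olddb" \
  | awk '{ printf "%solddb.%s TO newdb.%s", (NR==1 ? "RENAME TABLE " : ", "), $1, $1 } END { print ";" }' \
  | mysql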


--
Eric Robinson

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Rename Database - Why Would This Not Work?

2009-11-21 Thread Robinson, Eric
> DB engines that have their own data dictionary (Innodb, etc) in
addition to 
> what is in the .frm could easily be messed up.

Like I said, there are only MyISAM tables in the database, so would
there be any risks associated with my simple approach? 
 
(Also there are no stored procedures because this is MySQL 4.1.22.)
 
--Eric
 


Rename Database - Why Would This Not Work?

2009-11-21 Thread Robinson, Eric
I used a simple procedure to rename my MySQL 4.1.22 database, which has
only My-ISAM tables:
 
1. Stopped MySQL
2. Renamed the database directory from olddbname to newdbname
3. Started mysql
 
At this point, I grepped for 'olddbname' and found that many of the old
.MYI files still had references to 'olddbname'. So I...
 
4. Did mysqlcheck -o newdbname
 
Then all the references to 'olddbname' were removed from the index
files.
 
I then started our application and everything seems to be working fine
using the new database name. Yet I'm still worried because when I Google
it, I see people talking about lots of different ways to do a database
rename, and people are making it sound like a complicated, dangerous
procedure.
 
Why would my simple approach not work? Should I be watching for
potential problems down the road because I did it this way?
 

--
Eric Robinson
Director of Information Technology
Physician Select Management, LLC
775.885.2211 x 111

 


Re: Distinct max() and separate unique value

2009-10-20 Thread Eric Anderson

On Tue, 20 Oct 2009, DaWiz wrote:


I would try:

select max(object_id), term_taxonomy_id
from wp_term_relationships
group by term_taxonomy_id
order by term_taxonomy_id;

max(column) returns a single value so distinct is not needed.
The group by and order by should only have columns that are displayed and 
that are not aggregate columns.


You end up with the same object_id.

select max(object_id), term_taxonomy_id from wp_term_relationships where 
term_taxonomy_id IN (122,127) group by term_taxonomy_id order by 
term_taxonomy_id;


+----------------+------------------+
| max(object_id) | term_taxonomy_id |
+----------------+------------------+
|           1503 |              122 |
|           1503 |              127 |
+----------------+------------------+

I'm trying to formulate a query on a Wordpress database that will give me 
the highest 'object_id' with the highest 'term_taxonomy_id', something 
like:


+-------------------------+------------------+
| max(distinct object_id) | term_taxonomy_id |
+-------------------------+------------------+
|                    1503 |              127 |
|                    1494 |              122 |
+-------------------------+------------------+

But I just can't seem to get there?

select max(distinct object_id), term_taxonomy_id from wp_term_relationships 
where term_taxonomy_id IN (122,127) group by term_taxonomy_id, object_id 
order by term_taxonomy_id desc, object_id desc
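
If the intent is "count each object once, under its highest term", one way to
get the two-row result shown above -- though that reading of the requirement
is an assumption on my part:

-- reduce each object to its highest term first, then take the top object per term
SELECT t.top_term AS term_taxonomy_id, MAX(t.object_id) AS object_id
  FROM (SELECT object_id, MAX(term_taxonomy_id) AS top_term
          FROM wp_term_relationships
         WHERE term_taxonomy_id IN (122,127)
         GROUP BY object_id) AS t
 GROUP BY t.top_term
 ORDER BY t.top_term DESC;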



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Distinct max() and separate unique value

2009-10-20 Thread Eric Anderson


I'm trying to formulate a query on a Wordpress database that will give 
me the highest 'object_id' with the highest 'term_taxonomy_id', 
something like:


+-------------------------+------------------+
| max(distinct object_id) | term_taxonomy_id |
+-------------------------+------------------+
|                    1503 |              127 |
|                    1494 |              122 |
+-------------------------+------------------+

But I just can't seem to get there?

select max(distinct object_id), term_taxonomy_id from 
wp_term_relationships where term_taxonomy_id IN (122,127) group by 
term_taxonomy_id, object_id order by term_taxonomy_id desc, object_id 
desc


+-------------------------+------------------+
| max(distinct object_id) | term_taxonomy_id |
+-------------------------+------------------+
|                    1503 |              127 |
|                    1481 |              127 |
|                     300 |              127 |
|                    1503 |              122 |
|                    1494 |              122 |
|                    1470 |              122 |
|                    1468 |              122 |
|                    1205 |              122 |
|                    1062 |              122 |
|                     316 |              122 |
|                     306 |              122 |
|                     228 |              122 |
|                     222 |              122 |
|                     216 |              122 |
|                     211 |              122 |
|                     184 |              122 |
|                     155 |              122 |
|                     149 |              122 |
|                     134 |              122 |
|                     128 |              122 |
|                     124 |              122 |
|                     119 |              122 |
|                     113 |              122 |
|                     109 |              122 |
|                     105 |              122 |
|                      93 |              122 |
|                      91 |              122 |
|                      87 |              122 |
+-------------------------+------------------+

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Is myisam_repair_threads considered safe

2009-09-17 Thread Eric Bergen
It's been in mysql for at least 7 years.

http://ebergen.net/wordpress/2009/04/11/longest-beta-ever-myisamchk-parallel-recover/

On Thursday, September 10, 2009, Rob Wultsch  wrote:
> myisam_repair_threads
>
> "If this value is greater than 1, MyISAM  table indexes are created in
> parallel (each index in its own thread) during the Repair by sorting
> process. The default value is 1.
> Note :Multi-threaded repair is still beta-quality code."
>
> The note is present is all versions of MySQL manual:
>
> http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_myisam_repair_threads
> http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_myisam_repair_threads
> http://dev.mysql.com/doc/refman/4.1/en/server-system-variables.html#sysvar_myisam_repair_threads
>
>
> Can anyone comment about whether this setting is safe, and if so on
> what major versions?
>
> --
> Rob Wultsch
> wult...@gmail.com
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>

-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Fail to change MySQL data directory on ubuntu

2009-08-27 Thread Eric Bergen
That procedure is horribly incorrect. You should simply move the
ib_log and ibdata files with the rest of the datadir. The ibdata1 file
contains innodb's system tables and depending on your setting of
innodb_file_per_table it also contains your data!
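
In other words, something closer to this (same paths as the original post; the
AppArmor profile for mysqld still needs matching lines before the restart):

/etc/init.d/mysql stop
mkdir -p /media/disk/MySQL_data
# copy *everything*, including ibdata1 and ib_logfile0/1 -- do not delete them
cp -r -p /var/lib/mysql/* /media/disk/MySQL_data/
chown -R mysql:mysql /media/disk/MySQL_data
# point datadir at the new location in /etc/mysql/my.cnf, update the
# AppArmor profile to allow the new path, then:
/etc/init.d/apparmor reload
/etc/init.d/mysql start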

On Thu, Aug 27, 2009 at 7:56 AM, Jia Chen wrote:
>> Chen, Did you really delete ibdata1 ?
>
> Yes, I did.
>
> Best,
> Jia
>
>
> Claudio Nanni wrote:
>>
>>
>> 2009/8/26 chen jia mailto:chen.1...@gmail.com>>
>>
>>    Hi there,
>>
>>    I am using MySQL on ubuntu 8.04.
>>
>>    I followed this link
>>
>>  http://www.ubuntu-howto.info/howto/how-to-move-mysql-databases-to-another-location-partition-or-hard-drive
>>    to change the data directory of MySQL.
>>
>>    After stopping MySQL: sudo /etc/init.d/mysql stop
>>
>>    I make a new directory: sudo mkdir /media/disk/MySQL_data
>>
>>    then change the ownership of new directory, sudo chown mysql:mysql
>>    /media/disk/MySQL_data
>>
>>    and copy all data to the new directory, cp -r -p /var/lib/mysql/*
>>    /media/disk/MySQL_data/ and deleted all files like ibdata1,
>>
>>
>> Chen, Did you really delete ibdata1 ?
>>
>>
>>
>>
>>
>>
>>    ib_logfile0, and ib_logfile1.
>>
>>    I then edit /etc/mysql/my.conf and update the "datadir" to my new
>>    directory. I also update /etc/apparmor.d/usr.sbin.mysql so that news
>>    lines with /var/lib/mysql replaced by /media/disk/MySQL_data are
>>    added.
>>
>>    However, after sudo /etc/init.d/apparmor reload
>>
>>    I try sudo /etc/init.d/mysql start
>>
>>    I got
>>    * Starting MySQL database server mysqld
>>           [fail]
>>
>>    If I change the "datadir" line in /etc/mysql/my.conf back to the
>>    original one, I can start MySQL successfully.
>>
>>    I think I have done everything needed to change MySQL data directory.
>>
>>    Why am I still getting this error?  Where can I start to look for
>>    the causes?
>>
>>    Thanks.
>>
>>    Jia
>>
>>    --
>>    MySQL General Mailing List
>>    For list archives: http://lists.mysql.com/mysql
>>    To unsubscribe:
>> http://lists.mysql.com/mysql?unsub=claudio.na...@gmail.com
>>
>>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Why doesn't mySQL stop a query when the browser tab is closedL

2009-07-25 Thread Eric Bergen
; >>From: Jay Blanchard [mailto:jblanch...@pocket.com]
>> > >>Sent: Wednesday, June 03, 2009 8:46 AM
>> > >>To: Daevid Vincent; mysql@lists.mysql.com
>> > >>Subject: RE: Why doesn't mySQL stop a query when the browser tab is
>> > >>closedL
>> > >>
>> > >>[snip]
>> > >>I just noticed a horrible thing.
>> > >>[/snip]
>> > >>
>> > >>Keep in mind that the query event is server side and is not
>> > tied to the
>> > >>browser (client side) once it has begun because of the
>> > statelessness of
>> > >>the connection. You would have to have some sort of
>> > onClose() event from
>> > >>the browser that would trigger a query cancellation.
>> > >>
>> > > [JS] Going beyond that, the browser is at several removes
>> > from the MySQL
>> > > server. Typically the browser talks to the web server, then
>> > the web server
>> > > runs some application code (PHP or whatever), and then the
>> > application code
>> > > talks to the MySQL server. The only part of this chain that
>> > "knows" what the
>> > > MySQL server is doing is the last bit, the application
>> > code, which is
>> > > typically waiting for a response.
>> > >
>> > > Getting back to the user, HTTP itself is a stateless
>> > protocol. That means
>> > > the web server has no way of knowing if the user, the
>> > browser, or even the
>> > > user's computer is still there; it also doesn't really know
>> > what the user
>> > > last did (it's up to the application code to remember that somehow).
>> > >
>> > > In order for an end user to cancel a query, there would
>> > have to be some way
>> > > for the user to tell the browser to tell the web server to tell the
>> > > application code to tell the MySQL server to stop. I'm
>> > pretty sure you could
>> > > create a "tired of waiting" button for the user, but I
>> > haven't done it
>> > > myself.
>> > >
>> > > Regards,
>> > >
>> > > Jerry Schwartz
>> > > The Infoshop by Global Information Incorporated
>> > > 195 Farmington Ave.
>> > > Farmington, CT 06032
>> > >
>> > > 860.674.8796 / FAX: 860.674.8341
>> > >
>> > > www.the-infoshop.com
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >>--
>> > >>MySQL General Mailing List
>> > >>For list archives: http://lists.mysql.com/mysql
>> > >>To unsubscribe:    http://lists.mysql.com/mysql?unsub=jschwa...@the-
>> > >>infoshop.com
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > --
>> > > MySQL General Mailing List
>> > > For list archives: http://lists.mysql.com/mysql
>> > > To unsubscribe:
>> > http://lists.mysql.com/mysql?unsub=dstepli...@gmail.com
>> > >
>> > >
>> >
>> >
>> >
>> > --
>> > A: It reverses the normal flow of conversation.
>> > Q: What's wrong with top-posting?
>> > A: Top-posting.
>> > Q: What's the biggest scourge on plain text email discussions?
>> >
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:    http://lists.mysql.com/mysql?unsub=joh...@pixelated.net
>>
>>
>
>
> --
> -
> Johnny Withers
> 601.209.4985
> joh...@pixelated.net
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Why can't I kill the query cache?

2009-05-29 Thread Eric Bergen
You can also flush the cache with echo 1 > /proc/sys/vm/drop_caches if
you have a new enough kernel.

On Fri, May 29, 2009 at 2:16 PM, Dan Nelson  wrote:
> In the last episode (May 29), Gerald L. Clark said:
>> Little, Timothy wrote:
>> > Also titled, I want this to run slow ALL the time...
>> >
>> > I have a group of dreadful queries that I have to optimize.
>> >
>> > Some take 20-30 seconds each -- the first time that I run them.  But
>> > then they never seem to take that long after the first time (taking less
>> > than a second then).  If I change the "keywords" searched for in the
>> > where clauses, then they take a long time again...  so it's the
>> > query-cache or something just like it.
>> >
>> > BUT, I am doing this each time :
>> > flush tables;
>> > reset query cache;
>> > set global query_cache_size=0;
>> > SELECT   SQL_NO_CACHE DISTINCT ca.conceptid AS headingid,
>> >
>> > And still it's not avoiding the cache.
>> >
>> > Is there a cache I'm missing?
>> >
>> > Tim...
>> >
>> >
>> Disk cache, but I don't know how to clear it.
>
> Create a file 2x the size of your RAM (for a 2gb system, dd if=/dev/zero
> of=bigfile bs=1024k count=4096), then dd it to /dev/null (dd if=bigfile
> of=/dev/null bs=1024k).  That should flush your OS cache.  The guaranteed
> way would be to dismount then remount your filesystem, but that could be
> difficult depending on how many other processes are using it..
>
> --
>        Dan Nelson
>        dnel...@allantgroup.com
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question on replication terminology

2009-04-29 Thread Eric Bergen
Dual master replication can be either dual master dual write or dual
master single writer. The latter is preferred. In this configuration
replication is connected in both directions but clients only ever
connect to one master at a time. It's just as safe as master -> slave
replication if you handle the failover correctly.
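
(For illustration, a minimal sketch of the single-writer convention: keep the
passive master read-only so ordinary clients cannot write to both sides. Note
that replication threads and accounts with SUPER are not blocked by this flag.)

mysql> SET GLOBAL read_only = 1;   -- on the passive master

-- during a failover, flip the roles: make the old active master
-- read-only first, then open up the new one
mysql> SET GLOBAL read_only = 0;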

-Eric

On Tue, Apr 28, 2009 at 3:43 PM, Claudio Nanni  wrote:
> Hi there,
> I would only like to stress that the only supported (and recommended)
> replication solution in MySQL is
> Master--->Slave  replication.
> In this scenario you can have ONLY one master and (virtually) any number of
> slaves.
> There is NO other safe replication solution.
> The terms you mention seems to refer to the same solution, where you have
> two servers each acting as a master:
> this is a non standard dangerous scenario in MySQL and requires application
> logic awareness.
>
> Hope to have brought a little light in your mind
>
> Cheers
> Claudio
>
>
>
> Vikram Vaswani wrote:
>>
>> Hi
>>
>> I'm new to replication and looking through some docs on how to use it.
>> Could
>> someone please tell me if the following terms mean the same thing or, if
>> not, what is the difference:
>>
>> master-master replication
>> dual-master replication
>> bidirectional replication
>>
>> TIA
>> -BT
>>
>>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: innodb_thread_concurrency at runtime in 4.1 ?

2009-04-27 Thread Eric Bergen
Hi,

I checked the 4.1.24 source and I don't see innodb_thread_concurrency
in the list of variables that can be set at runtime. I only see:

innodb_max_dirty_pages_pct,
innodb_max_purge_lag,
innodb_table_locks,
innodb_max_purge_lag,
innodb_autoextend_increment,
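
(For illustration, this can be confirmed on a 4.1 server; the value below is
arbitrary, and the error is the same one quoted below:)

mysql> SET GLOBAL innodb_max_dirty_pages_pct = 80;
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL innodb_thread_concurrency = 4;
ERROR 1193 (HY000): Unknown system variable 'innodb_thread_concurrency'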

-Eric

On Sun, Apr 26, 2009 at 11:48 PM, Alagar samy  wrote:
> Hi,
>
> i am using mysql_server-4.1.23 and in the documentation of 4.1
> (http://dev.mysql.com/doc/refman/4.1/en/dynamic-system-variables.html) , it
> is mentioned innodb_thread_concurrency variable can be set at runtime.
>
> but i am getting error when trying to set dynamically ..
>
> mysql> SET GLOBAL innodb_thread_concurrency=4;
> ERROR 1193 (HY000): Unknown system variable 'innodb_thread_concurrency'
>
> still throws error when tried as set @innodb_thread_concurrency and set
> global @innodb_thread_concurrency ...
>
> can you please let me know whether 4.1 documentation is misleading or  this
> is a bug in mysql-4.1 (as in, this variable cannot be set at run-time) ?
>
>
> PS : i am able to set innodb_thread_concurrency variable at runtime in
> mysql-5.1 .. but this is not an option for me now .. i have to go with 4.1
> at this point ..
>
>
> A.Alagarsamy
>
> 
> Now surf faster and smarter ! Check out the new Firefox 3 - Yahoo! Edition *
> Click here!



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Sun bought by Oracle

2009-04-20 Thread Eric Bergen
I don't think the purchase was about MySQL at all. I'm sure Oracle is
far more interested in java, zfs, and solaris than they are MySQL.
MySQL is just an added bonus that will go well with their acquisition
of Innobase Oy a few years ago. Oracle didn't kill InnoDB and it's not
very likely that they will kill MySQL. If you walk around the user
conference this week you will see why it would be incredibly stupid
to try to kill MySQL. Not that companies like Percona would let that
happen anyway. After all, it is GPL.

On Mon, Apr 20, 2009 at 10:02 AM,
 wrote:
>
>
>> From: Peter Brawley [mailto:peter.braw...@earthlink.net]
>>
>> An optimist wrote that. A pessimist, Larry Dignan at
>> http://blogs.zdnet.com/BTL/?p=16598&tag=nl.e539, wrote point six:
>>
>> "Oracle gets to kill MySQL. There's no way Ellison will let that
>> open source database mess with the margins of his database.
>> MySQL at best will wither from neglect. In any case, MySQL
>> is MyToast."
>>
>> We ought to know who's right within half a year.
>>
>
> Is MySQL not Open Source?  Heck, I'd love to help with that.  We would
> just have to change the name to something befitting the product.  MySQL
> still tickles my MS warning alerts; My Pictures, My Music, My Videos,
> etc.  How about 'GNO', pronounced like 'Know', stands for "GNO's Not
> Oracle"?
>
>
> The information contained in this message and any attachment may be
> proprietary, confidential, and privileged or subject to the work
> product doctrine and thus protected from disclosure.  If the reader
> of this message is not the intended recipient, or an employee or
> agent responsible for delivering this message to the intended
> recipient, you are hereby notified that any dissemination,
> distribution or copying of this communication is strictly prohibited.
> If you have received this communication in error, please notify me
> immediately by replying to this message and deleting it and all
> copies and backups thereof.  Thank you.
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>
>



-- 
Eric Bergen
eric.ber...@provenscaling.com
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: flush-host problem

2009-04-11 Thread Eric Bergen
You can set max_connect_errors to a very large value (e.g. 999999999) to
"disable" the host blocking. It's common to do this with load balancers because tcp/ip
health checks count as connection errors.
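
(For illustration, a sketch of both knobs; the exact value is arbitrary, it
just has to be larger than any realistic error count:)

mysql> SET GLOBAL max_connect_errors = 999999999;
mysql> FLUSH HOSTS;   -- unblocks hosts that have already been blocked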

-Eric

On Sat, Apr 11, 2009 at 1:01 PM, Shawn Green  wrote:
>
> Hello Gary,
>
> Gary Smith wrote:
>>
>> Mike,
>> It's not a connection pooling issue per say.  We have several boxes running 
>> spam assassin, sqlgrey and postfix (via mysql).  Normally these components 
>> work great.  SA and sqlgrey both have a fixed number of connections, around 
>> 16, that they are generally actively using unless we get a burst of email, 
>> at which time they will increase by a few.  The problem is that Postfix has 
>> been receiving a higher level than normal of emails as we have taken 50% of 
>> our servers offline at this location (setting them up at a new location).  
>> We're also have this bouncing across a couple different firewalls, so for 
>> some reason, the conneciton to mysql is generating a larger number of these:
>>
>> 090407 12:26:42 [Warning] Aborted connection 972479 to db: 'db' user: 'user' 
>> host: 'host' (Got an error reading communication packets)
>>
>> We do know the network isn't optimal right now and are working to fix the 
>> issues but we are hoping to get by just for the short term.
>>
>> But that leads back to the original question about increase the connection 
>> error cutoff before banning a host.
>>
>> We are using 5.1.32 with INNODB tables.
>>
>> 
>> From: mos [mo...@fastmail.fm]
>> Sent: Tuesday, April 07, 2009 9:18 AM
>> To: mysql@lists.mysql.com
>> Subject: Re: flush-host problem
>>
>> At 10:39 AM 4/7/2009, Gary Smith wrote:
>>>
>>> I have system that is generating a larger than normal number of connection
>>> errors.  We know why the errors are occuring and are working to resolve
>>> them (connectivity and load issue on the client).  The question is, how
>>> can I tweak mysql to tolerate a higher level than normal of bad
>>> connections before banning the host.
>>>
>>> What happens is that when we have 300-500 connections a few random ones
>>> will get mucked up during a heavier than normal load on the client.  I
>>> have set the max connections to 3000 (which we never get close to).
>>>
>>> So, if there a config/startup setting to tweak to ease the banning of bad
>>> connetions thus reducing the need for me to continually "mysqladmin
>>> flush-host" on the server?
>>> --
>
> ...
>
> The server generates those "Aborted connection" messages under the following 
> circumstances:
>
> 1) The server was trying to pass information to the client and the client 
> stopped being there.
>
> 2) The client sat idle beyond the wait_timeout setting.
>
> 3) There was some kind of networking interference (such as a VPN or proxy 
> server closing an idle connection)  or bad connections.
>
> Look at your SHOW PROCESSLIST report. If it shows many connections that have 
> been idle for more than 30 seconds or so, then you need to change your client 
> software behavior. If these are pooled connections, make sure your idle 
> connection timeout for the pool (part of the pool configuration) is set to a 
> value smaller than wait_timeout.  Otherwize you need to audit your client 
> code to ensure that it is calling the appropriate close() function for the 
> library it is using to connect to the MySQL server. That will release the 
> connection and allow the server to reallocate those resources to handling the 
> commands for the live (and not idle) connections.
>
> For additional reasons for these errors, please peruse:
> http://dev.mysql.com/doc/refman/5.0/en/communication-errors.html
>
> Warmest regards,
>
> --
> Shawn Green, MySQL Senior Support Engineer
> Sun Microsystems, Inc.
> Office: Blountville, TN
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com
>



--
Eric Bergen
eric.ber...@provenscaling.com
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MySQL Closing/Opening tables

2009-02-27 Thread Eric Bergen
MySQL can open a single table multiple times depending on how many
clients need to use it. This means that setting table_cache to the total
number of tables will only work if your MySQL server has only one
client.

For more details read:
http://dev.mysql.com/doc/refman/5.0/en/table-cache.html
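
(For illustration, a quick way to gauge this on a running server: if
Opened_tables keeps climbing while Open_tables sits at the limit, the cache
is too small. The size below is only an example.)

mysql> SHOW GLOBAL STATUS LIKE 'Open%tables';
mysql> SHOW GLOBAL VARIABLES LIKE 'table_cache';
mysql> SET GLOBAL table_cache = 2048;   -- illustrative value only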

On Fri, Feb 27, 2009 at 2:53 PM,   wrote:
> Thanks Dan.. thats a valuable point.. and this actually happening with MyISAM 
> tables only..
>
> But the question is; when I set the table_cache to higher than total tables.. 
> then it should stop closing the table in first place..so that only un-opened 
> tables will be opened and kept in cache.. it will avoid closing and 
> re-opening.. but looks like it is not the case..
>
> Unless the table_cache is also used(unlikely) for temporary tables which are 
> created by select queries..
>
>
>
>
> 
> From: Dan Nelson 
> To: dbrb2002-...@yahoo.com
> Cc: mysql@lists.mysql.com
> Sent: Friday, February 27, 2009 1:15:25 PM
> Subject: Re: MySQL Closing/Opening tables
>
> In the last episode (Feb 27), dbrb2002-...@yahoo.com said:
>> Recently I noticed the server takes lot of time on and off when opening
>> and closing tables.  And I tried to increase the table_cache more the the
>> total tables (file_limit is properly set); and the problem still continues
>> and lowering it also continues..  and tried to set in middle..  same
>
> MyISAM tables flush dirty index blocks at the end of every update; this can
> cause a long wait inside "closing tables".  If you have just deleted a lot
> of rows or did some other update touching many rows, you might have to flush
> a lot of dirty blocks.  Running "show status like 'Key_blocks_not_flushed'"
> during one of these periods should show the count starting out large,
> dropping rapidly, then leveling off when that table's blocks have been
> flushed.
>
> Fixes include:
>
> * Altering your troublesome tables and adding the DELAY_KEY_WRITE=1 option.
>  This will force you to repair those tables after a mysql or OS crash,
>  since the on-disk copies of the index will almost always be out of synch.
>
> * Switching to an engine with logging like InnoDB will allow mysql to write
>  the changes to a transaction log immediately, then trickle out the actual
>  key block updates over time.  If you want to try out mysql 6.0, the maria
>  engine is basically MyISAM with logging.
>
> --
>    Dan Nelson
>    dnel...@allantgroup.com
>



-- 
Eric Bergen
eric.ber...@provenscaling.com
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: how to disable InnoDB and MyISAM on ndb cluster

2009-01-01 Thread Eric Bergen
Hi Nipuna,

InnoDB can be disabled with the skip-innodb option. MyISAM can't
really be disabled because it's required to read the grant tables.
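
(A quick way to confirm it after restarting with skip-innodb in my.cnf, just
as a sketch:)

mysql> SHOW ENGINES;                         -- InnoDB should show as DISABLED
mysql> SHOW VARIABLES LIKE 'have_innodb';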

-Eric

On Sat, Dec 27, 2008 at 4:21 AM, Nipuna Perera  wrote:
> Hi All,
>
> I'm using mysql-cluster-gpl-6.2.15 for create DB cluster in our server
> machines,
> currently I'm using 2 data nodes, 2 management nodes and two mysqld server
> nodes which were combine with same data directory in SAN.
> SAN for the collect binary logs only.
>
> What need to know are,
> 1. Is it possible to disable InnoDB and MyISAM engines while using the ndb
> cluster in mysqld servers, if it is yes, can you tell me the way of doing
> it?
> 2. Is there having any disterbance for ndb cluster by disabling the InnoDB
> and MyISAM?
>
> Thanks and Regards,
>
> --
> Nipuna Perera
> නිපුණ පෙරේ‍රා
> http://nipunaperera.blogspot.com
>



-- 
high performance mysql consulting.
http://provenscaling.com


Re: Where should I download mysql 4.0.12?

2008-11-06 Thread Eric Bergen
If you still want it you can download it from the Proven Scaling mirror.

http://mirror.provenscaling.com/mysql/community/source/4.0/

-Eric

On Thu, Nov 6, 2008 at 1:58 AM, Joerg Bruehe <[EMAIL PROTECTED]> wrote:
> Hi!
>
>
> Moon's Father wrote:
>> Hi.
>>Where can I download a mysql 4.0.12? I found nowhere to download it.
>
> You will not find it anywhere at MySQL, it is out of support since more
> than 2 years (September 2006).
> For an explanation, read the text here:
>   http://downloads.mysql.com/archives.php?p=mysql-4.0
>
> As regards 4.0.12 in specific: That version is totally obsolete, the
> last published version of the 4.0 series was 4.0.27.
> Anybody installing 4.0.12 now would miss many security fixes and so run
> great risks.
> (This is valid for all unsupported versions, of course - the older a
> version is, the more security fixes will be missing.)
>
>
> Jörg
>
> --
> Joerg Bruehe,  MySQL Build Team, [EMAIL PROTECTED]
> Sun Microsystems GmbH,   Sonnenallee 1,   D-85551 Kirchheim-Heimstetten
> Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
> Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>



-- 
high performance mysql consulting.
http://provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Question about Averaging IF() function results

2008-11-04 Thread Eric Lommatsch
Hello Peter,
 
Thanks for your suggestion, I think I have found another way to get the
average that I need. 
 
If the formula I have come up with does not work I will try your formula.
 
Thank you
 
Eric H. Lommatsch
Programmer
360 Business 
2087 South Grant Street
Denver, CO 80210
Tel 303-777-8939
Fax 303-778-0378
 
[EMAIL PROTECTED]
 



From: Peter Brawley [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 04, 2008 1:14 PM
To: Eric Lommatsch
Cc: mysql@lists.mysql.com
Subject: Re: Question about Averaging IF() function results


Eric,

I'd replace 

  (avg(IF(avgTest.Q1<7,avgTest.Q1,Null))
  +avg(IF(avgTest.Q2<7,avgTest.Q2,Null))
  +avg(IF(avgTest.Q3<7,avgTest.Q3,Null))
  +avg(IF(avgTest.Q4<7,avgTest.Q4,Null))
  +avg(IF(avgTest.Q5<7,avgTest.Q5,Null)))/5 as overallAvg from avgTest group
by course;

with ...

(IF(avgTest.Q1<7,avgTest.Q1,0) + IF(avgTest.Q2<7,avgTest.Q2,0) +
 IF(avgTest.Q3<7,avgTest.Q3,0) + IF(avgTest.Q4<7,avgTest.Q4,0) +
 IF(avgTest.Q5<7,avgTest.Q5,0)) /
MAX(1, IF(avgTest.Q1<7,1,0) + IF(avgTest.Q2<7,1,0) + IF(avgTest.Q3<7,1,0) +
       IF(avgTest.Q4<7,1,0) + IF(avgTest.Q5<7,1,0))

PB



Eric Lommatsch wrote: 

Hello List,
 
I have a question about trying to calculate an average across
columns. I am trying to calculate the results of surveys where in the data I
have individuals that have marked questions on the survey as N/A. in my
survey I am using 1-6 as the evaluated answers and if the person marked NA
the stored value is 7.
 
Here is a table with some sample data of what I am using to test the
calculation I am working on:  ( actually this is simplified from the actual
data but the results I get are still the same)
 
CREATE TABLE `avgTest` (
  `Course` varchar(8) default NULL,
  `Q1` int(11) default NULL,
  `Q2` int(11) default NULL,
  `Q3` int(11) default NULL,
  `Q4` int(11) default NULL,
  `Q5` int(11) default NULL
)
 
Course  | Q1 | Q2 | Q3 | Q4 | Q5
---------------------------------
HUM300  |  6 |  6 |  7 |  6 |  6
HUM301  |  6 |  6 |  6 |  6 |  6
HUM301  |  7 |  7 |  7 |  7 |  7
 
Here is the query that I am using to perform the calculations
 
select course,
  avg(IF(avgTest.Q1<7,avgTest.Q1,Null)) as AvgOfQ1,
  avg(IF(avgTest.Q2<7,avgTest.Q2,Null)) as AvgOfQ2,
  avg(IF(avgTest.Q3<7,avgTest.Q3,Null)) as AvgOfQ3,
  avg(IF(avgTest.Q4<7,avgTest.Q4,Null)) as AvgOfQ4,
  avg(IF(avgTest.Q5<7,avgTest.Q5,Null)) as AvgOfQ5,
  (avg(IF(avgTest.Q1<7,avgTest.Q1,Null))
  +avg(IF(avgTest.Q2<7,avgTest.Q2,Null))
  +avg(IF(avgTest.Q3<7,avgTest.Q3,Null))
  +avg(IF(avgTest.Q4<7,avgTest.Q4,Null))
  +avg(IF(avgTest.Q5<7,avgTest.Q5,Null)))/5 as overallAvg from
avgTest group by course;
 
Here are the results that I get that are incorrect.
 

Course  | AvgOfQ1 | AvgOfQ2 | AvgOfQ3 | AvgOfQ4 | AvgOfQ5 | overallAvg
-----------------------------------------------------------------------
HUM300  |  6.000  |  6.000  |  Null   |  6.000  |  6.000  |  Null
HUM301  |  6.000  |  6.000  |  6.000  |  6.000  |  6.000  |  6.000
 

Here are the results that I get when I change the Null in the
query to a 0.
 

Course  | AvgOfQ1 | AvgOfQ2 | AvgOfQ3 | AvgOfQ4 | AvgOfQ5 | overallAvg
-----------------------------------------------------------------------
HUM300  |  6.000  |  6.000  |  0.000  |  6.000  |  6.000  |  4.800
HUM301  |  6.000  |  6.000  |  6.000  |  6.000  |  6.000  |  6.000
 

Here are the results that I want to be getting from the query that I
am working with.
 

Course  | AvgOfQ1 | AvgOfQ2 | AvgOfQ3 | AvgOfQ4 | AvgOfQ5 | overallAvg
-----------------------------------------------------------------------
HUM300  |  6.000 

Question about Averaging IF() function results

2008-11-04 Thread Eric Lommatsch
Hello List,
 
I have a question about trying to calculate an average across columns. I am
trying to calculate the results of surveys where in the data I have
individuals that have marked questions on the survey as N/A. In my survey I
am using 1-6 as the evaluated answers, and if the person marked NA the stored
value is 7.
 
Here is a table with some sample data of what I am using to test the
calculation I am working on:  ( actually this is simplified from the actual
data but the results I get are still the same)
 
CREATE TABLE `avgTest` (
  `Course` varchar(8) default NULL,
  `Q1` int(11) default NULL,
  `Q2` int(11) default NULL,
  `Q3` int(11) default NULL,
  `Q4` int(11) default NULL,
  `Q5` int(11) default NULL
)
 
Course  | Q1 | Q2 | Q3 | Q4 | Q5
---------------------------------
HUM300  |  6 |  6 |  7 |  6 |  6
HUM301  |  6 |  6 |  6 |  6 |  6
HUM301  |  7 |  7 |  7 |  7 |  7
 
Here is the query that I am using to perform the calculations
 
select course,
  avg(IF(avgTest.Q1<7,avgTest.Q1,Null)) as AvgOfQ1,
  avg(IF(avgTest.Q2<7,avgTest.Q2,Null)) as AvgOfQ2,
  avg(IF(avgTest.Q3<7,avgTest.Q3,Null)) as AvgOfQ3,
  avg(IF(avgTest.Q4<7,avgTest.Q4,Null)) as AvgOfQ4,
  avg(IF(avgTest.Q5<7,avgTest.Q5,Null)) as AvgOfQ5,
  (avg(IF(avgTest.Q1<7,avgTest.Q1,Null))
  +avg(IF(avgTest.Q2<7,avgTest.Q2,Null))
  +avg(IF(avgTest.Q3<7,avgTest.Q3,Null))
  +avg(IF(avgTest.Q4<7,avgTest.Q4,Null))
  +avg(IF(avgTest.Q5<7,avgTest.Q5,Null)))/5 as overallAvg from avgTest group
by course;
 
Here are the results that I get that are incorrect.
 
Course  | AvgOfQ1 | AvgOfQ2 | AvgOfQ3 | AvgOfQ4 | AvgOfQ5 | overallAvg
-----------------------------------------------------------------------
HUM300  |  6.000  |  6.000  |  Null   |  6.000  |  6.000  |  Null
HUM301  |  6.000  |  6.000  |  6.000  |  6.000  |  6.000  |  6.000
 
Here are the results that I get when I change the Null in the query to
a 0.
 
Course  | AvgOfQ1 | AvgOfQ2 | AvgOfQ3 | AvgOfQ4 | AvgOfQ5 | overallAvg
-----------------------------------------------------------------------
HUM300  |  6.000  |  6.000  |  0.000  |  6.000  |  6.000  |  4.800
HUM301  |  6.000  |  6.000  |  6.000  |  6.000  |  6.000  |  6.000
 
Here are the results that I want to be getting from the query that I am
working with.
 
Course  | AvgOfQ1 | AvgOfQ2 | AvgOfQ3 | AvgOfQ4 | AvgOfQ5 | overallAvg
-----------------------------------------------------------------------
HUM300  |  6.000  |  6.000  |  Null   |  6.000  |  6.000  |  6.000
HUM301  |  6.000  |  6.000  |  6.000  |  6.000  |  6.000  |  6.000
 
I tried using the if function without a false answer and I am getting a
syntax error when I do this.
 
If it is possible for me to get this correct result in MySQL, can someone
provide me with the correct query syntax to get these results?
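
(For illustration, one way to get that overall figure is to pool all answered
questions per course instead of averaging the per-question averages. For the
sample data above it produces the desired 6.000 rows, but note it weights
every answered question equally, and a course with nothing but NA answers
comes out as NULL. A sketch:)

select course,
  (sum(if(Q1 < 7, Q1, 0)) + sum(if(Q2 < 7, Q2, 0)) + sum(if(Q3 < 7, Q3, 0)) +
   sum(if(Q4 < 7, Q4, 0)) + sum(if(Q5 < 7, Q5, 0)))
  /
  (sum(Q1 < 7) + sum(Q2 < 7) + sum(Q3 < 7) + sum(Q4 < 7) + sum(Q5 < 7))
  as overallAvg
from avgTest
group by course;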
 
 
Thank you
 
Eric H. Lommatsch
Programmer
360 Business 
2087 South Grant Street
Denver, CO 80210
Tel 303-777-8939
Fax 303-778-0378
 
[EMAIL PROTECTED]

 



"Copying" tables between databases

2008-10-09 Thread Eric Anderson


I've got two databases Foo and Bar.  Both databases have a table called 
'zoop_t'.  Foo's is the "master" copy (i.e. the one that gets updated) 
and Bar needs to be updated if/when changes are made.


Currently, I'm mysqldump'ing that table from Foo at midnight via cron 
and feeding it back into Bar via cron.


Is there a way to set up real-time "replication" of that table?
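
(For illustration, if both databases live on the same MySQL 5.0+ server,
triggers can mirror changes as they happen. A minimal sketch for inserts
only; the column list is hypothetical, and matching UPDATE/DELETE triggers
would be needed as well:)

mysql> CREATE TRIGGER Foo.zoop_t_copy_ins AFTER INSERT ON Foo.zoop_t
    -> FOR EACH ROW
    -> INSERT INTO Bar.zoop_t (id, name)      -- columns are hypothetical
    -> VALUES (NEW.id, NEW.name);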

--

WANT TO MODEL FOR MAC & BUMBLE?  APPLY AT http://casting.macandbumble.com
-
 Eric Anderson Mac and Bumble   Bumble Cash
  ICQ 3849549   http://www.macandbumble.com  http://www.bumblecash.com
 San Diego, CA<[EMAIL PROTECTED]>  <[EMAIL PROTECTED]>
-
SEE OUR LATEST PARTY PICTURES -- http://events.macandbumble.com/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Speed up slow SQL statement.

2008-09-29 Thread Eric Stewart

Good morning everyone,

products.id is defined as a PRIMARY KEY so it's index.
browse_nodes_to_products.product_id is defined as a INDEX so it's  
indexed.
browse_nodes_to_products.browse_node_id is defined as an INDEX so it's  
indexed.

browse_nodes.amazon_id is defined as an INDEX so it's indexed.

See http://pastebin.com/m46cced58
It has complete table structures, row counts and EXPLAIN output of the  
SQL statement I'm trying to optimize.


I don't think I understand your question regarding carrying the  
product_id through the relationship. This is a many to many  
relationship. A browse_node can contain many products and a product  
can be in many browse_nodes. This is achieved through a many to many  
join table browse_nodes_to_products.


Further research into the SQL statement is revealing that a temp table  
is being created and may be one of the reason it's slowing down.


Any ideas how I can optimize this?
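
(For comparison, a sketch of the same statement written with explicit JOINs
and GROUP BY in place of DISTINCT; whether it is any faster depends entirely
on the indexes, so treat it as something to EXPLAIN rather than a fix:)

select products.id, products.created_at, products.asin,
       products.sales_rank, products.points
from browse_nodes
inner join browse_nodes_to_products
      on browse_nodes_to_products.browse_node_id = browse_nodes.amazon_id
inner join products
      on products.id = browse_nodes_to_products.product_id
where browse_nodes.lft >= 5 and browse_nodes.rgt <= 10
group by products.id
order by products.sales_rank desc
limit 10 offset 0;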

Eric

On Sep 26, 2008, at 11:47 AM, Martin Gainty wrote:



Hi Eric-

the immediate challenge is to fix the join statement, so:
make sure products.id is indexed
make sure browse_nodes_to_products.product_id is indexed
make sure browse_nodes_to_products.browse_node_id  is indexed
make sure browse_nodes.amazon_id is indexed

there seems to be mapping/relationship challenge for your product to  
browse_node_id


which finally maps to amazon_id

it would be simpler if there were a way to carry the product_id through
from the products table to
the browse_nodes_to_products table
to the browse_nodes table

anyone?
Martin
__
Disclaimer and confidentiality note
Everything in this e-mail and any attachments relates to the  
official business of Sender. This transmission is of a confidential  
nature and Sender does not endorse distribution to any party other  
than intended recipient. Sender does not necessarily endorse content  
contained within this transmission.




From: [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Subject: Speed up slow SQL statement.
Date: Fri, 26 Sep 2008 10:42:07 -0400

Good morning everyone,

I've got a sql statement that is running quite slow. I've indexed
everything I can that could possibly be applicable but I can't seem  
to

speed it up.

I've put up the table structures, row counts, the sql statement and
the explain dump of the sql statement all in paste online here 
http://pastebin.com/m46cced58

I'm including the sql statement itself here as well:

select distinct products.id as id,
  products.created_at as created_at,
  products.asin as asin,
  products.sales_rank as sales_rank,
  products.points as points
from products
inner join (browse_nodes, browse_nodes_to_products) on
  (browse_nodes.amazon_id = browse_nodes_to_products.browse_node_id
and products.id = browse_nodes_to_products.product_id)
where browse_nodes.lft >= 5 and browse_nodes.rgt <= 10
 order by products.sales_rank desc limit 10 offset 0;


What I'm trying to accomplish with this is to get an ordered list of
unique products found under a category.

Any ideas on how I could speed this up?

Thanks in advance,

Eric Stewart
[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



_
See how Windows connects the people, information, and fun that are  
part of your life.

http://clk.atdmt.com/MRT/go/msnnkwxp1020093175mrt/direct/01/



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Speed up slow SQL statement.

2008-09-26 Thread Eric Stewart

Good morning everyone,

I've got a sql statement that is running quite slow. I've indexed  
everything I can that could possibly be applicable but I can't seem to  
speed it up.


I've put up the table structures, row counts, the sql statement and  
the explain dump of the sql statement all in paste online here http://pastebin.com/m46cced58


I'm including the sql statement itself here as well:

select distinct products.id as id,
  products.created_at as created_at,
  products.asin as asin,
  products.sales_rank as sales_rank,
  products.points as points
from products
inner join (browse_nodes, browse_nodes_to_products) on
  (browse_nodes.amazon_id = browse_nodes_to_products.browse_node_id  
and products.id = browse_nodes_to_products.product_id)

where browse_nodes.lft >= 5 and browse_nodes.rgt <= 10
 order by products.sales_rank desc limit 10 offset 0;


What I'm trying to accomplish with this is to get an ordered list of  
unique products found under a category.


Any ideas on how I could speed this up?

Thanks in advance,

Eric Stewart
[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Lock Issue

2008-07-14 Thread Eric Thunberg
Hello,

I'm having issues where once every 24-36 hours a replication slave
locks up and I have to kill -9 the process and restart it. I'm using
mysql 5.1.26 and centos 5 (kernel 2.6.18-92.el5) but it's been
occurring since I simultaneously upgraded to 5x and started using the
slave in production (instead of as a server to just run mysqldump on).
I've tried the last 3 versions of mysql, upgraded to the latest
centos, and I'm starting to wonder if it could be hardware related.
Either way, here is the .err info from the most recent crash, I'd
appreciate any help -- hopefully something in here jumps out at
someone and I'm just doing something stupid.

Status information:

Current dir: /usr/local/mysql/var/
Running threads: 5  Stack size: 262144
Current locks:
lock: 0x20436290:

lock: 0x1f1acf00:

lock: 0x1fed76d0:

lock: 0x2aaaec214188:

lock: 0x2aaaec212c58:

lock: 0x2aaaec208438:

lock: 0x1ec4b168:

lock: 0x1e52c898:

lock: 0x1e11c0d8:

lock: 0x1fc4dcb8:

lock: 0x1e162b98:

lock: 0x1e161db8:

lock: 0x1f700aa0:

lock: 0x1ec00b20:


lock: 0x1fc25dc0:

lock: 0x1f30c7d0:

lock: 0x1fac89c8:

lock: 0x203f75f8:

lock: 0x1e4a0d98:

lock: 0x1eee3908:

lock: 0x1eec8278:

lock: 0x1f324f18:

lock: 0x1f5e38a8:

lock: 0x1fac1028:

lock: 0x2aaae407e288:

lock: 0x2aaaec036480:

lock: 0x2aaaec05e968:

lock: 0x1e116bf0:

lock: 0x2aaaec05be88:

lock: 0x1ed4b5c8:

lock: 0x1f7938a8:

lock: 0x1e36be48:

lock: 0x1fc691b0:

lock: 0x1fc983b0:

lock: 0x1e72db60:

lock: 0x1f13aad8:

lock: 0x2aaaec02ca08:

lock: 0x1ee6bed8:

lock: 0x1eab1eb8:

lock: 0x1fa8a4c8:

lock: 0x1e832c58:

lock: 0x1ef0b7d8:

lock: 0x2aaae80689d8:

lock: 0x1eec3a28:

lock: 0x2aaae413bd78:

lock: 0x2aaae41b0078:

lock: 0x2aaae412c1c8:

lock: 0x1fa88f78:

lock: 0x1e43aea8:

lock: 0x1e71de68:

lock: 0x1fa3ff78:

lock: 0x1f391730:

lock: 0x1f391620:

lock: 0x1e462890:

lock: 0x1f342610:

lock: 0x1eec6278:

lock: 0x1eec6e98:

lock: 0x1e7085f8:

lock: 0x1fc696c8:

lock: 0x1f98c338:

lock: 0x1ec11e00:

lock: 0x1e442760:

lock: 0x1e79be90:

lock: 0x1ef71e28:

lock: 0x1ef2dc68:

lock: 0x2aaae851f448:

lock: 0x2aaae837de68:

lock: 0x1f33f728:

lock: 0x2aaae41add68:

lock: 0x1e34caf8:

lock: 0x2aaae83d1868:

lock: 0x1e71d510:

lock: 0x1f026600:

lock: 0x1e7a7318:

lock: 0x1ef1e740:

lock: 0x1ed13180:

lock: 0x1eca5860:

lock: 0x1e72ca30:

lock: 0x1e7c9b68:

lock: 0x1e7b4bc8:

lock: 0x1e7b1b98:

lock: 0x1ea7d958:

lock: 0x1e4d6fa8:

lock: 0x1e79dc28:

lock: 0x1eac0798:

lock: 0x1e5115d8:

lock: 0x1eac6958:

lock: 0x1e514608:

lock: 0x1eab1270:

lock: 0x2aaae402ae48: read
read  : 0x1ee6f9d0 (82781:1);

lock: 0x2aaae40276f8: read
read  : 0x1e47d920 (82781:1);

lock: 0x2aaae4017ed8:

lock: 0x1eab0c08:

lock: 0x1eaf0f68:

lock: 0x1e523c48:

lock: 0x1eab2a88:

lock: 0x1ea90028:

lock: 0x1eaa85c8:


lock: 0x1ea8d318:

lock: 0x1e5267f8:


Key caches:
default
Buffer_size: 402653184
Block_size:   1024
Division_limit:100
Age_limit: 300
blocks used:172670
not flushed: 0
w_requests:1619446
writes: 153057
r_requests:  645726100
reads:  172881


handler status:
read_key:244157667
read_next:   187779363
read_rnd   8229903
read_first:969
write:11072520
delete8139
update:   15260322

Table status: Opened tables:   1501
Open tables:  512
Open files:   522
Open streams:   0

Alarm status:
Active alarms:   5
Max used alarms: 35
Next alarm time: 60

Thread database.table_name  Locked/WaitingLock_type

82781   esea.messages_users Locked - read Low priority read lock
82781   esea.support_ticketsLocked - read Low priority read lock




Begin safemalloc memory dump:

End safemalloc memory dump.

Memory status:
Non-mmapped space allocated from system: 87920832
Number of free chunks:   779
Number of fastbin blocks:0
Number of mmapped regions:   15
Space in mmapped regions:897253376
Maximum total allocated space:   0
Space available in freed fastbin blocks: 0
Total allocated space:   50178912
Total free space:37741920
Top-most, releasable space:  30251296
Estimated memory (with thread stack):986222784



Events status:
LLA = Last Locked At  LUA = Last Unlocked At
WOC = Waiting On Condition  DL = Data Locked

Event scheduler status:
State  : INITIALIZED
Thread id  : 0
LLA: n/a:0
LUA: n/a:0
WOC: NO
Workers: 0
Executed   : 0
Data locked: NO

Event queue status:
Element count   : 0
Data locked : NO
Attempting lock : NO
LLA : init_queue:132
LUA : init_queue:142
WOC : NO
Next activation : never

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How Can I Auto Delete my Binary Files?

2008-07-10 Thread Eric Bergen
Shaun,

PURGE MASTER LOGS is available in 4.0; it just doesn't have the
"before" keyword. It should be trivial to write a script that decides
which log file to purge based on the mtime.
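
(For illustration, the 4.0-compatible form; the log name is hypothetical and
should be one that every slave has already read past, per SHOW SLAVE STATUS
on each slave:)

mysql> SHOW MASTER LOGS;
mysql> PURGE MASTER LOGS TO 'hostname-bin.042';  -- hypothetical name; removes all older logs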

-Eric

On Thu, Jul 10, 2008 at 12:15 PM, Shaun Adams <[EMAIL PROTECTED]> wrote:
> I'm running a RHEL 4.1 and MySQL 4.0.26
>
> so a lot or the more recent commands available in 5.0.x aren't available to
> me. Does anyone know of any scripts or anything I can use to delete files
> that arent being used or run by my slave servers? It's pretty safe to say
> that I can delete log files older than 7 days so that can eliminate the need
> to check for open files.
>
>
>
>



-- 
high performance mysql consulting.
http://provenscaling.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why open-files-limit smaller than table_cache?

2008-06-08 Thread Eric Bergen
Hi,

It's true that MyISAM keeps two file descriptors open per open table,
but this is MyISAM specific. An open table in MySQL isn't a one-to-one
mapping to file descriptors; it's simply a C++ object. It's up to the
storage engine how it manages file descriptors. Engines can use worker
threads with their own descriptors or, like the heap engine, use none at
all. It's possible to have many more open tables than file
descriptors.
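
(For illustration, the two numbers can be compared on a running server; just
a sketch:)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
mysql> SHOW GLOBAL STATUS LIKE 'Open_files';
mysql> SHOW GLOBAL VARIABLES LIKE 'open_files_limit';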

-Eric

On Mon, Jun 2, 2008 at 9:54 AM, Jean-Paul Bauer <[EMAIL PROTECTED]> wrote:
> Hi all,
> I'm a bit confused about the allowed range of values for the
> open-files-limit and the table_cache settings. The documentation
> states that the range of values allowed for open-files-limit is
> 0-65535 and for table_cache it is 1-524288.
>
> Where I get confused is that from my understanding each table in the
> table_cache will require at least one file descriptor (two if it is a
> MyISAM) table. Surely this means that the maximum for table_cache
> effectively is 65535. What am I misunderstanding or missing here?
>
> Documentation links:
> http://dev.mysql.com/doc/refman/5.0/en/server-options.html#option_mysqld_open-files-limit
> http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#option_mysqld_table_cache
>
> Thanks for any clarification on this.
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>



-- 
high performance mysql consulting.
http://provenscaling.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Looking for a Mysql Guru/DBA

2008-05-22 Thread Eric Frazier

Kevin Hunter wrote:

At 1:43p -0400 on Wed, 21 May 2008, bruce wrote:

The basic goal of the project is to be able to track the sites that I'm
visiting via a Firefox extension. I want to be able to implement something
like the "breadcrumbs" extension, but I want to be able to go a lot further.


If this is for a one-per-person kind thing (i.e. only a single Firefox
installation will use a single DB instance at any time), MySQL may be
overkill.  You may want to look towards something smaller and embedded,
like SQLite[1].  It'll be much less overhead, in both installation for
users and memory overhead for your extension.  In fact, Firefox 3.0
already includes SQLite for the smart url bar they've got going on.


If you're interested, and you're reasonably good at mysql, and devising
database structures/schema then let's talk!


You will still want to talk to someone about getting the right schema in
place, however.

Kevin

[1] http://www.sqlite.org/



Hey Kevin,

One thought on that, there are a lot of existing WAMP installs out 
there. :) But overall I think you are probably right.


This might be worth a look http://www.freebyte.com/programming/database/

Thanks,

Eric


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: very simple but slow query

2008-05-20 Thread Eric Frazier
We use a subselect on an 8M+ row table because it takes better advantage
of indexes.


SELECT startip, endip FROM geodb a
 WHERE a.startip = (SELECT max(b.startip) FROM geodb b WHERE b.startip <=
3250648033) AND a.endip >= 3250648033;


startip and endip are INT(10) unsigned and unique keys.

This returns, on a fairly crappy old system in milliseconds after the 
table is loaded.


Carlo, What do your tables look like exactly, and what are you 
considering to be poor performance?


Look up the profiling flag, if you set that, you can get a detailed 
breakdown on the time spent in each query.


mysql> set profiling=1;
Query OK, 0 rows affected (0.00 sec)

mysql> select count(*) from visitor;
+--+
| count(*) |
+--+
|  152 |
+--+
1 row in set (0.00 sec)

mysql> show profile;
++--+
| Status | Duration |
++--+
| (initialization)   | 0.08 |
| checking query cache for query | 0.000232 |
| Opening tables | 0.48 |
| System lock| 0.25 |
| Table lock | 0.000125 |
| init   | 0.62 |
| optimizing | 0.34 |
| executing  | 0.000314 |
| end| 0.19 |
| query end  | 0.12 |
| storing result in query cache  | 0.000245 |
| freeing items  | 0.3  |
| closing tables | 0.23 |
| logging slow query | 0.11 |
++--+
14 rows in set (0.01 sec)


http://www.futhark.ch/mysql/122.html is a good tutorial on joining a table on
itself, which might be where you are going. Don't use cross joins; just
do some googling as to why.



Thanks,

Eric




Ananda Kumar wrote:

in mysql sub queries dont perform well.

You can could try this

SELECT a.ID
FROM ven_tes a, ven_tes b where a.id=b.id and b.id_ven=6573 .






On 5/20/08, Wakan <[EMAIL PROTECTED]> wrote:

Hi,
can someone could explain where are problems in this query:

EXPLAIN
SELECT ID
FROM ven_tes
WHERE ID IN (SELECT ID FROM ven_tes WHERE ID_ven=6573)
+++-+-++-+-+--+--+--+


| id | select_type| table   | type| possible_keys  |
key | key_len | ref  | rows | Extra|
+++-+-++-+-+--+--+--+


|  1 | PRIMARY| ven_tes | index   | NULL   |
PRIMARY |   4 | NULL | 6573 | Using where; Using index |
|  2 | DEPENDENT SUBQUERY | ven_tes | unique_subquery | PRIMARY,ID_ven |
PRIMARY |   4 | func |1 | Using index; Using where |
+++-+-++-+-+--+--+--+


as you can see, it doesn't use absolutely indexes on ven_tes (ID is the
primary key, ID_ven is index)

Thanks in advance
Carlo



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Query execution time - MySQL

2008-05-14 Thread Eric Frazier
> > ... you have any questions. Standing by, and I hope this helped you.
> > Sincerely, Craig Huffstetler (xq on FreeNode #mysql | #apache)
> >
> > On Wed, May 14, 2008 at 6:13 AM, Neil Tompkins <[EMAIL PROTECTED]> wrote:
> > > Hi, when performing a SQL query like SELECT Name FROM Customers, how
> > > do I obtain the time in which the query took to execute, like 1.5
> > > seconds etc? Thanks, Neil


Hi,

Not sure about this, but does profiling exist in 3? If so you could do set
profiling=1:


mysql> set profiling=1;
Query OK, 0 rows affected (0.00 sec)

mysql> select count(*) from visitor;
+--+
| count(*) |
+--+
|  2841878 |
+--+
1 row in set (0.00 sec)

mysql> show profile;
++---+
| Status | Duration  |
++---+
| (initialization)   | 0.192 |
| checking query cache for query | 0.42  |
| Opening tables | 0.24  |
| System lock| 0.202 |
| Table lock | 0.512 |
| init   | 0.24  |
| optimizing | 0.205 |
| executing  | 0.0001027 |
| end| 0.212 |
| query end  | 0.242 |
| storing result in query cache  | 0.0001452 |
| freeing items  | 0.215 |
| closing tables | 0.21  |
| logging slow query | 0.197 |
++---+
14 rows in set (0.00 sec)

And there is no reason you couldn't do this from your program as well. I
don't know what the possible performance impact would be; you don't get
time measurements for free unless they are taken from a source outside the
server.


Thanks,

Eric



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Database cache corrupted

2008-04-27 Thread Eric Bergen
Can you send us the exact zabbix error?

On Sun, Apr 27, 2008 at 6:18 PM, Sergio Belkin <[EMAIL PROTECTED]> wrote:
> So anyone had ever had any problem with database cache? :)
>
>  2008/4/25 Sergio Belkin <[EMAIL PROTECTED]>:
>
>
> > Hi, I am using zabbix (monitoring software) with mysql. zabbix goes
>  >  zombie and complains with messages suggesting that Database cache
>  >  perhaps is  corrupted. How can I check and fix it?  I am using Centos
>  >  5.1, cpu Intel(R) Xeon(R) CPU and 1 Gb of RAM.
>  >
>  >
>  >  my.cnf is as follows:
>  >
>  >  [client]
>  >  port= 3306
>  >  socket  = /var/lib/mysql/mysql.sock
>  >  [mysqld]
>  >  port= 3306
>  >  socket  = /var/lib/mysql/mysql.sock
>  >  back_log = 50
>  >  max_connections = 100
>  >  max_connect_errors = 10
>  >  table_cache = 1024
>  >  max_allowed_packet = 16M
>  >  binlog_cache_size = 1M
>  >  max_heap_table_size = 128M
>  >  sort_buffer_size = 8M
>  >  join_buffer_size = 3M
>  >  thread_cache_size = 8
>  >  thread_concurrency = 8
>  >  query_cache_size = 64M
>  >  query_cache_limit = 2M
>  >  ft_min_word_len = 4
>  >  default_table_type = InnoDB
>  >  thread_stack = 192K
>  >  transaction_isolation = REPEATABLE-READ
>  >  tmp_table_size = 64M
>  >  log_slow_queries = /var/log/mysqld/slow-query-log
>  >  long_query_time = 5
>  >  log_long_format
>  >  tmpdir = /tmp
>  >  log_queries_not_using_indexes = /var/log/mysqld/not-indexes.log
>  >  expire_logs_days = 2
>  >  server-id = 1
>  >  key_buffer_size = 8M
>  >  read_buffer_size = 2M
>  >  read_rnd_buffer_size = 16M
>  >  bulk_insert_buffer_size = 64M
>  >  myisam_sort_buffer_size = 128M
>  >  myisam_max_sort_file_size = 10G
>  >  myisam_max_extra_sort_file_size = 10G
>  >  myisam_repair_threads = 1
>  >  myisam_recover
>  >  skip-bdb
>  >  innodb_additional_mem_pool_size = 16M
>  >  innodb_buffer_pool_size = 600M
>  >  innodb_data_file_path = ibdata1:128M;ibdata2:50M:autoextend:max:12800M
>  >  innodb_file_io_threads = 4
>  >  innodb_thread_concurrency = 16
>  >  innodb_flush_log_at_trx_commit = 1
>  >  innodb_log_buffer_size = 8M
>  >  innodb_log_file_size = 256M
>  >  innodb_log_files_in_group = 3
>  >  innodb_max_dirty_pages_pct = 90
>  >  innodb_lock_wait_timeout = 120
>  >  [mysqldump]
>  >  quick
>  >  max_allowed_packet = 16M
>  >  [mysql]
>  >  no-auto-rehash
>  >  [isamchk]
>  >  key_buffer = 512M
>  >  sort_buffer_size = 512M
>  >  read_buffer = 8M
>  >  write_buffer = 8M
>  >  [myisamchk]
>  >  key_buffer = 512M
>  >  sort_buffer_size = 512M
>  >  read_buffer = 8M
>  >  write_buffer = 8M
>  >  [mysqlhotcopy]
>  >  interactive-timeout
>  >  [mysqld_safe]
>  >  open-files-limit = 8192
>  >
>  >  EOF
>  >
>  >  Thanks in advance!
>  >
>  >  --
>  >  --
>  >  Open Kairos http://www.openkairos.com
>  >  Watch More TV http://sebelk.blogspot.com
>  >  Sergio Belkin -
>  >
>
>
>
>  --
>  --
>  Open Kairos http://www.openkairos.com
>  Watch More TV http://sebelk.blogspot.com
>  Sergio Belkin -
>
>  --
>  MySQL General Mailing List
>  For list archives: http://lists.mysql.com/mysql
>  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>



-- 
high performance mysql consulting.
http://provenscaling.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Working with Images

2008-04-21 Thread Eric Bergen
Victor,

Please give us an example with the python removed and include the
actual syntax error.


-Eric

On Fri, Apr 18, 2008 at 8:41 AM, Victor Subervi <[EMAIL PROTECTED]> wrote:
> Hi;
>  The python code works properly, so I assume this is a strictly MySQL
>  question now :)
>  If I grab an image in the database thus:
>
>   sql = "select pic1 from products where id='" + str(id) + "';"
>   cursor.execute(sql)
>   pic1 = cursor.fetchall()[0][0].tostring()
>  #  pic1 = cursor.fetchall()[0][0]  // either this or the above line
>
>  and try and re-insert it thus:
>
>   cursor.execute('update products set pic1="%s" where id="%s", ;',
>  (pic1, id))
>
>  it tells me I have an error in my MySQL syntax. What is the error?
>  TIA,
>  Victor
>



-- 
high performance mysql consulting.
http://provenscaling.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: problems w/ Replication over the Internet

2008-04-21 Thread Eric Bergen
TCP checksums aren't as strong as encryption. It's rare but corruption
can happen.

Where are you reading the positions from and how are you taking the
snapshot to restore the slave?


On Mon, Apr 21, 2008 at 12:30 AM, Jan Kirchhoff <[EMAIL PROTECTED]> wrote:
> Eric Bergen schrieb:
>
> > Hi Jan,
>  >
>  > You have two separate issues here. First the issue with the link
>  > between the external slave and the master. Running mysql through
>  > something like stunnel may help with the connection and data loss
>  > issues.
>  >
>  I wonder how any corruption could happen on a TCP connection as TCP has
>  its own checksums and a connection would break down in case of a missing
>  packet?
>
> > The second problem is that your slave is corrupt. Duplicate key errors
>  > are sometimes caused by a corrupt table but more often by restarting
>  > replication from an incorrect binlog location. Try recloning the slave
>  > and starting replication again through stunnel.
>  >
>  The duplicate key errors happen after I start at the beginning of a
>  logfile (master_log_pos=0) when the positions that mysql reports as its
>  last positions is not working.
>
>  I think I have 2 issues:
>  #1: how can this kind of binlog corruption happen on a TCP link although
>  TCP has its checksums and resends lost packets?
>
>  #2: why does mysql report a master log position that is obviously wrong?
>  mysql  reports log-posion 172 which is not working at all in a "change
>  master to" command, my only option is to start with master_log_pos=0 and
>  the number of duplicate key errors and such that I have to skip after
>  starting from master_log_pos=0 shows me that the real position that
>  mysql has stopped processing the binlog must be something in the
>  thousands or tenthousands and not 172?!
>
>  Jan
>



-- 
high performance mysql consulting.
http://provenscaling.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Does version 4 mysqlcheck close MyISAM tables left open?

2008-04-20 Thread Eric Bergen
Nicole,

The tables left open warning is from the tables that were left open
when the server was rebooted. Internally mysql keeps a counter per
table of the number of clients that have a table open. When a table is
closed, this counter is decremented. If mysql is shut down improperly,
this counter doesn't get decremented. A mysqlcheck (repair table
query) will reset the counter.

-Eric

On Tue, Apr 15, 2008 at 1:12 PM, Garris, Nicole
<[EMAIL PROTECTED]> wrote:
> Unable to find this in the manual ...
>
>  Yesterday morning we rebooted the server by accident, which crashed and
>  restarted MySQL 4.1. Late last night a scheduled job ran mysqlcheck and
>  found 4 open tables. When I next ran mysqlcheck it found nothing wrong.
>
>  mysqlcheck command:
>  CHECK TABLE $DBTABLES $TYPE1 $TYPE2" | mysql --host=$DBHOST -t -u$USER
>  -p$PASSWORD $i
>  where $TYPE1 and $TYPE2 are empty.
>
>  warning message from the mysqlcheck command:
>  expressionengine_dof_public.exp_stats   check   warning 2 clients are
>  using or haven't closed the table properly
>
>



-- 
high performance mysql consulting.
http://provenscaling.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: problems w/ Replication over the Internet

2008-04-20 Thread Eric Bergen
Hi Jan,

You have two separate issues here. First the issue with the link
between the external slave and the master. Running mysql through
something like stunnel may help with the connection and data loss
issues.

The second problem is that your slave is corrupt. Duplicate key errors
are sometimes caused by a corrupt table but more often by restarting
replication from an incorrect binlog location. Try recloning the slave
and starting replication again through stunnel.

-Eric

On Tue, Apr 15, 2008 at 1:11 AM, Jan Kirchhoff <[EMAIL PROTECTED]> wrote:
> I have a setup with a master and a bunch of slaves in my LAN as well as
>  one external slave that is running on a Xen-Server on the internet.
>  All servers run Debian Linux and its mysql version 5.0.32
>  Binlogs are around 2 GB per day. I have no trouble at all with my local
>  slaves, but the external one hangs once every two days.
>  As this server has no "other" problems like crashing programs, kenrel
>  panics, corrupted files or such, I am pretty sure that the hardware is OK.
>
>  the slave's log:
>
>  Apr 15 06:39:19 db-extern mysqld[24884]: 080415  6:39:19 [ERROR] Error
>  reading packet from server: Lost connection to MySQL server during query
>  ( server_errno=2013)
>  Apr 15 06:39:19 db-extern mysqld[24884]: 080415  6:39:19 [Note] Slave
>  I/O thread: Failed reading log event, reconnecting to retry, log
>  'mysql-bin.045709' position 7334981
>  Apr 15 06:39:19 db-extern mysqld[24884]: 080415  6:39:19 [Note] Slave:
>  connected to master '[EMAIL PROTECTED]:1234',replication resumed in log
>  'mysql-bin.045709' at position 7334981
>  Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Error
>  in Log_event::read_log_event(): 'Event too big', data_len: 503316507,
>  event_type: 16
>  Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Error
>  reading relay log event: slave SQL thread aborted because of I/O error
>  Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Slave:
>  Could not parse relay log event entry. The possible reasons are: the
>  master's binary log is corrupted (you can check this by running
>  'mysqlbinlog' on the binary log), the slave's relay log is corrupted
>  (you can check this by running 'mysq
>  lbinlog' on the relay log), a network problem, or a bug in the master's
>  or slave's MySQL code. If you want to check the master's binary log or
>  slave's relay log, you will be able to know their names by issuing 'SHOW
>  SLAVE STATUS' on this slave. Error_code: 0
>  Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Error
>  running query, slave SQL thread aborted. Fix the problem, and restart
>  the slave SQL thread with "SLAVE START". We stopped at log
>  'mysql-bin.045709' position 172
>  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
>  I/O thread killed while reading event
>  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
>  I/O thread exiting, read up to log 'mysql-bin.045709', position 23801854
>  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
>  SQL thread initialized, starting replication in log 'mysql-bin.045709'
>  at position 172, relay log './db-extern-relay-bin.01' position: 4
>  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
>  I/O thread: connected to master '[EMAIL PROTECTED]:1234',  replication
>  started in log 'mysql-bin.045709' at position 172
>  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [ERROR] Error
>  reading packet from server: error reading log entry ( server_errno=1236)
>  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [ERROR] Got
>  fatal error 1236: 'error reading log entry' from master when reading
>  data from binary log
>  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
>  I/O thread exiting, read up to log 'mysql-bin.045709', position 172
>
>  slave start;
>  doesn't help.
>
>  slave stop, reset slave; change master to
>  master_log_file="mysql-bin.045709", master_log_pos=172;slave start
>  does not help either
>
>  the only way to get this up and running again is to do a change master
>  to master_log_file="mysql-bin.045709", master_log_pos=0 and use
>  sql_slave_skip_counter when I get duplicate key errors. this sucks.
>  When this problem occurs, the log positions are always a small number, I
>  would say less than 500.
>
>  I also get connection errors in the log from time to time, but it
>  recovers itself:
>  Apr 14 22:27:17 db-extern mysqld[24884]: 080414

Re: History of changed rows

2008-04-18 Thread Eric Frazier

C K wrote:

Hi all.
How can we manage the history of changed rows in the database? I have some
ideas but have not yet implemented them. One is to use triggers to make a copy
of the row being changed to another table in the same db. Is there any way to
only save the changed fields' data and field names? Any other ideas?
Thanks
CPK


How about mysqlbinlog? :)
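
(For the trigger idea mentioned in the question, a minimal audit-trigger sketch;
the table and column names are made up, it copies the whole old row rather than
only the changed fields, and it assumes MySQL 5.0+ for triggers:)

  CREATE TABLE orders_history (
    changed_at DATETIME NOT NULL,
    id         INT NOT NULL,
    status     VARCHAR(20),
    amount     DECIMAL(10,2)
  );

  CREATE TRIGGER orders_audit BEFORE UPDATE ON orders
  FOR EACH ROW
    INSERT INTO orders_history (changed_at, id, status, amount)
    VALUES (NOW(), OLD.id, OLD.status, OLD.amount);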


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Innodb vs myisam

2008-04-06 Thread Eric Bergen
I don't see what the issue is. As Jay said the row counts in explain
outputs are estimates. When running an explain query MySQL asks the
storage engine how many rows it thinks are between a set of values for
an index. Different storage engines use different methods to calculate
row count. Both innodb and myisam estimate the row count  based on
statistics they keep on the distribution of keys in an index. MyISAM
is more accurate than innodb with it's row count because of how it
keeps statistics. Analyze table on a myisam table will count the
number of unique values in an index
(myisam/mi_check:update_key_parts). Innodb samples the key
distribution in 8 different pages per index and does some calculations
based on the tree structure of those pages (details
innobase/btr/btr0cur.c:btr_estimate_number_of_different_key_vals).
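
(A quick way to see the difference in practice, reusing the table name from the
quoted thread below; a sketch only, and on InnoDB the estimate can change
between runs because the statistics are re-sampled:)

  ANALYZE TABLE user_course_payment;
  EXPLAIN SELECT user_id FROM user_course_payment;
  -- the "rows" column above is the engine's estimate
  SELECT COUNT(*) FROM user_course_payment;
  -- the actual row count, for comparison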


On Sun, Apr 6, 2008 at 8:49 PM, Moon's Father <[EMAIL PROTECTED]> wrote:
> Just waiting for any reply .
>
>
>
>  On Thu, Apr 3, 2008 at 11:01 PM, Jay Pipes <[EMAIL PROTECTED]> wrote:
>
>  > Please actually read my reply before asking the same question.  As I
>  > stated, InnoDB outputs *estimated* row counts in EXPLAIN, whereas MyISAM
>  > outputs *accurate* row counts.
>  >
>  > -jay
>  >
>  > Krishna Chandra Prajapati wrote:
>  >
>  > > Hi,
>  > >
>  > > On myisam storage system
>  > >
>  > > mysql> explain select ui.user_id, ucp.user_id,ucp.payment_date from
>  > > user_info ui, user_course_payment ucp where ui.user_id=ucp.user_id;
>  > >
>  > > | id | select_type | table | type   | possible_keys                 | key                   | key_len | ref             | rows     | Extra       |
>  > > |  1 | SIMPLE      | ucp   | index  | PRIMARY,user_course_pay_comp1 | user_course_pay_comp1 | 30      | NULL            | *256721* | Using index |
>  > > |  1 | SIMPLE      | ui    | eq_ref | PRIMARY                       | PRIMARY               | 10      | dip.ucp.user_id |        1 | Using index |
>  > > 2 rows in set (0.00 sec)
>  > >
>  > >
>  > > On innodb storage system
>  > >
>  > > mysql>  explain select ui.user_id, ucp.user_id,ucp.payment_date from
>  > > user_info ui, user_course_payment ucp where ui.user_id=ucp.user_id;
>  > >
>  > > | id | select_type | table | type   | possible_keys                 | key                     | key_len | ref             | rows     | Extra       |
>  > > |  1 | SIMPLE      | ucp   | index  | PRIMARY,user_course_pay_comp1 | idx_user_course_payment | 9       | NULL            | *256519* | Using index |
>  > > |  1 | SIMPLE      | ui    | eq_ref | PRIMARY                       | PRIMARY                 | 10      | dip.ucp.user_id |        1 | Using index |
>  > > 2 rows in set (0.00 sec)
>  > >
>  > > I have executed ANALYZE TABLE COMMAND on both the system (innodb and
>  > > myisam)
>  > > Yet there is a small difference. Highlighted in red color
>  > >
>  > > Is it the behavior of myisam or innodb or interal working of the storage
>  > > engines.
>  > >
>  > > Thanks,
>  > > Krishna
>  > >
>  > >
>  > >
>  > >
>  > > On Wed, Apr 2, 2008 at 9:06 PM, Rob Wultsch <[EMAIL PROTECTED]> wrote:
>  > >
>  > >  On Wed, Apr 2, 2008 at 5:06 AM, Krishna Chandra Prajapati <
>  > > > [EMAIL PROTECTED]> wrote:
>  > > >
>  > > >  Horribly ugly stuff
>  > > > >
>  > > > >  I know I sure as heck am not going to spend half an hour to turn
>  > > > those
>  > > > queries into something understandable, and I expect no one else will
>  > > > either.  If you want help please remove all extraneous details  (turn
>  > > > table
>  > > > and columns names in t1,t2,col1,col2, etc or descriptive names like
>  > > > parent,
>  > > > child, datetime_end)  and send out something that is easy to
>  > > > reproduce. You
>  > > > get a cupcake if you include ddl that populates itself with random
>  > > > data.
>  > > >
>  > > > Also, using \G instead of a semicolon will make database output a
>  > > > heck of a lot easier to read in email fo

Re: only myisam storage engine

2008-04-01 Thread Eric Bergen
You can set the skip-innodb option in my.cnf

-Eric
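
(A minimal my.cnf sketch of that; the default-storage-engine line is an
optional extra, not something from this thread:)

  [mysqld]
  skip-innodb
  default-storage-engine = MyISAM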

On Tue, Apr 1, 2008 at 9:28 PM, Ananda Kumar <[EMAIL PROTECTED]> wrote:
> Hi All,
>  We have a linux box running debain, with 8 cpu and 8 GB RAM, we want only
>  myisam engine to be running on this.
>
>  So, should i not setup any innodb parameters or is there any other way to
>  have only myisam engine running on this machine, please let me know.
>
>  regards
>  anandkl
>



-- 
high performance mysql consulting.
http://provenscaling.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Connections on Database

2008-03-31 Thread Eric Bergen
Oops. I was wrong on that one. max_user_connections is different from
global max_connections and max_connections per user.
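
(To make the distinction concrete, a sketch; the numbers and the account name
are made up:)

  # server-wide ceiling on simultaneous connections, plus the
  # per-account simultaneous limit (the system variable):
  [mysqld]
  max_connections = 50
  max_user_connections = 10

  -- the per-hour and per-account limits can also go in the grant:
  GRANT USAGE ON *.* TO 'app'@'%'
    WITH MAX_USER_CONNECTIONS 5 MAX_CONNECTIONS_PER_HOUR 100;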

2008/3/31 Eric Bergen <[EMAIL PROTECTED]>:
> You probably only want max_connections and not max_user_connections.
>  Max_user_connections is the number of times a user can connect per
>  hour..
>
>  -Eric
>
>  2008/3/31 Vidal Garza <[EMAIL PROTECTED]>:
>
>
> > Velen wrote:
>  >
>  >
>  > > Hi,
>  >  >
>  >  > How can i limit connections to my database?
>  >  >
>  >  > Regards,
>  >  >
>  >  > Velen
>  >  >
>  >  max_user_connections = 50
>  >  max_connections = 50
>  >
>  >
>  >  --
>  >  
>  >  Ing. Vidal Garza Tirado
>  >  Depto. Sistemas
>  >  Aduanet S.A. de C.V.
>  >  Tel. (867)711-5850 ext. 4346, Fax (867)711-5855.
>  >  Ave. César López de Lara No. 3603 Int. B Col Jardín.
>  >  Nuevo Laredo, Tamaulipas, México.
>  >
>  >
>  >
>  >  --
>  >  This message has been scanned by MailScanner
>  >  for viruses and other dangerous content,
>  >  and is believed to be clean.
>  >  For all your IT requirements visit: http://www.aduanet.net
>  >
>  >
>  >
>  >
>  >  --
>  >  MySQL General Mailing List
>  >  For list archives: http://lists.mysql.com/mysql
>  >  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>  >
>  >
>
>
>
>  --
>  high performance mysql consulting.
>  http://provenscaling.com
>



-- 
high performance mysql consulting.
http://provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Connections on Database

2008-03-31 Thread Eric Bergen
You probably only want max_connections and not max_user_connections.
Max_user_connections is the number of times a user can connect per
hour..

-Eric

2008/3/31 Vidal Garza <[EMAIL PROTECTED]>:
> Velen wrote:
>
>
> > Hi,
>  >
>  > How can i limit connections to my database?
>  >
>  > Regards,
>  >
>  > Velen
>  >
>  max_user_connections = 50
>  max_connections = 50
>
>
>  --
>  
>  Ing. Vidal Garza Tirado
>  Depto. Sistemas
>  Aduanet S.A. de C.V.
>  Tel. (867)711-5850 ext. 4346, Fax (867)711-5855.
>  Ave. César López de Lara No. 3603 Int. B Col Jardín.
>  Nuevo Laredo, Tamaulipas, México.
>
>
>
>  --
>  This message has been scanned by MailScanner
>  for viruses and other dangerous content,
>  and is believed to be clean.
>  For all your IT requirements visit: http://www.aduanet.net
>
>
>
>
>  --
>  MySQL General Mailing List
>  For list archives: http://lists.mysql.com/mysql
>  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>



-- 
high performance mysql consulting.
http://provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: The Use database command is too slow

2008-02-02 Thread Eric Bergen
I think a better question is why do you have one database per user?

-Eric
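
(As a side note on the auto-rehash advice quoted below: the same thing can be
made the default in the client config. A minimal sketch, assuming a per-user
~/.my.cnf:)

  [mysql]
  no-auto-rehash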

On 2/2/08, imad <[EMAIL PROTECTED]> wrote:
> I am not connecting through MySQL. I am connecting through PHP. How
> can I speed it up?
>
>
>
>
> On Feb 3, 2008 12:20 AM, Saravanan <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > When you connect using mysql, auto-rehash is enabled by default. It will 
> > read and open the tables in the database when you try to use the database. 
> > It may take time for a database with many tables. Use mysql with -A or 
> > --no-auto-rehash.
> >
> > shell> mysql -uroot -A
> > shell> mysql -uroot --skip-auto-rehash
> >
> > Saravanan
> > MySQL DBA
> >
> >
> > --- On Sat, 2/2/08, imad <[EMAIL PROTECTED]> wrote:
> >
> > > From: imad <[EMAIL PROTECTED]>
> > > Subject: The Use database command is too slow
> > > To: mysql@lists.mysql.com
> > > Date: Saturday, February 2, 2008, 6:14 AM
> >
> > > Hello,
> > >
> > > I have like 50K databases on my server, one for every user.
> > > I am experiencing a delay in the command 'use database'
> > > which is issued by
> > > every user when he connects to the database.
> > >
> > > Can anyone here tell me what is my problem and how can I
> > > speed it up.
> > >
> > > thanks
> > >
> > > --
> > > MySQL General Mailing List
> > > For list archives: http://lists.mysql.com/mysql
> > > To unsubscribe:
> > > http://lists.mysql.com/[EMAIL PROTECTED]
> >
> >
> >   
> > 
> > Be a better friend, newshound, and
> > know-it-all with Yahoo! Mobile.  Try it now.  
> > http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ
> >
> >
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>


-- 
high performance mysql consulting.
http://provenscaling.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Performance problem - MySQL at 99.9% CPU

2008-01-02 Thread Eric Frazier
-Original Message-
From: Per Jessen [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 02, 2008 7:51 AM
To: mysql@lists.mysql.com
Subject: Re: Performance problem - MySQL at 99.9% CPU

Gunnar R. wrote:

> I am thinking about buying a new dual core box (with IDE disks?), but 
> I have to make sure this really is a hardware issue before I spend 
> thousands of bucks.

I think you've got an application problem somewhere which you should look
into first.  Hardware-wise I think you're doing fine, except you could
probably increase overall performance with more memory.  MySQL is pretty
good at query-caching.

Just for general info: I tested HEAP tables vs. the query cache, and the query
cache won, which made a lot of sense once I saw why. Even in-memory tables
can't be as fast (for queries already in the cache) because of the cost of
parsing and optimizing the query; the query cache is basically a fast in-memory
hash lookup. However, if you have a system that doesn't have a lot of
repetitive queries, the HEAP table would win again, which also makes sense,
but my little test showed the query cache is pretty good for most things.
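
(A quick way to check whether the query cache is actually earning its keep on a
given server; a sketch using the standard status counters:)

  SHOW VARIABLES LIKE 'query_cache%';
  SHOW STATUS LIKE 'Qcache%';
  -- compare Qcache_hits against Com_select to gauge the hit rate
  SHOW STATUS LIKE 'Com_select';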



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Cross database joins

2007-11-26 Thread Eric Frazier

Hi,

I found one thread on this that included some people's opinions, but I 
haven't been able to find anyone who has actually done some performance 
testing to see if there is a cost and what that cost is to doing cross 
database joins. I do tend to want to keep everything in one DB, but it 
gets hard when you have databases that do cross over at times, but 
rarely. Of course I am being somewhat lazy in doing this post, but only 
because I think someone here *must* have already done some testing 
between cross db joins and inside db joins. Another point of interest is 
if DBI actually opens another connection or not. I saw one mention of a 
worry about that, but as I understand it, you can refer to any table in 
any database from any mysql connection that has permission to access 
that DB and table, so you should be able to access any DB from any 
initial connection.
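
(For what it's worth, a cross-database join is just a matter of qualifying the
table names on one connection; a sketch with made-up database and table names:)

  SELECT o.id, n.note
  FROM shop.orders AS o
  JOIN audit.order_notes AS n ON n.order_id = o.id;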


Thanks,

Eric



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

RE: Unusual sort

2007-11-24 Thread Eric Frazier
Hi,

One thought: it might be a good idea to make a trigger/procedure that fills in
the separate index field, so you can forget about it from here on.
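
(Along those lines, a minimal sketch; the table and column names are made up,
and the trigger only covers inserts:)

  ALTER TABLE hosts ADD sort_key VARCHAR(255);
  UPDATE hosts SET sort_key = TRIM(LEADING 'www.' FROM hostname);

  CREATE TRIGGER hosts_sort_key BEFORE INSERT ON hosts
  FOR EACH ROW SET NEW.sort_key = TRIM(LEADING 'www.' FROM NEW.hostname);

  SELECT hostname FROM hosts ORDER BY sort_key, hostname;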


-Original Message-
From: Martin Gainty [mailto:[EMAIL PROTECTED] 
Sent: Saturday, November 24, 2007 11:18 AM
To: Jim; mysql@lists.mysql.com
Subject: RE: Unusual sort


Hi Jim
 
it seems that you cannot create an index with a function, so you will need to
establish a separate 12-character column which has all of the URL entries
inserted alphabetically in ascending order (fully padded with the www. prefix).
Back up your DB, then:

ALTER TABLE table ADD TwelveCharacterURL CHAR(12),
  ADD INDEX 12CharacterIndex (TwelveCharacterURL);
UPDATE table SET TwelveCharacterURL = LPAD(OldURLColumn, 12, 'www.');

Anyone else?
Martin

> From: [EMAIL PROTECTED]
> To: mysql@lists.mysql.com
> Subject: Unusual sort
> Date: Fri, 23 Nov 2007 16:29:50 -0700
>
> I have a table containing web site host names, most of them having both a
> "name.com" and "www.name.com" version, that I'd like sorted in the
> following manner:
>
> axxx.com
> www.axxx.com
> bxxx.com
> www.bxxx.com
> wxxx.com
> www.wxxx.com
> zxxx.com
> www.zxxx.com
>
> Any way to do this?
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Giant database vs unlimited databases

2007-11-20 Thread Eric Frazier

Russell E Glaue wrote:

No one probably wants to go through the trouble to code this solution but it
is possible to use MySQL Proxy to filter the SQL statements and results.

MySQL Proxy sits in between MySQL Server and the MySQL Client.
It can read queries, modify them, send queries to the server or deny them all
together, and even read results and modify them as well, or deny the results to
be sent back to the client.

Perhaps if you can resolve to a less complicated set up, but still lean towards
the VPD idea, MySQL Proxy might work for you.

I just wanted to throw this solution out in case it was useful.
-RG


Hi Russel,

That sounds like a cool idea and makes sense. That is what made me feel 
oogy about the idea of trying to do something like this with MySQL 
privileges. I read tons of things that say the real auth layer should be 
separate. And that VPD example was a good example of how fine grained 
and therefore complex auth schemes can get. I would guess that following 
your idea further, it could end up being more scalable (sorry, I hate that 
word, it is so overused), that is, easier to change and upgrade.


I am interested in the many dbs vs one big db issue because I followed 
the many db choice at one time. It did make sense because I could answer 
all three of the questions in my previous post a big YES. But, it was a 
lot of extra work, esp over time. I also discovered you can do cross DB 
joins, but that makes some DBAs shriek in horror :) As it should.


Thanks,

Eric


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Giant database vs unlimited databases

2007-11-20 Thread Eric Frazier

Mohammad wrk wrote:

Hi Eric,

In the case of  a "yes" answer to the second question below, can't we 
still use something like VPD (Virtual Private Database) in MySQL?


Thanks,

Mohammad



Hi,

I don't know much about Oracle, but I looked this up. MySQL can't do it, 
that I know of.  Here is the final example of such a setup that I found 
at http://www.oracle-base.com/articles/8i/VirtualPrivateDatabases.php


CONNECT user1/[EMAIL PROTECTED];
INSERT INTO schemaowner.user_data (column1, user_id)
  VALUES ('User1', 1);
INSERT INTO schemaowner.user_data (column1, user_id)
  VALUES ('User2', 2);

COMMIT;

CONNECT user2/[EMAIL PROTECTED]
INSERT INTO schemaowner.user_data (column1, user_id)
VALUES ('User 1', 1);
INSERT INTO schemaowner.user_data (column1, user_id)
VALUES ('User 2', 2);
COMMIT;

CONNECT schemaowner/[EMAIL PROTECTED]
SELECT * FROM schemaowner.user_data;
CONNECT user1/[EMAIL PROTECTED];
SELECT * FROM schemaowner.user_data;
CONNECT user2/[EMAIL PROTECTED]
SELECT * FROM schemaowner.user_data;

Notice that:

* When connected to USER1, only the first insert will work.
* When connected to USER2, only the second insert will work.
* The failing inserts produce the error:
ORA-28115: policy with check option violation


You can set up column-level privileges in MySQL, but I wonder if it would 
be buggy considering I have never heard of anyone doing this before. 
Plus, from what I understand, the above example is a lot more than column 
privileges: user1 can only insert data if the insert statement's data 
sets user_id to 1, for example. Pretty cool, but scary in a way. I find 
this much logic in the DB to be scary (esp. if not well documented), but 
then I use MySQL :)
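
(For completeness, column privileges in MySQL look like the sketch below,
reusing the table and column names from the Oracle example above; note this
only restricts which columns a user may touch, not which values they may
write:)

  GRANT SELECT (user_id, column1), INSERT (user_id, column1)
    ON schemaowner.user_data
    TO 'user1'@'localhost';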


As to which way you should go, that is most definitely a matter of opinion I 
think. But, going back to my opinion (which is not at all informed as to 
all the details), if question 2 is a YES, then I would tend to go with 
separate DBs.


Thanks for the Oracle lesson :)

Thanks,

Eric









- Original Message 
From: Eric Frazier <[EMAIL PROTECTED]>
To: Mohammad wrk <[EMAIL PROTECTED]>
Cc: mysql@lists.mysql.com
Sent: Monday, November 19, 2007 7:42:13 AM
Subject: Re: Giant database vs unlimited databases

Mohammad wrk wrote:
 > Hi,
 >
 > I'm working on a web 2.0 project that targets small to medium-size 
companies for providing business services. Companies simply register to 
the site and then start their business by loading their data, sharing 
and discussing them with others.

 >
 > The design/architectural decision now we are facing from database 
perspective is how we should store companies' specific data? One way is 
to put all of them in a single database and partition them by company-id 
and the other one is to create, on the fly,  a new database per company 
. The justification for the latter is that MySQL is not powerful enough 
(compared to Oracle or DB2) to handle large amounts of data and concurrent 
users.

 >
 > I'm new to MySQL and don't know that much about it and this is why 
I'd like to discuss this concern here.

 >

Funny, I thought you asked the question, should I separate my customers
into their own databases, or use one big DB? Not MySQL sucks, Oracle is
better. :)

Issues I would ask about on this:

1. Is there a chance that given their separation, these DBs will ever
diverge in design because of differences between customers?
2. Could they ever need to be separated for legal reasons? (like one bad
query causing customer data to be compromised)
3. Is there any other reason you may do something vastly different from
one customer to another?

If you answer yes to any of these, then you might be best off separating
dbs. But, if you never want to, or expect for any of these things to
happen, you will just be creating headaches for yourself. Backup,
replication, and the need for cross DB queries, will all be a pain in
comparison to a single DB.

I am sure there is more to consider, but these are the points that come
to my mind right away.

Thanks,

Eric



Instant message from any web browser! Try the new * Yahoo! Canada 
Messenger for the Web BETA* 
<http://ca.messenger.yahoo.com/webmessengerpromo.php>




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Giant database vs unlimited databases

2007-11-19 Thread Eric Frazier

Mohammad wrk wrote:

Hi,

I'm working on a web 2.0 project that targets small to medium-size companies 
for providing business services. Companies simply register to the site and then 
start their business by loading their data, sharing and discussing them with 
others.

The design/architectural decision now we are facing from database perspective 
is how we should store companies' specific data? One way is to put all of them 
in a single database and partition them by company-id and the other one is to 
create, on the fly,  a new database per company . The justification for the 
latter is that MySQL is not powerful enough (compared to Oracle or DB2) to 
handle large amounts of data and concurrent users.

I'm new to MySQL and don't know that much about it and this is why I'd like to discuss this concern here. 



Funny, I thought you asked the question, should I separate my customers 
into their own databases, or use one big DB? Not MySQL sucks, Oracle is 
better. :)


Issues I would ask about on this:

1. Is there a chance that given their separation, these DBs will ever 
diverge in design because of differences between customers?
2. Could they ever need to be separated for legal reasons? (like one bad 
query causing customer data to be compromised)
3. Is there any other reason you may do something vastly different from 
one customer to another?


If you answer yes to any of these, then you might be best off separating 
dbs. But, if you never want to, or expect for any of these things to 
happen, you will just be creating headaches for yourself. Backup, 
replication, and the need for cross DB queries, will all be a pain in 
comparison to a single DB.


I am sure there is more to consider, but these are the points that come 
to my mind right away.


Thanks,

Eric


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Replication and AUTO_INCREMENT; is it safe?

2007-10-24 Thread Eric Frazier

On 10/24/07, Eric Frazier <[EMAIL PROTECTED]> wrote:

js wrote:


Hi list,

Reading How AUTO_INCREMENT Handling Works in InnoDB[1] makes me
wonder how is it possible to replicate AUTO_INCREMENTed value to slaves.

According to the doc,

"If you specify an AUTO_INCREMENT column for an InnoDB table, the
table handle in the InnoDB data dictionary contains a special counter
called the auto-increment counter that is used in assigning new values
for the column. This counter is stored only in main memory, not on
disk."

Let's say there are two server, A and B. A replicates its data to B, the slave.
A and B has a table that looks like(column 'id' is auto_increment field)

id value
1  a
2  b
3  c
4  d

If After "delete from table where id = 4" and restart mysqld on server B,
"insert into table (value) values(e)" is executed on server A.

In this case, because A's internal counter is 4, table on A would be
1 a
2 b
3 c
5 e

But B's would be different because restarting mysqld flushed InnoDB's
internal counter.
1 a
2 b
3 c
4 e

Is this correct?
or MySQL is smart enough to handle this problem?

Thanks.

[1]http://dev.mysql.com/doc/refman/4.1/en/innodb-auto-increment-handling.html


  

http://dev.mysql.com/doc/refman/5.0/en/faqs-general.html  See 28.1.5

But there are more reasons to avoid auto-increment in mysql. I haven't
run into the problem above, but I have had such problems when restoring
backups. Make your data make sense; a mindless counting number just to
make a table unique doesn't ever make any sense. Session ids,
timestamps, combinations of fields all make much better primary keys and
it is safer overall to implement a "counter" function in your app than
to trust mysql's


js wrote:


Thank you for your reply.

But I couldn't under stand how --auto-increment-increment and
--auto-increment-offset
helps me avoid my problem.

Could you please explain?


Restarting the server doesn't reset autoinc. But that can happen when 
you restore a backup; I don't remember what to avoid off the top of my 
head, but look into mysqldump and do some tests. That is the best way to 
understand it. But you can avoid any problem with autoinc by just not 
using it. If you must use it with replication, it is quite safe as long as 
you are only replicating to a read-only slave, so the slave is not 
also another master (you are not doing inserts/updates on the slave as 
well); or, if you need to replicate in a circle, use 
auto-increment-increment and auto-increment-offset. I think it is not a 
bad idea to use these even if your slave is just a slave.
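
(A minimal my.cnf sketch of the auto-increment-increment idea for two servers
that both take writes; the values are just an example:)

  # on server A
  [mysqld]
  auto_increment_increment = 2
  auto_increment_offset    = 1

  # on server B
  [mysqld]
  auto_increment_increment = 2
  auto_increment_offset    = 2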


Bottom line, if you are designing a DB, for max safety avoid autoinc 
entirely. It will save you headaches for a little extra work to start. 
This is one area where MySQL still deserves some jeering because 
Postgres had this figured out a long time ago with proper sequences 
that are a lot easier to manage. With all of the features and cool stuff 
MySQL has added in the last few years, I don't get why they haven't 
fixed autoinc or added a true sequence type.


Eric

















  




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Saving space disk (innodb)

2007-10-10 Thread Eric Frazier

Andrew Carlson wrote:

If you do what Baron suggests, you may want to set Innodb to create a
file-per-table - that way, in the future, you could save space when tables
are dropped, or you could recreate innodb tables individually to save space,
not have to dump all your innodb tables at one time.

  

I think this is a fantastic idea. So you would

- do your DB dump (horrible with hundreds of gigs)
- reset your my.cnf setting to include:

[mysqld]
innodb_file_per_table

- stop the db

- kill off the existing tablespace files

- restart the DB

- recreate the database and import your dump.

http://dev.mysql.com/doc/refman/5.0/en/multiple-tablespaces.html
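
(A command-line sketch of the same procedure; the file names are the InnoDB
defaults and may differ, and keep the old files around until the reload is
verified:)

  mysqldump --all-databases > full_dump.sql
  # add innodb_file_per_table under [mysqld] and stop mysqld, then:
  rm ibdata1 ib_logfile0 ib_logfile1
  # restart mysqld so it creates fresh (empty) files, then reload:
  mysql < full_dump.sql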

So the only other question is what is the cost if any? It is a good idea 
because often there are just a few tables that get really big and this 
is a nice way to deal with them separately like you would with MyISAM.


Eric


On 10/10/07, Baron Schwartz <[EMAIL PROTECTED]> wrote:
  

Hi,

Tiago Cruz wrote:


Hello guys,

I have one monster database running on MySQL 4.0.17, using InnoDB:

270GB Oct 10 14:35 ibdata1


I've deleted a lot of register of then, and I've expected that the size
can be decreased if 50% (135 GB) but the ibdata was the same value than
before "clean"...

How can I force to save this space?
  

You must dump your data to files, shut down MySQL, delete your current
InnoDB tablespace and log files, reconfigure the server, restart MySQL
and let InnoDB create new (empty) files.  Then reload the data.

You should probably save your current data and tablespace files until
you are sure you complete this successfully.

It's an annoying procedure but there is no other way.

Baron

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






  




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Saving space disk (innodb)

2007-10-10 Thread Eric Frazier

Dan Rogart wrote:

OPTIMIZE TABLE should reclaim that space, but be aware that it could
take a while to run (locking your table all the while) since it just
maps to an ALTER TABLE statement which creates a new copy of the table.
Depends on how big your tables are.

Doc:  http://dev.mysql.com/doc/refman/4.1/en/optimize-table.html


  
He has InnoDB tables and that doesn't reclaim tablespace. He wants to 
get back disk space from his data files.
" As of 4.1.3, |OPTIMIZE TABLE| is mapped to |ALTER TABLE|, which 
rebuilds the table to update index statistics and free unused space in 
the clustered index"
But that just means he has empty space in his tablespace :) At least 
that is how I read it, so Baron's suggestion makes the most sense.


Eric




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: DB Schema Comparison Utility ?

2007-10-03 Thread Eric Frazier

Daevid Vincent wrote:

This has been asked for many many times on this list, not sure why mySQL AB
doesn't just release a command line tool like a 'mysql diff' and also a
'mysql lint'. The lint one should be totally trivial for them to do, as they
already have a SQL parser! I can't tell you how many times our daily build
was broken by a missing semi-colon or some other SQL syntax error. We run
all commits through "php -l" and ruby's checker, but mysql is the only one
we have to sweat over.

While I'm glad that pretty GUI tools like Upscene's exist, that doesn't do
us any good on a linux build system where it does an "svn checkout", runs
automated BVT tests, compiles code, uploads to a daily build directory, etc.

We need command line tools that run on linux.

:( 
  
This is not quite what you were asking for, but I found this yesterday: 
http://sourceforge.net/projects/mysqltoolkit


I think the guy has done a lot of really good work.

Thanks,

Eric

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slave Bin Log Question

2007-09-24 Thread Eric Frazier

Boyd Hemphill wrote:

I have executed a strategy for backup where I stop a slave and do a mysqldump 
with --master-data.  Both master and slave are 4.1.20
 
My assumption was that the log coordinates in the dump file would provide me with the place to replay the log for a point in time recovery.
 
What I learned today however is that it appears the binary log only captures statements run directly on the slave.  Any SQL run by the SQL thread of replication seems only to go in the relay log.  This effectively renders the --master-data useless for my purpose.
 
So, I have two questions.

1.  Can someone verify that the binary log on the slave is not capturing SQL 
from the replication SQL thread.
  


It sounds like you need --log-slave-updates

http://dev.mysql.com/doc/refman/5.0/en/replication-options.html
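
(A minimal sketch of the slave's my.cnf for that; log-bin has to be enabled for
--log-slave-updates to have anywhere to write:)

  [mysqld]
  log-bin = mysql-bin
  log-slave-updates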

2.  If the above is really true, what strategies are you using to backup InnoDB 
without InnoDB hot backup?
 
Thanks for your time!
 
Peace

Boyd







CONFIDENTIALITY NOTICE: This email & attached documents may contain confidential information. All information is intended only for the use of the named recipient. If you are not the named recipient, you are not authorized to read, disclose, copy, distribute or take any action in reliance on the information and any action other than immediate delivery to the named recipient is strictly prohibited. If you have received this email in error, do not read the information and please immediately notify sender by telephone to arrange for a return of the original documents. If you are the named recipient you are not authorized to reveal any of this information to any other unauthorized person. If you did not receive all pages listed or if pages are not legible, please immediately notify sender by phone. 







  



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


