Re: HA Scalability w MySQL + SAN + VMWare: Architecture Suggestion Wanted

2012-04-03 Thread Ian
On 03/04/2012 00:47, Wes Modes wrote:
 Thanks again for sharing your knowledge.  I do believe the answers I've
 been receiving, but since I have requirements that I cannot easily alter, I'm
 also gently pushing my expert advisers here to look beyond their own
 preferences and direct experience.
 
 RE: Shared storage.  I can easily let go of the preference to take
 advantage of shared storage.  I understand duplicated databases are the
 essence of database redundancy.  You make good points.
 
 In terms of the acceptability of a small fraction of users being
 temporarily unable to access services:  rather than sharding, which
 again requires more control over the application than we have, I was
 envisioning that this would be the fraction of users who hit the one
 peer MySQL server that is temporarily unavailable due to h/w or s/w
 failure or DB corruption while its failover is powered up.
 
 Does MySQL Cluster seem like it will address my requirements, allowing us
 to horizontally scale a number of MySQL nodes as peers without
 separating reads and writes, or slaves and masters?
 
 Wes

Hi Wes,

If you can't alter the application to split reads and writes, why not
let MySQL Proxy do it for you?

http://forge.mysql.com/wiki/MySQL_Proxy

Combine this with haproxy and you could build a multi-master environment
with each master having any number of slaves.  Set MySQL Proxy to send
writes to the masters and reads to the slaves.
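A minimal command-line sketch of that setup (host names are hypothetical; the rw-splitting.lua script ships with MySQL Proxy, though its path varies by distribution):

```shell
mysql-proxy \
  --proxy-address=:4040 \
  --proxy-backend-addresses=master1.example.com:3306 \
  --proxy-read-only-backend-addresses=slave1.example.com:3306 \
  --proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua
```

The application then connects to port 4040 instead of 3306, and haproxy can sit in front of several such proxies for redundancy.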

Regards

Ian

 On 4/2/2012 2:25 PM, shawn green wrote:
 Hello Wes,

 On 4/2/2012 4:05 PM, Wes Modes wrote:
 Thanks Shawn and Karen, for the suggestions, even given my vague
 requirements.

 To clarify some of my requirements.

 *Application:  *We are using an open-source application called Omeka,
 which is a free, flexible, and open source web-publishing platform for
 the display of library, museum, archives, and scholarly collections and
 exhibitions.  Without getting into how free (or scalable) free software
 really is, we can view it as one aspect we cannot change, having been
 written into the grant requirements we received for the project.
 Experienced Omeka developers and our own developer have suggested
 that/it is not feasible to separate database writes from reads in the
 application/ (given time and resources).


 That's a shame. Sounds like you are back to one big server or several
 smaller servers with in-program sharding.

 *SAN: *The SAN is a Dell EqualLogic 6100 which has redundant everything,
 including multiple NICs, controllers, and power.  So we are less
 concerned about the SAN being a SPoF.  On the other hand, if we have a
 single big MySQL server that fails, we could bring up another copy of it
 via VMWare, but until the server came up, the application would be dead
 in the water.  If the database is corrupted, service will be interrupted
 for a considerable time.


 Again, each MySQL instance needs its own copy of the data. Having
 only one big powerful disk system means that each instance you fire up
 must both share spindles and networking to access its data. Just like
 a freeway at rush hour, you may find the traffic into and out of this
 one device crawling to a halt exactly when you don't want it to.

 *High Availability:*  It sounds like there is some debate over how to
 provide HA best, but do people really disagree on the desired results?
 Without getting into the many meanings of this buzz word, here's what we
 mean: /We desire to maintain high availability of service, allowing a
 small fraction of users to experience outage for only seconds at a
 time.  We desire to provide this through horizontal scaling, redundancy,
 failover planning, and external monitoring.  /


 Small fraction of users - this implies data sharding: multiple MySQL
 instances, each with enough data to operate independently for one slice
 of your most important data, and an application smart enough to know
 which shard to go to for each slice of data.

 For a few seconds at a time - you do not want a shared disk. Should
 the active MySQL die, its data will be in an inconsistent state. Once
 you fire up the passive daemon it will need to perform a recovery
 restart. This down time is more than likely not going to take only a
 few seconds. The more data you have, the longer the checks will take. 
 An independent copy maintained by a slave instance provides a
 logically consistent copy of the master's data as it will only
 replicate complete transactions.

 horizontal scaling - one master, multiple slaves. This requires the
 separation of writes and reads.
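The slave side of that master/slave setup can be sketched as follows; the host, credentials, and binlog coordinates below are placeholders (in practice the coordinates come from SHOW MASTER STATUS on the master):

```sql
-- On each slave; the values here are illustrative only
CHANGE MASTER TO
    MASTER_HOST     = 'master.example.com',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;
SHOW SLAVE STATUS\G  -- Slave_IO_Running and Slave_SQL_Running should both be Yes
```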


 *Scalability:  *Again, it seems like there are lots of applications and
 implementations, but people agree on the general concept.  Here's what we
 mean for this project:  /We desire to scale our services so that a
 usage surge does not cause unavailability of the services for some
 users.  We prefer to horizontally increase scalability using
 load-balancing strategies to treat clusters of servers as single logical
 units

Re: Replication - LINUX to WIN

2010-03-18 Thread Ian Simpson
Hi Vikram,

There are a couple of potential problems with Linux to Windows
replication, but they are easily avoidable.

1) Table names: There are case-sensitivity differences between Linux and
Windows: Linux is case-sensitive as regards table names, whereas Windows
is not. Given that you are replicating from a Linux master to a Windows
slave, you shouldn't hit too many problems with this, but if you ever
reverse the order for some reason there could be problems.

2) MySQL versions: try to make sure that you always use the same version
of MySQL on master and slave (this is generally good replication advice
in all cases). There have been some bugs in replication that cause
problems when the master and slave are on different versions.
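Both caveats are easy to check up front; run the following on master and slave and compare the output (lower_case_table_names is the variable behind the case-sensitivity difference):

```sql
SELECT VERSION();                              -- should match on both servers
SHOW VARIABLES LIKE 'lower_case_table_names';  -- typically 0 on Linux, 1 on Windows
```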

On Thu, 2010-03-18 at 14:03 +0530, Vikram A wrote:
 Hi,
 I have done replication with Win to Win servers with mysql version 
 5.0.41-community-nt. 
 
 Now, 
 We have Enterprise Linux(Red Hat 4.1.2-14) and windows server2003.
 Is it possible to do the replication LINUX (Master) to WINDOWS SERVER 2003 (Slave)?
 
 It will be great help to me.
 
 Thank you.
 
 VIKRAM A
 
 
 
   The INTERNET now has a personality. YOURS! See your Yahoo! Homepage. 
 http://in.yahoo.com/


-- 
Ian Simpson
System Administrator
MyJobGroup


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MySQL Storage Engine

2010-03-18 Thread Ian Simpson
Hi Neil,

As Krishna said, InnoDB is generally superior to MyISAM unless
you need full-text search (and even then there are alternatives to
MyISAM's full-text search, which I've often found to be lacking in
performance).

A poorly optimised query will be slow whether it is called from a
procedure or executed directly by the page. I'd advise analysing your
joined queries carefully using MySQL's EXPLAIN tool, and adding
appropriate indexes to your tables.

Also, if you are developing in PHP, you may wish to look at the memcache
module, which can cache function results in memory in PHP. This can
often be helpful for storing results for very frequently called queries.
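As an illustration of that workflow (the orders/customers tables and the index name are hypothetical, not from the original question):

```sql
-- Inspect the join plan first
EXPLAIN SELECT o.id, c.name
FROM   orders AS o
INNER JOIN customers AS c ON c.id = o.customer_id
WHERE  o.status = 'open';

-- If the orders row shows type=ALL (a full table scan), index the
-- filtered and joined columns
ALTER TABLE orders ADD INDEX idx_status_customer (status, customer_id);
```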

On Thu, 2010-03-18 at 09:18 +, Tompkins Neil wrote:
 Hi
 
 I'm currently looking to develop an online web application - which is going
 to be used by around 200+ concurrent users at any one time.  Can
 you recommend the best/preferred MySQL database engine to use, for
 example MyISAM?
 
 Also, in your experience, do you recommend that SELECT statements
 containing a number of INNER JOINs should be executed from a stored
 procedure or purely from the webpage?
 
 Cheers
 Neil


-- 
Ian Simpson
System Administrator
MyJobGroup





Re: Replication - LINUX to WIN

2010-03-18 Thread Ian Simpson
Hi Vikram,

As far as I know it's just table names: I think it's related to the fact
that the Windows file system is case-insensitive, while Linux
filesystems generally are not.

On Thu, 2010-03-18 at 15:18 +0530, Vikram A wrote:
 Hello Ian Simpson,
 
 Thank you for your reply. We defined table names and fields in lower case
 except the KEYS. I hope the keys will not be taken into account [all
 in upper case].
 
 Regarding the version we will keep same version as you said. 
 
 Thank you
 
 --- On Thu, 18/3/10, Ian Simpson i...@it.myjobgroup.co.uk wrote:
 
 From: Ian Simpson i...@it.myjobgroup.co.uk
 Subject: Re: Replication - LINUX to WIN
 To: Vikram A vikkiatb...@yahoo.in
 Cc: MY SQL Mailing list mysql@lists.mysql.com
 Date: Thursday, 18 March, 2010, 9:33 AM
 
 Hi Vikram,
 
 There are a couple of potential problems with Linux to Windows
 replication, but they are easily avoidable.
 
 1) Table names: There are case-sensitivity differences between
 Linux and
 Windows: Linux is case-sensitive as regards table names,
 whereas Windows
 is not. Given that you are replicating from a Linux master to
 a Windows
 slave, you shouldn't hit too many problems with this, but if
 you ever
 reverse the order for some reason there could be problems.
 
 2) MySQL versions: try to make sure that you always use the
 same version
 of MySQL on master and slave (this is generally good
 replication advice
 in all cases). There have been some bugs in replication that
 cause
 problems when the master and slave are on different versions.
 
 On Thu, 2010-03-18 at 14:03 +0530, Vikram A wrote:
  Hi,
  I have done replication with Win to Win servers with mysql
 version 5.0.41-community-nt. 
  
  Now, 
  We have Enterprise Linux(Red Hat 4.1.2-14) and windows
 server2003.
  Is it possible do the replication LINUX(Master) to WINDOWS
 SERVER2003(Slave) ?
  
  It will be great help to me.
  
  Thank you.
  
  VIKRAM A
  
  
  
 
 
 -- 
 Ian Simpson
 System Administrator
 MyJobGroup
 
 
 
 


-- 
Ian Simpson
System Administrator
MyJobGroup





RE: Question about DELETE

2010-03-18 Thread Ian Simpson
Hi Randall,

If you're talking about processes that are taking that long, then
running SHOW PROCESSLIST several times during the operation should give
you a rough idea what it is doing at each stage.

Also, do you have an index on the id column? It could just be taking a
long time to identify all the rows it needs to delete.
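Two concrete things to try, sketched with the placeholder table name from the thread (the chunked delete is a common workaround rather than something discussed above):

```sql
-- Check what indexes the range predicate can actually use
SHOW INDEX FROM table_name;

-- Break a large delete into bounded chunks so each statement holds
-- locks briefly; repeat until it reports 0 rows affected
DELETE FROM table_name WHERE id BETWEEN 1 AND 500 LIMIT 10000;
```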

On Thu, 2010-03-18 at 10:03 -0400, Price, Randall wrote:
 Thanks for your responses on this.
 
 However, I suspect that the indexes are being rebuilt over and over during 
 the mass delete operation.
 
 If I delete a small number of records (i.e., DELETE FROM table WHERE id 
 BETWEEN 1 AND 5) it may only take a minute or so.
 
 If I delete a large number of records (i.e., DELETE FROM table WHERE id 
 BETWEEN 1 AND 500) it may take upwards of an hour or more.
 
 So what would cause this increased slowness the more records you delete, 
 unless the indexing is happening multiple times?
 
 Thanks,
 
 -Randall Price
 
 
 From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
 Meersman
 Sent: Thursday, March 18, 2010 6:48 AM
 To: Ananda Kumar
 Cc: Price, Randall; [MySQL]
 Subject: Re: Question about DELETE
 
 Given that OP is talking about a single delete statement, I'm gonna be very 
 surprised if he manages to squeeze an intermediate commit in there :-)
 
 For a single-statement delete on a single table, the indexes will be rebuilt 
 only once. I'm not entirely sure what happens to cascaded deletes, though.
 
 On Thu, Mar 18, 2010 at 6:05 AM, Ananda Kumar 
 anan...@gmail.commailto:anan...@gmail.com wrote:
 Hi,
 It depends on how frequently you're doing a commit.
 If you have written a PL/SQL-style loop and you commit after each row is
 deleted, then it commits for each row. If instead you commit at the end of the
 loop, it commits only once for all the rows deleted.
 
 regards
 anandkl
 On Thu, Mar 18, 2010 at 1:21 AM, Price, Randall 
 randall.pr...@vt.edumailto:randall.pr...@vt.eduwrote:
 
  Hello,
 
  I have a simple question about deleting records from INNODB tables.  I have
  a master table with a few child tables linked via Foreign Key constraints.
   Each table has several indexes as well.
 
  My question is:  if I delete many records in a single delete statement
  (i.e., DELETE FROM table WHERE id = 1 OR id = 5 ... OR ID = 100), how many
  times are the foreign keys/indexes updated?
 
  Once for the entire DELETE operation or one time for each record that is
  deleted?
 
  Thanks,
 
  Randall Price
 
 
 
 
 
 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel


-- 
Ian Simpson
System Administrator
MyJobGroup





Re: Remove 'debian-sys-maint' Account?

2010-03-03 Thread Ian Simpson
Hi Carlos,

The debian-sys-maint account is used for two things that I am aware of:

1) the /etc/init.d/mysql stop command uses it to perform a controlled
shutdown of MySQL.

2) on execution of /etc/init.d/mysql start, it uses this account to run
a CHECK TABLE on all of your tables.

The first function is quite handy, the second is downright annoying,
especially if you have any large tables, as these will take a long time
to check.

There is a way of stopping it from performing the table check without
disabling the maintenance account, but I can't remember it off-hand.
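Before deciding, it is worth looking at exactly what the account can do. Its grants can be listed from any superuser session, and Debian keeps the matching password on disk:

```sql
SHOW GRANTS FOR 'debian-sys-maint'@'localhost';
-- The password the init scripts use is stored in /etc/mysql/debian.cnf
```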

Thanks

On Tue, 2010-03-02 at 13:15 -0500, Carlos Williams wrote:
 I am using Debian 'Squeeze' / Testing on with MySQL 5.1.41-3
 installed. It is a fresh install and I was checking all the system
 accounts and noticed that Debian has a 'debian-sys-maint' account on
 'localhost'. Has anyone ever removed this account? Do I need it or can
 I safely remove this account? I don't understand why it's there. I
 don't want to break MySQL even though there is no data or databases on
 this machine but I would like to keep this as clean as possible.
 
 Thanks for any input.
 
 -Carlos
 


-- 
Ian Simpson
System Administrator
MyJobGroup





Simple Query Question

2009-12-17 Thread Ian
Hi,

I am sure there is a simple solution to this problem, I just can't find it :)

I have got a table that records views for an article for each blog per day.
So the structure is as follows:

CREATE TABLE `wp_views` (
`blog_id` int(11) NOT NULL,
`post_id` int(11) NOT NULL,
`date` date NOT NULL,
`views` int(11) NOT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8;

Now that's fine and I can pull top blogs per day, but what
I am after is pulling the top articles for a time period. Where I am
running into problems is where two blogs have the same post_ids: the views
get sum()'d for the day and I can't figure out (read: end-of-year mind block)
how to get around it. Here is my current query (for the last 7 days):

SELECT blog_id, post_id, sum( views ) AS views FROM wp_views WHERE (date <=
'2009-12-17' AND date >= '2009-12-10') GROUP BY blog_id, post_id ORDER BY
views DESC LIMIT 10

Any ideas as to what's wrong? I know it's something simple, I just can't put my
finger on it.

Thanks in advance,
Ian


Re: Simple Query Question

2009-12-17 Thread Ian
Hi,

Thanks, I just checked and it was memcache caching the output.
See, I knew it was a simple solution ;)

Thanks for the effort, everyone, and sorry for wasting your time.

Regards
Ian

2009/12/17 Aleksandar Bradaric leann...@gmail.com

 Hi Ian,

 Why do you think something's wrong? Here is my test data and the results of
 your query:
 ---
 mysql> SELECT * FROM wp_views;
 +---------+---------+------------+-------+
 | blog_id | post_id | date       | views |
 +---------+---------+------------+-------+
 |       1 |       1 | 2009-12-16 |     2 |
 |       1 |       1 | 2009-12-17 |     3 |
 |       1 |       2 | 2009-12-16 |     4 |
 |       1 |       2 | 2009-12-17 |     5 |
 |       2 |       1 | 2009-12-16 |     6 |
 |       2 |       1 | 2009-12-17 |     7 |
 |       2 |       2 | 2009-12-16 |     8 |
 |       2 |       2 | 2009-12-17 |     9 |
 |       1 |       1 | 2009-12-18 |     1 |
 |       1 |       2 | 2009-12-18 |     1 |
 |       2 |       1 | 2009-12-18 |     1 |
 |       2 |       2 | 2009-12-18 |     1 |
 +---------+---------+------------+-------+
 12 rows in set (0.00 sec)

 mysql> SELECT blog_id, post_id, sum( views ) AS views FROM wp_views WHERE
 (date <= '2009-12-17' AND date >= '2009-12-10') GROUP BY blog_id, post_id
 ORDER BY views DESC LIMIT 10;
 +---------+---------+-------+
 | blog_id | post_id | views |
 +---------+---------+-------+
 |       2 |       2 |    17 |
 |       2 |       1 |    13 |
 |       1 |       2 |     9 |
 |       1 |       1 |     5 |
 +---------+---------+-------+
 4 rows in set (0.00 sec)
 ---

 Seems OK to me... Are you getting different results?


 Take care,
 Aleksandar



 Ian wrote:

 Hi,

 I am sure there is a simple solution to this problem, I just cant find it
 :)

 I have got a table that records views for an article for each blog per
 day.
 So the structure is as follows:

 CREATE TABLE `wp_views` (
 `blog_id` int(11) NOT NULL,
 `post_id` int(11) NOT NULL,
 `date` date NOT NULL,
 `views` int(11) NOT NULL) ENGINE=MyISAM DEFAULT CHARSET=utf8;

 Now thats fine and I can pull top blogs per day and thats all fine, but
 what
 I am after is pulling the top articles for a time period and where I am
 running into problems is where two blogs have the same post_id's the views
 get sum()'d for the day and I cant figure out (read end of year mind
 block)
 how to get around it. Here is my current query (for last 7 days):

 SELECT blog_id, post_id, sum( views ) AS views FROM wp_views WHERE (date <=
 '2009-12-17' AND date >= '2009-12-10') GROUP BY blog_id, post_id ORDER BY
 views DESC LIMIT 10

 Any ideas as to whats wrong. I know its something simple, I just cant put
 my
 finger on it.

 Thanks in advance,
 Ian





Trying to index a table - cant figure out best way

2009-10-08 Thread Ian
Hi,

I have a table that stores article views per day per article:

`post_id` int(11) NOT NULL,
`date` date NOT NULL,
`views` int(11) NOT NULL

A couple of entries:
1987 2009-10-04 1744
1583 2009-10-04 2626
1238 2009-10-04 2211

This works fine - but I am now trying to figure out what's the best way to
add an index to this so that if I am running a query limiting between dates
I don't have to run through all the hundreds of thousands of records to find
a couple hundred results.

my query looks something like this (for the 7-day top articles):
SELECT articles.*, sum(views) AS views FROM articles LEFT JOIN article_views
ON article_views.article_id = articles.id WHERE ( date <= '2009-10-07' AND
date >= '2009-10-01') GROUP BY article_id

Any help would be greatly appreciated.

Thanks
Ian


Re: Trying to index a table - cant figure out best way

2009-10-08 Thread Ian
Hi Daevid,

Thanks - I have tried this - but an EXPLAIN still shows that it's going through
all the articles. See below - note these aren't all the records; I've taken a
snapshot of the views for a 2-week period to test with:

This is without the index:
1 SIMPLE article_views ALL NULL NULL NULL NULL 78300 Using where; Using temporary; Using filesort
1 SIMPLE articles eq_ref PRIMARY PRIMARY 8 database.article_views.article_id 1 Using where

Then with date_idx(date):
1 SIMPLE article_views ALL date_idx NULL NULL NULL 78300 Using where; Using temporary; Using filesort
1 SIMPLE articles eq_ref PRIMARY PRIMARY 8 database.article_views.article_id 1 Using where

No difference :/

And I used the query below so it's only 1 week and not the 2-3 weeks in the
table.

Thanks anyway :)

Ian
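One possible next step (my assumption, not something tested in the thread): with the range covering most of the snapshot, a plain index on date gives the optimizer little reason to avoid the scan. A composite index that also covers the join column and the summed value lets the whole aggregate be answered from the index:

```sql
-- Hypothetical covering index for the date range + join + SUM(views)
ALTER TABLE article_views
    ADD INDEX idx_date_article (date, article_id, views);
```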


2009/10/9 Daevid Vincent dae...@daevid.com

 ALTER TABLE articles ADD INDEX date_idx (date);

  -Original Message-
  From: Ian [mailto:barnrac...@gmail.com]
  Sent: Thursday, October 08, 2009 12:23 PM
  To: mysql@lists.mysql.com
  Subject: Trying to index a table - cant figure out best way
 
  Hi,
 
  I have a table that stores article views per day per article:
 
  `post_id` int(11) NOT NULL,
  `date` date NOT NULL,
  `views` int(11) NOT NULL
 
  A couple of entries:
  1987 2009-10-04 1744
  1583 2009-10-04 2626
  1238 2009-10-04 2211
 
  This works fine - but I am now trying to figure out whats the
  best way to
  add an index to this so that if I am running a query limiting
  between dates
  I dont have to run through all the hundreds of thousands of
  records to find
  a couple hundred results.
 
  my query looks something like this (for 7 days top articles):
  SELECT articles.*, sum(views) AS views FROM articles LEFT
  JOIN article_views
  ON article_views.article_id = articles.id WHERE ( date <=
  '2009-10-07' AND
  date >= '2009-10-01') GROUP BY article_id
 
  Any help would be greatly appreciated.
 
  Thanks
  Ian
 




Re: Simply join that confuses me.

2009-09-30 Thread Ian Simpson
I think this should work...

SELECT
b.Name AS 'Boss name',
ad.Name AS 'Admin name',
asst.Name AS 'Assistant name'
FROM Projects AS pr
LEFT JOIN
People AS b
ON b.ID = pr.Boss_ID
LEFT JOIN
People AS ad
ON ad.ID = pr.Admin_ID
LEFT JOIN
People AS asst
ON asst.ID = pr.Assistant_ID
WHERE pr.Project_ID = 5

I went with LEFT JOIN rather than INNER in case there might be roles
that weren't always filled on a particular project.

On Tue, 2009-09-29 at 22:11 -0700, Brian Dunning wrote:
 I have a table of projects with several columns for the IDs of some  
 people in various roles, and then a table of the people. How can I get  
 all the people for a given project, but keep their roles straight?
 
 Projects
 --
 Project_ID
 Boss_ID
 Admin_ID
 Assistant_ID
 
 People
 
 ID
 Name
 
 Can I do something like this:
 
 SELECT
 * from Projects where Project_ID = 5,
 Name from People where Projects.Boss_ID = People.ID as Boss_Name,
 Name from People where Projects.Admin_ID = People.ID as Admin_Name,
 Name from People where Projects.Assistant_ID = People.ID as  
 Assistant_Name
 
 I know that syntax isn't right but I'm not sure where to go.
 
-- 
Ian Simpson
System Administrator
MyJobGroup





RE: Prevent execution of queries without a WHERE clause

2009-09-08 Thread Ian Simpson
Thanks John, that's done the trick

On Mon, 2009-09-07 at 18:24 +0100, John Daisley wrote:
 Add the option 'safe-updates' to the mysql section of your 'my.cnf' / 
 'my.ini' file and restart the mysqld service.
 
 [mysql]
 Safe-updates
 
 Regards
 
 John Daisley
 Mobile +44(0)7812 451238
 Email j...@butterflysystems.co.uk
 
 Certified MySQL 5 Database Administrator (CMDBA)
 Certified MySQL 5 Developer
 Cognos BI Developer
 
 ---
 Sent from HP IPAQ mobile device.
 
 
 
 -Original Message-
 From: Ian Simpson i...@it.myjobgroup.co.uk
 Sent: Monday, September 07, 2009 5:14 PM
 To: mysql@lists.mysql.com
 Subject: Prevent execution of queries without a WHERE clause
 
 Hi all,
 
I vaguely recall finding mention of a MySQL command or start-up option that
 blocked any update or delete query that didn't have a WHERE component,
 to prevent statements accidentally affecting too many rows (like those
 of a certain absent-minded web developer who might work for the same
 company as me...). I now can't find any reference to it, other than a
 vague mention of using safe mode in the comments in the mysql docs; it
 doesn't explain if that is starting with --safe-mode, or using the
 mysqld_safe script. Both of these modes seem remarkably poorly
 documented, making me unwilling to experiment with them without advice,
 in case one of them disables networking or something similar.
 
 Hoping someone can help with this.
 
 Thanks
 -- 
 Ian Simpson
 System Administrator
 MyJobGroup
 
 
-- 
Ian Simpson
System Administrator
MyJobGroup





Re: Prevent execution of queries without a WHERE clause

2009-09-08 Thread Ian Simpson
Hi Darren,

Once I knew the name of the command I could Google it and find all the
relevant documentation, so I've put additional directives in place to
override the limits on SELECTS.

Thanks for the thought, though.
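For reference, the limits involved are ordinary session variables, so they can be relaxed without turning safe-updates off entirely (the variable names are the real ones behind the mode; the values are examples):

```sql
SET SESSION sql_safe_updates = 1;        -- what the [mysql] option enables
SET SESSION sql_select_limit = DEFAULT;  -- lift the 1,000-row SELECT cap
SET SESSION max_join_size    = DEFAULT;  -- lift the 1,000,000-row join cap
```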

On Tue, 2009-09-08 at 15:32 +0100, Darren Cassar wrote:
 Hi,
 
 You can check this feature out on section 2.9 on the mysql 5.0
 certification guide (page 44).
 
 this feature has some side effects other than requesting a where with
 deletes and updates i.e.
 Quoting for whoever doesn't have the MySQL certification study guide:
 1. UPDATE and DELETE statements are allowed only if they include a WHERE
 clause that specifically identifies which records to update or delete
 by means of a key value, or if they include a LIMIT clause.
 2. Output from single-table SELECT statements is restricted to no more
 than 1,000 rows unless the statement includes a LIMIT clause.
 3. Multiple-table SELECT statements are allowed only if MySQL will
 examine no more than 1,000,000 rows to process the query.
 
 The --i-am-a-dummy option is a synonym for --safe-updates. :)
 
 Gluck
 
 Darren
 
 On Tue, Sep 8, 2009 at 10:25 AM, Ian Simpson i...@it.myjobgroup.co.uk
 wrote:
 Thanks John, that's done the trick
 
 
 On Mon, 2009-09-07 at 18:24 +0100, John Daisley wrote:
  Add the option 'safe-updates' to the mysql section of your
 'my.cnf' / 'my.ini' file and restart the mysqld service.
 
  [mysql]
  Safe-updates
 
  Regards
 
  John Daisley
  Mobile +44(0)7812 451238
  Email j...@butterflysystems.co.uk
 
  Certified MySQL 5 Database Administrator (CMDBA)
  Certified MySQL 5 Developer
  Cognos BI Developer
 
  ---
  Sent from HP IPAQ mobile device.
 
 
 
  -Original Message-
  From: Ian Simpson i...@it.myjobgroup.co.uk
  Sent: Monday, September 07, 2009 5:14 PM
  To: mysql@lists.mysql.com
  Subject: Prevent execution of queries without a WHERE clause
 
  Hi all,
 
  I vaguely recall finding mention of a MySQL command or start-up
 option that
  blocked any update or delete query that didn't have a WHERE
 component,
  to prevent statements accidentally affecting too many rows
 (like those
  of a certain absent-minded web developer who might work for
 the same
  company as me...). I now can't find any reference to it,
 other than a
  vague mention of using safe mode in the comments in the
 mysql docs; it
  doesn't explain if that is starting with --safe-mode, or
 using the
  mysqld_safe script. Both of these modes seem remarkably
 poorly
  documented, making me unwilling to experiment with them
 without advice,
  in case one of them disables networking or something
 similar.
 
  Hoping someone can help with this.
 
  Thanks
  --
  Ian Simpson
  System Administrator
  MyJobGroup
 
 
 --
 Ian Simpson
 System Administrator
 MyJobGroup
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 
 To unsubscribe:
  http://lists.mysql.com/mysql?unsub=i...@mysqlpreacher.com
 
 
-- 
Ian Simpson
System Administrator
MyJobGroup





Prevent execution of queries without a WHERE clause

2009-09-07 Thread Ian Simpson
Hi all,

I vaguely recall finding mention of a MySQL command or start-up option that
blocked any update or delete query that didn't have a WHERE component,
to prevent statements accidentally affecting too many rows (like those
of a certain absent-minded web developer who might work for the same
company as me...). I now can't find any reference to it, other than a
vague mention of using safe mode in the comments in the mysql docs; it
doesn't explain if that is starting with --safe-mode, or using the
mysqld_safe script. Both of these modes seem remarkably poorly
documented, making me unwilling to experiment with them without advice,
in case one of them disables networking or something similar.

Hoping someone can help with this.

Thanks
-- 
Ian Simpson
System Administrator
MyJobGroup





Re: grant user create privilege

2009-05-07 Thread Ian Simpson
Did you flush privileges after creating the user?
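For completeness, the sequence that should work, using the same placeholder user and password as the thread:

```sql
GRANT CREATE ON *.* TO 'user'@'localhost' IDENTIFIED BY 'pass';
FLUSH PRIVILEGES;
SHOW GRANTS FOR 'user'@'localhost';  -- CREATE should now be listed
```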

On Thu, 2009-05-07 at 08:54 -0500, Jim Lyons wrote:
 It's hard to believe this to be the case since I assume you've created other
 databases in this instance, but the error on create database, which is
 essentially a mkdir in Unix, makes me wonder if you don't have a file
 permissions error on the datadir directory.
 
 On Wed, May 6, 2009 at 9:14 AM, John Clement
 john.clem...@readingroom.comwrote:
 
  I'm clearly doing something wrong.  All I want is to grant a user rights
  to create databases:
 
  grant create on *.* to 'user'@'localhost' identified by 'pass';
 
  doesn't do the trick, nor does
 
  grant super on *.* to 'user'@'localhost' identified by 'pass';
 
  The user in question was originally created using
 
  grant all privileges on their_db.* to 'user'@'localhost' identified by
  'pass';
 
  If I try logging in as this user though the following happens:
 
 
 
  mysql> create database testdb;
  ERROR 1044 (42000): Access denied for user 'user'@'localhost' to
  database 'testdb'
 
  Can someone point out the error of my ways?
 
  Many thanks, jc
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql?unsub=jlyons4...@gmail.com
 
 
 
 
-- 
Ian Simpson
System Administrator
MyJobGroup





Re: MYSQLDUMP ERROR

2009-05-01 Thread Ian Simpson
mysqldump calls LOCK TABLES before dumping the data (presumably to
prevent data modification halfway through the process).

LOCK TABLES has its own privilege (conveniently given the same name)
which your user account will have to be given in order to run a
mysqldump.
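With the account and database names from the error message, either of the following resolves it (the second relies on mysqldump's --single-transaction option, which avoids LOCK TABLES for InnoDB tables):

```sql
-- Option 1: grant the missing privilege
GRANT SELECT, LOCK TABLES ON dip.* TO 'dip'@'152.20.1.%';

-- Option 2 (shell, InnoDB only): dump inside one consistent transaction
--   mysqldump --single-transaction -h 152.20.1.115 -u dip -p dip states
```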

On Fri, 2009-05-01 at 15:49 +0530, Krishna Chandra Prajapati wrote:
 Hi lists,
 
 I have given select privileges to database tables. when i am taking
 mysqldump remotely it's giving error.
 
 [prajap...@beta2 prajapati]$ mysqldump --verbose -h 152.20.1.115 -u dip dip
 states -pdip > state.sql
 -- Connecting to 152.20.1.115...
 mysqldump: Got error: 1044: Access denied for user 'dip'@'152.20.1.%' to
 database 'dip' when doing LOCK TABLES
 
 Thanks,
 Krishna Chandra Prajapati
-- 
Ian Simpson
System Administrator
MyJobGroup





Re: IN vs. OR on performance

2009-03-29 Thread Ian P. Christian
2009/3/29 Oscar ro4...@gmail.com:
 Hi all-

 I want to know what the difference between IN and OR is under the hood.

 select * from dummy_table where id in (2, 3, 4, 5, 6, 7);

 select * from dummy_table where id=2 or id=3 or id=4 or id=5 or id=6 or
 id=7;

I'd have thought that once the query is compiled, they are the same. What
might cause a difference in performance is doing id > 2 AND id <= 7.

Test it on a large dataset and let us know :)
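The comparison is straightforward to run; EXPLAIN on each form of the query from the question shows whether the optimizer arrives at the same range plan:

```sql
EXPLAIN SELECT * FROM dummy_table WHERE id IN (2, 3, 4, 5, 6, 7);
EXPLAIN SELECT * FROM dummy_table
        WHERE id = 2 OR id = 3 OR id = 4 OR id = 5 OR id = 6 OR id = 7;
EXPLAIN SELECT * FROM dummy_table WHERE id BETWEEN 2 AND 7;  -- range form
```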

-- 
Blog: http://pookey.co.uk/blog
Follow me on twitter: http://twitter.com/ipchristian




Re: [MySQL] Re: REPOST: ON DUPLICATE failure

2009-01-22 Thread Ian Simpson
The reporting of two rows thing is to do with how MySQL handles
INSERT ... ON DUPLICATE KEY UPDATE ... statements; it will report 1 row
if it inserts, and 2 rows if it finds a duplicate key and has to update
as well.

http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html

Just after the first code box.
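The one-row-but-updated behaviour can be seen with a small script. SQLite's INSERT ... ON CONFLICT DO UPDATE stands in for MySQL's ON DUPLICATE KEY UPDATE here (the counters table is invented for the demo); note that MySQL's "2 rows affected" on the update path is only its way of saying "duplicate found, update performed", not a second physical row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")

upsert = ("INSERT INTO counters (name, hits) VALUES ('home', 1) "
          "ON CONFLICT(name) DO UPDATE SET hits = hits + 1")
conn.execute(upsert)  # first run: plain insert
conn.execute(upsert)  # second run: key exists, the update fires

rows = conn.execute("SELECT name, hits FROM counters").fetchall()
assert rows == [("home", 2)]  # one row ever, value updated on the second run
```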

On Wed, 2009-01-21 at 21:44 -0700, Ashley M. Kirchner wrote:
 Michael Dykman wrote:
  It
  worked fine as you wrote it on my v5.0.45, although it reported 2 rows
  affected on each subsequent run of the insert statement.  I thought
  this odd as I only ran the same statement repeatedly  leaving me with
  one row ever, but the value updated just fine.

 I noticed that too.  It reports 2 rows, but there's only 1.  That 
 was my first indication that something's awry.  But then when I tried it 
 on 5.1.30, it did the same thing, 2 rows.  But at least it DID update on 
 5.1.30, but not on 5.0.37.
 
-- 
Ian Simpson
System Administrator
MyJobGroup





Re: How to Use Cascade Delete Properly

2009-01-02 Thread Ian Simpson
If you want deletes to be blocked, then you shouldn't be using ON DELETE
CASCADE; the default behaviour is ON DELETE CONSTRAIN, which seems to be
what you want; it will refuse to delete any rows that are depended on by
rows in other tables.

On Thu, 2009-01-01 at 17:07 -0500, Lola J. Lee Beno wrote:
 I'm trying to understand how to use cascade delete properly but not sure 
 if I have this backwards or not.  Here's an example: 
 
 I have two tables:
 
 mysql> describe adsource;
 +-------------+--------------+------+-----+---------+-------+
 | Field       | Type         | Null | Key | Default | Extra |
 +-------------+--------------+------+-----+---------+-------+
 | adsource_id | varchar(35)  | NO   | PRI | NULL    |       |
 | company_id  | varchar(35)  | YES  | MUL | NULL    |       |
 | location    | varchar(50)  | YES  |     | NULL    |       |
 | url         | varchar(200) | YES  |     | NULL    |       |
 +-------------+--------------+------+-----+---------+-------+
 
 
 mysql> describe jobposts;
 +--------------+--------------+------+-----+---------+-------+
 | Field        | Type         | Null | Key | Default | Extra |
 +--------------+--------------+------+-----+---------+-------+
 | jobpost_id   | varchar(35)  | NO   | PRI | NULL    |       |
 | company_id   | varchar(35)  | NO   | MUL | NULL    |       |
 | details      | text         | YES  |     | NULL    |       |
 | job_title    | varchar(50)  | YES  |     | NULL    |       |
 | postdate     | date         | YES  |     | NULL    |       |
 | salary       | decimal(5,2) | YES  |     | NULL    |       |
 | deadlinedate | date         | YES  |     | NULL    |       |
 | adsource_id  | varchar(35)  | YES  | MUL | NULL    |       |
 +--------------+--------------+------+-----+---------+-------+
 
 
 
 For jobposts; I have adsourcefk referencing adsource.adsource_id with 
 cascade delete set. For adsource, I have companyfk referencing 
 company.company_id with cascade delete set.
 
 Now, say I have three jobposts records that have one referenced adsource 
 record.  If I delete one jobposts record, there now remains two jobposts 
 records.  If I delete the adsource record, the two jobposts records get 
 deleted. I don't want this happening. 
 
 What I want to have happen is: if I try to delete an adsource record and 
 there are jobposts records containing that id as foreign key, I want the 
 delete to NOT happen.  I can't use triggers because for some reason I 
 can't get the triggers working properly.  Is there any way I can do this 
 on the database side without having to write code in the application 
 code I'm working on?
 
 -- 
 Lola J. Lee Beno - ColdFusion Programmer/Web Designer for Hire
 http://www.lolajl.net/resume | Blog at http://www.lolajl.net/blog/
 No greater injury can be done to any youth than to let him feel that
 because he belongs to this or that race he will be advanced in life 
 regardless of his own merits or efforts. - Booker T. Washington
 
 
-- 
Ian Simpson
Award Winning System Administrator
MyJobGroup





Re: How to Use Cascade Delete Properly

2009-01-02 Thread Ian Simpson
That looks ok to me; I seem to have misremembered the RESTRICT keyword
as CONSTRAIN, since it's been a while since I've had to use it.

Bear in mind that I don't think MySQL has a check to make sure that the
same foreign key does not already exist; I have seen a table that had
the same foreign key three times. You should be ok as you are specifying
a name for the constraint, and it checks for unique names. 
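The RESTRICT-versus-CASCADE difference is easy to demonstrate. A sketch using SQLite as a stand-in for MySQL/InnoDB, with a cut-down, hypothetical version of the adsource/jobposts layout from the thread (note SQLite needs foreign keys enabled per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite defaults to FKs off
conn.execute("CREATE TABLE adsource (adsource_id TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE jobposts (
    jobpost_id TEXT PRIMARY KEY,
    adsource_id TEXT REFERENCES adsource(adsource_id) ON DELETE RESTRICT)""")
conn.execute("INSERT INTO adsource VALUES ('ad1')")
conn.execute("INSERT INTO jobposts VALUES ('job1', 'ad1')")

# RESTRICT refuses to delete a parent row that child rows still reference.
try:
    conn.execute("DELETE FROM adsource WHERE adsource_id = 'ad1'")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True

assert blocked
# Remove the child first, then the parent delete goes through.
conn.execute("DELETE FROM jobposts WHERE jobpost_id = 'job1'")
conn.execute("DELETE FROM adsource WHERE adsource_id = 'ad1'")
assert conn.execute("SELECT COUNT(*) FROM adsource").fetchone()[0] == 0
```

With ON DELETE CASCADE in place of RESTRICT, the first DELETE would have succeeded and silently removed the jobposts row as well.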

On Fri, 2009-01-02 at 06:03 -0500, Lola J. Lee Beno wrote:
 Ian Simpson wrote:
  If you want deletes to be blocked, then you shouldn't be using ON DELETE
  CASCADE; the default behaviour is ON DELETE CONSTRAIN, which seems to be
  what you want; it will refuse to delete any rows that are depended on by
  rows in other tables.

 
 Wouldn't that be:
 
 ALTER TABLE `jobsearchtwodb`.`jobposts` ADD CONSTRAINT `adsourcefk` 
 FOREIGN KEY `adsourcefk` (`adsource_id`)
 REFERENCES `adsource` (`adsource_id`)
 ON DELETE RESTRICT
 ON UPDATE NO ACTION;
 
 
 ON DELETE RESTRICT having the behavior like ON DELETE CONSTRAIN?
 
 -- 
 Lola J. Lee Beno - ColdFusion Programmer/Web Designer for Hire
 http://www.lolajl.net/resume | Blog at http://www.lolajl.net/blog/
 No greater injury can be done to any youth than to let him feel that
 because he belongs to this or that race he will be advanced in life 
 regardless of his own merits or efforts. - Booker T. Washington
 
 
-- 
Ian Simpson
System Administrator
MyJobGroup





Re: How to understand this phrase on the document.

2008-11-01 Thread Ian Christian
2008/10/31 Moon's Father [EMAIL PROTECTED]

 *A prepared statement is also global to the connection. If you create a
 prepared statement within a stored routine, it is not deallocated when the
 stored routine ends. *

 Then I don't know how to deallocate the memory used by prepare statement
 within procedure?


The memory will be deallocated when you disconnect that session; there is
no need to worry about its memory usage.

The statement simply means that if you call a routine twice that uses the
same prepared statement, it will not need to be re-prepared, and hence you
will see an improvement in speed.


Re: Permissions

2008-10-22 Thread Ian Christian
2008/10/21 Moon's Father [EMAIL PROTECTED]:
 Could you please give me an idea of how to manage the privileges inside
 mysql?

http://www.google.co.uk/search?q=mysql+grant

first hit :)




Confusion over query stratergy

2008-10-17 Thread Ian Christian
Hi all,

I'm trying to work out the difference in a field between the last 2
updates in an updates table.   I'm doing this as shown below:

mysql> SELECT
    ->   (@in - AcctInputOctets) AS AcctInputOctets,
    ->   (@out - AcctOutputOctets) AS AcctOutputOctets,
    ->   (@in := AcctInputOctets),
    ->   (@out := AcctOutputOctets)
    -> FROM updates
    -> WHERE acctuniqueid = '4b9fe4a361344536'
    -> ORDER BY updates.AcctSessionTime DESC LIMIT 2;
+----------------------+----------------------+--------------------------+----------------------------+
| AcctInputOctets      | AcctOutputOctets     | (@in := AcctInputOctets) | (@out := AcctOutputOctets) |
+----------------------+----------------------+--------------------------+----------------------------+
| 18446744073654284768 | 18446744073171813223 |                 55266848 |                  537738393 |
|                 9508 |                18620 |                 55257340 |                  537719773 |
+----------------------+----------------------+--------------------------+----------------------------+
2 rows in set (0.02 sec)

mysql> explain (query above)
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: updates
 type: ref
possible_keys: AcctUniqueID
  key: AcctUniqueID
  key_len: 34
  ref: const
 rows: 327
Extra: Using where; Using filesort
1 row in set (0.00 sec)


As can be seen, this query uses a key, and runs well.  However, I only
require the 2nd row of that dataset.  I couldn't figure out a better
way of doing it than this:

mysql> SELECT AcctInputOctets, AcctOutputOctets FROM
    ->   (SELECT
    ->      (@in - AcctInputOctets) AS AcctInputOctets,
    ->      (@out - AcctOutputOctets) AS AcctOutputOctets,
    ->      (@in := AcctInputOctets),
    ->      (@out := AcctOutputOctets)
    ->    FROM updates
    ->    WHERE acctuniqueid = '4b9fe4a361344536'
    ->    ORDER BY updates.AcctSessionTime DESC LIMIT 2
    ->   ) AS t1 LIMIT 1,2;
+-----------------+------------------+
| AcctInputOctets | AcctOutputOctets |
+-----------------+------------------+
|            9508 |            18620 |
+-----------------+------------------+
1 row in set (0.02 sec)


This does exactly what I want, but to me feels wrong, I think I'm
missing a trick to doing this 'the right way'.  Also, look at how the
query runs:


mysql> explain SELECT AcctInputOctets, AcctOutputOctets FROM
    ->   (SELECT
    ->      (@in - AcctInputOctets) AS AcctInputOctets,
    ->      (@out - AcctOutputOctets) AS AcctOutputOctets,
    ->      (@in := AcctInputOctets),
    ->      (@out := AcctOutputOctets)
    ->    FROM updates
    ->    WHERE acctuniqueid = '4b9fe4a361344536'
    ->    ORDER BY updates.AcctSessionTime DESC LIMIT 2
    ->   ) AS t1 LIMIT 1,2\G
*** 1. row ***
   id: 1
  select_type: PRIMARY
table: <derived2>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 2
Extra:
*** 2. row ***
   id: 2
  select_type: DERIVED
table: updates
 type: ALL
possible_keys: AcctUniqueID
  key: AcctUniqueID
  key_len: 34
  ref:
 rows: 28717165
Extra: Using filesort
2 rows in set (0.02 sec)


Apparently, it's doing a full table scan over all 29 million records.
Whilst this query still appears to run fast, surely it's not right that
a full table scan is needed?

Thanks,

Ian
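For the record, on servers with window-function support (MySQL 8.0 and later) the user-variable trick above can be replaced by LAG(), which expresses "difference between the last two updates" directly and leaves the WHERE clause free to use the index. A sketch using SQLite (3.25+) as a stand-in, with the thread's column names and two made-up rows carrying the octet values from the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE updates (
    acctuniqueid TEXT, AcctSessionTime INTEGER,
    AcctInputOctets INTEGER, AcctOutputOctets INTEGER)""")
conn.executemany("INSERT INTO updates VALUES (?, ?, ?, ?)", [
    ("4b9fe4a361344536", 100, 55257340, 537719773),
    ("4b9fe4a361344536", 200, 55266848, 537738393),
])

# LAG() looks one row back in session-time order, so each row's delta is
# computed without session variables; the final ORDER BY/LIMIT keeps only
# the most recent delta.
row = conn.execute("""
    SELECT AcctInputOctets - LAG(AcctInputOctets) OVER w AS in_diff,
           AcctOutputOctets - LAG(AcctOutputOctets) OVER w AS out_diff
    FROM updates
    WHERE acctuniqueid = ?
    WINDOW w AS (ORDER BY AcctSessionTime)
    ORDER BY AcctSessionTime DESC LIMIT 1""",
    ("4b9fe4a361344536",)).fetchone()

assert row == (9508, 18620)  # delta between the last two updates
```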




Re: SQL select basics

2008-10-17 Thread Ian Christian
2008/10/17 dave aptiva [EMAIL PROTECTED]:

 I tried SELECT ID_number, max( count( CU_number ) ) but this causes an error
 # - Invalid use of group function 


# sqlite3
SQLite version 3.5.9
Enter ".help" for instructions
sqlite> create table moo (id_number, cu_number);
sqlite> insert into moo(1, 1);
SQL error: near "1": syntax error
sqlite> insert into moo values (1, 1);
sqlite> insert into moo values (1, 2);
sqlite> insert into moo values (1, 3);
sqlite> insert into moo values (2, 3);
sqlite> SELECT id_number, count(cu_number) FROM moo GROUP BY id_number;
1|3
2|1
sqlite> SELECT id_number, count(cu_number) FROM moo GROUP BY id_number
   ...> HAVING count(cu_number) = max(cu_number);
1|3



OR:

sqlite> SELECT id_number, count(cu_number) FROM moo GROUP BY id_number
   ...> ORDER BY count(cu_number) DESC LIMIT 1;
1|3




Re: REPLICATION

2008-10-17 Thread Ian Christian
2008/10/16 Krishna Chandra Prajapati [EMAIL PROTECTED]:
 I believe that yahoo, google and other companies must be generating reports.
 How they are doing. Any Idea.

This is how Google does it: http://en.wikipedia.org/wiki/MapReduce

Have you seen federated tables? Be aware of their performance, though.

If it's for reporting purposes, then whilst it's not an overly nice
solution, you could use backups to pull all the data from your two
front-ends into your reporting database and then run the reports. It
depends on what kind of datasets you're dealing with.




Re: Confusion over query stratergy

2008-10-17 Thread Ian Christian
2008/10/17 Rob Wultsch [EMAIL PROTECTED]:

 *How long does the second query actually take to run compared to first?

Actually, really quickly - so quickly that I also suspected that a
full table scan was not taking place. I'd like to understand how the
output of EXPLAIN can differ from the actual strategy used, and why
there is this difference (if indeed there is one). This question is now
redundant, as we're approaching the problem from a different angle,
but it will bug me if I don't get to understand it :)

Thanks!




Re: Confusion over query stratergy

2008-10-17 Thread Ian Christian
2008/10/17 Brent Baisley [EMAIL PROTECTED]:
 Why are you creating a subquery/derived table?

 Just change your limit to 1,2
  ORDER BY updates.AcctSessionTime DESC LIMIT 1,2

Because then the maths in the select part isn't executed, and I don't
get the figures I need.




Re: FLOOR(DATE_ADD(CURDATE(), INTERVAL 4 MONTH)) -- ?

2008-10-17 Thread Ian Christian
2008/10/17 Rob Wultsch [EMAIL PROTECTED]:
 On Fri, Oct 17, 2008 at 1:33 PM, Rene Fournier [EMAIL PROTECTED] wrote:

 Okay, I realize that query won't work, but that's essentially want I want
 to do:

 Add four months to the current date, then return the first day of that
 month, e.g.:

floor ( 2008-10-16 + 4 months ) = 2009-02-1

 Is there a nice SQL way of achieving this?

 ...Rene


 SELECT date(now() + INTERVAL 4 MONTH - INTERVAL day(now())-1 DAY);

Be aware that only 1 suggested answer was correct :)

mysql> SELECT date( date('2008-10-31') + INTERVAL 4 MONTH - INTERVAL
    -> day('2008-10-31')-1 DAY) as d\G
*** 1. row ***
d: 2009-01-29

mysql> select cast(date_format( date('2008-10-31') + interval 4
    -> month, '%Y-%m-01') as date) as d \G
*** 1. row ***
d: 2009-02-01
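The same "first of the month, four months on" calculation is easy to check outside SQL. A small sketch in plain Python (the helper name is made up) showing why the month must be bumped rather than the date, so 2008-10-31 lands on 2009-02-01 instead of drifting to 2009-01-29:

```python
from datetime import date

def first_of_month_plus(d, months):
    """First day of the month `months` months after d's month."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, 1)

# The end-of-month case that trips up the INTERVAL-arithmetic answer:
assert first_of_month_plus(date(2008, 10, 31), 4) == date(2009, 2, 1)
# Mid-month dates land in the same place:
assert first_of_month_plus(date(2008, 10, 16), 4) == date(2009, 2, 1)
```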




Re: DELETE - error_count

2008-10-17 Thread Ian Christian
2008/10/17 Reyna.Sabina [EMAIL PROTECTED]:
 Hi,

 Running Environment:
  MySQL Server version: 5.0.45
  OS is Red-Hat 64-bit

 The table 'junk' doesn't have a row with id=4. Two tests to trap 'errors'
 using  DELETE follows:


Perhaps I'm missing something - but a delete matching no rows is
not an error is it?  A select returning no data is not an error, so
I'm not understanding why a delete would be.




Re: mysqladmin: connect to server at 'localhost' failed error: 'Access denied for user 'ODBC'@'localhost' (using password: YES)'

2008-10-04 Thread Ian Simpson
Hi Varuna,

The problem looks like it is with the command you issued. The correct
syntax is:

mysqladmin -uroot -p shutdown

then supply root password.

Without giving the -uroot argument, it doesn't know that you are trying
to log in as the root user, which is why it is telling you that access
is denied for [EMAIL PROTECTED], rather than [EMAIL PROTECTED]

Thanks

On Sat, 2008-10-04 at 13:56 +0530, Varuna Seneviratna wrote:
 I am using WinXP. I wanted to shut down the MySQL service from the command
 line, so I ran the command mysqladmin -p root shutdown. The root password
 was asked for; I entered the correct password, and the error displayed
 below was the result:
 
 mysqladmin: connect to server at 'localhost' failed
 error: 'Access denied for user 'ODBC'@'localhost' (using password: YES)'
 
 
 
 How can I correct this?
 Is this because of a firewall preventing access to port 3306? If a firewall
 is preventing access, how was it able to ask for the password?
 
 I have the Norton Internet Security trial version running, and the Windows
 firewall is disabled. If this is caused by a firewall barrier, please tell me
 how to open the port in Norton Internet Security.
 
 
 Varuna
-- 
Ian Simpson
System Administrator
MyJobGroup






Odd crash with MySQL Embedded 5.1.28

2008-09-26 Thread Ian Monroe
So we're using MySQL Embedded in Amarok; it works fine for most of us
that use 5.1, including myself. However, the following backtrace is
from the second person to have this issue. It's repeatable and happens
at startup.

Is there something about how their MySQL is setup on their system that
could cause such a crash?

Thread 1 (Thread 0xb397d960 (LWP 23945)):
[KCrash Handler]
#6  0xb750550d in ?? () from /lib/tls/i686/cmov/libc.so.6
#7  0xb7506cad in malloc () from /lib/tls/i686/cmov/libc.so.6
#8  0xaf0525af in my_malloc (size=1228, my_flags=48) at my_malloc.c:34
#9  0xaf069d16 in create_key_cache (name=0xaf31f83a "default",
length=7) at set_var.cc:3898
#10 0xaf069f1d in get_or_create_key_cache (name=0xaf31f83a "default",
length=7) at set_var.cc:3932
#11 0xaf02bb16 in mysql_init_variables () at ../sql/mysqld.cc:7486
#12 0xaf02debc in init_common_variables (conf_file_name=0xaf3140c4
"my", argc=1, argv=0xaf312ba0, groups=0xbfdaae78) at
../sql/mysqld.cc:3139
#13 0xaf02ebb3 in init_embedded_server (argc=0, argv=0x0,
groups=0xbfdaae78) at lib_sql.cc:491
#14 0xaf01fb9c in mysql_server_init (argc=0, argv=0x0, groups=0x0) at
libmysql.c:173
#15 0xaf01c22f in mysql_init (mysql=0x0) at client.c:1509
#16 0xaf0119b8 in MySqlEmbeddedCollection (this=0x835cb20,
[EMAIL PROTECTED], [EMAIL PROTECTED]) at
/home/krf/kde-devel/src/amarok/src/collection/sqlcollection/MySqlEmbeddedCollection.cpp:128
#17 0xaefee54b in SqlCollectionFactory::init (this=0x8178240) at
/home/krf/kde-devel/src/amarok/src/collection/sqlcollection/SqlCollection.cpp:65
[snip]

The relevant code in question:

char* defaultsLine = qstrdup( QString( "--defaults-file=%1" ).arg(
defaultsFile ).toAscii().data() );
char* databaseLine = qstrdup( QString( "--datadir=%1" ).arg(
databaseDir ).toAscii().data() );

if( !QFile::exists( defaultsFile ) )
{
QFile df( defaultsFile );
df.open( QIODevice::WriteOnly );
}

if( !QFile::exists( databaseDir ) )
{
QDir dir( databaseDir );
dir.mkpath( "." );
}

static const int num_elements = 5;
char **server_options = new char* [ num_elements + 1 ];
server_options[0] = "amarokmysqld";
server_options[1] = defaultsLine;
server_options[2] = databaseLine;
server_options[3] = "--default-storage-engine=MYISAM";
server_options[4] = "--skip-innodb";
server_options[5] = 0;

char **server_groups = new char* [ 3 ];
server_groups[0] = "amarokserver";
server_groups[1] = "amarokclient";
server_groups[2] = 0;

mysql_library_init(num_elements, server_options, server_groups);
m_db = mysql_init(NULL); // <- crash happens HERE
delete [] server_options;
delete [] server_groups;
delete [] defaultsLine;
delete [] databaseLine;




Re: Odd crash with MySQL Embedded 5.1.28

2008-09-26 Thread Ian Monroe
On Fri, Sep 26, 2008 at 9:37 AM, Ian Monroe [EMAIL PROTECTED] wrote:
 So we're using MySQL Embedded in Amarok, it works fine for most of us
 that use 5.1, including myself. However the following backtrace is
 from the second person to have this issue. Its repeatable, happens at
 startup.

 Is there something about how their MySQL is setup on their system that
 could cause such a crash?
[snip]
mysql_library_init(num_elements, server_options, server_groups);
m_db = mysql_init(NULL); // - crash happens HERE

So we did figure out that mysql_library_init is returning 1, which
explains why mysql_init is crashing.

But how do we figure out what exactly the '1' means? Any clues?

Ian




Re: Odd crash with MySQL Embedded 5.1.28

2008-09-26 Thread Ian Monroe
On Fri, Sep 26, 2008 at 2:27 PM, Martin Gainty [EMAIL PROTECTED] wrote:
 2 items-

 check that the my.cnf file cannot be found on your path

Yeah, it's right; plus we tried switching to --no-defaults instead of
specifying an (empty) file. That didn't work either.

 or your server groups are (usually just server, client) are incorrect

How could they be incorrect? I only vaguely get what they are for,
something about the config groups in my.cnf file.

Ian




Re: Odd crash with MySQL Embedded 5.1.28

2008-09-26 Thread Ian Monroe
On Fri, Sep 26, 2008 at 1:55 PM, Ian Monroe [EMAIL PROTECTED] wrote:
 On Fri, Sep 26, 2008 at 9:37 AM, Ian Monroe [EMAIL PROTECTED] wrote:
 So we're using MySQL Embedded in Amarok, it works fine for most of us
 that use 5.1, including myself. However the following backtrace is
 from the second person to have this issue. Its repeatable, happens at
 startup.

 Is there something about how their MySQL is setup on their system that
 could cause such a crash?
 [snip]
mysql_library_init(num_elements, server_options, server_groups);
m_db = mysql_init(NULL); // - crash happens HERE

 So we did figure out that mysql_library_init is returning 1, which
 explains why mysql_init is crashing.

 But how do we figure out what exactly the '1' means? Any clues?

I created a bug for this why is it returning 1? issue:
http://bugs.mysql.com/39693

Ian




Re: how to figure out what options are supported, was Re: Odd crash with MySQL Embedded 5.1.28

2008-09-26 Thread Ian Monroe
On Fri, Sep 26, 2008 at 4:15 PM, Dan Nelson [EMAIL PROTECTED] wrote:
 In the last episode (Sep 26), Ian Monroe said:
 On Fri, Sep 26, 2008 at 1:55 PM, Ian Monroe [EMAIL PROTECTED] wrote:
  On Fri, Sep 26, 2008 at 9:37 AM, Ian Monroe [EMAIL PROTECTED] wrote:
  So we're using MySQL Embedded in Amarok, it works fine for most of us
  that use 5.1, including myself. However the following backtrace is
  from the second person to have this issue. Its repeatable, happens at
  startup.
 
  Is there something about how their MySQL is setup on their system that
  could cause such a crash?
  [snip]
 mysql_library_init(num_elements, server_options, server_groups);
 m_db = mysql_init(NULL); // - crash happens HERE
 
  So we did figure out that mysql_library_init is returning 1, which
  explains why mysql_init is crashing.
 
  But how do we figure out what exactly the '1' means? Any clues?

 We figured it out: we had --skip-innodb, which crashed anyone who
 didn't have Innodb support built.

 Is there any way we could have it add --skip-innodb if and only if
 Innodb support is present? I can't find a mysql_* function that
 describes server functionality. A compile time option would work as
 well I suppose, since we're currently statically linking.

 You want --loose-skip-innodb.

 http://dev.mysql.com/doc/refman/5.0/en/command-line-options.html

  If an option is prefixed by --loose, a program does not exit with an
  error if it does not recognize the option, but instead issues only a
  warning:

  $ mysql --loose-no-such-option
  mysql: WARNING: unknown option '--no-such-option'

Hey thanks, I think thats exactly what we need.

Ian




Re: Relational Databasing on busy webserver

2008-09-23 Thread Ian Simpson
Hi Ben,

It sounds like what you're looking for is an ENUM value:

http://dev.mysql.com/doc/refman/5.0/en/enum.html

Bear in mind when using this data-type that if you do want to add a new
value (such as a new state/country), you will have to perform an ALTER
TABLE statement, which can take some time to execute over a table
storing a lot of data. Using the relational method means you'd just have
to add one row to a table, which is significantly faster.
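The trade-off is easy to see in miniature. SQLite has no ENUM type, so a CHECK constraint plays its role in this sketch (the table and state list are invented): invalid values are rejected at the database layer, but widening the allowed set means altering the table definition, whereas a lookup table only needs one extra INSERT.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user (
    name TEXT,
    state TEXT CHECK (state IN ('NY', 'CA', 'TX')))""")

conn.execute("INSERT INTO user VALUES ('Ben', 'NY')")  # in the set: allowed
try:
    conn.execute("INSERT INTO user VALUES ('Wes', 'QC')")  # not in the set
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

assert rejected
assert conn.execute("SELECT COUNT(*) FROM user").fetchone()[0] == 1
```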

Thanks 

On Mon, 2008-09-22 at 17:12 -0400, Ben A.H. wrote:
 Hello,
 
 We are setting up a relatively common web application which collects user 
 information... Right off the bat our system will have over 200,000 USER 
 RECORDS so having an efficient database  lookup scheme is critical.
 I am a programmer/developer with some education in databasing but my forte 
 is programming hence this odd (I think) question...
 
 Obviously we'll have the standard USER table  a bunch of supporting tables.
 
 For items like STATE/PROVINCE standard database logic would dictate I setup:
 
 USER
name
email
.etc...
StateID  (foreign key)
 
 
 STATE
StateID
StateName
 
 But I often wonder if there's any benefit in having a State table...
 
 For one, new States/Provinces are not that likely, and removal of these entities 
 is also unlikely (except maybe Quebec :-)), so the chances of having to make 
 alterations to the State table are near nil. It raises the question of whether 
 or not a State Table is even necessary.
 Using a traditional SQL State table, I'd have to do an SQL query to populate 
 User State/Province options every time the New User Registration form is 
 shown - isn't this needless overhead?!
 Would my webforms not load faster if State/Province information was 
 hard-coded as options for a combobox? Is this something people have 
 experimented with?
 
 There are various other fields that I believe could be handled like this for 
 a cumulative performance boost. For example: country, state/province, 
 gender, industry, occupation, ethnicity, language are all options that 
 aren't going to change that often. Then again, when we do have to display 
 the users choice; for example if user.countryID=3 we'd have to have a way to 
 turn that number 3 into the word Canada when we display the users 
 profile... I'd probably do this via XML lookup.
 
 Has anyone experimented with the benefits/tradeoffs of such a scheme? As 
 I've said, we are dealing with at least 200,000 user records, probably 
 300,000-400,000 in the next year. The User table contains at least 50 
 attributes, 15 of which are Foreign Keys that link to tables that will 
 likely never change (the users choices for countryID, stateID, 
 OperatingSystemID, internet_connectionTypeID, internetUserFrequencyID, 
 industryID, occupationID)...
 
 ThanX in advance
 Ben
 
 
 
-- 
Ian Simpson
System Administrator
MyJobGroup






threading in mysql-embedded

2008-09-08 Thread Ian Monroe
The limitations section of the MySQL Embedded documentation notes that
InnoDB isn't reentrant. Simply putting a mutex around all calls to MySQL
should be sufficient, correct? So different threads are OK, just not
different threads at the same time. Correct?

It kind of depends on the definition of reentrant being used so I ask.

What are the rules for MyISAM and threads?

Any general thoughts on what backend to use for mysql embedded also
appreciated. We noticed that InnoDB seems to enjoy exit()ing on error
conditions (like disk full) which isn't very nice.

Ian




RE: How do I (can I) use aggregate functions inside a select

2008-07-25 Thread Ian Simpson
Hi David,

Try

Select company, state, sales, sum(sales) / sales as percent
From Sales GROUP BY company, state;

Not sure if you always want to group by state; if each company exists in
only one state then the group by is irrelevant, if not then it will give
you the by-state breakdown.
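On servers with window functions (MySQL 8.0+; SQLite 3.25+ is used as a stand-in below), the share-of-total calculation can be made explicit without wrestling with GROUP BY rules. This sketch uses the thread's table name with invented rows, and computes the percent in the usual sense: each row's sales over the grand total.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (company TEXT, state TEXT, sales REAL)")
conn.executemany("INSERT INTO Sales VALUES (?, ?, ?)", [
    ("Acme", "NC", 75.0), ("Bolt", "NC", 25.0),
])

# SUM(sales) OVER () is the grand total, available on every row without
# collapsing the result set the way GROUP BY would.
rows = conn.execute("""
    SELECT company, state, sales,
           100.0 * sales / SUM(sales) OVER () AS percent
    FROM Sales ORDER BY company""").fetchall()

assert rows == [("Acme", "NC", 75.0, 75.0), ("Bolt", "NC", 25.0, 25.0)]
```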



On Fri, 2008-07-25 at 10:53 -0400, David Ruggles wrote:
 I get:
 Error Code : 1140
 Mixing of GROUP columns (MIN(),MAX(),COUNT(),...) with no GROUP columns is
 illegal if there is no GROUP BY clause
 
 Thanks,
 
 David Ruggles
 CCNA MCSE (NT) CNA A+
 Network Engineer  Safe Data, Inc.
 (910) 285-7200[EMAIL PROTECTED]
 
 
 
 -Original Message-
 From: Ananda Kumar [mailto:[EMAIL PROTECTED] 
 Sent: Friday, July 25, 2008 10:37 AM
 To: David Ruggles
 Cc: mysql
 Subject: Re: How do I (can I) use aggregate functions inside a select
 
 
 yes, u can use the below sql.
 
 regards
 anandkl
 
 
 On 7/25/08, David Ruggles [EMAIL PROTECTED] wrote:
 
  I may be approaching this all wrong, but I need to know a percentage of
  total sales within a select statement.
 
  So I can do something like this:
  Select company, state, sales, sum(sales) / sales as percent
  From Sales
 
 
  Thanks,
 
  David Ruggles
  CCNA MCSE (NT) CNA A+
  Network EngineerSafe Data, Inc.
  (910) 285-7200  [EMAIL PROTECTED]
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 No virus found in this incoming message.
 Checked by AVG - http://www.avg.com 
 Version: 8.0.138 / Virus Database: 270.5.6/1572 - Release Date: 7/25/2008
 6:51 AM
 
 
 
-- 
Ian Simpson
System Administrator
MyJobGroup






Re: Access denied for user 'debian-sys-maint'@'localhost'

2008-07-22 Thread Ian Simpson
Hi Jesse,

If you're specifying the password in plain text, you shouldn't put the
PASSWORD directive in there; you only use PASSWORD if you're using the
hashed password that MySQL will actually store.


GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED
BY 'LongPasswordHere' WITH GRANT OPTION

or

GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED
BY PASSWORD 'HexadecimalString' WITH GRANT OPTION

Note that GRANT statements take effect immediately; you only need to
execute FLUSH PRIVILEGES if you modify the grant tables directly with
INSERT or UPDATE, since MySQL caches the privilege tables.

On Mon, 2008-07-21 at 20:35 -0400, Jesse wrote:
 OK. This is driving me Nutz 8-p
 
 Any time I try to restart mysql, I get the error, Access denied for user 
 'debian-sys-maint'@'localhost'
 
 My understanding is that the password for the debian-sys-maint user is found 
 in /etc/mysql/debian.cnf  So, I edit that, and note the password.
 
 I then execute the following in MySQL (with the correct password, of 
 course):
 GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 
 PASSWORD 'LongPasswordHere' WITH GRANT OPTION
 
 To test it out, I try a mysql -u debian-sys-maint -p, type in the password 
 and get the Access denied error again.  What's going on? Why can't I get 
 this to work?
 
 Jesse 
 
 
-- 
Ian Simpson
System Administrator
MyJobGroup

This email may contain confidential information and is intended for the 
recipient(s) only. If an addressing or transmission error has misdirected this 
email, please notify the author by replying to this email. If you are not the 
intended recipient(s) disclosure, distribution, copying or printing of this 
email is strictly prohibited and you should destroy this mail. Information or 
opinions in this message shall not be treated as neither given nor endorsed by 
the company. Neither the company nor the sender accepts any responsibility for 
viruses or other destructive elements and it is your responsibility to scan any 
attachments.

Re: delete query question

2008-07-08 Thread Ian Simpson
If the tables are InnoDB, you could temporarily set up a foreign key
relationship between the two, with the 'ON DELETE CASCADE' option. 

On Tue, 2008-07-08 at 11:14 -0400, Jeff Mckeon wrote:
 I think this is possible but I'm having a total brain fart as to how to
 construct the query..
 
 Table2.ticket = table1.ID
 
 Table2 is a many to 1 relationship to table1
 
 I need to delete all records from table1 where created <
 unix_timestamp(date_sub(now(), interval 3 month)) 
 And all rows from table2 where Table2.ticket = Table1.ID (of the deleted
 rows..)
 
 Can't this be done in one query? Or two?
 
 Thanks,
 
 Jeff
 
 
 
 
-- 
Ian Simpson
System Administrator
MyJobGroup


RE: delete query question

2008-07-08 Thread Ian Simpson
Oh well ;)

It looks like you can use joins in a delete statement, and delete the
joined rows, which will delete from the individual tables.

So something like:

delete table1, table2 from table1 inner join table2 on table1.ID =
table2.ticket where...

should do it

I modified the above code from 

http://dev.mysql.com/doc/refman/5.0/en/delete.html

just search in the page for 'join' and you'll find the relevant section


On Tue, 2008-07-08 at 11:35 -0400, Jeff Mckeon wrote:
 
  -Original Message-
  From: Ian Simpson [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, July 08, 2008 11:27 AM
  To: Jeff Mckeon
  Cc: mysql@lists.mysql.com
  Subject: Re: delete query question
  
  If the tables are InnoDB, you could temporarily set up a foreign key
  relationship between the two, with the 'ON DELETE CASCADE' option.
  
 
 Nope, MyISAM...
 
  On Tue, 2008-07-08 at 11:14 -0400, Jeff Mckeon wrote:
   I think this is possible but I'm having a total brain fart as to how
  to
   construct the query..
  
   Table2.ticket = table1.ID
  
   Table2 is a many to 1 relationship to table1
  
   I need to delete all records from table1 where created <
   unix_timestamp(date_sub(now(), interval 3 month))
   And all rows from table2 where Table2.ticket = Table1.ID (of the
  deleted
   rows..)
  
   Can't this be done in one query? Or two?
  
   Thanks,
  
   Jeff
  
  
  
  
  --
  Ian Simpson
  System Administrator
  MyJobGroup
  
 
 
-- 
Ian Simpson
System Administrator
MyJobGroup


Re: Do I need to use GROUP BY to do this?

2008-06-24 Thread Ian Simpson
Off the top of my head I can't think of a way of getting the output in
the format that you want.

If you use a query like:

SELECT SUBSTRING(sales_date,1,10), sales_type, COUNT(sales_id)
FROM sales_activity
GROUP BY SUBSTRING(sales_date,1,10), sales_type;

You'll get output like:

Date        type  number

2008-06-15  1   4
2008-06-15  2   2
2008-06-16  1   2
2008-06-17  1   2

which is the data that you want in a different output format.
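One way to get the pivoted one-row-per-day format is conditional aggregation (`SUM(CASE WHEN ...)`), which works in both MySQL and SQLite. A runnable sketch using Python's sqlite3 module with the sample rows from the thread (SQLite's `SUBSTR` standing in for MySQL's `SUBSTRING`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sales_activity (
    sales_id INTEGER PRIMARY KEY,
    sales_date TEXT,
    sales_type INTEGER)""")
rows = [
    (1, '2008-06-15 13:00:00', 1), (2, '2008-06-15 13:00:00', 1),
    (3, '2008-06-15 13:00:00', 1), (4, '2008-06-15 13:00:00', 1),
    (5, '2008-06-15 13:00:00', 2), (6, '2008-06-15 13:00:00', 2),
    (7, '2008-06-16 13:00:00', 1), (8, '2008-06-16 13:00:00', 1),
    (9, '2008-06-17 13:00:00', 1), (10, '2008-06-17 13:00:00', 1),
]
conn.executemany("INSERT INTO sales_activity VALUES (?, ?, ?)", rows)

# Pivot type 1 (sales) and type 2 (returns) into separate columns in one pass.
result = conn.execute("""
    SELECT SUBSTR(sales_date, 1, 10) AS day,
           SUM(CASE WHEN sales_type = 1 THEN 1 ELSE 0 END) AS sales_count,
           SUM(CASE WHEN sales_type = 2 THEN 1 ELSE 0 END) AS return_count
    FROM sales_activity
    GROUP BY day
    ORDER BY day""").fetchall()
print(result)
# -> [('2008-06-15', 4, 2), ('2008-06-16', 2, 0), ('2008-06-17', 2, 0)]
```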

Thanks

On Mon, 2008-06-23 at 18:16 -0700, Grant Giddens wrote:
 Ian,
 
   Thanks for the help, this query worked perfectly.  Can you also help
 me with one more query?  Say my sales_type=1 for a sale, and
 sales_type=2 for a return.  I'd like to do 1 query to get a count of
 the sales and returns for each day.  Here was my test data again:
 
 INSERT INTO `sales_activity` VALUES (1, '2008-06-15 13:00:00', 1);
 INSERT INTO `sales_activity` VALUES (2, '2008-06-15 13:00:00', 1);
 INSERT INTO `sales_activity` VALUES (3, '2008-06-15 13:00:00', 1);
 INSERT INTO `sales_activity` VALUES (4, '2008-06-15 13:00:00', 1);
 INSERT INTO `sales_activity` VALUES (5, '2008-06-15 13:00:00', 2);
 INSERT INTO `sales_activity` VALUES (6, '2008-06-15 13:00:00', 2);
 INSERT INTO `sales_activity` VALUES (7, '2008-06-16 13:00:00', 1);
 INSERT INTO `sales_activity` VALUES (8, '2008-06-16 13:00:00', 1);
 INSERT INTO `sales_activity` VALUES (9, '2008-06-17 13:00:00', 1);
 INSERT INTO `sales_activity` VALUES (10, '2008-06-17 13:00:00', 1);
 
 The result of the query should be:
 
 date         sales   returns
 ----------   -----   -------
 2008-06-15   4       2
 2008-06-16   2       0
 2008-06-17   2       0
 
 Thanks,
 Grant
 
 --- On Wed, 6/18/08, Ian Simpson [EMAIL PROTECTED] wrote:
 From: Ian Simpson [EMAIL PROTECTED]
 Subject: Re: Do I need to use GROUP BY to do this?
 To: Grant Giddens [EMAIL PROTECTED]
 Cc: mysql@lists.mysql.com
 Date: Wednesday, June 18, 2008, 11:02 AM
 
 
 
 I happen to have worked on a similar query this morning, so
 it's in my
 mind :)
 
 SELECT SUBSTRING(sales_date,1,10), COUNT(sales_id)
 FROM sales_activity
 WHERE sales_type = 1
 GROUP BY SUBSTRING(sales_date,1,10);
 
 should do the trick.
 
 On Tue, 2008-06-17 at 18:21 -0700, Grant Giddens wrote:
  Hi,
  
   I have a table where I keep sales transactions, so
  I'm trying to do a query that will count the number of
  transactions per day.
  
  My test data looks like:
  
  -- 
  -- Table structure for table `sales_activity`
  -- 
  
  CREATE TABLE `sales_activity` (
    `sales_id` int(11) NOT NULL auto_increment,
    `sales_date` datetime NOT NULL default '0000-00-00 00:00:00',
    `sales_type` tinyint(4) NOT NULL default '0',
    PRIMARY KEY (`sales_id`)
  ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=11 ;
  
  -- 
  -- Dumping data for table `sales_activity`
 -- 
 Ian Simpson
 System Administrator
 MyJobGroup
 
 
-- 
Ian Simpson
System Administrator
MyJobGroup


Re: Reset of Status Parameters

2008-06-20 Thread Ian Simpson
I tend to use the 'mytop' program, which shows the average
queries/second for the entire lifetime and for the last 5 seconds, as
well as showing a bunch of other statistics and a list of running
queries. It's a handy little monitoring tool.
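The "last 5 seconds" figure mytop shows is just the delta of a cumulative counter (e.g. the Questions value from SHOW GLOBAL STATUS) divided by the sampling interval, so no server restart is needed. A minimal sketch of the arithmetic, with made-up counter values:

```python
def current_rate(prev_count, curr_count, interval_seconds):
    """Rate of a monotonically increasing counter over one sampling interval."""
    return (curr_count - prev_count) / interval_seconds

# e.g. Questions was 1_204_000 five seconds ago and is 1_204_650 now:
qps = current_rate(1_204_000, 1_204_650, 5)
print(qps)  # -> 130.0
```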

On Fri, 2008-06-20 at 12:17 +0530, Venu Madhav Padakanti wrote:
 I am using MySQL version 5.0.22, I am interested in knowing the current 
 performance on the MySQL.
 
 With the status command we can get the queries per second, but it is 
 averaged over the entire time MySQL has been up and running, not 
 the current rate.
 
 Is there any way to reset that counter so that the data reflects the 
 current rate, without restarting MySQL?
 
 Thanks in advance
 ..venu
-- 
Ian Simpson
System Administrator
MyJobGroup


Re: Do I need to use GROUP BY to do this?

2008-06-18 Thread Ian Simpson
I happen to have worked on a similar query this morning, so it's in my
mind :)

SELECT SUBSTRING(sales_date,1,10), COUNT(sales_id)
FROM sales_activity
WHERE sales_type = 1
GROUP BY SUBSTRING(sales_date,1,10);

should do the trick.

On Tue, 2008-06-17 at 18:21 -0700, Grant Giddens wrote:
 Hi,
 
 I have a table where I keep sales transactions, so I'm trying to do a 
 query that will count the number of transactions per day.
 
 My test data looks like:
 
 -- 
 -- Table structure for table `sales_activity`
 -- 
 
 CREATE TABLE `sales_activity` (
   `sales_id` int(11) NOT NULL auto_increment,
   `sales_date` datetime NOT NULL default '0000-00-00 00:00:00',
   `sales_type` tinyint(4) NOT NULL default '0',
   PRIMARY KEY (`sales_id`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=11 ;
 
 -- 
 -- Dumping data for table `sales_activity`
-- 
Ian Simpson
System Administrator
MyJobGroup


Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
Hi list,

Have a bit of a mystery here that I hope somebody can help with.

I've just got a new server that I'm using as a dedicated MySQL server.
In terms of hardware it's pretty much identical, if not slightly
superior to an existing server already in production use.

It's having a real struggle processing INSERT statements to InnoDB
tables; it's maxing out at around 100 inserts per second, even with very
simple two column tables (inserts into MyISAM tables run fine).
Meanwhile, the original server can happily process around 1000
inserts/sec into an identical table.

The MySQL configuration of the two databases is identical, except for
the tablespace file size (the new server has a larger tablespace
defined), and the InnoDB logs (again, new server has larger logs).

Can anybody suggest an area of investigation as to the cause?

Thanks,
-- 
Ian Simpson


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
Hi Alex,

Configurations are identical, other than the differences I initially
mentioned. I've diffed both the configuration files and the output of
SHOW VARIABLES on both servers.

I've contacted my hosting provider to ask about the RAID settings.

Variable_name: innodb_flush_log_at_trx_commit
Value: 1
Variable_name: sync_binlog
Value: 0
Variable_name: innodb_locks_unsafe_for_binlog
Value: OFF

Thanks

-- 
Ian Simpson

On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu wrote:
 Please check if the my.cnf configurations to be the same.
 
  What are your configuration parameters in terms of innodb flush log at
 trx commit, bin logging, sync binlog and innodb unsafe for binlog?
 
 If the systems have raid, check if the BBWC is enabled on the new host
 and WB is enabled.
 
 
 On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson [EMAIL PROTECTED]
 wrote:
 Hi list,
 
 Have a bit of a mystery here that I hope somebody can help
 with.
 
 I've just got a new server that I'm using as a dedicated MySQL
 server.
 In terms of hardware it's pretty much identical, if not
 slightly
 superior to an existing server already in production use.
 
 It's having a real struggle processing INSERT statements to
 InnoDB
 tables; it's maxing out at around 100 inserts per second, even
 with very
 simple two column tables (inserts into MyISAM tables run
 fine).
 Meanwhile, the original server can happily process around 1000
 inserts/sec into an identical table.
 
 The MySQL configuration of the two databases is identical,
 except for
 the tablespace file size (the new server has a larger
 tablespace
 defined), and the InnoDB logs (again, new server has larger
 logs).
 
 Can anybody suggest an area of investigation as to the cause?
 
 Thanks,
 --
 Ian Simpson
 
 
 
 
 -- 
 Thanks
 Alex
 http://alexlurthu.wordpress.com


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
Hi guys, thanks for pitching in.

The inserts are from replication; we're not using transactions on the
master (yet), and I don't think there's a way of telling MySQL to batch
incoming replication statements if they're not already in a transaction.
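The cost of one-transaction-per-statement can be sketched with SQLite through Python's sqlite3 module as an analogy: with durable commits enabled (SQLite's `synchronous = FULL` standing in for InnoDB's `innodb_flush_log_at_trx_commit = 1`), every autocommitted INSERT pays for a log flush to disk, while a batched transaction pays once. All names and sizes here are illustrative:

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
conn.execute("PRAGMA synchronous = FULL")           # durable commits
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

n = 200
start = time.perf_counter()
for i in range(n):                       # one implicit commit (flush) per INSERT
    conn.execute("INSERT INTO t VALUES (?, 'x')", (i,))
per_statement = time.perf_counter() - start

start = time.perf_counter()
conn.execute("BEGIN")                    # batch: a single commit for all rows
for i in range(n):
    conn.execute("INSERT INTO t VALUES (?, 'x')", (i,))
conn.execute("COMMIT")
batched = time.perf_counter() - start

# Batching is typically far faster because only one flush is paid.
print(f"per-statement: {per_statement:.3f}s, batched: {batched:.3f}s")
```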

Disk usage: the older server (the one that's running fine) is running
more transactions per second, but has lower blocks written and read per
second than the new server:

The working server (which in addition to replicating is also handling a
bunch of read queries)

Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda  88.47   782.20   998.77 9046888130 11551757459

The new server, which is just trying to handle replication

Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda  77.83  1367.55  2914.72  358474084  764029986

Thanks,

-- 
Ian Simpson



On Fri, 2008-06-13 at 19:15 +0530, Alex Arul Lurthu wrote:
 also how often do you issue a commit. batching the inserts inside a
 transaction might help.
 
 On Fri, Jun 13, 2008 at 6:53 PM, Ananda Kumar [EMAIL PROTECTED]
 wrote:
 check for iostat to see if the disk is heavly used. 
 
 
 On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote: 
 Hi Alex,
 
 Configurations are identical, other than the
 differences I initially
 mentioned. I've diffed both the configuration files
 and the output of
 SHOW VARIABLES on both servers.
 
 I've contacted my hosting provider to ask about the
 RAID settings.
 
 Variable_name: innodb_flush_log_at_trx_commit
Value: 1
 Variable_name: sync_binlog
Value: 0
 Variable_name: innodb_locks_unsafe_for_binlog
Value: OFF
 
 Thanks
 
 --
 Ian Simpson
 
 On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu
 wrote:
  Please check if the my.cnf configurations to be the
 same.
 
   What are your configuration parameters in terms of
 innodh flush log
  trx commit , bin logging, sync binlog and innodb
 unsafe for binlog ?
 
  If the systems have raid, check if the BBWC is
 enabled on the new host
  and WB is enabled.
 
 
  On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson
 [EMAIL PROTECTED]
  wrote:
  Hi list,
 
  Have a bit of a mystery here that I hope
 somebody can help
  with.
 
  I've just got a new server that I'm using as
 a dedicated MySQL
  server.
  In terms of hardware it's pretty much
 identical, if not
  slightly
  superior to an existing server already in
 production use.
 
  It's having a real struggle processing
 INSERT statements to
  InnoDB
  tables; it's maxing out at around 100
 inserts per second, even
  with very
  simple two column tables (inserts into
 MyISAM tables run
  fine).
  Meanwhile, the original server can happily
 process around 1000
  inserts/sec into an identical table.
 
  The MySQL configuration of the two databases
 is identical,
  except for
  the tablespace file size (the new server has
 a larger
  tablespace
  defined), and the InnoDB logs (again, new
 server has larger
  logs).
 
  Can anybody suggest an area of investigation
 as to the cause?
 
  Thanks,
  --
  Ian Simpson
 
RE: Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
Hi Guys,

Having delved a little more into the capabilities of iostat, I've
discovered that the drive bandwidth seems to be maxed out while MySQL is
running, which I'd peg as the primary candidate for the problem.

Looks like I'll be having more words with my hosting company about
this...

Thanks for all your help

-- 
Ian Simpson

On Fri, 2008-06-13 at 10:40 -0400, Jerry Schwartz wrote:
 Disk usage: the older server (the one that's running fine) is running
 more transactions per second, but has lower blocks written and read per
 second than the new server:
 [JS] That, to me, suggests that the difference might be in the way the 
 systems 
 themselves are configured. Unfortunately, I don't know how Linux handles file 
 system buffering.
 
 The working server (which in addition to replicating is also handling a
 bunch of read queries)
 
 Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
 sda  88.47   782.20   998.77 9046888130 11551757459
 
 The new server, which is just trying to handle replication
 
 Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
 sda  77.83  1367.55  2914.72  358474084  764029986
 
 Thanks,
 ?
 --
 Ian Simpson
 
 
 
 On Fri, 2008-06-13 at 19:15 +0530, Alex Arul Lurthu wrote:
  also how often do you issue a commit. batching the inserts inside a
  transaction might help.
 
  On Fri, Jun 13, 2008 at 6:53 PM, Ananda Kumar [EMAIL PROTECTED]
  wrote:
  check for iostat to see if the disk is heavly used.
 
 
  On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote:
  Hi Alex,
 
  Configurations are identical, other than the
  differences I initially
  mentioned. I've diffed both the configuration files
  and the output of
  SHOW VARIABLES on both servers.
 
  I've contacted my hosting provider to ask about the
  RAID settings.
 
  Variable_name: innodb_flush_log_at_trx_commit
 Value: 1
  Variable_name: sync_binlog
 Value: 0
  Variable_name: innodb_locks_unsafe_for_binlog
 Value: OFF
 
  Thanks
 
  --
  Ian Simpson
 
  On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu
  wrote:
   Please check if the my.cnf configurations to be the
  same.
  
What are your configuration parameters in terms of
  innodh flush log
   trx commit , bin logging, sync binlog and innodb
  unsafe for binlog ?
  
   If the systems have raid, check if the BBWC is
  enabled on the new host
   and WB is enabled.
  
  
   On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson
  [EMAIL PROTECTED]
   wrote:
   Hi list,
  
   Have a bit of a mystery here that I hope
  somebody can help
   with.
  
   I've just got a new server that I'm using as
  a dedicated MySQL
   server.
   In terms of hardware it's pretty much
  identical, if not
   slightly
   superior to an existing server already in
  production use.
  
   It's having a real struggle processing
  INSERT statements to
   InnoDB
   tables; it's maxing out at around 100
  inserts per second, even
   with very
   simple two column tables (inserts into
  MyISAM tables run
   fine).
   Meanwhile, the original server can happily
  process around 1000
   inserts/sec into an identical table.
  
   The MySQL configuration of the two databases
  is identical,
   except for
   the tablespace file size (the new server has
  a larger
   tablespace
   defined), and the InnoDB logs (again, new
  server has larger
   logs).
  
   Can anybody suggest an area of investigation
  as to the cause?
  
   Thanks,
   --
   Ian Simpson

RE: Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
Hi Jerry,

It could be a kernel issue; however, currently I'm suspecting that the
drive in the new server simply doesn't have the same bandwidth
capability. The iostat results I'm getting (although I'm not an expert
in reading them, having only learned of it about 3 hours ago) suggest
that the older server is handling roughly the same data quantities, but
just using a much lower percentage of the drive's bandwidth.

I can't seem to find a tool which reports on exactly how much write
bandwidth a drive has; everything seems to focus on reading speed.

Thanks,


-- 
Ian Simpson


On Fri, 2008-06-13 at 11:18 -0400, Jerry Schwartz wrote:
 Having delved a little more into the capabilities of iostat, I've
 discovered that the drive bandwidth seems to be maxed out while MySQL is
 running, which I'd peg as the primary candidate for the problem.
 [JS] That suggests even more strongly that there is a difference in the 
 kernel 
 configuration. More physical I/O would drive the traffic up, by definition. 
 Either MySQL is causing this, or the system file system is causing it.
 
 Looks like I'll be having more words with my hosting company about
 this...
 
 Thanks for all your help
 ?
 --
 Ian Simpson
 
 On Fri, 2008-06-13 at 10:40 -0400, Jerry Schwartz wrote:
  Disk usage: the older server (the one that's running fine) is running
  more transactions per second, but has lower blocks written and read
 per
  second than the new server:
  [JS] That, to me, suggests that the difference might be in the way the
 systems
  themselves are configured. Unfortunately, I don't know how Linux
 handles file
  system buffering.
  
  The working server (which in addition to replicating is also handling
 a
  bunch of read queries)
  
  Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read
 Blk_wrtn
  sda  88.47   782.20   998.77 9046888130
 11551757459
  
  The new server, which is just trying to handle replication
  
  Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read
 Blk_wrtn
  sda  77.83  1367.55  2914.72  358474084
 764029986
  
  Thanks,
  ?
  --
  Ian Simpson
  
  
  
  On Fri, 2008-06-13 at 19:15 +0530, Alex Arul Lurthu wrote:
   also how often do you issue a commit. batching the inserts inside a
   transaction might help.
  
   On Fri, Jun 13, 2008 at 6:53 PM, Ananda Kumar [EMAIL PROTECTED]
   wrote:
   check for iostat to see if the disk is heavly used.
  
  
   On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote:
   Hi Alex,
  
   Configurations are identical, other than the
   differences I initially
   mentioned. I've diffed both the configuration files
   and the output of
   SHOW VARIABLES on both servers.
  
   I've contacted my hosting provider to ask about the
   RAID settings.
  
   Variable_name: innodb_flush_log_at_trx_commit
  Value: 1
   Variable_name: sync_binlog
  Value: 0
   Variable_name: innodb_locks_unsafe_for_binlog
  Value: OFF
  
   Thanks
  
   --
   Ian Simpson
  
   On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu
   wrote:
Please check if the my.cnf configurations to be
 the
   same.
   
 What are your configuration parameters in terms
 of
   innodh flush log
trx commit , bin logging, sync binlog and innodb
   unsafe for binlog ?
   
If the systems have raid, check if the BBWC is
   enabled on the new host
and WB is enabled.
   
   
On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson
   [EMAIL PROTECTED]
wrote:
Hi list,
   
Have a bit of a mystery here that I hope
   somebody can help
with.
   
I've just got a new server that I'm using
 as
   a dedicated MySQL
server.
In terms of hardware it's pretty much
   identical, if not
slightly
superior to an existing server already in
   production use.
   
It's having a real struggle processing
   INSERT statements to
InnoDB
tables; it's maxing out at around 100
   inserts per second, even
with very

RE: Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
That's pretty much what I've been doing to get that the drive is running
at 100% bandwidth.

What I'd like is something that just gives the bandwidth of the device
in terms of Mb/s: you can probably work it out using that iostat
command, seeing how much it wrote and what percentage of the bandwidth
it's using, and then doing a calculation with those numbers to get the
100% value, but I don't know if that's valid, since there are generally
a number of other operations going on at the same time.
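In the absence of a dedicated tool, a rough MB/s figure can be obtained by timing sequential writes with an fsync, so the result reflects the device rather than the page cache. A minimal sketch (the sizes are deliberately small for illustration; a real test should write well past any cache, hundreds of MB or more, on an otherwise idle disk):

```python
import os
import tempfile
import time

def write_bandwidth_mb_s(path, chunk_mb=1, chunks=16):
    """Estimate sequential write bandwidth in MB/s by timing fsync'd writes."""
    buf = b"\0" * (chunk_mb * 1024 * 1024)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.perf_counter()
    for _ in range(chunks):
        os.write(fd, buf)
    os.fsync(fd)                 # force OS buffers out to the device
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.unlink(path)
    return (chunk_mb * chunks) / elapsed

target = os.path.join(tempfile.gettempdir(), "bw_probe.bin")
print(round(write_bandwidth_mb_s(target), 1), "MB/s (approximate)")
```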

Thanks

-- 
Ian Simpson

On Fri, 2008-06-13 at 08:48 -0700, Wm Mussatto wrote:
 On Fri, June 13, 2008 08:26, Ian Simpson wrote:
  Hi Jerry,
 
  It could be a kernel issue; however, currently I'm suspecting that the
  drive in the new server simply doesn't have the same bandwidth
  capability. The iostat results I'm getting (although I'm not an expert
  in reading them, having only learned of it about 3 hours ago) suggest
  that the older server is handling roughly the same data quantities, but
  just using a much lower percentage of the drive's bandwidth.
 
  I can't seem to find a tool which reports on exactly how much write
  bandwidth a drive has; everything seems to focus on reading speed.
 
  Thanks,
 
  
  --
  Ian Simpson
 Try something like:
 iostat -xk /dev/sda /dev/sdb /dev/sdc 10
 where the /dev/... are the drives you want to examine and '10' is the
 redisplay rate. last column is %util.
 
 Hope this helps.
 
 
 
  On Fri, 2008-06-13 at 11:18 -0400, Jerry Schwartz wrote:
  Having delved a little more into the capabilities of iostat, I've
  discovered that the drive bandwidth seems to be maxed out while MySQL
  is
  running, which I'd peg as the primary candidate for the problem.
  [JS] That suggests even more strongly that there is a difference in the
  kernel
  configuration. More physical I/O would drive the traffic up, by
  definition.
  Either MySQL is causing this, or the system file system is causing it.
  
  Looks like I'll be having more words with my hosting company about
  this...
  
  Thanks for all your help
  ?
  --
  Ian Simpson
  
  On Fri, 2008-06-13 at 10:40 -0400, Jerry Schwartz wrote:
   Disk usage: the older server (the one that's running fine) is
  running
   more transactions per second, but has lower blocks written and read
  per
   second than the new server:
   [JS] That, to me, suggests that the difference might be in the way
  the
  systems
   themselves are configured. Unfortunately, I don't know how Linux
  handles file
   system buffering.
   
   The working server (which in addition to replicating is also
  handling
  a
   bunch of read queries)
   
   Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read
  Blk_wrtn
   sda  88.47   782.20   998.77 9046888130
  11551757459
   
   The new server, which is just trying to handle replication
   
   Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read
  Blk_wrtn
   sda  77.83  1367.55  2914.72  358474084
  764029986
   
   Thanks,
   ?
   --
   Ian Simpson
   
   
   
   On Fri, 2008-06-13 at 19:15 +0530, Alex Arul Lurthu wrote:
also how often do you issue a commit. batching the inserts inside
  a
transaction might help.
   
On Fri, Jun 13, 2008 at 6:53 PM, Ananda Kumar [EMAIL PROTECTED]
wrote:
check for iostat to see if the disk is heavly used.
   
   
On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote:
Hi Alex,
   
Configurations are identical, other than the
differences I initially
mentioned. I've diffed both the configuration
  files
and the output of
SHOW VARIABLES on both servers.
   
I've contacted my hosting provider to ask about
  the
RAID settings.
   
Variable_name: innodb_flush_log_at_trx_commit
   Value: 1
Variable_name: sync_binlog
   Value: 0
Variable_name: innodb_locks_unsafe_for_binlog
   Value: OFF
   
Thanks
   
--
Ian Simpson
   
On Fri, 2008-06-13 at 17:43 +0530, Alex Arul
  Lurthu
wrote:
 Please check if the my.cnf configurations to be
  the
same.

  What are your configuration parameters in terms
  of
innodh flush log
 trx commit , bin logging, sync binlog and innodb
unsafe for binlog ?

 If the systems have raid, check if the BBWC is
enabled on the new host
 and WB is enabled.


 On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson
 [EMAIL PROTECTED]

URGENT! up2date -u deleted mysql...safest way to recover on production server

2008-05-04 Thread Ian M. Evans

Decided to run up2date -u

I noticed that our website was tossing off mysql errors. Quickly 
realized that mysql was down. Went to restart but it couldn't find 
mysqld_safe, mysqld, mysqladmin, etc.


I used locate and it couldn't find the binaries anywhere...it appears 
that up2date -u had somehow uninstalled MySQL 4.0.27.


I have the rpms...

The /var/lib/mysql/mysite database directory is fine as is the 
/var/lib/mysql/mysql database directory with all the permissions 
(columns_priv.MYD  columns_priv.MYI  columns_priv.frm  db.MYD  db.MYI 
db.frm  func.MYD  func.MYI  func.frm  host.MYD  host.MYI  host.frm 
tables_priv.MYD  tables_priv.MYI  tables_priv.frm  user.MYD  user.MYI 
user.frm)


What's the safest way to reinstall the rpms and have it 
use/find/whatever the permissions database without it creating a new one?


Thanks...guess I'm drinking coffee until I get this done!



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: URGENT! up2date -u deleted mysql...safest way to recover on production server

2008-05-04 Thread Ian M. Evans

Glyn Astill wrote:

I'd back up the data directories then try and then re-install mysql (sorry, I 
know little about red hat and it's package management). It shouldn't overwrite 
your data if it's already present anyway.


Great. Thought that would be the case, but without sleep, I wasn't sure. :-)

Worked fine.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Starting a 2nd MySQL instance on UNIX

2008-04-24 Thread Ian Simpson

Mark,

When you try to log in to the new instance, are you specifying the new 
port number to the client? If you don't give it the new port number, 
then it will connect to the default port, which is presumably your 
4.0.20 instance.


Mark-E wrote:

I have a Solaris box where MySQL 4.0.20 instance is running (to support
Bugzilla 2.22). I have loaded mysql5.0 on the same box (for Bugzilla 3.0.3)
and created a new mysql50 user that I want to use to run this instance with.
I tried to start the instance on another port by running the following
command...

./bin/mysqld_safe --defaults-file=/usr/local/mysql-5.0/my.cnf
--socket=/tmp/mysql50/mysql.sock --port=3307 --basedir=/usr/local/mysql-5.0
--datadir=/usr/local/mysql-5.0/data
--pid-file=/usr/local/mysql-5.0/mysql50.pid --user=mysql50

The instance appears to start, but the message "Starting the instance" comes
up and I never get back to the system prompt; it just sits there. If I open
another terminal window and do a ps -ef | grep mysql, I can see the new
processes running. There is nothing in the error log. 


I ran the mysql_install_db.sh script to create the mysql database; however, I
cannot log in. I thought that it creates a root user with no password. I
tried logging in as root with no password but no luck. If I use the mysql
4.0.20 root user password, I get into the 4.0.20 instance even though the
mysql50 user does not have mysql 4.0.20 in its path. When I run mysql at
the prompt, how would I differentiate between the 2 instances?

So at this point I am stuck. If anyone out there can help guide me on what I
need to do to get the instance up and running properly, I would appreciate it.
I am rather new to MySQL, and I have read through the docs but things are
still not very clear.


Thanks! 


Mark
  






Re: /tmp/mysql.sock disappears

2008-03-01 Thread Ian
Hi,

Okay, I'll remove the extra log then.

The problem isn't that the socket doesn't get created; it's that it randomly
disappears when mysql has been running for weeks already. So /tmp's
permissions are correct, and everything works for an indeterminate amount of
time, then the socket just disappears and mysql stops working.

Thanks
Ian

On Fri, Feb 29, 2008 at 5:46 PM, Vidal Garza [EMAIL PROTECTED] wrote:

 Ian escribió:
  Hi,
 
  Okay, I have added that and will wait and see when it happens again if
 there
  is anything in that log. Just out of interest, does that log show
 anything
  different to the /var/db/mysql/hostnameofunit.err file ?
 
  Cheers
  Ian
 
  On Thu, Feb 28, 2008 at 8:44 PM, Vidal Garza [EMAIL PROTECTED] wrote:
 
 
  Its for test.
  put the log file on my.cnf and tellus what going on
 
  my.cnf
  ...
  log-error=/var/db/mysql/Server_Error.log
 
 
  Ian escribió:
 
  Hi,
 
  We use the following sh script to start (its the default one when
 
  installed)
 
  cat /usr/local/etc/rc.d/mysql-server.sh
  #!/bin/sh
  #
  # $FreeBSD: ports/databases/mysql50-server/files/mysql-server.sh.in,v
  1.32006/03/07 16:25:00 ale Exp $
  #
 
  # PROVIDE: mysql
  # REQUIRE: LOGIN
  # KEYWORD: shutdown
 
  #
  # Add the following line to /etc/rc.conf to enable mysql:
  # mysql_enable (bool):  Set to NO by default.
  #   Set it to YES to enable MySQL.
  # mysql_limits (bool):  Set to NO by default.
  #   Set it to yes to run `limits -e -U mysql`
  #   just before mysql starts.
  # mysql_dbdir (str):Default to /var/db/mysql
  #   Base database directory.
  # mysql_args (str): Custom additional arguments to be passed
  #   to mysqld_safe (default empty).
  #
 
  . /etc/rc.subr
 
  name=mysql
  rcvar=`set_rcvar`
 
  load_rc_config $name
 
  : ${mysql_enable=NO}
  : ${mysql_limits=NO}
  : ${mysql_dbdir=/var/db/mysql}
  : ${mysql_args=}
 
  mysql_user=mysql
  mysql_limits_args=-e -U ${mysql_user}
  pidfile=${mysql_dbdir}/`/bin/hostname`.pid
  command=/usr/local/bin/mysqld_safe
  command_args=--defaults-extra-file=${mysql_dbdir}/my.cnf
  --user=${mysql_user} --datadir=${mysql_dbdir} --pid-file=${pidfile}
  ${mysql_args}  /dev/null 
  procname=/usr/local/libexec/mysqld
  start_precmd=${name}_prestart
  mysql_install_db=/usr/local/bin/mysql_install_db
  mysql_install_db_args=--ldata=${mysql_dbdir}
 
  mysql_create_auth_tables()
  {
  eval $mysql_install_db $mysql_install_db_args /dev/null
  [ $? -eq 0 ]  chown -R ${mysql_user}:${mysql_user}
 
  ${mysql_dbdir}
 
  }
 
  mysql_prestart()
  {
  if [ ! -d ${mysql_dbdir}/mysql/. ]; then
  mysql_create_auth_tables || return 1
  fi
  if checkyesno mysql_limits; then
  eval `/usr/bin/limits ${mysql_limits_args}`
 2/dev/null
  else
  return 0
  fi
  }
 
  run_rc_command $1
 
  In rc.conf we have:
  mysql_enable=YES
  mysql_args=--myisam-recover=BACKUP,FORCE
 
  Here is the ps of it running:
  # ps -axwww |grep mysql
  62025  p0  R+ 0:00.00 grep mysql
  78519  p0- I  0:00.01 /bin/sh /usr/local/bin/mysqld_safe
  --defaults-extra-file=/var/db/mysql/my.cnf --user=mysql
  --datadir=/var/db/mysql --pid-file=/var/db/mysql/hostname.pid
  --myisam-recover=BACKUP,FORCE
  78548  p0- S180:27.77 /usr/local/libexec/mysqld
  --defaults-extra-file=/var/db/mysql/my.cnf --basedir=/usr/local
  --datadir=/var/db/mysql --user=mysql
 
  --pid-file=/var/db/mysql/hostname.pid
 
  --port=3306 --socket=/tmp/mysql.sock --myisam-recover=BACKUP,FORCE
 
  We could try that, but why would it behave differently to the current
 sh
  script?
 
  Thanks
  Ian
 
  On Thu, Feb 28, 2008 at 7:43 PM, Vidal Garza [EMAIL PROTECTED]
 wrote:
 
 
 
  how do you start up?
  you can start up from scrip.
 
  #!/bin/sh
  id=02
  ip=192.168.0.42
 
  sockfile=/tmp/mysql$id.sock
  user=mysql
  datdir=/var/db/mysql$id
  port=3306
  /bin/sh /usr/local/bin/mysqld_safe --user=$user --datadir=$datdir
  --bind-address=$ip --port=$port --sock=$sockfile 
 
 
 
  Ian escribió:
 
 
  Hi,
 
  I am running mysql 5.0.45 on freebsd 4.11 and for some strange
 reason
  /tmp/mysql.sock keeps on disappearing and we are forced to kill -9
 
  mysql
 
  and
 
 
  restart it causing db corruptions as there is no other way of
 telling
 
  it
 
  to
 
 
  stop once that file has gone. I have tried to find any reason why
 this
  happens and there are no errors, no core files, nothing - the file
 
  just
 
  disappears.
 
  Here is the error when trying to login:
 
  # mysql -p cache
  Enter password:
 
  ERROR 2002 (HY000): Can't connect to local MySQL server through
 socket
  '/tmp/mysql.sock' (61)
 
  Here is my my.cnf file:
  [client]
  port= 3306
  socket  = /tmp/mysql.sock
 
  [mysqld]
  port= 3306
  socket  = /tmp/mysql.sock
  skip-locking
  key_buffer = 16M
  max_allowed_packet = 1M

Re: /tmp/mysql.sock disappears

2008-02-29 Thread Ian
Hi,

I wouldn't have thought so, but whenever that file is missing and I try the
stop/restart, it just sits at the "waiting for pids" part of the stop loop
(where it lists the pids), and it never stops the server - I have left it
for over an hour and it never stops - it just keeps on in the stop loop.

Cheers
Ian

On Fri, Feb 29, 2008 at 12:43 PM, Sebastian Mendel [EMAIL PROTECTED]
wrote:

 Ian schrieb:
  Hi,
 
  I am running mysql 5.0.45 on freebsd 4.11 and for some strange reason
  /tmp/mysql.sock keeps on disappearing and we are forced to kill -9 mysql
 and
  restart it causing db corruptions as there is no other way of telling it
 to
  stop once that file has gone. I have tried to find any reason why this
  happens and there are no errors, no core files, nothing - the file just
  disappears.

 why do you need to kill if the socket is missing?

 does $/etc/init.d/mysqld stop|restart not work without a socket?

 --
 Sebastian





Re: /tmp/mysql.sock disappears

2008-02-29 Thread Ian
Hi,

I don't know about network access, but I know that using php we can't access
anything in MySQL once the file is gone. When it happens again, I'll be sure
to check the network.

Cheers
Ian

On Fri, Feb 29, 2008 at 12:54 PM, Sebastian Mendel [EMAIL PROTECTED]
wrote:

 Ian schrieb:
  Hi,
 
  I wouldnt have thought so but whenever that file is missing and try the
  stop/restart it just sits at the waiting for pids part of the stop loop
  (where it lists the pids), and it never stops the server - i have left
 it
  for over an hour and it never stops - just keeps on in the stop loop.

 does MySQL still respond on network connections when the socket is 'gone'?

 --
 Sebastian





/tmp/mysql.sock disappears

2008-02-28 Thread Ian
Hi,

I am running mysql 5.0.45 on freebsd 4.11, and for some strange reason
/tmp/mysql.sock keeps disappearing, and we are forced to kill -9 mysql and
restart it, causing db corruption, as there is no other way of telling it to
stop once that file has gone. I have tried to find any reason why this
happens and there are no errors, no core files, nothing - the file just
disappears.

Here is the error when trying to login:

# mysql -p cache
Enter password:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket
'/tmp/mysql.sock' (61)

Here is my my.cnf file:
[client]
port= 3306
socket  = /tmp/mysql.sock

[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer = 16M
max_allowed_packet = 1M
table_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
#log-bin

server-id   = 1

# Uncomment the following if you are using BDB tables
#bdb_cache_size = 4M
#bdb_max_lock = 1

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/db/mysql/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /var/db/mysql/
#innodb_log_arch_dir = /var/db/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 16M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 5M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

The machine in question is a dual xeon with 4gig of ram.

Any ideas?

Thanks in advance,
Ian


Re: /tmp/mysql.sock disappears

2008-02-28 Thread Ian
Hi,

We use the following sh script to start (it's the default one when installed)

cat /usr/local/etc/rc.d/mysql-server.sh
#!/bin/sh
#
# $FreeBSD: ports/databases/mysql50-server/files/mysql-server.sh.in,v 1.3 2006/03/07 16:25:00 ale Exp $
#

# PROVIDE: mysql
# REQUIRE: LOGIN
# KEYWORD: shutdown

#
# Add the following line to /etc/rc.conf to enable mysql:
# mysql_enable (bool):  Set to NO by default.
#   Set it to YES to enable MySQL.
# mysql_limits (bool):  Set to NO by default.
#   Set it to yes to run `limits -e -U mysql`
#   just before mysql starts.
# mysql_dbdir (str):Default to /var/db/mysql
#   Base database directory.
# mysql_args (str): Custom additional arguments to be passed
#   to mysqld_safe (default empty).
#

. /etc/rc.subr

name=mysql
rcvar=`set_rcvar`

load_rc_config $name

: ${mysql_enable=NO}
: ${mysql_limits=NO}
: ${mysql_dbdir=/var/db/mysql}
: ${mysql_args=}

mysql_user=mysql
mysql_limits_args=-e -U ${mysql_user}
pidfile=${mysql_dbdir}/`/bin/hostname`.pid
command=/usr/local/bin/mysqld_safe
command_args=--defaults-extra-file=${mysql_dbdir}/my.cnf
--user=${mysql_user} --datadir=${mysql_dbdir} --pid-file=${pidfile}
${mysql_args} > /dev/null &
procname=/usr/local/libexec/mysqld
start_precmd=${name}_prestart
mysql_install_db=/usr/local/bin/mysql_install_db
mysql_install_db_args=--ldata=${mysql_dbdir}

mysql_create_auth_tables()
{
eval $mysql_install_db $mysql_install_db_args >/dev/null
[ $? -eq 0 ]  chown -R ${mysql_user}:${mysql_user} ${mysql_dbdir}
}

mysql_prestart()
{
if [ ! -d ${mysql_dbdir}/mysql/. ]; then
mysql_create_auth_tables || return 1
fi
if checkyesno mysql_limits; then
eval `/usr/bin/limits ${mysql_limits_args}` 2>/dev/null
else
return 0
fi
}

run_rc_command $1

In rc.conf we have:
mysql_enable=YES
mysql_args=--myisam-recover=BACKUP,FORCE

Here is the ps of it running:
# ps -axwww |grep mysql
62025  p0  R+ 0:00.00 grep mysql
78519  p0- I  0:00.01 /bin/sh /usr/local/bin/mysqld_safe
--defaults-extra-file=/var/db/mysql/my.cnf --user=mysql
--datadir=/var/db/mysql --pid-file=/var/db/mysql/hostname.pid
--myisam-recover=BACKUP,FORCE
78548  p0- S180:27.77 /usr/local/libexec/mysqld
--defaults-extra-file=/var/db/mysql/my.cnf --basedir=/usr/local
--datadir=/var/db/mysql --user=mysql --pid-file=/var/db/mysql/hostname.pid
--port=3306 --socket=/tmp/mysql.sock --myisam-recover=BACKUP,FORCE

We could try that, but why would it behave differently to the current sh
script?

Thanks
Ian

On Thu, Feb 28, 2008 at 7:43 PM, Vidal Garza [EMAIL PROTECTED] wrote:

 how do you start up?
 you can start it up from a script:

 #!/bin/sh
 id=02
 ip=192.168.0.42

 sockfile=/tmp/mysql$id.sock
 user=mysql
 datdir=/var/db/mysql$id
 port=3306
 /bin/sh /usr/local/bin/mysqld_safe --user=$user --datadir=$datdir
 --bind-address=$ip --port=$port --sock=$sockfile &



 Ian escribió:
  Hi,
 
  I am running mysql 5.0.45 on freebsd 4.11 and for some strange reason
  /tmp/mysql.sock keeps on disappearing and we are forced to kill -9 mysql
 and
  restart it causing db corruptions as there is no other way of telling it
 to
  stop once that file has gone. I have tried to find any reason why this
  happens and there are no errors, no core files, nothing - the file just
  disappears.
 
  Here is the error when trying to login:
 
  # mysql -p cache
  Enter password:
 
  ERROR 2002 (HY000): Can't connect to local MySQL server through socket
  '/tmp/mysql.sock' (61)
 
  Here is my my.cnf file:
  [client]
  port= 3306
  socket  = /tmp/mysql.sock
 
  [mysqld]
  port= 3306
  socket  = /tmp/mysql.sock
  skip-locking
  key_buffer = 16M
  max_allowed_packet = 1M
  table_cache = 64
  sort_buffer_size = 512K
  net_buffer_length = 8K
  read_buffer_size = 256K
  read_rnd_buffer_size = 512K
  myisam_sort_buffer_size = 8M
  #log-bin
 
  server-id   = 1
 
  # Uncomment the following if you are using BDB tables
  #bdb_cache_size = 4M
  #bdb_max_lock = 1
 
  # Uncomment the following if you are using InnoDB tables
  #innodb_data_home_dir = /var/db/mysql/
  #innodb_data_file_path = ibdata1:10M:autoextend
  #innodb_log_group_home_dir = /var/db/mysql/
  #innodb_log_arch_dir = /var/db/mysql/
  # You can set .._buffer_pool_size up to 50 - 80 %
  # of RAM but beware of setting memory usage too high
  #innodb_buffer_pool_size = 16M
  #innodb_additional_mem_pool_size = 2M
  # Set .._log_file_size to 25 % of buffer pool size
  #innodb_log_file_size = 5M
  #innodb_log_buffer_size = 8M
  #innodb_flush_log_at_trx_commit = 1
  #innodb_lock_wait_timeout = 50
 
  [mysqldump]
  quick
  max_allowed_packet = 16M
 
  [mysql]
  no-auto-rehash
 
  [isamchk]
  key_buffer = 20M
  sort_buffer_size = 20M
  read_buffer = 2M
  write_buffer = 2M
 
  [myisamchk]
  key_buffer = 20M
  sort_buffer_size = 20M

Re: /tmp/mysql.sock disappears

2008-02-28 Thread Ian
Hi,

Okay, I have added that and will wait and see when it happens again if there
is anything in that log. Just out of interest, does that log show anything
different to the /var/db/mysql/hostnameofunit.err file?

Cheers
Ian

On Thu, Feb 28, 2008 at 8:44 PM, Vidal Garza [EMAIL PROTECTED] wrote:

 It's for testing.
 Put the log file in my.cnf and tell us what's going on:

 my.cnf
 ...
 log-error=/var/db/mysql/Server_Error.log


 Ian escribió:
  Hi,
 
  We use the following sh script to start (its the default one when
 installed)
 
  cat /usr/local/etc/rc.d/mysql-server.sh
  #!/bin/sh
  #
  # $FreeBSD: ports/databases/mysql50-server/files/mysql-server.sh.in,v
  1.32006/03/07 16:25:00 ale Exp $
  #
 
  # PROVIDE: mysql
  # REQUIRE: LOGIN
  # KEYWORD: shutdown
 
  #
  # Add the following line to /etc/rc.conf to enable mysql:
  # mysql_enable (bool):  Set to NO by default.
  #   Set it to YES to enable MySQL.
  # mysql_limits (bool):  Set to NO by default.
  #   Set it to yes to run `limits -e -U mysql`
  #   just before mysql starts.
  # mysql_dbdir (str):Default to /var/db/mysql
  #   Base database directory.
  # mysql_args (str): Custom additional arguments to be passed
  #   to mysqld_safe (default empty).
  #
 
  . /etc/rc.subr
 
  name=mysql
  rcvar=`set_rcvar`
 
  load_rc_config $name
 
  : ${mysql_enable=NO}
  : ${mysql_limits=NO}
  : ${mysql_dbdir=/var/db/mysql}
  : ${mysql_args=}
 
  mysql_user=mysql
  mysql_limits_args=-e -U ${mysql_user}
  pidfile=${mysql_dbdir}/`/bin/hostname`.pid
  command=/usr/local/bin/mysqld_safe
  command_args=--defaults-extra-file=${mysql_dbdir}/my.cnf
  --user=${mysql_user} --datadir=${mysql_dbdir} --pid-file=${pidfile}
  ${mysql_args}  /dev/null 
  procname=/usr/local/libexec/mysqld
  start_precmd=${name}_prestart
  mysql_install_db=/usr/local/bin/mysql_install_db
  mysql_install_db_args=--ldata=${mysql_dbdir}
 
  mysql_create_auth_tables()
  {
  eval $mysql_install_db $mysql_install_db_args /dev/null
  [ $? -eq 0 ]  chown -R ${mysql_user}:${mysql_user}
 ${mysql_dbdir}
  }
 
  mysql_prestart()
  {
  if [ ! -d ${mysql_dbdir}/mysql/. ]; then
  mysql_create_auth_tables || return 1
  fi
  if checkyesno mysql_limits; then
  eval `/usr/bin/limits ${mysql_limits_args}` 2/dev/null
  else
  return 0
  fi
  }
 
  run_rc_command $1
 
  In rc.conf we have:
  mysql_enable=YES
  mysql_args=--myisam-recover=BACKUP,FORCE
 
  Here is the ps of it running:
  # ps -axwww |grep mysql
  62025  p0  R+ 0:00.00 grep mysql
  78519  p0- I  0:00.01 /bin/sh /usr/local/bin/mysqld_safe
  --defaults-extra-file=/var/db/mysql/my.cnf --user=mysql
  --datadir=/var/db/mysql --pid-file=/var/db/mysql/hostname.pid
  --myisam-recover=BACKUP,FORCE
  78548  p0- S180:27.77 /usr/local/libexec/mysqld
  --defaults-extra-file=/var/db/mysql/my.cnf --basedir=/usr/local
  --datadir=/var/db/mysql --user=mysql
 --pid-file=/var/db/mysql/hostname.pid
  --port=3306 --socket=/tmp/mysql.sock --myisam-recover=BACKUP,FORCE
 
  We could try that, but why would it behave differently to the current sh
  script?
 
  Thanks
  Ian
 
  On Thu, Feb 28, 2008 at 7:43 PM, Vidal Garza [EMAIL PROTECTED] wrote:
 
 
  how do you start up?
  you can start up from scrip.
 
  #!/bin/sh
  id=02
  ip=192.168.0.42
 
  sockfile=/tmp/mysql$id.sock
  user=mysql
  datdir=/var/db/mysql$id
  port=3306
  /bin/sh /usr/local/bin/mysqld_safe --user=$user --datadir=$datdir
  --bind-address=$ip --port=$port --sock=$sockfile 
 
 
 
  Ian escribió:
 
  Hi,
 
  I am running mysql 5.0.45 on freebsd 4.11 and for some strange reason
  /tmp/mysql.sock keeps on disappearing and we are forced to kill -9
 mysql
 
  and
 
  restart it causing db corruptions as there is no other way of telling
 it
 
  to
 
  stop once that file has gone. I have tried to find any reason why this
  happens and there are no errors, no core files, nothing - the file
 just
  disappears.
 
  Here is the error when trying to login:
 
  # mysql -p cache
  Enter password:
 
  ERROR 2002 (HY000): Can't connect to local MySQL server through socket
  '/tmp/mysql.sock' (61)
 
  Here is my my.cnf file:
  [client]
  port= 3306
  socket  = /tmp/mysql.sock
 
  [mysqld]
  port= 3306
  socket  = /tmp/mysql.sock
  skip-locking
  key_buffer = 16M
  max_allowed_packet = 1M
  table_cache = 64
  sort_buffer_size = 512K
  net_buffer_length = 8K
  read_buffer_size = 256K
  read_rnd_buffer_size = 512K
  myisam_sort_buffer_size = 8M
  #log-bin
 
  server-id   = 1
 
  # Uncomment the following if you are using BDB tables
  #bdb_cache_size = 4M
  #bdb_max_lock = 1
 
  # Uncomment the following if you are using InnoDB tables
  #innodb_data_home_dir = /var/db/mysql/
  #innodb_data_file_path = ibdata1:10M:autoextend
  #innodb_log_group_home_dir = /var/db/mysql

Re: /tmp/mysql.sock disappears

2008-02-28 Thread Ian
Hi,

No, there is nothing in there cleaning /tmp. Mind you, on further
investigation there are still some temp files in there from last year - so
the directory isn't getting wiped clean.

Thanks :)

On Fri, Feb 29, 2008 at 3:36 AM, Steve Bernacki [EMAIL PROTECTED]
wrote:

 Does the system in question have some type of /tmp cleaner script that
 might be removing the socket file?   Check /etc/crontab and root's crontab
 (crontab -l)

 Steve





Slow query not using index

2007-11-14 Thread Ian M. Evans
Bad news: I have a slow query that doesn't appear to be using an index 
even if I force it.


Good news: the forehead-shaped dent in my desk is really progressing well.

Here's the query:

SELECT DISTINCT poster_data.*
FROM poster_data, poster_prodcat, poster_categories
WHERE poster_categories.apcatname LIKE '%Celebrity Name%'
AND poster_prodcat.apcatnum = poster_categories.apcatnum
AND poster_data.apnumber = poster_prodcat.apnumber
ORDER BY poster_data.aptitle

poster_data has 61,343 rows of unique data for each apnumber (poster)

poster_categories has 26,716 rows...a category id and a category name

poster_prodcat has 993,410 rows...a category id and the apnumber (poster)

My query is supposed to find all the posters that are from categories 
that contain a celebrity name.


Here's the slow entry:
# Query_time: 10  Lock_time: 0  Rows_sent: 30  Rows_examined: 1271071

So it's examining over a million rows to send 30.

When I do an explain:

Table: poster_data
Type: ALL
Possible Keys: posterid
Key: NULL
Key_len: NULL
Rows: 61479
Extra: Using temporary; Using filesort

Table: poster_prodcat
Type: ref
Possible Keys: prodcat_ind,apcatnum,apnumber
Key: apnumber
Key_Len: 3
ref: poster_data.apnumber
Rows: 12
Extra: Distinct

Table: poster_categories
Type: eq_ref
Possible Keys: PRIMARY
Key: PRIMARY
Key_len: 4
Ref: poster_prodcat.apcatnum
Rows: 1
Extra: Using where; Distinct

Even if I force the use of posterid in the poster_data table, it still 
doesn't use it.


Essentially the query's asking to find category numbers for categories 
that contain the name, use those numbers to find the poster id's that 
match those category numbers and then return the poster info.


Thanks for any insight.

--
Ian Evans





Re: Slow query not using index

2007-11-14 Thread Ian M. Evans
One of the list readers (thanks Brent!) suggested using a full-text 
index on the category names field. Queries dropped from 10-49 seconds 
down to 0.0085 seconds.
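For reference, a sketch of what that fix likely amounted to, using the table and column names from the original query (the exact index definition is an assumption, and MATCH ... AGAINST word-matching semantics differ slightly from a leading-wildcard LIKE):

```sql
-- Add a full-text index on the category name column (MyISAM-era FULLTEXT)
ALTER TABLE poster_categories ADD FULLTEXT (apcatname);

-- Replace the leading-wildcard LIKE, which forces a scan of every row,
-- with MATCH ... AGAINST, which can use the full-text index
SELECT DISTINCT poster_data.*
FROM poster_data, poster_prodcat, poster_categories
WHERE MATCH (poster_categories.apcatname) AGAINST ('Celebrity Name')
  AND poster_prodcat.apcatnum = poster_categories.apcatnum
  AND poster_data.apnumber = poster_prodcat.apnumber
ORDER BY poster_data.aptitle;
```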


Thanks for the emails folks!




Re: Storing Apache logs in MySQL.

2007-11-09 Thread Ian
On 9 Nov 2007 at 10:44, Shaun T. Erickson wrote:

 I pull Apache (combined format) logs, daily, from a number of mirrors,
 back to a central server for processing. I'd like to somehow load them
 in MySQL for analysis.
 
 1) Does anyone know of a script that can parse the logs and load them?
 (I presume I can get the DB schema from it.)
 
 2) Can MySQL handle a little more than one million records a day
 (currently)? If so, what kind of disk space needs will it have?
 
 3) Am I out of my mind? Anything I'm not considering, that I should?

Hi,

An easier method may be to install mod_log_mysql on each of the Apache
servers and log directly to the central MySQL installation.
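Alternatively, for question 1, combined-format lines can be bulk-loaded without a parsing script, since LOAD DATA can treat the quoted fields as optionally enclosed. A minimal sketch; the schema and file path are illustrative, not from the thread:

```sql
-- One column per space-separated field of the combined log format.
-- The bracketed timestamp contains a space, so it lands in two columns.
CREATE TABLE access_log (
    host       VARCHAR(255),
    ident      VARCHAR(64),
    remote_user VARCHAR(64),
    time_part1 VARCHAR(32),   -- '[day/month/year:hh:mm:ss'
    time_part2 VARCHAR(16),   -- 'zone]'
    request    VARCHAR(1024),
    status     SMALLINT,
    bytes      INT,
    referer    VARCHAR(1024),
    user_agent VARCHAR(1024)
);

LOAD DATA LOCAL INFILE '/tmp/access.log'
INTO TABLE access_log
FIELDS TERMINATED BY ' ' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```

At roughly a million rows a day, partitioning by day (one table per day, or periodic archiving) keeps individual tables manageable.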

Regards

Ian
-- 





Re: How to encrypt Text and still be able to use full text search? 3rd Attempt ++

2007-10-26 Thread Ian
On 26 Oct 2007 at 9:17, mos wrote:

 I posted this message twice in the past 3 days, and it never gets on the 
 mailing list. Why?
 Here it is again:
 
 I have a Text field that contains paragraph text and for security reasons I 
 need to have it encrypted. If I do this, how can I still implement full 
 text search on it?
 Also, I have a lot of Float columns that need to be protected but the user 
 has to use comparison operators like  and  on them. Any recommendations?

Hi,

This is quite a difficult one, and as usual in the field of security it
depends on how valuable the data is and how difficult you want it to be for
an attacker to obtain it.

If you let us know what type of data this is and how well it has to be
protected, maybe we can help more. "Security reasons" is a bit vague, but I
can understand that you don't want to give too much away.

I can say one thing, though: in order for the data to be indexed by MySQL, it
has to be in an unencrypted form somewhere in the database. There is no way I
know of to get around this, but I hope someone can correct me :)

Regards

Ian
-- 





Need ideas on handling aliases and a.k.a.s

2007-10-19 Thread Ian M. Evans
I'm trying to wrap my head around dealing with people in a table who 
have multiple names or a.k.a.s.


I run an entertainment news site and have to deal with people like 
Pamela Anderson (who was once Pamela Lee, Pamela Anderson Lee, Pamela 
Denise Anderson), Eva Longoria (who's now Eva Longoria Parker) and 
Courteney Cox, who's Courteney Cox Arquette.


I haven't really dealt with this yet, but I guess now I better handle it 
before I get stung too badly.


Right now I have a people table that has:

PeopleID
First
Middle
Last
Display
URL

So as an example you'd have:

PeopleID: 1078
First: Eva
Middle:
Last: Longoria
Display: Eva Longoria
URL: evalongoria

It's worked well for me. I have a peopleinphotos table...add Eva to a 
photo caption and it's just a matter of grabbing her id number (1078) 
and putting it in the table with the photoid #.


She gets nominated, the input form looks up her id# and adds it to the 
nomination table.


I've been lucky in that most entertainers keep their public and personal 
names separate. But suddenly Eva wants her credits to read Eva Longoria 
Parker. Sure I can add Parker to the Last field and remember to always 
use Longoria Parker when I input new info, but what happens if she gets 
divorced?


Just wondering how some of you have handled akas/aliases/divorces for 
things like customer databases. How do you ensure that a name change 
doesn't actually cause a brand new record for the person if the data 
entry person uses the old name, etc.


Thanks for any advice.
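One common approach to this kind of problem (a sketch, not from the thread; the table and column names are hypothetical) is to keep the person row as the stable identity and move display names into a history table, so photo captions and nominations keep referencing PeopleID while names change over time:

```sql
-- People keeps only the stable identity; everything else links to PeopleID
CREATE TABLE people (
    PeopleID INT PRIMARY KEY,
    URL      VARCHAR(64)
);

-- Every name the person has ever used; exactly one row flagged current
CREATE TABLE people_names (
    NameID     INT PRIMARY KEY,
    PeopleID   INT NOT NULL,        -- references people.PeopleID
    Display    VARCHAR(128),
    is_current TINYINT DEFAULT 0
);

-- Data-entry lookups match ANY known name back to the same PeopleID,
-- so an old name can't silently create a duplicate person record
SELECT p.PeopleID
FROM people p
JOIN people_names n ON n.PeopleID = p.PeopleID
WHERE n.Display = 'Eva Longoria';
```

A divorce or name change then becomes an UPDATE flipping `is_current`, not a new person record.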




seeding a slave from a slave

2007-07-25 Thread Ian P. Christian

Hi All,
How do I create a mysql data dump from a slave to seed another slave?
Using --master-data with mysqldump from my existing slave sets the 
master to the slave I was dumping, not the real master.


Many Thanks,

Ian





LOW_PRIORITY_UPDATES

2007-07-07 Thread Ian Collins

Hi,
I am seeing a large lock contention on a production database (A large 
table_locks_waited against table_locks_immediate).
I want to set low priority updates, but I don't want to shut the 
database down to do it.


Is,
set global LOW_PRIORITY_UPDATES=1;

the same as starting mysql with --low_priority_updates?
I think it is but want confirmation.

Cheers,
Ian.
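Both statements below are standard MySQL; one way to sanity-check the change without a restart is to set the global and then confirm it took effect (whether the dynamic variable behaves identically to the startup option for sessions opened before the change is worth verifying separately):

```sql
-- Enable low-priority updates server-wide without restarting
SET GLOBAL low_priority_updates = 1;

-- Confirm the global value took effect
SHOW GLOBAL VARIABLES LIKE 'low_priority_updates';
```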







Query cache question when using HANDLER

2007-06-27 Thread Ian Collins

Hi,
I have an application that is using the MySQL HANDLER heavily. The 
application was ported from a legacy ISAM database to use MySQL.

The upshot is that the application works by index walking - i.e.,

HANDLER tablename OPEN
HANDLER tablename OPEN as indexname
HANDLER indexname READ indexname =  (key1, key2, ...)
HANDLER indexname READ indexname NEXT LIMIT 1
HANDLER indexname READ indexname NEXT LIMIT 1
HANDLER indexname READ indexname NEXT LIMIT 1
...

(it works very well - although we are also migrating the applications to 
use selects and prepares - which are causing their own problems).


We run in 2 scenarios -
1. Machines with lots of databases and few users (ie. internal testing 
machines) and
2. Machines with only one database and many users (i.e. customer 
production machines).


My questions...
Are HANDLER queries cached in the query cache?
If so, is it worth using a query cache when issuing so many HANDLER .. 
NEXT calls?
Again if so, is it recommended to set a small query_cache_limit?




--

Regards,
Ian Collins
Systems Manager
KIWIPLAN Group
Tel: +64 (0)9 2727622
Mob: +64 (0)21 761144






Re: Type Mismatch

2007-06-19 Thread Ian
On 19 Jun 2007 at 12:42, Critters wrote:

 Hi,
 I hope someone can help me with my problem, something that has come up 
 when moving code and DB to a new server:
 
 Connection:
 driver={MySQL ODBC 3.51 
 DRIVER};server=localhost;uid=xx;pwd=xx;database=xx;option=16387
 
 SQL:
 SELECT (sum_score/sum_votes) AS 'score' FROM xx WHERE id = 
 xx
 
 Value of score:
 6.2153
 
 ASP:
<%=int(RS("score")*25)-20%>
 
 Error:
 Microsoft VBScript runtime (0x800A000D)
 Type mismatch
 
 Any help appreciated, I did not have this problem when I had the same 
 set-up but on a server running an earlier version of MySQL and the ODBC 
 driver.

Hi,

I had a similar problem when using select count(*) on Windows. The MyODBC
driver is not returning a number type, so the code fails because it is
expecting one.

To confirm this add this code:

<%=TypeName(RS("score").value)%>

above the line that fails, to show what type the MyODBC driver is returning.
When it failed for me I was getting 'unknown' instead of 'Long'.

If this is the case you may have to explicitly CAST it to a number type in
your SQL statement, or use one of the VBScript conversion functions CInt(),
CLng(), CDbl(), CSng(), etc.
Hope this helps

Ian
-- 








MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Ian P. Christian
I upgraded my slave server a few weeks ago, and the slave failed, with
an error similar to the one shown below.

I rolled back my upgrade, and it started working again, so I forgot
about it.

Today, I upgraded the master (again, from 5.0.32 to 5.0.42) - and the
slave failed again.

I thought upgrading the slave to match the master might help, but still
it failed.  Below is the error.

The hostname did *not* change.


070612 13:35:09 [Warning] No argument was provided to --log-bin, and
--log-bin-index was not used; so replication may break when this MySQL
server acts as a master and has his hostname changed!! Please use
'--log-bin=/var/run/mysqld/mysqld-bin' to avoid this problem.

070612 13:35:09  InnoDB: Started; log sequence number 40 824537593
070612 13:35:09 [Warning] Neither --relay-log nor --relay-log-index were
used; so replication may break when this MySQL server acts as a slave
and has his hostname changed!! Please use
'--relay-log=/var/run/mysqld/mysqld-relay-bin' to avoid this problem.
070612 13:35:09 [ERROR] Failed to open the relay log
'./xian-relay-bin.000962' (relay_log_pos 284157529)
070612 13:35:09 [ERROR] Could not find target log during relay log
initialization
070612 13:35:09 [ERROR] Failed to initialize the master info structure
070612 13:35:09 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.0.42-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306
 Gentoo Linux mysql-5.0.42

Any ideas/suggestions welcome, reseeding the slave will literally take days.

-- 
Ian P. Christian ~ http://pookey.co.uk





Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Ian P. Christian
Ian P. Christian wrote:
 I upgraded my slave server a few weeks ago, and the slave failed, with
 an error similar to the one shown below.


I have figured out what happened here now - and I'm part of the way
though fixing it.

It turned out the defaults had changed somewhere, and rather than using
/var/lib/mysql/hostname-bin, it was using /var/run/mysql/mysqld-bin
(and the same change for relay logs too).
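
One way to avoid surprises from packaged defaults is to pin the log
locations explicitly in my.cnf. A minimal sketch - the paths below are
examples, not the poster's actual configuration:

```ini
[mysqld]
# explicit base names so a packaging change can't silently move the logs
log-bin   = /var/lib/mysql/hostname-bin
relay-log = /var/lib/mysql/hostname-relay-bin
```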

Now... I've changed the slave to use its correct logs - however, if
I do the same on the master, I'll have the last 4 hours of logs in
/var/run/mysql/mysqld-bin ignored.

Somehow, I need to get the slave to catch up with the master's old logs
in /var/lib/mysql/hostname-bin, and then continue from the brand new
logs in /var/run/mysql/mysqld-bin

This is an awful mess, and I'm not sure it's recoverable - perhaps it is.

In theory, I should be able to find out where the slave was up to in the
old logs, extract them manually and replay them on the slave, and then
reset the slave to use the new logs - however I'm not sure how reliable
that's going to be - or even how to go about doing it yet.

Ideas anyone?

-- 
Ian P. Christian ~ http://pookey.co.uk





Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Ian P. Christian
Ofer Inbar wrote:
 Assuming your slave is not usable by client programs now anyway and
 you don't mind it being unusable for a while longer, you can restart
 the slaving from scratch:

This is exactly what I'm trying to avoid doing; it means 2 days of downtime
whilst the data is re-inserted.

I have actually managed to fix it now though.  I checked the old binary
log from the master, and it had no new data for the slave, so I simply
issued a 'CHANGE MASTER ...' on the slave to tell it to use the new
binary log file, with a position of 4 (the start), and off it went - back
to being in sync.

Why these defaults changed on a minor MySQL release update is beyond me;
however, I suspect this is Gentoo's fault, not MySQL's.
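
The fix described above can be sketched as the following statements on the
slave (the binlog file name is a placeholder; position 4 is the start of a
binlog file):

```sql
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysqld-bin.000001',  -- the master's new binlog
    MASTER_LOG_POS  = 4;                    -- 4 = first event in a binlog
START SLAVE;
```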

-- 
Ian P. Christian ~ http://pookey.co.uk





Re: replication

2007-05-14 Thread Ian P. Christian
richard wrote:
 as far as I can see, these commands select which db's to replicate on
 the slave that currently exist on the master server.
 What i am asking is, if I set a server up as a slave, can I have other
 databases on the slave that are not part of the replication system?

Yes - I've been doing this for some time, works fine.


-- 
Ian P. Christian ~ http://pookey.co.uk

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



a 'safe' way to move data?

2007-03-30 Thread Ian P. Christian

I'm trying to move data between 2 tables.

 INSERT INTO new_table SELECT * FROM old_table LIMIT 5;
 DELETE FROM old_table LIMIT 5;

This is the only process that deletes data from old_table, can I be 
*sure* that the limit in these 2 queries will address the same data set?


(if I don't limit to small numbers in the LIMIT, I/O gets too high, so 
I'm moving data slowly in batches)


Thanks,

--
Ian P. Christian ~ http://pookey.co.uk




Re: a 'safe' way to move data?

2007-03-30 Thread Ian P. Christian

Brent Baisley wrote:
No, you can't assure the same data will be addressed without at least 
including an order by. Even then you would need to make sure that the 
first X records in the order would not change. For instance, if you 
order by entered_date DESC, then the data set would change because any 
new records would get included in the LIMIT.


Will it not always use the natural order of the table in 
selects/deletes, and therefore return results in the order in which they 
were inserted?



--
Ian P. Christian ~ http://pookey.co.uk




Re: a 'safe' way to move data?

2007-03-30 Thread Ian P. Christian

Dan Nelson wrote:
 To be completely safe, you would want to use innodb tables, then select
 only the primary key of the 50,000 records you're interested in, using
 the FOR UPDATE keyword (to keep others from modifying those records
 while you're doing the move).  Then INSERT INTO newtable SELECT * FROM
 oldtable WHERE primarykey in ( your 50,000 keys ), then DELETE FROM
 oldtable WHERE primarykey in ( your 50,000 keys ), then COMMIT, which
 will cause your insertions and deletions to be truly atomic.

Ah of course - a far better idea.

Thanks :)
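
A rough sketch of the approach Dan describes, assuming an InnoDB table with
an integer primary key `pk` (names and the batch size are illustrative, not
from the original thread):

```sql
START TRANSACTION;
-- find the upper bound of this batch of 50,000 rows
SELECT MAX(pk) INTO @cutoff
  FROM (SELECT pk FROM old_table ORDER BY pk LIMIT 50000) AS batch;
-- lock the batch so nothing modifies it mid-move
SELECT pk FROM old_table WHERE pk <= @cutoff FOR UPDATE;
INSERT INTO new_table SELECT * FROM old_table WHERE pk <= @cutoff;
DELETE FROM old_table WHERE pk <= @cutoff;
COMMIT;
```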

--
Ian P. Christian ~ http://pookey.co.uk




Re: mysql services to monitor.

2007-03-28 Thread Ian van Marwijk

Hi!

Ananda Kumar said the following, On 28-Mar-07 06:15:

Hi All,
What are all the mysql services i need to monitor to make sure mysql db is
running and send a pager when any one of these services go down.


Checking if a database is still running is different from checking the 
actual process.
Do you want to check if MySQLd is running, or if a database is still 
accessible?
One monitoring system is Nagios; it's worth taking a look at (it 
can do both of the checks described above, but could be overkill if you only 
want to check a single service).

http://nagios.org/

Bye, Ian




HPUX out of memory errors - kernel tuning?

2007-03-27 Thread Ian Collins

Hi,
I have a HP rp3410 with 4Gb RAM running HPUX 11.11. This is running,

Version: '5.0.26-pro-log'  socket: '/tmp/mysql.sock'  port: 3306  MySQL 
Pro (Commercial)


When this machine is put under load (i.e, a lot of database activity), 
the clients are receiving Error 12 errors and the MySQL log has a lot 
of errors as follows,


070327  8:24:20 [ERROR] mysql_ha_read: Got error 12 when reading table 
'XLDEFN_IN'
070327  8:54:51 [ERROR] mysql_ha_read: Got error 12 when reading table 
'XLDEFN_IN'
070327  8:55:19 [ERROR] mysql_ha_read: Got error 12 when reading table 
'XLDEFN_IN'


In addition, I see at MySQL startup (in the mysql err log), a number of,

/usr/local/mysql/bin/mysqld: Out of memory (Needed 368389120 bytes)
/usr/local/mysql/bin/mysqld: Out of memory (Needed 276289536 bytes)


My /etc/my.cnf is set as,

[client]
port=3306
socket=/tmp/mysql.sock

[mysqld]
port=3306
socket=/tmp/mysql.sock
set-variable = key_buffer_size=400M
set-variable = max_allowed_packet=15M
default-table-type=InnoDB
datadir=/data/mysql

The machine is not indicating that it is under any RAM pressure, which makes 
me think it may be kernel tuning (of course, I may be wrong). Are there any 
recommendations for HPUX kernel tuning with MySQL?


--

Regards,
Ian Collins
Systems Manager
KIWIPLAN Group
Tel: +64 (0)9 2727622
Mob: +64 (0)21 761144






Re: High Avaliablity mysql db.

2007-03-16 Thread Ian van Marwijk
Hi!

I would start at:
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication.html

Bye,
Ian

Ananda Kumar said the following, On 16-Mar-07 11:04:
 Hi All,
 We are planning to develop a highly available mysql db.
 Can anybody please point me to any good documentation? Also, how stable are
 MySQL cluster and replication?
 
 regards
 anandkl
 




Re: MD5()

2007-03-10 Thread Ian P. Christian

Neil Tompkins wrote:
I'm looking to use MD5() to encrypt credit card numbers.  How do I 
unencrypt this when reading the value ?




You can't - MD5 is a one-way hash.
Google for MD5, or better still look at Wikipedia; I'm sure they will 
have something.
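
To see why, note that MD5() maps any input to a fixed-length digest, and
MySQL ships no inverse function:

```sql
SELECT MD5('some card number');
-- always returns the same 32-character hex string for the same input;
-- the original value cannot be recovered from the digest
```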


--
Ian P. Christian ~ http://pookey.co.uk




Re: MD5()

2007-03-10 Thread Ian P. Christian

Neil Tompkins wrote:

What do you recommend I use ?


http://dev.mysql.com/doc/refman/5.0/en/encryption-functions.html

That should help you.

However... keep in mind that if your application is likely to be 
accessing this data all the time anyway, then if someone compromises your 
database, chances are they will compromise your code too, leading to them 
being able to get your key, and then decrypt all your data anyway.


I'm not saying encrypting it in the database is pointless, it's just far 
from enough to say your data is secure.
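
For data that must be recoverable, the reversible AES functions documented
on the encryption-functions page linked above are the usual route; a sketch
with placeholder table and key names:

```sql
-- @key would normally live in the application, not the database
SET @key = 'placeholder-secret';
INSERT INTO cards (cc_number) VALUES (AES_ENCRYPT('4111111111111111', @key));
SELECT AES_DECRYPT(cc_number, @key) FROM cards;
```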


A quick google on the subject returned this:

http://forums.mysql.com/read.php?30,14020,14020

which honestly, I've not read - but you might want to :)


--
Ian P. Christian ~ http://pookey.co.uk




Re: MD5()

2007-03-10 Thread Ian P. Christian

Neil Tompkins wrote:
The problem I have is our mysql database version is 3.23 and we are not 
in a position to upgrade.


Because you are unlikely to be selecting on this data directly, you 
could use functions of whatever language you're using to connect to the 
database... for example if you're using PHP...


http://uk2.php.net/manual/en/ref.mcrypt.php

I hope to God, though, that your reason for not upgrading isn't that this 
is a shared host you're planning on storing credit card details on. Make 
sure you give the users a chance to opt out of you keeping that kind of 
data, and remember to never store the CVV number.


--
Ian P. Christian ~ http://pookey.co.uk




Re: mysqldump --single-transaction causes database to become unusable

2007-03-08 Thread Ian P. Christian
--
BUFFER POOL AND MEMORY
--
Total memory allocated 611637398; in additional pool allocated 3526400
Buffer pool size 32768
Free buffers 0
Database pages 32750
Modified db pages 188
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 315389, created 2063, written 474318
489.21 reads/s, 0.59 creates/s, 2.05 writes/s
Buffer pool hit rate 951 / 1000
--
ROW OPERATIONS
--
7 queries inside InnoDB, 1 queries in queue
2 read views open inside InnoDB
Main thread process no. 23228, id 2367634320, state: sleeping
Number of rows inserted 356882, updated 1963990, deleted 293832, read 
875872021

2.05 inserts/s, 10.32 updates/s, 0.21 deletes/s, 48500.03 reads/s


Thanks again,

--
Ian P. Christian ~ http://pookey.co.uk




Re: mysqldump --single-transaction causes database to become unusable

2007-03-08 Thread Ian P. Christian

Filip Krejci wrote:
 Hi,

 I suppose this is really I/O problem.


You're right, it looks like it was just an I/O problem - your suggestion
was spot on. I've now managed to dump my master data, and can get my
slave back online!

Thanks a lot for your suggestion,

--
Ian P. Christian ~ http://pookey.co.uk




mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Ian P. Christian
Recently my one and only slave went down, and stupidly I don't have a 
dump suitable for reseeding (is that's the right term...) the slave, so 
need to make a snapshot of the master database again. This time I'll 
make sure I keep this datafile for future restores should I need to - 
you live and learn.


So... I'm doing a database dump:

mysqldump --master-data --single-transaction database > dump.sql

This database I'm dumping has something like 17 million rows; all but one 
table (which uses FULLTEXT, and only has 3-4k rows) are InnoDB. There is 
only one table of any real size, and this table has all but about 100k 
of the total rows in it. My understanding of this command is that the 
database should not be locked whilst this command is running.


However, here's my problem...
When the dump starts to read from the large table, the database just grinds 
to a halt - my website running from the database just stops, and the 
dump (whose progress I was watching with a primitive `watch ls -la`) 
slows down a bit.


Last time I had to do this (for the first 'seeding' of my slave), I 
eventually gave up trying to dump from the database whilst the site 
remained live, and took the site down for 15 minutes whilst the dump 
ran.  As I'm sure you'll understand, I'm not too keen on taking the 
website down again.


Any suggestions as to why my database is stopping (could be I/O related 
maybe? it's on a good RAID setup though), and what I could do about it?


Many Thanks,

--
Ian P. Christian ~ http://pookey.co.uk




Re: mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Ian P. Christian

Marcus Bointon wrote:

Hi Marcus :)

 On 7 Mar 2007, at 08:44, Ian P. Christian wrote:

 --single-transaction doesn't _do_ the dump as a transaction, it simply
 wraps the dump in begin/commit statements so it's atomic when restoring.

 If the dump is to preserve relational integrity then it has to lock
 tables or disable access (or writes/deletes can happen during the dump).
 There are two alternatives: One is to use innoDB's commercial hotbackup
 utility (which I've not used, but it's apparently 'the way').

I was under the impression that with multi-versioning of InnoDB, that it
wouldn't need to do a write lock?

Sorry to quote this much from 'mysqldump --help'

--master-data[=#]
This causes the binary log position and filename to be
appended to the output. If equal to 1, will print it as a
CHANGE MASTER command; if equal to 2, that command will
be prefixed with a comment symbol. This option will turn
--lock-all-tables on, unless --single-transaction is
specified too (in which case a global read lock is only
taken a short time at the beginning of the dump - don't
forget to read about --single-transaction below). In all
cases any action on logs will happen at the exact moment
of the dump.Option automatically turns --lock-tables off.

--single-transaction
Creates a consistent snapshot by dumping all tables in a
single transaction. Works ONLY for tables stored in
storage engines which support multiversioning (currently
only InnoDB does); the dump is NOT guaranteed to be
consistent for other storage engines. Option
automatically turns off --lock-tables.


I'll accept that my interpretation of the above could be very wrong,
however...

 The other
 is one of the reasons for using a slave - stop the slave, do the dump,
 restart the slave and it will catch up anything it missed. It helps if
 you can dedicate a slave for this.

Yes...I'm aware of this one, but alas.. this was my only slave, and it
managed to become out of sync somehow (something to do with a bug when
using 'mysqladmin kill'). Now I know that things like this happen, I'll
take weekly snapshots of the slave data, but like I said - you live and
learn :)


Whilst I'm here and talking about slaves... is it possible to have a
slave to 2 different databases on 2 different hosts?

--
Ian P. Christian ~ http://pookey.co.uk




Re: mysqldump --single-transaction causes database to become unusable

2007-03-07 Thread Ian P. Christian

Filip Krejci wrote:

Hi,

you are right, option --single-transaction does not accquire any lock on 
your innodb tables. Backup is fully on-line due to mvcc.


You should look for another reason of this behavior.

1/ What says 'show full processlist' when backup is running


It shows mostly inserts/updates into tables - tables other than the huge 
16-17 million row one.  These tables are used for session storage, and as 
such, if these block (session handling is almost the first thing my site 
does), then no SELECT statements for site content are executed.



2/ What says 'show engine innodb status\G' when backup is running


I've actually not checked - I shall do that and report shortly - it's 
nearly midday for me, and that means lots of traffic on the site, not a 
good time to be playing :)



2/ Is your db server stopped or very slow only? (you will see in 1/)


I *think* it's slow to the point where it's unusable.  Until the backup 
gets round to the huge table, inserts/updates/selects are all going on 
just fine.

3/ If it's very slow
   a/ what is your mysqldump config in my.cnf


[mysqldump]
quick
max_allowed_packet  = 16M

that's all there is... I'm not sure what the 'quick' option does, I'll go 
and check the manual on that, perhaps that's the issue.  I'm guessing I 
might have more luck increasing the packet size?


   b/ what about remote backup (mysqldump --host x.x.x.x --master-data 
--single-transaction > dump.sql) which causes the writes to be separated 
onto another box?


I did do this under the assumption it could simply be an I/O problem - 
however the problem persists.  It might be because the network 
connection between the two hosts is pretty fast



Thanks Filip!

--
Ian P. Christian ~ http://pookey.co.uk




Repairing a table

2007-02-02 Thread Ian Barnes

Hi,

I'm having another problem with a large db. I have a 160GB drive dedicated
to the database partition and one database in particular is taking up the
space. My .MYD is 78GB and my .MYI is 34GB. That's fine, I have a couple of
GB left, but whenever I try to do anything on it, the disk fills up and I
have to run a REPAIR TABLE tablename QUICK to get it going, and that takes
forever since it's quite large.

Is there any other way that I could repair the indexes quickly without
having to wait up to 48 hours for the repair? Also, if I run out of
disk space, is there any way that I can get it working without truncating
the data?

Thanks a lot
Ian


Questions about delete and optimize

2007-02-01 Thread Ian Barnes

Hi,

We are looking at various methods to effectively and efficiently
delete lots of rows from a database and then optimize it. Our main concern
is disk space - the partition we are working with is only 12GB, and
our databases vary in size from 1GB (not a problem) to 11GB. In the example
below I will use one whose .MYD is 6.5GB and whose .MYI is 2.7GB. There are
around 28,900,000 rows in the database.

Once a month we run an automated program that deletes rows older than X
months and then we attempt to optimize the table in question. The delete
query we use is:
DELETE FROM table WHERE date < date(current_date - interval 2 month). Now my
questions surrounding this are:

1.) Is it quicker to do a query where we say something like: DELETE FROM
table WHERE date < '2006-11-01' instead of computing date(current_date)?
2.) Does the current way we do it use a tmp table that is written to disk?

Then we run the simple optimize command: OPTIMIZE TABLE tablename, and that
is normally where we run into the problem: mysql tries to create a tmp
file while optimizing, runs out of space, and then corrupts the main
table. We need to run the optimize because after deleting all those rows,
the space isn't freed up until we run the optimize. So my other question is:
can we do the optimize a different way, or is there some way that we can
insert and delete rows that would require less optimization?
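
On the first question: a comparison against a constant lets MySQL use an
index on the date column, and a LIMIT bounds each pass so a failure part-way
through costs less. A sketch (table and column names from the post; the
batch size is an example):

```sql
-- repeat until it reports 0 rows affected
DELETE FROM `table` WHERE `date` < '2006-11-01' LIMIT 10000;
```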

Thanks in advance,
Ian


Re: system : Install Mysql 5.x binaries with php4 on Debian

2006-12-14 Thread Ian
On 13 Dec 2006 at 18:59, Gilles MISSONNIER wrote:

 Hello,
 I use Debian Linux, and I could install (with apt-get install) fine
 the Mysql-4.1+apache+php4.
 
 I tried fine Mysql 5. on the same debian machine.
 
 Now I want to use mysql 5. through web/php4
 
 Then I run into problem, like :
 
 Warning: mysql_connect(): Can't connect to local MySQL server through 
 socket '/var/run/mysqld/mysqld.sock' (2) in 
 /var/www/test/t1.php on line 9
 
 Warning: mysql_select_db(): supplied argument is not a valid MySQL-Link 
 resource in /var/www/test/t1.php on line 10
 from script.php : connexion a la base refuse
 
 the tool my_print_defaults doesn't help.
 
 any hint?

Hi,

First of all, is the mysql server running?  If not then the mysql.sock 
socket file will not be created.

If it is, then it's probably in a different location to the one that php 
expects to find it. In which case you have several choices:

Change 'mysql.default_socket =' in php.ini to point at the real location 
of mysql.sock (check with the php.net site to make sure that this option 
has not changed in the version you have).

Change the location that mysql puts the socket file in my.cnf (see the 
manual on dev.mysql.com for the exact syntax).

Create a symbolic link from the mysql.sock to the place where php expects 
to find it:
ln -s /current/location/mysql.sock /new/location/mysql.sock

I consider the latter the best option as it won't break anything else 
that depends on the socket.

Regards

Ian
-- 







mysql5 onHPUX - no entry for terminal type

2006-10-16 Thread Ian Collins

Hi,
I have just upgraded an HPUX 11.00 server to MySQL 5.0.26-pro.
Since the upgrade, the readline seems to be broken. (It was OK on 4.1.18).

We see the problem when we run,

mysql -A -u user -p
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 34782 to server version: 5.0.26-pro

No entry for terminal type xterm;
using dumb terminal settings.
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql

It doesn't matter what I set my TERM to.
It works for everything else (e.g., vi, emacs, more)

Any ideas?

(It may sound trivial but we have automated test suites that break 
because of this).


Regards,
Ian Collins.





mysqldump quotes in MySQL 5.0.26

2006-10-11 Thread Ian Collins

Hi,
I have 2 identical Linux machines setup with identical my.cnf files 
(except for server-id) and both running 5.0.26.


On server A, I run,

mysqldump -h localhost -u root -p... --single-transaction --flush-logs 
--delete-master-logs --master-data=1 --databases db1 db2 > dumpfile


I copy dumpfile to server B
and extract with,

mysqladmin -u root -p... create db1
mysqladmin -u root -p... create db2

cat dumpfile | mysql -u root -p...

I get the error,

ERROR 1049 at line 25: Unknown database '`db1`'

Note the backticks around the database name.
Looking at dumpfile, I see,

use `db1`

I tried doing the mysqldump with --compatible=ansi which changes the 
backticks to double quotes, but get the same error,


ERROR 1049 at line 25: Unknown database 'db1'.

I can't edit dumpfile as it is 8Gb (ok, I can split it and stuff like 
that ... but come on!!)


What am I missing here?  Has anyone else seen this?

Regards,
Ian Collins.







Re: mysqldump quotes in MySQL 5.0.26

2006-10-11 Thread Ian Collins

Hi,
yes, the create database succeeds - it shows up in SHOW DATABASES. The error 
message is complaining about `db1` (i.e. the name including the backticks).
I'm actually using rsync to copy the file (faster than ftp for some 
reason). But it also failed when I originally ftp'd it.

I've also tried the mysql -u root -p < file.sql as well, with no difference.

I actually got the database in by doing a split -b 1000 dumpfile 
- as I can edit a 10Mb file, but not a 10Gb one - and then joining them 
back together - a bit of a hack, but it worked.


Strange.

Regards,
Ian.


Dan Buettner wrote:

Ian, those backticks are standard stuff for mysqldump.

A couple of thoughts -
1, are you sure your 'create database' succeeds?
2, long shot, but are you FTPing in ASCII mode?  This *might* mess up
the backtick character.
3, instead of cat file.sql | mysql, try this
mysql -u root -p < file.sql

HTH,
Dan


On 10/11/06, Ian Collins [EMAIL PROTECTED] wrote:

Hi,
I have 2 identical Linux machines setup with identical my.cnf files
(except for server-id) and both running 5.0.26.

On server A, I run,

mysqldump -h localhost -u root -p... --single-transaction --flush-logs
--delete-master-logs --master-data=1 --databases db1 db2 > dumpfile

I copy dumpfile to server B
and extract with,

mysqladmin -u root -p... create db1
mysqladmin -u root -p... create db2

cat dumpfile | mysql -u root -p...

I get the error,

ERROR 1049 at line 25: Unknown database '`db1`'

Note the backticks around the database name.
Looking at dumpfile, I see,

use `db1`

I tried doing the mysqldump with --compatible=ansi which changes the
backticks to double quotes, but get the same error,

ERROR 1049 at line 25: Unknown database 'db1'.

I can't edit dumpfile as it is 8Gb (ok, I can split it and stuff like
that ... but come on!!)

What am I missing here?  Has anyone else seen this?

Regards,
Ian Collins.







Doing sum's if certain conditions are true

2006-08-29 Thread Ian Barnes

Hi,

I am trying to build a query that does a sum if a certain parameter is set.
For example:

I have a table with four fields: code1, code2, duration and bytes. My current
query looks something like this: SELECT code1 as code, sum(bytes) as bin,
max(bytes) as min, avg(bytes) as ain, count(bytes) as cin, sum(duration) as
dur from data group by code; which returns something like this:

+------+--------------+-----------+-----------+----------+-------------+
| code | bin          | min       | ain       | cin      | dur         |
+------+--------------+-----------+-----------+----------+-------------+
| NONE | 103939170759 | 485089817 | 3739.1827 | 27797297 | 11681839027 |

Now, what I need to do is exclude certain info from the above NONE entry if
code2 is equal to something. So for example (in php terminology):

if(code == 'NONE') {
 if(code2 == 'DENIED') { continue; }
 else {
 bin += bytes;
 if(bytes > min) { min = bytes; }
 cin++;
 dur += duration;
 }
}

After that I could work out the average by dividing bin / cin for what in
the report is called ain. Is there any way of achieving this via the sql
query? The above is a hugely tedious way of doing it. I know mysql
has an if() statement, but I have no idea how to implement it for what I
want to achieve above.
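
MySQL's IF() can push that condition into each aggregate, since COUNT(),
AVG(), and MAX() skip NULLs; a sketch using the column names from the post:

```sql
SELECT code1 AS code,
       SUM(IF(code2 = 'DENIED', 0, bytes))      AS bin,
       MAX(IF(code2 = 'DENIED', NULL, bytes))   AS min,
       AVG(IF(code2 = 'DENIED', NULL, bytes))   AS ain,
       COUNT(IF(code2 = 'DENIED', NULL, bytes)) AS cin,
       SUM(IF(code2 = 'DENIED', 0, duration))   AS dur
FROM data
GROUP BY code1;
```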

Thanks in advance.
Ian


RE: Check out this Free software I found to document your IT infrastruct

2006-08-03 Thread Ian
On 3 Aug 2006 at 10:16, John Trammell wrote:

snip

 If you hated spam as much as I did, you would have terminated this user,
 based on the egregious and well-documented abuse of this list.

snip

 This message has been scanned for viruses by McAfee's Groupshield.

You say you hate spam then spam the list with an advert for McAfee!

Ian
-- 





Re: mysql.sock deleted

2006-07-20 Thread Ian
On 20 Jul 2006 at 16:56, Vitaliy Okulov wrote:

 Hello, Deniss.

   On 20 July 2006, 15:59:10:

  Hi All

  While I am doing portupgrade -vr mysql on a FreeBSD server,

  the mysql socket /tmp/mysql.sock goes missing, so mysql can't be started.

  How can I create the mysql socket?
  How can I start mysql?

  thanks

 MySQL server will create this socket after start.

Hi,

MySQL will create the socket when it starts, but it may not be in the /tmp 
folder.  I have found that some programs expect the socket to be in /tmp 
whilst others expect it to be in the mysql directory.

I find the best solution to this problem is to start mysql, then create a 
symbolic link in the /tmp directory to the socket:

ln -s /var/lib/mysql/mysql.sock /tmp/mysql.sock

This works on my systems, you may have to adjust to your own environment.

Regards

Ian
--





Re: Date Problems

2006-07-13 Thread Ian
On 13 Jul 2006 at 8:27, wizard007 wrote:

 I used $query_Recordset1 = "SELECT date_format(Date, "%d/%m/%Y") as Date,
 Course, Time, Horse, Odds1, Odds2, `Result` FROM Results";

Hi,

That's a PHP error because you have double quotes: "%d/%m/%Y" within double 
quotes: $query_Recordset1 = "SELECT ... FROM Results";

You need to escape the inner quotes in your statement.
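
One way to sketch the fix, using single quotes for the format string so no
escaping is needed (query text taken from the post):

```php
<?php
// single quotes inside the double-quoted string avoid the parse error
$query_Recordset1 = "SELECT date_format(Date, '%d/%m/%Y') as Date,
                     Course, Time, Horse, Odds1, Odds2, `Result`
                     FROM Results";
```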

Regards

Ian
-- 





Re-importing a mysqldump file

2006-06-25 Thread Ian Barnes
Hi,

 

I need to automatically re-import a mysqldump file, but when importing it I
need to set a certain field to a value for all imported rows.  For example
my db looks like this:

 

Id

Name

Value

Serverid

 

Now, on the remote server, name and value get exported, and when I re-import
it here, I need id to auto-increment and serverid to be set to something
that I specify depending on what file I'm importing.

 

Is this possible? Or would the best way be to import the dumped file into a
temp table and then select out of the temp table into my correct table?
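
A sketch of the temp-table route, with placeholder names - `target` stands
for the real table, and 42 for the serverid chosen at import time:

```sql
CREATE TEMPORARY TABLE import_tmp (name VARCHAR(255), value VARCHAR(255));
-- source the remote dump into import_tmp here, then:
INSERT INTO target (name, value, serverid)
SELECT name, value, 42 FROM import_tmp;   -- id fills in via auto_increment
```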

 

Thanks for any help!

Ian



RE: Re-importing a mysqldump file

2006-06-25 Thread Ian Barnes
Hi,

No unfortunately not...

Cheers
Ian

 -Original Message-
 From: John Meyer [mailto:[EMAIL PROTECTED]
 Sent: 25 June 2006 05:41 PM
 To: mysql@lists.mysql.com
 Subject: Re: Re-importing a mysqldump file
 
 Ian Barnes wrote:
  Is this possible? Or would the best way be to import the dumped file
 into a
  temp table and then select out of the temp table into my correct table ?
 
 
 
 Anyway to use a trigger?
 
 --
 Online library -- http://pueblonative.110mb.com
 126 books and counting.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]





RE: Importing 3Gb File

2006-06-11 Thread Ian Barnes
Hi,

This is all I could see just before it happened:

mysql> show processlist;
+----+------+-----------+---------+---------+------+----------------+------------------------------------+
| Id | User | Host      | db      | Command | Time | State          | Info                               |
+----+------+-----------+---------+---------+------+----------------+------------------------------------+
| 11 | root | localhost | testing | Query   | 0    | creating table | CREATE TABLE `upgrade_history` (
  `upgrade_id` int(10) NOT NULL auto_increment,
  `upgrade_vers |
| 12 | root | localhost |         | Query   | 0    |                | show processlist                   |
+----+------+-----------+---------+---------+------+----------------+------------------------------------+
2 rows in set (0.00 sec)


Then the following happened for 60 seconds, and then it timed
out:

mysql> show processlist;
+----+------+-----------+---------+---------+------+-------+------------------+
| Id | User | Host      | db      | Command | Time | State | Info             |
+----+------+-----------+---------+---------+------+-------+------------------+
| 11 | root | localhost | testing | Sleep   | 0    |       |                  |
| 12 | root | localhost |         | Query   | 0    |       | show processlist |
+----+------+-----------+---------+---------+------+-------+------------------+
2 rows in set (0.00 sec)


I have put more RAM in (I now have 768MB, as well as 3GB of swap).

Thanks for the suggestions!

Cheers
Ian


 -Original Message-
 From: mos [mailto:[EMAIL PROTECTED]
 Sent: 09 June 2006 07:15 PM
 To: mysql@lists.mysql.com
 Subject: Re: Importing 3Gb File
 
 At 10:20 AM 6/8/2006, you wrote:
 Hi,
 
 I am trying to import a 3.2Gb sql dump file back into my sql server
 (4.1.12)
 and im coming across the following error:
 
 mysql: Out of memory (Needed 178723240 bytes)
 mysql: Out of memory (Needed 178719144 bytes)
 
 That error comes up after about 30 minutes of importing, and I would
 guess about halfway through.
 
 What does SHOW PROCESSLIST say it's doing just before the crash? I've had
 problems with LOAD DATA on a very large table (500 million rows) because the
 machine did not have enough memory to build the index. The data was loaded
 just fine; it's building the index that hung it out to dry, because that
 eats up memory like crazy. How much memory do you have on your machine? The
 cheapest solution might be to go out and get a few more GB of RAM.
 
 Mike
 
 
 
 The file in question is a mysqldump
 --all-databases file from another server that I'm trying to import onto my
 desktop machine. I have tried to alter the my.cnf file a bit, and this is
 what it looks like:
 
 [client]
 #password   = your_password
 port= 3306
 socket  = /tmp/mysql.sock
 
 [mysqld]
 port= 3306
 socket  = /tmp/mysql.sock
 skip-locking
 key_buffer = 64M
 max_allowed_packet = 8M
 table_cache = 512
 sort_buffer_size = 8M
 net_buffer_length = 8M
 myisam_sort_buffer_size = 45M
 set-variable=max_connections=300
 
 # Replication Master Server (default)
 # binary logging is required for replication
 #log-bin
 
 # required unique id between 1 and 2^32 - 1
 # defaults to 1 if master-host is not set
 # but will not function as a master if omitted
 server-id   = 1
 
 #bdb_cache_size = 4M
 #bdb_max_lock = 1
 
 # Uncomment the following if you are using InnoDB tables
 #innodb_data_home_dir = /var/db/mysql/
 #innodb_data_file_path = ibdata1:10M:autoextend
 #innodb_log_group_home_dir = /var/db/mysql/
 #innodb_log_arch_dir = /var/db/mysql/
 # You can set .._buffer_pool_size up to 50 - 80 %
 # of RAM but beware of setting memory usage too high
 #innodb_buffer_pool_size = 16M
 #innodb_additional_mem_pool_size = 2M
 # Set .._log_file_size to 25 % of buffer pool size
 #innodb_log_file_size = 5M
 #innodb_log_buffer_size = 8M
 #innodb_flush_log_at_trx_commit = 1
 #innodb_lock_wait_timeout = 50
 
 [mysqldump]
 quick
 max_allowed_packet = 16M
 
 [mysql]
 no-auto-rehash
 # Remove the next comment character if you are not familiar with SQL
 #safe-updates
 
 [isamchk]
 key_buffer = 10M
 sort_buffer_size = 20M
 read_buffer = 2M
 write_buffer = 2M
 
 [myisamchk]
 key_buffer = 10M
 sort_buffer_size = 20M
 read_buffer = 2M
 write_buffer = 2M
 
 [mysqlhotcopy]
 interactive-timeout
 
 
 Does anyone have any advice as to what I could change to make it import
 without breaking halfway through? The command I'm running to import is:
 mysql -n -f -p < alldb.sql
 
 Thanks in advance,
 Ian
 
 
