Patching MySQL CLI to log information

2011-06-02 Thread Andrew Braithwaite
Has anyone ever patched the MySQL client or libmysql to log information
like the UNIX user, time, server connected to, port, etc. to a logfile?

I'm just trying to save myself a bit of C patching.
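One patch-free alternative (a sketch, not from the thread; the log path and
format below are my assumptions) is to put a small wrapper script ahead of the
real client in $PATH, recording the UNIX user, timestamp and connection
arguments before handing off:

```shell
#!/bin/sh
# Hypothetical wrapper for the mysql client. Log location and format
# are assumptions, not anything MySQL itself provides.
LOG="${MYSQL_CLI_LOG:-/tmp/mysql-cli.log}"
printf '%s user=%s args=%s\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$(id -un)" "$*" >> "$LOG"
# Hand off to the real client (adjust the path for your system):
# exec /usr/bin/mysql "$@"
```

This only captures the -h/-P/-u arguments as typed; anything the client
resolves internally is invisible to it, which is where a libmysql patch would
still win.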

Cheers,

A

-
LOVEFiLM UK Limited is a company registered in England and Wales. 
Registered Number: 06528297. 
Registered Office: No.9, 6 Portal Way, London W3 6RU, United Kingdom.

This e-mail is confidential to the ordinary user of the e-mail address to which 
it was addressed. If you have received it in error, 
please delete it from your system and notify the sender immediately.

This email message has been delivered safely and archived online by Mimecast.
For more information please visit http://www.mimecast.co.uk 
-

Strange date behaviour

2011-03-31 Thread Andrew Braithwaite
Hi,

mysql> select curdate() + interval 6 month - interval 6 month;
+--------------------------------------------------+
| curdate() + interval 6 month - interval 6 month  |
+--------------------------------------------------+
| 2011-03-30                                       |
+--------------------------------------------------+
1 row in set (0.00 sec)

Any ideas why this is wrong?

Andrew
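It isn't a bug so much as month arithmetic not being invertible: run on
2011-03-31, CURDATE() + INTERVAL 6 MONTH would be 2011-09-31, which doesn't
exist, so MySQL clamps it to 2011-09-30; subtracting 6 months from that lands
on 2011-03-30. A small Python sketch of the same clamping rule:

```python
from datetime import date
import calendar

def add_months(d, months):
    # Mimic MySQL's DATE + INTERVAL n MONTH: step the month, then clamp
    # the day to the last day of the target month if it would overflow.
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

start = date(2011, 3, 31)
forward = add_months(start, 6)    # 2011-09-31 doesn't exist, clamped to 2011-09-30
back = add_months(forward, -6)    # 2011-03-30: the round trip loses a day
print(start, forward, back)
```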


RE: Index analyser

2010-02-24 Thread Andrew Braithwaite
There's also the Query Analyser
http://www.mysql.com/products/enterprise/query.html which is part of
MySQL Enterprise - I've never used it and it is very expensive but I
believe it will advise on optimal indices.

Cheers,

Andrew

-Original Message-
From: Cantwell, Bryan [mailto:bcantw...@firescope.com] 
Sent: 23 February 2010 23:09
To: mysql@lists.mysql.com
Subject: RE: Index analyser


Ya, that one is helpful... just trying to land on a solution like I've
seen in other DB's that have index-advisor that listens and creates what
it thinks is the perfect indexes ... but thx...


From: mos [mo...@fastmail.fm]
Sent: Tuesday, February 23, 2010 4:33 PM
To: mysql@lists.mysql.com
Subject: Re: Index analyser

At 03:28 PM 2/23/2010, you wrote:
Is there still no such thing anywhere for MySQL as an index analyser?
Many others have such a thing that will sit and monitor db activity over
a period of time and suggest the exact indexes on each table, based on
what it has seen, to improve performance.
Anyone got that for MySQL?

You can look at www.maatkit.org/doc/mk-query-digest.html


DESCRIPTION

This tool was formerly known as mk-log-parser.

mk-query-digest is a framework for doing things with events from a query
source such as the slow query log or PROCESSLIST. By default it acts as a
very sophisticated log analysis tool. You can group and sort queries in
many different ways simultaneously and find the most expensive queries, or
create a timeline of queries in the log, for example. It can also do a
query review, which means to save a sample of each type of query into a
MySQL table so you can easily see whether you've reviewed and analyzed a
query before. The benefit of this is that you can keep track of changes to
your server's queries and avoid repeated work. You can also save other
information with the queries, such as comments, issue numbers in your
ticketing system, and so on.


Mike


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/mysql?unsub=bcantw...@firescope.com


RE: Index analyser

2010-02-24 Thread Andrew Braithwaite
You can use this to get rid of unused indices too.

http://www.mysqlperformanceblog.com/2009/01/15/dropping-unused-indexes/

Requires the percona extensions to be loaded.
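For reference, the approach in that post boils down to an anti-join between
all defined indexes and the ones the server has actually read from. Roughly
(a sketch from memory, assuming Percona's userstat patch and its
INFORMATION_SCHEMA.INDEX_STATISTICS table; it may not match the blog's exact
query):

```sql
-- Indexes that exist but never appear in INDEX_STATISTICS have not been
-- used for reads since the statistics counters were last reset.
SELECT s.TABLE_SCHEMA, s.TABLE_NAME, s.INDEX_NAME
FROM INFORMATION_SCHEMA.STATISTICS s
LEFT JOIN INFORMATION_SCHEMA.INDEX_STATISTICS i
  ON  i.TABLE_SCHEMA = s.TABLE_SCHEMA
  AND i.TABLE_NAME   = s.TABLE_NAME
  AND i.INDEX_NAME   = s.INDEX_NAME
WHERE i.INDEX_NAME IS NULL
  AND s.INDEX_NAME <> 'PRIMARY';
```

Let the counters accumulate over a representative period (including any
monthly or batch jobs) before trusting the result.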

Cheers,

Andrew





RE: Optimizing my.cnf

2009-10-06 Thread Andrew Braithwaite
If it's a dedicated MySQL server I would increase the key buffer to at
least half the available main memory and leave the rest for filesystem
cache.  You'll probably get the biggest performance increase this way.
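As a rough sketch only (the figures below are invented for a hypothetical
dedicated 16GB MyISAM box, not taken from the thread):

```ini
# my.cnf fragment (illustrative numbers, tune to your workload)
[mysqld]
key_buffer_size = 8G   # roughly half of main memory for the MyISAM key cache
# The remainder is deliberately left unconfigured: MyISAM caches only index
# blocks itself and relies on the OS filesystem cache for data blocks.
```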

Cheers,

A

-Original Message-
From: sangprabv [mailto:sangpr...@gmail.com] 
Sent: 06 October 2009 04:57
To: Rob Wultsch
Cc: mysql@lists.mysql.com
Subject: Re: Optimizing my.cnf

As you see on my my.cnf I skip innodb and federated. So I just use
myisam in this case. TIA.



Willy


On Mon, 2009-10-05 at 20:47 -0700, Rob Wultsch wrote:
 
 On Mon, Oct 5, 2009 at 6:12 PM, sangprabv sangpr...@gmail.com wrote:
 I have Dell PE2950iii with 16GB of RAM, and 1 Quadcore
 processor @2.00G.
 Installed with MySQL 5.075 on 64bit Ubuntu Jaunty. I have
 these
 parameters in my.cnf:
 
 blah blah blah...
 
 
 This heavily depends on workload. Are you using innodb? etc...
 
 -- 
 Rob Wultsch
 wult...@gmail.com





RE: Mysql dynamic database location

2009-09-18 Thread Andrew Braithwaite
http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_datadir

You can specify the data directory at runtime with the --datadir= option
to mysqld (mysqld_safe).
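For example (path invented):

```ini
# my.cnf
[mysqld]
datadir = /srv/mysql/data

# or equivalently, at startup:
#   mysqld_safe --datadir=/srv/mysql/data &
```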

Cheers,

A

-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan
De Meersman
Sent: 18 September 2009 10:30
To: Claudio Nanni
Cc: manasi.s...@artificialmachines.com; mysql@lists.mysql.com
Subject: Re: Mysql dynamic database location

You can do that at runtime no problem, you just need to do OS calls for
it.

r...@soulcake-duck:/var/lib/mysql# mysqlshow
+--------------------+
|     Databases      |
+--------------------+
| information_schema |
| mysql              |
+--------------------+
r...@soulcake-duck:/var/lib/mysql# mkdir /tmp/remotedb
r...@soulcake-duck:/var/lib/mysql# chown mysql:mysql /tmp/remotedb
r...@soulcake-duck:/var/lib/mysql# ln -s /tmp/remotedb/
r...@soulcake-duck:/var/lib/mysql# mysqlshow
+--------------------+
|     Databases      |
+--------------------+
| information_schema |
| mysql              |
| remotedb           |
+--------------------+
r...@soulcake-duck:/var/lib/mysql# mysql remotedb
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 39
Server version: 5.0.67-0ubuntu6 (Ubuntu)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create table a (a int);
Query OK, 0 rows affected (0.00 sec)

mysql> show tables;
+--------------------+
| Tables_in_remotedb |
+--------------------+
| a                  |
+--------------------+
1 row in set (0.00 sec)

mysql> quit
Bye
r...@soulcake-duck:/var/lib/mysql#




On Fri, Sep 18, 2009 at 10:54 AM, Claudio Nanni
claudio.na...@gmail.com
wrote:
 As far as I know, you can't,
 you can do it with symbolic linking but not at run time.
 So if you have a clue you can (pre)build empty databases using
symbolic
 linking and switching to the right one at run time.


 Cheers

 Claudio Nanni


 2009/9/18 Manasi Save manasi.s...@artificialmachines.com

 Hi All,

 Is it possible to change or create any database on a specific
location.
 I want to specify a db path at runtime.

 Thanks in advance.
 --
 Regards,
 Manasi Save
 Artificial Machines Pvt Ltd.








 --
 Claudio




-- 
That which does not kill you was simply not permitted to do so for the
purposes of the plot.




RE: 1 Machine with 4 GB RAM for Big Size MySQL Data Size

2009-09-05 Thread Andrew Braithwaite
One word: Backups!

If your potential client must restrict you to one server, then backups
must be the primary consideration in this design; this cannot be
stressed enough.

One server with 4GB of main memory should be fine for your 24GB database
with small monthly growth and a low number of concurrent users. InnoDB
with the default settings should work, perhaps with some my.cnf tuning
for your particular needs (look at the www.mysqlperformanceblog.com
archives for some tips on that).

But you absolutely must sort out backups. If they already have a backup
server, look at using the free version of Zmanda or some other backup
scripts. If not, you could consider Amazon S3 as a backup destination;
it's easy to use and quite cheap too.

Cheers,

Andrew

-Original Message-
From: Colin Streicher [mailto:co...@obviouslymalicious.com] 
Sent: 05 September 2009 05:16
To: mysql@lists.mysql.com
Subject: Re: 1 Machine with 4 GB RAM for Big Size MySQL Data Size

On Friday 04 September 2009 08:15:35 pm muhammad subair wrote:
 On Sat, Sep 5, 2009 at 5:10 AM, mos mo...@fastmail.fm wrote:
  At 11:48 AM 9/4/2009, you wrote:
  One of my potential clients wants to migrate their application to
  web-based (PHP & MySQL); the estimate of the data size is 24GB and
  growth per month is 20MB of data.  Unfortunately, they could only use
  1 server machine with 4GB RAM.
 
  The application is used on an intranet, just running simple
  transactions, and the number of concurrent users is under 10.
 
  I need information and suggestions about this situation: is the effort
  spent on implementation and future maintenance not too large to use
  MySQL under these conditions?
 
  *btw sorry for my English*
 
  Thanks you very much,
  --
  Muhammad Subair
 
  Muhammad,
  It will depend on your queries and how efficiently you write them. A
  poorly constructed query on a 24MB table will perform worse than an
  optimized query on a 24GB table.  If you can show us your table
  structure and query example (are you joining tables?), then we can
  guesstimate better.
 
  Mike
 
  --
 
 Thank you for the feedback and input from all friends.
 
 Currently I have yet to enter the design phase, just the survey phase to
 get information about the data which will migrate from the legacy
 application. Fyi, the input data which will migrate to MySQL is txt and
 not normalized for a relational database.
 
 Based on existing feedback, I conclude that this project makes sense and
 can be continued, perhaps with a note of the single-point-of-failure
 problem because there is only 1 server.
 
 Furthermore if there is progress again, I'll try sharing.
 
 Thank you very much
 
Perhaps it's worth looking at a master-slave relationship between 2
servers if you are concerned about a single point of failure.

Colin
-- 
There is a 20% chance of tomorrow.




Replication - connecting a slave to a master on the same host via a port or socket

2009-08-11 Thread Andrew Braithwaite
Hi,

I have 2 mysql instances running on a server on different ports with
different datadirs and different .sock files.

I can connect locally via the sock with the -S flag to mysql but I
cannot connect locally via port (-P flag).

Does anyone know if there is a way to configure a mysql slave to use a
socket to connect to a master on localhost?

If not; does anyone know a way to connect to another mysql instance on
the same linux server using a port?

Thanks for your help,

Andrew




RE: Replication - connecting a slave to a master on the same host via a port or socket

2009-08-11 Thread Andrew Braithwaite
Ah.  I have found that if you use 'localhost' to connect, you cannot
specify a port; the -P option silently fails because the client falls
back to the Unix socket.

You can connect with a port by using a hostname or IP address (even
though it's the same server)...

Andrew
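So for a slave replicating from a second mysqld on the same machine, point it
at 127.0.0.1 rather than localhost so that MASTER_PORT is honoured; as far as
I can tell the slave I/O thread only connects over TCP, so the socket route
isn't available anyway. A sketch (port, user, password and log coordinates
are placeholders):

```sql
CHANGE MASTER TO
    MASTER_HOST = '127.0.0.1',  -- not 'localhost', which selects the socket
    MASTER_PORT = 3307,         -- the second instance's port (assumed)
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;
```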

-Original Message-
From: Andrew Braithwaite [mailto:andrew.braithwa...@lovefilm.com] 
Sent: 11 August 2009 16:12
To: mysql@lists.mysql.com
Subject: Replication - connecting a slave to a master on the same host
via a port or socket

Hi,

I have 2 mysql instances running on a server on different ports with
different datadirs and different .sock files.

I can connect locally via the sock with the -S flag to mysql but I
cannot connect locally via port (-P flag).

Does anyone know if there is a way to configure a mysql slave to use a
socket to connect to a master on localhost?

If not; does anyone know a way to connect to another mysql instance on
the same linux server using a port?

Thanks for your help,

Andrew




RE: Growing database & Performance

2009-06-29 Thread Andrew Braithwaite
 Would it be beneficial to divide this database's tables
 across different databases where each database holds some tables?

If you are planning to scale to large amounts of database activity in the 
future then yes, this will help very much.  If you split your tables into 
several logical databases and ensure there are no cross-database joins, 
you will in future be able to scale by moving some of the logical 
databases onto separate physical hosts relatively easily.

Of course tuning your SQL/mysql configuration/operating system/indexes properly 
will afford you the best scalability rather than throwing hardware at it.

Cheers,

Andrew

-Original Message-
From: fa so [mailto:fak...@yahoo.com] 
Sent: 26 June 2009 16:26
To: mysql@lists.mysql.com
Subject: Growing database & Performance

I have a website where my database is continuously growing. And I started being 
worried about performance.
I have a couple of questions, and I would appreciate it very much if you can 
elaborate on them.

- I have about 70 tables in the same database.  some of them are becoming very 
large (over 1 million record), and I guess in a couple of months some of them 
will be double in size, like the mailbox table. Would it be beneficial to 
divide this database tables across different databases where each database 
holds some tables? 

- I was looking at MySQL table partitioning, and I would like to try it. I am 
not sure though what is the best way to do it. For example, in my mailbox 
table, I have senderId and receiverId as keys and I query inbox and outbox 
using these keys. If I want to partition the table, what is the partition-by 
method I should use?

- do you think dividing my mailbox table into separate tables for inbox and 
outbox like: mailbox_inbox, mailbox_outbox, and mailbox_messages would be 
beneficial?


- I am also guessing that my mailbox table will be holding 10s of
millions of records in a year or two, and I am not sure about the best
way to handle such a table when it grows very much. Any ideas on how to
plan for such a scenario?
I can imagine the best way would be to create many tables, each holding a 
portion of the mailbox table, while using MySQL partitioning on each of 
them... I am wondering though about the best way to map senderId and 
receiverId to the correct table.

thank you



  




RE: The size of an index (INDEX_LENGTH)

2009-06-15 Thread Andrew Braithwaite
Hi,

Is your table MyISAM or InnoDB?

Andrew

-Original Message-
From: Morten [mailto:my.li...@mac.com] 
Sent: 15 June 2009 21:23
To: mysql@lists.mysql.com
Subject: The size of an index (INDEX_LENGTH)


Hi,

I dropped an index on a table with 25M records today. The INDEX_LENGTH  
in information_schema.tables shrank from 3834642432 to 3215982592, ie.  
~618Mb difference

The index was on an int(11) column.

That means each index key takes up ~618Mb/25M ~= 25 bytes but that  
doesn't sound right? Is that true, or is information_schema.tables  
unreliable or?

Thanks,

Morten







RE: ibdata1 lost

2009-05-27 Thread Andrew Braithwaite
Hi,

Your data is gone (unless you can undelete it from whatever filesystems you're 
using).

You should be able to recover the schema from the directories and .frm files by 
doing something like this hack:

1. Take a copy of your .frm files and keep them somewhere safe.
2. Create a database with tables of same name as the XXX.frm files.  Just make 
the tables with one column and don't put any data in them. 
3. Stop the mysqld and replace the newly created .frm files with saved .frm 
files
4. Start mysqld and the tables should be replaced with the old ones but with no 
data.

Andrew

-Original Message-
From: Sebastien MORETTI [mailto:sebastien.more...@unil.ch] 
Sent: 27 May 2009 07:41
To: mysql@lists.mysql.com
Subject: ibdata1 lost

Hi,

Is there a way to recover data and/or database schema when ibdata1 file 
has been removed (for InnoDB databases) ?

(MySQL 5.0.26)
Thanks

-- 
Sébastien Moretti





RE: Inserting a default null date

2009-05-15 Thread Andrew Braithwaite
Agreed.  And don't forget to listen to the warnings MySQL sends back,
e.g.:

mysql> create table temp_date(d date default null);
Query OK, 0 rows affected (0.15 sec)

mysql> insert into temp_date(d) values('2009-13-99');
Query OK, 1 row affected, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+-----------------------------------------+
| Level   | Code | Message                                 |
+---------+------+-----------------------------------------+
| Warning | 1265 | Data truncated for column 'd' at row 1  |
+---------+------+-----------------------------------------+
1 row in set (0.00 sec)

mysql> select * from temp_date;
+------------+
| d          |
+------------+
| 0000-00-00 |
+------------+
1 row in set (0.01 sec)

Andrew

-Original Message-
From: Martijn Tonies [mailto:m.ton...@upscene.com] 
Sent: 15 May 2009 14:43
To: mysql@lists.mysql.com
Subject: Re: Inserting a default null date 

Hi,

 I have a table with a column like:

 date date default null,

 If I enter an empty string in it, the default null value is added (as
it 
 should).

 But if I enter an invalid date by mistake, the date 0000-00-00
 is entered instead of the default null, and this is not good.

 Can I do something to force MySQL to insert a null date if the entered

 date is an invalid one?

That's silly, if the data you want to insert is considered to be
invalid, 
reject it,
don't replace it with NULL.

With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download FREE! Database Workbench Lite for MySQL!


Database questions? Check the forum:
http://www.databasedevelopmentforum.com 





RE: Merging Databases

2009-05-11 Thread Andrew Braithwaite
If you are merging table A and table B and say, table A's auto-increment
id is up to 2000, just pick a nice round number like 3000 and add it to
the auto-increment ID column of table B with something like this:

UPDATE tableB SET id = id + 3000;

Then do the same to all the fields in other tables that link to tableB's
auto-increment ID column.

Once that's done, merge the tables with something like:

INSERT INTO tableA SELECT * FROM tableB;

And do the same with the other tables. If they have their own
auto-increment ids then you should leave those out of the INSERT ...
SELECT (unless those auto-increment ids are referenced by other tables,
in which case you'll have to repeat the renumbering, cascading down the
levels of referential id columns).

Should do the trick.
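The renumber-then-merge approach can be sketched with Python's sqlite3 module
standing in for MySQL (table and column names below are made up, not from the
thread):

```python
import sqlite3

# In-memory stand-in: tableA and tableB share a schema, and childB
# references tableB's auto-increment id.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tableA (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tableB (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE childB (id INTEGER PRIMARY KEY, tableB_id INTEGER, note TEXT);
    INSERT INTO tableA VALUES (1, 'a1'), (2000, 'a2000');
    INSERT INTO tableB VALUES (1, 'b1'), (2, 'b2');
    INSERT INTO childB VALUES (1, 1, 'child of b1');
""")

OFFSET = 3000  # a round number safely above tableA's highest id
con.execute("UPDATE tableB SET id = id + ?", (OFFSET,))
# Keep the referencing column in step with the renumbered ids:
con.execute("UPDATE childB SET tableB_id = tableB_id + ?", (OFFSET,))
con.execute("INSERT INTO tableA SELECT * FROM tableB")

ids = [r[0] for r in con.execute("SELECT id FROM tableA ORDER BY id")]
print(ids)  # -> [1, 2000, 3001, 3002]
```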

Andrew

-Original Message-
From: Johnny Withers [mailto:joh...@pixelated.net] 
Sent: 11 May 2009 22:10
To: Weston, Craig (OFT)
Cc: MySQL General List
Subject: Re: Merging Databases

We don't want to use a view because then this database will not be
consistent with the others.
We can't simply use a select from .. insert into because when we
renumber table1's ID column, items in table2 and 3 and so on may link to
the ID column in that table. So we need to update the ID column in
table1, then add the same # to the table1_id columns in any other table.
After we do this, we could do the select from .. insert into method, I
suppose.

-jw

On Mon, May 11, 2009 at 2:49 PM, Weston, Craig (OFT) 
craig.wes...@oft.state.ny.us wrote:



 -Original Message-
 From: Johnny Withers [mailto:joh...@pixelated.net]
 Sent: Monday, May 11, 2009 3:30 PM
 To: MySQL General List
 Subject: Merging Databases

 Hi,
 I have a lot of databases that have the exact same tables and each
table
 has
 the exact same column structure. I'm looking at merging two of these
 databases together into a single database (Company A bought Company B
and
 wants the data from A combined into B now).

 I've been tossing around the idea of looking in database B at each
table
 that would need to be merged and simply adding the last ID number to
 every
 ID number in database A's tables. For example, in table1 in B's data,
the
 last ID number is 2000, could we simply add 2000 to every ID number in
 table1 in A's data? Could we then export (SELECT INTO OUTFILE) from
A's
 data
 and import (LOAD DATA) into B's data?

 Has anyone done something like this before? Did you have problems?

 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net

 ---

 Why not create a view and just concatenate on an identifier? This way
the
 data can be kept in the same forms.

 Or, if you do want to have it as one table, you can use a select
insert
 statement to move from one to another. Build the select query first to
get
 the data looking like you want it, then convert it when you think you
are
 ready.

 Of course, backups are your friend in any case.  :)


 Cheers,
 Craig


 This e-mail, including any attachments, may be confidential,
privileged or
 otherwise legally protected. It is intended only for the addressee. If
you
 received this e-mail in error or from someone who was not authorized
to send
 it to you, do not disseminate, copy or otherwise use this e-mail or
its
 attachments.  Please notify the sender immediately by reply e-mail and
 delete the e-mail from your system.






-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net




RE: Default my.cnf for (very) high performance servers....

2009-05-06 Thread Andrew Braithwaite
There's no such thing as a generic my.cnf for high performance MySQL
servers, you will need to provide more information..

Some questions:  Are you going to run InnoDB or MyISAM or both (if both,
what's the split?)

Is there anything else running on that server?  i.e. how much of the
16GB is available for MySQL to use?

Can you partition your disks as you wish?  (How much data do you need
host?)

Will this server be a master or slave or standalone? (Do we need to deal
with binlogs here?)

Andrew

-Original Message-
From: Craig Dunn [mailto:li...@codenation.net] 
Sent: 06 May 2009 14:02
To: mysql@lists.mysql.com
Subject: Re: Default my.cnf for (very) high performance servers

Craig Dunn wrote:
 
 
 
 Hi All,
 
 We're setting up a group of servers using MySQL Enterprise 5.1 -
Rather 
 than starting with a blank canvas I wondered if there was a suitable 
 my.cnf that is tuned to the kind of environment I'm running where I
can 
 tweak it from there.
 
 We're running on RHEL, on Sunfire X4140's - 8 disks, 16G RAM, 2 x dual

 core 3000mhz 64bit... which is reasonably beefy.  Environment is more 
 read than write, but write speed is important.
 
 Anyone know where I can look?
 Cheers
 Craig
 
 

I should add, I wanted something a bit more up to date than my-huge.cnf,
which seems to think a huge server is a system with memory of 1G-2G.


-- 
Linux web infrastructure consulting, cr...@codenation.net
Free live poker tournament listings, http://www.g5poker.com






RE: Default my.cnf for (very) high performance servers....

2009-05-06 Thread Andrew Braithwaite
Your disk config is good and you'll need all the necessary my.cnf
entries to point all the logs and data to the correct place.  Slaves
should have the relay-logs going to the OS disk too.  I assume you've
set up the master-slave config in the my.cnf too.

Here's my brain dump on what you need:

skip-locking
max_allowed_packet = 16M
key_buffer_size = 9000M
table_cache = 1024
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 1000
query_cache_size  = 256M

# Nice big key_buffer_size - the most important one for read-heavy
MyISAM DBs
# query cache nice and high too - if your tables change a lot you may
want to turn this off as it will be ineffective

tmpdir  = /somedir/tmp/
#You may want to point this somewhere else if you are writing a lot of
tmp tables to disk

innodb_data_home_dir = /somedir/mysql/
innodb_data_file_path = ibdata1:512M:autoextend
innodb_log_group_home_dir = /somedir/mysql/
innodb_log_arch_dir = /somedir/mysql/
innodb_buffer_pool_size=1000M
set-variable = innodb_additional_mem_pool_size=128M
innodb_log_file_size=200M
set-variable = innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=0
set-variable = innodb_lock_wait_timeout=50
innodb_thread_concurrency = 8
innodb_file_per_table

# Keep a GB of InnoDB in memory as you're not using that much

# use the innodb_file_per_table param for easier management of disk
space

The most important part is your caches.  You can keep an eye on your
MyISAM key cache efficiency by running 'SHOW STATUS' and 'SHOW
VARIABLES' and calculating the following:

Cache hit ratio:

100 - ((Key_reads * 100) / Key_read_requests)

Percentage of buffer in use:

100 - ((Key_blocks_unused * key_cache_block_size) * 100 /
key_buffer_size)

And tweak them as you need.
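Plugged into a script, the two formulas look like this (the counter values
below are invented for illustration; in practice they come from SHOW STATUS
and SHOW VARIABLES):

```python
# Sketch: MyISAM key-cache efficiency from SHOW STATUS / SHOW VARIABLES
# counters. Sample numbers are made up.
status = {"Key_reads": 1_200, "Key_read_requests": 950_000,
          "Key_blocks_unused": 120_000}
variables = {"key_cache_block_size": 1024,
             "key_buffer_size": 9000 * 1024 * 1024}

# Cache hit ratio: how often an index read was served from memory.
hit_ratio = 100 - (status["Key_reads"] * 100 / status["Key_read_requests"])

# Percentage of the key buffer actually holding index blocks.
buffer_in_use = 100 - (status["Key_blocks_unused"]
                       * variables["key_cache_block_size"]
                       * 100 / variables["key_buffer_size"])

print(f"cache hit ratio: {hit_ratio:.2f}%")
print(f"buffer in use:   {buffer_in_use:.2f}%")
```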

Of course you have to remember that these caches (and the filesystem
cache) will take a while to warm up before they become super-efficient.

Hope this helps,

Andrew





-Original Message-
From: Craig Dunn [mailto:li...@codenation.net] 
Sent: 06 May 2009 14:31
To: Andrew Braithwaite
Cc: mysql@lists.mysql.com
Subject: Re: Default my.cnf for (very) high performance servers

Andrew Braithwaite wrote:
 There's no such thing as a generic my.cnf for high performance MySQL
 servers, you will need to provide more information..
 

Well, I was more after something a bit more up to date than my-huge.cnf 
that I could use as a starting point, I see a few example ones posted to

Mysql Forge, but they are very innodb orientated.

 Some questions:  Are you going to run InnoDB or MyISAM or both (if
both,
 what's the split?)

Both, 90% MyISAM

 Is there anything else running on that server?  i.e. how much of the
 16GB is available for MySQL to use?

It's a dedicated MySQL box

 Can you partition your disks as you wish?  (How much data do you need
 host?)

About 50G of databases - I've currently got 6 disks with RAID 10 running
solely /var/lib/mysql (datadir) on an LVM, with the binlogs being
written to the other 2 disks (which have the OS on them too).

 Will this server be a master or slave or standalone? (Do we need to
deal
 with binlogs here?)

There are 3 in total, 1 master and 2 slaves (one of which is capable of 
being failed over to as a master)

The current MySQL 4.1 servers that they are replacing have at any one
time on average about 1000 open tables, roughly twice as many selects as
inserts, and between 2000 and 5000 qps - if that's any use.


Cheers
Craig

-- 
Linux web infrastructure consulting, cr...@codenation.net
Free live poker tournament listings, http://www.g5poker.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Creation date

2009-05-05 Thread Andrew Braithwaite
The create date in show table status is metadata held in the table
itself, whereas the create date on the .frm file is when that file was
created - i.e. if you copy the data files (without preserving
attributes) the copy will have a new creation date on the filesystem but
the metadata of the table will not change.

Andrew

-Original Message-
From: Jim Lyons [mailto:jlyons4...@gmail.com] 
Sent: 05 May 2009 16:38
To: Jerry Schwartz
Cc: mysql@lists.mysql.com
Subject: Re: Creation date

I would have said to look at the creation date of the .frm file, but
there is also a field called create date in the show table status
command, and the 2 dates often differ.  Anyone know why they do?

On Tue, May 5, 2009 at 9:28 AM, Jerry Schwartz
jschwa...@the-infoshop.comwrote:

 At the risk of getting spanked for not finding this in the
documentation,
 I'm asking a simple question:



 Can I tell when a table was created?



 Regards,



 Jerry Schwartz

 The Infoshop by Global Information Incorporated

 195 Farmington Ave.

 Farmington, CT 06032



 860.674.8796 / FAX: 860.674.8341



 http://www.the-infoshop.com






-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Slowness connecting to MySQL

2009-05-04 Thread Andrew Braithwaite
It could be slow reverse DNS lookups.  Make sure the hostname/IP of the
client are in the server's host file.  Or try connecting to the server
using an IP address instead of hostname.
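If reverse DNS really is the culprit, one common fix (a sketch offered here, not something from the thread) is to switch name resolution off entirely in my.cnf; note that any GRANTs which name clients by hostname will stop matching once you do this:

```ini
[mysqld]
# Skip the reverse DNS lookup on each new connection;
# hostname-based grants must be rewritten to use IPs.
skip-name-resolve
```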

Andrew

-Original Message-
From: Menachem Bazian [mailto:gro...@bcconsultingservices.com] 
Sent: 04 May 2009 16:07
To: myo...@lists.mysql.com; mysql@lists.mysql.com
Subject: Slowness connecting to MySQL

Please forgive the crossposting between the two groups. The problem I am
having SEEMS to be a mysql and not a MyODBC problem but I cannot be sure,
so I am opening this question to both lists. If I have violated protocol
in doing so, please accept my apologies.

I have MySQL running under Ubuntu Server 8.04. The server is on a 
network and is used for a desktop application programmed with Visual 
FoxPro running on windows workstations. Everything worked PERFECTLY 
until just recently.

Then the client had to reboot the server. There was nothing wrong with
the server, he shut it down because he thought he would have to move the
box. He did NOT move the box, as it turns out, and he turned the box
back on. Ever since then, connecting to the box with the app has been
S L O W. Like 10-20 seconds slow. Since nothing has changed, I cannot
figure out what the heck is going on.

This is a small application with fewer than 5 users. The box is used 
ONLY for mysql (and WebMin).

Any ideas on how to debug this would be GREATLY appreciated.

The VERSION variable returns 5.0.51a-3ubuntu5.4

Thank you in advance and my apologies again if I have violated protocol.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/mysql?unsub=andrew.braithwa...@lovefilm.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Index time columns?

2009-04-27 Thread Andrew Braithwaite
Hi,

If you have that date column in your where clause for example:

SELECT .. FROM . WHERE tstamp > NOW() - INTERVAL 1 WEEK;

Then it's essential to index that column to speed up a table with lots of data.

On a table with many rows, an index on a timestamp column is invaluable.  
However, I should point out that having an index on a column does add a small 
performance overhead to inserts and updates so if you're not going to use it in 
your where clauses then there's not much point in adding it.
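As a sketch of the point above (the table and column names here are hypothetical, not from the thread):

```sql
-- Hypothetical table: index the timestamp column used in the WHERE clause
ALTER TABLE events ADD INDEX idx_tstamp (tstamp);

-- EXPLAIN should now show idx_tstamp as a candidate for the range scan
EXPLAIN SELECT id FROM events WHERE tstamp > NOW() - INTERVAL 1 WEEK;
```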

Cheers,

Andrew

-Original Message-
From: Cantwell, Bryan [mailto:bcantw...@firescope.com] 
Sent: 24 April 2009 22:56
To: mysql@lists.mysql.com
Subject: Index time columns?

I have a debate with someone about indexes on time columns.
So I ask the community, should you put an index on a timestamp column of a 
table? 
Thanks...


RE: Is Temporary table right approach

2009-04-17 Thread Andrew Braithwaite
 If you can not eliminate your temporary tables, you have to adjust
the
following parameters in my.cnf [mysqld]
max_heap_table_size=1G
tmp_table_size=1G

You're making a lot of assumptions about this guy's setup.  You
shouldn't just tell him to apply these kinds of settings as you don't
know what effect they will have on his system.  If he only has 512MB
available for MySQL and he starts writing lots of 1GB temporary tables,
what's going to happen to the performance of his server?

With advice like that you could grind his server(s) to a halt and cause
his site to die a miserable, swappy death.
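A gentler alternative (sketched here, not suggested in the thread) is to raise the limits only for the session that actually builds the big temporary tables, leaving the server-wide defaults alone:

```sql
-- Applies to this connection only; global defaults stay untouched
SET SESSION tmp_table_size = 256 * 1024 * 1024;
SET SESSION max_heap_table_size = 256 * 1024 * 1024;
```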

Andrew

-Original Message-
From: Moon's Father [mailto:yueliangdao0...@gmail.com] 
Sent: 17 April 2009 06:36
To: Manoj Singh
Cc: php...@lists.php.net; mysql@lists.mysql.com
Subject: Re: Is Temporary table right approach

If you can not eliminate your temporary tables, you have to adjust the
following parameters in my.cnf
[mysqld]
max_heap_table_size=1G
tmp_table_size=1G

On Fri, Apr 17, 2009 at 12:57 PM, Manoj Singh
manojsingh2...@gmail.comwrote:

 Hi All,

 Thanks for your valuable input.

 I have decided to use temporary table approach. Since I am using it
for the
 first time and this has to be done in the production server. Do I need
to
 consider some facts before using this such as setting some parameters
in
 my.cnf etc or the MYSQL will handle all. Actually I want to know if
any one
 has faces issues practically when implementing temporary tables.

 Waiting for your suggestion.

 Thanks,
 Manoj


 On Fri, Apr 17, 2009 at 8:21 AM, Moon's Father
yueliangdao0...@gmail.comwrote:

 Use temporary table can be a good idea. But I think you performance
would
 be boost so much if you can do  something else to replace the
temporary
 table.
  On Wed, Apr 15, 2009 at 2:03 PM, Manoj Singh
manojsingh2...@gmail.comwrote:

 Hi All,

 I have a query which returns the large number of ids which i am
using in
 other queries. I am doing this in PHP. Now the first query can
return
 unlimited number of ids which might create problem in PHP. I want to
 store
 this ids in MYSQL through temporary table so that i can access that
ids
 in
 other queries directly.

 Do you think the approach is right or there is any other good
approach?

 Please suggest.

 Regards,
 Manoj




 --
 I'm a MySQL DBA in china.
 More about me just visit here:
 http://yueliangdao0608.cublog.cn





-- 
I'm a MySQL DBA in china.
More about me just visit here:
http://yueliangdao0608.cublog.cn

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: MySQL replication status plugin

2009-04-15 Thread Andrew Braithwaite
You could try this:

http://www.consol.de/opensource/nagios/check-mysql-health

(in German but should be self-explanatory).

Cheers,

Andrew

-Original Message-
From: Gabriel - IP Guys [mailto:gabr...@impactteachers.com] 
Sent: 15 April 2009 10:12
To: replicat...@lists.mysql.com
Cc: mysql@lists.mysql.com
Subject: MySQL replication status plugin

Good morning guys (and girls), I hope all is well. I've been given the
task to, and I quote, "Write a Nagios plugin to test the replication
status of two servers by comparing the position on the master to that on
the slave".

 

To save myself a lot of work, I'd like to know if anything has been done
in this arena already. I would be over the moon if someone has beaten
me to it, but I am so not in the mood to write one!
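For what it's worth, the comparison logic itself is small. The sketch below assumes the inputs were already fetched with SHOW MASTER STATUS (File, Position) on the master and SHOW SLAVE STATUS (Relay_Master_Log_File, Exec_Master_Log_Pos) on the slave; connection handling, option parsing and warning thresholds that a real plugin would need are left out:

```python
# Hypothetical Nagios-style check: compare the slave's executed
# position against the master's current binlog position.
OK, CRITICAL = 0, 2  # standard Nagios exit codes


def check_replication(master_file, master_pos, slave_file, slave_pos,
                      max_lag_bytes=0):
    """Return a Nagios exit code for the slave's replication position."""
    if master_file != slave_file:
        return CRITICAL  # slave is still executing an older binlog file
    if master_pos - slave_pos > max_lag_bytes:
        return CRITICAL  # slave is too far behind within the same file
    return OK


print(check_replication("mysql-bin.000346", 69110563,
                        "mysql-bin.000346", 69110563))  # 0 (OK)
```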

 

Any hints, recommendations, and ideas are wholly welcome!

 

---

Kind Regards,

Mr Gabriel

 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Annoying .mysql_history problem

2009-04-03 Thread Andrew Braithwaite
I know it's not quite the same but you can use a 'tee' to record what
you do.

I use a small script to invoke the mysql client that looks like this:

andrew@myserver:~/bin$ cat ms
# takes input of server and logical DB, eg: 'ms db1 test'
echo >> /home/andrew/mysqlhistory/$1.history
echo "================================================================" >> /home/andrew/mysqlhistory/$1.history
echo "INITIALISING NEW SESSION" >> /home/andrew/mysqlhistory/$1.history
date >> /home/andrew/mysqlhistory/$1.history
echo "================================================================" >> /home/andrew/mysqlhistory/$1.history
mysql --tee=/home/andrew/mysqlhistory/$1.history --prompt="(\u@\h) [\d]> " -u someuser -p -h$1 $2

This records both what you type and the results.

Hope this helps,

Andrew

-Original Message-
From: kabel [mailto:li...@voidship.net] 
Sent: 31 March 2009 15:57
To: mysql@lists.mysql.com
Subject: Annoying .mysql_history problem

Using MySQL 5.0.67 on OpenSolaris 2008.11, whenever I hit Ctrl-C to
terminate a long-running test query, it hangs while it kills the thread.
No problem here.  If I accidentally (or impatiently) hit Ctrl-C again,
it terminates the MySQL CLI.  Again, no problem here.  What's getting me
is that on Ubuntu, everything up to that point was still written to
.mysql_history, while on OSol it only goes back as far as my last clean
exit.

Is there any way to get the Ubuntu behavior?  I know the easy answer is
"Don't use Ctrl-C, you hack" but I'd kind of like the .mysql_history to
be written as I go.

Thanks for any pointers,

kabel


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/mysql?unsub=andrew.braithwa...@lovefilm.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Looking at the MySQL binlog and pulling the queries onto one line

2008-11-21 Thread Andrew Braithwaite
Hi,

I can convert the binlogs to text using mysqlbinlog and that works fine.

However; I have queries that span several lines e.g. : 

SELECT blah
  FROM t1
  WHERE some condition
  ORDER BY something

Does anyone know of any utilities to reformat binlogs so that the
queries are all on a single line?

Would make it easier for me to search through it and find certain
queries..
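I don't know of a ready-made tool for this, but a rough filter is easy to sketch. It is a heuristic only: it assumes each statement ends with a line whose last character is ';', so comments or string literals ending that way can confuse it:

```python
# Rough sketch: join each multi-line SQL statement from mysqlbinlog
# text output onto a single line.  Not a real SQL parser.

def flatten_statements(lines):
    buf = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        buf.append(line)
        if line.endswith(";"):   # heuristic end-of-statement marker
            yield " ".join(buf)
            buf = []
    if buf:                      # trailing statement without ';'
        yield " ".join(buf)

example = [
    "SELECT blah",
    "  FROM t1",
    "  WHERE some_condition",
    "  ORDER BY something;",
]
print(list(flatten_statements(example)))
# ['SELECT blah FROM t1 WHERE some_condition ORDER BY something;']
```

Feeding it the `mysqlbinlog` text line by line then gives one statement per output line, which is grep-friendly.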

Cheers,

Andrew

Mysql, query



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Optimal MySQL server -- opinions?

2008-04-29 Thread Andrew Braithwaite
Hi,

Three things...

1. You need to let us know what the DB server will be doing.  Many CPU
cores are only important if you have many CPU-intensive MySQL
connections in parallel.  Will you have a read-intensive or
write-intensive database load?  Those 2950III you're considering can
take up to 8 disks.  If you're doing very read-intensive operations, you
may want to consider a RAID1 pair for your OS, apps and MySQL tmp tables
and a RAID 5 array for the MySQL data (to get the most disk space for
your money without sacrificing redundancy).  However, if you're doing
heavy work with lots of reads and writes at the same time then you
should consider RAID 1+0 for your data.  RAM will always help for both
MySQL caches and buffers and don't underestimate the great effect that
lots of RAM for your filesystem cache will have (talking about
unix/linux here, can't speak for windows).

2. All the hardware vendors have promotions running all the time which
they change every month.  One month it will be cheaper disk, the next
month will be cut-price RAM etc...  The end result will be about the
same...

3. It's very easy to upgrade memory and processors as long as you don't
mind 15 minutes or so of downtime for that server, linux will just see
the new h/w when it comes back up.  With hardware like HP and Dell you
won't even need a screwdriver, it's all easy to use clips.

Cheers,

Andrew

-Original Message-
From: Rene Fournier [mailto:[EMAIL PROTECTED] 
Sent: Sun, 27 April 2008 22:57
To: mysql@lists.mysql.com
Subject: Optimal MySQL server -- opinions?

Okay, the previous subject was too narrow, what I am really looking  
for are opinions on general disk/memory/cpu configurations,  
manufacturer notwithstanding...

As stated previously, I'm configuring a PowerEdge 2950III, and trying  
to decide what will provide the best bang-for-buck. The server will be  
used strictly as a MySQL database server running atop Red Hat Linux.  
Two large databases, each about 2GB, heavy on both Inserts and Selects.

Up until recently, I had spec'd:

2 x Quad-Core Xeon 5430 @ 2.66 GHz (6 MB cache) , 1333 MHz FSB
8 GB Ram (4x2GB)
4 x 146 GB 15K SAS drives (RAID 1/1 -- first set for OS, apps, second  
set for MySQL data)

...worked out to around $5,500. Now however there is a processor  
promotion, such that:

1 x Quad-Core Xeon 5450 @ 3.0 GHz (6 MB cache) , 1333 MHz FSB
8 GB Ram (4x2GB)
4 x 146 GB 15K SAS drives (RAID 1/1 -- first set for OS, apps, second  
set for MySQL data)

...works out to around $4,500. So what I'm wondering is, do I really  
need an eight-core box, since my experience tells me that MySQL's  
greatest bottleneck is disk I/O. I'm wondering if I would be better  
off with just one processor to start with (are they easy to add later,  
btw?), maybe add more RAM, and just save some cash.

Any thoughts or suggestions are much appreciated. I have to pull the  
trigger on this soon. I was hoping they would bump the specs or drop  
the prices significantly... I've been watching these for months and  
there's been promo after promo... I anticipate a major update, has  
anyone heard anything? (Should I wait a little longer maybe?)

Thanks.

...Rene




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Performance problem - MySQL at 99.9% CPU

2008-01-02 Thread Andrew Braithwaite
Hi,

If you can follow this document:

http://www.ufsdump.org/papers/uuasc-june-2006.pdf

You should be able to figure out what's happening.

Cheers,

Andrew

-Original Message-
From: Gunnar R. [mailto:[EMAIL PROTECTED] 
Sent: Tue, 01 January 2008 23:31
To: mysql@lists.mysql.com
Subject: Performance problem - MySQL at 99.9% CPU

Hello,

I am running a community site mainly based on phpBB. It has about 9,300
registered users, 650,000 posts and about 200,000 visitors/month (12
million hits). The SQL database is about 700MB.

It's all running on a couple of years old Dell box with two P4 Xeon
1.7Ghz
CPUs, 1GB of RAMBUS memory and SCSI disks, with Linux and Apache.

The last year the server has been having huge performance problems, and
MySQL (5.0.45) seems to be the problem. It's almost constantly running
at
99.9% CPU (measured using 'top').

I know the hardware isn't too hot, but either way I am a bit confused by
the
fact that I can't seem to get MySQL to run smoothly. Is this just too
big a
database for this kind of box, or could this be a configuration issue?

I am thinking about buying a new dual core box (with IDE disks?), but I
have
to make sure this really is a hardware issue before I spend thousands of
bucks.

Any help will be hugely appreciated!

Cheers,

Gunnar



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL 5.0.27 replication problems

2007-09-28 Thread Andrew Braithwaite
Hi,

I keep getting the below in the error log.  I can't see any problems (no
other errors and replication is working) and the master DB is available
the whole time. 
 
070928 12:07:31 [Note] Slave: received end packet from server, apparent
master shutdown:
070928 12:07:31 [Note] Slave I/O thread: Failed reading log event,
reconnecting to retry, log 'mysql-bin.000346' position 69110563
070928 12:07:32 [Note] Slave: connected to master 'username@ip
address:3306',replication resumed in log 'mysql-bin.000346' at position
69110563
070928 12:07:32 [Note] Slave: received end packet from server, apparent
master shutdown:
070928 12:07:32 [Note] Slave I/O thread: Failed reading log event,
reconnecting to retry, log 'mysql-bin.000346' position 69110563
070928 12:07:32 [Note] Slave: connected to master 'username@ip
address:3306',replication resumed in log 'mysql-bin.000346' at position
69110563

Any ideas what is wrong?
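One classic cause of exactly this connect/disconnect loop (an assumption on my part, never confirmed in the thread) is two slaves configured with the same server-id: the master drops one slave's connection whenever the other connects, and each drop is logged as an "apparent master shutdown". Every replica needs a unique value, e.g.:

```ini
[mysqld]
# Must differ from the master and from every other slave
server-id = 2
```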

Cheers,

Andrew

Sql, query



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



thread_concurrency in linux

2007-08-31 Thread Andrew Braithwaite
Hi,

Does anyone know if thread_concurrency works in linux or is it just
limited to Solaris and Windows?

I know the general rule is number of CPU's*2 but will this actually have
any effect with Linux's threading model?

Thanks for any help :)

Andrew

Mysql, query



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



thread_concurrency in linux

2007-08-29 Thread Andrew Braithwaite
Hi,

Does anyone know if thread_concurrency works in linux or is it just
limited to Solaris and Windows?

I know the general rule is number of CPU's*2 but will this actually have
any effect with Linux's threading model?

Thanks for any help :)

Andrew

Mysql, query



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: thread_concurrency in linux

2007-08-29 Thread Andrew Braithwaite
Hi,

Just to make it clear; I mean thread_concurrency, not
innodb_thread_concurrency.

Cheers,

Andrew



From: Alex Arul Lurthu [mailto:[EMAIL PROTECTED] 
Sent: Wed, 29 August 2007 10:10
To: Andrew Braithwaite
Cc: mysql@lists.mysql.com
Subject: Re: thread_concurrency in linux


I am not sure whether you are talking about innodb_thread_concurrency.
If so please check out
http://www.mysqlperformanceblog.com/2006/06/05/innodb-thread-concurrency
for more details.  Innodb_thread_concurrency works on linux.

Thanks
Alex


On 8/29/07, Andrew Braithwaite [EMAIL PROTECTED] wrote: 

Hi,

Does anyone know if thread_concurrency works in linux or is it
just 
limited to Solaris and Windows?

I know the general rule is number of CPU's*2 but will this
actually have
any effect with Linux's threading model?

Thanks for any help :)

Andrew

Mysql, query 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]






-- 
Thanks
Alex
http://blog.360.yahoo.com/alex.lurthu 





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



thread_concurrency in linux

2007-08-24 Thread Andrew Braithwaite
Hi,

Does anyone know if thread_concurrency works in linux or is it just
limited to Solaris and Windows?

I know the general rule is number of CPU's*2 but will this actually have
any effect with Linux's threading model?

Thanks for any help :)

Andrew

Mysql, query



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: separator help

2007-08-23 Thread Andrew Braithwaite
mysql select format(3000000,0);
+-------------------+
| format(3000000,0) |
+-------------------+
| 3,000,000         |
+-------------------+
1 row in set (0.00 sec)

mysql select format(3000000,2);
+-------------------+
| format(3000000,2) |
+-------------------+
| 3,000,000.00      |
+-------------------+
1 row in set (0.00 sec)

Cheers,

Andrew 

-Original Message-
From: coolcoder [mailto:[EMAIL PROTECTED] 
Sent: Thu, 23 August 2007 11:55
To: mysql@lists.mysql.com
Subject: separator help


Was wondering if anyone could help me with this little problem I'm
having.
I'd like to have a comma separator after every 3 digits, e.g.
3,000,000.
How would i go about this?







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Group by and concatenate

2006-10-11 Thread Andrew Braithwaite
Hi,

I have the following data:

mysql select Dealername,pc from ford_gb where pc='LE4 7SL';

+-+-+
| Dealername  | pc  |
+-+-+
| CD Bramall Ford - Leicester | LE4 7SL |
| CD Bramall Ford - Leicester | LE4 7SL |
| CD Bramall Ford - Leicester | LE4 7SL |
+-+-+

Is there a way in a single SQL query to group by Dealername, and have
the postcodes concatenated into a comma-separated list? e.g.

+-++
| Dealername  | concat_pc  |
+-++
| CD Bramall Ford - Leicester | LE4 7SL, LE4 7SL, LE4 7SL  |
+-++

Running MySQL 4.1 on Fedora C3.

Any help appreciated,

Andrew
SQL, Query

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Group by and concatenate

2006-10-11 Thread Andrew Braithwaite
Never mind.

mysql select Dealername,pc,group_concat(pc) from ford_gb where pc='LE4
7SL' group by 1;
++-+--+
| Dealername | pc  | group_concat(pc) |
++-+--+
| CD Bramall - Leicester | LE4 7SL | LE4 7SL  |
| CD Bramall Trucks  | LE4 7SL | LE4 7SL,LE4 7SL  |
++-+--+

Andrew 

-Original Message-
From: Andrew Braithwaite [mailto:[EMAIL PROTECTED] 
Sent: Wed, 11 Oct 2006 14:32
To: mysql@lists.mysql.com
Subject: Group by and concatenate

Hi,

I have the following data:

mysql select Dealername,pc from ford_gb where pc='LE4 7SL';

+-+-+
| Dealername  | pc  |
+-+-+
| CD Bramall Ford - Leicester | LE4 7SL |
| CD Bramall Ford - Leicester | LE4 7SL |
| CD Bramall Ford - Leicester | LE4 7SL |
+-+-+

Is there a way in a single SQL query to group by Dealername, and have
the postcodes concatenated into a comma-separated list? e.g.

+-++
| Dealername  | concat_pc  |
+-++
| CD Bramall Ford - Leicester | LE4 7SL, LE4 7SL, LE4 7SL  |
+-++

Running MySQL 4.1 on Fedora C3.

Any help appreciated,

Andrew
SQL, Query

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: comparing two databases

2006-09-29 Thread Andrew Braithwaite
Some freebies:

PHP: http://sourceforge.net/projects/phpmycomparer 

Perl: http://freshmeat.net/projects/mysqldiff/

Cheers,

Andrew

-Original Message-
From: Steve Buehler [mailto:[EMAIL PROTECTED] 
Sent: Thu, 28 Sep 2006 21:06
To: mysql
Subject: comparing two databases

Is there a program out there that I can use to compare two databases?
Just the structure, not the content.

Thanks
Steve


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Last access time of a table

2006-02-03 Thread Andrew Braithwaite
Hi everyone,

Does anyone know if there is a way to get the last access time from a
mysql table through mysql commands/queries?

I don't want to go to the filesystem to get this info.

I understand that this could be tricky especially as we have query
caching turned on and serve quite a few sql requests from query cache.

Can anyone help?
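Not a full answer, but for MyISAM tables SHOW TABLE STATUS does expose an Update_time column (time of last modification, not of last read), which is sometimes close enough:

```sql
-- Update_time shows when the table's data was last changed;
-- reads (especially those served from the query cache) don't touch it
SHOW TABLE STATUS LIKE 'some_table';
```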

Cheers,

Andrew

SQL, Query

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Subquery strangeness when used in FROM clause

2005-12-06 Thread Andrew Braithwaite
Hi,

I'm having a problem with subqueries in MySQL 4.1.14 running on Fedora
Core 3.

mysql create table day_6_12_2005 (f1 int(1), f2 char(4));
Query OK, 0 rows affected (0.04 sec)

mysql insert into day_6_12_2005 values(1,'test');
Query OK, 1 row affected (0.00 sec)

mysql select * from (select date_format(now(),'day_%e_%c_%Y')) as t1;
+---+
| date_format(now(),'day_%e_%c_%Y') |
+---+
| day_6_12_2005 |
+---+
1 row in set (0.04 sec)

mysql select f1,f2 from (select date_format(now(),'day_%e_%c_%Y')) as
t1;
ERROR 1054 (42S22): Unknown column 'f1' in 'field list'

Anyone know what's going on?  According to the docs, this should work
fine...

Any pointers or ideas will be much appreciated...
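The derived table t1 here contains only the formatted string itself, not the columns of day_6_12_2005, so f1 and f2 can't come from it; a table name can't be computed inside a query like this. One workaround sketch (assuming the server-side prepared statements available from MySQL 4.1) builds the statement text first:

```sql
-- Build the table name into the statement text, then execute it
SET @sql = CONCAT('SELECT f1, f2 FROM ',
                  DATE_FORMAT(NOW(), 'day_%e_%c_%Y'));
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```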

Cheers,

Andrew

SQL, query


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Resend: 4.1 replication logs growing at a much greater rate than with 4.0

2005-10-04 Thread Andrew Braithwaite
Hi all,

I have just upgraded a master/slave database system from 4.0 to 4.1.
The replication binlogs are now growing at a vastly greater rate.  The
queries going through are the same.  Did 4.0 use some kind of
compression by default or something?

Does anyone have any idea what's going on with this?  Any other pointers
will be greatly appreciated. 

Cheers,

Andrew 

Sql, query




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Resend: 4.1 replication logs growing at a much greater rate than with 4.0

2005-10-04 Thread Andrew Braithwaite
You haven't physically looked at and compared the logs, have you 

 

Actually that was the first thing I did.  The mysqlbinlog rendered text
was roughly the same but the binary files much bigger.  It's probably
something obscure and specific to my implementation and not something
that the group has experienced by the looks of it.

 

Thanks for the help anyway.

 

Cheers,

 

Andrew

 



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Tue, 04 Oct 2005 14:32
To: Andrew Braithwaite
Cc: mysql@lists.mysql.com
Subject: Re: Resend: 4.1 replication logs growing at a much greater rate
than with 4.0

 



Andrew Braithwaite [EMAIL PROTECTED] wrote on 10/04/2005 05:39:59
AM:

 Hi all,
 
 I have just upgraded a master slave database system from 4.0 to 4.1.
 the replication binlogs are now growing at a vastly greater rate.  The
 queries going through are the same.  Did 4.0 use some kind of
 compression by default or something?
 
 Does anyone have any idea what's going on with this?  Any other
pointers
 will be greatly appreciated. 
 
 Cheers,
 
 Andrew 
 
 Sql, query
 

You haven't physically looked at and compared the logs, have you :-{ If
you had,  you could have answered yourself. 

You will see that v4.1 binlogs more things than 4.0. More log entries =
bigger files. 

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine 



4.1 replication logs growing at a much greater rate than with 4.0

2005-10-03 Thread Andrew Braithwaite
Hi all,

 

I have just upgraded a master/slave database system from 4.0 to 4.1.
The replication binlogs are now growing at a vastly greater rate.  The
queries going through are the same.  Did 4.0 use some kind of
compression by default or something?

 

Does anyone have any idea what's going on with this?  Any other pointers
will be greatly appreciated.

 

Cheers,

 

Andrew

 

Sql, query



RE: How to match a binary null in a varchar column???

2005-10-03 Thread Andrew Braithwaite
Hi,

You could try the binary operator:

http://dev.mysql.com/doc/mysql/en/charset-binary-op.html
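As a concrete sketch (table and column names hypothetical), the NUL byte can be built with CHAR(0) and matched directly:

```sql
-- Rows whose col contains a binary null (octal 000)
SELECT * FROM t WHERE INSTR(col, CHAR(0)) > 0;

-- or, equivalently, with LIKE
SELECT * FROM t WHERE col LIKE CONCAT('%', CHAR(0), '%');
```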

Cheers,

Andrew

-Original Message-
From: Richard F. Rebel [mailto:[EMAIL PROTECTED] 
Sent: Mon, 03 Oct 2005 17:48
To: Untitled
Subject: How to match a binary null in a varchar column???


Hello,

How do you match all rows with a binary null (octal 000) in a given
column?

I have tried all sorts of strange combinations of REGEXP and LIKE with
no results.  I have dug in the manual, but can't seem to find anything.

Any help would be appreciated.

-- 
Richard F. Rebel

cat /dev/null > `tty`


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



possible MySQL bug - insert into 'double' column problem with mysql 4.1

2005-09-19 Thread Andrew Braithwaite
Hi All,

I have a strange error when trying to insert into a table with 2
'double' fields.  It inserts into the 1st field OK but fills the 2nd one
with nines.  See below for a complete recreate.

Is this a known problem?  Does anyone have a solution?

I'm running standard MySQL binaries on redhat linux 7.2 kernel
2.4.20-28.7smp.  Help!

Cheers,

Andrew


mysql desc table1;
+---+--+--+-+-++
| Field | Type | Null | Key | Default | Extra  |
+---+--+--+-+-++
| id| mediumint(6) |  | PRI | NULL| auto_increment |
| GeoQuality| varchar(5)   | YES  | | NULL||
| lon   | double(7,6)  | YES  | MUL | NULL||
| lat   | double(7,6)  | YES  | | NULL||
| GeocodeDate   | date | YES  | | NULL||
| GeocodeSource | varchar(25)  | YES  | | NULL||
| state | varchar(70)  | YES  | | NULL||
| client_id | varchar(40)  | YES  | MUL | NULL||
+---+--+--+-+-++
15 rows in set (0.00 sec)

mysql> INSERT INTO table1 VALUES
(8002,'2a',-0.361080,39.468270,'2005-08-31','ES052',NULL,NULL);
Query OK, 1 row affected, 1 warning (0.01 sec)

mysql> select * from table1 where id=8002\G
*************************** 1. row ***************************
           id: 8002
   GeoQuality: 2a
          lon: -0.361080
          lat: 9.99
  GeocodeDate: 2005-08-31
GeocodeSource: ES052
        state: NULL
    client_id: NULL
1 row in set (0.00 sec)

Mysql, query 





RE: possible MySQL bug - insert into 'double' column problem with mysql 4.1

2005-09-19 Thread Andrew Braithwaite
Thanks; you're absolutely right - doh!  It's just amazing that this ever
worked in MySQL 4.0 and below...

Andrew

-Original Message-
From: Roger Baklund [mailto:[EMAIL PROTECTED] 
Sent: Mon, 19 Sep 2005 12:27
To: mysql@lists.mysql.com
Cc: Andrew Braithwaite
Subject: Re: possible MySQL bug - insert into 'double' column problem
with mysql 4.1

Andrew Braithwaite wrote:
 Hi All,
 
 I have a strange error when trying to insert into a table with 2
 'double' fields.  It inserts into the 1st field OK but fills the 2nd
one
 with nines.  See below for a complete recreate.
[...]
 | lon   | double(7,6)  | YES  | MUL | NULL|
|
 | lat   | double(7,6)  | YES  | | NULL|
|

 From the manual:

DOUBLE[(M,D)] [UNSIGNED] [ZEROFILL]

M is the total number of decimal digits and D is the number of digits 
following the decimal point.

URL: http://dev.mysql.com/doc/mysql/en/numeric-type-overview.html 

 mysql> INSERT INTO table1 VALUES
 (8002,'2a',-0.361080,39.468270,'2005-08-31','ES052',NULL,NULL);
 Query OK, 1 row affected, 1 warning (0.01 sec)
 
 mysql> select * from table1 where id=8002\G
 *************************** 1. row ***************************
            id: 8002
    GeoQuality: 2a
           lon: -0.361080
           lat: 9.99

You have created your columns with a max width of 7, out of which 6 are 
decimals, but you are trying to insert a number with 8 digits.

--
Roger
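Following on from Roger's point, a hedged fix for the schema (assuming the intent is coordinates with 6 decimal places) is to make M large enough for the integer digits plus the decimals:

```sql
-- double(9,6): 3 integer digits + 6 decimals, covers longitudes -180..180;
-- double(8,6): 2 integer digits + 6 decimals, covers latitudes -90..90.
ALTER TABLE table1
  MODIFY lon double(9,6),
  MODIFY lat double(8,6);
```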







Re: possible MySQL bug - insert into 'double' column problem with mysql 4.1

2005-09-19 Thread Andrew Braithwaite
Hmm - and further to this: strange differences of behaviour between the
last two production versions of MySQL.

sh-2.05b# ln -s mysql-standard-4.0.24-apple-darwin7.7.0-powerpc mysql

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.0.24-standard-log

mysql> create table wibble3 (
    -> test1 double(3,6),
    -> test2 double(7,6),
    -> test3 double(9,6));
Query OK, 0 rows affected (0.11 sec)

mysql> desc wibble3;
+---+-+--+-+-+---+
| Field | Type| Null | Key | Default | Extra |
+---+-+--+-+-+---+
| test1 | double(7,6) | YES  | | NULL|   |
| test2 | double(7,6) | YES  | | NULL|   |
| test3 | double(9,6) | YES  | | NULL|   |
+---+-+--+-+-+---+
3 rows in set (0.17 sec)

mysql> insert into wibble3 values(50.123456,50.123456,50.123456);
Query OK, 1 row affected (0.08 sec)

mysql> select * from wibble3;
+---+---+---+
| test1 | test2 | test3 |
+---+---+---+
| 50.123456 | 50.123456 | 50.123456 |
+---+---+---+
1 row in set (0.08 sec)

sh-2.05b# ln -s mysql-standard-4.1.10a-apple-darwin7.7.0-powerpc mysql

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.10a-standard-log

mysql> insert into wibble3 values(50.123456,50.123456,50.123456);
Query OK, 1 row affected, 2 warnings (0.11 sec)

mysql> select * from wibble3;
+---+---+---+
| test1 | test2 | test3 |
+---+---+---+
| 50.123456 | 50.123456 | 50.123456 |
|  9.99 |  9.99 | 50.123456 |
+---+---+---+
2 rows in set (0.06 sec)

It looks like MySQL 4.1 does not change what is already stored in the
data, but does change what is inserted into new records to match the
declared data type definitions.


On 19/9/05 17:49, Andrew Braithwaite [EMAIL PROTECTED] wrote:

 Thanks; you're absolutely right - doh!  It's just amazing that this ever
 worked in MySQL 4.0 and below...
 
 Andrew
 
 -Original Message-
 From: Roger Baklund [mailto:[EMAIL PROTECTED]
 Sent: Mon, 19 Sep 2005 12:27
 To: mysql@lists.mysql.com
 Cc: Andrew Braithwaite
 Subject: Re: possible MySQL bug - insert into 'double' column problem
 with mysql 4.1
 
 Andrew Braithwaite wrote:
 Hi All,
 
 I have a strange error when trying to insert into a table with 2
 'double' fields.  It inserts into the 1st field OK but fills the 2nd
 one
 with nines.  See below for a complete recreate.
 [...]
 | lon   | double(7,6)  | YES  | MUL | NULL|
 |
 | lat   | double(7,6)  | YES  | | NULL|
 |
 
  From the manual:
 
 DOUBLE[(M,D)] [UNSIGNED] [ZEROFILL]
 
 M is the total number of decimal digits and D is the number of digits
 following the decimal point.
 
 URL: http://dev.mysql.com/doc/mysql/en/numeric-type-overview.html 
 
 mysql> INSERT INTO table1 VALUES
 (8002,'2a',-0.361080,39.468270,'2005-08-31','ES052',NULL,NULL);
 Query OK, 1 row affected, 1 warning (0.01 sec)
 
 mysql> select * from table1 where id=8002\G
 *************************** 1. row ***************************
            id: 8002
    GeoQuality: 2a
           lon: -0.361080
           lat: 9.99
 
 You have created your columns with a max width of 7, out of which 6 are
 decimals, but you are trying to insert a number with 8 digits.
 
 --
 Roger
 
 
 
 
 






Re: slow count(1) behavior with large tables

2005-07-15 Thread Andrew Braithwaite
Hi,

You're doing a join on 'BoardID' on the tables MSGS and MBOARD.  Is the
BoardID field indexed on the MSGS table too?  If not then that may be your
problem.

Cheers,

Andrew
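If it turns out not to be indexed, a hedged sketch of the fix (the index name is a placeholder):

```sql
-- Adds a secondary index on the join column so the join can do
-- ref lookups instead of scanning the table.
CREATE INDEX idx_msgs_boardid ON MSGS (BoardID);
```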


On 15/7/05 23:31, Jon Drukman [EMAIL PROTECTED] wrote:

 i'm trying to run this query:
 
 SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE b.BaseType = 0 AND m.BoardID
 = b.BoardID;
 
 MSGS has 9.5 million rows, and is indexed on BoardID
 MBOARD has 69K rows and is indexed on BaseType
 
 EXPLAIN shows:
 
 mysql> explain SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE b.BaseType =
 0 AND m.BoardID = b.BoardID;
 +-------+------+------------------+------------+---------+-----------+-------+-------------+
 | table | type | possible_keys    | key        | key_len | ref       | rows  | Extra       |
 +-------+------+------------------+------------+---------+-----------+-------+-------------+
 | b     | ref  | PRIMARY,BaseType | BaseType   |       1 | const     | 48614 |             |
 | m     | ref  | BoardIndex       | BoardIndex |       4 | b.BoardID |   277 | Using index |
 +-------+------+------------------+------------+---------+-----------+-------+-------------+
 
 the query takes several minutes to run.  shouldn't this be a simple case
 of doing some math on index values?
 
 -jsd-
 






Re: Group By query optimization

2005-07-15 Thread Andrew Braithwaite
Hi,

Put indexes on 'valid' and 'sessiontype' and all will be good.

Cheers,

Andrew
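The suggested indexing can be sketched as follows; a single composite index is arguably better than two separate ones here, since both columns appear in the WHERE clause (the index name is a placeholder):

```sql
-- Covers the WHERE predicates; appending gamename may also let the
-- GROUP BY avoid the filesort reported by EXPLAIN.
ALTER TABLE gamesessions
  ADD INDEX idx_valid_type_game (valid, sessiontype, gamename);
```

Note also that COUNT(valid=1) counts every row where the expression is non-NULL, not just rows where it is true; with WHERE valid=1 already filtering, COUNT(*) states the intent more directly.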


On 15/7/05 18:26, Kishore Jalleda [EMAIL PROTECTED] wrote:

 Hi All,
   I have a mysql query which takes 8 seconds to run on a dual
 xeon 2.4, 3Gig ram box:
 SELECT gamename, MAX(score) AS score, COUNT(valid=1) AS played FROM
 gamesessions AS gamesessions
 WHERE valid=1 AND sessiontype IN (1,2)
 GROUP BY gamename;
 Explain select gives
 
 table  type  possible_keys  key  key_len  ref  rows  Extra
 gamesessions  ALL  NULL  NULL  NULL  NULL  915522  Using where; Using
 temporary; Using filesort
 
 How can I optimize the query
 Thanks 
 Kishore
 
 






Re: slow count(1) behavior with large tables

2005-07-15 Thread Andrew Braithwaite
Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table
too?

Cheers,

A


On 16/7/05 00:01, Andrew Braithwaite [EMAIL PROTECTED] wrote:

 Hi,
 
 You're doing a join on 'BoardID' on the tables MSGS and MBOARD.  Is the
 BoardID field indexed on the MSGS table too?  If not then that may be your
 problem.
 
 Cheers,
 
 Andrew
 
 
 On 15/7/05 23:31, Jon Drukman [EMAIL PROTECTED] wrote:
 
 i'm trying to run this query:
 
 SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE b.BaseType = 0 AND m.BoardID
 = b.BoardID;
 
 MSGS has 9.5 million rows, and is indexed on BoardID
 MBOARD has 69K rows and is indexed on BaseType
 
 EXPLAIN shows:
 
 mysql> explain SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE b.BaseType =
 0 AND m.BoardID = b.BoardID;
 +-------+------+------------------+------------+---------+-----------+-------+-------------+
 | table | type | possible_keys    | key        | key_len | ref       | rows  | Extra       |
 +-------+------+------------------+------------+---------+-----------+-------+-------------+
 | b     | ref  | PRIMARY,BaseType | BaseType   |       1 | const     | 48614 |             |
 | m     | ref  | BoardIndex       | BoardIndex |       4 | b.BoardID |   277 | Using index |
 +-------+------+------------------+------------+---------+-----------+-------+-------------+
 
 the query takes several minutes to run.  shouldn't this be a simple case
 of doing some math on index values?
 
 -jsd-
 
 
 






RE: Remove 1st 3 Chars

2005-05-11 Thread Andrew Braithwaite
Hello,

Is there any way to get MySQL to return the results of this query with
the 'fieldname' in the order listed in the in() bit?

select fieldname from tablename where fieldname in
('B4079','B4076','B4069','B4041','A4710','58282','58220','56751','56728'
,'45003','09234','04200','04035','04026');

i.e. I want row 1 to have fieldname='B4079', row 2 to have
fieldname='B4076', etc.

Or is there any other way to list the results by the order as defined in
a list specified in the query.

Please don't reply saying that I should do this in the application layer
as that isn't an option.

Thanks for any help at all on this...

Cheers,

Andrew

MySQL, Query
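One well-known approach (not shown in this thread) uses MySQL's FIELD() function, which returns the 1-based position of its first argument within the remaining arguments, so it can drive ORDER BY. A hedged sketch with a shortened list:

```sql
SELECT fieldname FROM tablename
WHERE fieldname IN ('B4079','B4076','B4069','B4041')
-- FIELD() returns the position of fieldname in the list below,
-- so rows come back in exactly that order.
ORDER BY FIELD(fieldname, 'B4079','B4076','B4069','B4041');
```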






RE: Help with a tricky/impossible query...

2005-04-14 Thread Andrew Braithwaite
Thanks for the idea,

Unfortunately I can't do that as the ranges involved are unknown and
will be from 1 to several billion at least.  I can't have another table
that needs to be augmented each time my ranges change.

Any other ideas?

Cheers,

Andrew

 

-Original Message-
From: Paul B van den Berg [mailto:[EMAIL PROTECTED] 
Sent: Thu 14 April 2005 10:47
To: MySQL
Cc: Andrew Braithwaite
Subject: Help with a tricky/impossible query...

Hi,

In SQL you need to define the data that you want to work with:

create table z ( z int(5) not null primary key);
insert into z values
(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14),(15),(16),(17),(18),(19),(20),(21),(22);

If you need more values you could use a simple perl looping construct:

for ($i = 0; $i <= $max; $i++) {
  $dbh->do(qq{ insert into z set z = $i });
}

Once you have the table filled, it's easy to explode the x/y ranges by
seq:
 
select seq, z
from wibble, z
where z between x and y

Then the rows with seq=1 are:
| seq | z  |
+-----+----+
|   1 |  5 |
|   1 |  6 |
|   1 |  7 |
|   1 |  8 |
|   1 |  9 |
|   1 | 10 |
The rest is as you wanted.

Regards, Paul 

On 14/4/05 1:11 am, Andrew Braithwaite [EMAIL PROTECTED] wrote:

 Here's what I need to do...
 
 create table wibble(
 seq int(3) auto_increment primary key, x int(5), y int(5) );
 
 insert into wibble set x=5, y=10;
 insert into wibble set x=1, y=3;
 insert into wibble set x=17, y=22;
 
 mysql> select * from wibble;
 +-+--+--+
 | seq | x| y|
 +-+--+--+
 |   1 |5 |   10 |
 |   2 |1 |3 |
 |   3 |   17 |   22 |
 +-+--+--+
 3 rows in set (0.09 sec)
 
 So I want to run a query to explode the x/y ranges by seq.
 
 The required output is:
 
 mysql> select <some clever things> from wibble where <some clever stuff
 happens here>;
 +-+--+
 | seq | z|
 +-+--+
 |   1 |1 |
 |   1 |2 |
 |   1 |3 |
 |   1 |4 |
 |   1 |5 |
 |   2 |1 |
 |   2 |2 |
 |   2 |3 |
 |   3 |   17 |
 |   3 |   18 |
 |   3 |   19 |
 |   3 |   20 |
 |   3 |   21 |
 |   3 |   22 |
 +-+--+
 14 rows in set (0.17 sec)
 
 Can anyone help me to achieve this result?
 






Re: Query Performance

2005-04-14 Thread Andrew Braithwaite
You could probably save a bit of processing time by changing:

concat(date_format(from_unixtime(time), "%d/%m/%Y"), " - ",
time_format(from_unixtime(time), "%H:%i"))

to:

date_format(from_unixtime(time), "%d/%m/%Y - %H:%i")

This means half as many date conversions would be executed.

Separating out the 'time' and 'result' indices will probably help too.

Cheers,

Andrew
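A further thought, hedged: the PS in the quoted message reports key = NULL even with USE INDEX. One likely cause is that `time` is declared varchar(15) but compared against unquoted numeric literals; MySQL then compares the column as a number and cannot use the index. A sketch of two possible fixes:

```sql
-- Option 1: quote the literals so the comparison stays a string
-- comparison (workable here because unix timestamps are equal-length):
-- ... where time >= '1109646000' and time <= '1112324399' ...

-- Option 2: store the timestamp as an integer so range scans can
-- use the index directly:
ALTER TABLE accesses MODIFY time INT UNSIGNED NOT NULL;
```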


On 14/4/05 6:34 pm, Fernando Henrique Giorgetti [EMAIL PROTECTED]
wrote:

 Hi Folks!
 
 Here, I have the following table:
 
 CREATE TABLE `accesses` (
   `time` varchar(15) NOT NULL default '',
   `duration` int(10) default NULL,
   `user` varchar(25) NOT NULL default '',
   `ipaddr` varchar(15) NOT NULL default '',
   `result` varchar(30) default NULL,
   `bytes` int(10) default NULL,
   `reqmethod` varchar(10) default NULL,
   `urlparent` varchar(100) NOT NULL default '',
   KEY `usuario` (`usuario`),
   KEY `time_result` (`time`, `result`)
 );
 
 If my table has a great number of rows (something like 5 million), the
 result time becomes much longer.
 
 select user, count(distinct concat(date_format(from_unixtime(time),
 "%d/%m/%Y"), " - ", time_format(from_unixtime(time), "%H:%i")), ipaddr,
 urlparent) as qtd, sec_to_time(sum(duration)/1000) as duration, sum(bytes) as
 bytes from acessos where time >= 1109646000 and time <= 1112324399 and result
 <> 'TCP_DENIED/403' group by user order by user;
 
 PS: when explaining this select, the time_result key appears as a possible_key,
 but in the key field I have the NULL value (the NULL persists even if I force
 it with use index()).
 
 Can anybody suggest what I can do to make this query faster (indexes, tuning,
 or changes to the table structure or the query)?
 
 Thank you !






Help with a tricky/impossible query...

2005-04-13 Thread Andrew Braithwaite
Hi,

I need some help with a tricky query.  Before anyone asks, I cannot bring
this functionality back to the application layer (as much as I'd like to).

Here's what I need to do...

create table wibble(
seq int(3) auto_increment primary key,
x int(5),
y int(5)
);

insert into wibble set x=5, y=10;
insert into wibble set x=1, y=3;
insert into wibble set x=17, y=22;

mysql> select * from wibble;
+-+--+--+
| seq | x| y|
+-+--+--+
|   1 |5 |   10 |
|   2 |1 |3 |
|   3 |   17 |   22 |
+-+--+--+
3 rows in set (0.09 sec)

So I want to run a query to explode the x/y ranges by seq.

The required output is:

mysql> select <some clever things> from wibble where <some clever stuff
happens here>;
+-+--+
| seq | z|
+-+--+
|   1 |1 |
|   1 |2 |
|   1 |3 |
|   1 |4 |
|   1 |5 |
|   2 |1 |
|   2 |2 |
|   2 |3 |
|   3 |   17 |
|   3 |   18 |
|   3 |   19 |
|   3 |   20 |
|   3 |   21 |
|   3 |   22 |
+-+--+
14 rows in set (0.17 sec)

Can anyone help me to achieve this result?

Thanks,

Andrew

SQL, Query







Re: Help with a tricky/impossible query...

2005-04-13 Thread Andrew Braithwaite
I should mention that I'm constrained to version 4.0.n, so no subqueries for
me!

Andrew


On 14/4/05 1:11 am, Andrew Braithwaite [EMAIL PROTECTED] wrote:

 Hi,
 
 I need some help with a tricky query.  Before anyone asks, I cannot bring
 this functionality back to the application layer (as much as I'd like to).
 
 Here's what I need to do...
 
 create table wibble(
 seq int(3) auto_increment primary key,
 x int(5),
 y int(5)
 );
 
 insert into wibble set x=5, y=10;
 insert into wibble set x=1, y=3;
 insert into wibble set x=17, y=22;
 
 mysql> select * from wibble;
 +-+--+--+
 | seq | x| y|
 +-+--+--+
 |   1 |5 |   10 |
 |   2 |1 |3 |
 |   3 |   17 |   22 |
 +-+--+--+
 3 rows in set (0.09 sec)
 
 So I want to run a query to explode the x/y ranges by seq.
 
 The required output is:
 
 mysql> select <some clever things> from wibble where <some clever stuff
 happens here>;
 +-+--+
 | seq | z|
 +-+--+
 |   1 |1 |
 |   1 |2 |
 |   1 |3 |
 |   1 |4 |
 |   1 |5 |
 |   2 |1 |
 |   2 |2 |
 |   2 |3 |
 |   3 |   17 |
 |   3 |   18 |
 |   3 |   19 |
 |   3 |   20 |
 |   3 |   21 |
 |   3 |   22 |
 +-+--+
 14 rows in set (0.17 sec)
 
 Can anyone help me to achieve this result?
 
 Thanks,
 
 Andrew
 
 SQL, Query
 
 
 






Re: Changing the Prompt for timing purposes

2005-04-13 Thread Andrew Braithwaite
When you say shell, do you mean DOS or UNIX?

If it's the latter then you may do this for the logfile:

sh-2.05b# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2 to server version: 4.0.24-standard-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> ?

For the complete MySQL Manual online visit:
   http://www.mysql.com/documentation

For info on technical support from MySQL developers visit:
   http://www.mysql.com/support

For info on MySQL books, utilities, consultants, etc. visit:
   http://www.mysql.com/portal

List of all MySQL commands:
   (Commands must appear first on line and end with ';')

help(\h)Display this help.
?   (\?)Synonym for `help'.
clear   (\c)Clear command.
connect (\r)Reconnect to the server. Optional arguments are db and host.
edit(\e)Edit command with $EDITOR.
ego (\G)Send command to mysql server, display result vertically.
exit(\q)Exit mysql. Same as quit.
go  (\g)Send command to mysql server.
nopager (\n)Disable pager, print to stdout.
notee   (\t)Don't write into outfile.
pager   (\P)Set PAGER [to_pager]. Print the query results via PAGER.
print   (\p)Print current command.
prompt  (\R)Change your mysql prompt.
quit(\q)Quit mysql.
rehash  (\#)Rebuild completion hash.
source  (\.)Execute a SQL script file. Takes a file name as an argument.
status  (\s)Get status information from the server.
system  (\!)Execute a system shell command.
tee (\T)Set outfile [to_outfile]. Append everything into given
outfile.
use (\u)Use another database. Takes database name as argument.

Connection id: 2  (Can be used with mysqladmin kill)

mysql> \T wibble.txt
Logging to file 'wibble.txt'
mysql> show databases;
+--+
| Database |
+--+
| andrew   |
| mysql|
| phpdb|
| test |
+--+
4 rows in set (0.00 sec)

mysql> \q
Bye
sh-2.05b# ll
total 112
drwxr-xr-x  21 root   wheel   714B 14 Apr 02:07 .
drwxr-xr-x  11 root   wheel   374B 12 Apr 00:44 ..
-rw-r--r--   1 root   wheel18K  5 Mar 04:37 COPYING
-rw-r--r--   1 root   wheel 5K  5 Mar 04:37 EXCEPTIONS-CLIENT
-rw-r--r--   1 root   wheel 8K  5 Mar 04:37 INSTALL-BINARY
-rw-r--r--   1 root   wheel 1K  5 Mar 00:38 README
drwxr-xr-x  50 root   wheel 1K 20 Mar 13:06 bin
-rwxr-xr-x   1 root   wheel   773B  5 Mar 04:50 configure
drwxr-x---  11 mysql  wheel   374B 12 Apr 01:27 data
drwxr-xr-x   7 root   wheel   238B 20 Mar 13:06 docs
drwxr-xr-x  53 root   wheel 1K 20 Mar 13:06 include
drwxr-xr-x  10 root   wheel   340B 20 Mar 13:06 lib
drwxr-xr-x   3 root   wheel   102B 20 Mar 13:06 man
drwxr-xr-x  10 root   wheel   340B 20 Mar 13:06 mysql-test
drwxr-xr-x   3 root   wheel   102B 20 Mar 13:06 scripts
drwxr-xr-x   3 root   wheel   102B 20 Mar 13:06 share
drwxr-xr-x  31 root   wheel 1K 20 Mar 13:06 sql-bench
-rwxr-xr-x   1 root   wheel88B 20 Mar 13:13 start
drwxr-xr-x  13 root   wheel   442B 20 Mar 13:06 support-files
drwxr-xr-x  21 root   wheel   714B 20 Mar 13:06 tests
-rw-r--r--   1 root   wheel   160B 14 Apr 02:08 wibble.txt
sh-2.05b# cat wibble.txt
mysql> show databases;
+--+
| Database |
+--+
| andrew   |
| mysql|
| phpdb|
| test |
+--+
4 rows in set (0.00 sec)

mysql> sh-2.05b#


Hope this helps,

Andrew 


On 13/4/05 5:29 pm, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Hi,
  The MYSQL command line interface is very basic. Can it be modified like the
 shell command prompts so that I can include date/time for timing benchmarks?
 
 Secondly, is there an echo command in the MySQL command prompt so that I can
 see the command I issued, or a log file that I can write to? Sorry about the
 basic questions but I'm a newbie.
 
 George
 






RE: Can't create a new thread (errno 11). If you are not out of available memory, you can consult the manual for a possible OS-dependent bug'

2005-04-07 Thread Andrew Braithwaite
Hi,

I have loaded and run mysql-debug-4.1.10a and am getting the same
problem.

There are no errors produced in the error log and mysqld is not crashing
so there is no stack trace to resolve.

Any ideas?

Cheers,

Andrew

-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED] 
Sent: Sat 02 April 2005 13:31
To: mysql@lists.mysql.com
Subject: Re: Can't create a new thread (errno 11). If you are not out of
available memory, you can consult the manual for a possible OS-dependent
bug'

Hello.

I don't know. With additional information we could make more exact
conclusions.



Jocelyn Fournier [EMAIL PROTECTED] wrote:
 Hi,
 
 For me it sounds like a glibc issue.
 BTW, currently the 4.1.10a build is compiled against glibc-2.2; does
 MySQL plan to build future releases against glibc-2.3, which seems to
 handle a high number of simultaneously connected threads much better?
 
 Thanks !
   Jocelyn
 
 Gleb Paharenko wrote:
 Hello.
 
 Please switch to the mysql-debug-4.1.10a version and send
 the error log with resolved stack trace. Include the
 output of the following statement as well:
 
   SHOW VARIABLES;
 
 I'm getting this strange error when there are more than 1100 mysql
 connections connected to the same server.
 
 What about ulimits and free memory of your system?
 
 [snip: the rest of the quoted thread appears in full in the messages
 below]


--
For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com











RE: Can't create a new thread (errno 11). If you are not out of available memory, you can consult the manual for a possible OS-dependent bug'

2005-04-01 Thread Andrew Braithwaite
Hi,

Yes - am using the standard binaries and have even upgraded to
mysql-standard-4.1.10a-pc-linux-gnu-i686.  I'm still getting this error
- does anyone have any ideas? 

Cheers,

Andrew

-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED] 
Sent: Thu 31 March 2005 02:31
To: mysql@lists.mysql.com
Subject: Re: Can't create a new thread (errno 11). If you are not out of
available memory, you can consult the manual for a possible OS-dependent
bug'

Hello.

I strongly recommend you to upgrade to the latest release. Do you use
official binaries? See:

  http://dev.mysql.com/doc/mysql/en/crashing.html





Andrew Braithwaite [EMAIL PROTECTED] wrote:
 Hi,
 
 I'm getting this strange error when there are more than 1100 mysql 
 connections connected to the same server.
 
 [EMAIL PROTECTED] mysql]# bin/mysql
 bin/mysql: connect to server at 'localhost' failed
 error: 'Can't create a new thread (errno 11). If you are not out of 
 available memory, you can consult the manual for a possible 
 OS-dependent bug'
 
 I've had this running fine in the past with MySQL 4.0.17 and Red Hat 
 7.3 (linux 2.4..) but with the same hardware and MySQL versions using 
 Fedora core 2 (linux 2.6) I am getting these problems.
 
 I have checked max_connections and others in my.cnf and all is good.
 I'm running 'out of the box' linux and 'out of the box' MySQL
binaries.
 
 Has anyone had this before?
 
 I would love to hear your thoughts and ideas..
 
 Cheers for the help,
 
 Andrew
 
 SQL, Query
 
 


--
For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com











MySQL inserts and disk full - how to handle gracefully?

2005-04-01 Thread Andrew Braithwaite
Hi All,

When you do an insert into a MySQL database and the disk is full, the
insert just hangs waiting for that table to become available.

This is fine for applications that care about data integrity.  In this
case I care more about availability and speed and would prefer it if the
inserts gracefully returned a nice error instead of waiting forever.

However - if the system is running a bit slow and the inserts are
queuing and taking a few seconds I wouldn't want them to fail in that
case.

Does anyone have any experience circumventing this kind of
data-integrity-protecting hang-on-disk-full condition?  I would love to
hear your thoughts and ideas..

Cheers for the help,

Andrew

SQL, Query








MySQL inserts and disk full - how to handle gracefully?

2005-03-30 Thread Andrew Braithwaite
 
Hi All,

When you do an insert into a MySQL database and the disk is full, the
insert just hangs waiting for that table to become available.

This is fine for applications that care about data integrity.  In this
case I care more about availability and speed and would prefer it if the
inserts gracefully returned a nice error instead of waiting forever.

However - if the system is running a bit slow and the inserts are
queuing and taking a few seconds I wouldn't want them to fail in that
case.

Does anyone have any experience circumventing this kind of
data-integrity-protecting hang-on-disk-full condition?  I would love to
hear your thoughts and ideas..

Cheers for the help,

Andrew

SQL, Query




Can't create a new thread (errno 11). If you are not out of available memory, you can consult the manual for a possible OS-dependent bug'

2005-03-30 Thread Andrew Braithwaite
Hi,

I'm getting this strange error when there are more than 1100 mysql
connections connected to the same server.

[EMAIL PROTECTED] mysql]# bin/mysql
bin/mysql: connect to server at 'localhost' failed
error: 'Can't create a new thread (errno 11). If you are not out of
available memory, you can consult the manual for a possible OS-dependent
bug'

I've had this running fine in the past with MySQL 4.0.17 and Red Hat 7.3
(linux 2.4..) but with the same hardware and MySQL versions using Fedora
core 2 (linux 2.6) I am getting these problems.

I have checked max_connections and others in my.cnf and all is good.
I'm running 'out of the box' linux and 'out of the box' MySQL binaries.

Has anyone had this before?

I would love to hear your thoughts and ideas..

Cheers for the help,

Andrew

SQL, Query
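For what it's worth, errno 11 on Linux is EAGAIN; coming from pthread_create() it usually means a per-user process/thread limit was hit rather than real memory exhaustion, and 2.6 kernels with NPTL account for threads differently than 2.4 did. A hedged first check from a Linux shell:

```shell
# Per-user limit on processes (threads count against this under NPTL)
ulimit -u
# System-wide ceiling on threads (Linux)
cat /proc/sys/kernel/threads-max
```

If `ulimit -u` for the mysqld user is near 1100, raising it (e.g. in /etc/security/limits.conf, or with ulimit in the startup script) would be the thing to try.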





RE: Delays in replication and internet latency

2005-03-14 Thread Andrew Braithwaite
Hi,

I can see how this would help with bandwidth (I don't have a bandwidth
problem as explained below) but how will this help with the latency I
have?

Cheers,

Andrew 

-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED] 
Sent: Fri 11 March 2005 14:51
To: mysql@lists.mysql.com
Subject: Re: Delays in replication and internet latency

Hello.

You may use --slave_compressed_protocol=1. See:
  http://dev.mysql.com/doc/mysql/en/replication-options.html
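For reference, the option goes on the slave side, e.g. in my.cnf (a sketch; the option name is as documented for 4.x):

```ini
[mysqld]
# Compress the master->slave replication stream. Trades CPU for
# bandwidth, which helps thin links but does nothing for the
# 300ms round-trip latency discussed above.
slave_compressed_protocol = 1
```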



Andrew Braithwaite [EMAIL PROTECTED] wrote:
 Hi,
 
 We have replication running here and it has been excellent for a 
 number of years. =20
 
 Recently we have been having lag in replication from London to Palo 
 Alto (Plenty of bandwidth but a latency of 300ms round trip).
 
 The replication binlogs are being written at a rate of about 100MB to
 200MB per hour and whilst we have no problems replicating to several 
 servers that are reasonably close to the master, the Palo Alto server 
 is getting over an hour behind at times.
 
 Does anyone know if there is something I can do to solve this?
 
 What experiences have others had in the past?
 
 Perhaps there is some way to increase the packet size or something to 
 get over this latency problem...
 
 Or any other ideas?
 
 Cheers,
 
 Andrew
 
 SQL, Query
 
 
 


--
For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com











Delays in replication and internet latency

2005-03-11 Thread Andrew Braithwaite
Hi,

We have replication running here and it has been excellent for a number
of years.  

Recently we have been having lag in replication from London to Palo Alto
(Plenty of bandwidth but a latency of 300ms round trip).  

The replication binlogs are being written at a rate of about 100MB to
200MB per hour and whilst we have no problems replicating to several
servers that are reasonably close to the master, the Palo Alto server is
getting over an hour behind at times.

Does anyone know if there is something I can do to solve this?  

What experiences have others had in the past?

Perhaps there is some way to increase the packet size or something to
get over this latency problem...

Or any other ideas?

Cheers,

Andrew

SQL, Query






Possible bug with wait_timeout

2004-12-02 Thread Andrew Braithwaite
Hi all,

In version 4.0.18 when setting the wait_timeout variable to 10 in
my.cnf, it seems to work when looking at 'mysqladmin variables' as it is
indeed showing up as 10.

However, when in the mysql client and I do a 'show variables' it is
showing up with the default value of 28800.

I'm certain that I've connected to the same server and was using all the
tools from /usr/bin/mysql and specifying paths like this
bin/safe_mysqld, bin/mysqladmin, bin/mysql etc...

Anyone seen this before or am I going crazy?

Cheers,

Andrew

query, sql
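A likely explanation, for what it's worth: the interactive mysql client takes its session wait_timeout from interactive_timeout (default 28800), so 'show variables' reports the session copy rather than the global value set in my.cnf. One way to see both side by side (a sketch, 4.0.3+ syntax):

```sql
-- Global value (what my.cnf set) vs. this session's effective value
SELECT @@global.wait_timeout,
       @@session.wait_timeout,
       @@global.interactive_timeout;
```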




RE: Slave replication problem

2004-09-06 Thread Andrew Braithwaite
Did you get an answer to your problem yet?  If not, I may be able to help..
 
Andrew



From: Jeff McKeon [mailto:[EMAIL PROTECTED]
Sent: Fri 03/09/2004 15:29
To: [EMAIL PROTECTED]
Subject: Slave replication problem



Hello all,

We had a power outage this morning and before we could shut down our
main MySQL server, power to it was lost.  Bad UPS battery. Long
story.

We replicate this server to two others.  On one I get the following
error...



Jeff McKeon
IT Manager

*** 1. row ***
  Master_Host: 10.32.1.10
  Master_User: repl
  Master_Port: 3306
Connect_retry: 60
  Master_Log_File: DB01TC07927-bin.022
  Read_Master_Log_Pos: 2223919
   Relay_Log_File: DB02TC07927-relay-bin.286
Relay_Log_Pos: 16361931
Relay_Master_Log_File: DB01TC07927-bin.021
 Slave_IO_Running: Yes
Slave_SQL_Running: No
  Replicate_do_db:
  Replicate_ignore_db:
   Last_errno: 0
   Last_error: Could not parse relay log event entry. The
possible reasons are: the master's binary log is corrupted (you can
check this by running 'mysqlbinlog' on the binary log), the slave's
relay log is corrupted (you can check this by running 'mysqlbinlog' on
the relay log), a network problem, or a bug in the master's or slave's
MySQL code. If you want to check the master's binary log or slave's
relay log, you will be able to know their names by issuing 'SHOW SLAVE
STATUS' on this slave.
 Skip_counter: 0
  Exec_master_log_pos: 85068331
  Relay_log_space: 18604700
1 row in set (0.00 sec)


On the other I'm getting...

*** 1. row ***
  Master_Host: 10.32.1.10
  Master_User: repl
  Master_Port: 3306
Connect_retry: 60
  Master_Log_File: DB01TC07927-bin.021
  Read_Master_Log_Pos: 85079027
   Relay_Log_File: mis02tc07927-relay-bin.106
Relay_Log_Pos: 4
Relay_Master_Log_File: DB01TC07927-bin.021
 Slave_IO_Running: No
Slave_SQL_Running: Yes
  Replicate_do_db:
  Replicate_ignore_db:
   Last_errno: 0
   Last_error:
 Skip_counter: 0
  Exec_master_log_pos: 85079027
  Relay_log_space: 4
1 row in set (0.00 sec)


From the server.err log I have:

040903 14:00:01  Slave I/O thread: connected to master
'[EMAIL PROTECTED]:3306',  replication started in log
'DB01TC07927-bin.021' at position 85079027
040903 14:00:01  Error reading packet from server: Client requested
master to start replication from impossible position (server_errno=1236)
040903 14:00:01  Got fatal error 1236: 'Client requested master to start
replication from impossible position' from master when reading data from
binary log
040903 14:00:01  Slave I/O thread exiting, read up to log
'DB01TC07927-bin.021', position 85079027

How do I fix these problems?

Thanks,

Jeff






RE: CIDR ranges in MySQL permissions?

2004-07-22 Thread Andrew Braithwaite
Hi All,

Can I assume by the lack of any responses that the answer to my question
is no?

Cheers,

Andrew 

-Original Message-
From: Andrew Braithwaite [mailto:[EMAIL PROTECTED] 
Sent: Tuesday 20 July 2004 16:44
To: [EMAIL PROTECTED]
Subject: RE: CIDR ranges in MySQL permissions?

Sorry - a /32 is a single ip - I meant a /27 :)

A

-Original Message-
From: Andrew Braithwaite [mailto:[EMAIL PROTECTED]
Sent: Tuesday 20 July 2004 16:16
To: [EMAIL PROTECTED]
Cc: Karl Skidmore
Subject: CIDR ranges in MySQL permissions?

Hi All,
 
Has anyone had any experience with using IP address ranges in MySQL
permissions?  It would be easy if you had a whole class C for example
because you would be able to do:
 
Grant all privileges on *.* to someuser@'192.87.12.%';

But if you only wanted to give permissions to a CIDR range (e.g. a /32,
192.87.12.1 to 192.87.12.32 etc..) in one go, and used the above method,
it would leave your databases open to connection from others.

Is there any way to do this in a single line (without having an entry
for each IP address) ?

Hope you can help.

Regards,

Andrew

Mysql, query












CIDR ranges in MySQL permissions?

2004-07-20 Thread Andrew Braithwaite
Hi All,
 
Has anyone had any experience with using IP address ranges in MySQL
permissions?  It would be easy if you had a whole class C for example
because you would be able to do:
 
Grant all privileges on *.* to someuser@'192.87.12.%';

But if you only wanted to give permissions to a CIDR range (e.g. a /32,
192.87.12.1 to 192.87.12.32 etc..) in one go, and used the above method,
it would leave your databases open to connection from others.

Is there any way to do this in a single line (without having an entry
for each IP address) ?

Hope you can help.

Regards,

Andrew

Mysql, query
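Since the grant-table wildcard cannot express an arbitrary CIDR block (the IP/netmask host syntax in MySQL of this era only accepts octet-boundary masks), one workaround is to generate a grant per address on the application side. A minimal sketch using Python's stdlib ipaddress module; the user and privilege names are just placeholders:

```python
import ipaddress

def grants_for_cidr(user, cidr, privs="ALL PRIVILEGES", on="*.*"):
    """Emit one GRANT statement per usable host address in a CIDR block."""
    net = ipaddress.ip_network(cidr)
    return [f"GRANT {privs} ON {on} TO '{user}'@'{host}';"
            for host in net.hosts()]  # hosts() skips network/broadcast

stmts = grants_for_cidr("someuser", "192.87.12.0/27")
print(len(stmts))   # 30 usable hosts in a /27
print(stmts[0])     # GRANT ALL PRIVILEGES ON *.* TO 'someuser'@'192.87.12.1';
```

The resulting statements can then be piped into the mysql client in one go.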




RE: CIDR ranges in MySQL permissions?

2004-07-20 Thread Andrew Braithwaite
Sorry - a /32 is a single ip - I meant a /27 :)

A

-Original Message-
From: Andrew Braithwaite [mailto:[EMAIL PROTECTED] 
Sent: Tuesday 20 July 2004 16:16
To: [EMAIL PROTECTED]
Cc: Karl Skidmore
Subject: CIDR ranges in MySQL permissions?

Hi All,
 
Has anyone had any experience with using IP address ranges in MySQL
permissions?  It would be easy if you had a whole class C for example
because you would be able to do:
 
Grant all privileges on *.* to someuser@'192.87.12.%';

But if you only wanted to give permissions to a CIDR range (e.g. a /32,
192.87.12.1 to 192.87.12.32 etc..) in one go, and used the above method,
it would leave your databases open to connection from others.

Is there any way to do this in a single line (without having an entry
for each IP address) ?

Hope you can help.

Regards,

Andrew

Mysql, query








RE: Advice on Database Scheme

2004-06-04 Thread Andrew Braithwaite
Hi,

Can I ask what you used to render that .gif?  It looks like phpMyAdmin but
I have never seen that feature in phpMyAdmin..

Thanks,

Andrew

-Original Message-
From: Jochem van Dieten [mailto:[EMAIL PROTECTED] 
Sent: Friday 04 June 2004 16:19
To: David Blomstrom
Cc: [EMAIL PROTECTED]
Subject: Re: Advice on Database Scheme

David Blomstrom wrote:
 I put a screenshot of my database organization online at 
 http://www.geoworld.org/database.gif It features four tables, focusing

 on Continents, Nations, States and Counties. Notice that Continents 
 and Nations share a CCode (continent codes) column

So how are you going to put Russia in both Asia and Europe? How are you
going to put Turkey in both Asia and Europe? Egypt in Africa and Asia?


 Nations and States
 share a NCode (nation codes) field, and States and Counties share a 
 SCode (state codes) field.

My country does not have counties, and instead of states it has
provinces. How are you going to deal with that?

Answering my questions is answering your own question :-)

Jochem








problem with totals doubling when using a right join....

2004-06-02 Thread Andrew Braithwaite
Hi All,

I have a problem.  I have 2 tables:

mysql> SELECT int_traffic.day, sum(int_traffic.deliveries) as deliveries
    -> FROM int_traffic
    -> WHERE int_traffic.day between '2004-05-01' and '2004-05-31'
    -> GROUP BY int_traffic.day
    -> ORDER BY int_traffic.day;
+++
| day| deliveries |
+++
| 2004-05-01 |  6 |
| 2004-05-03 | 40 |
| 2004-05-04 |102 |
| 2004-05-05 |103 |
| 2004-05-06 |119 |
| 2004-05-07 | 57 |
| 2004-05-08 | 17 |
| 2004-05-09 | 12 |
| 2004-05-10 | 87 |
| 2004-05-11 | 82 |
| 2004-05-12 | 86 |
| 2004-05-13 | 76 |
| 2004-05-14 | 68 |
| 2004-05-15 | 38 |
| 2004-05-16 |  9 |
| 2004-05-17 |126 |
| 2004-05-18 | 83 |
| 2004-05-19 |149 |
| 2004-05-20 | 57 |
| 2004-05-21 | 76 |
| 2004-05-22 | 18 |
| 2004-05-23 |  7 |
| 2004-05-24 | 99 |
| 2004-05-25 | 78 |
| 2004-05-26 | 56 |
| 2004-05-27 | 16 |
| 2004-05-28 |  8 |
+++

And:

mysql> SELECT daily_traffic.day, sum(daily_traffic.deliveries) as deliveries
    -> FROM daily_traffic
    -> WHERE daily_traffic.day between '2004-05-01' and '2004-05-31'
    -> GROUP BY daily_traffic.day
    -> ORDER BY daily_traffic.day;
+++
| day| deliveries |
+++
| 2004-05-01 | 101936 |
| 2004-05-03 |  96954 |
| 2004-05-04 | 190871 |
| 2004-05-05 | 192995 |
| 2004-05-06 | 243518 |
| 2004-05-07 | 353032 |
| 2004-05-08 | 197238 |
| 2004-05-09 |  92295 |
| 2004-05-10 | 208991 |
| 2004-05-11 | 201924 |
| 2004-05-12 | 206812 |
| 2004-05-13 | 200308 |
| 2004-05-14 | 184485 |
| 2004-05-15 | 100602 |
| 2004-05-16 |  89567 |
| 2004-05-17 | 202828 |
| 2004-05-18 | 196956 |
| 2004-05-19 | 197561 |
| 2004-05-20 | 197962 |
| 2004-05-21 | 194170 |
| 2004-05-22 | 102406 |
| 2004-05-23 |  86801 |
| 2004-05-24 | 201442 |
| 2004-05-25 | 197852 |
| 2004-05-26 | 191453 |
| 2004-05-27 | 185538 |
| 2004-05-28 | 167304 |
| 2004-05-29 |  95079 |
| 2004-05-30 |  80982 |
| 2004-05-31 |  95863 |
+++

But when I do this:

mysql> SELECT distinct daily_traffic.day,
    -> sum(daily_traffic.deliveries) as external,
    -> sum(yell_int_traffic.deliveries) as deliveries
    -> FROM daily_traffic
    -> LEFT JOIN yell_int_traffic
    -> ON daily_traffic.day = yell_int_traffic.day
    -> WHERE daily_traffic.day BETWEEN '2004-05-01' AND '2004-05-31' and
    -> daily_traffic.client in ('yell','yell2002') and daily_traffic.type = 'inline'
    -> GROUP by daily_traffic.day;
++--++
| day| external | deliveries |
++--++
| 2004-05-01 |   203872 | 12 |
| 2004-05-03 |   193908 | 80 |
| 2004-05-04 |   954355 |204 |
| 2004-05-05 |   771980 |206 |
| 2004-05-06 |   974072 |238 |
| 2004-05-07 |  1412128 |114 |
| 2004-05-08 |   394476 | 34 |
| 2004-05-09 |   184590 | 24 |
| 2004-05-10 |   417982 |174 |
| 2004-05-11 |   403848 |164 |
| 2004-05-12 |   413624 |172 |
| 2004-05-13 |   400616 |152 |
| 2004-05-14 |   737940 |136 |
| 2004-05-15 |   201204 | 76 |
| 2004-05-16 |   179134 | 18 |
| 2004-05-17 |   811312 |252 |
| 2004-05-18 |   393912 |166 |
| 2004-05-19 |   395122 |298 |
| 2004-05-20 |   791848 |114 |
| 2004-05-21 |   776680 |152 |
| 2004-05-22 |   204812 | 36 |
| 2004-05-23 |   173602 | 14 |
| 2004-05-24 |   402884 |198 |
| 2004-05-25 |  1187112 |156 |
| 2004-05-26 |   382906 |112 |
| 2004-05-27 |   742152 | 32 |
| 2004-05-28 |   334608 | 16 |
| 2004-05-29 |95079 |   NULL |
| 2004-05-30 |80982 |   NULL |
| 2004-05-31 |95863 |   NULL |
++--++

All the results double for the days where there is data in both tables;
e.g. for the 28th, the total goes from 167304 to 334608 in the left
join.  I want to see the same result set format but without the doubled
up numbers.

Does anyone have any ideas how to solve this?

Help will be most appreciated...

Cheers,

Andrew

Sql, query
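For the record, this doubling is a classic join fan-out: each daily_traffic day matches more than one row before grouping, so SUM() counts it repeatedly. One standard fix is to aggregate each table down to one row per day first and then join the derived tables. A sketch using the post's table names (derived tables need MySQL 4.1+):

```sql
SELECT e.day, e.external, i.deliveries
FROM (SELECT day, SUM(deliveries) AS external
        FROM daily_traffic
       WHERE day BETWEEN '2004-05-01' AND '2004-05-31'
         AND client IN ('yell', 'yell2002') AND type = 'inline'
       GROUP BY day) AS e
LEFT JOIN (SELECT day, SUM(deliveries) AS deliveries
             FROM yell_int_traffic
            WHERE day BETWEEN '2004-05-01' AND '2004-05-31'
            GROUP BY day) AS i
  ON e.day = i.day;
```

Because each side is already one row per day, the LEFT JOIN can no longer multiply the sums.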





Running MySQL and PostgreSQL on the same hardware

2004-05-26 Thread Andrew Braithwaite
Hi All,

Does anyone have any experience of running MySQL and PostgreSQL on the
same hardware?

At the moment we have several reasonably fast servers (dual Xeon GHz,
1GB ram, 15,000rpm scsi disk) running MySQL in a replicated environment
with high volumes of queries (high read:write ratio) and I now have a
need to deploy PostgreSQL but I don't particularly want to buy a stack
of hardware to do this.

I would love to hear any experiences / problems that people have had or
any ideas on the subject at all.

Looking forward to the replies.

Cheers,

Andrew




RE: Running MySQL and PostgreSQL on the same hardware

2004-05-26 Thread Andrew Braithwaite
Thanks Kevin,

I am comfortable with the software installs etc.. I was more concerned
with hardware bottlenecks and OS (linux 2.4) problems etc..

Any pointers would be great..

Cheers,

Andrew 

-Original Message-
From: Kevin Cowley [mailto:[EMAIL PROTECTED] 
Sent: Wednesday 26 May 2004 10:53
To: Andrew Braithwaite; [EMAIL PROTECTED]
Subject: RE: Running MySQL and PostgreSQL on the same hardware

Andrew

I've done it but only in a test environment. I actually had 3 different
versions of Mysql running plus Postgres.

Each of the MySQLs and Postgres were installed to
/opt/database-version and paths set accordingly.

I've had no problems and the databases have been continuously up for
over a month.

Kevin Cowley
R&D
 
Tel: 0118 902 9099 (direct line)
Email: [EMAIL PROTECTED]
Web: http://www.alchemetrics.co.uk

-Original Message-
From: Andrew Braithwaite [mailto:[EMAIL PROTECTED]
Sent: 26 May 2004 10:47
To: [EMAIL PROTECTED]
Subject: Running MySQL and PostgreSQL on the same hardware

Hi All,

Does anyone have any experience of running MySQL and PostgreSQL on the
same hardware?

At the moment we have several reasonably fast servers (dual Xeon GHz,
1GB ram, 15,000rpm scsi disk) running MySQL in a replicated environment
with high volumes of queries (high read:write ratio) and I now have a
need to deploy PostgreSQL but I don't particularly want to buy a stack
of hardware to do this.

I would love to hear any experiences / problems that people have had or
any ideas on the subject at all.

Looking forward to the replies.

Cheers,

Andrew




**
ALCHEMETRICS LIMITED (ALCHEMETRICS)
Mulberry Park, Fishponds Road, Wokingham, Berkshire, RG41 2GX
Tel:  +44 (0) 118 902 9000Fax:  +44 (0) 118 902 9001
This e-mail is confidential and is intended for the use of the addressee
only.
If you are not the intended recipient, you are hereby notified that you
must 
not use, copy, disclose, otherwise disseminate or take any action based
on 
this e-mail or any information herein.
If you receive this transmission in error, please notify the sender
immediately by reply e-mail or by using the contact details above and
then
delete this e-mail.
Please note that e-mail may be susceptible to data corruption,
interception 
and unauthorised amendment.  Alchemetrics does not accept any liability
for 
any such corruption, interception, amendment or the consequences
thereof.

**







RE: slow insert into select statement

2004-05-26 Thread Andrew Braithwaite
Hi,

I would start with finding out if it's the select or the insert that's
taking a long time...

Does the priceLast5 table have heavy indexes to build?

Try running the select separately and see how long it takes...

Andrew


-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED] 
Sent: Wednesday 26 May 2004 14:13
To: 'nyem '; '[EMAIL PROTECTED] '
Subject: RE: slow insert into select statement

If you do the math, a large result set will be created. You could try
rewriting your query or adding more indexes to see if this speeds up the
process.

-Original Message-
From: nyem
To: [EMAIL PROTECTED]
Sent: 5/26/04 2:57 AM
Subject: slow insert into select statement

I have this insert statement that took a long time to execute:

INSERT INTO priceLast5
SELECT
   DISTINCT a.commodity,
   a.PricingDt
FROM rptPricingTEST a
WHERE
   a.PricingDt = (
 SELECT MAX(PricingDt)
 FROM rptPricingTEST b
 WHERE
   b.PricingDt > @date1
   AND b.PricingDt <= @date2
   AND a.commodity = b.commodity
   );

  Query OK, 3010 rows affected (4 min 17.37 sec)

Is there any way I can modify the sql statement to speed this up?

Here's the result of explain (splitted for easy reading):

id select_typetable type  possible_keys key
-- -- - - - 
  1 PRIMARYa index NULL  idxcommodity
  2 DEPENDENT SUBQUERY b ref   idxcommodity  idxcommodity

id key_len ref   rows   Extra
-- --- - -- 
  1  25 NULL  384155 Using where; Using index
  2  22 dbsdvitmp.a.commodity 52 Using where; Using index



And here's the rest of the sql before the above insert:

SELECT @date1:=DATE_SUB( CURDATE(), INTERVAL 1 MONTH);
SELECT @date2:=CURDATE();

CREATE TEMPORARY TABLE priceLast5 (
commodity char(22) NOT NULL,
PricingDt date NOT NULL,
PRIMARY KEY  (commodity),
KEY idxPricingDt (PricingDt)
);

LOCK TABLES rptPricingTEST a READ;


The source table rptPricingTEST consists of commodity id (which is a
concat of several ids), pricing date, and prices (low, hi, avg). I need
to generate a report of price change for all commodity for the date
interval selected by users. Since this will be a web-based report, to
wait for 4 minutes for a response is unacceptable. The actual table has
2 million rows which I guess can take forever to execute.

regards,
nyem
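One common rewrite, for what it's worth, is to materialize the per-commodity maximum once instead of re-running the correlated subquery for every row, then join against it. A sketch with the post's names; maxPricing is a hypothetical helper table:

```sql
-- Compute each commodity's latest pricing date in one pass
CREATE TEMPORARY TABLE maxPricing
SELECT commodity, MAX(PricingDt) AS PricingDt
FROM rptPricingTEST
WHERE PricingDt > @date1 AND PricingDt <= @date2
GROUP BY commodity;

-- The insert then becomes a plain equi-join instead of a
-- dependent subquery evaluated per row
INSERT INTO priceLast5
SELECT DISTINCT a.commodity, a.PricingDt
FROM rptPricingTEST a
JOIN maxPricing m
  ON a.commodity = m.commodity AND a.PricingDt = m.PricingDt;
```

An index on maxPricing (commodity, PricingDt) would help the final join on large tables.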










Title Case

2004-05-05 Thread Andrew Braithwaite
Hi All,

I have a table with upper case text.  I want to use a function in my select
statement that puts this text in title case, so MORE FOO YOU WIBBLE becomes
More Foo You Wibble.

Thanks for any help

Cheers,

Andrew

Sql, query
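MySQL of this vintage has UCASE()/LCASE() but no built-in title-case function, so this usually ends up in the application layer (or a UDF). A minimal, purely illustrative sketch in Python:

```python
def title_case(s):
    # str.title() would render "YOU'LL" as "You'Ll", so capitalize each
    # whitespace-separated word instead.
    return " ".join(word.capitalize() for word in s.split())

print(title_case("MORE FOO YOU WIBBLE"))  # More Foo You Wibble
```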




Title Case Problem

2004-05-05 Thread Andrew Braithwaite
Hi All,

I have a table with upper case text.  I want to use a function in my select
statement that puts this text in title case, so MORE FOO YOU WIBBLE becomes
More Foo You Wibble.

Thanks for any help

Cheers,

Andrew

Sql, query




RE: Batch table structure update tool

2004-03-20 Thread Andrew Braithwaite
Hi,

The only utility I know about that does this kind of thing is mysqldiff
which can be found at:

http://freshmeat.net/projects/mysqldiff/

I haven't used it myself but it comes quite highly rated on freshmeat.net 

A quote from its description:

mysqldiff is a Perl script which compares the data structures (i.e. table
definitions) of two MySQL databases and returns the differences as a
sequence of MySQL commands suitable for piping into mysql which will
transform the structure of the first database to be identical to that of the
second (c.f. diff and patch). Database structures can be compared whether
they are files containing table definitions or existing databases, local or
remote.

Hope it's what you're looking for..

Cheers,

Andrew

-Original Message-
From: Pointer [mailto:[EMAIL PROTECTED] 
Sent: Saturday 20 March 2004 10:28
To: [EMAIL PROTECTED]
Subject: Batch table structure update tool

Hi
I'm looking for a tool (program, library) which, given a file describing the
structure of the tables in a database, could update the structure of those
tables, i.e. create new tables, add fields and change field sizes.
   Such a tool (especially as a DLL library) would be very useful
because it would simplify the process of deploying a new version of an
application. Each version would have a description of its tables and would
update the structure of the client's database in order to fulfil the
requirements of the new version.
I'm looking for a solution which is as non-interactive as possible,
because each activity performed by the user is a risk that he/she would do
something wrong :).

Thank You
John
   






RE: String Concatenation Operator?

2004-03-18 Thread Andrew Braithwaite
> or must you use the CONCAT() function?

Yes

SELECT concat(firstname,' ',lastname) AS fullname FROM customers;

Cheers,

Andrew
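CONCAT_WS() (concat with separator) is a related option; it takes the separator once and skips NULL arguments rather than nulling the whole result:

```sql
SELECT CONCAT_WS(' ', firstname, lastname) AS fullname FROM customers;
```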

-Original Message-
From: Jim McAtee [mailto:[EMAIL PROTECTED] 
Sent: Thursday 18 March 2004 22:30
To: [EMAIL PROTECTED]
Subject: String Concatenation Operator?

Does MySQL have a string contatenation operator, or must you use the
CONCAT() function?

SELECT firstname + ' ' + lastname AS fullname FROM customers






RE: Maintaining fulltext

2004-03-18 Thread Andrew Braithwaite
Not sure if you can...

Maybe add it to the ToDo list at:

http://www.mysql.com/doc/en/Fulltext_TODO.html

Cheers,

Andrew

p.s. also see you at the conference... 

-Original Message-
From: Trevor Price [mailto:[EMAIL PROTECTED] 
Sent: Thursday 18 March 2004 21:10
To: [EMAIL PROTECTED]
Subject: Maintaining fulltext 

Mysqlians,


1.  How can I tell how many words a fulltext index has indexed? (for
purposes of estimating performance)
2.  How can I tell how much index file space a fulltext index is
responsible for? (for regular indices I use (key_length +4)/.067, but my
fulltext indexes are all on TEXT columns.)


Thanks,

Trevor

p.s.  See you at the conference.






RE: Advise on High Availability configuration

2004-02-02 Thread Andrew Braithwaite
Hi,

OK - I'll try to explain in as much detail as I can..

We have redhat linux apache webservers running our apps with fcgi (which
uses persistent DB connections).  We have about 8 of these. 

It's important to understand that our MySQL system is optimised for a
read-heavy / write-light site (about 80% reads in our case).

We also have 2 Master and 4 Slave MySQL DB's.  The slaves all replicate from
master1 - master2 also replicates from master1.

The masters are compaq DL380 with 15,000rpm raid 5 disk.

Our 4 slaves (and 8 webservers) are what we call RAIN (Redundant Array of
Inexpensive Nodes) - cheap boxes with fast disk running RedHat Linux.  If
one breaks, we don't care - we just buy another and slot it in (automated
install scripts really help us here).

In the applications we have a wrapper for connecting to the DB's - basically
as each instance of the app is initialised, it scans the list of slaves,
gets the number of MySQL processes running on each slave, then connects to
the one with the least amount of connections.  Each instance will also
connect to the master (and if that fails, the 2nd master), therefore each
instance of our apps will have a persistent connection to a master and a
slave.
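The selection logic described above can be sketched in a few lines. Everything here is illustrative, not the actual wrapper: probe() stands in for counting MySQL threads on each slave (e.g. via SHOW STATUS LIKE 'Threads_connected'):

```python
def pick_slave(slaves, probe):
    """Return the slave whose probe() value (current connection count)
    is lowest; ties resolve to the first listed slave."""
    return min(slaves, key=probe)

# Usage with a canned probe standing in for a live connection count:
load = {"slave1": 40, "slave2": 12, "slave3": 31}
print(pick_slave(load, load.get))  # slave2
```

In the real system each app instance would run this once at initialisation and keep the chosen connection open.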

We also have a query wrapper we use to run all queries.  This will send
select statements to the slaves and everything else to the master.

Each slave keeps a heartbeat to the master and in the event of a failure,
changes its master to master2.

We have 128MB dedicated to MySQL query-caching and about half our queries
hit the query-cache.
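The 128MB query cache mentioned above corresponds to my.cnf settings along these lines (a sketch; the option names are the standard 4.0 ones):

```ini
[mysqld]
query_cache_type = 1       # cache eligible SELECTs
query_cache_size = 128M    # the 128MB figure above
```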

I think this is a nice medium-sized solution for an extremely available but
non-expensive database system.

We're currently doing more than 5 Mill page views a day (each page view
generates about 5 - 10 queries) using this method.

Hope this helps with your study.

Cheers,

Andrew

-Original Message-
From: A.J.Millan [mailto:[EMAIL PROTECTED] 
Sent: Monday 02 February 2004 08:17
To: [EMAIL PROTECTED]
Subject: Re: Advise on High Availability configuration



 No, when we implemented high-availability MySQL servers we used 
 MySQL's inbuilt replication - this has been running here for years now 
 and we have had constant DB availability during that time, even though 
 individual machines have failed now and again.  We're using 2 masters 
  4 slaves with the logic for sql reads going to the slaves and sql 
 writes going to the master handled in the application layer.  We also 
 have automatic master failover (although I believe MySQL plan to build 
 this into their product
at
 some point in the near future).


At the moment we are studying the implementation of a MySQL-Apache High
Availability system.  Would you be so kind as to explain in a bit more detail
the software/hardware aspects of your application?

I believe that at the moment this is a hot question.  Probably there are
many more people interested in your experience.

Greetings.

A.J.Millan
ZATOR Systems.







RE: Multiple mysql servers with the same datadir

2004-02-02 Thread Andrew Braithwaite
Hi,

 But when running multiple MyISAM (enable-external-locking) database 
 servers with the same NFS datadir, will there be any deadlock problems?

I have no experience in this but it sounds like it may cause problems.

 I wonder if it is possible to use NFS as the storage backend and 
 to provide data consistency needed by database, so that we can 
 increase the number of database servers easily.

If you plan to use a single disk array back-end and multiple mysql servers
connecting to it via NFS and gradually increase the numbers of DB servers,
then this will only be scalable if your DB system is CPU- or RAM-bound.  If
it's disk-bound then you won't achieve scalability this way.

Cheers,

Andrew


-Original Message-
From: ZHANG JIAYING [mailto:[EMAIL PROTECTED] 
Sent: Monday 02 February 2004 20:35
To: [EMAIL PROTECTED]
Subject: RE: Multiple mysql servers with the same datadir


Hi, Andrew. Thank you very much for your explanation. And yes, I actually
meant BDB, not DBD.
 
I just checked BDB document. It seems they maintain their own locks. So my
two cases will lead to data corruption. I wonder if there are any database
management systems implementing locks through filesystem interface. I guess
using MyISAM with the enable-external-locking option is such a case. But when
running multiple MyISAM (enable-external-locking) database servers with the same NFS
datadir, will there be any deadlock problems?
 
As you mentioned, replication and cluster seem to be better choices. But in
replication case, there are more than one datacopy in the system. And all
updates need to be send to the master. For cluster, I guess you mean
distributed database systems. Currently, I think such systems are quite
expensive to get and maintain. So I wonder if it is possible to use NFS as
the storage backend and to provide the data consistency needed by the database, so
that we can increase the number of database servers easily. I do not know if
this is possible or not. I would appreciate any kinds of comments. Thanks! 
 
Jiaying Zhang
 
 






RE: Advise on High Availability configuration

2004-02-02 Thread Andrew Braithwaite
Hi,

 So how does this bit work? If one master falls over and slaves 
 move to master two, how do you rebuild master one without downtime? 
 Don't the slaves try and use Master 1 as soon as it's back 
 online even though its data is out of sync?

If master 1 fails and the slaves move to master 2, then they effectively
swap roles as primary and secondary masters and the directive is issued to
the webservers to connect in the reverse order.  Yes, master 1 will be
offline until it is rebuilt and will join again in the secondary role.  The
master connect wrapper is robust. If a connection fails it will try to
reconnect a few times and run that query again.  All the slaves and masters
are on the same switch so if one fails, generally speaking, they all fail.

 What if one of the slaves loses its heartbeat with master one and 
 falls over to master two when other slaves can still see master one. 
 Don't you then have inconsistencies in your data?

The heartbeat code is pretty robust too. It's similar to the above wrapper
and has the same benefits.

Having said all that, I did mention that this is an inexpensive medium sized
solution to availability and that it is a read-heavy, write-light DB
scenario.  Additionally there is no data of a financial nature (if there
was, we would be using another set-up - commit / rollback for example..). As
such we are tolerant to very small amounts of data loss in the event of a
failover situation.

On this, I refer you to the "how many 9's do you need" discussion, nicely
explained by Jeremy here:

http://jeremy.zawodny.com/blog/archives/000805.html

Cheers,

Andrew

-Original Message-
From: Russell Horn [mailto:[EMAIL PROTECTED] 
Sent: Monday 02 February 2004 16:21
To: [EMAIL PROTECTED]
Subject: RE: Advise on High Availability configuration


Andrew Braithwaite wrote:

 Each slave keeps a heartbeat to the master and in the event of a 
 failure, changes its master to master2.

So how does this bit work? If one master falls over and slaves move to
master two, how do you rebuild master one without downtime? Don't the slaves
try and use Master 1 as soon as it's back online even though its data is out
of sync?

What if one of the slaves loses its heartbeat with master one and falls over
to master two when other slaves can still see master one. Don't you then
have inconsistencies in your data?

Russell.






RE: Advise on High Availability configuration

2004-02-01 Thread Andrew Braithwaite
Hi,

In answer to your questions:

 -   Have any of you seen such a configuration being
  deployed?

No, when we implemented high-availability MySQL servers we used MySQL's
inbuilt replication - this has been running here for years now and we have
had constant DB availability during that time, even though individual
machines have failed now and again.  We're using 2 masters  4 slaves with
the logic for sql reads going to the slaves and sql writes going to the
master handled in the application layer.  We also have automatic master
failover (although I believe MySQL plan to build this into their product at
some point in the near future).

 -   Do you see any big gotchas in this configuration?

Yes.  You are trying to achieve high availability, but with the architecture
you propose you still have a single point of failure (the data store): if
this were to break, the system is down. Another scenario is that if the data
were to become corrupt (which is quite likely if one of the two MySQL
servers were to crash), you would have to take the MySQL servers down whilst
you ran fsck/myisamchk to regain data integrity.  Besides all that, I have
a gut feeling that this is not a good way to achieve high availability.

 -   Is it possible for the Primary MySQL process to
 lock the data store such that the Secondary MySQL
 process cannot access the data store?

Yes, this could be done using either file permissions or MySQL privileges

 -   Is it possible for the 2 MySQL processes update
 the same table simultaneously?

InnoDB has row level locking but MyISAM doesn't

Hope this helps,

Andrew

-Original Message-
From: Gowtham Jayaram [mailto:[EMAIL PROTECTED] 
Sent: Friday 30 January 2004 18:37
To: [EMAIL PROTECTED]
Subject: Advise on High Availability configuration


Hello All;

I am in the process of examining a High Availability
(HA) configuration. The motivation is to not use
database replication (at least at this stage) because
of the need to work on the complete data set at any
given point in time. Here is the configuration choice
being considered

CONFIGURATION:
-   I would like to configure two machines, Primary
and Secondary.  Each machine has our Application and
an instance of MySQL database server running on it.
-   Additionally, I will set up a SCSI controller in
the Primary and Secondary Application machines so that
the actual data store (disk drive) runs on another
physical machine in a disk-array (RAID).  
-   With this setup the MySQL Servers on both the
machines will write/read data to/from this disk array.


OPERATION:
-   The idea is to have only one machine actively use
the data store at any given time.  The other machine
will be in the stand-by mode.
-   To start off, the Primary is Active, performing
database operations on the data store.  The Secondary
is in stand-by mode and does NOT perform any database operations.  
-   If the Primary goes down, the Secondary becomes
Active and starts to perform the database operations.

CAVEATS I AM AWARE OFF:
-   Does not work with InnoDB.  
-   Works with MyISAM, but you need to disable the key buffer,
which leads to a big hit on performance.
 
QUESTIONS:
-   Have any of you seen such a configuration being
deployed?
-   Do you see any big gotchas in this configuration?
-   Is it possible for the Primary MySQL process to
lock the data store such that the Secondary MySQL
process cannot access the data store?
-   Is it possible for the 2 MySQL processes to update
the same table simultaneously?

Looking forward to all your feedback.  Thank you.

Gowtham.







RE: images from MySQL backend used with MS-Access 2000 frontend

2004-02-01 Thread Andrew Braithwaite
I would recommend storing the images on the filesystem and putting the
information about those images (along with the path to each image) in MySQL.
If you plan to have lots of images, implement a sensible, logical directory
structure to keep them in: in my experience Linux ext2/3 is fast at
reading/writing files in that scenario, and can be slow at retrieving files
from a single directory containing a large number of files.
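One common way to build such a "logical directory structure" is to shard by a hash of the filename, so no single directory grows too large. A sketch under stated assumptions: the two-level hex layout and the `/var/images` root are illustrative choices, not a prescription.

```python
import hashlib
import os.path

def image_path(filename, root="/var/images"):
    """Map a filename to a two-level hashed subdirectory so that no
    single directory accumulates a huge number of entries."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], filename)

# Store this path (not the image bytes) in the MySQL row for the image.
print(image_path("photo123.jpg"))
```

With a two-hex-digit prefix at each level you get 256 x 256 buckets, which keeps directory listings small even for millions of images.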

Cheers,

Andrew

-Original Message-
From: Yuri Oleynikov [mailto:[EMAIL PROTECTED] 
Sent: Sunday 01 February 2004 16:33
To: [EMAIL PROTECTED]
Subject: images from MySQL backend used with MS-Access 2000 frontend


Hi everyone,
I am new to MySQL. I am starting a database project for a medium-sized 
medical office. I chose MS-Access (on a 10-terminal pre-existing Win2K 
intranet) for rapid front-end dev and familiarity to the existing users. 
I plan to use MySQL on Linux server for backend for its speed, great 
support and ease of use.
The database will consist of doctor's reports, schematic drawings and 
photos (500K JPGs and 50K PICTs). I am planning to implement that with 
BLOB or, alternatively, with file system while storing links to files. 
Later we may add webaccess through MySQL webserving or by using Access 
webserving options.
Has anybody done anything similar? What is the best way to implement 
image storage (and, in future, movies, maybe)?
Thanks a lot.
yuri.





RE: Multiple values in the host field

2004-02-01 Thread Andrew Braithwaite
Hi,

 In myuser table, I have something like this:
 | Php.me.com | database_name  |

You could use a wildcard like this:

| %.me.com | database_name  |

This would allow the user to connect from any subdomain of the me.com
domain.

However it would mean that other servers (e.g. wibble.me.com ) could connect
too.
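MySQL matches the Host column with LIKE-style wildcards (`%` for any run of characters, `_` for a single character). The wibble.me.com caveat can be checked with a small simulation; this re-implements the matching rule for illustration only and is not MySQL's own code.

```python
import fnmatch

def host_matches(pattern, host):
    """Simulate MySQL's LIKE-style host matching by translating
    % -> * and _ -> ? and using shell-style glob matching."""
    glob = pattern.replace("%", "*").replace("_", "?")
    return fnmatch.fnmatchcase(host.lower(), glob.lower())

print(host_matches("%.me.com", "Php.me.com"))     # True
print(host_matches("%.me.com", "wibble.me.com"))  # True: any subdomain
print(host_matches("%.me.com", "localhost"))      # False
```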

Cheers,

Andrew

-Original Message-
From: Scott Haneda [mailto:[EMAIL PROTECTED] 
Sent: Sunday 01 February 2004 09:54
To: MySql
Subject: Multiple values in the host field


At a bit of a loss here, I hope there are a few savvy phpmyadmin users out
there.

I generally talk to mysql from the command line, however, I now have a
situation where I need to give phpmyadmin access to users in a shared
environment.

Currently, I only have MySql running as a database server, there is some
limited use of php talking to it as localhost.

The main use, there are many other users that talk to the MySql server from
another machine.

Lets say I have this:
Mysql.me.com - MySql only server
Php.me.com - Php only server
(both are separate hostnames and IP's)

In myuser table, I have something like this:
| Php.me.com | database_name  |

There are many entries like that, the database name is all that changes.

This has worked fine for me for ages.  Herein lies the trouble.  If I am not
at Php.me.com, I can not connect. I know this is how it should be, makes
sense.

I want to let users connect from phpmyadmin, this runs on the Mysql.me.com,
I don't want to run phpmyadmin on the Php.me.com server.

Users are therefore not able to connect, they will be rejected since the
Host field is set to Php.me.com.

I guess, first, can I have more than one value in the Host field:
Php.me.com, localhost

I am going to guess that won't work :-(

Does this mean I am forced to add in 2 users for each database, one for
Php.me.com and one for the phpmyadmin localhost connection?

Incidentally, my Host table is empty, have never used this and do not really
know what this is for, perhaps this is the missing link I am looking for,
not entirely sure.

I am pretty against having more than one user record to maintain just to let
various people use phpmyadmin and only see their data, any suggestions are
greatly appreciated.

-- 
-
Scott Haneda            Tel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]       Novato, CA U.S.A.






RE: Query problem

2004-02-01 Thread Andrew Braithwaite
Hi,

You need:

select job,avg(sal) from emp group by 1 order by 2 limit 1;
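The effect of that query (group by job, order by the average ascending, keep the first row) can be verified against the sample data from the question in plain Python; this is a simulation of the SQL, not MySQL itself:

```python
from collections import defaultdict

rows = [("CLERK", 800), ("SALESMAN", 1600), ("SALESMAN", 1250),
        ("MANAGER", 2975), ("SALESMAN", 1250), ("MANAGER", 2850),
        ("MANAGER", 2450), ("ANALYST", 3000), ("PRESIDENT", 5000),
        ("SALESMAN", 1500), ("CLERK", 1100), ("CLERK", 950),
        ("ANALYST", 3000), ("CLERK", 1300)]

# GROUP BY job, computing AVG(sal) per group
salaries = defaultdict(list)
for job, sal in rows:
    salaries[job].append(sal)
averages = {job: sum(s) / len(s) for job, s in salaries.items()}

# ORDER BY avg(sal) ASC, LIMIT 1
job, avg_sal = min(averages.items(), key=lambda item: item[1])
print(job, avg_sal)  # CLERK 1037.5
```

This matches the expected final result (CLERK, 1037.5000) from the original question.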

Cheers,

Andrew

-Original Message-
From: Edouard Lauer [mailto:[EMAIL PROTECTED] 
Sent: Saturday 31 January 2004 19:23
To: [EMAIL PROTECTED]
Subject: Query problem


Hello,

I would like to query for the smallest average salary. I have a table with
employees and their salaries, like this:

+---+--+
| job   | sal  |
+---+--+
| CLERK |  800 |
| SALESMAN  | 1600 |
| SALESMAN  | 1250 |
| MANAGER   | 2975 |
| SALESMAN  | 1250 |
| MANAGER   | 2850 |
| MANAGER   | 2450 |
| ANALYST   | 3000 |
| PRESIDENT | 5000 |
| SALESMAN  | 1500 |
| CLERK | 1100 |
| CLERK |  950 |
| ANALYST   | 3000 |
| CLERK | 1300 |
+---+--+

Now this query returns the average salary per job:

select job,avg(sal) from emp group by job --

+---+---+
| job   | avg(sal)  |
+---+---+
| ANALYST   | 3000.0000 |
| CLERK | 1037.5000 |
| MANAGER   | 2758.3333 |
| PRESIDENT | 5000.0000 |
| SALESMAN  | 1400.0000 |
+---+---+

The final result should be:

+---+---+
| job   | avg(sal)  |
+---+---+
| CLERK | 1037.5000 |
+---+---+

In ORACLE I can do it like this:
select job,avg(sal) from emp group by job having avg(sal)=(select
min(avg(sal)) from emp group by job);

but this doesn't work in MySQL. Does somebody know how it can be done in
MySQL?

Regards,
Edi







RE: Slave crashed: error 'The table 'users' is full' on query..

2004-02-01 Thread Andrew Braithwaite
Hi,

Whilst you may have space on the box, you may have reached a file size limit
on whatever OS you're using (on some Linux versions the max size of a file
is 4GB, and similarly on some Windows versions).

It may also be a mysql limit on data length.  Check the status of your table
like this:

mysql> show table status like 'users'\G
*************************** 1. row ***************************
   Name: users
   Type: MyISAM
 Row_format: Dynamic
   Rows: 42
 Avg_row_length: 95
Data_length: 4140
Max_data_length: 4294967295
   Index_length: 2048
  Data_free: 128
 Auto_increment: NULL
Create_time: 2003-09-02 13:31:53
Update_time: 2004-01-22 14:29:28
 Check_time: NULL
 Create_options:
Comment:
1 row in set (0.01 sec)

Look at the Data_length and Max_data_length and see if you've reached that
limit.

If you don't specify MAX_ROWS in CREATE TABLE, MySQL will only allocate
pointers able to address 4GB of data.
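Acting on this advice is easy to script once the status row is in hand. A sketch that compares Data_length against Max_data_length; the field names match the SHOW TABLE STATUS output shown above, and the 90% warning threshold is an arbitrary choice:

```python
def table_nearly_full(status, threshold=0.9):
    """Warn when a table's data approaches the pointer-addressable
    maximum reported by SHOW TABLE STATUS."""
    return status["Data_length"] >= threshold * status["Max_data_length"]

# Values from the example output above: nowhere near the limit.
print(table_nearly_full({"Data_length": 4140,
                         "Max_data_length": 4294967295}))  # False

# A table bumping against the default 4GB pointer limit.
print(table_nearly_full({"Data_length": 4294000000,
                         "Max_data_length": 4294967295}))  # True
```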

Hope this helps,

Cheers,

Andrew


-Original Message-
From: Mike Mapsnac [mailto:[EMAIL PROTECTED] 
Sent: Saturday 31 January 2004 12:04
To: [EMAIL PROTECTED]
Subject: Slave crashed: error 'The table 'users' is full' on query..


I have more than enough space on the box.

At the beginning I checked the table status and got no error messages. Any ideas 
what can cause such an error?
I use mysql 4.0.13.

The documentation at http://www.mysql.com/doc/en/Full_table.html refers to an
older version of mysql.

Any ideas how to fix the problem?

_
Let the new MSN Premium Internet Software make the most of your high-speed 
experience. http://join.msn.com/?pgmarket=en-uspage=byoa/premST=1






RE: 5.0 replication and stored procedure

2004-02-01 Thread Andrew Braithwaite
Hi,

5.0 is sub-alpha at the moment.  If you think there is a problem, go to
http://bugs.mysql.com/

Cheers,

Andrew

-Original Message-
From: William Au [mailto:[EMAIL PROTECTED] 
Sent: Friday 30 January 2004 22:35
To: [EMAIL PROTECTED]
Subject: 5.0 replication and stored procedure


Does replication in 5.0 support stored procedures?
When I create a procedure on the master, it isn't
replicated to the slaves.  I then manually create
the same procedure on the slaves.  Then when I
call the procedure on the master, all the slaves
crash.  Is this not supported, or am I doing something
wrong?

Bill





RE: MySQL optimisations for search engine

2004-02-01 Thread Andrew Braithwaite
Hi,

Make sure the words.word field is indexed and that the pages.id is an
indexed primary key.

Cheers,

Andrew


-Original Message-
From: Jasper Bryant-Greene [mailto:[EMAIL PROTECTED] 
Sent: Friday 30 January 2004 21:39
To: [EMAIL PROTECTED]
Subject: MySQL optimisations for search engine


Hi

 

I'm running a small search engine that indexes the web, but have been having
trouble with optimising it to handle the load.

There are two tables involved in searching - `pages`, which stores the ID,
URL, title and crawl date, and `words` which has two rows - `word` and
`page` where `word` is a word found on the page and `page` is the ID from
the `pages` table.

 

When we crawl an URL we rip all the words from the page and add them to the
`words` table with the ID of the page.

 

The query we use for searches is:

 

SELECT COUNT(words.word) AS score, words.page AS id, pages.title, pages.url
FROM words,pages WHERE pages.id=words.page AND words.word IN($words) GROUP
BY words.page ORDER BY score DESC LIMIT 10

 

I've put the LIMIT 10 in there because it's been going slow as hell. Not
only that, but it's still going rather slow, since we're getting rather high
load on the search engine at the moment.

 

If anyone could suggest ways to make it run faster that'd be great, bearing
in mind that:

 

a)  I can't change MySQL server parameters since the host won't allow it

b)  I'd rather not start crawling again with a different method - the
words table has over 1,700,000 rows.

 

Thanks

 Jasper Bryant-Greene
 Cabbage Promotions
 [EMAIL PROTECTED]
 http://fatalnetwork.com/
 US: +1 (509) 691 3287
 NZ: +64 (21) 232 3303





RE: Multiple mysql servers with the same datadir

2004-02-01 Thread Andrew Braithwaite
Hi,

I think you're getting mixed up between DBD (data base driver) and BDB
(BerkeleyDB) but I reckon you mean BDB...

I'm not sure if the locking of the page (i.e. the whole table file) is done
at the filesystem level or is managed internally by each mysqld instance.
If it is managed by each mysqld instance then what you suggest would be a
very bad idea and will certainly lead to data corruption.  If the page
locking is managed by the filesystem then I don't see a problem with what
you suggest.  If the latter is the case and you want to use NFS to access
the datadir, make sure you enable the lock daemon for NFS on the remote
server.

In general, BDB is not as well supported in MySQL as the other types and I'm
sure you can achieve what you're trying to do using another method
(replication, clustering for example)

Cheers,

Andrew


-Original Message-
From: ZHANG JIAYING [mailto:[EMAIL PROTECTED] 
Sent: Friday 30 January 2004 20:06
To: [EMAIL PROTECTED]
Subject: Multiple mysql servers with the same datadir


Hi, all. I notice this is not recommended in the manual. But I wonder if
not considering performance, when using dbd as the table type, are there any
problems in these two cases?
1) running multiple mysqld processes on the same machine with the same
datadir.
2) running multiple mysqld on different machines. but they use the same
datadir through NFS.
 
I think there will be problems when using other type of databases. But I
heard dbd uses page level lock. So if just talking about consistency, are
these two cases ok? 
 
Thanks a lot!  






RE: Read Slaves, and load balancing between them...

2004-01-29 Thread Andrew Braithwaite
Hi,

I employ a simple method, I have a 'status' table on the master and have a
cron job that updates this table with the current time (now()) every minute.
I test all the slaves each minute and if the time in the status table gets
too far behind the actual time then it flags a warning to me.
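The check itself is trivial once the heartbeat timestamp has been read back from a slave's copy of the status table. A sketch of the comparison; the five-minute threshold is illustrative (the original scheme just flags once the heartbeat falls "too far" behind):

```python
from datetime import datetime, timedelta

def replication_lag_ok(heartbeat, now=None, max_lag=timedelta(minutes=5)):
    """Compare the master-written heartbeat (as read from a slave's
    copy of the status table) against the current time."""
    now = now or datetime.now()
    return (now - heartbeat) <= max_lag

now = datetime(2004, 1, 29, 12, 0, 0)
print(replication_lag_ok(datetime(2004, 1, 29, 11, 58), now=now))  # True
print(replication_lag_ok(datetime(2004, 1, 29, 11, 30), now=now))  # False
```

Because the heartbeat row travels through normal replication, a lagging or dead slave thread shows up as a stale timestamp without any privileged status queries.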

Cheers,

A


-Original Message-

From: James [mailto:[EMAIL PROTECTED]

 I am interested in setting up 3 read only slaves and one
 write master. I am thinking of implementing this because 
 one server is slower than 3 ;-). But anyway, I have read 
 all the docs and I am fairly sure how I want do this and 
 keep the sanity checking in the loop.  What I am 
 considering doing is checking (in intervals) the binlog 
 positions between the slave and the master. I am worried 
 about this being out of synch, for just about every 3 
 reads, there is a write.  I am looking to see if this is 
 a good idea, and what others comments/suggestions are. 


From: Mike Johnson [mailto:[EMAIL PROTECTED] 
It all depends on how much traffic your db server is getting and how quickly
you want to be notified if replication is behind or a slave thread died.

Your idea of checking the binlog position between the slave(s) and master at
an interval is a good one, though you run the risk of getting it at the
wrong time, such as when a rather large update is being made, and being
notified with a false positive.

I have a very similar setup -- one master, two slaves. What I did was set up
a Perl script to run in the cron every 5 minutes. It logs into each slave
and performs a SHOW SLAVE STATUS. From this it looks at the Slave_IO_Running
and Slave_SQL_Running columns to determine that the slave thread is still
operating. If either is 'no' it pages me with the 'Last_error' column.

This may not be optimal, as you sound as though you want to see if/when
replication is out of sync and not just completely dead.

I'd say give it a shot checking the binlog positions. If you find yourself
getting a lot of false positives, try having the check script sleep for N
seconds and check again to be sure of itself before notifying you, just to
see if it needs to catch up.
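The sleep-and-recheck idea can be sketched like this; the position-reading callables are hypothetical stand-ins for real SHOW MASTER STATUS / SHOW SLAVE STATUS queries:

```python
import time

def slave_behind(master_pos, slave_pos, recheck_delay=0.0):
    """Report a slave as behind only if it trails the master on two
    successive checks, filtering out the false positive of sampling
    in the middle of a large update."""
    if slave_pos() >= master_pos():
        return False
    time.sleep(recheck_delay)  # give the slave a chance to catch up
    return slave_pos() < master_pos()

# Simulated slave that catches up between the two samples: no alert.
positions = iter([100, 200])
print(slave_behind(lambda: 200, lambda: next(positions)))  # False
```

A slave whose SQL thread has actually died will still trail on the second sample and trigger the page.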

HTH!


-- 
Mike Johnson
Web Developer
Smarter Living, Inc.
phone (617) 886-5539





RE: Read Slaves, and load balancing between them...

2004-01-29 Thread Andrew Braithwaite
I believe MySQL 4.1 has support for ssl replication but it's still alpha at
the moment.

If you are referring to connecting to a MySQL server from an application to
query it then the following applies:

---from the manual---
When you connect to a MySQL server, you normally should use a password. The
password is not transmitted in clear text over the connection, however the
encryption algorithm is not very strong, and with some effort a clever
attacker can crack the password if he is able to sniff the traffic between
the client and the server. If the connection between the client and the
server goes through an untrusted network, you should use an SSH tunnel to
encrypt the communication.
-

http://www.mysql.com/doc/en/Security.html

Cheers,

A


-Original Message-
From: Jim Richardson [mailto:[EMAIL PROTECTED] 
Sent: Thursday 29 January 2004 21:03
To: [EMAIL PROTECTED]
Subject: Re: Read Slaves, and load balancing between them...


On Thu, Jan 29, 2004 at 03:40:17PM -, Andrew Braithwaite wrote:
Hi,

I employ a simple method, I have a 'status' table on the master and 
have a cron job that updates this table with the current time (now()) 
every minute. I test all the slaves each minute and if the time in the 
status table gets too far behind the actual time then it flags a 
warning to me.

Cheers,

A

That's a great idea, thanks!

On a related note, I have two servers, one slave, one master, and I want to
secure the datapath between them. What's the best way ? (on Linux, if that
matters) I am considering stunnel, or ssh tunnel, or is there something in
MySQL I can use to connect securely?

-- 
Jim Richardson http://www.eskimo.com/~warlock
We have to go forth and crush every world view that doesn't believe in
tolerance and free speech, - David Brin




RE: 100,000,000 row limit?

2003-12-23 Thread Andrew Braithwaite
I don't believe this.  I'm going to write a script to disprove this theory
right now..

Cheers,

Andrew

-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED] 
Sent: Tuesday 23 December 2003 20:08
To: Andres Montiel; [EMAIL PROTECTED]
Subject: Re: 100,000,000 row limit?


At 0:57 -0500 12/23/03, Andres Montiel wrote:
I was informed that MySQL has a 100,000,000 row limit. Is this true? We 
were planning to use MySQL for an inventory system. However, our 
current data (rows) for 1 year for one area is already 8.8 million. We 
want to place data for 5 years for 7 areas. This would exceed 
100,000,000. Is there a possible work around for this?

Where did you hear this?

-- 
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

Are you MySQL certified?  http://www.mysql.com/certification/






RE: date and not null

2003-12-21 Thread Andrew Braithwaite
Not sure how the first insert worked - couldn't test it, as the CREATE TABLE
syntax is not valid MySQL ('nlandings number' and 'nhours number' are
Oracle-style column definitions) - and not sure how they produced the schema:

| nlandings | int(11) | YES  | | NULL|   |
| nhours| double(8,2) | YES  | | NULL|   |

shown in your 'desc log_book' output.

Cheers,

A

-Original Message-
From: landon kelsey [mailto:[EMAIL PROTECTED] 
Sent: Sunday 21 December 2003 23:12
To: [EMAIL PROTECTED]
Subject: date and not null


why did this date work:

insert into log_book values ('08-12-1973','C150','N5787G',1,1.8);

date looks good in table...has been rearranged to std format

and this didn't

insert into salesreps values (109,'Mary Jones',31,11,'Sales 
Rep','10-12-1989',106,30.00,392725.00);

date is 0000-00-00 in table

create table log_book (fdate date, actype varchar2(16), acid varchar(16), 
nlandings number, nhours number);

create table salesreps
(empl_num int(8) not null,
  name varchar(15) not null,
  age int(8),
  rep_office int(8),
  title varchar(10),
  hire_date date not null,
  manager int(8),
  quota float(10,2),
  sales float(10,2) not null,
  primary key (empl_num ));

must be the not null!!!

mysql> desc log_book;
+---+-+--+-+-+---+
| Field | Type| Null | Key | Default | Extra |
+---+-+--+-+-+---+
| fdate | date| YES  | | NULL|   |
| actype| varchar(16) | YES  | | NULL|   |
| acid  | varchar(16) | YES  | | NULL|   |
| nlandings | int(11) | YES  | | NULL|   |
| nhours| double(8,2) | YES  | | NULL|   |
+---+-+--+-+-+---+
5 rows in set (0.00 sec)

mysql> desc salesreps;
++-+--+-++---+
| Field  | Type| Null | Key | Default| Extra |
++-+--+-++---+
| empl_num   | int(8)  |  | PRI | 0  |   |
| name   | varchar(15) |  | ||   |
| age| int(8)  | YES  | | NULL   |   |
| rep_office | int(8)  | YES  | | NULL   |   |
| title  | varchar(10) | YES  | | NULL   |   |
| hire_date  | date|  | | 0000-00-00 |   |
| manager| int(8)  | YES  | | NULL   |   |
| quota  | float(10,2) | YES  | | NULL   |   |
| sales  | float(10,2) |  | | 0.00   |   |
++-+--+-++---+
9 rows in set (0.00 sec)

_
Have fun customizing MSN Messenger - learn how here!  
http://www.msnmessenger-download.com/tracking/reach_customize






RE: Very Slow GROUP BY Query

2003-12-21 Thread Andrew Braithwaite
Well, without investigating it too deeply, if you have:

SELECT Realm, COUNT(*) AS CallCount, SUM(AcctSessionTime) AS RealmTime FROM
ServiceRADIUSAccounting WHERE AcctStartTime  '2003-12-12 16:00:00' AND
AcctStopTime  '2003-12-12 15:00:00' AND (Realm = 'bwsys.net') GROUP BY
Realm

If you are using AND (Realm = 'bwsys.net') then you don't need the GROUP
BY Realm clause.

Another tip if you want to speed it up is to make sure you have an index on
the Realm column.  You could do that like this:

mysql> alter table ServiceRADIUSAccounting add index idx_realm(Realm);

That should optimise the where clause quite a bit.

Cheers,

Andrew

-Original Message-
From: Michael Shuler [mailto:[EMAIL PROTECTED] 
Sent: Sunday 21 December 2003 23:28
To: [EMAIL PROTECTED]
Subject: RE: Very Slow GROUP BY Query


A little extra info...  MySQL's process list says:

Command: Query    Time: 67    State: Sending data
SQL-Query (which looks abbreviated): SELECT Realm, COUNT( * ) AS CallCount,
SUM( AcctSessionTime ) AS RealmTime FROM ServiceRADIUSAccounting ...

Thinking this could be a network problem, I ran it locally on the machine and
got the same result, so it's definitely something internal.  Also I found
that if I vary the queries around more than I had been, it consistently takes
60+ seconds.

Other possibly useful info:  RedHat 7.3 (fully up2date) and using the MySQL
RPM's from MySQL's website.  Perhaps I would try compiling from source?

Mike



-Original Message-
From: Michael Shuler [mailto:[EMAIL PROTECTED] 
Sent: Sunday, December 21, 2003 4:41 PM
To: [EMAIL PROTECTED]
Subject: Very Slow GROUP BY Query


I have been trying to understand why this query is taking over 60 seconds to
complete SOMETIMES... it's not that complicated, and the server (4.0.17) is
under very little load.  Any help would be appreciated.

They Query:

SELECT Realm, COUNT(*) AS CallCount, SUM(AcctSessionTime) AS RealmTime FROM
ServiceRADIUSAccounting WHERE AcctStartTime  '2003-12-12 16:00:00' AND
AcctStopTime  '2003-12-12 15:00:00' AND (Realm = 'bwsys.net') GROUP BY
Realm

The Explain:
table: ServiceRADIUSAccounting
type: ref
possible_keys: AcctStopTime,Realm,RealmAndStart,StartStopRealm
key: RealmAndStart
key_len: 65
ref: const
rows: 74391
Extra: Using where

The following table has 1.4 million records in it (which *should* not be a
problem for MySQL):

CREATE TABLE `ServiceRADIUSAccounting` (
  `RadAcctId` int(16) NOT NULL auto_increment,
  `AcctSessionId` char(32) NOT NULL default '',
  `AcctUniqueId` char(32) NOT NULL default '',
  `UserName` char(64) NOT NULL default '',
  `Realm` char(64) default NULL,
  `NASIPAddress` char(15) NOT NULL default '',
  `AcctStartTime` datetime NOT NULL default '0000-00-00 00:00:00',
  `AcctStopTime` datetime NOT NULL default '0000-00-00 00:00:00',
  `AcctSessionTime` int(12) NOT NULL default '0',
  `XmitSpeed` int(8) NOT NULL default '0',
  `RecvSpeed` int(8) NOT NULL default '0',
  `AcctInputOctets` int(12) default NULL,
  `AcctOutputOctets` int(12) default NULL,
  `CalledStationId` char(11) NOT NULL default '0',
  `CallingStationId` char(11) NOT NULL default '0',
  `AcctTerminateCause` char(8) NOT NULL default '',
  `FramedIPAddress` char(15) NOT NULL default '',
  PRIMARY KEY  (`RadAcctId`),
  KEY `UserName` (`UserName`),
  KEY `FramedIPAddress` (`FramedIPAddress`),
  KEY `AcctSessionId` (`AcctSessionId`),
  KEY `AcctUniqueId` (`AcctUniqueId`),
  KEY `AcctStopTime` (`AcctStopTime`),
  KEY `NASIPAddress` (`NASIPAddress`),
  KEY `Realm` (`Realm`),
  KEY `RealmAndStart` (`Realm`,`AcctStartTime`,`AcctStopTime`),
  KEY `StartStopRealm` (`AcctStartTime`,`AcctStopTime`,`Realm`)
) TYPE=InnoDB AUTO_INCREMENT=5397482 ;


Anyway the output (when it finally shows up) looks like this:
Realm      CallCount  RealmTime
bwsys.net  406        2093284

The interesting thing I have noticed is that performance varies depending on
the dates I use (but still only reporting over a 1hr duration... the call
count stays about the same, so the SUM function is still being used about the
same amount).  This would lead me to believe that MySQL is having a hard
time retrieving the 406 records from the table... this DB server isn't really
that busy until I run one of these queries.  For example a query on
12/12/2003 seems to be 10X slower than a query for the same time on
12/1/2003.  It seems that the older the query (time-wise) the faster it
goes... very strange.  And now when I run it, for some reason it's going fast
again. I am assuming that's because of disk caching or something along those
lines... it can't seem to make up its mind.  All the server is doing is
taking in RADIUS accounting records and authenticating users... CPU
utilization is around 1% per CPU and the machine has 1GB RAM.  Here's my
my.cnf if that helps too:


[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-bin
server-id   = 1

skip-locking
set-variable = max_connections=200
set-variable = 

A series of essays on fulltext searching.

2003-12-19 Thread Andrew Braithwaite
Hi,

This isn't MySQL specific, but it's very interesting and I thought people
may be interested.

http://www.tbray.org/ongoing/When/200x/2003/07/30/OnSearchTOC

Cheers,

Andrew





RE: Avarage monthly import

2003-12-08 Thread Andrew Braithwaite
Are you using MySQL?

OK, if you are then first simplify your query:

select date_format(dt_imp,'%Y/%m') as date,
SUM(imp)
from sp
group by 1
order by 1

Then add the AVG column, which will work fine with the GROUP BY:

select date_format(dt_imp,'%Y/%m') as date,
SUM(imp),
AVG(imp)
from sp
group by 1
order by 1

Is this what you were after? (if not, let me know and I'll try again..)

Cheers,

Andrew
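To see the difference between AVG() over rows and the average of the monthly
totals, here's a quick sketch (sqlite3 stands in for MySQL, with
strftime('%Y/%m', ...) playing the role of date_format(); the derived-table
form needs MySQL 4.1+ — on older servers you'd stage the sums in a temporary
table):

```python
import sqlite3

# sqlite3 stands in for MySQL here; strftime('%Y/%m', dt_imp) plays the
# role of date_format(dt_imp, '%Y/%m').
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sp (dt_imp TEXT, imp REAL)")
con.executemany("INSERT INTO sp VALUES (?, ?)",
                [("2002-02-10", 100.00), ("2002-02-20", 138.30),
                 ("2002-03-05", 1385.95)])

# per-month total plus AVG(imp): the latter is the average *row* value
# within each month, not the average monthly total
monthly = con.execute("""
    SELECT strftime('%Y/%m', dt_imp) AS month, SUM(imp), AVG(imp)
    FROM sp GROUP BY month ORDER BY month
""").fetchall()

# the average monthly total needs a derived table
avg_monthly = con.execute("""
    SELECT AVG(total) FROM
        (SELECT SUM(imp) AS total FROM sp
         GROUP BY strftime('%Y/%m', dt_imp))
""").fetchone()[0]

for month, total, row_avg in monthly:
    print(month, round(total, 2), round(row_avg, 2))
print(round(avg_monthly, 3))
```

With the sample rows above, February sums to 238.30 but its AVG(imp) is
119.15 — so if the goal is the average *monthly import*, the derived-table
query is the one to use.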

-Original Message-
From: getting_out [mailto:[EMAIL PROTECTED] 
Sent: Monday 08 December 2003 17:21
To: MySQL ML
Subject: Avarage monthly import


Hello people.
I have a table structured like this

+--+---+
| Field| Type  |
+--+---+
| dt_imp   | date  |
| imp  | decimal(5,2)  |
+--+---+

If I want to get the total per month I can do a query like this

select date_format(a.dt_imp,'%Y/%m') date,
SUM(a.imp)
from sp a
group by date_format(a.dt_imp, '%Y/%m')
order by 1

with a result like this

+-++
| ... |... |
| 2002/02 | 238.30 |
| 2002/03 |1385.95 |
| 2002/04 | 475.30 |
| 2002/05 | 171.10 |
| ... |... |
+-++

Now I would like to get the average monthly import, so I thought I could use
the AVG() function.
Unfortunately I didn't understand how to use it in my case.
I've tried something like

select avg(subqry1) from sp

or

select * from subqry1

where subqry1 is the query written above, but I didn't succeed with this.
What can I do to solve my problem?

thanks everybody.

G.






RE: Query to emulate what mysqldump does

2003-12-03 Thread Andrew Braithwaite
You could try SELECT ... INTO {OUTFILE | DUMPFILE} 'file_name' FROM tablename
WHERE blah=blah...

I think you may be able to do SELECT ... INTO LOCAL OUTFILE FROM blah,

which would put the file on the same server as the MySQL client is running
on...

Cheers,

Andrew
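One caveat worth flagging: plain INTO OUTFILE writes the file on the *server*
host, not where the client runs, so for this use case the rows have to be
pulled across by the client anyway. A small sketch of doing that client-side,
mimicking INTO OUTFILE's default format of tab-separated columns,
newline-terminated rows, and NULL written as \N (sqlite3 stands in for MySQL
here; with a Perl DBI or Python MySQLdb cursor the loop is the same):

```python
import os
import sqlite3
import tempfile

# sqlite3 stands in for MySQL; the point is the client-side file format,
# which mimics the server-side default of SELECT ... INTO OUTFILE.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, "alpha"), (2, None)])

def dump_tab_delimited(cursor, path):
    """Write a result set as tab-separated, newline-terminated rows,
    with NULL rendered as \\N, like INTO OUTFILE's defaults."""
    with open(path, "w") as f:
        for row in cursor:
            f.write("\t".join(r"\N" if v is None else str(v) for v in row))
            f.write("\n")

path = os.path.join(tempfile.mkdtemp(), "t.txt")
dump_tab_delimited(con.execute("SELECT id, name FROM t ORDER BY id"), path)
print(open(path).read())
```

A file written this way can be loaded back with LOAD DATA INFILE using the
default terminators, which is handy for the "revert them to the last good
backup" scenario.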

-Original Message-
From: Matt Babineau [mailto:[EMAIL PROTECTED] 
Sent: Wednesday 03 December 2003 17:37
To: [EMAIL PROTECTED]
Subject: RE: Query to emulate what mysqldump does


On Wed, 2003-12-03 at 15:22, Jay Blanchard wrote:
 [snip]
 I thought about that Jay, but the mysql server is not on the webserver 
 machine. Any other suggestions? [/snip]
 
 phpmyadmin will allow you to connect to the remote MySQL server and do 
 dumps

What if I don't have phpmyadmin available? :)

What I am trying to do is set up a simple script to pull down essentially a
backup of their database and write it to a file on my development machine, so
that when they mess up their data (...and I said WHEN) I can be a hero and
revert them to the last good backup from before they "didn't touch a thing."

Thx-
M





RE: mysql disaster recovery

2003-12-03 Thread Andrew Braithwaite
I have had some nasty NFS experiences (especially with the server from which
you're mounting the data going down).  In my experience (and I'm echoing
previous responses now) replication is better.

Cheers,

Andrew

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday 03 December 2003 22:47
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: RE: mysql disaster recovery


This information may not be current, but I seem to remember hearing some 
really nasty stories about people putting MySQL data dirs on NFS 
exports.  I would research the appropriate documentation before 
attempting such a configuration.

-Original Message-
From: mysql-digest-help [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 02, 2003 5:10 PM
To: halla3; mysql
Cc: John.Griffin
Subject: RE: mysql disaster recovery


Hi Andrew,

I am not a Guru. I would suggest that you look at MySQL's excellent 
replication facility rather than NFS mount a drive. Having your data on 
an NFS mounted drive will significantly degrade the performance of your 
database. Replication will not.


John Griffin

-Original Message-
From: Andrew Hall [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 02, 2003 4:51 PM
To: [EMAIL PROTECTED]
Subject: mysql disaster recovery


Greetings Gurus,

I have a mysql server that I need to create a disaster recovery system for.
What I am planning on doing is putting the data dir on a NFS mounted
directory so that I can start mysql on either of two servers in case one
dies.  The inbound connections would be load balanced in a fail over
scenario, so the IP that clients will connect to will be on the load
balancer.

I'm wondering if there is anything already developed that would test mysql
on the primary server, and if its not functioning, kill any remaining mysql
processes if necessary, and start it on the secondary.  
This logic seems to be the biggest problem.

Any suggestions, or other methodologies to implement this would be welcome. 

Thank you for your time in advance,

Andrew
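The check-and-promote logic described above can be sketched as a small loop.
Everything below is hypothetical glue: a real check_primary would be
something like running mysqladmin ping against the primary, and promote
would kill any leftover mysqld processes there and start mysqld on the
secondary.

```python
import time

def failover_monitor(check_primary, promote, retries=3, interval=0.0):
    """Return 'primary' as soon as the primary answers a health check;
    after `retries` consecutive failures, call promote() once and
    return 'secondary'."""
    failures = 0
    while True:
        if check_primary():
            return "primary"
        failures += 1
        if failures >= retries:
            promote()
            return "secondary"
        time.sleep(interval)

# exercising the decision logic with fake checks
events = []
answers = iter([False, False, False])        # primary never answers
state = failover_monitor(lambda: next(answers),
                         lambda: events.append("promoted"))
print(state, events)  # secondary ['promoted']
```

Requiring several consecutive failures before promoting is what guards
against the "flap" problem mentioned elsewhere in this thread: a single
missed ping shouldn't trigger a failover.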





RE: starting another server

2003-11-06 Thread Andrew Braithwaite
 Would I need to create different data directories?

Yes.

See http://www.mysql.com/doc/en/Multiple_servers.html

Cheers,

Andrew

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Thursday 06 November 2003 11:38
To: [EMAIL PROTECTED]
Subject: starting another server


Hello,

I have two (production) databases running under the same server process. I
would like to give each database its own server process. There are several
reasons for this; one of them is that I need to be able to stop/start the
databases separately. What would be the easiest approach? Recompilation is
not an option. Would I need to create different data directories? The version
is 3.23.40.

Regards, Oscar

===

The information contained in this message may be confidential and is
intended to be only for the addressee. Should you receive this message
unintentionally, please do not use the contents herein and notify the sender
immediately by return e-mail. Although Orange has taken steps to ensure that
this email and attachments are free from any virus, you do need to verify
the possibility of their existence as Orange can take no responsibility for
any computer virus which might be transferred by way of this email.
===






Online Tutorials for beginners

2003-10-23 Thread Andrew Braithwaite
Hi,

Does anyone know of any good MySQL tutorials online that would suit someone
who has a computer science degree but knows nothing about MySQL?

Pointers will be most welcome.

Cheers,

Andrew

Sql, query




Auto generate MySQL schema based on a text file?

2003-10-14 Thread Andrew Braithwaite
Hi,

Does anyone know of a perl module or other code that can look at a text file
(CSV, tab-delim etc..) of data and determine a MySQL table definition from
it?

The data may or may not have a set of column headers as the first line.

I would appreciate it greatly if anyone could give me any pointers to
existing stuff as I have to implement this function (and I don't
particularly like spending my time painfully churning out regexps if it's
not absolutely necessary!! :)

Cheers,

Andrew

SQL, Query




RE: Auto generate MySQL schema based on a text file?

2003-10-14 Thread Andrew Braithwaite
Thanks, that was extremely useful, especially the date format detection.
However, as it's usually quite scientific data, I need to detect FLOATs,
DOUBLEs and DECIMALs, and the structure is not conducive to being integrated
into my current framework.

Is it OK if I pilfer parts of the code and extend it for use in my app?

Would be much appreciated...

Cheers,

Andrew
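For reference, here's the sort of FLOAT/DOUBLE/date detection I have in mind,
as a rough sketch (hypothetical Python code for illustration, not
guess_table.pl itself; DECIMAL(p,s) inference could be added by counting
digits on either side of the decimal point):

```python
import re

def guess_type(values):
    """Guess a MySQL column type from sample string values: INT if all
    values are integers, DOUBLE if all are numeric, DATE for ISO dates,
    else VARCHAR sized to the longest value.  Empty strings are ignored."""
    values = [v for v in values if v != ""]
    if not values:
        return "VARCHAR(1)"
    if all(re.fullmatch(r"-?\d+", v) for v in values):
        return "INT"
    if all(re.fullmatch(r"-?\d+(\.\d+)?([eE][-+]?\d+)?", v) for v in values):
        return "DOUBLE"
    if all(re.fullmatch(r"\d{4}-\d{2}-\d{2}", v) for v in values):
        return "DATE"
    return "VARCHAR(%d)" % max(len(v) for v in values)

# a couple of sample rows, transposed into columns
rows = [["1", "3.14", "2003-10-14", "widget"],
        ["2", "2.5e3", "2003-10-15", "sprocket"]]
cols = list(zip(*rows))
print([guess_type(c) for c in cols])  # ['INT', 'DOUBLE', 'DATE', 'VARCHAR(8)']
```

From there, emitting a CREATE TABLE statement is just joining column names
(from the header line, or c1, c2, ... as guess_table.pl does) with the
guessed types.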

-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED] 
Sent: Tuesday 14 October 2003 19:19
To: Andrew Braithwaite; '[EMAIL PROTECTED]'
Subject: Re: Auto generate MySQL schema based on a text file?


At 19:05 +0100 10/14/03, Andrew Braithwaite wrote:
Hi,

Does anyone know of a perl module or other code that can look at a text 
file (CSV, tab-delim etc..) of data and determine a MySQL table 
definition from it?

The data may or may not have a set of column headers as the first line.

You can use the guess_table.pl utility that's part of the software
distribution that accompanies MySQL Cookbook.  The distribution is available
at:

http://www.kitebird.com/mysql-cookbook/

Follow the Downloads link, grab the recipes distribution, unpack it, and
look in the transfer directory.

It requires tab-delimited data. For other formats, you may be able to use
the cvt_file.pl util that's in the same directory to convert them to
tab-delimited.

% guess_table.pl --help
Usage: guess_table.pl [options] [data_file]

Options:
--help
    Print this message
--labels, -l
    Interpret first input line as row of table column labels
    (default = c1, c2, ...)
--lower, --upper
    Force column labels to be in lowercase or uppercase
--quote-names
    Quote table and column names with `` characters (in case they are
    reserved words)
--report, -r
    Report mode; print findings rather than generating a CREATE
    TABLE statement
--table=tbl_name, -t tbl_name
    Specify table name (default = t)




I would appreciate it greatly if anyone could give me any pointers to 
existing stuff as I have to implement this function (and I don't 
particularly like spending my time painfully churning out regexps if 
it's not absolutely necessary!! :)

Cheers,

Andrew

SQL, Query


-- 
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

Are you MySQL certified?  http://www.mysql.com/certification/




RE: RAID, mirror OR replication?

2003-10-07 Thread Andrew Braithwaite
Hi,

Having implemented all the solutions you suggest, I would need more
information to answer this problem.

1. What is the acceptable uptime of the system?  95%, 99%, 99.9%, 99.99% ?

2. In the event of a failure, what is the acceptable recovery time?  None,
20 mins, 1 hr, 5 hrs, 1 day ?

3. What hardware is running the DB now?

4. How many queries per second is the system running?  Is it read heavy or
write heavy?  (and what about the future)

5. What is the hardware budget?  Just the existing hardware, $1000, $5000
etc..

6. How much time can you afford to spend on it?

With this info I could help to suggest a solution... but without it, you may
receive ideas for solutions that are overkill or underkill for your needs.

Hope this helps,

Andrew

-Original Message-
From: Richard Reina [mailto:[EMAIL PROTECTED] 
Sent: Monday 06 October 2003 20:36
To: [EMAIL PROTECTED]
Subject: RAID, mirror OR replication?


I want to protect myself against future potential hard drive failures on my
database server running version 3.23.49a.  Should I try to set up RAID, a
mirror, or would the best solution be to set up MySQL replication?  Any
suggestions would be greatly appreciated.

Richard





RE: [Fwd: MySQL w/dual-master replication?]

2003-10-07 Thread Andrew Braithwaite
Is the system read-heavy or write-heavy?

Cheers,

Andrew

-Original Message-
From: Don MacAskill [mailto:[EMAIL PROTECTED] 
Sent: Monday 06 October 2003 20:47
To: [EMAIL PROTECTED]
Subject: [Fwd: MySQL w/dual-master replication?]



Hey all,

I sent this a few days ago, but it may have gotten lost in the weekend 
for many of you.  Don't worry, I won't keep re-posting it.  :)

I'm hoping someone out there is doing some sort of high-availability 
replication and can give me a few pointers.

Thanks in advance!

Don


 Original Message 
Subject: MySQL w/dual-master replication?
Date: Sat, 04 Oct 2003 11:07:43 -0700
From: Don MacAskill [EMAIL PROTECTED]
To: [EMAIL PROTECTED]


Hi there,

I realize it's not supported, but nonetheless, I need to investigate how
possible it is.  The primary goal is high-availability, not performance
scaling.

It sounds like if I'm careful, it might work out ok.  Careful means:

- No auto_increment columns... handle unique key assignment in my app

- Update/insert/delete on primary master only except in case of failure

- Prevent possible 'flap' by automatically using the secondary master
exclusively after a primary master failure.  Bring the primary master up
manually, and manually tell the app when to start using it again once it has
been allowed to catch back up.

Are there any other gotchas I need to worry about?  Is anyone actually doing
this in a production environment?  I'd love to hear from you.


Another option is to use master/slave, and have a monitoring app change the
slave's status to master after the master has died.  There's info about this
in the FAQ (http://www.mysql.com/doc/en/Replication_FAQ.html), but I'm
afraid the documentation on the sequence of events is pretty vague.  Does
anyone have any insight as to exactly how it works?

In particular, one part of the doc says to use 'STOP SLAVE; RESET MASTER;
CHANGE MASTER TO' and another part of the doc says 'STOP SLAVE; CHANGE
MASTER TO'...  which is appropriate?  Certainly, I understand why 'STOP
SLAVE', but why 'RESET MASTER'?  Would the sequence of events differ if
we're just dealing with 1 master/1 slave instead of 1 master/multiple
slaves?

Once the old master joins back up, I can let it be a slave for awhile to
catch back up, then swap back, correct?

Thanks in advance!

Don MacAskill
http://onethumb.smugmug.com/







