BUG: MySQL 5.1.19 with UNCOMPRESS on *VALID* data fails when generated by 3rd party zlib.

2007-06-12 Thread Kevin Burton

OK I think I've found a bug with MySQL's compression support. :-/

I'm using two Java zlib implementations.  One is jzlib 1.0.7 and the other
is java.util.zip.DeflaterOutputStream.  Both of these are referenced by the
zlib implementation as being compatible.

I can compress/uncompress locally WITHOUT a problem.

When I store the data in the DB, the value is stored correctly in a blob,
and I can compare the MD5 hash of the stored value with that of my local
byte array; the hash values are identical, which for all practical
purposes means they're the same.

The only problem is that UNCOMPRESS won't work...

It returns null and I get:


mysql> SHOW WARNINGS;
+-------+------+---------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message                                                                                                             |
+-------+------+---------------------------------------------------------------------------------------------------------------------+
| Error | 1256 | Uncompressed data size too large; the maximum size is 1047552 (probably, length of uncompressed data was corrupted) |
+-------+------+---------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Sure enough:

mysql> SELECT UNCOMPRESSED_LENGTH(BODY) FROM FOO;
+---------------------------+
| UNCOMPRESSED_LENGTH(BODY) |
+---------------------------+
|                 147577464 |
+---------------------------+
1 row in set (0.00 sec)


I've tested this on 5.1.19 and 4.1.21 with the same symptoms.
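For what it's worth, the numbers above are consistent with a header mismatch: MySQL's COMPRESS() prepends the uncompressed length as a 4-byte little-endian prefix in front of the zlib stream, and UNCOMPRESS() reads that prefix back. A bare zlib stream - which is what jzlib and DeflaterOutputStream produce - has no such prefix, so the first bytes of the deflate data get read as a bogus, usually huge, length. A quick sketch (Python, purely illustrative):

```python
import struct
import zlib

def mysql_compress(data: bytes) -> bytes:
    # Mimic MySQL's COMPRESS(): a 4-byte little-endian uncompressed
    # length, followed by an ordinary zlib stream.
    if not data:
        return b""
    return struct.pack("<I", len(data)) + zlib.compress(data)

def mysql_uncompress(blob: bytes) -> bytes:
    # Mimic UNCOMPRESS(): read the length header, then inflate the rest.
    (length,) = struct.unpack("<I", blob[:4])
    out = zlib.decompress(blob[4:])
    assert len(out) == length
    return out

payload = b"hello world " * 10
assert mysql_uncompress(mysql_compress(payload)) == payload

# A bare zlib stream has no such header, so UNCOMPRESS() reads the first
# 4 bytes of the deflate data as a bogus "uncompressed length" -- which
# is how you end up with values like 147577464 and error 1256.
bare = zlib.compress(payload)
(bogus_length,) = struct.unpack("<I", bare[:4])
assert bogus_length != len(payload)
```

If that is the cause, prepending the 4-byte length on the client side before storing the blob should make UNCOMPRESS() happy.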

Does anyone have any advice here?  Did you make any changes to the zlib
implementation you're using?

I'm willing to file this as a bug if necessary.

Kevin

--
Founder/CEO Tailrank.com
Location: San Francisco, CA
AIM/YIM: sfburtonator
Skype: burtonator
Blog: feedblog.org
Cell: 415-637-8078


Re: maximum number of records in a table

2007-06-12 Thread kalin mintchev

hi david..  thanks...

i've done this many times, yes - either through php, perl, python or the
mysql command-line client. but my question here is not about doing the
inserts or insert times, it's about hosting the table and query times. i
currently have a working table for the same purpose with about 1.5 million
records in it, and the thing runs smoothly on a machine that is 4 years
old with 1 GB of ram and a 2.8 GHz processor. the thing is that now i'm
talking about roughly 100 times that. i'm not worried about the insert
times - that happens only once, and for a million entries, depending on
what technique is used, it takes no longer than a few minutes.
what i was asking, basically, was for somebody to share experience with
running a server with that amount of records in one table.

currently the table i have has a size of 65 MB, which times 100 is about
6600 MB, or 6.6 GB. which means that i have to have about 8 GB of ram to
successfully use a table like that. either that, or cluster 2 machines
with 4 GB each and split the table. does this sound reasonable? is my
logic flawed somehow?

i'll appreciate any comments on this subject...  thanks...
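(spelling out the arithmetic above as a quick sketch - the numbers are the ones from this message:)

```python
# Back-of-envelope sizing with the numbers from this thread:
# the current table is ~65 MB, and the new one is ~100x the rows.
size_now_mb = 65
scale = 100
est_total_mb = size_now_mb * scale
print(f"estimated table size: {est_total_mb / 1024:.1f} GB")  # ~6.3 GB

# Note: for a MyISAM table only the index blocks (.MYI) compete for
# MySQL's key buffer; the data file (.MYD) is cached by the OS page
# cache, so the whole table does not have to fit in RAM -- just the
# hot index pages.
```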



 On 6/11/07, kalin mintchev [EMAIL PROTECTED] wrote:

 hi all...

 from http://dev.mysql.com/doc/refman/5.0/en/features.html:

 Handles large databases. We use MySQL Server with databases that
 contain
 50 million records. We also know of users who use MySQL Server with
 60,000
 tables and about 5,000,000,000 rows.

 that's cool but i assume this is distributed over a few machines...

 we have a new client that needs a table with 99 000 000 rows, 2 -3
 columns.
 i was just wondering if i have a two dual core 2 processors in a machine
 with 4 gigs of ram - is that enough to host and serve queries from a
 table
 of this size?
 a few tables on the same machine?
 more than one machine?
 what are the query times like?

 can somebody please share some/any experience s/he has/had with managing
 databases/tables with that amount of records. i'd really appreciate
 it...


 99 million isn't that large of a number.

 If you key the database properly, search times should be very modest.  I
 can't speak for insert times, though, especially when keys are involved.

 This kind of thing is easy enough to do in your favorite scripting
 language.  I would just create a table with a few keys and just for($i=0;
 $i < 99000000; $i++) it with random numbers.

 If you have PHP on your system, here is some PHP code (runnable from the
 command line) that you should be able to hack down.  It should answer your
 immediate questions about which PHP statements to use (if you've never
 done
 this from PHP before):

 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup

 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup

 Near the end of it, especially if the software writes output, you should
 get
 an intuitive feel for how long each INSERT is taking.

 You can even do test queries using the barebones MySQL client ... you
 should
 see interactively how long a query takes.

 I would ALMOST do this for you, but it is just beyond the threshold of
 what
 I'd do because I'm bored and watching TV.  I'm just a little curious
 myself.  I've never messed with a table above about 10,000 rows or so.

 Dave
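The for-loop idea above, sketched outside PHP (the table and column names here are made up for illustration); the one real trick is batching many rows into each INSERT statement:

```python
import random
import string

def insert_batches(total_rows, batch=1000):
    """Yield multi-row INSERT statements for a toy two-column table.
    One statement per `batch` rows -- multi-row INSERTs are what make
    bulk loads fast compared to one INSERT per row."""
    for start in range(0, total_rows, batch):
        n = min(batch, total_rows - start)
        values = ", ".join(
            "(%d, '%s')" % (
                random.randint(0, 10**9),
                "".join(random.choice(string.ascii_lowercase) for _ in range(8)),
            )
            for _ in range(n)
        )
        yield "INSERT INTO test_big (k, v) VALUES %s;" % values

# 2500 rows in batches of 1000 -> 3 statements (1000 + 1000 + 500 rows)
stmts = list(insert_batches(2500, batch=1000))
```

Piping statements like these into the mysql client gives the same intuitive feel for insert speed as the PHP scripts linked above.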




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: maximum number of records in a table

2007-06-12 Thread Olaf Stein
I guess a lot of that depends what an acceptable query execution time for
you is.
Also, what else does the machine do, are there other databases or tables
that are queried at the same time, do you have to join other tables in for
your queries, etc?

Olaf


On 6/12/07 3:24 AM, kalin mintchev [EMAIL PROTECTED] wrote:

 
 hi david..  thanks...
 
 i've done this many times and yes either trough php, perl, python or on
 the mysql cl client. but my question here is not about doing it and insert
 times it's more about hosting it and query times. i currently have a
 working table for the same purpose with about 1.5 million records in it.
 and the thing runs smoot on a machine that is 4 years old with 1 gig of
 ram and 2.8 ghz ghz processor. the thing is that now i'm talking about
 this x 100 times. more or less. i'm not worried about the insert times -
 this happens only ones and for a million entries, depending on what
 technic is used, it takes no longer than a few minutes.
 what i was asking basically was somebody to share experience with running
 a server with that amount of records in one table.
 
 currently the table i have has a size of 65 mgb which by 100 is about 6600
 mgb or 6.6 gigs. which means that i have to have about 8 gigs of ram to
 successfully use a table like that. either that or cluster 2 machines with
 4 gigs each and split the table. does this sound reasonable? is my logic
 flawed somehow?
 
 i'll appreciate any comments on this subject    thanks...
 
 
 
 On 6/11/07, kalin mintchev [EMAIL PROTECTED] wrote:
 
 hi all...
 
 from http://dev.mysql.com/doc/refman/5.0/en/features.html:
 
 Handles large databases. We use MySQL Server with databases that
 contain
 50 million records. We also know of users who use MySQL Server with
 60,000
 tables and about 5,000,000,000 rows.
 
 that's cool but i assume this is distributed over a few machines...
 
 we have a new client that needs a table with 99 000 000 rows, 2 -3
 columns.
 i was just wondering if i have a two dual core 2 processors in a machine
 with 4 gigs of ram - is that enough to host and serve queries from a
 table
 of this size?
 a few tables on the same machine?
 more than one machine?
 what are the query times like?
 
 can somebody please share some/any experience s/he has/had with managing
 databases/tables with that amount of records. i'd really appreciate
 it...
 
 
 99 million isn't that large of a number.
 
 If you key the database properly, search times should be very modest.  I
 can't speak for insert times, though, especially when keys are involved.
 
 This kind of thing is easy enough to do in your favorite scripting
 language.  I would just create a table with a few keys and just for($i=0;
 $i9900; $i++) it with random numbers.
 
 If you have PHP on your system, here is some PHP code (runnable from the
 command line) that you should be able to hack down.  It should answer your
 immediate questions about which PHP statements to use (if you've never
 done
 this from PHP before):
 
 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup
 
 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup
 
 Near the end of it, especially if the software writes output, you should
 get
 an intuitive feel for how long each INSERT is taking.
 
 You can even do test queries using the barebones MySQL client ... you
 should
 see interactively how long a query takes.
 
 I would ALMOST do this for you, but it is just beyond the threshold of
 what
 I'd do because I'm bored and watching TV.  I'm just a little curious
 myself.  I've never messed with a table above about 10,000 rows or so.
 
 Dave
 
 
 






Re: how to get Number of rows matched?

2007-06-12 Thread J.R. Bullington

Ace,

I am sorry to get into this so late, but you didn't mention what version you 
are running.

If you are running 5.0.1 or greater, you can use the MySQL function ROW_COUNT().

You will find that it will help you in returning the results that you need.

Here's the page in the manual for your review:

http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_row-count

HTH!

J.R.
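For servers older than 5.0.1, the usual fallback discussed in this thread is parsing the info string the server returns after an UPDATE ('Rows matched: N  Changed: N  Warnings: N' - the same string mysql_info() exposes). A sketch of that parsing (the helper name is mine):

```python
import re

def rows_matched(info):
    """Pull the 'Rows matched' count out of the info string MySQL
    returns after an UPDATE, e.g. via mysql_info():
    'Rows matched: 1  Changed: 0  Warnings: 0'."""
    m = re.search(r"Rows matched:\s*(\d+)", info)
    return int(m.group(1)) if m else None

assert rows_matched("Rows matched: 1  Changed: 0  Warnings: 0") == 1
assert rows_matched("Rows matched: 0  Changed: 0  Warnings: 0") == 0
assert rows_matched("Query OK") is None
```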



From: Ace [EMAIL PROTECTED]
Sent: Tuesday, June 12, 2007 1:07 AM
To: Michael Dykman [EMAIL PROTECTED]
Subject: Re: how to get Number of rows matched? 

Thanks All for your help!

If someone from the MySQL team is looking at this thread, we request that
this feature be included in a future release.

Cheers,
Rajan
On 6/11/07, Michael Dykman  wrote:

 no, there is nothing else.  There are cleaner interfaces to this
 information but, for PHP, the string returned by mysql_info() is all
 you get.  The format of that string is very regular and we have been
 using it in production software for well over a year now with no
 issues.

 - michael


 On 6/11/07, Ace  wrote:
  Yes, you are right!  mysql_info() is not most useful. It does give me
 number
  of rows matched but will involve complications of parsing the string.
 
  Is there no other way to do this? How can this be missed? I am not so
  convinced by mysql_info()!
 
 
  On 6/11/07, Jerry Schwartz  wrote:
  
   Have you looked at mysql_info()? The format of the return value might
 not
   be
   the most useful, but it should give you what you need.
  
   Regards,
  
   Jerry Schwartz
   The Infoshop by Global Information Incorporated
   195 Farmington Ave.
   Farmington, CT 06032
  
   860.674.8796 / FAX: 860.674.8341
  
   www.the-infoshop.com
   www.giiexpress.com
   www.etudes-marche.com
  
  
-Original Message-
From: ViSolve DB Team [mailto:[EMAIL PROTECTED]
Sent: Monday, June 11, 2007 6:15 AM
To: Ace; mysql@lists.mysql.com
Subject: Re: how to get Number of rows matched?
   
Hi,

AFAIK, before changing data, the old values are saved in the rollback
segment. On saving the updated values from the buffer to the rollback
segment/data files:
--- it checks whether there is any row that matches the condition; if
found, it flags the row as Matched.
--- after filtering out the matched rows, it checks whether the old
value actually needs to change to the new value; if so, it flags the
row as Changed and rewrites it in the datafile/rollback segment.
   
   
Thanks
ViSolve DB Team.
- Original Message -
From: Ace 
To: 
Sent: Monday, June 11, 2007 11:41 AM
Subject: how to get Number of rows matched?
   
   
 Hi Experts,

 When issuing updates in mysql (in the console window),
mysql will tell
 you if any rows matched and how many rows were updated (see
below).  I
 know how to get the number of rows updated using
mysql_affected_rows(), but is
 there any
 way to get the number of rows matched?  I want to find out,
when rows
 updated = 0, if there were no updates because the row wasn't found
 (rows matched will = 0) or because the update would not have
 changed
 any data (rows matched = 1).

 mysql> select * from test;
 +------+------+
 | roll | s    |
 +------+------+
 |    1 | new  |
 +------+------+
 1 row in set (0.00 sec)

 mysql> update test set roll = 1, s = 'new' where roll = 1;
 Query OK, 0 rows affected (0.00 sec)
 Rows matched: 1  Changed: 0  Warnings: 0

 mysql> update test set roll = 1, s = 'new' where roll = 17;
 Query OK, 0 rows affected (0.00 sec)
 Rows matched: 0  Changed: 0  Warnings: 0

 mysql> update test set roll = 1, s = 'neww' where roll = 1;
 Query OK, 1 row affected (0.00 sec)
 Rows matched: 1  Changed: 1  Warnings: 0

 --
 Cheers,
 Rajan

   
   
   
   
   
   
  
  
 


 --
 - michael dykman
 - [EMAIL PROTECTED]

 - All models are wrong.  Some models are useful.





Table crashing

2007-06-12 Thread Sharique uddin Ahmed Farooqui

Tables are frequently crashing on my site. Here is the message I'm seeing:

Warning: Table './mydb/sessions' is marked as crashed and should be
repaired query: SELECT u.*, s.* FROM users u INNER JOIN sessions s ON
u.uid = s.uid WHERE s.sid = '854c6474111de174abbddf77f74b9d99' in
/www/ims/includes/database.mysqli.inc on line 151

I don't understand what's wrong with the db.
--
Sharique uddin Ahmed Farooqui
(C++/C# Developer, IT Consultant)
A revolution is about to begin.
A world is about to change.
And you and me are the initiator.


Re: maximum number of records in a table

2007-06-12 Thread J.R. Bullington

Olaf is right.

It is really more about query execution time, and more importantly, QUERY 
OPTIMIZATION.

Depending on how you set up your keys, your table type, and what else your 
server does, you should be able to run multiple queries on this table without 
too much of an issue.

2 BIG suggestions -- 

1) Whatever query you want to run on this table, run EXPLAIN. Then study the 
results and do your optimization and key creation.

2) QUERY_CACHE. This is where you are going to live or die. Since you said you 
will be doing a lot of SELECTs and not-so-many INSERTs or UPDATEs, the 
QUERY_CACHE is going to help out a lot here.

HTH!
J.R.



From: Olaf Stein [EMAIL PROTECTED]
Sent: Tuesday, June 12, 2007 8:13 AM
To: [EMAIL PROTECTED], David T. Ashley [EMAIL PROTECTED]
Subject: Re: maximum number of records in a table 

I guess a lot of that depends what an acceptable query execution time for
you is.
Also, what else does the machine do, are there other databases or tables
that are queried at the same time, do you have to join other tables in for
your queries, etc?

Olaf

On 6/12/07 3:24 AM, kalin mintchev  wrote:

 
 hi david..  thanks...
 
 i've done this many times and yes either trough php, perl, python or on
 the mysql cl client. but my question here is not about doing it and insert
 times it's more about hosting it and query times. i currently have a
 working table for the same purpose with about 1.5 million records in it.
 and the thing runs smoot on a machine that is 4 years old with 1 gig of
 ram and 2.8 ghz ghz processor. the thing is that now i'm talking about
 this x 100 times. more or less. i'm not worried about the insert times -
 this happens only ones and for a million entries, depending on what
 technic is used, it takes no longer than a few minutes.
 what i was asking basically was somebody to share experience with running
 a server with that amount of records in one table.
 
 currently the table i have has a size of 65 mgb which by 100 is about 6600
 mgb or 6.6 gigs. which means that i have to have about 8 gigs of ram to
 successfully use a table like that. either that or cluster 2 machines with
 4 gigs each and split the table. does this sound reasonable? is my logic
 flawed somehow?
 
 i'll appreciate any comments on this subject    thanks...
 
 
 
 On 6/11/07, kalin mintchev  wrote:
 
 hi all...
 
 from http://dev.mysql.com/doc/refman/5.0/en/features.html:
 
 Handles large databases. We use MySQL Server with databases that
 contain
 50 million records. We also know of users who use MySQL Server with
 60,000
 tables and about 5,000,000,000 rows.
 
 that's cool but i assume this is distributed over a few machines...
 
 we have a new client that needs a table with 99 000 000 rows, 2 -3
 columns.
 i was just wondering if i have a two dual core 2 processors in a machine
 with 4 gigs of ram - is that enough to host and serve queries from a
 table
 of this size?
 a few tables on the same machine?
 more than one machine?
 what are the query times like?
 
 can somebody please share some/any experience s/he has/had with managing
 databases/tables with that amount of records. i'd really appreciate
 it...
 
 
 99 million isn't that large of a number.
 
 If you key the database properly, search times should be very modest.  I
 can't speak for insert times, though, especially when keys are involved.
 
 This kind of thing is easy enough to do in your favorite scripting
 language.  I would just create a table with a few keys and just for($i=0;
 $i < 99000000; $i++) it with random numbers.
 
 If you have PHP on your system, here is some PHP code (runnable from the
 command line) that you should be able to hack down.  It should answer your
 immediate questions about which PHP statements to use (if you've never
 done
 this from PHP before):
 
 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup
 
 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup
 
 Near the end of it, especially if the software writes output, you should
 get
 an intuitive feel for how long each INSERT is taking.
 
 You can even do test queries using the barebones MySQL client ... you
 should
 see interactively how long a query takes.
 
 I would ALMOST do this for you, but it is just beyond the threshold of
 what
 I'd do because I'm bored and watching TV.  I'm just a little curious
 myself.  I've never messed with a table above about 10,000 rows or so.
 
 Dave
 
 
 





MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Ian P. Christian
I upgraded my slave server a few weeks ago, and the slave failed, with
an error similar to the one shown below.

I rolled back my upgrade, and it started working again, so I forgot
about it.

Today, I upgraded the master (again, from 5.0.32 to 5.0.42) - and the
slave failed again.

I thought upgrading the slave to match the master might help, but still
it failed.  Below is the error.

The hostname did *not* change.


070612 13:35:09 [Warning] No argument was provided to --log-bin, and
--log-bin-index was not used; so replication may break when this MySQL
server acts as a master and has his hostname changed!! Please use
'--log-bin=/var/run/mysqld/mysqld-bin' to avoid this problem.

070612 13:35:09  InnoDB: Started; log sequence number 40 824537593
070612 13:35:09 [Warning] Neither --relay-log nor --relay-log-index were
used; so replication may break when this MySQL server acts as a slave
and has his hostname changed!! Please use
'--relay-log=/var/run/mysqld/mysqld-relay-bin' to avoid this problem.
070612 13:35:09 [ERROR] Failed to open the relay log
'./xian-relay-bin.000962' (relay_log_pos 284157529)
070612 13:35:09 [ERROR] Could not find target log during relay log
initialization
070612 13:35:09 [ERROR] Failed to initialize the master info structure
070612 13:35:09 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.0.42-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306
 Gentoo Linux mysql-5.0.42

Any ideas/suggestions welcome, reseeding the slave will literally take days.

-- 
Ian P. Christian ~ http://pookey.co.uk





Translation of sql into mysql

2007-06-12 Thread David Scott


I am trying to update the column CreditCode in a table extract using data 
from another table CreditCodes, which has columns CreditCode and 
Consumer_No.


I have been given the following sql which works on another database (not 
sure which and it is late at night so I can't ring up and ask)


update extract
set CustomerCreditCode = b.CreditCode
from extract a
inner join CreditCodes b
on a.ConsumerNO = b.Consumer_No;

This gives an error in mysql:
ERROR 1064: You have an error in your SQL syntax; check the manual that 
corresponds to your MySQL server version for the right syntax to use near 
'from extract a

inner join CreditCodes b
on a.ConsumerNO = b.Cons

Can anyone translate it into correct mysql syntax for me?

Thanks

David Scott

_
David Scott Department of Statistics, Tamaki Campus
The University of Auckland, PB 92019
Auckland 1142,NEW ZEALAND
Phone: +64 9 373 7599 ext 86830 Fax: +64 9 373 7000
Email:  [EMAIL PROTECTED]

Graduate Officer, Department of Statistics
Director of Consulting, Department of Statistics





Re: Translation of sql into mysql

2007-06-12 Thread Baron Schwartz

Hi David.

David Scott wrote:


I am trying to update the column CreditCode in a table extract using 
data from another table CreditCodes, which has columns CreditCode and 
Consumer_No.


I have been given the following sql which works on another database (not 
sure which and it is late at night so I can't ring up and ask)


update extract
set CustomerCreditCode = b.CreditCode
from extract a
inner join CreditCodes b
on a.ConsumerNO = b.Consumer_No;


It looks like SQL Server or Sybase to me, but maybe other things have the same 
syntax.



This gives an error in mysql:
ERROR 1064: You have an error in your SQL syntax; check the manual that 
corresponds to your MySQL server version for the right syntax to use 
near 'from extract a

inner join CreditCodes b
on a.ConsumerNO = b.Cons

Can anyone translate it into correct mysql syntax for me?


Here are two ways you can try, which may have different performance:

update extract set CustomerCreditCode
   = ( select CreditCode from CreditCodes as b
       where extract.ConsumerNO = b.Consumer_No );

update extract as a
   inner join CreditCodes b
     on a.ConsumerNO = b.Consumer_No
   set a.CustomerCreditCode = b.CreditCode;

Baron




zabbix mysql problem

2007-06-12 Thread sizo nsibande

I am trying to install zabbix, and at the third step I get this error:



[EMAIL PROTECTED] etc]# mysql -u zabbix -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6 to server version: 5.0.27

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database zabbix;
ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'zabbix'
mysql>




I am reading the "MySQL in 21 Days" book and hope to become more
knowledgeable on this subject; please point me in the right direction?
--
Communication is not just English grammar and literature, but a well
established database between people that have come to know each other!




Re: zabbix mysql problem

2007-06-12 Thread Gerald L. Clark

sizo nsibande wrote:

I am trying to install zabbix, and at the third step I get this error:



[EMAIL PROTECTED] etc]# mysql -u zabbix -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6 to server version: 5.0.27

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database zabbix;
ERROR 1044 (42000): Access denied for user ''@'localhost' to database
'zabbix'

mysql>




I am reading the mysql in 21 days book and hope to become more
knowledgeable on this subject, please point me in the right direction?


You are not logged in to mysql as a user with create privileges.
You might need to back up a few chapters.
--
Gerald L. Clark
Supplier Systems Corporation




Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Ian P. Christian
Ian P. Christian wrote:
 I upgraded my slave server a few weeks ago, and the slave failed, with
 an error similar to the one shown below.


I have figured out what happened here now - and I'm part of the way
though fixing it.

It turned out the defaults had changed somewhere, and rather than using
/var/lib/mysql/hostname-bin, it was using /var/run/mysql/mysqld-bin
(and the same change applied to the relay logs too).

Now... I've changed the slave to use its correct logs - however, if
I do the same on the master, I'll have the last 4 hours of logs in
/var/run/mysql/mysqld-bin ignored.

Somehow, I need to get the slave to catch up with the master's old logs
in /var/lib/mysql/hostname-bin, and then continue from the brand new
logs in /var/run/mysql/mysqld-bin

This is an awful mess, and I'm not sure it's recoverable - perhaps it is.

In theory, I should be able to find out where the slave was up to in the
old logs, extract them manually and replay them on the slave, and then
reset the slave to use the new logs - however i'm not sure how reliable
that's going to be - or even how to go about doing it yet.

Ideas anyone?

-- 
Ian P. Christian ~ http://pookey.co.uk





Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Gordan Bobic
Just to clarify - are you asking for suggestions regarding avoiding 
re-seeding the slave or regarding what is likely to have gone wrong?

Generally, a newer slave can cope with an older master, but not the other 
way around. If you updated the master while slave was out of date, you may 
be out of options.

For what it's worth, LOAD DATA FROM MASTER tends to be much faster and 
more reliable than copying tarballs across if you have to re-seed.

The error indicates that there's a problem with that version, though. Have 
you tried any versions between 32 and 42?

Gordan

On Tue, 12 Jun 2007, Ian P. Christian wrote:

 I upgraded my slave server a few weeks ago, and the slave failed, with
 an error similar to the one shown below.
 
 I rolled back my upgrade, and it started working again, so I forgot
 about it.
 
 Today, I upgraded the master (again, from 5.0.32 to 5.0.42) - and hte
 slave failed again.
 
 I thought upgrading the slave to match the master might help, but still
 it failed.  Below is the error.
 
 The hostname did *not* change.
 
 
 070612 13:35:09 [Warning] No argument was provided to --log-bin, and
 --log-bin-index was not used; so replication may break when this MySQL
 server acts as a master and has his hostname changed!! Please use
 '--log-bin=/var/run/mysqld/mysqld-bin' to avoid this problem.
 
 070612 13:35:09  InnoDB: Started; log sequence number 40 824537593
 070612 13:35:09 [Warning] Neither --relay-log nor --relay-log-index were
 used; so replication may break when this MySQL server acts as a slave
 and has his hostname changed!! Please use
 '--relay-log=/var/run/mysqld/mysqld-relay-bin' to avoid this problem.
 070612 13:35:09 [ERROR] Failed to open the relay log
 './xian-relay-bin.000962' (relay_log_pos 284157529)
 070612 13:35:09 [ERROR] Could not find target log during relay log
 initialization
 070612 13:35:09 [ERROR] Failed to initialize the master info structure
 070612 13:35:09 [Note] /usr/sbin/mysqld: ready for connections.
 Version: '5.0.42-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306
  Gentoo Linux mysql-5.0.42
 
 Any ideas/suggestions welcome, reseeding the slave will literally take days.
 
 





Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Ofer Inbar
Ian P. Christian [EMAIL PROTECTED] wrote:
 In theory, I should be able to find out where the slave was up to in the
 old logs, extract them manually and replay them on the slave, and then
 reset the slave to use the new logs - however i'm not sure how reliable
 that's going to be - or even how to go about doing it yet.

Assuming your slave is not usable by client programs now anyway and
you don't mind it being unusable for a while longer, you can restart
the slaving from scratch:

1. take a full mysqldump of the master, with --master-data --flush-logs
2. drop your databases on the slave, stop slaving, and restore the dump
3. restart slaving on the slave using the master data from the dump

(see the mysql docs on how to set up replication if you need more detail)

  -- Cos




Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Ian P. Christian
Ofer Inbar wrote:
 Assuming your slave is not usable by client programs now anyway and
 you don't mind it being unusable for a while longer, you can restart
 the slaving from scratch:

This is exactly what I'm trying to avoid doing, it means 2 days downtime
whilst the data is re-inserted.

I have actually managed to fix it now though.  I checked the old binary
log from the master, and it had no new data for the slave, so I simply
issued a 'CHANGE MASTER ...' on the slave to tell it to use the new
binary log file, with a position of 4 (the start) - and off it went,
back to being in sync.

Why these defaults changed on a minor MySQL release update is beyond me;
however, I suspect this is Gentoo's fault, not MySQL's.

-- 
Ian P. Christian ~ http://pookey.co.uk





Re: Translation of sql into mysql

2007-06-12 Thread Gordan Bobic
On Wed, 13 Jun 2007, David Scott wrote:

 
 I am trying to update the column CreditCode in a table extract using data 
 from another table CreditCodes, which has columns CreditCode and 
 Consumer_No.
 
 I have been given the following sql which works on another database (not 
 sure which and it is late at night so I can't ring up and ask)
 
 update extract
  set CustomerCreditCode = b.CreditCode
  from extract a
  inner join CreditCodes b
  on a.ConsumerNO = b.Consumer_No;
 
 This gives an error in mysql:
 ERROR 1064: You have an error in your SQL syntax; check the manual that 
 corresponds to your MySQL server version for the right syntax to use near 
 'from extract a
 inner join CreditCodes b
 on a.ConsumerNO = b.Cons
 
 Can anyone translate it into correct mysql syntax for me?

Try:

UPDATE  extract,
CreditCodes
SET extract.CustomerCreditCode = CreditCodes.CreditCode
WHERE   extract.ConsumerNO = CreditCodes.Consumer_No;

You should probably try this on a scratch database or at least take a 
backup first.

Gordan





Re: maximum number of records in a table

2007-06-12 Thread kalin mintchev
 I guess a lot of that depends what an acceptable query execution time for
 you is.

well...  i don't really know. 30 secs maximum?! i've never worked with
such huge tables. 3 - 5 million records is fine, but i've never worked
on a db with a table of 100,000,000 records.


 Also, what else does the machine do, are there other databases or tables
 that are queried at the same time, do you have to join other tables in for
 your queries, etc?

that would be installed on a separate machine that might run only that
project. so yea there will be queries to other tables but only after the
result of the 99 million table returns.
there are no join requests with the 99 m table.

my calculation was mostly based on resources - like ram. like i mentioned
earlier the .MYD and .MYI files together on the current one that i have -
which has about 1.2 million records - are 90 MB.
are the .MYI files kept in ram, or both .MYD and .MYI?

multiplying 90 MB x 100 is roughly what the size of the .MYI + .MYD will be, right?
is that all living in ram?

thanks
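One way to answer this empirically is to look at the sizes MySQL itself reports (table name hypothetical). For MyISAM, only index blocks are cached in the key buffer; data pages go through the OS file cache, so the whole .MYD does not need to fit in RAM:

```sql
-- Data_length corresponds to the .MYD file, Index_length to the .MYI.
SHOW TABLE STATUS LIKE 'big_table';
```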




 Olaf


 On 6/12/07 3:24 AM, kalin mintchev [EMAIL PROTECTED] wrote:


 hi david..  thanks...

 i've done this many times and yes either trough php, perl, python or on
 the mysql cl client. but my question here is not about doing it and
 insert
 times it's more about hosting it and query times. i currently have a
 working table for the same purpose with about 1.5 million records in it.
 and the thing runs smoot on a machine that is 4 years old with 1 gig of
 ram and 2.8 ghz ghz processor. the thing is that now i'm talking about
 this x 100 times. more or less. i'm not worried about the insert times -
 this happens only ones and for a million entries, depending on what
 technic is used, it takes no longer than a few minutes.
 what i was asking basically was somebody to share experience with
 running
 a server with that amount of records in one table.

 currently the table i have has a size of 65 mgb which by 100 is about
 6600
 mgb or 6.6 gigs. which means that i have to have about 8 gigs of ram to
 successfully use a table like that. either that or cluster 2 machines
 with
 4 gigs each and split the table. does this sound reasonable? is my logic
 flawed somehow?

 i'll appreciate any comments on this subject    thanks...



 On 6/11/07, kalin mintchev [EMAIL PROTECTED] wrote:

 hi all...

 from http://dev.mysql.com/doc/refman/5.0/en/features.html:

 Handles large databases. We use MySQL Server with databases that
 contain
 50 million records. We also know of users who use MySQL Server with
 60,000
 tables and about 5,000,000,000 rows.

 that's cool but i assume this is distributed over a few machines...

 we have a new client that needs a table with 99 000 000 rows, 2 -3
 columns.
 i was just wondering if i have a two dual core 2 processors in a
 machine
 with 4 gigs of ram - is that enough to host and serve queries from a
 table
 of this size?
 a few tables on the same machine?
 more than one machine?
 what are the query times like?

 can somebody please share some/any experience s/he has/had with
 managing
 databases/tables with that amount of records. i'd really appreciate
 it...


 99 million isn't that large of a number.

 If you key the database properly, search times should be very modest.
 I
 can't speak for insert times, though, especially when keys are
 involved.

 This kind of thing is easy enough to do in your favorite scripting
 language.  I would just create a table with a few keys and just
 for($i=0; $i<9900; $i++) it with random numbers.

 If you have PHP on your system, here is some PHP code (runnable from
 the
 command line) that you should be able to hack down.  It should answer
 your
 immediate questions about which PHP statements to use (if you've never
 done
 this from PHP before):

 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup

 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup

 Near the end of it, especially if the software writes output, you
 should
 get
 an intuitive feel for how long each INSERT is taking.

 You can even do test queries using the barebones MySQL client ... you
 should
 see interactively how long a query takes.

 I would ALMOST do this for you, but it is just beyond the threshold of
 what
 I'd do because I'm bored and watching TV.  I'm just a little curious
 myself.  I've never messed with a table about 10,000 rows or so.

 Dave












Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Gordan Bobic
On Tue, 12 Jun 2007, Ian P. Christian wrote:

 Ian P. Christian wrote:
  I upgraded my slave server a few weeks ago, and the slave failed, with
  an error similar to the one shown below.
 
 
 I have figured out what happened here now - and I'm part of the way
 though fixing it.
 
 It turned out the defaults had changed somewhere, and rather then using
 /var/lib/mysql/hostname-bin, it was using /var/run/mysql/mysqld-bin
 (and the same change fro relay logs too).

I think you've just discovered why non-packaged distributions have
no place in a production environment. Compiling things with defaults and
expecting them to work the same is asking for trouble most of the time.

 Now... I've changed the slave to use it's correct logs now - however, if
 I do the same on the master, I'll have the last 4 hours of logs in
 /var/run/mysql/mysqld-bin ignored.
 
 Somehow, I need to get the slave to catch up with the master's old logs
 in /var/lib/mysql/hostname-bin, and then continue from the brand new
 logs in /var/run/mysql/mysqld-bin
 
 This is an awful mess, and I'm not sure it's recoverable - perhaps it is.
 
 In theory, I should be able to find out where the slave was up to in the
 old logs, extract them manually and replay them on the slave, and then
 reset the slave to use the new logs - however i'm not sure how reliable
 that's going to be - or even how to go about doing it yet.
 
 Ideas anyone?

Has the slave started replicating from the new logs? If it has, you'll
find it will be quicker to re-seed the slave - even if it takes a day or
two. The only difference will be that in one case the slave will catch up
on its own, and in the other, you'll also lose a day or two of your time
trawling through the logs manually trying to reconstruct the data.

If it hasn't, you can probably change the bin-log sequence numbers and
change the pointers to file names and offsets in the index files on the
master and the slave, and hope for the best. With some luck it'll work,
but I wouldn't count on it.

Gordan





Re: maximum number of records in a table

2007-06-12 Thread kalin mintchev

thanks j.r...


   Olaf is right.

 It is really more about query execution time, and more importantly, QUERY
 OPTIMIZATION.

 Depending on how you setup your keys, your table type, and what else your
 server does, you should be able to run multiple queries on this table
 without too much of an issue.

are you suggesting that ONE machine can handle that load without problems?



 2 BIG suggestions --

 1) Whatever query you want to run on this table, run EXPLAIN. Then study
 the results and do your optimization and key creation.

ok...
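As a sketch of that advice (table and column names hypothetical), checking the plan costs nothing before worrying about hardware:

```sql
-- 'type: ref' or 'const' with a small 'rows' estimate means the key is used;
-- 'type: ALL' means every one of the ~99M rows would be scanned.
EXPLAIN SELECT * FROM big_table WHERE lookup_key = 12345;
```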


 2) QUERY_CACHE. This is where you are going to live or die. Since you said
 you will be doing a lot of SELECTs and not-so-many INSERTs or UPDATEs, the
 QUERY_CACHE is going to help out a lot here.

well. not sure about this since basically the queries will be different
every time - i mean the query itself is the same but the looked up values
are 99 million different ones.
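Whether the cache helps with 99 million distinct lookups can be measured rather than guessed; the hit/insert counters make it obvious after a test run:

```sql
SHOW VARIABLES LIKE 'query_cache%';  -- is it enabled, and how big is it?
SHOW STATUS LIKE 'Qcache%';          -- compare Qcache_hits with Qcache_inserts
```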

i'll be running some tests and will report...  might be helpful to
somebody else...


 HTH!

me too..  thanks.


 J.R.

 

   From: Olaf Stein [EMAIL PROTECTED]
 Sent: Tuesday, June 12, 2007 8:13 AM
 To: [EMAIL PROTECTED], David T. Ashley [EMAIL PROTECTED]
 Subject: Re: maximum number of records in a table

 I guess a lot of that depends what an acceptable query execution time for
 you is.
 Also, what else does the machine do, are there other databases or tables
 that are queried at the same time, do you have to join other tables in
 for
 your queries, etc?

 Olaf

 On 6/12/07 3:24 AM, kalin mintchev  wrote:


 hi david..  thanks...

 i've done this many times and yes either trough php, perl, python or on
 the mysql cl client. but my question here is not about doing it and
 insert
 times it's more about hosting it and query times. i currently have a
 working table for the same purpose with about 1.5 million records in
 it.
 and the thing runs smoot on a machine that is 4 years old with 1 gig of
 ram and 2.8 ghz ghz processor. the thing is that now i'm talking about
 this x 100 times. more or less. i'm not worried about the insert times
 -
 this happens only ones and for a million entries, depending on what
 technic is used, it takes no longer than a few minutes.
 what i was asking basically was somebody to share experience with
 running
 a server with that amount of records in one table.

 currently the table i have has a size of 65 mgb which by 100 is about
 6600
 mgb or 6.6 gigs. which means that i have to have about 8 gigs of ram to
 successfully use a table like that. either that or cluster 2 machines
 with
 4 gigs each and split the table. does this sound reasonable? is my
 logic
 flawed somehow?

 i'll appreciate any comments on this subject    thanks...



 On 6/11/07, kalin mintchev  wrote:

 hi all...

 from http://dev.mysql.com/doc/refman/5.0/en/features.html:

 Handles large databases. We use MySQL Server with databases that
 contain
 50 million records. We also know of users who use MySQL Server with
 60,000
 tables and about 5,000,000,000 rows.

 that's cool but i assume this is distributed over a few machines...

 we have a new client that needs a table with 99 000 000 rows, 2 -3
 columns.
 i was just wondering if i have a two dual core 2 processors in a
 machine
 with 4 gigs of ram - is that enough to host and serve queries from a
 table
 of this size?
 a few tables on the same machine?
 more than one machine?
 what are the query times like?

 can somebody please share some/any experience s/he has/had with
 managing
 databases/tables with that amount of records. i'd really appreciate
 it...


 99 million isn't that large of a number.

 If you key the database properly, search times should be very modest.
 I
 can't speak for insert times, though, especially when keys are
 involved.

 This kind of thing is easy enough to do in your favorite scripting
 language.  I would just create a table with a few keys and just
 for($i=0; $i<9900; $i++) it with random numbers.

 If you have PHP on your system, here is some PHP code (runnable from
 the
 command line) that you should be able to hack down.  It should answer
 your
 immediate questions about which PHP statements to use (if you've never
 done
 this from PHP before):

 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup

 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup

 Near the end of it, especially if the software writes output, you
 should
 get
 an intuitive feel for how long each INSERT is taking.

 You can even do test queries using the barebones MySQL client ... you
 should
 see interactively how long a query takes.

 I would ALMOST do this for you, but it is just beyond the threshold of
 what
 I'd do because I'm bored and watching TV.  I'm just a little curious
 myself.  

Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Baron Schwartz

Ian P. Christian wrote:

Ofer Inbar wrote:

Assuming your slave is not usable by client programs now anyway and
you don't mind it being unusable for a while longer, you can restart
the slaving from scratch:


This is exactly what I'm trying to avoid doing, it means 2 days downtime
whilst the data is re-inserted.

I have actually managed to fix it now though.  I checked the old binary
log from the master, and it had no new data for the slave, so I simply
issued a 'CHANGE MASTER ...' on the slave to tell it to use the new
binary log file, with a position of 4 (the start), and off it went - back
to being in sync.


Now that you are back up, you probably have different data on the master and slave, 
from the sounds of it.  I've been there, often.  I wrote two tools to help solve this 
problem without a lot of overhead:


1) take table checksums via replication and discover which tables are 
different, if any.
2) efficient syncing of those tables, sending only the changes needed to reconcile 
different data, via replication.


Both things need to work through replication, for obvious reasons.  Both are available 
at http://sourceforge.net/projects/mysqltoolkit.  I would be glad to get your feedback 
on them.


In practice, this has made it possible for me to re-initialize slaves in minutes 
instead of half a day.  And as you know, there are lots of things that can go wrong in 
replication, so this is a Good Thing.


Cheers
Baron




flush logs vs. mysqladmin

2007-06-12 Thread Ofer Inbar
We run a mysqladmin flush-logs from cron every night.  This causes our
server to start a new binary log.  However, the slow query log does
not get flushed - our server continues updating the same slow query
log file.

If I run mysql and then issue a flush logs command, it flushes the
binary logs and the slow query log as well.
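In other words, the statement form flushes everything; as a workaround, the cron job could issue it through the client instead of going via mysqladmin:

```sql
-- FLUSH LOGS closes and reopens all log files the server has open
-- (binary, error, general and slow query logs) and rotates the binary log.
FLUSH LOGS;
```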

 - Redhat ES 4
 - MySQL 5.0.24 from redhat-supplied RPM
 - Both mysqladmin from cron, and my mysql flush logs, use -u root

Why does mysqladmin not flush the slow query log?
  -- Cos




Re: maximum number of records in a table

2007-06-12 Thread Jon Ribbens
On Tue, Jun 12, 2007 at 11:09:41AM -0400, kalin mintchev wrote:
  I guess a lot of that depends what an acceptable query execution time for
  you is.
 
 well...  i don't really know. 30 secs maximum?! i've never worked with
 such huge tables. 3 - 5 million records is fine but i've never worked on a
 db with a table with 100 000 000 records.

I wouldn't worry about it. I have a table here with nearly a billion
rows, and it fetches based on the primary key in 0.00 sec.

 my calculation was mostly based on resources - like ram. like i mentioned
 earlier the .MYD and .MYI files together on the current one that i have -
 which has about 1.2 million records - are 90 mgb.
 are the .MYI files kept in ram or both .MYD and .MYI?

Neither of them are kept in RAM. You want a reasonable amount of
memory for buffers, query cacheing, etc, but you don't need to hold
any of the files in memory.

My main concern with large tables is disk space, especially if you're
ever going to want to do ALTER TABLE, when you will need enough free
space to hold the existing and new copies of the table on disk at the
same time.




problems with accents and display alignement

2007-06-12 Thread Gilles MISSONNIER


hello,

first, some settings : MySQL 5.0.32, under Linux debian etch

the problem is the bad alignment :

mysql> select nom,id from t2;
+--++
| nom  | id |
+--++
| aàb |  1 |
| été|  2 |
| cçoôeêeèeëi |  3 |
| c'est tout   |  4 |
+--++

AS YOU SEE, THE DISPLAY IS NOT ALIGNED AT ALL:
how can I get the columns WELL ALIGNED ???


thank you.


WHAT I HAVE DONE :


mysql> SHOW VARIABLES LIKE 'character_set_system';
+--+---+
| Variable_name| Value |
+--+---+
| character_set_system | utf8  |
+--+---+

I created a tiny test table, loaded with a file data2 made under the
editor vi,

and on which the unix command file data2 returns:
data2: UTF-8 Unicode text
Of course, there is no space before each end-of-line character


mysql> CREATE TABLE t2 (id int(10) unsigned NOT NULL auto_increment, nom
varchar(255) NOT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT
CHARSET=UTF8;

after loading the file data2, I display the content
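For what it's worth, this kind of column drift usually means the client connection character set disagrees with the stored data; that is worth checking, and SET NAMES is the usual fix if they differ:

```sql
-- If character_set_client/connection/results show latin1 while the data is
-- UTF-8, the client miscounts display widths and the table borders drift.
SHOW VARIABLES LIKE 'character_set%';
SET NAMES utf8;  -- make client, connection and results match the data
```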


_-¯-_-¯-_-¯-_-¯-_
Gilles Missonnier
IAP - [EMAIL PROTECTED]
01 44 32 81 36


Re: maximum number of records in a table

2007-06-12 Thread Olaf Stein
You should be fine. 100 000 000 is not that much.
Just make sure you set the right keys (and then query by them) on the table
and even more importantly, set caches and buffers to utilize your RAM
appropriately.
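A minimal sketch of that second point (the 2 GB value is purely illustrative, not a recommendation):

```sql
-- MyISAM caches only index blocks in the key buffer; size it so the hot
-- part of the .MYI index data fits, then watch the miss rate.
SET GLOBAL key_buffer_size = 2147483648;  -- 2 GB, illustrative
SHOW STATUS LIKE 'Key_read%';  -- Key_reads / Key_read_requests = miss rate
```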

Olaf


On 6/12/07 11:09 AM, kalin mintchev [EMAIL PROTECTED] wrote:

 I guess a lot of that depends what an acceptable query execution time for
 you is.
 
 well...  i don't really know. 30 secs maximum?! i've never worked with
 such huge tables. 3 - 5 million records is fine but i've never worked on a
 db with a table with 100 000 000 records.
 
 
 Also, what else does the machine do, are there other databases or tables
 that are queried at the same time, do you have to join other tables in for
 your queries, etc?
 
 that would be installed on a separate machine that might run only that
 project. so yea there will be queries to other tables but only after the
 result of the 99 million table returns.
 there are no join requests with the 99 m table.
 
 my calculation was mostly based on resources - like ram. like i mentioned
 earlier the .MYD and .MYI files together on the current one that i have -
 which has about 1.2 million records - are 90 mgb.
 are the .MYI files kept in ram or both .MYD and .MYI?
 
 multiplying 90x100 is what the size of the MYI + MYD will be, right?
 is that all living in ram?
 
 thanks
 
 
 
 
 Olaf
 
 
 On 6/12/07 3:24 AM, kalin mintchev [EMAIL PROTECTED] wrote:
 
 
 hi david..  thanks...
 
 i've done this many times and yes either trough php, perl, python or on
 the mysql cl client. but my question here is not about doing it and
 insert
 times it's more about hosting it and query times. i currently have a
 working table for the same purpose with about 1.5 million records in it.
 and the thing runs smoot on a machine that is 4 years old with 1 gig of
 ram and 2.8 ghz ghz processor. the thing is that now i'm talking about
 this x 100 times. more or less. i'm not worried about the insert times -
 this happens only ones and for a million entries, depending on what
 technic is used, it takes no longer than a few minutes.
 what i was asking basically was somebody to share experience with
 running
 a server with that amount of records in one table.
 
 currently the table i have has a size of 65 mgb which by 100 is about
 6600
 mgb or 6.6 gigs. which means that i have to have about 8 gigs of ram to
 successfully use a table like that. either that or cluster 2 machines
 with
 4 gigs each and split the table. does this sound reasonable? is my logic
 flawed somehow?
 
 i'll appreciate any comments on this subject    thanks...
 
 
 
 On 6/11/07, kalin mintchev [EMAIL PROTECTED] wrote:
 
 hi all...
 
 from http://dev.mysql.com/doc/refman/5.0/en/features.html:
 
 Handles large databases. We use MySQL Server with databases that
 contain
 50 million records. We also know of users who use MySQL Server with
 60,000
 tables and about 5,000,000,000 rows.
 
 that's cool but i assume this is distributed over a few machines...
 
 we have a new client that needs a table with 99 000 000 rows, 2 -3
 columns.
 i was just wondering if i have a two dual core 2 processors in a
 machine
 with 4 gigs of ram - is that enough to host and serve queries from a
 table
 of this size?
 a few tables on the same machine?
 more than one machine?
 what are the query times like?
 
 can somebody please share some/any experience s/he has/had with
 managing
 databases/tables with that amount of records. i'd really appreciate
 it...
 
 
 99 million isn't that large of a number.
 
 If you key the database properly, search times should be very modest.
 I
 can't speak for insert times, though, especially when keys are
 involved.
 
 This kind of thing is easy enough to do in your favorite scripting
 language.  I would just create a table with a few keys and just
 for($i=0; $i<9900; $i++) it with random numbers.
 
 If you have PHP on your system, here is some PHP code (runnable from
 the
 command line) that you should be able to hack down.  It should answer
 your
 immediate questions about which PHP statements to use (if you've never
 done
 this from PHP before):
 
 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup

 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup
 
 Near the end of it, especially if the software writes output, you
 should
 get
 an intuitive feel for how long each INSERT is taking.
 
 You can even do test queries using the barebones MySQL client ... you
 should
 see interactively how long a query takes.
 
 I would ALMOST do this for you, but it is just beyond the threshold of
 what
 I'd do because I'm bored and watching TV.  I'm just a little curious
 myself.  I've never messed with a table about 10,000 rows or so.
 
 Dave
 
 
 
 
 
 
 
 




Re: maximum number of records in a table

2007-06-12 Thread J.R. Bullington

If you are talking about a Dual Core computer with 4 GBs of RAM 
and at least 6GB of swap space, you should have no problems running it on one 
computer.

MySQL is really robust and can handle quite a load, esp. on Linux. If you are 
running Windows, just remember to remove all the processes that you don't need 
to give MySQL the most memory and availability.

Depending on the size of your queries and the results of said queries, you may 
be able to keep most of them in the query cache (QC). Since you said there were 
only 2-3 columns in the table, and said columns only have integers (for 
example), you could keep quite a bit of information in the QC. However, of 
course, if they are all longtext, you will run into a lot of disk swapping.

To quote Jay Pipes at the MySQL Users Conference, the default answer to your IT 
question is IT DEPENDS.

I know for a fact that I was using commodity hardware (Intel Celeron D 2.0 GHz, 
1GB RAM) and was able to run heavy statistical analyses (MIN, MAX, STDEV, AVG, 
MEAN, etc) on 130M records with 50+ columns in 1 table and that was returning 
the data in 80 seconds. Not bad for that old of a computer. Nowadays it's a 
Quad Xeon 2.5GHz with 4GB RAM and the same query on 400M+ records returns in 
less than 15 seconds.

It's all about query optimization and the query cache (or the key buffer for
MyISAM - the buffer pool, if you use InnoDB).

J.R.



From: kalin mintchev [EMAIL PROTECTED]
Sent: Tuesday, June 12, 2007 11:20 AM
To: [EMAIL PROTECTED]
Subject: Re: maximum number of records in a table 

thanks j.r...


   Olaf is right.

 It is really more about query execution time, and more importantly, QUERY
 OPTIMIZATION.

 Depending on how you setup your keys, your table type, and what else your
 server does, you should be able to run multiple queries on this table
 without too much of an issue.

are you suggesting that ONE machine can handle that load without problems?


 2 BIG suggestions --

 1) Whatever query you want to run on this table, run EXPLAIN. Then study
 the results and do your optimization and key creation.

ok...

 2) QUERY_CACHE. This is where you are going to live or die. Since you said
 you will be doing a lot of SELECTs and not-so-many INSERTs or UPDATEs, the
 QUERY_CACHE is going to help out a lot here.

well. not sure about this since basically the queries will be different
every time - i mean the query itself is the same but the looked up values
are 99 million different ones.

i'll be running some tests and will report...  might be helpful to
somebody else...

 HTH!

me too..  thanks.

 J.R.

 

   From: Olaf Stein 
 Sent: Tuesday, June 12, 2007 8:13 AM
 To: [EMAIL PROTECTED], David T. Ashley 
 Subject: Re: maximum number of records in a table

 I guess a lot of that depends what an acceptable query execution time for
 you is.
 Also, what else does the machine do, are there other databases or tables
 that are queried at the same time, do you have to join other tables in
 for
 your queries, etc?

 Olaf

 On 6/12/07 3:24 AM, kalin mintchev  wrote:


 hi david..  thanks...

 i've done this many times and yes either trough php, perl, python or on
 the mysql cl client. but my question here is not about doing it and
 insert
 times it's more about hosting it and query times. i currently have a
 working table for the same purpose with about 1.5 million records in
 it.
 and the thing runs smoot on a machine that is 4 years old with 1 gig of
 ram and 2.8 ghz ghz processor. the thing is that now i'm talking about
 this x 100 times. more or less. i'm not worried about the insert times
 -
 this happens only ones and for a million entries, depending on what
 technic is used, it takes no longer than a few minutes.
 what i was asking basically was somebody to share experience with
 running
 a server with that amount of records in one table.

 currently the table i have has a size of 65 mgb which by 100 is about
 6600
 mgb or 6.6 gigs. which means that i have to have about 8 gigs of ram to
 successfully use a table like that. either that or cluster 2 machines
 with
 4 gigs each and split the table. does this sound reasonable? is my
 logic
 flawed somehow?

 i'll appreciate any comments on this subject    thanks...



 On 6/11/07, kalin mintchev  wrote:

 hi all...

 from http://dev.mysql.com/doc/refman/5.0/en/features.html:

 Handles large databases. We use MySQL Server with databases that
 contain
 50 million records. We also know of users who use MySQL Server with
 60,000
 tables and about 5,000,000,000 rows.

 that's cool but i assume this is distributed over a few machines...

 we have a new client that needs a table with 99 000 000 rows, 2 -3
 columns.
 i was just wondering if i have a two dual core 2 processors in a
 machine
 with 4 gigs of ram - is that enough to host and serve queries from a
 table
 of this 

syntax to revoke

2007-06-12 Thread Stephen Liu
Hi folks,


OpenBSD 4.0 x86_64
Mysql
Postfix_2.4.3


After running following command;

mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON test.* TO
'vmailuser'@'localhost' IDENTIFIED BY 'password123';
Query OK, 0 rows affected (0.00 sec)


I then realized I had made a mistake in running it. I need to revoke the GRANT
given to vmailuser on the test database.


I found on;
13.5.1.5. REVOKE Syntax
http://dev.mysql.com/doc/refman/5.0/en/revoke.html

The syntax;
REVOKE ALL PRIVILEGES, GRANT OPTION FROM user [, user]

but I can't work out whether to keep the 1st 'user'
and replace [, user] with [, vmailuser]???

Please shed me some light.  TIA


B.R.
Stephen Liu


Send instant messages to your online friends http://uk.messenger.yahoo.com 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: maximum number of records in a table

2007-06-12 Thread kalin mintchev
 You should be fine. 100 000 000 is not that much.
 Just make sure you set the right keys (and then query by them) on the
 table
 and even more importantly, set caches and buffers to utilize your RAM
 appropriately.

thanks.



 Olaf


 On 6/12/07 11:09 AM, kalin mintchev [EMAIL PROTECTED] wrote:

 I guess a lot of that depends what an acceptable query execution time
 for
 you is.

 well...  i don't really know. 30 secs maximum?! i've never worked with
 such huge tables. 3 - 5 million records is fine but i've never worked on
 a
 db with a table with 100 000 000 records.


 Also, what else does the machine do, are there other databases or
 tables
 that are queried at the same time, do you have to join other tables in
 for
 your queries, etc?

 that would be installed on a separate machine that might run only that
 project. so yea there will be queries to other tables but only after the
 result of the 99 million table returns.
 there are no join requests with the 99 m table.

 my calculation was mostly based on resources - like ram. like i
 mentioned
 earlier the .MYD and .MYI files together on the current one that i have
 -
 which has about 1.2 million records - are 90 MB.
 are the .MYI files kept in ram or both .MYD and .MYI?

 multiplying 90x100 is what the size of the MYI + MYD will be, right?
 is that all living in ram?

 thanks




 Olaf


 On 6/12/07 3:24 AM, kalin mintchev [EMAIL PROTECTED] wrote:


 hi david..  thanks...

 i've done this many times, yes, either through php, perl, python or
 the mysql cl client. but my question here is not about doing it and
 insert times, it's more about hosting it and query times. i currently
 have a working table for the same purpose with about 1.5 million
 records in it, and the thing runs smooth on a machine that is 4 years
 old with 1 gig of ram and a 2.8 GHz processor. the thing is that now
 i'm talking about this x 100 times, more or less. i'm not worried
 about the insert times - this happens only once, and for a million
 entries, depending on what technique is used, it takes no longer than
 a few minutes. what i was asking basically was for somebody to share
 experience with running a server with that amount of records in one
 table.

 currently the table i have has a size of 65 MB, which x 100 is about
 6500 MB or 6.5 gigs. which means that i have to have about 8 gigs of
 ram to successfully use a table like that. either that or cluster 2
 machines with 4 gigs each and split the table. does this sound
 reasonable? is my logic flawed somehow?

 i'll appreciate any comments on this subject    thanks...



 On 6/11/07, kalin mintchev [EMAIL PROTECTED] wrote:

 hi all...

 from http://dev.mysql.com/doc/refman/5.0/en/features.html:

 Handles large databases. We use MySQL Server with databases that
 contain
 50 million records. We also know of users who use MySQL Server with
 60,000
 tables and about 5,000,000,000 rows.

 that's cool but i assume this is distributed over a few machines...

 we have a new client that needs a table with 99 000 000 rows and 2-3
 columns.
 i was just wondering if i have two dual-core processors in a machine
 with 4 gigs of ram - is that enough to host and serve queries from a
 table
 of this size?
 a few tables on the same machine?
 more than one machine?
 what are the query times like?

 can somebody please share some/any experience s/he has/had with
 managing
 databases/tables with that amount of records. i'd really appreciate
 it...


 99 million isn't that large of a number.

 If you key the database properly, search times should be very modest.
 I
 can't speak for insert times, though, especially when keys are
 involved.

 This kind of thing is easy enough to do in your favorite scripting
 language.  I would just create a table with a few keys and just
for($i=0; $i < 99000000; $i++) it with random numbers.

 If you have PHP on your system, here is some PHP code (runnable from
 the
 command line) that you should be able to hack down.  It should answer
 your
 immediate questions about which PHP statements to use (if you've
 never
 done
 this from PHP before):

http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup

http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup

 Near the end of it, especially if the software writes output, you
 should
 get
 an intuitive feel for how long each INSERT is taking.

 You can even do test queries using the barebones MySQL client ... you
 should
 see interactively how long a query takes.

 I would ALMOST do this for you, but it is just beyond the threshold
 of
 what
 I'd do because I'm bored and watching TV.  I'm just a little curious
myself.  I've never messed with a table above 10,000 rows or so.

 Dave
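Dave's suggestion - create a keyed table, fill it with random rows from a script, and watch insert and lookup times - can be sketched in a few lines. This uses Python's standard-library sqlite3 as a stand-in so it runs anywhere; for a real test you would swap in a MySQL client library, and the table and column names here are made up for illustration.

```python
import random
import sqlite3
import time

# Stand-in for a MySQL table: one indexed integer column, as in the
# 99-million-row scenario (row count scaled down so the sketch runs fast).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, val INTEGER)")
conn.execute("CREATE INDEX idx_val ON foo (val)")

n_rows = 100_000  # push this toward 99_000_000 for a real measurement
start = time.time()
conn.executemany(
    "INSERT INTO foo (val) VALUES (?)",
    ((random.randrange(10**9),) for _ in range(n_rows)),
)
conn.commit()
print(f"inserted {n_rows} rows in {time.time() - start:.2f}s")

# A lookup on the indexed column should stay fast as the table grows.
start = time.time()
conn.execute("SELECT COUNT(*) FROM foo WHERE val < 1000").fetchone()
print(f"indexed query took {time.time() - start:.4f}s")
```

The same pattern (batched inserts, then timing a keyed SELECT) carries over directly to the MySQL command-line client or any scripting language.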















Re: syntax to revoke

2007-06-12 Thread Baron Schwartz

Hi Stephen,

Stephen Liu wrote:

Hi folks,


OpenBSD 4.0 x86_64
Mysql
Postfix_2.4.3


After running following command;

mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON test.* TO
'vmailuser'@'localhost' IDENTIFIED BY 'password123';
Query OK, 0 rows affected (0.00 sec)


I found I made a mistake to run it. I need to revoke the GRANT given to
vmailuser to test database.


I found on;
13.5.1.5. REVOKE Syntax
http://dev.mysql.com/doc/refman/5.0/en/revoke.html

The syntax;
REVOKE ALL PRIVILEGES, GRANT OPTION FROM user [, user]

but I can't work out whether to keep the 1st 'user'
and replace [, user] with [, vmailuser]???


To undo this GRANT, run

REVOKE SELECT, INSERT, UPDATE, DELETE ON test.* FROM 'vmailuser'@'localhost';

Baron




Re: syntax to revoke

2007-06-12 Thread Stephen Liu
Hi Baron,


Tks for your advice.


 To undo this GRANT, run
 
 REVOKE SELECT, INSERT, UPDATE, DELETE ON test.* FROM
 'vmailuser'@'localhost';

mysql> REVOKE SELECT, INSERT, UPDATE, DELETE ON test.* FROM
'vmailuser'@'localhost';
Query OK, 0 rows affected (0.00 sec)

mysql>

I suppose it has been done ???


B.R.
Stephen





Re: syntax to revoke

2007-06-12 Thread Baron Schwartz

Hi,

Stephen Liu wrote:

Hi Baron,


Tks for your advice.



To undo this GRANT, run

REVOKE SELECT, INSERT, UPDATE, DELETE ON test.* FROM
'vmailuser'@'localhost';


mysql> REVOKE SELECT, INSERT, UPDATE, DELETE ON test.* FROM
'vmailuser'@'localhost';
Query OK, 0 rows affected (0.00 sec)

mysql>


I suppose it has been done ???


Yes, but you can check with SHOW GRANTS FOR 'vmailuser'@'localhost' to be sure.  If you 
are running an older version of MySQL you may also need to run FLUSH PRIVILEGES.  Check 
the manual for the versions where this is necessary.


Baron




Re: syntax to revoke

2007-06-12 Thread Stephen Liu
Hi Baron,


  I suppose it has been done ???
 
 Yes, but you can check with SHOW GRANTS FOR 'vmailuser'@'localhost'
 to be sure.  


mysql> SHOW GRANTS FOR 'vmailuser'@'localhost';
+------------------------------------------------------------------------------------------------------------------+
| Grants for [EMAIL PROTECTED]                                                                                      |
+------------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'vmailuser'@'localhost' IDENTIFIED BY PASSWORD '*A0F874BC7F54EE086FCE60A37CE7887D8B31086B'  |
+------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
* end *

 If you 
 are running an older version of MySQL you may also need to run FLUSH
 PRIVILEGES.  Check 
 the manual for the versions where this is necessary.


I'm running Mysql version;

$ mysql --version
mysql  Ver 14.12 Distrib 5.0.24a, for unknown-openbsd4.0 (x86_64) using
readline 4.3


To safeguard, it would be better to run 'FLUSH PRIVILEGES'.

Can I re-run
REVOKE SELECT, INSERT, UPDATE, DELETE ON test.* FROM
'vmailuser'@'localhost';

then

FLUSH PRIVILEGES;
???

Tks.


B.R.
Stephen





Re: syntax to revoke

2007-06-12 Thread Baron Schwartz

Stephen Liu wrote:

mysql> SHOW GRANTS FOR 'vmailuser'@'localhost';
+------------------------------------------------------------------------------------------------------------------+
| Grants for [EMAIL PROTECTED]                                                                                      |
+------------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'vmailuser'@'localhost' IDENTIFIED BY PASSWORD '*A0F874BC7F54EE086FCE60A37CE7887D8B31086B'  |
+------------------------------------------------------------------------------------------------------------------+


OK, the privileges are gone.  USAGE is a synonym for no privileges.  If you want to 
get rid of the user entirely, use DROP USER.



To safeguard, it would be better to run 'FLUSH PRIVILEGES'.

Can I re-run
REVOKE SELECT, INSERT, UPDATE, DELETE ON test.* FROM
'vmailuser'@'localhost';

then

FLUSH PRIVILEGES;


Yes.  I think on a version this recent it will have no effect, but it will not 
harm anything.

Cheers
Baron




date function question

2007-06-12 Thread Andrey Dmitriev
Can someone advise the best way to determine 
a) the # of days since the first of the month from last month (e.g. from 
5/1/07)
b) the # of days since the end of last month (e.g. from 5/31/07)

Is there a good way to determine the # of workdays in a month?

Thanks,
Andrey





determining if tables are being used

2007-06-12 Thread Sean Kelly

Keith,

The definitive way is to turn on the query log and watch it for a few days
or weeks.

In your my.cnf under [mysqld] add the line:

log=mysql-query

Then restart your server.

In your server's data directory (e.g. /usr/local/mysql/data/), there will
now be a log file called mysql-query.log. After some time there may be one
for each day, week or size interval.

This file, 'mysql-query.log' shows all queries and can be opened, cat'd,
tailed, etc.

Using the query log impacts performance, so comment out the line and restart
mysql when you're done.

Link to the mysql doc:

http://dev.mysql.com/doc/refman/5.0/en/query-log.html

-sean

On 6/11/07, Keith Murphy [EMAIL PROTECTED] wrote:


I have inherited database servers from a group whose main job was not
working on mysql. I am not certain if all the tables on the databases are
being used. Is there some efficient way of determining if a table is being
accessed or when the last time it was accessed?

Thanks,

Keith

--
Keith Murphy
Database Administrator
iContact
2635 Meridian Parkway, 2nd Floor
Durham, North Carolina 27713
(o) 919-433-0786
(c) 850-637-3877



Re: syntax to revoke

2007-06-12 Thread Stephen Liu
--- Baron Schwartz [EMAIL PROTECTED] wrote:

 OK, the privileges are gone.  USAGE is a synonym for no privileges.

Noted with tks.


  If you want to 
 get rid of the user entirely, use DROP USER.

Could you please explain in more detail???  Where shall I add DROP
USER


  To safeguard, it would be better to run 'FLUSH PRIVILEGES'.
  
  Can I re-run
  REVOKE SELECT, INSERT, UPDATE, DELETE ON test.* FROM
  'vmailuser'@'localhost';
  
  then
  
  FLUSH PRIVILEGES;
 
 Yes.  I think on a version this recent it will have no effect, but
 it will not harm anything.

Noted with tks.


B.R.
Stephen






Re: syntax to revoke

2007-06-12 Thread Baron Schwartz

Stephen Liu wrote:
 If you want to 
get rid of the user entirely, use DROP USER.


Could you please explain in more detail???  Where shall I add DROP
USER


The manual always explains the full syntax (http://dev.mysql.com/), but in 
brief,

DROP USER 'vmailuser'@'localhost';

will remove the user if you wish.

Baron




MySql Host through Heartbeat

2007-06-12 Thread Ben Clewett

Dear MySQL,

I'm running 5.0.26 through Heartbeat, which seems to work well, even as 
a replication slave with Heartbeat continuously stopping and starting the 
server.


Heartbeat moves MySQL around from server to server when a failure 
occurs.  I am trying to find a way for MySQL to report the server host 
name on which it's currently sitting, without any luck.


Would any kind members know of a way of getting this information from MySQL?

Many thanks,

Ben





Re: syntax to revoke

2007-06-12 Thread Stephen Liu
--- Baron Schwartz [EMAIL PROTECTED] wrote:

 Stephen Liu wrote:
   If you want to 
  get rid of the user entirely, use DROP USER.
  
  Could you please explain in more detail???  Where shall I add DROP
  USER
 
 The manual always explains the full syntax (http://dev.mysql.com/),

Whether you meant;

MySQL 5.0 Reference Manual
http://dev.mysql.com/doc/refman/5.0/en/index.html
Chapter 13. SQL Statement Syntax
http://dev.mysql.com/doc/refman/5.0/en/sql-syntax.html
???

Tks


 but in brief,
 
 DROP USER 'vmailuser'@'localhost';
 
 will remove the user if you wish.

Noted with tks.


Stephen





Re: MySql Host through Heartbeat

2007-06-12 Thread Baron Schwartz

Hi,

Ben Clewett wrote:

Dear MySQL,

I'm running 5.0.26 through Heartbeat.  Which seems to work well, even as 
a replication slave and Heartbeat continuously stopping and starting the 
server.


The Heartbeat moves MySQL around from server to server when a failure 
occurs.  I am trying to find a way for MySQL to report the server host 
name on which it's currently sitting.  Without any luck.


Would any kind members know of a way of getting this information from 
MySQL?


I only know of two ways, though there may be more.

1) The hostname system variable, which was added in 5.0.41 
(http://dev.mysql.com/doc/refman/5.0/en/releasenotes-cs-5-0-41.html)


2) Perhaps a UDF that makes a system call.

There might be some external ways to do it also.  For example, create a table with a 
single row, and have a startup script replace the value in it with the server's 
hostname upon startup.  Then you can query this value.


Cheers
Baron
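The startup-script idea above can be sketched in shell. Everything here is an assumption - the table name `server_info`, the database name `mydb`, and where the script hooks into your init sequence - so treat it as an outline, not a ready-made Heartbeat resource script.

```shell
#!/bin/sh
# On MySQL startup, record this machine's hostname in a one-row table
# so any client can ask which node the service currently lives on.
HOST=$(hostname)
SQL="REPLACE INTO server_info (id, hostname) VALUES (1, '$HOST');"
echo "$SQL"
# In a real startup hook you would pipe the statement into the client:
#   echo "$SQL" | mysql mydb
```

Clients can then run `SELECT hostname FROM server_info` to find the active node.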




Re: MySql Host through Heartbeat

2007-06-12 Thread Gerald L. Clark

Ben Clewett wrote:

Dear MySQL,

I'm running 5.0.26 through Heartbeat.  Which seems to work well, even as 
a replication slave and Heartbeat continuously stopping and starting the 
server.


The Heartbeat moves MySQL around from server to server when a failure 
occurs.  I am trying to find a way for MySQL to report the server host 
name on which it's currently sitting.  Without any luck.


Would any kind members know of a way of getting this information from 
MySQL?


Many thanks,

Ben



Heartbeat moves the IP address around as well as the services.
Your hostname should not change.


--
Gerald L. Clark
Supplier Systems Corporation




Re: MySql Host through Heartbeat

2007-06-12 Thread Ben Clewett

Baron,

Thanks for the fast reply.

I like the idea of piping the server name into a small table on startup. 
 Since this will only change on startup, it sounds like an excellent idea.


Or I may upgrade to above 5.0.41...

Regards,

Ben


Baron Schwartz wrote:

Hi,

Ben Clewett wrote:

Dear MySQL,

I'm running 5.0.26 through Heartbeat.  Which seems to work well, even 
as a replication slave and Heartbeat continuously stopping and starting 
the server.


The Heartbeat moves MySQL around from server to server when a failure 
occurs.  I am trying to find a way for MySQL to report the server 
host name on which it's currently sitting.  Without any luck.


Would any kind members know of a way of getting this information from 
MySQL?


I only know of two ways, though there may be more.

1) The hostname system variable, which was added in 5.0.41 
(http://dev.mysql.com/doc/refman/5.0/en/releasenotes-cs-5-0-41.html)


2) Perhaps a UDF that makes a system call.

There might be some external ways to do it also.  For example, create a 
table with a single row, and have a startup script replace the value in 
it with the server's hostname upon startup.  Then you can query this value.


Cheers
Baron







Re: MySql Host through Heartbeat

2007-06-12 Thread Baron Schwartz

Gerald L. Clark wrote:

Ben Clewett wrote:

Dear MySQL,

I'm running 5.0.26 through Heartbeat.  Which seems to work well, even 
as a replication slave and Heartbeat continuously stopping and starting 
the server.


The Heartbeat moves MySQL around from server to server when a failure 
occurs.  I am trying to find a way for MySQL to report the server 
host name on which it's currently sitting.  Without any luck.


Would any kind members know of a way of getting this information from 
MySQL?


Many thanks,

Ben



Heartbeat moves the IP address around as well as the services.
Your hostname should not change.


DNS won't change, but the server's /etc/hostname will, right?

Disclaimer: I'm no expert on this... I didn't even know the IP address moved too.  I 
should read about Heartbeat.


Baron




Re: MySql Host through Heartbeat

2007-06-12 Thread Gerald L. Clark

Baron Schwartz wrote:

Gerald L. Clark wrote:


Ben Clewett wrote:


Dear MySQL,

I'm running 5.0.26 through Heartbeat.  Which seems to work well, even 
as a replication slave and Heartbeat continuously stopping and 
starting the server.


The Heartbeat moves MySQL around from server to server when a failure 
occurs.  I am trying to find a way for MySQL to report the server 
host name on which it's currently sitting.  Without any luck.


Would any kind members know of a way of getting this information from 
MySQL?


Many thanks,

Ben



Heartbeat moves the IP address around as well as the services.
Your hostname should not change.



DNS won't change, but the server's /etc/hostname will, right?

Disclaimer: I'm no expert on this... I didn't even know the IP address 
moved too.  I should read about Heartbeat.


Baron

Do you actually have /etc/hostname?
RHEL and Centos do not.
They do have an entry in /etc/sysconfig/network

--
Gerald L. Clark
Supplier Systems Corporation




Fwd: Table crashing

2007-06-12 Thread Sharique uddin Ahmed Farooqui

Tables are frequently crashing on my site. Here is the message I'm seeing:

Warning: Table './mydb/sessions' is marked as crashed and should be
repaired query: SELECT u.*, s.* FROM users u INNER JOIN sessions s ON
u.uid = s.uid WHERE s.sid = '854c6474111de174abbddf77f74b9d99' in
/www/ims/includes/database.mysqli.inc on line 151

I don't understand what's wrong with the db.
--
Sharique uddin Ahmed Farooqui
(C++/C# Developer, IT Consultant)
A revolution is about to begin.
A world is about to change.
And you and me are the initiator.


Re: MySql Host through Heartbeat

2007-06-12 Thread Baron Schwartz

Gerald L. Clark wrote:

Baron Schwartz wrote:

Gerald L. Clark wrote:


Ben Clewett wrote:


Dear MySQL,

I'm running 5.0.26 through Heartbeat.  Which seems to work well, 
even as a replication slave and Heartbeat continuously stopping and 
starting the server.


The Heartbeat moves MySQL around from server to server when a 
failure occurs.  I am trying to find a way for MySQL to report the 
server host name on which it's currently sitting.  Without any luck.


Would any kind members know of a way of getting this information 
from MySQL?


Many thanks,

Ben



Heartbeat moves the IP address around as well as the services.
Your hostname should not change.



DNS won't change, but the server's /etc/hostname will, right?

Disclaimer: I'm no expert on this... I didn't even know the IP address 
moved too.  I should read about Heartbeat.


Baron

Do you actually have /etc/hostname?
RHEL and Centos do not.
They do have an entry in /etc/sysconfig/network


Debian and Gentoo have /etc/hostname and /etc/conf.d/hostname, respectively.  I would 
think this is what /usr/bin/hostname uses, and probably where the hostname server 
variable gets set from in MySQL 5.0.41.


Baron




Re: date function question

2007-06-12 Thread Peter Brawley

 # of days since the first of the month from last month
datediff(now(),date(concat(period_add(date_format(now(),'%Y%m'),-1),'01')))

 the # of days since the end of last month (e.g. from 5/31/07)
datediff(now(),date_sub(concat(date_format(now(),'%Y-%m-'),'01'),INTERVAL 
1 DAY))


PB
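For comparison, the same two deltas (plus a naive workday count that ignores public holidays) in Python, using only the standard library. Like NOW() in the SQL above, everything is computed relative to today's date.

```python
import calendar
from datetime import date, timedelta

today = date.today()

# First day of last month: step back past day 1 of the current month.
first_of_this_month = today.replace(day=1)
last_of_prev_month = first_of_this_month - timedelta(days=1)
first_of_prev_month = last_of_prev_month.replace(day=1)

# a) days since the first of last month (DATEDIFF(NOW(), ...))
print((today - first_of_prev_month).days)

# b) days since the end of last month
print((today - last_of_prev_month).days)

# Workdays (Mon-Fri) in the current month; ignores public holidays.
days_in_month = calendar.monthrange(today.year, today.month)[1]
workdays = sum(
    1
    for d in range(1, days_in_month + 1)
    if date(today.year, today.month, d).weekday() < 5
)
print(workdays)
```

For example, on 2007-06-12 this prints 42 (since 5/1/07), 12 (since 5/31/07), and 21 workdays for June 2007.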

-

Andrey Dmitriev wrote:
Can someone advise the best way to determine 
a) the # of days since the first of the month from last month (e.g. from 
5/1/07)

b) the # of days since the end of last month (e.g. from 5/31/07)

Is there a good way to determine the # of workdays in a month?

Thanks,
Andrey


  





Error 2013 (Lost Connection to MySQL Server) On A Solaris 10 Box

2007-06-12 Thread Rajesh Mehrotra
Hi,
 
We copied over a test copy of a multi-GB database to a new Sun V440 box
running Solaris 10 (64-bit) with a 64-bit MySQL install. When trying to
connect to the database, we consistently get Error 2013.
 
Any pointers?
 
TIA.
 
Sincerely,

Raj Mehrotra
hccs
[EMAIL PROTECTED]



Re: Error 2013 (Lost Connection to MySQL Server) On A Solaris 10 Box

2007-06-12 Thread Baron Schwartz

Rajesh Mehrotra wrote:

Hi,
 
We copied over a test copy of a multi-GB database to a new Sun V440 box

running Solaris 10 (64-bit) with a 64-bit MySQL install. When trying to
connect to the database, we consistently get Error 2013.
 
Any pointers?


Check the server's error log and see if there's anything in it.

Baron




Re: date function question

2007-06-12 Thread Baron Schwartz

There is also a LAST_DAY() function that returns the last day of the month:

http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html

Peter Brawley wrote:

  # of days since the first of the month from last month
datediff(now(),date(concat(period_add(date_format(now(),'%Y%m'),-1),'01')))

  the # of days since the end of last month (e.g. from 5/31/07)
datediff(now(),date_sub(concat(date_format(now(),'%Y-%m-'),'01'),INTERVAL 
1 DAY))


PB

-

Andrey Dmitriev wrote:
Can someone advise the best way to determine a) the # of days since 
the first of the month from last month (e.g. from 5/1/07)

b) the # of days since the end of last month (e.g. from 5/31/07)

Is there a good way to determine the # of workdays in a month?

Thanks,
Andrey


  




--
Baron Schwartz
http://www.xaprb.com/




Why doesn't MySQL support gzip encode/decode functions.

2007-06-12 Thread Kevin Burton

Why doesn't MySQL support gzip for COMPRESS/UNCOMPRESS, only zlib?

For network applications zlib is a lot less compatible than gzip.

For example I could send gzip'd content directly from the database within a
larger gzip'd stream.

Kevin

--
Founder/CEO Tailrank.com
Location: San Francisco, CA
AIM/YIM: sfburtonator
Skype: burtonator
Blog: feedblog.org
Cell: 415-637-8078