Re: key_buffer_size vs innodb_buffer_pool_size

2006-01-15 Thread Eric Bergen
The difference in the recommended sizes comes from the different
techniques each storage engine uses to cache data. MyISAM's
key_buffer_size caches only indexes, whereas InnoDB's
innodb_buffer_pool_size caches both indexes and data. MyISAM relies on
the operating system to cache table data in RAM, which is why you don't
want to give all available memory to the key buffer.
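
As a rough sketch only (the 4 GB machine and the exact numbers below are
illustrative assumptions, not recommendations from this thread), a box with
an even mix of MyISAM and InnoDB tables might split memory like this in my.cnf:

    # hypothetical server with 4 GB of RAM, mixed MyISAM/InnoDB workload
    [mysqld]
    key_buffer_size         = 512M    # MyISAM index cache only
    innodb_buffer_pool_size = 1536M   # InnoDB data *and* index pages

The key buffer deliberately gets much less than the buffer pool, because
MyISAM data pages are cached by the operating system's file cache rather
than by mysqld.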


On 1/14/06, Grant Giddens [EMAIL PROTECTED] wrote:
 Hi,

 After reading through the example my.cnf files (large, huge, etc.), I
 started to wonder what the difference was between the MyISAM key_buffer_size
 and the InnoDB innodb_buffer_pool_size.

   I realize that they are two different table types, but some of the docs
 say to set the key_buffer_size to 25%-50% of overall system memory, while
 the comments for innodb_buffer_pool_size say that it can be set to
 50%-80% of overall system memory.

   Maybe I don't understand exactly the difference between the two, because I
 don't understand why they have different memory recommendations.

   Are there any FAQs on the my.cnf file? How would you set these two
 variables if you had an even mix of MyISAM and InnoDB tables? Where can I
 learn more about tweaking the my.cnf file? The MySQL online documentation
 is good, but I need a more basic description of these two variables and all
 the other my.cnf settings.

   Thanks,
   Grant






--
Eric Bergen
[EMAIL PROTECTED]
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



ERROR 1114 (HY000): The table is full converting a big table from MyISAM to InnoDB on 5.0.18

2006-01-15 Thread Patrick Herber
Hello!
I have a database with a big table (data file 45 GB, index file 30 GB).
Since I have some performance trouble with table locking in a multi-user
environment (when one user performs a complex query, all the others have to
wait up to 1 minute, which is not very nice...), I would like to convert
this (and other tables) to the InnoDB engine.
 
I first tried using the innodb_file_per_table option but when running the
statement
 
ALTER TABLE invoice ENGINE=INNODB;
 
ERROR 1114 (HY000): The table '#sql...' is full
 
(this happened about one hour after the start of the command, when the size of the
file was bigger than about 70 GB; I don't know the exact size)
 
I tried then without the innodb_file_per_table option, setting my
innodb_data_file_path as follows:
 
innodb_data_file_path=ibdata1:500M;ibdata2:500M;ibdata3:500M;ibdata4:500M;ibdata5:500M;ibdata6:500M;ibdata7:500M;ibdata8:500M;ibdata9:500M;ibdata10:500M:autoextend

Also in this case I got the same error message.
 
What should I do in order to convert this table?
 
Should I set innodb_data_file_path to, for example, 50 files of 4 GB each?
 
Thanks a lot for your help.
 
Best regards,
Patrick
 
PS: I'm running MySQL 5.0.18 on a Linux 2.6.13-15.7-smp server.


pam_mysql + Cyrus IMAP + MySQL 5.0

2006-01-15 Thread Ady Wicaksono

Dear All,

I have a table like this; please pay attention to the password field:

+----------+------------------+--------+-------------+
| username | password         | prefix | domain_name |
+----------+------------------+--------+-------------+
| cyrus    | 747fc82325405198 |        |             |
+----------+------------------+--------+-------------+

As you can see, the password field uses the OLD_PASSWORD() format.

This table consists of millions of email users that are commonly
authenticated through Cyrus-SASL (saslauthd) with pam_mysql support.

When I upgraded to MySQL 5.0.18, I found a problem: no user can log in.

I had a look at the code inside pam_mysql (pam_mysql version 0.4.5) and
found this in pam_mysql.c:

   573 /* PASSWORD */
   574 case 2: make_scrambled_password(encryptedPass, passwd);
   575         break;
   576

As you know, make_scrambled_password() is the C API function that works like
the PASSWORD() function in MySQL. However, I saved all the user passwords
with OLD_PASSWORD(), or actually with PASSWORD() on an older MySQL version.


Does anybody know which C API function works like OLD_PASSWORD()?
Or any other tricks?


Thx
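
A hedged sketch of the usual fix: the 4.1/5.0 client library also exports
make_scrambled_password_323(), which produces the old 16-hex-character hash
that OLD_PASSWORD() generates (check your installed mysql_com.h to confirm
the declaration). A tiny standalone test, assuming a 5.0 client install:

    /* oldhash.c - print an OLD_PASSWORD()-style hash for a given password.
     * Build (paths assumed): gcc -o oldhash oldhash.c $(mysql_config --cflags --libs)
     */
    #include <stdio.h>
    #include <mysql.h>            /* pulls in mysql_com.h */

    int main(int argc, char **argv)
    {
        char old_hash[17];        /* 16 hex chars + terminating NUL */
        const char *pw = (argc > 1) ? argv[1] : "secret";

        make_scrambled_password_323(old_hash, pw);   /* pre-4.1 scrambling */
        printf("%s\n", old_hash);
        return 0;
    }

Inside pam_mysql.c the corresponding change would be to call
make_scrambled_password_323(encryptedPass, passwd) in the case 2 branch
(or, better, add a separate config value for old-format passwords), assuming
encryptedPass is at least 17 bytes.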





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: ERROR 1114 (HY000): The table is full converting a big table from MyISAM to InnoDB on 5.0.18

2006-01-15 Thread Jocelyn Fournier

Hi,

I think you should change the tmpdir variable to point to a directory which
has enough room to create your big temporary table (by default, it points
to the /tmp directory).
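
For example, a minimal my.cnf fragment (the path itself is just an
illustration; use any filesystem with enough free space for the table copy):

    [mysqld]
    tmpdir = /data/mysqltmp    # must exist and be writable by the mysql user

The server needs a restart to pick up the new value.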


Regards,
  Jocelyn

Patrick Herber wrote:

Hello!
I have a database with a big table (Data File 45 GB, Index File 30 GB). 
Since I have some performance troubles with table-locking in a multi-user

environment (when one of them performs a complex query all the other have to
wait up to 1 minute, which is not very nice...), I would like to convert
this (and other tables) into InnoDB engine.
 
I first tried using the innodb_file_per_table option but when running the

statement
 
ALTER TABLE invoice ENGINE=INNODB;
 
ERROR 1114 (HY000): The table '#sql...' is full
 
(this about one our after the start of the command, when the size of the

file was bigger than ca. 70GB (I don't know exactly the size))
 
I tried then without the innodb_file_per_table option, setting my

innodb_data_file_path as follows:
 
innodb_data_file_path=ibdata1:500M;ibdata2:500M;ibdata3:500M;ibdata4:500M;ibdata5:500M;ibdata6:500M;ibdata7:500M;ibdata8:500M;ibdata9:500M;ibdata10:500M:autoextend

Also in this case I got the same error message.
 
What should I do in order to convert this table?
 
Should I set in the innodb_data_file_path for example 50 Files, each big 4GB

?
 
Thanks a lot for your help.
 
Best regards,

Patrick
 
PS: I'm running MySQL 5.0.18 on a Linux 2.6.13-15.7-smp server.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: ERROR 1114 (HY000): The table is full converting a big table from MyISAM to InnoDB on 5.0.18

2006-01-15 Thread Patrick Herber
Thanks a lot for your answer!
However, when I used the innodb_file_per_table option I saw that the temp
file (#sql...) was created in my DB directory, and on that partition I still
have plenty of space (more than 200 GB).
Do you think I can't use this option for such a big table and have to use
innodb_data_file_path instead?

Thanks a lot and regards,
Patrick

 -Original Message-
 From: Jocelyn Fournier [mailto:[EMAIL PROTECTED] 
 Sent: Sunday, 15 January 2006 15:09
 To: Patrick Herber
 Cc: mysql@lists.mysql.com
 Subject: Re: ERROR 1114 (HY000): The table is full converting 
 a big table from MyISAM to InnoDB on 5.0.18
 
 Hi,
 
 I think you should change the tmpdir variable value to a 
 directory which
   have enough room to create your temp big table (by default, 
 it points to /tmp dir).
 
 Regards,
Jocelyn
 
 Patrick Herber wrote:
  Hello!
  I have a database with a big table (Data File 45 GB, Index 
 File 30 GB). 
  Since I have some performance troubles with table-locking in a 
  multi-user environment (when one of them performs a complex 
 query all 
  the other have to wait up to 1 minute, which is not very 
 nice...), I 
  would like to convert this (and other tables) into InnoDB engine.
   
  I first tried using the innodb_file_per_table option but 
 when running 
  the statement
   
  ALTER TABLE invoice ENGINE=INNODB;
   
  ERROR 1114 (HY000): The table '#sql...' is full
   
  (this about one our after the start of the command, when 
 the size of 
  the file was bigger than ca. 70GB (I don't know exactly the size))
   
  I tried then without the innodb_file_per_table option, setting my 
  innodb_data_file_path as follows:
   
  
 innodb_data_file_path=ibdata1:500M;ibdata2:500M;ibdata3:500M;ibdata4:500M;ibdata5:500M;ibdata6:500M;ibdata7:500M;ibdata8:500M;ibdata9:500M;ibdata10:500M:autoextend
  
  Also in this case I got the same error message.
   
  What should I do in order to convert this table?
   
  Should I set in the innodb_data_file_path for example 50 
 Files, each 
  big 4GB ?
   
  Thanks a lot for your help.
   
  Best regards,
  Patrick
   
  PS: I'm running MySQL 5.0.18 on a Linux 2.6.13-15.7-smp server.
  
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



A difficult query- urgent for me

2006-01-15 Thread [EMAIL PROTECTED]
Dear Friends,
I have a problem; please try to help me solve it.
There is a table with columns a and b.
I want that if a contains a particular word, then a's value should be returned,
otherwise b's value should be returned, and there must be one and only one
column returned. I have MySQL 4.x and I think the logic will be something like
the following. I know the syntax is wrong, but ...
SELECT IF(a REGEXP CONCAT('word', '$'), a, b) FROM table_name WHERE a =
'anything';

I think I am clear; if not, please do ask me questions.
I shall be very grateful if any one of you can give me a solution.
--
Regards
Abhishek jain.





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: A difficult query- urgent for me

2006-01-15 Thread Jochem van Dieten
On 1/15/06, [EMAIL PROTECTED] wrote:

 Actually there is a table with columns a and b .
 So i want if a contains a particular word than a's value should return else
 'b' value should return.

SELECT
  CASE
WHEN a = 'Good' THEN a
ELSE b
  END
FROM
  table
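
If the requirement is really "contains a particular word" rather than an
exact match, a sketch along the same lines (table and column names as in
the question; 'word' is a placeholder):

SELECT
  CASE
    WHEN a LIKE '%word%' THEN a
    ELSE b
  END
FROM
  table_name;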

Jochem


Help with SQL DELETE issue

2006-01-15 Thread David Rabinowitz

Hi,


We are using MySQL 4.1.16, recently upgraded from 4.0.18. On the old
server we tried not to delete records, because of a belief held by a former
colleague that deleting records will corrupt the table's index and force us
to run REPAIR TABLE. I couldn't find any documentation on that, and
unfortunately he is not working here any more, so we cannot ask him where
he heard about it.



Can someone confirm or deny this?


Regards,
David




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: A difficult query- urgent for me

2006-01-15 Thread Rhino

An example of the _data_ would be very helpful here.

For example, let's say this is your data:

Col_A      Col_B
-----      --------
ace        expert
dog        hound
hungry     ravenous

If you are searching for the word ace, you should find it. You then want 
the SQL to return ace, right?


If you are searching for the word puppy, you won't find it in the first 
column. Which word from the second column do you want to return??? There is 
no obvious reason to prefer any of the different values in the second column 
when the search word does not appear in the first column.


Or do you only have a single row in this table? If so, I'm not sure why you 
want to create a table just to contain these two values; it might be 
justified, depending on what you are doing, but it seems unlikely.


Can you clarify what you are trying to accomplish? Otherwise, it's going to 
be hard to help you.


Also, which version of MySQL are you on, 4.0.x or 4.1.x? It might make a big 
difference to the answer I would give since 4.0.x does not support 
subqueries while 4.1.x does.


Rhino

- Original Message - 
From: [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Sunday, January 15, 2006 10:21 AM
Subject: A difficult query- urgent for me


Dear Friends,
I have a problm, try to solve that.
Actually there is a table with columns a and b .
So i want if a contains a particular word than a's value should return else
'b' value should return. And there must be one and only one column
returning. I have mysql 4.x and i think the logic will be like. I know i am
wrong by syntax but ...
SELECT IF(a REGEXP CONCAT('word' , '$'),a,b) from table_name where a =
'anything';

I think i am clear if not do please ask me questions.
I shall be very grateful if any one of you can give me a solutions.
--
Regards
Abhishek jain.





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Help with SQL DELETE issue

2006-01-15 Thread Jocelyn Fournier

Hi,

Unless he found a bug in an older version of MySQL, it's of course
false! (That would be a major issue which would make MySQL just unusable.)


Regards,
  Jocelyn
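
As a hedged aside (standard MySQL maintenance, not something claimed in
this thread): deleting rows from a MyISAM table leaves reusable holes in
the data file rather than corrupting the index. If you want to verify or
defragment a table after large deletes, the usual commands are:

    DELETE FROM some_table WHERE created < '2005-01-01';  -- hypothetical bulk delete
    CHECK TABLE some_table;     -- verifies index/data consistency
    OPTIMIZE TABLE some_table;  -- reclaims space left by deleted rows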

David Rabinowitz wrote:

Hi,


We are using MySQL 4.1.16, recently upgraded from 4.0.18. On the old 
server we tried not to delete records, as their is a common belief that 
deleting records will corrupt the table's index and we will have to call 
repair table. I couldn't find any documentation on that. Unfortunately 
he is not working here any more, so we cannot ask him where he heard 
about it.



Can someone confirm or deny this?


Regards,
David






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL 5.0.15 in Linux OS

2006-01-15 Thread Daniel Kasak

Reynier Perez Mira wrote:


Hi list:
Recently I downloaded the MySQL source from the MySQL.com site to install it on my
Linux server. I followed the standard procedure, namely:

shell> ./configure
shell> make
shell> make install

 


That's not what the documentation says to do.
Have a look at the INSTALL-SOURCE file in the source distribution.
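
For reference, a sketch of the documented sequence from the 5.0 manual's
source-installation chapter (paths and the my-medium.cnf choice are the
manual's defaults; adjust as needed):

shell> groupadd mysql
shell> useradd -g mysql mysql
shell> ./configure --prefix=/usr/local/mysql
shell> make
shell> make install
shell> cp support-files/my-medium.cnf /etc/my.cnf
shell> cd /usr/local/mysql
shell> bin/mysql_install_db --user=mysql
shell> chown -R root .
shell> chown -R mysql var
shell> chgrp -R mysql .
shell> bin/mysqld_safe --user=mysql &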

--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Weird behaviour of Load Data

2006-01-15 Thread Rhino
I'm getting some odd behaviour from the 'load data' command which I can't
resolve. For some reason, 'load data' is putting a quotation mark at the end
of the first column of the table.


Here is my table definition:

create table if not exists Ref
(ref_name varchar(30) not null,
ref_org varchar(30) not null,
ref_title varchar(30) not null,
ref_email varchar(30) not null,
ref_phone varchar(30) not null,
ref_calltime varchar(30) not null,
primary key(ref_name),
index r_pk (ref_name)
) Type=InnoDB;


Here is my 'load data' statement:

load data infile '/home/rhino/MySQL/Ref.asc'
replace into table Ref
fields terminated by ';'
optionally enclosed by '"'
escaped by '\\'
lines starting by '"'
terminated by '\n';

Here is an excerpt of my data file:

Joe Blow;ABC Inc.;CEO;[EMAIL PROTECTED];(212) 555-1212 (office);call anytime
Fred Smith;DEF Corp.;CFO;[EMAIL PROTECTED];(213) 555- (home);call evenings


Here is what I get when I display the contents of the table via a Select *:

Joe Blow"     ABC Inc.    CEO    [EMAIL PROTECTED]    (212) 555-1212 (office)    call anytime
Fred Smith"   DEF Corp.   CFO    [EMAIL PROTECTED]    (213) 555- (home)          call evenings


For some reason, 'load data' is putting a quotation mark, i.e. a ", at the end
of each value in the first column of the table, e.g. Joe Smith" instead of Joe
Smith. If I load the same data with Insert statements rather than 'load
data', the result is identical except that the trailing quotation mark at
the end of each name in the first column is absent, which is precisely as it
should be.


I'm trying to populate this table with 'load data' if possible - it's more
convenient than having to write the Insert statements - but I keep getting
the unwanted quotation marks when I do it that way, despite trying umpteen
variations. Does anyone know why 'load data' insists on putting this
quotation mark in each row?


I don't have that same behaviour in other very similar situations in which
the table definition, the 'load data' statement and the organization of the
data file are comparable.


I can't find anything on this problem in the manual or in the archives of
the mailing lists.


I'm running MySQL 4.0.15-Max.

---
Rhino





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Weird behaviour of Load Data

2006-01-15 Thread Rhino


- Original Message - 
From: John Doe [EMAIL PROTECTED]

To: Rhino [EMAIL PROTECTED]
Sent: Sunday, January 15, 2006 6:15 PM
Subject: Re: Weird behaviour of Load Data



Rhino wrote on Sunday, 15 January 2006, 23:53:

I'm getting some odd behaviour from the 'load data' command which I can't
resolve. For some reason, 'load data' is putting a quotation at the end 
of

the first column of the table

Here is my table definition:

create table if not exists Ref
(ref_name varchar(30) not null,
 ref_org varchar(30) not null,
 ref_title varchar(30) not null,
 ref_email varchar(30) not null,
 ref_phone varchar(30) not null,
 ref_calltime varchar(30) not null,
 primary key(ref_name),
 index r_pk (ref_name)
) Type=InnoDB;


Here is my 'load data' statement:

load data infile '/home/rhino/MySQL/Ref.asc'
replace into table Ref
fields terminated by ';'
optionally enclosed by ''
escaped by '\\'
lines starting by ''


Hi Rhino

My posts are not accepted on the list.

Without testing it, the 'lines starting by' clause could be the reason, since
that character is not included in the value, just as the line terminator is
not.

hth, joe

Your guess about the problem was not quite right, but it helped me find a
reasonable workaround.


I dropped the

   lines starting by '"'

clause from the load data statement and got an even worse result where _all_ 
of the fields had quotes before and after the data values - except that the 
first column had no end quotes. But that gave me the idea to put an extra 
character, a greater than sign, in the first position of each row in the 
data file. Then, I changed the 'load data' statement so that it included


   lines starting by '>'

The data loaded correctly via the 'load data' statement.

I'm not sure why this should have been necessary - I had no problems before 
with the configuration I showed in my original note - but the problem is 
solved to my satisfaction.


Thank you for getting me onto the right track!

I'm copying the list with this note so that others can learn from it.

Rhino
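
A generic sketch of how the two clauses interact (hypothetical file and
table names, not the exact statement from this thread, which the archive
has mangled): given a file /tmp/people.txt whose lines look like

>"Joe Blow";"ABC Inc."

then

load data infile '/tmp/people.txt'
into table people
fields terminated by ';' optionally enclosed by '"'
lines starting by '>' terminated by '\n';

discards everything up to and including the '>' prefix on each line and
strips the enclosing quotes from each field, so the stored values are
Joe Blow and ABC Inc. with no quote characters. If the line prefix happens
to be the same character as the field enclosure, the first field's opening
quote is eaten by the prefix and its closing quote is kept as data, which
matches the symptom described above.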


terminated by '\n';

Here is an excerpt of my data file:

Joe Blow;ABC Inc.;CEO;[EMAIL PROTECTED];(212) 555-1212
(office);call anytime
Fred Smith;DEF Corp.;CFO;[EMAIL PROTECTED];(213) 555-
(home);call evenings

Here is what I get when I display the contents of the table via a Select 
*:


Joe BlowABC Inc.CEO[EMAIL PROTECTED](212) 555-1212
(office)call anytime
Fred SmithDEF Corp.   CFO[EMAIL PROTECTED](213) 555- 
(home)

call evenings

For some reason, 'load data' is putting a quotation, i.e. a , at the end
of each value in the first column of the table, e.g. Joe Smith instead 
of
Joe Smith. If I load the same data with Insert statements rather than 
'load

data', the result is identical except that the trailing quotation mark at
the end of each name in the first column is absent, which is precisely as
it should be.

I'm trying to populate this table with 'load data' if possible - it's 
more
convenient than having to write the Insert statements - but I keep 
getting

the unwanted quotations when I do it that way despite trying umpteen
variations. Does anyone know why 'load data' is insisting on putting this
quotation in each row?

I don't have that same behaviour in other very similar situations in 
which
the table definition, the 'load data' statement and the organization of 
the

data file is comparable.

I can't find anything on this problem in the manual or the archive of all
the mailing lists

I'm running MySQL 4.0.15'-Max'.

---
Rhino















--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Suse Version 10 Documentation for MYSQL.

2006-01-15 Thread Andrew Burrows
Hi MYSQL users.

 

I have just installed SuSE version 10 and need to start working with MySQL.
Could someone help me out regarding documentation? I don't seem to be able
to find any in the help files.

 

Andrew



Re: Suse Version 10 Documentation for MYSQL.

2006-01-15 Thread George Law

Andrew,

did you install MySQL when you installed SuSE?

you might have to fire up YaST and install the RPMs.

this is a little old... but should help:
http://www.novell.com/coolsolutions/feature/595.html
This is for 9.1, but the procedure should have remained pretty much the same
with SuSE 10.
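
A quick command-line sketch (package names are the usual SuSE ones and may
differ slightly on 10.0):

# see whether the MySQL packages are already there
rpm -qa | grep -i mysql

# install server and client through YaST's command-line mode
yast -i mysql mysql-client

# start the server and enable it at boot
rcmysql start
chkconfig mysql on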



--
George

- Original Message - 
From: Andrew Burrows [EMAIL PROTECTED]

To: 'MYSQL General List' mysql@lists.mysql.com
Sent: Sunday, January 15, 2006 9:05 PM
Subject: Suse Version 10 Documentation for MYSQL.



Hi MYSQL users.



I have just install SuSE Version 10 and need to start working with MYSQL
could someone help me out regarding

Documentation . Don't seem to be able to find any in the help files.



Andrew






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]