what is the rationale for not allowing LOCK TABLES in a stored procedure

2018-08-13 Thread jeff
Hello, I have read through several pages of the reference manual, and
I've seen several instances where it is stated that LOCK TABLES (and
UNLOCK TABLES) is not allowed in a stored procedure, but so far, I
haven't found an explanation as to *why* that is. Could someone please
enlighten me?

Thanks
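For what it's worth, the restriction is usually explained by prelocking: when you CALL a routine, the server computes the full set of tables the routine (and any triggers) will touch and acquires all those locks up front, so an explicit LOCK TABLES / UNLOCK TABLES inside the body would invalidate that prelocked set (and LOCK TABLES also causes an implicit commit, which stored programs must not do mid-execution). A hedged sketch of the usual workaround, with hypothetical procedure and table names, is to rely on transactional locking around the CALL instead:

```sql
-- Untested sketch; procedure and table names are hypothetical.
-- Instead of LOCK TABLES inside the body, let InnoDB row locks,
-- scoped to a transaction around the CALL, do the work:
START TRANSACTION;
CALL transfer_funds(1, 2, 100.00);  -- touches InnoDB tables only
COMMIT;
```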



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



BINLOG data

2012-10-02 Thread Jeff Smelser
I am having an issue with MySQL 5.1.52 and mysqlbinlog. Essentially I
am running the command piped into mysql, and all works well until I
get to a huge BINLOG statement, at which point I get:

121002 16:09:03 [ERROR] /usr/libexec/mysqld: Out of memory (Needed
3759556332 bytes)

Small versions process just fine.

BINLOG '
BqtgUBNBBgAAYgAAANeP7QAAAEMsAAEABHRlbXAAD3l0X3N5bnRocHViX2F1ZwARAw8PDw8P
D/YDAwP29vb2Dw8algAeAB4AlgCWAB4AAwMcCBwIHAgcCP0C/QI=
BqtgUBdBBgAACAQAAN+T7QAAAEMsEf///wAA/gEACDIwMTIwODAxAIPnDAMA
AMIBAADcRgAAgAAABcOpiICAAAIDw9JIgAAA
AuVLGP4CAAgyMDEyMDgwMgCD5yFJAACKBwAA0TQBAIAABgR7rXiA
gAAsBIW7sIAABgWvc7QAAAD+AwAIMjAxMjA4
MDMAg+dZWgAAKwgAALh5AQCAABYE6fowgCYloIAAUQE8
NyyAAA4ASHFQ/gQACDIwMTIwODA0AIPnBVwAANoHAAA2cQEAgAAA
AAAWBBGi/IBhqACAAD0CKo5ogAALAh073P4F
AAgyMDEyMDgwNQCD5zNoAADTCAAAz3cBAIAAGgGO8niAUmXA
gABCAIfs4IAADALT31QAAAD+BgAIMjAxMjA4MDYAg+csfgAA
7AsAAPuNAQCAAD0DFom8gAAABRKF4IAAWwKqkRyAABQE
j0Jc/gcACDIwMTIwODA3AIPnLYAAAOkJAADpfgEAgABIAWA/yIAA
AgHCImCAAFgAOBgogAAVBaL2DP4IAAgyMDEyMDgw
OACD50R+AACGCQAA3HwBAIAATwNswyyAAAEBuPqggACBAAcP
0IAAHAHZhkQAAAD+CQAIMjAxMjA4MDkAg+ebiQAAYAkAAJODAQCA
ADEDNHtcgAABBBOzgIAAfQQHr7iAABcFQDbA/goA
CDIwMTIwODEwAIPnWJcAAMAJAABNmgEAgABGA/Z9xIAAAQThseCA
AJQBPIEAgAAdBEfO9P4LAAgyMDEyMDgxMQCD5wyRAAAI


And the above goes on for pages and pages. Apparently I don't have
some memory setting high enough; I'm just trying to sort out which one.
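One observation, offered as a hedged suggestion: if that failed allocation is for a single replication event, the roughly 3.5 GiB it asks for is already beyond the 1 GiB hard ceiling of max_allowed_packet, so no buffer in the config below can be raised far enough; the oversized event itself would have to be avoided or split. A quick sanity check of the arithmetic:

```shell
# The failing allocation from the error log, in bytes, compared to
# the 1 GiB hard upper limit of max_allowed_packet.
needed=3759556332
gib=$((1024 * 1024 * 1024))
echo "whole GiB needed: $((needed / gib))"
```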

[mysqld]
innodb_file_per_table
datadir=/data/mysql/databases
socket=/var/lib/mysql/mysql.sock
user=mysql
port = 3306
tmpdir  = /data/mysql/temp
# skip-external-locking   = 1
# skip-name-resolve
open-files-limit= 2
socket = /var/lib/mysql/mysql.sock
collation_server=utf8_general_ci
character_set_server=utf8
event-scheduler = OFF
expire_logs_days = 7
#max_binlog_size = 500M
back_log = 500
max_connections = 100
max_connect_errors = 1
table_open_cache = 2048
table-definition-cache = 1024
max_allowed_packet = 16M
binlog_cache_size = 10M
max_heap_table_size = 2G
sort_buffer_size = 256M
join_buffer_size = 256M
thread_cache_size = 16
thread_concurrency = 8
query_cache_size = 128M
query_cache_limit = 8M
ft_min_word_len = 4
default-storage-engine = InnoDB
thread_stack = 768K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 2G
log-bin=mysql-bin
binlog_format=mixed
slow_query_log
long_query_time = 2
interactive-timeout = 2400
wait-timeout = 2400
ssl-ca  = /var/lib/mysql/certs/ca-cert.pem
ssl-cert= /var/lib/mysql/certs/server-cert.pem
ssl-key = /var/lib/mysql/certs/server-key.pem

# ***  Replication related settings
server-id = 3
skip-slave-start
## read-only

#*** MyISAM Specific options
key_buffer_size = 1024M
read_buffer_size = 8M
read_rnd_buffer_size = 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 1024M
myisam_max_sort_file_size = 20G
myisam_repair_threads = 1
myisam_recover

# *** INNODB Specific options ***
default-storage-engine  = InnoDB
innodb = FORCE
ignore-builtin-innodb
plugin-load=ha_innodb_plugin.so
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 130G
innodb_data_file_path = ibdata1:10M:autoextend
innodb_file_io_threads = 4
innodb_thread_concurrency = 16
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 8M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
innodb-open-files = 300

[mysqldump]
quick

max_allowed_packet = 16M

[mysql]
no-auto-rehash

[myisamchk]
key_buffer_size = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M

[mysqlhotcopy]
interactive-timeout

[mysqld_safe]

Thanks,
Jeff


thread connected and thread cached

2011-12-26 Thread Jeff Pang
Hello,

I got the two pictures attached from mycheckpoint's web graphs.
They show that threads_connected has been increasing since the 24th,
while threads_cached has been decreasing since the same day.
What does this mean for my MySQL server?

Thanks.



add index to slave but not master

2011-10-31 Thread Jeff Pang
Hello,

I have a question: if I add an index to a table on the slave, but
don't add it on the master, will it cause problems?

Thanks.
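Generally, an extra secondary index that exists only on the slave is harmless for replication: statement and row events apply the same way as long as the table's columns match the master. The thing to watch for is later DDL replicated from the master colliding with the slave-only index name. A sketch, with hypothetical table and index names:

```sql
-- Untested sketch; table and index names are hypothetical.
-- Run directly on the slave. DDL executed on the slave is not
-- replicated anywhere, so the index stays local to it.
ALTER TABLE orders ADD INDEX idx_customer (customer_id);
```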




log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master

2011-09-28 Thread Jeff Pang
Hello,

Yesterday I inserted a SQL statement of about 5 MB into the master;
max_allowed_packet on both master and slave was 4 MB at that time.
Then replication crashed, with the following in the log:

110928 23:31:53 [Note] Slave SQL thread initialized, starting
replication in log 'mysql-bin.002730' at position 52953235, relay log
'./mysqld-relay-bin.006210' position: 52953380
110928 23:31:53 [Note] Slave I/O thread: connected to master
'mysqlrepl@119.147.163.137:3306',replication started in log
'mysql-bin.002730' at position 54657199
110928 23:31:53 [ERROR] Error running query, slave SQL thread aborted.
Fix the problem, and restart the slave SQL thread with SLAVE START.
We stopped at log 'mysql-bin.002730' position 52953235
110928 23:31:53 [ERROR] Error reading packet from server: log event
entry exceeded max_allowed_packet; Increase max_allowed_packet on
master ( server_errno=1236)
110928 23:31:53 [ERROR] Got fatal error 1236: 'log event entry
exceeded max_allowed_packet; Increase max_allowed_packet on master'
from master when reading data from binary log
110928 23:31:53 [Note] Slave I/O thread exiting, read up to log
'mysql-bin.002730', position 54657199


I tried to increase the value of max_allowed_packet in both master and
slave to 1G:

mysql> show variables like 'max_allowed_packet';
+--------------------+------------+
| Variable_name      | Value      |
+--------------------+------------+
| max_allowed_packet | 1073741824 |
+--------------------+------------+
1 row in set (0.00 sec)

(And restarted both master and slave.)
But when I start the slave, the problem is still there, with the same error log.

Please help, thanks.
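One thing worth checking, offered as a hedged suggestion: the slave's existing relay logs may still contain the truncated oversized event fetched before the limit was raised, in which case the slave keeps tripping over the stale copy. Re-pointing the slave at the failing master coordinates (taken from the error log above) forces the I/O thread to re-fetch the event under the new limit:

```sql
-- Untested sketch, using the coordinates from the error log above.
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysql-bin.002730',
    MASTER_LOG_POS  = 52953235;
START SLAVE;
SHOW SLAVE STATUS\G
```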




Practical connection limits MySQL 5.1/5.5

2011-04-13 Thread Jeff Lee
Hey All,

Can anyone provide some guidance as to what the practical connection limits
to MySQL 5.1/5.5 are under linux?

We're running a ruby on rails application that establishes 50 to 100
connections to our database upon startup resulting in around 1,000
persistent db connections.  I've been told to expect anywhere from 5 - 10x
our current transaction volume and I'm trying to predict where we're going
to top out.  The servers are pretty beefy so I don't have a problem
reserving memory for connections if that's what it takes but was more
concerned about other problems that might be caused by having so many
connections.

I have started looking at doing connection concentration using MySQL Proxy
Funnel but it doesn't look like it's been updated in a while so I'm not sure
how far I'll get.

Thanks
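For planning purposes, the main cost of thousands of mostly-idle connections is the per-session buffers, which are allocated per connection (and sometimes per sort or join) rather than once globally. A sketch of the knobs to review is below; the values are illustrative assumptions for a many-connection workload, not recommendations:

```ini
[mysqld]
# Hard cap on simultaneous sessions; each one costs a thread plus
# per-session buffers.
max_connections   = 2000
# Reuse threads instead of creating one per connect.
thread_cache_size = 128
# These are allocated per connection (or per sort/join), so with
# thousands of connections they should stay small.
sort_buffer_size  = 256K
join_buffer_size  = 256K
thread_stack      = 256K
```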


RE: Using IF in a query to set a variable then sort on said variable

2009-10-26 Thread Jeff
Perhaps CASE is the way to go; I'll look into that this morning.

Unfortunately there are three groupings. So my IF or CASE needs to check for 
example:

if timezone = 3,5,6,7 then 1
if timezone = 1,2,4 then 2
if timezone = 8,9 then 3

So, it's a bit more complicated than I thought it would be. I originally
wanted MySQL to do this processing, as opposed to outputting the results
and then sorting an array.

Jeff

From: 卢钧轶 [mailto:cenal...@gmail.com]
Sent: Sunday, October 25, 2009 1:19 AM
To: Jeff
Subject: Re: Using IF in a query to set a variable then sort on said variable

Hi Jeff,

If there are only two candidate values for @tempzone, you can accomplish
your goal like this:

Select xxx from tbl_name d order by if( d.timezone in (1,3,5,8), 1 , 0);


2009/10/23 Jeff j...@platinumsynergy.com
I currently have a query like so:

SELECT p.fldId, p.fldFName, p.fldLName, p.fldEmail, p.fldPhone, p.resellerId, 
d.timezoneId, d.bestTime, d.lastDate, d.lastTime, d.lastConnected FROM 
tblProspects as p LEFT JOIN tblProspectsDetails as d ON d.prospectId = p.fldId 
WHERE p.uId = 46437 ORDER BY d.lastDate ASC, d.timezoneId DESC

Basically I have been trying several ways to get this set up to work but due to 
the way the timezones are set up in this system, a regular sort won't work.

What I want to know is: is it possible to check the value of a field and
set a variable, then sort on that? For example:

IF d.timezone = 5 THEN @tempzone = 1

Then sort the entire query on that tempzone. Basically what I want to do is 
test the timezone ids versus a few numbers, assign them a value and sort on 
that.

IF d.timezone = (1,3,5,8) THEN @tempzone = 1 (pseudo code)

Is this possible in a MySQL query directly? The only other option I guess I
have is to run a mass update on a field and then do the normal ORDER BY, but
doing three updates just to make one SELECT work seems kind of unfortunate :(

Jeff



RE: Using IF in a query to set a variable then sort on said variable

2009-10-26 Thread Jeff
An additional field is pretty much impossible, since the grouping would
change throughout the day, unfortunately. But thank you all for your
assistance; I will do some testing.

Jeff

-Original Message-
From: Jaime Crespo Rincón [mailto:jcre...@warp.es] 
Sent: Monday, October 26, 2009 12:52 PM
To: Jeff
Cc: mysql@lists.mysql.com
Subject: Re: Using IF in a query to set a variable then sort on said variable

2009/10/26 Jeff j...@platinumsynergy.com:
 Perhaps case is the way to go, I'll look into that this morning.

 Unfortunately there are three groupings. So my IF or CASE needs to check for 
 example:

 if timezone = 3,5,6,7 then 1
 if timezone = 1,2,4 then 2
 if timezone = 8,9 then 3

Yes, the CASE function is the way to go.

Anyway, check for performance issues: in that case, precalculating and
storing an additional field would be the best way (it could be done
with a trigger, for example).


-- 
Jaime Crespo
MySQL & Java Instructor
Warp Networks
http://warp.es
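The CASE approach suggested above, applied to the three groupings and the columns from Jeff's original query, might look like this untested sketch:

```sql
-- Untested sketch; columns and tables are from Jeff's earlier query,
-- and the three WHEN groups are his timezone groupings.
SELECT p.fldId, d.timezoneId, d.lastDate
FROM tblProspects AS p
LEFT JOIN tblProspectsDetails AS d ON d.prospectId = p.fldId
WHERE p.uId = 46437
ORDER BY CASE
           WHEN d.timezoneId IN (3,5,6,7) THEN 1
           WHEN d.timezoneId IN (1,2,4)   THEN 2
           WHEN d.timezoneId IN (8,9)     THEN 3
         END,
         d.lastDate ASC;
```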


Using IF in a query to set a variable then sort on said variable

2009-10-23 Thread Jeff
I currently have a query like so:

SELECT p.fldId, p.fldFName, p.fldLName, p.fldEmail, p.fldPhone, p.resellerId, 
d.timezoneId, d.bestTime, d.lastDate, d.lastTime, d.lastConnected FROM 
tblProspects as p LEFT JOIN tblProspectsDetails as d ON d.prospectId = p.fldId 
WHERE p.uId = 46437 ORDER BY d.lastDate ASC, d.timezoneId DESC

Basically I have been trying several ways to get this set up to work but due to 
the way the timezones are set up in this system, a regular sort won't work.

What I want to know is: is it possible to check the value of a field and
set a variable, then sort on that? For example:

IF d.timezone = 5 THEN @tempzone = 1

Then sort the entire query on that tempzone. Basically what I want to do is 
test the timezone ids versus a few numbers, assign them a value and sort on 
that.

IF d.timezone = (1,3,5,8) THEN @tempzone = 1 (pseudo code)

Is this possible in a MySQL query directly? The only other option I guess I
have is to run a mass update on a field and then do the normal ORDER BY, but
doing three updates just to make one SELECT work seems kind of unfortunate :(

Jeff


User Defined Types

2009-07-28 Thread Jeff Lanzarotta
Hello,

I am evaluating MySQL. I am coming from Microsoft SQL Server 2005.

This may have been discussed before but I have not been able to find it. In SQL 
Server you are able to define a user-defined type like this:

CREATE TYPE [dbo].[status] FROM INT NOT NULL

This will allow you to then define columns like this:

CREATE TABLE [dbo].[tableName]
(
    [statusColumn] [dbo].[status]
)

Is there such a thing in MySQL?

Thanks,

Jeff
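Since MySQL has no CREATE TYPE or user-defined domains, the usual substitutes are a plain column of the underlying type, or an ENUM when the domain is a small fixed set. An untested sketch of both, with hypothetical names:

```sql
-- Untested sketch; table and column names are hypothetical.
CREATE TABLE tableName (
    -- No domain support: document the intent on the column itself.
    statusColumn INT NOT NULL COMMENT 'plays the role of dbo.status',
    -- ENUM works when the domain is a small, fixed set of values.
    kind ENUM('new','open','closed') NOT NULL
);
```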


RE: User Defined Types

2009-07-28 Thread Jeff Lanzarotta
Darn, oh well.

Thanks for the quick response.

--- On Tue, 7/28/09, Gavin Towey gto...@ffn.com wrote:

From: Gavin Towey gto...@ffn.com
Subject: RE: User Defined Types
To: delux256-my...@yahoo.com, mysql@lists.mysql.com
Date: Tuesday, July 28, 2009, 2:39 PM

Nope

-Original Message-
From: Jeff Lanzarotta [mailto:delux256-my...@yahoo.com]
Sent: Tuesday, July 28, 2009 11:37 AM
To: mysql@lists.mysql.com
Subject: User Defined Types

Hello,

I am evaluating MySQL. I am coming from Microsoft SQL Server 2005.

This may have been discussed before but I have not been able to find it. In SQL 
Server you are able to define a user-defined type like this:

CREATE TYPE [dbo].[status] FROM INT NOT NULL

This will allow you to then define columns like this:

CREATE TABLE [dbo].[tableName]
(
    [statusColumn] [dbo].[status]
)

Is there such a thing in MySQL?

Thanks,

Jeff



Newbie First Use Connection Question - Mac OSX 10.5.6

2009-02-19 Thread Jeff Murdock
I am brand new to MySQL and JAVA/Netbeans 6.5 so please excuse the  
stupid questions ...


1. I have just downloaded and successfully installed MySQL v5.1 on my  
MacBook Pro running OS X 10.5.6


2. I have also downloaded and installed MySQL Tools: Administrator &
Query Browser


(I come from a Visual Basic & MS SQL Server 2000/2005 environment)

When you install MS SQL server the default login is sa with a blank  
password.


My question is:

How do I log in (connection settings) to MySQL (for the Administrator and
Query Browser tools)?



(I start my server by going to Settings and then the MySQL icon, Start
Server, so my server is running.) Unfortunately, I have never seen
MySQL in action, nor do I know anyone to ask or show me how to get started.




Newbie Question - MySQL Administrator

2009-02-19 Thread Jeff Murdock

This is on a Mac OS X (v10.5.6) system in case that matters.



1. - MySQL Administrator Help button says:

HELP

Help isn’t available for MySQL Administrator.


Really, no help, or did I screw up the install somehow?



2. - I tried to create my first Table in MySQL Administrator but got  
this message:


ERROR

Error executing SQL commands to create table.
You have an error in your SQL syntax; check the manual that  
corresponds to your MySQL server version for the right syntax to use  
near 'DEFAULT NULL,

  `LastName` VARCHAR DEFAULT NULL,
  `Street1` VARCHAR DEFAULT NUL' at line 3 (error 1064)


Trying to Execute this:


CREATE TABLE `test`.`AddressBook` (
  `RecNo` INT NOT NULL AUTO_INCREMENT,
  `FirstName` VARCHAR DEFAULT NULL,
  `LastName` VARCHAR DEFAULT NULL,
  `Street1` VARCHAR DEFAULT NULL,
  `Street2` VARCHAR DEFAULT NULL,
  `City` VARCHAR DEFAULT NULL,
  `State` VARCHAR DEFAULT NULL,
  `Zip` VARCHAR DEFAULT NULL,
  `HomePhone` VARCHAR DEFAULT NULL,
  `CellPhone` VARCHAR DEFAULT NULL,
  PRIMARY KEY (`RecNo`)
)
CHARACTER SET utf8
COMMENT = 'Sample';
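The 1064 error above comes from using VARCHAR without a length, which MySQL requires. A corrected version of the statement (the lengths below are placeholders, not recommendations) would be:

```sql
CREATE TABLE `test`.`AddressBook` (
  `RecNo` INT NOT NULL AUTO_INCREMENT,
  `FirstName` VARCHAR(50) DEFAULT NULL,
  `LastName` VARCHAR(50) DEFAULT NULL,
  `Street1` VARCHAR(50) DEFAULT NULL,
  `Street2` VARCHAR(50) DEFAULT NULL,
  `City` VARCHAR(50) DEFAULT NULL,
  `State` VARCHAR(50) DEFAULT NULL,
  `Zip` VARCHAR(10) DEFAULT NULL,
  `HomePhone` VARCHAR(20) DEFAULT NULL,
  `CellPhone` VARCHAR(20) DEFAULT NULL,
  PRIMARY KEY (`RecNo`)
)
CHARACTER SET utf8
COMMENT = 'Sample';
```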







Jeff






delete query question

2008-07-08 Thread Jeff Mckeon
I think this is possible but I'm having a total brain fart as to how to
construct the query..

Table2.ticket = table1.ID

Table2 is a many to 1 relationship to table1

I need to delete all records from table1 where created <
unix_timestamp(date_sub(now(), interval 3 month))
And all rows from table2 where Table2.ticket = Table1.ID (of the deleted
rows..)

Can't this be done in one query? Or two?

Thanks,

Jeff







RE: delete query question

2008-07-08 Thread Jeff Mckeon


 -Original Message-
 From: Ian Simpson [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, July 08, 2008 11:27 AM
 To: Jeff Mckeon
 Cc: mysql@lists.mysql.com
 Subject: Re: delete query question
 
 If the tables are InnoDB, you could temporarily set up a foreign key
 relationship between the two, with the 'ON DELETE CASCADE' option.
 

Nope, MyISAM...

 On Tue, 2008-07-08 at 11:14 -0400, Jeff Mckeon wrote:
  I think this is possible but I'm having a total brain fart as to how
 to
  construct the query..
 
  Table2.ticket = table1.ID
 
  Table2 is a many to 1 relationship to table1
 
  I need to delete all records from table1 where created <
  unix_timestamp(date_sub(now(), interval 3 month))
  And all rows from table2 where Table2.ticket = Table1.ID (of the
 deleted
  rows..)
 
  Can't this be done in one query? Or two?
 
  Thanks,
 
  Jeff
 
 
 
 
 --
 Ian Simpson
 System Administrator
 MyJobGroup
 
 This email may contain confidential information and is intended for the
 recipient(s) only. If an addressing or transmission error has
 misdirected this email, please notify the author by replying to this
 email. If you are not the intended recipient(s) disclosure,
 distribution, copying or printing of this email is strictly prohibited
 and you should destroy this mail. Information or opinions in this
 message shall not be treated as neither given nor endorsed by the
 company. Neither the company nor the sender accepts any responsibility
 for viruses or other destructive elements and it is your responsibility
 to scan any attachments.





RE: delete query question

2008-07-08 Thread Jeff Mckeon
Thanks, that did it!

 -Original Message-
 From: Peter Brawley [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, July 08, 2008 11:57 AM
 To: Jeff Mckeon
 Cc: mysql@lists.mysql.com
 Subject: Re: delete query question
 
 Jeff,
 
 Table2.ticket = table1.ID
 Table2 is a many to 1 relationship to table1
 I need to delete all records from table1 where created <
 unix_timestamp(date_sub(now(), interval 3 month))
 And all rows from table2 where Table2.ticket = Table1.ID
 (of the deleted rows..)
 
 Like this (untested)?
 
 DELETE t1, t2
 FROM table1 t1
 JOIN table2 t2 ON t1.id = t2.ticket
 WHERE t1.created < UNIX_TIMESTAMP( DATE_SUB( NOW(), INTERVAL 3 MONTH ))
 ;
 
 PB
 
 -
 
 Jeff Mckeon wrote:
  I think this is possible but I'm having a total brain fart as to how
 to
  construct the query..
 
  Table2.ticket = table1.ID
 
  Table2 is a many to 1 relationship to table1
 
  I need to delete all records from table1 where created <
  unix_timestamp(date_sub(now(), interval 3 month))
  And all rows from table2 where Table2.ticket = Table1.ID (of the
 deleted
  rows..)
 
  Can't this be done in one query? Or two?
 
  Thanks,
 
  Jeff
 
 
 
 
 





Error: No query specified

2007-12-06 Thread Jeff Mckeon
When I run SHOW SLAVE STATUS \G, I get a message at the bottom that says
"Error: No query specified".

I don't recall ever seeing this before and can't find anything online about
it.  Anyone know what it means?

   Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
 Replicate_Do_Table:
 Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
 Last_Errno: 0
 Last_Error:
   Skip_Counter: 0
Exec_Master_Log_Pos: 845962457
Relay_Log_Space: 739790470
Until_Condition: None
 Until_Log_File:
  Until_Log_Pos: 0
 Master_SSL_Allowed: No
 Master_SSL_CA_File:
 Master_SSL_CA_Path:
Master_SSL_Cert:
  Master_SSL_Cipher:
 Master_SSL_Key:
  Seconds_Behind_Master: 0
1 row in set (0.01 sec)

ERROR:
No query specified

mysql>






RE: Error: No query specified

2007-12-06 Thread Jeff Mckeon
DUH!!!

LOL, I'm an idiot...

Gonna go flush my head down the toilet now.

Thanks guys.

From: William Newton [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 1:29 PM
To: Jeff Mckeon; MySql
Subject: Re: Error: No query specified

 

I'm guessing you are adding a semi-colon (;) to the end of the statement.
It's unnecessary with \G.
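In other words, \G already terminates the statement, so a trailing ";" is sent as a second, empty statement, and that empty statement is what the client complains about:

```sql
SHOW SLAVE STATUS\G;   -- the ";" becomes an empty statement: "ERROR: No query specified"
SHOW SLAVE STATUS\G    -- correct: \G is the terminator
```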



- Original Message 
From: Jeff Mckeon [EMAIL PROTECTED]
To: MySql mysql@lists.mysql.com
Sent: Thursday, December 6, 2007 12:19:22 PM
Subject: Error: No query specified

When I run a Show slave status \G I get a message at the bottom that says
Error: No query specified  

I don't recall ever seeing this before and can't find anything online about
it.  Anyone know what it means?

  Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
  Skip_Counter: 0
Exec_Master_Log_Pos: 845962457
Relay_Log_Space: 739790470
Until_Condition: None
Until_Log_File:
  Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
  Master_SSL_Cipher:
Master_SSL_Key:
  Seconds_Behind_Master: 0
1 row in set (0.01 sec)

ERROR:
No query specified

mysql>







RE: REPLICATION

2007-12-06 Thread Jeff Mckeon
No, I do not think this is possible.  

 -Original Message-
 From: Krishna Chandra Prajapati [mailto:[EMAIL PROTECTED]
 Sent: Thursday, December 06, 2007 8:38 AM
 To: ars k; MySql
 Subject: Re: REPLICATION
 
 A, B, C, D are mysql Servers
 
 
 On Dec 6, 2007 12:18 PM, ars k [EMAIL PROTECTED] wrote:
 
  are A,B,C,D  servers or separate mysql instances?
 
 
 
  On Dec 6, 2007 10:18 AM, Krishna Chandra Prajapati
 [EMAIL PROTECTED]
  wrote:
 
    Has anybody tried this?
  
   On Dec 6, 2007 10:08 AM, Krishna Chandra Prajapati 
   [EMAIL PROTECTED]
   wrote:
  
Yes, Is it possible or not
   
-Krishna Chandra Prajapati
   
   
On Dec 5, 2007 8:56 PM, Jeff Mckeon [EMAIL PROTECTED]
 wrote:
   
  -Original Message-
  From: Krishna Chandra Prajapati
 [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, December 05, 2007 8:09 AM
  To: MySql
  Subject: REPLICATION
 
  Hi,
 
  I am working on production and thinking of implementing chain
  replication
  A->B->C. A is replicated to B. B is being replicated to C.
  I want to know that there is any script or any cron by which
 i can
  replicate
  (manually or automatically) D server to C.
  (D is another replication server).
 
  I was thinking that manually i can get the mysql-bin log sql
 and
  execute it
  on server C. In this way D will replicate to C.
  The above task is possible or not. I haven't tested till now.
  Any other idea any body have.
 

 So you want to do this?

 A->B->C->D


   
   
   
  
  
   --
   Krishna Chandra Prajapati
   MySQL DBA,
   Ed Ventures e-Learning Pvt.Ltd.
   1-8-303/48/15, Sindhi Colony
   P.G.Road, Secunderabad.
   Pin Code: 53
   Office Number: 040-66489771
   Mob: 9912924044
   URL: ed-ventures-online.com
   Email-id: [EMAIL PROTECTED]
  
 
 
 
 
 --
 Krishna Chandra Prajapati
 MySQL DBA,
 Ed Ventures e-Learning Pvt.Ltd.
 1-8-303/48/15, Sindhi Colony
 P.G.Road, Secunderabad.
 Pin Code: 53
 Office Number: 040-66489771
 Mob: 9912924044
 URL: ed-ventures-online.com
 Email-id: [EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: REPLICATION

2007-12-05 Thread Jeff Mckeon
 -Original Message-
 From: Krishna Chandra Prajapati [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, December 05, 2007 8:09 AM
 To: MySql
 Subject: REPLICATION
 
 Hi,
 
 I am working on production and thinking of implementing chain
 replication
 A->B->C. A is replicated to B. B is being replicated to C.
 I want to know whether there is any script or cron job by which I can
 replicate
 (manually or automatically) server D to C
 (D is another replication server).
 
 I was thinking that manually I can get the mysql-bin log SQL and
 execute it
 on server C. In this way D will replicate to C.
 Is the above task possible or not? I haven't tested it yet.
 Does anybody have any other ideas?
 

So you want to do this?

A->B->C->D


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Help with SQL query construction

2007-12-03 Thread Jeff Mckeon


 -Original Message-
 From: Marcus Claesson [mailto:[EMAIL PROTECTED]
 Sent: Monday, December 03, 2007 7:49 AM
 To: mysql@lists.mysql.com
 Subject: Help with SQL query construction
 
 Hi!
 
 I have a SQL query construction question that I hope someone can help
 me with. After comparing a bunch of DNA fragments (see name below) with
 a larger reference sequence I get an ordered list ranked according to
 similarities, and with start/stop co-ordinates where the fragments map
 to the reference sequence:
 
 +------+------+-------+------+----------+
 | name | rank | start | stop | sub_rank |
 +------+------+-------+------+----------+
 | A    |    1 |     1 | 1000 |     NULL |
 | B    |    2 |     2 |  998 |     NULL |
 | C    |    4 |  1100 | 2000 |     NULL |
 | D    |    3 |  3050 | 4100 |     NULL |
 | E    |    5 |  2040 | 3000 |     NULL |
 | F    |    6 |  1102 | 2000 |     NULL |
 | G    |    7 |  1098 | 1998 |     NULL |
 | H    |    8 |  3048 | 4100 |     NULL |
 | I    |    9 |  3051 | 4102 |     NULL |
 +------+------+-------+------+----------+
 
 A graphical representation of fragments mapped to the ref sequence:
 
 ref
 1  A--
 2  B
 3  D--
 4   C--
 5 E
 6   F---
 7  G---
 8  H---
 9   I---
 
 Now, I want to group fragments in each overlapping position and sub-rank
 them according to their rank in that position. The final table would
 then look like:
 +------+------+-------+------+----------+
 | name | rank | start | stop | sub_rank |
 +------+------+-------+------+----------+
 | A    |    1 |     1 | 1000 |        1 |
 | B    |    2 |     2 |  998 |        2 |
 | C    |    4 |  1100 | 2000 |        1 |
 | D    |    3 |  3050 | 4100 |        1 |
 | E    |    5 |  2040 | 3000 |        1 |
 | F    |    6 |  1102 | 2000 |        2 |
 | G    |    7 |  1098 | 1998 |        3 |
 | H    |    8 |  3048 | 4100 |        2 |
 | I    |    9 |  3051 | 4102 |        3 |
 +------+------+-------+------+----------+
 
 Is this possible to achieve using SQL queries alone (perhaps with GROUP
 BY, nested SELECTs etc)?
 
 I've managed to do this with a Perl-DBI script, but would much prefer
 to
 do it completely with MySQL instead. The Perl code is below and below
 that is the MySQL-dump of the test data set...
 
 Many thanks in advance!
 Marcus
 
 
 while (@{$dbh->selectcol_arrayref("SELECT rank FROM test WHERE sub_rank
 IS NULL")}) {
 @null_sub_ranks = @{$dbh->selectcol_arrayref("SELECT rank FROM test
 WHERE sub_rank IS NULL AND NOT (start >= (SELECT stop FROM test WHERE
 rank
 = (SELECT min(rank) FROM test WHERE sub_rank IS NULL)) OR stop <=
 (SELECT start FROM test WHERE rank = (SELECT min(rank) FROM test WHERE
 sub_rank IS NULL)))")};
 for ($rank=0; $rank < scalar(@null_sub_ranks); $rank++ ) {
   $sub_rank = $rank + 1;
   $dbh->do("UPDATE test SET sub_rank=$sub_rank WHERE rank =
 $null_sub_ranks[$rank]");
 }
 }
 
 
 -- MySQL dump 10.10
 --
 -- Host: localhost    Database: bxb
 -- ------------------------------------------------------
 -- Server version   5.0.22
 
 /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
 /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
 /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
 /*!40101 SET NAMES utf8 */;
 /*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
 /*!40103 SET TIME_ZONE='+00:00' */;
 /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
 /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS,
 FOREIGN_KEY_CHECKS=0 */;
 /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO'
 */;
 /*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
 
 --
 -- Table structure for table `test`
 --
 
 DROP TABLE IF EXISTS `test`;
 CREATE TABLE `test` (
   `name` text,
   `rank` int(11) default NULL,
   `start` int(11) default NULL,
   `stop` int(11) default NULL,
   `sub_rank` int(11) default NULL
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
 
 --
 -- Dumping data for table `test`
 --
 
 
 /*!40000 ALTER TABLE `test` DISABLE KEYS */;
 LOCK TABLES `test` WRITE;
 INSERT INTO `test` VALUES
 ('A',1,1,1000,NULL),('B',2,2,998,NULL),('C',4,1100,2000,NULL),('D',3,30
 50,4100,NULL),('E',5,2040,3000,NULL),('F',6,1102,2000,NULL),('G',7,1098
 ,1998,NULL),('H',8,3048,4100,NULL),('I',9,3051,4102,NULL);
 UNLOCK TABLES;
 /*!40000 ALTER TABLE `test` ENABLE KEYS */;
 /*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;

 /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
 /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
 /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
 /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
 /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
 /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
 /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
 
 


I'd say perl is 
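For the record, a set-based alternative (an untested sketch) that reproduces Marcus's sample table: for each row, count the overlapping rows whose rank is equal or better. Note that it ranks by pairwise overlap rather than iterating like the Perl, so the two approaches could differ if overlaps chained transitively:

```sql
-- Untested sketch against the `test` table from the dump above.
UPDATE test t
JOIN (
    SELECT a.rank AS r, COUNT(*) AS sr
    FROM test a
    JOIN test b
      ON b.start <= a.stop
     AND b.stop  >= a.start   -- b overlaps a (each row overlaps itself)
     AND b.rank  <= a.rank    -- ...with an equal-or-better rank
    GROUP BY a.rank
) x ON t.rank = x.r
SET t.sub_rank = x.sr;
```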

RE: backup InnoDB db to another server

2007-12-02 Thread Jeff Mckeon

 -Original Message-
 From: js [mailto:[EMAIL PROTECTED]
 Sent: Saturday, December 01, 2007 8:11 PM
 To: Jeff Mckeon
 Cc: [EMAIL PROTECTED]; mysql@lists.mysql.com
 Subject: Re: backup InnoDB db to another server
 
 You might want to use --single-transaction option when mysqldumping
 innodb
 
We have a mix of InnoDB and MyISAM tables, so that's really not an option.
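For a mixed InnoDB/MyISAM server, the usual fallback (a sketch, not tested here) is --lock-all-tables, which gives a consistent snapshot of both engines at the cost of blocking writes for the duration of the dump:

```shell
# --single-transaction only protects InnoDB tables;
# --lock-all-tables (-x) holds a global read lock instead.
mysqldump --lock-all-tables -uroot -p DB1 > dump.sql
```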







RE: backup InnoDB db to another server

2007-12-02 Thread Jeff Mckeon
 -Original Message-
 From: Osvaldo Sommer [mailto:[EMAIL PROTECTED]
 Sent: Saturday, December 01, 2007 8:23 AM
 To: 'Jeff Mckeon'; 'David Campbell'; mysql@lists.mysql.com
 Subject: RE: backup InnoDB db to another server
 
 Jeff:
 
 mysqldump doesn't back up your indexes; it dumps your data only.
 
 Osvaldo Sommer
 

Actually I think it's more than that.  We have cleaning scripts in place
to delete records older than 3 months from certain tables.  I think the
users have been running these without optimizing the tables afterwards, and
therefore never reclaiming the space they created with the deletes.  These
tablename_Old tables were huge.  On the main systems I did a mysqldump of
just these tables, then dropped the originals from the db and restored them.
The entire db size went from 65G to 20G.

The database was already screwed up and I have another master running for
our applications so there was no risk if I screwed something up.  





RE: backup InnoDB db to another server

2007-11-30 Thread Jeff Mckeon
 -Original Message-
 From: David Campbell [mailto:[EMAIL PROTECTED]
 Sent: Friday, November 30, 2007 11:29 AM
 To: mysql@lists.mysql.com
 Subject: Re: backup InnoDB db to another server
 
 Jørn Dahl-Stamnes wrote:
  On Friday 30 November 2007 17:12, Jeff Mckeon wrote:
  Ok, so what would be the command to get a mysqldump of DB1 from
 10.10.0.1
  into file DB1backup.sql on 10.10.0.2?
 
  What about running mysqldump on 10.10.0.2?
 
  or
 
 
  scp dump.sql [EMAIL PROTECTED]:.
 
 
 Onliner
 
 mysqldump DB1 -uroot -ppassword | ssh 10.10.0.2 'cat > dump.sql'
 

The mysqldump has finished, but I've only got a 10 gig .sql file.  The db is
about 65 gig in raw size.  Does this sound right?

Is there a filesize limit for mysqldump .sql files?





RE: backup InnoDB db to another server

2007-11-30 Thread Jeff Mckeon
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
 Behalf Of Baron Schwartz
 Sent: Friday, November 30, 2007 11:06 AM
 To: Jeff Mckeon
 Cc: mysql list
 Subject: Re: backup InnoDB db to another server
 
 On Nov 30, 2007 10:55 AM, Jeff Mckeon [EMAIL PROTECTED] wrote:
  I'm trying to use mysqldump to backup an innoDB based db from one
 server to
  an sql file on another.  It doesn't seem to be working however...
 
  Here is the command I'm using on the source server
 
  mysqldump DB1 -uroot -ppassword | mysql --compress -h 10.10.0.1 -uroot
  -ppassword DB1 > /DATA/DB1backup.sql
 
  I see a /DATA/DB1backup.sql file created on the source server with 0
 size,
  but nothing on the destination server.
 
  What am I screwing up here?
 
 Your command is actually telling mysql on 10.10.0.1 to execute the
 dumped output.  It is doing so without creating any output, but the >
 redirection is creating an output file anyway.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]

Ok, so what would be the command to get a mysqldump of DB1 from 10.10.0.1
into file DB1backup.sql on 10.10.0.2?





backup InnoDB db to another server

2007-11-30 Thread Jeff Mckeon
I'm trying to use mysqldump to backup an innoDB based db from one server to
an sql file on another.  It doesn't seem to be working however...

Here is the command I'm using on the source server

mysqldump DB1 -uroot -ppassword | mysql --compress -h 10.10.0.1 -uroot
-ppassword DB1 > /DATA/DB1backup.sql

I see a /DATA/DB1backup.sql file created on the source server with 0 size,
but nothing on the destination server.

What am I screwing up here?

Thanks,

Jeff







RE: backup InnoDB db to another server

2007-11-30 Thread Jeff Mckeon
 -Original Message-
 From: Jørn Dahl-Stamnes [mailto:[EMAIL PROTECTED]
 Sent: Friday, November 30, 2007 11:16 AM
 To: mysql@lists.mysql.com
 Subject: Re: backup InnoDB db to another server
 
 On Friday 30 November 2007 17:12, Jeff Mckeon wrote:
  Ok, so what would be the command to get a mysqldump of DB1 from
 10.10.0.1
  into file DB1backup.sql on 10.10.0.2?
 
 What about running mysqldump on 10.10.0.2?
 
 or
 
 mysqldump DB1 -uroot -ppassword > dump.sql
 scp dump.sql [EMAIL PROTECTED]:.
 
 --
 Jørn Dahl-Stamnes
 homepage: http://www.dahl-stamnes.net/dahls/
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]

Ok so on 10.10.0.2 (destination server) issue a:

% mysqldump DB1 -h10.10.0.1 -C -uroot -ppassword > /DATA/DB01backup.sql







server optimization

2007-10-18 Thread Jeff Mckeon
Hey all,

I've got a new server set up, with dual Intel quad core processors, 4 gig of
ram, OpenSuse 10.3 (64bit) and MySql 5.0.45.  

The majority of the tables are MyISAM with a few InnoDB here or there.  I'm
using the huge-my.cnf as the base for my config.

Can anyone suggest some tweaking of the my.cnf that will give me the best
performance on this platform?

Thanks,

Jeff







About charset

2007-08-27 Thread Jeff list
Hello list,

I have a table whose 'default charset=utf8'.
I insert a record into it with a non-utf8 character set, like GBK. Then I
select that record, and it prints the GBK characters correctly.
Then I insert a utf8 record into it, read that record in a script, and
decode it with utf8_decode; the result is also correct.

So I think that for a table (or column) with the utf8 charset, I can insert
characters in any encoding, like GBK, gb2312, or iso8859-1, and MySQL doesn't
convert them to utf8 automatically.

Am I right? Thanks!
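
For what it's worth, here is a sketch of how the conversion is supposed to work (the table and data are hypothetical): MySQL converts incoming strings from the connection character set (set with SET NAMES) into the column's character set. If the declared connection charset doesn't match what the client actually sends, no correct conversion can happen and the raw bytes effectively end up stored as-is, which is why mixed encodings can appear to "work":

```sql
-- Hypothetical utf8 table:
CREATE TABLE t1 (s VARCHAR(100)) DEFAULT CHARSET = utf8;

-- Declare what encoding this connection really sends; the server then
-- converts gbk -> utf8 on INSERT and utf8 -> gbk on SELECT:
SET NAMES gbk;
INSERT INTO t1 VALUES ('...GBK-encoded text...');

-- If the connection charset is left wrong (e.g. it claims utf8 while
-- the client sends GBK bytes), the bytes are stored unconverted.
```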







hang up mysql shell

2007-07-16 Thread Jeff Pang
hello,

When I type 'exit' in the mysql shell, mysqld releases the locks that were
taken earlier.
But if I press Ctrl+Z under Linux to suspend the mysql shell, will mysqld
also release the locks or not?
Thank you.





best filesystem for mysql

2007-05-01 Thread Jeff Pang
hello list,

I saw this article for the suitable filesystem for mysql.
http://www.bullopensource.org/ext4/sqlbench/

From what I saw, the best filesystem for MyISAM is ext3, and the best
filesystem for InnoDB is ReiserFS.
What are your thoughts on it? Thanks.





how to query this sql?

2007-04-28 Thread Jeff Pang
Hello list,

I want to get a count of items for each day, so I wrote this SQL:

 select count(*) as dd from items group by updatetime;

Unfortunately, updatetime is a DATETIME, not a DATE, so this groups by the
full timestamp and I can't get the correct result.

How can I do this? Thanks.
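
One way to do this (a sketch; it assumes MySQL 4.1 or later, where the DATE() function truncates a DATETIME to its date part) is to group on the date part:

```sql
SELECT DATE(updatetime) AS day, COUNT(*) AS dd
FROM items
GROUP BY DATE(updatetime)
ORDER BY day;
```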





trouble starting mysqld daemon

2006-12-31 Thread Jeff Jones

Hi!

I'm a rookie, so bear with me...

Keep getting:

Starting mysqld daemon with databases from /opt/mysql/mysql/data
Stopping server from pid file /opt/mysql/mysql/data/unknown.pid
061228 17:16:04 mysqld ended

After this command:  bin/safe_mysqld --user=mysql &

Very open to suggestions

Thanks







Re: trouble starting mysqld daemon

2006-12-31 Thread Jeff Jones


Thanks!

I think I'm in business.  I changed some privileges, examined the logs 
(which helped a lot), and initialized the database again using 
mysql_install_db --user=mysql --datadir=/opt/mysql/mysql/data


now onto PHP

take care

Jeff



From: [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
To: Jeff Jones [EMAIL PROTECTED]
CC: mysql@lists.mysql.com
Subject: Re: trouble starting mysqld daemon
Date: Sun, 31 Dec 2006 14:23:53 -0500 (EST)


Check for .err text log files ..  they are probably in
/opt/mysql/mysql/data/

called servername.err, where servername is the hostname of your box.


On Thu, 28 Dec 2006, Jeff Jones wrote:

 Hi!

 I'm a rookie, so bear with me...

 Keep getting:

 Starting mysqld daemon with databases from /opt/mysql/mysql/data
 Stopping server from pid file /opt/mysql/mysql/data/unknown.pid
 061228 17:16:04 mysqld ended

 After this command  bin/safe_mysqld --user=mysql 

 Very open to suggestions

 Thanks



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]










subquery performance

2006-09-25 Thread Jeff Drew

I have a query with a subquery which does not throw an error, but never
returns.   I've been testing the query using MySQL Query Browser, and
the poor dolphin jumps only once a minute or so ;)   I use MySQL's excellent
error documentation heavily, so if the query doesn't throw an error, I'm at a
disadvantage.  Is there an optimization or just better syntax to use?

THE DATA
I have a table that contains an abbreviated identifier and a full
identifier.  The real data is a bit messy so here's a sanitized example:

Abbreviated Column contents:   TR123, RG456
Full Identifier Column contents: TR 123 abc, RG 456 def

THE QUERY
My intent is to:
1. select some of the Abbreviated Column and convert that to a selection for
the Full Identifier Column by:
   - extracting the first 2 characters
   - inserting a space
   - selecting the last 3 characters
   - appending '%' so I can match any of the last 3 characters in the
Full Identifier
2. select rows from Full Identifier Column based on #1

PROBLEMS
I think I have two problems:
1. The IN ... '%' syntax is not present in any examples I've seen.  They are
all LIKE ... '%', so IN may not work.
2. Here's the query that runs, but never returns:

select name, address from testTable where FullIdentifier in ( select concat
( substring ( AbbreviatedIdentifier,1,2), ' ',
substring(AbbreviatedIdentifier from 3), '%' ) from testTable where name
like 'Daisy');

My left join attempt complained  because the data is all in one table. Is
there a better solution than my  FullIdentifier in(select...  ?

I am not an SQL expert so I'd appreciate any ideas on how to correct this
query.

Thanks
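
One possible alternative, sketched under the assumption that the table looks as described above: build the pattern per row and compare with LIKE in a self-join, avoiding IN entirely. Illustration only, not a tested fix:

```sql
SELECT t2.name, t2.address
FROM testTable AS t1
JOIN testTable AS t2
  ON t2.FullIdentifier LIKE CONCAT(
       SUBSTRING(t1.AbbreviatedIdentifier, 1, 2), ' ',
       SUBSTRING(t1.AbbreviatedIdentifier FROM 3), '%')
WHERE t1.name = 'Daisy';
```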


Re: subquery performance

2006-09-25 Thread Jeff Drew

Thanks for the detailed feedback.  I do not create the data so I can't
modify it.  Your illustration of the table is correct.  I'll try to
implement some of your feedback.

Thanks

On 9/25/06, Michael Stassen [EMAIL PROTECTED] wrote:


Jeff Drew wrote:
 I have a query with a subquery which does not throw an error, but does
not
 return either.   I've been testing the query using mysql Query Browser
and
 the poor dolphin jumps only once a minute or so ;)   I use MYSQL's
 excellent error documentation heavily so if the query doesn't throw an
 error, I'm at a disadvantage.  Is there an optimization or just better
 syntax to use?

 THE DATA
 I have a table that contains an abbreviated identifier and a full
 identifier.  The real data is a bit messy so here's a sanitized example:

Please don't do that.  When you don't show us the real problem, you
increase the
odds of getting the wrong solution.

 Abbreviated Column contents:   TR123, RG456
 Full Identifier Column contents: TR 123 abc, RG 456 def

Do I understand correctly that your table looks something like

   AbbreviatedIdentifier  FullIdentifier  Name          Address
   ---------------------  --------------  ------------  ---------------
   TR123                  TR 123 abc      a name        an address
   RG456                  RG 456 def      another name  another address

with FullIdentifier as the primary key?  If so, that's a poor table
design,
which is almost certainly causing your problems.  See below.

 THE QUERY
 My intent is to:
 1. select some of the Abbreviated Column and convert that to a selection
 for the Full Identifier Column by:
- extracting the first 2 characters
- inserting a space
- selecting the last 3 characters
- appending % so I can match any of the last 3 characters in
the
 Full Identifier

...in the subquery.  The % character is the wildcard for LIKE
matches.  There is
no wildcard for IN.  IN is followed by a list of values, one of which must
match
exactly for the row to be selected, so this won't work as intended.

 2. select rows from Full Identifier Column based on #1

That's very convoluted.  How do you expect this to help?

I'm guessing that FullIdentifier is your primary key.  Because it's a
long,
messy string, you are finding it slow to use it to select rows,
particularly
when you need to find rows with a particular substring buried in the
key.  You
hoped that a shorter string might match more quickly, enabling you to
narrow
down the number of rows where the full id has to be examined.  That will
never
work as you've described it.  The subquery might run faster, but then you
must
do the full id comparison anyway to find rows which match the subquery
list
(FullIdentifier IN ...).  You've actually added overhead.  This *might*
work in
some situations if you had an integer primary key to use to do the
matching
between inner and outer query, but there would probably still be a better
way.

In any case, the reality is that MySQL doesn't optimize subqueries all
that
well, so they are seldom the best way to speed up a query.  It may be the
case
that your subquery is being treated as DEPENDENT, in which case it is
being run
once for each row found in the outer query.  That is, once for every row
in your
table.  Since your query never seems to return, I'd bet that's the case
here.

 PROBLEMS
 I think I have two problems:
 1. The IN ... '%' syntax is not present in any examples I've seen.  They
 are all LIKE ... '%', so IN may not work.

Right, it won't.

 2. Here's query that runs, but does not return:
reformatted

 SELECT name, address
 FROM testTable
 WHERE FullIdentifier
   IN ( SELECT CONCAT(SUBSTRING(AbbreviatedIdentifier,1,2), ' ',
  SUBSTRING(AbbreviatedIdentifier FROM 3), '%')
FROM testTable
WHERE name LIKE 'Daisy');

Is this really your query?  As I've already mentioned, the % won't work
for IN.
  Also, neither FullIdentifier nor AbbreviatedIdentifier is mentioned in
the
selection criteria, so the problem you tried to fix with the subquery is
not
present in this query.  Finally, there is no wildcard in the string that
name is
supposed to match, so there is no need for LIKE.  The intent of this query
is
equivalent to

   SELECT name, address
   FROM testTable
   WHERE name = 'Daisy';

which should be as fast as possible if there is an index on name.

 My left join attempt complained  because the data is all in one table.
Is
 there a better solution than my  FullIdentifier in(select...  ?

 I am not an SQL expert so I'd appreciate any ideas on how to correct
this
 query.

When I see a string primary key that looks like TR 123 abc, I find it is
usually a safe bet that the TR means something (a code, a manufacturer,
...),
the 123 means something (a production run, a part number, a plant
number, ...)
and the abc means something (some detail, a team code, ...).  In other
words,
you've crammed the answers to 3 questions into one column

stuck on localhost authentication

2006-08-05 Thread Jeff Drew

I apologize for posting an apparently simple question, but I'm stuck. I'm
trying to get my Java application to connect to MySQL on the same server.
The server is Red Hat with MySQL 5.1.11. I'd done this with MySQL 4.x in the
distant past.

I add a user with the command:

   insert into user (user,host) values ('jeff','localhost');
   flush privileges;
   grant all on databaseName.* to jeff;

select user,host from user; returns
 jeff  localhost

In Java, I use:
DriverManager.getConnection( "jdbc:mysql://localhost/databaseName", "jeff", null );

when the application tries to connect, DriverManager.getConnection() gets a
bad handshake error.

So I set the password in mysql with:
  set password for jeff = password('xyz');

now select user,host, password from user; returns 2 rows
 jeff localhost
 jeff %  *4232432323

I think this is the problem - the following getConnection() method is
directed to the 2nd entry because it has a password, but it's not localhost
so my localhost-based Java app is denied.

In Java, I use:
 DriverManager.getConnection( "jdbc:mysql://localhost/tm", "jeff", "xyz" );

Then I get an authentication failed error.

I've also tried:
create user 'jeff'; but that created a '%' host entry,
not a localhost one

What should the user table look like for a localhost user? Which commands
get me there?

Thanks
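
For reference, the usual way to create a localhost-only account is a single GRANT rather than inserting into the user table by hand (a sketch; the database name and password follow the examples above):

```sql
-- Creates the account if it doesn't exist, sets its password,
-- and grants rights on the one database:
GRANT ALL ON databaseName.* TO 'jeff'@'localhost' IDENTIFIED BY 'xyz';
-- Not strictly required after GRANT, but harmless:
FLUSH PRIVILEGES;
```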


Multi master replication question

2006-07-18 Thread Jeff
I'm about to re-create a mulit master replication setup that was
dismantled during server hardware/software upgrades.

Replication setup:

A - B - A
|
C

One thing I can't remember is do I have to set an option somewhere to
tell the masters to ignore the queries in the binlog that oringated from
them?

I know I have to have log-bin and log-slave-updates set on each but is
there something else or will Server A automatically know to ignore an
update query that originated on it, then replicated to server B which
wrote it into its binlog due to the log-slave-updates setting (there are
downstream servers doing one way replication off of server B) and then
replicated back to A?

Just having a brain fart here...

Thanks,

Jeff






RE: Multi master replication question

2006-07-18 Thread Jeff
 -Original Message-
 From: Dominik Klein [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, July 18, 2006 11:06
 To: mysql@lists.mysql.com
 Subject: Re: Multi master replication question
 
 
  Replication setup:
  
  A - B - A
  |
  C
  
  One thing I can't remember is do I have to set an option 
 somewhere to 
  tell the masters to ignore the queries in the binlog that oringated 
  from them?
 
 Make sure you set different Server IDs on each machine and 
 you should be 
 just fine.
 
 -- 

Excellent.  Many thanks.






Mysqlhotcopy and replication

2006-06-29 Thread Jeff
I've got a replication slave db (all MyISAM tables) that I currently
back up every night by stopping MySQL, tarballing the table files, then
starting MySQL up again.

I'd like to use Mysqlhotcopy to do this so that the db can stay up for
reads while the backup is being done.  Here's my question.  While the
tables are flushed with read lock, what becomes of all the Insert,Update
and delete queries that come in via replication?  Will they be applied
after the backup is done and the tables unlocked?

Thanks,

Jeff






RE: Mysqlhotcopy and replication

2006-06-29 Thread Jeff McKeon
Excellent Dan! 
 
Thanks!

-Original Message- 
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Thu 6/29/2006 4:35 PM 
To: Jeff 
Cc: mysql@lists.mysql.com 
Subject: Re: Mysqlhotcopy and replication



Jeff, that is indeed the case - the replication thread will freeze 
just like any other thread, and pick up where it left off. 

In fact, the statements are still replicated to the slave by the 
replication thread, stored in the slave's relay-bin file I think. 
Therefore even though the slave data is not being updated, the slave 
is in a sense staying current with the master even during the hotcopy 
(though you would have to wait for all those statements to be applied 
for the data to be synchronized). 

Dan 

On 6/29/06, Jeff [EMAIL PROTECTED] wrote: 
 I've got a replication slave db (all MyISAM tables) that I currently 
 back up every night by stopping MySQL, tarballing the table files, 
then 
 starting MySQL up again. 
 
 I'd like to use Mysqlhotcopy to do this so that the db can stay up 
for 
 reads while the backup is being done.  Here's my question.  While the 
 tables are flushed with read lock, what becomes of all the 
Insert,Update 
 and delete queries that come in via replication?  Will they be 
applied 
 after the backup is done and the tables unlocked? 
 
 Thanks, 
 
 Jeff 
 
 
 
 -- 
 MySQL General Mailing List 
 For list archives: http://lists.mysql.com/mysql 
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] 
 
 



Upgrading and table engine change advise

2006-06-23 Thread Jeff
Hello all,

Just looking for some advice from any of you that have done what I'm
about to do.  I'm being forced by management to make a whole lot of
changes to our current MySQL db at one time, something I'm personally
not thrilled with.

Current config:

Redhat 9
MySQL ver 4.0.16
DB Engine MyISAM for all tables.
48G total space
1G ram

New config:

RH ES3
MySQL ver 5.x (latest)
Mix of MyISAM and InnoDB
~140gig total space
1G ram

Current my.cnf

# The MySQL server
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-bin
log-slave-updates
server-id=1
port = 3306
skip-locking
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 4
set-variable= max_connections=500

[mysql.server]
user=mysql
basedir=/var/lib

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout



I know I'll need to use a different my.cnf to set variables for using
InnoDB.  I'd prefer not to do all this at one time, too many changes,
but I'm not being given a choice.

I've never used InnoDB before.  We're switching to it to eliminate long
table locks caused by reads from large tables.

Any advise about pitfalls/potential problems I need to be aware of?

Thanks in advance.

Jeff
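
Not an answer to the pitfalls question, but a minimal sketch of the InnoDB-specific my.cnf settings worth looking at before the switch (the values are illustrative, not tuned for this box):

```ini
[mysqld]
# The single most important InnoDB setting; InnoDB caches both data and
# indexes here, unlike MyISAM's key_buffer which caches only indexes.
innodb_buffer_pool_size = 512M
# One .ibd file per table instead of one shared ibdata file.
innodb_file_per_table
# 1 = flush the log at every commit (safest); 2 trades some durability
# for speed.
innodb_flush_log_at_trx_commit = 1
innodb_log_file_size = 64M
```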






Re: Broken Mac pref panel

2006-06-05 Thread Jeff Shapiro
On Sunday 04 June 2006 20:38, Marcus Bointon scribble on about:
 On 4 Jun 2006, at 23:25, Marcus Bointon wrote:
  I'm having trouble with the prefpane in OS X

 It seems I'm not alone - there are several reports of this in the
 MySQL bug tracker. Looks like a bug in the prefpane:

 http://bugs.mysql.com/bug.php?id=19577

In your original message you said that you just installed 5.0.22 (i686). I
just upgraded my installation to 5.0.22 as well. However, I didn't update
the preference pane software. My preference pane is still working correctly;
however, it is from 5.0.20 (which was a fresh install on a brand new MacBook
Pro).

So, something may have broken in the latest software.

-- 
Life may have no meaning, or, even worse, it may have a meaning of which
you disapprove.




Re: Fun with Dates and Incentives.

2006-05-26 Thread Jeff Shapiro
On Wednesday 24 May 2006 15:18, Brian Menke scribble on about:
 Peter, thanks for the detailed info. I will figure out how to get rid of
 the UNIQUE key. Somehow that got added. Thanks for the catch. As far as INT
 for student id goes, I'm using email because it will be unique, and offers
 an easy way to track a user through the app I'm building (user name,
 password, session id's etc.) but I do get what you are saying. Thanks for

Umm, just a question here: what happens if the person changes his or her email
address? That is, if you are allowing them to change it. If you are using it
as the table relationship key, it may break your relationships if it changes
in the parent table. You'll have to ensure that your parent and child table
keys stay in sync. This could be a headache waiting to happen.

Normally, I place an autoincrement column in the parent table and use it as 
the relationship key for all child tables. This way I know that the value 
will never change and it's not part of data important to the end user or the 
application. This is basically what Peter suggested.

jeff

-- 
Counting in octal is just like counting in decimal--if you don't use your 
thumbs.
-- Tom Lehrer





Re: Loop Trigger

2006-04-21 Thread Jeff Smelser
On Wednesday 19 April 2006 11:10, Lucas Vendramin wrote:
 Hi all.
 I am creating a trigger that updates the same table which invoked it.
 Ex:
 create table t1 (
  id int, name varchar(50), c int default 0, father int,
  primary key(id),
  index (father),
  foreign key (father) references t1(id) on update restrict on delete
 restrict );
 create trigger tg_t1
 before update on t1
 for each row
 begin
  update t1 set c=c+1 where father=NEW.id;
 end;

set NEW.c = NEW.c + 1;

or whatever. You don't UPDATE the trigger's own table in a BEFORE trigger.
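
In other words, something along these lines (a sketch; it assumes the intent is just to bump the updated row's own counter, since MySQL does not allow a trigger to run UPDATE against the table it is defined on):

```sql
CREATE TRIGGER tg_t1
BEFORE UPDATE ON t1
FOR EACH ROW
  SET NEW.c = NEW.c + 1;
```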

Jeff




Out of control connections

2006-04-14 Thread Jeff
Hello all,

I've got a problem where a php web application is intermittently making
bursts of 500+ connections to a mysql database, which then locks out all
other connections.  These connections apparently don't do anything query-
wise; they just use up connections.  The problem is that when it happens I
can't get a processlist to see which user is causing it, due to too many
connections, and therefore can't track down the offending web app.  I can
do a netstat, but that only confirms which web server the out-of-control
connections are coming from.  

The connections then just die off on their own in about a minute.  Does
anyone know of any way I can log these connections, or some other way of
catching it in progress from mysql, so I can figure out which website is
causing it and then go correct the code?

Thanks,

Jeff
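
Two things that can help in this situation (a sketch; the account name is hypothetical): MySQL reserves one extra connection slot for accounts with the SUPER privilege, so connecting as such an account still lets you run SHOW PROCESSLIST when max_connections is exhausted, and MySQL 5.0 and later can cap connections per account:

```sql
-- Connect as a SUPER-privileged account (one slot is reserved for it):
SHOW FULL PROCESSLIST;

-- Cap a suspect web application's account at 50 simultaneous connections:
GRANT USAGE ON *.* TO 'webapp'@'%' WITH MAX_USER_CONNECTIONS 50;
```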






Auto increment Primary Index fields in replicated db

2006-03-13 Thread Jeff
All,

We are using circular replication now on db's that were originally
standalone.  One problem we have is that all the primary index fields for
most of the tables are auto increment fields.  This prevents us from
writing to both db servers because of conflicting INDEX entries.  Is
there some way to make this work?  Someone on another msg board told me
to look at auto_increment_increment and auto_increment_offset to
accomplish this, but I don't see anything like that in the MySQL docs.

Thanks,

Jeff






RE: Help regarding a simple query

2006-03-13 Thread Jeff
 -Original Message-
 From: VenuGopal Papasani [mailto:[EMAIL PROTECTED] 
 Sent: Monday, March 13, 2006 10:33
 To: mysql@lists.mysql.com
 Subject: Help regarding a simple query
 
 
 Hi,
  I am searching for a query where I can have pattern 
 matching without considering case. You can consider the 
 following example for a detailed description of what I want exactly.
 
Let my table X consists of following data
 Name
 ---
 venu
 venup
 venugopla
 VenugOpal
 VENU
 papasani
 papasni
 pvenu
   Now I need to get all the records which contain the 
 string venu, regardless of case, 
 i.e. I should get records 1, 2, 3, 4, 5, 8.
  I will be very much thankful if any of you give me the 
 query for this.
 
  Thanks in Advance,
 
 Regards,
 venu.
 (sorry for my poor English)
 

Won't this work?  

Select * from X where name like '%venu%'

jeff






RE: Help regarding a simple query

2006-03-13 Thread Jeff
-Original Message-
From: VenuGopal Papasani [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 13, 2006 11:48
To: Jeff
Subject: Re: Help regarding a simple query


Hi Jeff,
   This is venu again. In my last mail I did not include a constraint, which is
what is irritating me most. Actually, if I have venu-kkk,
   I should not get that venu-kkk back.
   This is the query I actually want.

Can you please give me teh query for that 

Regards,
venu.

 
Please post all responses to the mailing list, not directly to another
person.

the % is a wild card character

Name
--
Venu
VENU
XVENU
yVeNu
Venuzztest

select * from X where Name like '%venu%'

returns:

Venu
VENU
XVENU
yVeNu
Venuzztest


select * from X where Name like 'venu%'

returns:
Venu
VENU
Venuzztest

select * from X where Name like '%venu'

returns:

Venu
VENU
XVENU
yVeNu

Jeff 








Changing the binlog dir

2006-03-06 Thread Jeff
Hello all,

I had a problem last night where my master server filled up the /var/
partition and stopped logging to the binlog.  This caused all sorts of
havoc on my slaves and replication.  My bad for not watching this, but now
what I'd like to do is move where MySQL writes the binlog.  Currently
it's in the default /var/lib/mysql, but I'd like to move it to where I
actually have the database files, which is on a much larger partition.

From what I've read I can put --log-bin=/data/hostname-bin into the
my.cnf and restart the mysql server.  

Questions:

1) do I have to move the old binlogs to the /data/ prior to restarting
mysql
2) should I move the binlog index as well?
3) will moving the binlog location throw the slaves off?

Thanks,

Jeff
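For reference, a minimal my.cnf sketch of the relocated settings (the path is the one from the post; the log-bin-index line is an assumption — check both options against your MySQL version's manual):

```ini
[mysqld]
log-bin       = /data/hostname-bin
log-bin-index = /data/hostname-bin.index
```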






RE: Replication from multiple masters?

2006-03-02 Thread Jeff
Sorry for the top post, just saying thanks, that's what I thought

Back to the drawing board...

Jeff

 -Original Message-
 From: David Griffiths [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, March 01, 2006 18:13
 To: mysql@lists.mysql.com
 Cc: [EMAIL PROTECTED]
 Subject: Re: Replication from multiple masters?
 
 
 Good point about the bin-logs. Yup - that would sink it. If 
 mysql used individual binary logs per master database, it would work. 
 Ya, if someone was silly enough to have two different 
 databases with the same name, it would be bad, even with 
 separate binary 
 logs for each database.
 
 If you have two mysql instances on a single slave, you'll 
 need more memory, faster CPUs, more disk space, etc. But it 
 could be a 
 viable option if the machine is just being used to provide a 
 hot-standby.
 
 
 David
 
 
 
 
 
 [EMAIL PROTECTED] wrote:
  
  MySQL cannot handle more than one incoming binlog at a time. The
  facilities are just not in the code.
  
  You also run into a nightmare if a database exists on BOTH masters 
  (same
  name on both systems) and the PK values of any tables (also with 
  matching names)  overlap. If  both masters update the 
 same row at appx 
  the same time, we could run into deadlocking in the slave 
 that didn't 
  happen on either master. It also means that the slave and 
 at least one 
  of the masters will become out of sync (because the other 
 master's 
  changes remain in the database) and replication is 
 considered broken 
  at that point.  It's a serious can of worms to handle multi-master 
  replication.
  
  Your two instances on one matching replicating to two 
 separate masters
  is not a multi-master replication (more than one master 
 replicating with 
  a single slave) it's two single-master slave setups running 
 on the same 
  machine. Close but not quite what the original post was 
 looking for (I 
  don't think).
  
  Shawn Green
  Database Administrator
  Unimin Corporation - Spruce Pine
  
  
  
  David Griffiths [EMAIL PROTECTED] wrote on 03/01/2006 04:34:26 
  PM:
  
That's not entirely true.
   
You can have two instances of mysql running on the 
 slave, and dbA  
   connects to one instance, and dbB connects to the other.  
   
   
Jeff, when you say, different databases, do you mean that each
master has a single mysql instance, and if you typed on M1,
   
show databases you'd see (for example),  dbA
   
and if you did the same on M2, you'd see, dbB?
   
If so, I wonder if there is another way to get around it:
   
- create a virtual IP address that represents both 
 masters. Use that
virtual master in the my.cnf on the slave; each master has to
have an identical replication account
   
- put dbA and dbB on the slave
   
- restrict replication from each master to their respective
databases - dbA and dbB - ie don't replicate changes to the
mysql database.
   
The two masters appear as one (which overcomes the 
 single-IP-address
in the slave's my.cnf file), and each master has a different
database inside the mysql instance, they aren't stepping on each 
  others toes.
   
Just my 2 cents.
   
David.
   
Greg Donald wrote:
 On 3/1/06, Jeff [EMAIL PROTECTED] wrote:
 Does anyone know if it's possible to replicate to a 
 single slave from
 different databases on different masters?

 For instance:

 M1:dbA     M2:dbB
     \        /
     rep    rep
       \    /
       Slave


 
 http://dev.mysql.com/doc/refman/5.1/en/replication-features.ht
ml
   
   <snip>MySQL only supports one master and many slaves.</snip>
   
   
   
--
Greg Donald
Zend Certified Engineer
MySQL Core Certification
http://destiney.com/
   
  
  








MyISAM Backup

2006-03-02 Thread Jeff
Currently I backup my MyISAM tables every night by running a shell
script that does the following:

Run: 
Mysqlanalyze, mysqlrepair, mysqloptimize on all the tables
Then shutdown mysql
Then tar all the .MYI, .MYD and .frm files from the database's directory
to a backup directory, then start MySQL again.

Later on an outside backup device connects and backs up the tar file for
archiving.

Question:

Is it safe to do this without actually shutting down the mysql db?
Perhaps putting a write lock on all the tables first so that they can be
read but not written to during the tar.  Does it matter if I'm tar'ing a
file while mysql has it open?

Thanks,

Jeff
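For what it's worth, a sketch of the lock-then-copy sequence being asked about (an assumption, not a confirmed answer from the thread; note the lock is only held while the session that issued it stays connected):

```sql
-- hold a global read lock while the files are tarred from another shell
FLUSH TABLES WITH READ LOCK;
-- ... tar the .MYI/.MYD/.frm files here ...
UNLOCK TABLES;
```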






Replication from multiple masters?

2006-03-01 Thread Jeff
Does anyone know if it's possible to replicate to a single slave from
different databases on different masters?

For instance:

M1:dbA     M2:dbB
    \        /
    rep    rep
      \    /
      Slave

Thanks,

Jeff






Re: Need Update Query Help (Urgent)

2006-02-20 Thread Jeff Shapiro
On Monday 20 February 2006 03:27, Veerabhadrarao Narra wrote:
 Hi

   i ahve one table table_1 and columns like col_1,col_2,col_3

 col_1   col_2   col_3
 1       aa      aaa
 2       bb

   Now i want to update my table table_1 SET col_3 as bbb where max of col_1

 I wrote this below Query but it shows error how to write

 UPDATE table_1
 SET col_3 = 'bbb'
 WHERE  col_1 = (SELECT max(col_1) FROM table_1)

It appears that you can't do what you want. 

This is at the bottom of the UPDATE syntax page:
 Currently, you cannot update a table and select from the same table in a 
subquery.

http://dev.mysql.com/doc/refman/5.0/en/update.html


However, you can do something like:

select @maximum_column :=max(col_1) from table_1;
UPDATE table_1
SET col_3 = 'bbb'
WHERE  col_1 = @maximum_column;

Perhaps someone else has a better solution.
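The same two-step workaround can be sketched like this (an illustration only, using Python's sqlite3 in place of MySQL; table and column names come from the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_1 (col_1 INTEGER PRIMARY KEY,"
             " col_2 TEXT, col_3 TEXT)")
conn.executemany("INSERT INTO table_1 VALUES (?, ?, ?)",
                 [(1, "aa", "aaa"), (2, "bb", None)])

# Step 1: fetch max(col_1) into a variable; step 2: use it in the UPDATE,
# so the UPDATE never selects from the table it is modifying.
maximum = conn.execute("SELECT max(col_1) FROM table_1").fetchone()[0]
conn.execute("UPDATE table_1 SET col_3 = 'bbb' WHERE col_1 = ?", (maximum,))

print(conn.execute("SELECT col_1, col_3 FROM table_1 ORDER BY col_1").fetchall())
# [(1, 'aaa'), (2, 'bbb')]
```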

-- 
Jeff Shapiro
listserv only address




Relication from multiple databases

2006-01-13 Thread Jeff
Anyone know if it's possible to do replication from more than one
database?

Example:

System A: Database 1
System B: Database 2

System C: Replication of SYSA:DB1, Replication SYSB:DB2

Thanks,

Jeff






Anyone experimented with CPU affinity for mysqld on multiprocessor and/or hyperthreaded systems?

2006-01-12 Thread Jeff Barr
Like the subject says, I am looking for any info regarding positive 
or negative effects of using CPU affinity to lock the MySQL process
to a single processor (possibly hyperthreaded).

Anyone tried this? Learn anything interesting?

Thanks,

Jeff;




Alter MyISAM table to adjust max_rows and Avg_row_length

2005-12-08 Thread Jeff
I've got a table that just hit the 4gig limit for MyISAM tables in a 4.x
ver db.  I need to alter the table structure and set the max_rows and
Avg_row_length to override the default of 4 gig.  Problem is I can't
find any reference in the mysql docs that indicates how to decide a
setting for Avg_row_length or even what the measurement is.  Bytes?
Columns? 

Any help is much appreciated.

Jeff






RE: Optimize: 14 hours and still running!

2005-12-08 Thread Jeff McKeon
 -Original Message-
 From: Nathan Gross [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, December 08, 2005 13:58
 To: mysql@lists.mysql.com
 Subject: Optimize: 14 hours and still running!
 
 
 On a 1.6ghz, 1gb ram, Linux machine running Mysql 4.1x.
   I have an Innodb table with over 20 million records and 
 index size about 3.7 gig, data size 2.2gig (yes, many 
 indexes, more space than the data itself). Last night I tried 
 an Optimize from the Admin gui console (logged in as root at 
 the host server), and the thing is still running! Problem is 
 I need to leave early this evening and have to take some action.
 
 The Linux 'top' utility has had it at the top since then, at about 
 11%-18% CPU. Disk activity is continuously heavy.
 
 1. How long should it take?
 
 2. If I hit cancel will it:
 a) Roll back what it did, another 14 hours!
 b) Just stop as if nothing happened.
 c) The table will be partially optimized and will run normally.
 d) hang the process and/or machine.
 
 3. Is the data in jeopardy?
 
 Thank you all.
 -nat
 
 -- 

From my understanding of the memory needs of an InnoDB engine with
tables of that size, your system is very underpowered.  Depending on
your InnoDB system variables, you could be using up all the available RAM
and bogging down the OS, or not giving the db enough.  I think you just need
to let it go and wait.





RE: Alter MyISAM table to adjust max_rows and Avg_row_length

2005-12-08 Thread Jeff
 -Original Message-
 From: Michael Stassen [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, December 08, 2005 14:34
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: Re: Alter MyISAM table to adjust max_rows and Avg_row_length
 
 
 Jeff wrote:
  I've got a table that just hit the 4gig limit for MyISAM 
 tables in a 
  4.x ver db.  I need to alter the table structure and set 
 the max_rows 
  and Avg_row_length to override the default of 4 gig.  Problem is I 
  can't find any reference in the mysql docs that indicates how to 
  decide a setting for Avg_row_length or even what the 
 measurement is.  
  Bytes? Columns?
  
  Any help is much appreciated.
  
  Jeff
 
 MAX_ROWS and AVG_ROW_LENGTH are defined in the manual page 
 for CREATE TABLE 
 http://dev.mysql.com/doc/refman/4.1/en/create-table.html:
 
 MAX_ROWS: The maximum number of rows you plan to store in 
 the table.  This is 
 not a hard limit, but rather an indicator that the table must 
 be able to store 
 at least this many rows.
 
 AVG_ROW_LENGTH: An approximation of the average row length 
 for your table.  You 
 need to set this only for large tables with variable-size records.
 
 The current value of AVG_ROW_LENGTH can be seen in the output 
 of SHOW TABLE 
 STATUS 
 http://dev.mysql.com/doc/refman/4.1/en/show-table-status.html
, along 
with the maximum_data_length.  They are in bytes.

As for MAX_ROWS, you really just need a value large enough to require a
larger 
pointer.  For example, the manual suggests

   ALTER TABLE tbl_name MAX_ROWS=10 AVG_ROW_LENGTH=nnn;

It goes on to say that you don't need to set AVG_ROW_LENGTH unless your
table 
has BLOB or TEXT values
http://dev.mysql.com/doc/refman/4.1/en/full-table.html.

Michael


Thanks Michael, the show table status was what I needed.






sporadic batch update problem

2005-12-04 Thread Jeff Drew
Sporadically, the last few entries of a batch are not written.  I'm writing
to a mysql database using JDBC.   Here's a short version of my code.  Does
anyone have suggestions on possible causes or other diagnostics?

class DatabaseWriter {

    private Connection connection;               // obtained elsewhere
    private PreparedStatement preparedStatement;
    private int writeCount = 0;

    public DatabaseWriter() throws SQLException {
        preparedStatement = connection.prepareStatement(
            "insert into msgpersecond ( time, count, sendercompid, targetcompid )"
            + " values ( ?, ?, ?, ? )");
        connection.setAutoCommit(false); // turn off auto-commit
    }

    public void process(Event event) throws SQLException {
        preparedStatement.setFloat(2, event.msgPerSecond);
        preparedStatement.addBatch();
        writeCount++;

        if (writeCount >= 50) {
            int[] updateCounts = preparedStatement.executeBatch();
            connection.commit();
            preparedStatement.clearBatch();
            writeCount = 0;
        }
    }
}

process() gets called a lot.  The code usually works fine, but sometimes 3
to 20 or so records that are definitely added to the batch don't get
written.

I'd greatly appreciate any suggestions.

Thanks
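One plausible cause (an editorial assumption, not from the thread): rows added after the last full batch of 50 are never executed or committed, so a final flush on shutdown is needed. A hypothetical analogue of the batching loop, sketched with Python's sqlite3 (names and the two-column schema are illustrative, not the poster's actual code):

```python
import sqlite3

class DatabaseWriter:
    BATCH_SIZE = 50

    def __init__(self, conn):
        self.conn = conn
        self.pending = []

    def process(self, row):
        # buffer the row; write out a batch once 50 have accumulated
        self.pending.append(row)
        if len(self.pending) >= self.BATCH_SIZE:
            self.flush()

    def flush(self):
        # executemany plays the role of executeBatch(); commit finishes it
        if self.pending:
            self.conn.executemany(
                "INSERT INTO msgpersecond (time, count) VALUES (?, ?)",
                self.pending)
            self.conn.commit()
            self.pending = []

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msgpersecond (time INTEGER, count REAL)")
writer = DatabaseWriter(conn)
for i in range(123):
    writer.process((i, float(i)))
writer.flush()  # picks up the 23 rows left over after the last full batch
print(conn.execute("SELECT count(*) FROM msgpersecond").fetchone()[0])  # 123
```

Without the final flush() call, the last 23 rows would sit in the buffer and never be written, which matches the reported symptom.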


Insert query problem

2005-11-29 Thread Jeff
All,

I can't get this query to run; it keeps complaining that there is a
problem:

The Query:

insert into tickets
  (id, from, department, subject, body, lastaction, lastpost, priority,
   status, created, fromname, lastpostname, attach, ct, uniq, notify, replyto)
values
  (null, 'jmckeon', '1', 'test', 'test test', unix_timestamp(now()),
   '[EMAIL PROTECTED] [EMAIL PROTECTED]', 'Medium', 'Open',
   unix_timestamp(now()), 'Jeff McKeon', 'Jeff McKeon', '', '',
   'ks5hslajdfasd', '1', '[EMAIL PROTECTED]')

The error: 

You have an error in your SQL syntax near
'from,department,subject,body,lastaction,lastpost,priority,status,create
d,fromnam' at line 1

I suspect it doesn't like the fact that I have a field named "from", but
I KNOW it's possible to write to this table and that field; I just can't
figure out the correct syntax to get it to accept the field name "from".

Thanks,

Jeff






RE: Insert query problem [solved]

2005-11-29 Thread Jeff
Never mind, found that using `from` works.

Jeff

 -Original Message-
 From: Jeff [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, November 29, 2005 11:43
 To: mysql@lists.mysql.com
 Subject: Insert query problem
 
 
 All,
 
 I can't get this query to run, it keeps compaining that there is a
 problem:
 
 The Query:
 
 insert into 
 tickets(id,from,department,subject,body,lastaction,lastpost,pr
 iority,sta
 tus,created,fromname,lastpostname,attach,ct,uniq,notify,replyto)
 values(null,'jmckeon','1','test','test
 test',unix_timestamp(now()),'[EMAIL PROTECTED]
 [EMAIL PROTECTED]','Medium','Open',unix_timestamp(now()),'Jeff
 McKeon','Jeff McKeon','','','ks5hslajdfasd','1','[EMAIL PROTECTED]')
 
 The error: 
 
 You have an error in your SQL syntax near 
 'from,department,subject,body,lastaction,lastpost,priority,sta
 tus,create
 d,fromnam' at line 1
 
 I suspect it doesn't like the fact that I have a field named 
 from but I KNOW it's possible to write to this table and 
 that field, I just can't figure out the correct syntax to get 
 it to accept the fieldname from.
 
 Thanks,
 
 Jeff
 
 
 
 
 






Turn of bin log for a session

2005-11-15 Thread Jeff
Hey all,

I need to do some repairs on a replication master/slave and can't
remember the command to turn off bin logging on the master for a single
session.

Set session log_bin = off;

Just returns an error: ERROR 1193: Unknown system variable 'LOG_BIN'

MySQL ver 4.1.13

Thanks,

Jeff
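For reference, the session variable is named sql_log_bin rather than log_bin; a sketch (requires the SUPER privilege — verify availability on 4.1 against the manual for that branch):

```sql
SET SESSION SQL_LOG_BIN = 0;
-- ... run the repair statements ...
SET SESSION SQL_LOG_BIN = 1;
```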






Re: Mysql hidden processes

2005-11-04 Thread Jeff Smelser
On Friday 04 November 2005 08:06 am, Chris Wells wrote:
 /usr/lib/chkrootkit/chkproc -v -v

 PID  1230(/proc/1230): not in readdir output
 PID  1230: not in ps output
 CWD  1230: /var/lib/mysql
 EXE  1230: /usr/sbin/mysqld
 ... (report the same for 1231 - 1238)
 You have 9 process hidden for readdir command
 You have 9 process hidden for ps command

 The command `cat /proc/1230/cmdline` outputs:

 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql
 --pid-file=/var/run/mysqld/mysqld.pid --skip-locking --port=3306
 --socket=/var/run/mysqld/mysqld.sock

Isn't this just NPTL showing 1 process instead of 9 because it's shared? Just
add the H flag to the ps command and you will see them.

Jeff


pgpOWtrPWgKxg.pgp
Description: PGP signature


Re: MySQL 5.0 : error using max(idrow) on a null value

2005-11-04 Thread Jeff Smelser
On Friday 04 November 2005 07:30 am, AESYS S.p.A. [Enzo Arlati] wrote:
 Hi, I'm trying to migrate from MySQL 4.1.11 to MySQL 5.0 and I get a
 problem with the new release.
 I have this table...

 provasql
  CREATE TABLE `provasql`

 `idrow` bigint(20) unsigned NOT NULL default '0',
 `descr` varchar(50) default NULL,
 PRIMARY KEY (`idrow`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1


 ...this is the SQL command...

 insert into provasql ( idrow, descr )
 select ( select max(c.idrow)+1 from provasql c ) , 'This is only a test';


 ...and this is the error:

 ERROR 1263 (22004): Column set to default value; NULL supplied to NOT NULL
 colum
 n 'idrow' at row 1


 With MySQL 4.1.11 I'd never get this error message, but it happens with the
 5.0.15 version.
 Can anyone help me?

MySQL 5 is much pickier about things you shouldn't have been able to do in the
first place. Just change idrow to auto_increment and stop doing max(); it's
not needed. InnoDB properly handles auto_increment now.

Jeff


pgpApFheY1YGC.pgp
Description: PGP signature


Re: Bug? Set Null Value in NOT NULL field...

2005-10-27 Thread Jeff Smelser
On Wednesday 26 October 2005 04:24 pm, LMS wrote:
 Hi,
 I have this structure:
 ---
 CREATE TABLE tabla (
id int(10) unsigned NOT NULL auto_increment,
nombre varchar(100) NOT NULL default '',

Because you're defaulting it to '', NULL becomes '' on insert.

Jeff


pgpIHzJjqrxJ0.pgp
Description: PGP signature


Re: does mysql 4.0.25 standard support utf-8?

2005-10-27 Thread Jeff Smelser
On Thursday 27 October 2005 03:51 pm, [EMAIL PROTECTED] wrote:
 Hi! I've been trying to find this piece of information in the manual, but
 seem to have some trouble. so I'm asking it out loud: I'd like to know
 whether mysql 4.0.25 supports utf-8 as an internal encoding. Because a
 server I like has this version and I need to deploy an international
 application, so I really need to store data in utf-8.
 please help if you know. thank you!

No, You need 4.1 and above for that.

Jeff


pgpp1CF3G0AeG.pgp
Description: PGP signature


Re: source rpm for mysql 4.1.15

2005-10-21 Thread Jeff Smelser
On Friday 21 October 2005 07:04 pm, Stever wrote:
 Is there any reason why there isn't a source rpm for mysql 4.1.15? I
 need to compile my own version since (annoyingly), there doesn't seem to
 be a binary rpm version that works with glibc 2.2.5.

 Anyone out there created a source rpm for mysql?

Why can't you just get the source? What's the difference?

-- 
===
Jabber: tradergt@(smelser.org|jabber.org)
Quote:  How much money does it take for you to have a relationship with God?
===


pgp7yMIrT0b8H.pgp
Description: PGP signature


Re: ARCHIVE storage engine and INSERT DELAY in MySQL 4.1

2005-10-20 Thread Jeff Smelser
On Wednesday 19 October 2005 04:34 pm, Mihail Manolov wrote:

 It's the way our code is written, and I just changed the table type and
 began getting this error. We have lots of data to insert into this
 table, therefore the optimal option is to use DELAYED and insert them in
 blocks.

 Not sure why they say that ARCHIVE storage engine is a new feature in 5.0?

I had to do a double take, as I thought it was. That could be why it's not
working 100%, as it's actually a feature of 5.0. However, I am guessing at
this point.

Jeff


pgpv8zNukrx9v.pgp
Description: PGP signature


MySQL error 1267: Illegal mix of collations (latin1_swedish_ci,COERCIBLE) and (utf8_general_ci,IMPLICIT) for operation 'UNION' -- again

2005-10-19 Thread Jeff Kolber
Hi list,

I've got a query coming out of sugarCRM that is generating this error:

MySQL error 1267: Illegal mix of collations
(latin1_swedish_ci,COERCIBLE) and (utf8_general_ci,IMPLICIT) for
operation 'UNION'

I recently converted the entire database to utf8: made sure all the
connections are utf8, made PHP use utf8, and set the doctype on the
page to utf8. When I run the same query in the mysql monitor it runs
fine; when apache/php runs it, it fails to deal with the collation.

The data was converted via mysqldump to a text file and reimported,
changing all tables/databases to utf8.
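A diagnostic sketch along those lines (an editorial suggestion, not from the thread; 'utf8' is assumed to be the intended target charset):

```sql
-- compare what each client actually negotiates
SHOW VARIABLES LIKE 'character_set%';
SHOW VARIABLES LIKE 'collation%';
-- force client/connection/results charsets for this session
SET NAMES 'utf8';
```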

fwiw: the query looks like this:

( SELECT meetings.id , meetings.name , meetings.status , ' '
contact_name , ' ' contact_id , meetings.date_start ,
meetings.parent_id , meetings.parent_type , meetings.time_start ,
'meetings' panel_name FROM meetings where ( meetings.parent_id=
'63301596-6175-1b89-75df-431283170495' AND
meetings.parent_type='Opportunities' AND meetings.deleted=0 AND
(meetings.status='Planned')) AND meetings.deleted=0 ) UNION ALL (
SELECT tasks.id , tasks.name , tasks.status ,
CONCAT(CONCAT(jt0.first_name , ' '), CONCAT(jt0.last_name , ' '))
contact_name, tasks.contact_id , tasks.date_due , tasks.parent_id ,
tasks.parent_type , tasks.time_due , 'tasks' panel_name FROM tasks
LEFT JOIN contacts jt0 ON jt0.id= tasks.contact_id AND jt0.deleted=0
where ( tasks.parent_id= '63301596-6175-1b89-75df-431283170495' AND
tasks.parent_type='Opportunities' AND tasks.deleted=0 AND
(tasks.status='Not Started' OR tasks.status='In Progress' OR
tasks.status='Pending Input')) AND tasks.deleted=0 ) UNION ALL (
SELECT calls.id , calls.name , calls.status , ' ' contact_name , ' '
contact_id , calls.date_start , calls.parent_id , calls.parent_type ,
calls.time_start , 'calls' panel_name FROM calls where (
calls.parent_id= '63301596-6175-1b89-75df-431283170495' AND
calls.parent_type='Opportunities' AND calls.deleted=0 AND
(calls.status='Planned')) AND calls.deleted=0 )

and in this case it doesn't return anything - which is correct given the data.

we are using: mysql  Ver 14.7 Distrib 4.1.10a, for redhat-linux-gnu (i386)
Server characterset:utf8
Db characterset:utf8
Client characterset:utf8
Conn.  characterset:utf8

I've seen some claims that versions before 4.1.11 suffered from
collation issues. Is this likely to be my case, or can anyone see some
other path through this? We have a single production database that we
are very reluctant to update at this time.

Should I just switch back to latin1?

thanks mysql list - you guys rock,

lost in translation

Jeff


Re: implicit cast forces table scan?

2005-10-19 Thread Jeff Smelser
On Wednesday 19 October 2005 01:15 pm, Olaf Faaland wrote:
 The queries in question are:

 This query uses the index:
 mysql explain
 - select itran_log_date, itran_log_actionid from itran_log where
 - itran_log_actionid = 170807;

 This query performs a table scan:
 mysql explain
 - select itran_log_date, itran_log_actionid from itran_log where
 - itran_log_actionid = 170807;

 My question is this: is the issue here that mysql is converting every
 single itran_log_actionid value, from all 1.5 million rows, and hence the
 index is not useful and not used?  My initial assumption was that the
 constant value 170807 in the second query, would be converted to text
 before the query was executed, and so the index could be used.  This does
 not seem to be the case.

 I ask both for my own edification, and also because it seems to me this
 should be mentioned in the manual for newbies like myself.

It doesn't know what value you're giving it. If it tried to assume and convert
the data, you could have 17h120, and the conversion would fail.
Neither MySQL nor any other DB should, or does, assume anything.
It just happens that in your case the column holds numeric data. If that's the
case, you should have made the column numeric in type (int or whatever).

Jeff


pgpNnLJGNfYAz.pgp
Description: PGP signature


Database user Permissions

2005-10-12 Thread Jeff
Just rebuilt one of my servers and when setting up MySQL again an old
problem I'd had and worked around came up again.

Why is it that if I grant a user@'%' permissions, that user can access
the database from any other machine on the network, but that same user
logon accessing the db from the local system, fails until I actually
create another grant record for [EMAIL PROTECTED]
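The two grant records being described can be sketched like this (user, database, and privilege names are placeholders, not from the post):

```sql
GRANT ALL ON mydb.* TO 'appuser'@'%'         IDENTIFIED BY 'secret';
GRANT ALL ON mydb.* TO 'appuser'@'localhost' IDENTIFIED BY 'secret';
```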

It's not a huge problem but I'd like to understand it better.

Thanks,

Jeff






RE: Database user Permissions

2005-10-12 Thread Jeff
Shawn,

Thanks again for responding :o)

All understood. It seems to me, though, that this is achieved when you
create the user by specifying where the specific user can log in from.

So granting permissions to user@'%' means from anywhere while
[EMAIL PROTECTED] means only when they access from that server.  I guess
that could be easily spoofed though.

In any event, thanks for a thorough answer, at least I know the behavior
is truly by design.

Jeff
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, October 12, 2005 09:53
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: Re: Database user Permissions
 
 
 Jeff [EMAIL PROTECTED] wrote on 10/12/2005 08:43:16 AM:
 
  Just rebuilt one of my servers and when setting up MySQL 
 again an old
  problem I'd had and worked around came up again.
  
  Why is it that if I grant a user@'%' permissions, that user 
 can access
  the database from any other machine on the network, but 
 that same user
  logon accessing the db from the local system, fails until I actually
  create another grant record for [EMAIL PROTECTED]
  
  It's not a huge problem but I'd like to understand it better.
  
  Thanks,
  
  Jeff
  
 
http://dev.mysql.com/doc/mysql/en/adding-users.html

The security system wisely treats local users and remote users 
differently. For a truly secure server, someone must be physically at
the 
machine in order to make a localhost login attempt. This presumes that 
some level of physical security also protects that machine. If an 
administrator had only one account, it wouldn't make a difference from 
where they logged in. That would be a hole in the security plan as you
now 
have exposed admin rights beyond the server's physical security
perimeter.

Think about it in terms of James Bond or Mission Impossible. They 
wouldn't need to break into the vault containing the database computer
if 
an administrative account could do what they wanted from outside, would 
they?  With the two-tier system, an administrator could have limited 
privileges when not physically at the console and full privileges while
at 
the console. 

Of course, logging in to the server through SSH, telnet, or some other 
remote terminal software defeats this kind of security check as the user

now appears to be at the local terminal. Oh, well. It is not perfect but

it is better than nothing at all!

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine






Re: Detect if table exists from within MySQL?

2005-10-06 Thread Jeff Smelser
On Thursday 06 October 2005 10:57 am, Ryan Stille wrote:
 I am converting some code from MSSQL to MySQL.  In one place I need to
 have a conditional query depending on if a table exists or not.  There
 are different versions of this application and the table only exists in
 some of them.  Here is how it was done in MSSQL:

 IF OBJECT_ID('cfgbiz') IS NOT NULL
   SELECT notifyto FROM cfgbiz
 ELSE
   SELECT '' as notifyto

 Is there something similar in MySQL? I am running version 4.1.x.

5.x has information_schema.

However...

In 4.1, you can use SHOW TABLES LIKE '%tab%' and you will get a row back if the
table exists. WARNING: MyISAM will be quick, but InnoDB will be really slow,
especially if the tables are big.
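Sketches of both approaches mentioned (table name taken from the original question; the information_schema form assumes 5.0+):

```sql
-- 4.1: returns a row only if the table exists
SHOW TABLES LIKE 'cfgbiz';

-- 5.0+: count via information_schema
SELECT COUNT(*) FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'cfgbiz';
```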

Jeff


pgphkfOW9sFUY.pgp
Description: PGP signature


Re: Cursors in 5.0.13

2005-10-05 Thread Jeff Smelser
On Wednesday 05 October 2005 04:05 am, Rob Hall wrote:
 Having a few problems using cursors in 5.0.13 and I don't know whether it's
 an 'operator error' :)

 Should this work?

when loading a procedure, do show warnings after it.. It tells you what 
problems its running into.. 

So what error are you getting?

Jeff


pgpZjgXtyciGn.pgp
Description: PGP signature


Re: Encryption for mySQL 3.23

2005-10-04 Thread Jeff Pflueger
Ah hah, you are right about the installation: mysql -V shows 4.1,
but SHOW VARIABLES shows 3.something, so something went wrong with
the update.

Thanks for that.

Now, if I encrypt using AES_ENCRYPT and expect somebody else to decrypt
outside of MySQL, I will need to provide the 'password' and the 128-bit
key... correct?

Where might I find the key so that I can send it along to them? Any
suggestions on this?


Thank you for your help.

Jeff

[EMAIL PROTECTED] wrote:



Make sure your upgrade actually took:

SHOW VARIABLES like 'ver%';
+-+---+
| Variable_name   | Value |
+-+---+
| version | 4.1.12-nt-log |
| version_comment | Official MySQL binary |
| version_compile_machine | ia32  |
| version_compile_os  | Win32 |
+-+---+
4 rows in set (0.02 sec)

spquartz.local_chem> select AES_ENCRYPT('hello','password');
+-+
| AES_ENCRYPT('hello','password') |
+-+
| ≤h¶
╔▐nφ≈₧(Lα╡⌐p|
+-+
1 row in set (0.11 sec)

If you didn't upgrade completely, the old server is still going to be 
running and it won't recognize the new function. As you can see, it's 
working for me and I am on the same version. The manual says that 
AES_ENCRYPT was added as of 4.0.2 so make sure your server is better 
than that.


Sorry to not be much help

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Jeff Pflueger [EMAIL PROTECTED] wrote on 10/04/2005 04:12:39 PM:

 Thank you for the help. The upgrade was a good suggestion, so I havve
 completed it, but I am now having syntax problems

 Why might I be getting the following message:

 ERROR 1064 (0): You have an error in your SQL syntax near
 '('hello','password')' at line 1


 When I type this in at the command line:

 SELECT AES_ENCRYPT('hello','password');



 Version info: mysql  Ver 14.7 Distrib 4.1.12, for redhat-linux-gnu
 (i686) using readline 4.3

 Thanks for any help

 [EMAIL PROTECTED] wrote:

 
 
  Jeff Pflueger [EMAIL PROTECTED] wrote on 10/03/2005 03:46:09 PM:
 
   Hi,
   I need to encrypt data as I insert it into a mySQL database.
   The data will then be sent as a text file to another institution 
to be

   decrypted.
  
   I am using mySQL version 11.18 Distrib 3.23.58 for RedHat Linux.
  
   I cannot find a single encryption function in the documentation that
   seems to not create a syntax error for the version of mySQL I am 
using.

  
   I am also concerned because whatever encryption I am using needs 
to be

   decrypted outside of mySQL.
  
   Any suggestions?
  
   Thanks!
  
 
  I don't expect a de-encryptor to be able to disentagle encrypted data
  from the other file and record markers from outside of MySQL if you
  only send them the data file. If you do send them the datafile
  (as-is), then they will need a MySQL server to read the data. They
  will need to decrypt the data on the client-side.
 
  Did you mean to say that an EXTRACT of the data will be written to a
  text file and encrypted before being sent to your other location? 
That

  may be a better solution for transport security.
 
  As another point of design why are you attempting to use the old,
  decrepit 3.23 branch?  At least upgrade to the 4.1 branch. You gain
  encryption options and stability and lots of other improvements with
  the move to the newer version.
 
  Shawn Green
  Database Administrator
  Unimin Corporation - Spruce Pine




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Encryption for mySQL 3.23

2005-10-03 Thread Jeff Pflueger

Hi,
I need to encrypt data as I insert it into a mySQL database.
The data will then be sent as a text file to another institution to be
decrypted.

I am using mySQL version 11.18 Distrib 3.23.58 for RedHat Linux.

I cannot find a single encryption function in the documentation that
seems to not create a syntax error for the version of mySQL I am using.

I am also concerned because whatever encryption I am using needs to be
decrypted outside of mySQL.

Any suggestions?

Thanks!





encryption syntax

2005-10-03 Thread Jeff Pflueger

Anybody have an idea why I might be getting the following message:

ERROR 1064 (0): You have an error in your SQL syntax near 
'('hello','password')' at line 1



When I type this in at the command line:

SELECT AES_ENCRYPT('hello','password');



Version info: mysql  Ver 14.7 Distrib 4.1.12, for redhat-linux-gnu 
(i686) using readline 4.3


Thanks for any help




RE: MyISAM to InnoDB

2005-09-29 Thread Jeff

 Jeff wrote:
  
  Ugh...
  
  mysqladmin -uroot -ptelaurus processlist | grep -c Sleep
  
  And it returned 200 sleeping connections, all persistant 
 connections 
  from our app servers and 4 threads_running
  
  Also a show status gave me a max_used_connections of 236.
  
  If that's the case then I can probably only set it to about 
 250 which 
  means if I set my innodb_buffer_pool_size = 100M  and dropping my 
  key_buffer_size to 250, I'll need 1884M of ram according to the 
  formula above, which is dangerously close to the 2G limit 
 specified in 
  the warning on the link above.
  
  Currently the key_reads to Key_reads_requests is about 
 1:1970 with the 
  key_buffer_size of 384M, so I guess I can safely drop this to 250M
  
  Even if I changed the entire DB over to InnoDB, and pushed the 
  key_buffer_size down really low it wouldn't drop the total memory 
  usage below 1600M.
  
  So what is this telling me?  I need more ram or less 
 connections or I 
  should just stay with MyISAM?
  
  Thanks,
  
  Jeff
  
 
 I would suggest taking a hard look at why your application 
 servers are 
 creating 200 sleeping connections, and if that is necessary. You may 
 also be able to reduce sort_ and read_buffer_size to 1M each, but I 
 couldn't tell you how that might affect your application, so 
 you may not 
 want to do that. (Does anyone on the list have experience 
 modifying these?)
 
 I think the biggest issue will be the system's RAM - the 2G limit on 
 MySQL's total allocated RAM is a per-process hard limit on 32-bit 
 architecture, but most 32-bit systems benefit greatly from 
 having more 
 than 2G total RAM (the OS may use the rest for disk caching, 
 etc). If, 
 say, your server had 4G RAM, then you could safely configure MySQL to 
 use very close to 2G, and performance should fly. With only 2G in the 
 system, setting MySQL to use as much RAM as possible would 
 leave next to 
 nothing for the OS or other processes, and that is the 
 problem (as I see 
 it).
 
 However, that said, more RAM is not always the answer. You 
 may get much 
 more of a performance increase by modifying your application code so 
 that it doesn't waste so many connections (thus allowing you to 
 allocate plenty of RAM to the innodb_buffer_pool).
 
 Of course, you can do both (just to play it safe, right?).  ;)
 

Well, the applications with persistent connections are a touchy subject.
Our apps send and receive data over satellite links, which are very
expensive; the shorter the duration of the link, the less it costs us. So
the prevailing theory is that with persistent connections the apps will
spend less time reconnecting to and disconnecting from the db. Even
fractions of a second count when you're talking about thousands of
connections a day and we are charged by the second for airtime. That's
the whole driving force behind wanting to switch over to InnoDB: the
thought is it would give us faster writes when we have a hundred apps
trying to write at or very near the same time, because of InnoDB's
record-level locking as opposed to MyISAM's table-level locking during
writes and updates.

Now, the question is: if we need to drop the persistent connections in
order to move to the InnoDB engine, will the speed benefit of
record-level locking outweigh what is lost by not having persistent
connections?

That being said, having just looked at our connections for the past
60 minutes, during what is roughly our peak time, I only see about 350,
which is roughly one every 10 seconds, with a rough average connection
time of about 28 seconds, most of which is transfer of data and not db
reads/writes/updates. So I believe I can make an educated guess from that
information that MyISAM table locking is not the real bottleneck here,
and therefore it's probably not going to do us a lot of good to switch to
InnoDB, especially with our current hardware and application behavior.
Thoughts?

At some point, however, as our traffic grows, we probably will hit a
point where db reads/writes/updates start to become a bottleneck, and
we'll need to look at moving to a 64-bit arch, >2 gig of RAM, and the
InnoDB engine. What status variables should I be looking at to see if we
have a lot of reads/writes/updates being delayed?


Thanks,

Jeff






RE: MyISAM to InnoDB

2005-09-29 Thread Jeff McKeon
 Jeff [EMAIL PROTECTED] wrote on 09/29/2005 08:47:52 AM:
 
  
   Jeff wrote:

Ugh...
mysqladmin -uroot -ptelaurus processlist | grep -c 
 Sleep And it 
returned 200 sleeping connections, all persistant
   connections
from our app servers and 4 threads_running
Also a show status gave me a max_used_connections of 236. If 
that's the case then I can probably only set it to about
   250 which
means if I set my innodb_buffer_pool_size = 100M  and 
 dropping my
key_buffer_size to 250, I'll need 1884M of ram according to the 
formula above, which is dangerously close to the 2G limit 
   specified in
the warning on the link above.

Currently the key_reads to Key_reads_requests is about
   1:1970 with the
key_buffer_size of 384M, so I guess I can safely drop 
 this to 250M

Even if I changed the entire DB over to InnoDB, and pushed the
key_buffer_size down really low it wouldn't drop the 
 total memory 
usage below 1600M.

So what is this telling me?  I need more ram or less
   connections or I
should just stay with MyISAM?

Thanks,

Jeff

   
   I would suggest taking a hard look at why your application
   servers are 
   creating 200 sleeping connections, and if that is 
 necessary. You may 
   also be able to reduce sort_ and read_buffer_size to 1M 
 each, but I 
   couldn't tell you how that might affect your application, so 
   you may not 
   want to do that. (Does anyone on the list have experience 
   modifying these?)
   
   I think the biggest issue will be the system's RAM - the 
 2G limit on
   MySQL's total allocated RAM is a per-process hard limit on 32-bit 
   architecture, but most 32-bit systems benefit greatly from 
   having more 
   than 2G total RAM (the OS may use the rest for disk caching, 
   etc). If, 
   say, your server had 4G RAM, then you could safely 
 configure MySQL to 
   use very close to 2G, and performance should fly. With 
 only 2G in the 
   system, setting MySQL to use as much RAM as possible would 
   leave next to 
   nothing for the OS or other processes, and that is the 
   problem (as I see 
   it).
   
   However, that said, more RAM is not always the answer. You
   may get much 
   more of a performance increase by modifying your 
 application code so 
   that it doesn't waste so many connections (thus allowing you to 
   allocate plenty of RAM to the innodb_buffer_pool).
   
   Of course, you can do both (just to play it safe, right?).  ;)
   
  
  Well the applications with persistant connections is a 
 touchy subject. 
  Our apps send and rec data over satelite links which are very 
  expensive. The shorter the duration of the link the less it 
 costs us.  
  So the pervailing theory is that with persistant 
 connections the apps 
  will spend less time re-connecting/dis-connecting from the 
 db.  Even 
  fractions of a second counts when you're talking about thousands of 
  connections a day and we are charged by the second for airtime.
 
 And all of those sleeping connections are costing you how 
 much in unused 
 air time? 

I think there's a misunderstanding here.  The applications run on
servers in our datacenter and wait for client connections to call in.
The client, which is another server on the other end of the sat link,
transfers the data to our apps, and our apps send data to it (depending
on whether or not data is waiting for it; one of the db queries tells
the local app this), and the local apps in turn write the connection
information to the database.

Compared with many other databases, the cost (time 
 and data) of 
 making and breaking a MySQL connection is cheap. Try a small 
 set of test 
 cases and see for yourself. Maybe you could move 10 of your 
 normal clients 
 from using your persistent connections into a 
 connect-as-needed model and 
 see what that does to your air-time, sleeping connection 
 counts, and total 
 throughput.
 
 The only way to know for certain is to try it in your 
 environment but I 
 know that in the world of web development (where connections are also 
 precious and throughput is king) that being connected only 
 when necessary 
 usually works much better than trying to stay connected all 
 of the time. 
 By minimizing the communications overhead imposed on the server by 
 maintaining unused open connections, the server should be 
 able to respond 
 better. You should not only have less dead air but each connection 
 itself will take less time as the server will be more responsive.
 
 Remember, I recommend making and breaking connections around 
 blocks of 
 execution not per-statement. Let's say you have a lookup 
 routine that 
 uses 6 queries and massages the data into something useful 
 client-side. It 
 makes no sense to flip a connection 6 times for those 6 
 queries as they 
 are all part of one larger process. Prepare your SQL 
 statements as much as 
 possible, make one connection, run the 6 queries, cache

RE: MyISAM to InnoDB

2005-09-29 Thread Jeff
 -Original Message-
 From: Devananda [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, September 29, 2005 14:56
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: Re: MyISAM to InnoDB
 
 
 Jeff wrote:
  
  Well the applications with persistant connections is a 
 touchy subject. 
  Our apps send and rec data over satelite links which are very 
  expensive. The shorter the duration of the link the less it 
 costs us.  
  So the pervailing theory is that with persistant 
 connections the apps 
  will spend less time re-connecting/dis-connecting from the 
 db.  Even 
  fractions of a second counts when you're talking about thousands of 
  connections a day and we are charged by the second for airtime.  
  That's the whole driving force behind wanting to switch over to 
  InnoDB.  The thought is it would give us faster writes when 
 we have a 
  hundred apps trying to write at or very near the same time 
 because of 
  the record level locking as opposed to the MyISAM Table 
 level locking 
  during writes and updates.
  
  Now, the question is, if we need to drop the persistant 
 connections in 
  order to move to an InnoDB engine, will the speed benefit of record 
  level locking outweigh what is lost by not having persistant 
  connections?
 
 The only way to know is to test it in your environment. I 
 don't believe 
 anyone on the list could answer that question with certainty.
 
 Just out of curiosity, I wrote a couple scripts in perl to 
 very loosely 
 test this.
 --
 [EMAIL PROTECTED] - test]# cat loop.sh
 #!/bin/bash
 
 for x in `seq 1 10`; do
   $1
 done
 --
 [EMAIL PROTECTED] - test]# cat con.pl
 #!/usr/bin/perl
 
 use strict;
 use warnings;
 require DBI;
 
 print "Start\n";
 my $con = DBI->connect("DBI:mysql::db3-p", 'user', 'pass')
  or die("Failed to connect!");
 print "Connected!\n";
 exit;
 --
 [EMAIL PROTECTED] - test]# cat nocon.pl
 #!/usr/bin/perl
 
 use strict;
 use warnings;
 require DBI;
 
 print "Start\n";
 #my $con = DBI->connect("DBI:mysql::db3-p", 'user', 'pass')
 #   or die("Failed to connect!");
 print "Skipped Connecting!\n";
 exit;
 ---
 time ./loop.sh ; time ./loop.sh ./nocon.pl >/dev/null; time ./loop.sh
 ./con.pl >/dev/null
 
 ((( bash script overhead )))
 real0m0.004s
 user0m0.002s
 sys 0m0.002s
 ((( perl script with no connection )))
 real0m0.595s
 user0m0.520s
 sys 0m0.057s
 ((( same perl script with connection )))
 real0m0.781s
 user0m0.682s
 sys 0m0.064s
 
 Now, I know this is *far* from an accurate test, and doesn't 
 demonstrate 
 any of the specifics of your servers, but it does show that, on my 
 servers, with perl, there is roughly a 0.02sec real and 0.007sec sys 
 overhead to make and close the connection. Take that for what 
 you will.
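[Editorial sketch:] The per-connection overhead implied by those timings can be checked with a quick back-of-the-envelope calculation; the 10-iteration count comes from loop.sh above, and the figures are the `real` times Devananda reported:

```python
# `time` results for 10 iterations of each script (seconds, from the post above)
real_nocon = 0.595  # perl startup only, DBI connect commented out
real_con = 0.781    # perl startup plus DBI connect/disconnect

iterations = 10
per_connect = (real_con - real_nocon) / iterations
print(f"approx. connect overhead: {per_connect:.4f}s")  # prints 0.0186s
```

This reproduces the roughly 0.02 s per make-and-break connection quoted in the message.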
  
  That being said and having just looked at our connections 
 for the past 
  60 minutes during what is our roughly our peak time I only 
 see about 
  350 which is roughly one every 10 seconds with a rough avg 
 connection 
  time of about 28 seconds most of which is transfer of data 
 and not db 
  read/write/updates.  So, I believe, from that information I 
 can make 
  an educated guess that the MyISAM table locking is not the real 
  bottleneck here and therefore it's probably not going to do 
 us a lot 
  of good to switch to InnoDB, especially with our current 
 hardware and 
  application behavior.  Thoughts?
 
 With one connection every 10 seconds, I don't understand how 
 table lock 
 contention is a concern, unless your queries are so large 
 that they lock 
 the table for *that* long. If so, are they properly indexed?
 
 It doesn't sound like that is your problem though, so that's not a 
 reason to move to InnoDB.
 
  
  At some point however, as our traffic grows we probably will hit a 
  point where the db read/write/updates will start to become a 
  bottleneck and we'll need to look at moving to a 64bit 
 arch, 2gig ram 
  and the InnoDB engine.  What status variables should I be 
 looking at 
  to see if we have a lot of read/write/updates being delayed?
  
 
 See http://dev.mysql.com/doc/mysql/en/internal-locking.html
 and http://dev.mysql.com/doc/mysql/en/show-status.html
 
 Table_locks_immediate
 
 The number of times that a table lock was acquired 
 immediately. This variable was added as of MySQL 3.23.33.
 
 Table_locks_waited
 
 The number of times that a table lock could not be acquired 
 immediately and a wait was needed. If this is high, and you 
 have performance problems, you should first optimize your 
 queries, and then either split your table or tables or use 
 replication. This variable was added as of MySQL 3.23.33.
 
 For example, this is from our MyISAM server (uptime 200days, 
 7% selects, 
 very un-optimized but still performs well enough),
 
 mysql show status like 'table%';
 Table_locks_immediate   12810013
 Table_locks_waited  306450
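[Editorial sketch:] Those two counters can be turned into a contention ratio, which is the usual way to judge whether MyISAM table locking is the bottleneck; a minimal calculation using the numbers quoted above:

```python
# SHOW STATUS LIKE 'table%' values from the message above
table_locks_immediate = 12810013
table_locks_waited = 306450

waited_pct = 100 * table_locks_waited / (table_locks_immediate + table_locks_waited)
print(f"{waited_pct:.2f}% of table locks had to wait")  # prints 2.34%
```

So on that server roughly 2.3% of lock requests waited; a rising percentage here is the signal to optimize queries, split tables, or move the hot tables to InnoDB.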
 
 
 Hope that helps!
 Devananda vdv
 

All the information

RE: Weird database files

2005-09-28 Thread Jeff McKeon
 -Original Message-
 From: Gleb Paharenko [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, September 28, 2005 06:30
 To: mysql@lists.mysql.com
 Subject: Re: Weird database files
 
 
 Hello.
 
   On the master we're still running 4.0.16, the slaves are 
 up to 4.1.13.
 
 If you can - upgrade the  master server.
 

It's in the plans but that is our main production server so it's not
something we can just do at any time.  I've upgraded the slaves first
because generally you can replicate from an older version to a newer one
but not the other way around.

 Jeff McKeon wrote:
 Jeff wrote:
 
 Had problem with our database this weekend, apparently an app did an
 insert query that was huge size-wise and this totally boogered up
 replication downstream.  Also I can't read past that point in the
 binlog using mysqlbinlog on the master server.  It complains that:
 
 ERROR: Error in Log_event::read_log_event(): 'Event too big',
 data_len: 1953458240, event_type: 119
 ERROR: Could not read entry at offset 66113944 : Error in log format
 or read error
 
 And then there are the weird table files that showed up in the data
 directory for the database (all MyISAM):
 
 -rw-rw----    1 mysql    mysql     14K Sep 12 11:50  #sql-7c1c_217c.frm
 -rw-rw----    1 mysql    mysql    1.8G Sep 12 11:54  #sql-7c1c_217c.MYD
 -rw-rw----    1 mysql    mysql     92M Sep 12 12:09  #sql-7c1c_217c.MYI
 
 Anyone ever see something like this before?  Are they files for a temp
 table maybe?
 
 Jeff
 
 Hello.
 
 Yes, these files are from some unterminated query. See:
 http://dev.mysql.com/doc/mysql/en/temporary-files.html
 
 You may want to use --start-position (--start-datetime) and
 --stop-position (--stop-datetime) to skip the problematic statement
 and perform necessary updates on the slave by hand.  What versions
 of MySQL do you use?
 
 On the master we're still running 4.0.16, the slaves are up to 4.1.13.
  
  To repair the problem with replication I simply restarted 
 the master 
  so it created another binlog and then took a snapshot and recreated 
  the slaves.
  
  I found out just this morning however that one of the tables has a 
  corrupted MYI file.  When I try to run a query on it, I get...
  
  ERROR 1016: Can't open file: 'Mailbox_Old.MYI'. (errno: 144)
  
  Running perror I get:
  
  Error code 144:  Unknown error 144
  144 = Table is crashed and last repair failed
  
  I'm running mysqlcheck on the offending table now.
  
  Thanks,
  
  Jeff
  
  
 
 
 -- 
 
 MySQL AB / Ensita.NET   www.mysql.com
 
 
 
 
 
 





RE: MyISAM to InnoDB

2005-09-28 Thread Jeff
  Cut original thread because it was too long 

Ok so I'm about to convert two tables in my database from MyISAM to
InnoDB.  They are currently:

14K Sep 15 13:15 Table1.frm
2.1G Sep 28 14:15 Table1.MYD
198M Sep 28 14:15 Table1.MYI

11K Sep 20 08:45 Table2.frm
424K Sep 28 14:15 Table2.MYD
110K Sep 28 14:15 Table2.MYI

The system is only used as a database server, it's a dual processor
system with 2gig of ram.

As you can see, Table1's MyISAM data file is quite large at 2.1 gig.
Taking this into account what size InnoDB data files should I configure
in my my.cnf file?

I was thinking of this:

My.cnf

snip

[mysqld]

datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-bin
server-id=70
port = 3306
skip-locking
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 4
set-variable= max_connections=500

### InnoDB setup ###

# use default data directory for database
innodb_data_home_dir = /DATA/dbdata/
innodb_data_file_path =
/ibdata/ibdata1:2G;/ibdata/ibdata2:50M:autoextend:max:2G
innodb_log_group_home_dir = /DATA/dbdata/ibdata/iblogs

innodb_buffer_pool_size = 1G
innodb_additional_mem_pool_size = 20M
innodb_log_files_in_group = 3
innodb_log_file_size = 500M
innodb_log_buffer_size = 8M
innodb_buffer_pool_size = 1.5G
innodb_additional_mem_pool_size = 2M
innodb_file_io_threads = 4

/snip

But what happens if the ibdata2 fills up to the max of 2G?
I've got 50 gig available on the partition where the db data is stored.
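[Editorial sketch:] The capacity implied by that innodb_data_file_path line can be worked out directly (the "add more data files and restart" behavior is what Sujay confirms in his reply below):

```python
# innodb_data_file_path = /ibdata/ibdata1:2G;/ibdata/ibdata2:50M:autoextend:max:2G
fixed_gb = 2.0           # ibdata1 is created at a fixed 2G
autoextend_max_gb = 2.0  # ibdata2 starts at 50M and may grow to a 2G cap

total_capacity_gb = fixed_gb + autoextend_max_gb
print(f"max InnoDB tablespace: {total_capacity_gb}G")  # prints 4.0G
# Once ibdata2 reaches its 2G cap, InnoDB cannot grow further; a new data
# file must be appended to innodb_data_file_path and the server restarted.
```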

Is there anything else here that looks incorrect?

Thanks,

Jeff









RE: MyISAM to InnoDB

2005-09-28 Thread Jeff
 -Original Message-
 From: Sujay Koduri [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, September 28, 2005 11:04
 To: Jeff; mysql@lists.mysql.com
 Cc: [EMAIL PROTECTED]
 Subject: RE: MyISAM to InnoDB
 
 
 
 If you think your storage requiremnets will increase in 
 future, try to estimate how much you will be needing in the 
 future in the worst case and try allocating that much of disk 
 space now itself (Any way you have good amount of disk space left). 
 Try creating a different partition for storing the log files. 
 This will increase the performance
 

Well currently MySQL is set up in the default dir of /var/lib/mysql and
soft links to the database data residing on another partition
/DATA/dbname.

Should I maybe specify:

innodb_log_group_home_dir = /var/lib/mysql/iblogs/

I have about 9 gig available on /var so 1.5 gig of logs shouldn't be too
bad.

 Even if you don't do this and run out of space, you just have 
 to add more add data files and a restart the server.
 
 And for 2G RAM, its better to limit the 
 innodb_bufferpool_size to 1G. You can also look at the 
 query_cache_size parameter and try tuning that by running 
 some load tests.
 
 Apart from that everything is looking fine for me
 
 sujay 
 
 -Original Message-
 From: Jeff [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, September 28, 2005 8:22 PM
 To: mysql@lists.mysql.com
 Cc: [EMAIL PROTECTED]
 Subject: RE: MyISAM to InnoDB
 
    Cut original thread because it was too long
 
 Ok so I'm about to convert two tables in my database from 
 MyISAM to InnoDB. They are currently:
 
 14K Sep 15 13:15 Table1.frm
 2.1G Sep 28 14:15 Table1.MYD
 198M Sep 28 14:15 Table1.MYI
 
 11K Sep 20 08:45 Table2.frm
 424K Sep 28 14:15 Table2.MYD
 110K Sep 28 14:15 Table2.MYI
 
 The system is only used as a database server, it's a dual 
 processor system with 2gig of ram.
 
 As you can see, Table1's MyISAM data file is quite large at 
 2.1 gig. Taking this into account what size InnoDB data files 
 should I configure in my my.cnf file?
 
 I was thinking of this:
 
 My.cnf
 
 snip
 
 [mysqld]
 
 datadir=/var/lib/mysql
 socket=/var/lib/mysql/mysql.sock
 log-bin
 server-id=70
 port = 3306
 skip-locking
 key_buffer = 384M
 max_allowed_packet = 1M
 table_cache = 512
 sort_buffer_size = 2M
 read_buffer_size = 2M
 myisam_sort_buffer_size = 64M
 thread_cache = 8
 query_cache_size = 32M
 # Try number of CPU's*2 for thread_concurrency 
 thread_concurrency = 4 set-variable= max_connections=500
 
 ### InnoDB setup ###
 
 # use default data directory for database 
 innodb_data_home_dir = /DATA/dbdata/ 
 innodb_data_file_path =
/ibdata/ibdata1:2G;/ibdata/ibdata2:50M:autoextend:max:2G
 innodb_log_group_home_dir = /DATA/dbdata/ibdata/iblogs
 
 innodb_buffer_pool_size = 1G
 innodb_additional_mem_pool_size = 20M
 innodb_log_files_in_group = 3
 innodb_log_file_size = 500M
 innodb_log_buffer_size = 8M
 innodb_buffer_pool_size = 1.5G
 innodb_additional_mem_pool_size = 2M
 innodb_file_io_threads = 4
 
 /snip
 
 But what happens if the ibdata2 fills up to the max of 2G?
 I've got 50 gig available on the partition where the db data 
 is stored.
 
 Is there anything else here that looks incorrect?
 
 Thanks,
 
 Jeff
 
 
 
 
 
 
 






RE: MyISAM to InnoDB

2005-09-28 Thread Jeff
 -Original Message-
 From: Devananda [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, September 28, 2005 13:06
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: Re: MyISAM to InnoDB
 
 
 Jeff wrote:
   Cut original thread because it was too long
  
  
  The system is only used as a database server, it's a dual processor 
  system with 2gig of ram.
  
  As you can see, Table1's MyISAM data file is quite large at 
 2.1 gig. 
  Taking this into account what size InnoDB data files should I 
  configure in my my.cnf file?
  
  I was thinking of this:
  
  My.cnf
  
  snip
  
  [mysqld]
  
  datadir=/var/lib/mysql
  socket=/var/lib/mysql/mysql.sock
  log-bin
  server-id=70
  port = 3306
  skip-locking
  key_buffer = 384M
  max_allowed_packet = 1M
  table_cache = 512
  sort_buffer_size = 2M
  read_buffer_size = 2M
  myisam_sort_buffer_size = 64M
  thread_cache = 8
  query_cache_size = 32M
  # Try number of CPU's*2 for thread_concurrency 
 thread_concurrency = 4
  set-variable= max_connections=500
  
  ### InnoDB setup ###
  
  # use default data directory for database innodb_data_home_dir = 
  /DATA/dbdata/ innodb_data_file_path =
  /ibdata/ibdata1:2G;/ibdata/ibdata2:50M:autoextend:max:2G
  innodb_log_group_home_dir = /DATA/dbdata/ibdata/iblogs
  
  innodb_buffer_pool_size = 1G
  innodb_additional_mem_pool_size = 20M innodb_log_files_in_group = 3
  innodb_log_file_size = 500M
  innodb_log_buffer_size = 8M
  innodb_buffer_pool_size = 1.5G
 ((( duplicate setting, later-occurring one will take precedence )))
  innodb_additional_mem_pool_size = 2M
  innodb_file_io_threads = 4
  
  /snip
  
  But what happens if the ibdata2 fills up to the max of 2G? 
 I've got 50 
  gig available on the partition where the db data is stored.
  
  Is there anything else here that looks incorrect?
  
  Thanks,
  
  Jeff
  
 
 I agree with what Sujay suggested: you can set the 
 innodb_log_file_size 
 much smaller, and will get the same performance with better start-up 
 time. 100M x 3 log_files_in_group should be fine. Also I recommend 
 setting up your ibdata files large enough to anticipate need 
 initially. 
 If, or once, they are full, you will not be able to write to 
 tables in 
 InnoDB, so make sure that does not happen!
 
 However, I see a potential problem - you said your system only has 2G 
 RAM. Here's the formula for how much RAM MySQL can (worst case) use, 
 taken from http://dev.mysql.com/doc/mysql/en/innodb-configuration.html
 
 innodb_buffer_pool_size
 + key_buffer_size
 + 
 max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size)
 + max_connections*2MB
 
 1024M   ((( assuming you meant 1G and not 1.5G )))
 + 384M
 + 500 * (2M + 2M + ??)
 + 500 * 2M
 
 According to your config, this results in a minimum of 1408M + 6M * 
 current_connections. That doesn't leave much RAM for the 
 underlying OS 
 and any other processes running. And, far worse, if your application 
 servers attempted to establish more than 100 connections, MySQL could 
 not allocate enough memory for them, and would either crash 
 or deny new 
 connections.
 
 You need to adjust something in the formula - reduce 
 max_connections if 
 that is possible, or reduce the key_buffer_size if you do not 
 need to be 
 working with MyISAM tables on this server, or allocate less memory to 
 innodb_buffer_pool_size.
 
 
 Best Regards,
 Devananda
 

Since this server will have both InnoDB and MyISAM tables and 100+
connections, I'll need to reduce the innodb_buffer_pool_size.  I can
possibly drop the max_connections to 250 as well.

Thanks!

Jeff






RE: MyISAM to InnoDB

2005-09-28 Thread Jeff
 -Original Message-
 From: Devananda [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, September 28, 2005 13:06
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: Re: MyISAM to InnoDB
 
 
 Jeff wrote:
   Cut original thread because it was too long
  
  
  The system is only used as a database server, it's a dual processor 
  system with 2gig of ram.
  
  As you can see, Table1's MyISAM data file is quite large at 
 2.1 gig. 
  Taking this into account what size InnoDB data files should I 
  configure in my my.cnf file?
  
  I was thinking of this:
  
  My.cnf
  
  snip
  
  [mysqld]
  
  datadir=/var/lib/mysql
  socket=/var/lib/mysql/mysql.sock
  log-bin
  server-id=70
  port = 3306
  skip-locking
  key_buffer = 384M
  max_allowed_packet = 1M
  table_cache = 512
  sort_buffer_size = 2M
  read_buffer_size = 2M
  myisam_sort_buffer_size = 64M
  thread_cache = 8
  query_cache_size = 32M
  # Try number of CPU's*2 for thread_concurrency 
 thread_concurrency = 4
  set-variable= max_connections=500
  
  ### InnoDB setup ###
  
  # use default data directory for database innodb_data_home_dir = 
  /DATA/dbdata/ innodb_data_file_path =
  /ibdata/ibdata1:2G;/ibdata/ibdata2:50M:autoextend:max:2G
  innodb_log_group_home_dir = /DATA/dbdata/ibdata/iblogs
  
  innodb_buffer_pool_size = 1G
  innodb_additional_mem_pool_size = 20M innodb_log_files_in_group = 3
  innodb_log_file_size = 500M
  innodb_log_buffer_size = 8M
  innodb_buffer_pool_size = 1.5G
 ((( duplicate setting, later-occurring one will take precedence )))
  innodb_additional_mem_pool_size = 2M
  innodb_file_io_threads = 4
  
  /snip
  
  But what happens if the ibdata2 fills up to the max of 2G? 
 I've got 50 
  gig available on the partition where the db data is stored.
  
  Is there anything else here that looks incorrect?
  
  Thanks,
  
  Jeff
  
 
 I agree with what Sujay suggested: you can set the 
 innodb_log_file_size 
 much smaller, and will get the same performance with better start-up 
 time. 100M x 3 log_files_in_group should be fine. Also I recommend 
 setting up your ibdata files large enough to anticipate need 
 initially. 
 If, or once, they are full, you will not be able to write to 
 tables in 
 InnoDB, so make sure that does not happen!
 
 However, I see a potential problem - you said your system only has 2G 
 RAM. Here's the formula for how much RAM MySQL can (worst case) use, 
 taken from http://dev.mysql.com/doc/mysql/en/innodb-configuration.html
 
 innodb_buffer_pool_size
 + key_buffer_size
 + 
 max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size)
 + max_connections*2MB
 
 1024M   ((( assuming you meant 1G and not 1.5G )))
 + 384M
 + 500 * (2M + 2M + ??)
 + 500 * 2M
 
 According to your config, this results in a minimum of 1408M + 6M * 
 current_connections. That doesn't leave much RAM for the 
 underlying OS 
 and any other processes running. And, far worse, if your application 
 servers attempted to establish more than 100 connections, MySQL could 
 not allocate enough memory for them, and would either crash 
 or deny new 
 connections.
 
 You need to adjust something in the formula - reduce 
 max_connections if 
 that is possible, or reduce the key_buffer_size if you do not 
 need to be 
 working with MyISAM tables on this server, or allocate less memory to 
 innodb_buffer_pool_size.
 
 
 Best Regards,
 Devananda
 

Ugh...

mysqladmin -uroot -ptelaurus processlist | grep -c Sleep 

And it returned 200 sleeping connections, all persistent connections
from our app servers, and 4 threads_running.

Also a show status gave me a max_used_connections of 236.  

If that's the case then I can probably only set max_connections to about
250, which means if I set innodb_buffer_pool_size = 100M and drop my
key_buffer_size to 250M, I'll need 1884M of RAM according to the formula
above, which is dangerously close to the 2G limit specified in the
warning on the link above.
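[Editorial sketch:] Plugging these adjusted numbers into the worst-case memory formula quoted earlier in this thread gives a figure in the same ballpark; binlog_cache_size was not stated, so its 32K default is assumed here:

```python
MB = 1  # work in megabytes throughout

innodb_buffer_pool_size = 100 * MB
key_buffer_size = 250 * MB
max_connections = 250
sort_buffer_size = 2 * MB
read_buffer_size = 2 * MB
binlog_cache_size = 0.03125 * MB  # assumed 32K default, not given in the thread
per_thread = 2 * MB               # the flat per-connection term from the formula

worst_case = (innodb_buffer_pool_size + key_buffer_size
              + max_connections * (sort_buffer_size + read_buffer_size
                                   + binlog_cache_size)
              + max_connections * per_thread)
print(f"worst case: ~{worst_case:.0f}M")  # prints ~1858M
```

That lands near the 1884M figure above (the small gap presumably comes from buffers the formula omits), and either way it crowds the 2G per-process limit on 32-bit systems.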

Currently the Key_reads to Key_read_requests ratio is about 1:1970 with a
key_buffer_size of 384M, so I guess I can safely drop this to 250M.
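[Editorial sketch:] That 1:1970 ratio corresponds to a very low key-cache miss rate, which is why shrinking the key buffer looks safe:

```python
# Key_reads : Key_read_requests observed on the server, per the message above
key_reads_per_request = 1 / 1970

miss_rate_pct = 100 * key_reads_per_request
print(f"key cache miss rate: {miss_rate_pct:.3f}%")  # prints 0.051%
```

A common rule of thumb is to keep this ratio well below about 1%, so at roughly 0.05% there is plenty of headroom before the smaller buffer starts forcing disk reads.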

Even if I changed the entire DB over to InnoDB and pushed the
key_buffer_size down really low, it wouldn't drop the total memory usage
below 1600M.

So what is this telling me?  That I need more RAM or fewer connections,
or that I should just stay with MyISAM?

Thanks,

Jeff



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Weird database files

2005-09-27 Thread Jeff McKeon
 Jeff wrote:
  Had problem with our database this weekend, apparently an
 app did an
  insert query that was huge size wise and this totally boogered up
  replication downstream.  Also I cant read past that point in the 
  binlog using mysqlbinlog on the master server.  It complains that:
  
  ERROR: Error in Log_event::read_log_event(): 'Event too big',
  data_len: 1953458240, event_type: 119
  ERROR: Could not read entry at offset 66113944 : Error in 
 log format
  or read error
  
  And then there are the weird table files that showed up in the data
  directory for the database (all MyISAM):
  
  -rw-rw1 mysqlmysql 14K Sep 12 11:50
  #sql-7c1c_217c.frm
  -rw-rw1 mysqlmysql1.8G Sep 12 11:54
  #sql-7c1c_217c.MYD
  -rw-rw1 mysqlmysql 92M Sep 12 12:09
  #sql-7c1c_217c.MYI
  
  Anyone ever see something like this before?  Are they files
 for a temp
  table maybe?
  
  Jeff
  
 
 Hello.
 
 Yes, these files are from some unterminated query. See:
http://dev.mysql.com/doc/mysql/en/temporary-files.html
 
 You may want to use --start-position (--start-datetime) and
 --stop-position (--stop-datetime) to skip the problematic 
 statement and perform necessary updates on the slave by hand. 
 What versions of 
 MySQL do you use?
 

On the master we're still running 4.0.16, the slaves are up to 4.1.13.  

To repair the problem with replication I simply restarted the master so
it created another binlog and then took a snapshot and recreated the
slaves.

I found out just this morning, however, that one of the tables has a
corrupted .MYI file.  When I try to run a query on it, I get...

ERROR 1016: Can't open file: 'Mailbox_Old.MYI'. (errno: 144)

Running perror I get:

Error code 144:  Unknown error 144
144 = Table is crashed and last repair failed

I'm running mysqlcheck on the offending table now.
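The check/repair step can also be done from SQL; a hedged sketch (the table name comes from the error above, and REPAIR TABLE assumes a MyISAM table on a running server):

```sql
-- check the crashed table, then rebuild its .MYI index file
CHECK TABLE Mailbox_Old;
REPAIR TABLE Mailbox_Old;
-- if the server cannot even open the table, running myisamchk -r on the
-- .MYI file with the server stopped is the offline alternative
```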

Thanks,

Jeff





Weird database files

2005-09-25 Thread Jeff
Had a problem with our database this weekend; apparently an app did an
insert query that was huge size-wise, and this totally boogered up
replication downstream.  Also I can't read past that point in the binlog
using mysqlbinlog on the master server.  It complains that: 

ERROR: Error in Log_event::read_log_event(): 'Event too big', data_len:
1953458240, event_type: 119
ERROR: Could not read entry at offset 66113944 : Error in log format or
read error

And then there are the weird table files that showed up in the data
directory for the database (all MyISAM):

-rw-rw1 mysqlmysql 14K Sep 12 11:50
#sql-7c1c_217c.frm
-rw-rw1 mysqlmysql1.8G Sep 12 11:54
#sql-7c1c_217c.MYD
-rw-rw1 mysqlmysql 92M Sep 12 12:09
#sql-7c1c_217c.MYI

Anyone ever see something like this before?  Are they files for a temp
table maybe?

Jeff






RE: MyISAM to InnoDB

2005-09-23 Thread Jeff
 -Original Message-
 From: Devananda [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, September 22, 2005 19:03
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: Re: MyISAM to InnoDB
 
 
 Jeff wrote:
 -Original Message-
 From: Devananda [mailto:[EMAIL PROTECTED]
 Sent: Thursday, September 22, 2005 16:14
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: Re: MyISAM to InnoDB
 
 
 Jeff wrote:
 
 True, is there a way to tell a slave to not replicate
 
 certain queries
 
 like alter table or would I need to get creative and stop
 
 replication
 
 and all writes to the main database, then issue the alter table
 statement, then restart replication with a  set global 
 slave_sql_skip_counter=1 so that it skips the alter statemtent?
 
 There's a much easier way - issue the statement SET
 SQL_LOG_BIN = 0; 
 before issuing any ALTER TABLE statements. This will cause all 
 statements for the duration of that session to not be 
 written to the 
 binlog. See http://dev.mysql.com/doc/mysql/en/set-option.html 
 for more 
 information.
 
  
  
  First off, thanks for the help to you and Bruce both!
  
 You're quite welcome, Jeff :)
 
  When you say here, for the duration of that session does 
 that mean 
  that only queries I issue with my connection skip the 
 binlog?  Or do 
  all queries during that time skip the binlog.  In other 
 words, when I 
  SET SQL_LOG_BIN = 0; should I first stop all applications 
 writing to 
  the database to prevent missing data in the slaves?
  
 
 
 It only affects that connection. Bruce wrote a response at about the 
 same time I did; his covers this topic as well. SQL_LOG_BIN 
 is a session 
 variable, meaning that it only affects the current session 
 (connection). 
 So, any applications running at the same time will not be 
 affected by a 
 change to this variable, and if you close your client and 
 reconnect, you 
 will have to set the variable again. As Bruce suggested, it's best to 
 set it only when you need it and unset it immediately 
 afterwards (as a 
 precaution against operator error, not because it affects the server).
 
 I do want to point out that while the commands you issue 
 (after setting 
 SQL_LOG_BIN to 0) will not be written to the binlog (thus 
 will not run 
 on any slave reading from this server), they may affect other running 
 processes on the server. If, for example, you run an ALTER TABLE on a 
 table currently in MyISAM format, the table will be locked and all 
 processes running on that server that read from / write to that table 
 will wait until that ALTER finishes. Setting SQL_LOG_BIN to 0 doesn't 
 affect this in any way - it _only_ affects whether statements 
 from that 
 specific session are recorded in the binary log.
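Putting this advice together, a minimal sketch (Table1 is the table name used in this thread; TYPE= is the 4.0-era spelling, later versions use ENGINE=):

```sql
-- run all of this in ONE session: SQL_LOG_BIN is session-scoped
SET SQL_LOG_BIN = 0;                -- this connection's statements skip the binlog
ALTER TABLE Table1 TYPE = InnoDB;   -- convert locally without replicating the ALTER
SET SQL_LOG_BIN = 1;                -- re-enable immediately afterwards as a precaution
```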
 

Thanks, that answers my question regarding the SQL_LOG_BIN variable. 

 Side question - you've stated that you are planning to migrate to 
 InnoDB, but you haven't said anything to the list about how much data 
 you have. Just be aware that it can take a lot of time and disk space 
 for MySQL to transfer all your data from one format to the other (of 
 course depending on how much data you have) and if anything 
 goes wrong 
 during that time, the results will probably not be what you 
 expect, or 
 want. I would advise you to at least investigate an alternate 
 approach 
 if you have a lot of data - take the server you are going to 
 migrate out 
 of the 'cluster' and make sure it is not processing any data / no 
 clients are connecting to it; dump all your data to text files, 
 preferably separating your data definition statements (ie 
 CREATE TABLE 
 statements) from your actual data; modify the CREATE statements to 
 specify the InnoDB engine; lastly load all the data from the 
 text files 
 into MySQL, and bring this server back into the 'cluster'.
 
 If you don't have a _lot_ of data, then it may not be worth all that 
 work. Of course, a lot is subjective; I'd say, based purely 
 on my own 
 experiences with this, that if you are going to migrate 1G of 
 data, you 
 will probably be better off exporting / alter the text files / 
 importing. If you have 10's or 100's of G of data, I would strongly 
 recommend that you do it this way. And regardless of how much 
 data you 
 have, it is, IMHO, safer to export/import. 

Well, like you say, a lot of data is subjective.  Our situation is this:
we currently have a DB01 up and running and in production.  We're moving
to a more redundant data center and have purchased new hardware to
migrate the database to (the new server will be DB03).  The speed of the
database directly affects our profitability.  This being the case, I've
suggested to our developers that we take the opportunity to migrate the
tables involved in heavy write actions from MyISAM to InnoDB on the new
DB03 server before we put it into production.  Currently I have DB03 up
and running (all MyISAM) at the new datacenter and doing circular
replication with DB01 over VPN.  No apps or users are currently writing
or even reading

RE: MyISAM to InnoDB

2005-09-23 Thread Jeff
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 Sent: Friday, September 23, 2005 09:40
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: RE: MyISAM to InnoDB
 
 
 Sorry to butt in but I wanted to make sure you didn't do 
 actually do what 
 you proposed to do, yet. More responses interspersed...
 

Nope nothing yet, I don't rush things when I'm unsure... ;o)

 Jeff [EMAIL PROTECTED] wrote on 09/23/2005 08:32:57 AM:
 
   -Original Message-
   From: Devananda [mailto:[EMAIL PROTECTED]
   Sent: Thursday, September 22, 2005 19:03
   To: Jeff
   Cc: mysql@lists.mysql.com
   Subject: Re: MyISAM to InnoDB
   
   
   Jeff wrote:
   -Original Message-
   From: Devananda [mailto:[EMAIL PROTECTED]
   Sent: Thursday, September 22, 2005 16:14
   To: Jeff
   Cc: mysql@lists.mysql.com
   Subject: Re: MyISAM to InnoDB
   
   
   Jeff wrote:
   
   True, is there a way to tell a slave to not replicate
   
   certain queries
   
   like alter table or would I need to get creative and stop
   
   replication
   
   and all writes to the main database, then issue the 
 alter table 
   statement, then restart replication with a  set global 
   slave_sql_skip_counter=1 so that it skips the alter statemtent?
   
   There's a much easier way - issue the statement SET 
 SQL_LOG_BIN = 
   0; before issuing any ALTER TABLE statements. This 
 will cause all
   statements for the duration of that session to not be 
   written to the
   binlog. See http://dev.mysql.com/doc/mysql/en/set-option.html
   for more 
   information.
   


First off, thanks for the help to you and Bruce both!

   You're quite welcome, Jeff :)
   
When you say here, for the duration of that session does
   that mean
that only queries I issue with my connection skip the
   binlog?  Or do
all queries during that time skip the binlog.  In other
   words, when I
SET SQL_LOG_BIN = 0; should I first stop all applications
   writing to
the database to prevent missing data in the slaves?

   
   
   It only affects that connection. Bruce wrote a response 
 at about the
   same time I did; his covers this topic as well. SQL_LOG_BIN 
   is a session 
   variable, meaning that it only affects the current session 
   (connection). 
   So, any applications running at the same time will not be 
   affected by a 
   change to this variable, and if you close your client and 
   reconnect, you 
   will have to set the variable again. As Bruce suggested, 
 it's best to 
   set it only when you need it and unset it immediately 
   afterwards (as a 
   precaution against operator error, not because it affects 
 the server).
   
   I do want to point out that while the commands you issue
   (after setting 
   SQL_LOG_BIN to 0) will not be written to the binlog (thus 
   will not run 
   on any slave reading from this server), they may affect 
 other running 
   processes on the server. If, for example, you run an 
 ALTER TABLE on a 
   table currently in MyISAM format, the table will be 
 locked and all 
   processes running on that server that read from / write 
 to that table 
   will wait until that ALTER finishes. Setting SQL_LOG_BIN 
 to 0 doesn't 
   affect this in any way - it _only_ affects whether statements 
   from that 
   specific session are recorded in the binary log.
   
  
  Thanks, that answers my question regarding the SQL_LOG_BIN variable.
  
   Side question - you've stated that you are planning to migrate to
   InnoDB, but you haven't said anything to the list about 
 how much data 
   you have. Just be aware that it can take a lot of time 
 and disk space 
   for MySQL to transfer all your data from one format to 
 the other (of 
   course depending on how much data you have) and if anything 
   goes wrong 
   during that time, the results will probably not be what you 
   expect, or 
   want. I would advise you to at least investigate an alternate 
   approach 
   if you have a lot of data - take the server you are going to 
   migrate out 
   of the 'cluster' and make sure it is not processing any data / no 
   clients are connecting to it; dump all your data to text files, 
   preferably separating your data definition statements (ie 
   CREATE TABLE 
   statements) from your actual data; modify the CREATE 
 statements to 
   specify the InnoDB engine; lastly load all the data from the 
   text files 
   into MySQL, and bring this server back into the 'cluster'.
   
   If you don't have a _lot_ of data, then it may not be 
 worth all that
   work. Of course, a lot is subjective; I'd say, based purely 
   on my own 
   experiences with this, that if you are going to migrate 1G of 
   data, you 
   will probably be better off exporting / alter the text files / 
   importing. If you have 10's or 100's of G of data, I 
 would strongly 
   recommend that you do it this way. And regardless of how much 
   data you 
   have, it is, IMHO, safer to export/import. 
  
  Well like you

MyISAM to InnoDB

2005-09-22 Thread Jeff
Hey all,

I've got a production database that's made up of all MyISAM tables.  I'd
like to change some of the more heavily written-to tables to InnoDB to
take advantage of the record-level locking and thus improve the write
performance of our applications.  

I currently have a second db server that is replicating from the current
production system but is not in production yet.  I'd like to try to
convert it to InnoDB.  The MySQL version is 4.0.16.  Is it as simple as
just issuing the ALTER TABLE query, or are there problems I should be
aware of when doing this?

Also, are there known problems replicating from A - B - A (circular
replication) when A has Table1=InnoDB and B has Table1=MyISAM?

Thanks,

Jeff






RE: Web-based reporting tool?

2005-09-22 Thread Jeff
 -Original Message-
 From: Warrick Wilson [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, September 22, 2005 12:09
 To: mysql@lists.mysql.com
 Subject: Web-based reporting tool?
 
 
 This seems to be a common question, but answers aren't that common...
 
 What are people using as a web-based reporting tool? I'd like 
 to add a 
 user-facing interface to allow users to run pre-defined 
 reports (in which 
 they may need to enter data, like Start Date and End 
 Date) against their 
 data.
 
 I've done a bunch of Google searching, and there's always 
 Crystal Reports. 
 That is cost-prohibitive currently, though I was discussing 
 this with them. 
 However, there are issues where we may want to split our 
 current database 
 server into a larger number of servers, and then the license 
 issues crop up 
 again.
 
 What else is good and reliable? I'm looking at QLR Manager, 
 looked at Agata, 
 downloaded a number of other programs to find out they are 
 intended for 
 running on the desktop (as opposed to being a web-based app). 
 I'm not overly 
 concerned with language, either, although the database server 
 is currently a 
 Windows box. I'm using PHP for some stuff, but could run 
 Java, etc. if 
 needed for the right software.
 
 Thanks. 

If they're pre-defined reports that just require date ranges or simple
arguments, then why not simply build a PHP website that has these
reports on it?  You can use simple forms to collect the criteria from
the user and then generate the report.  I use this extensively in my
company for distributing information.  No licensing to worry about, and
exporting the reports to a spreadsheet with a download button is fairly
easy as well.






RE: MyISAM to InnoDB

2005-09-22 Thread Jeff
 -Original Message-
 From: Bruce Dembecki [mailto:[EMAIL PROTECTED]
 Sent: Thursday, September 22, 2005 11:41
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: Re: MyISAM to InnoDB
 
 
 You will need to make sure you have innodb configured in the my.cnf
 file and you have enough space built for it in the shared table  
 space. InnoDB also needs it's own memory pool, so make sure you give  
 it enough memory. For day to day issues there is no problem doing  
 innodb/myisam replication, with a couple of small caveats... an  
 ALTER TABLE would replicate and thus... may change the table type  
 from myisam to innodb or vice versa depending on which server the  
 ALTER TABLE came from. To go with that the original conversion from  
 myisam to InnoDB would also need to be done in such a way as to not  
 be replicated.
 

True.  Is there a way to tell a slave not to replicate certain queries,
like ALTER TABLE, or would I need to get creative and stop replication
and all writes to the main database, then issue the ALTER TABLE
statement, then restart replication with a SET GLOBAL
sql_slave_skip_counter=1 so that it skips the ALTER statement?

 
 Remember that an ALTER TABLE that could have an impact could be as
 simple as adding or dropping an index... although usually 
 very simple  
 alter table statements like that can be done without defining the  
 table engine, some GUIs may however insert that for you on even the  
 simplest ALTER TABLE commands.
 

If I understand what you're saying here, some MySQL front-end GUI
software will automatically append to any ALTER TABLE statement you
submit a clause specifying the table type, like MyISAM.  So if you used
such a GUI and tried to issue an ALTER statement to, say, add an index
to an InnoDB table, it would add on a TYPE = MyISAM clause and cause
havoc?

Normally I don't rely on GUI tools for serious queries like altering
tables or adding indexes; I do them by logging directly into the MySQL
server on the Linux box itself.  In that case there shouldn't be a
problem, correct?

 Best Regards, Bruce
 
 On Sep 22, 2005, at 7:59 AM, Jeff wrote:
 
  Hey all,
 
  I've got a production database that made up of all MyISAM
 tables.  I'd
  like to change some of the more heavily written to tables
 to InnoDB to
  take advantage of the record level locking and thus improve write
  performance of our applications.
 
  I currently have a second db server that is replicating from the 
  current production system but not in production yet. I'd like to try

  to
  convert
  it to InnoDB.  MySQL version is 4.0.16.  It it as symple as just  
  issuing
  the modify table query or are there problems I should be 
 aware of when
  doing this?
 
  Also are there known problems replicating from A - B - A (circular
  replication) when A had Table1= InnoDB and B has Table1=MyISAM?
 
  Thanks,
 
  Jeff
 
 
 
 
 
 
 






RE: Avg row length is varying a lot from oracle to MySQL

2005-09-22 Thread Jeff
 -Original Message-
 From: Sujay Koduri [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, September 22, 2005 15:23
 To: mysql@lists.mysql.com
 Subject: Avg row length is varying a lot from oracle to MySQL
 
 
 hi ,,
 
 we are converting our oracle DB to MySQL DB. One problem i 
 see is that the
 
 avg row length in MySQL is much higher compared to that of Oracle.
 
 In oracle it is around 180 bytes and in MySQL it is around 
 686 bytes. So as
 
 a result, MySQL is taking more space to store the same number 
 of records. 
 
 Can someone please explain me if this is the intended 
 behaviour or i am
 
 missing out something. I am also including the o/p of desc 
 table_name of the
 
 same table on both the databases. 

Probably due to the way the two databases store data and how much space
they reserve for specific column types.

For a way to calculate row size, see this link.  Realize you must also
calculate the size of all indexes.

http://dev.mysql.com/doc/mysql/en/storage-requirements.html






RE: Avg row length is varying a lot from oracle to MySQL

2005-09-22 Thread Jeff

 Each row in the table takes around 600 bytes, taking every 
 thing into consideration and assuming every field is used to 
 its maximum bytes. But the major portion of this 600 bytes 
 are composed of varchar's (100 + 150 + 50 + 16 + 50 + 20 + 
 9..) Out of these 400 bytes we generally use only 40 to 50 
 bytes. Most of them are reserved for future uses. So strictly 
 speaking even including the space taken by the indexes, the 
 avg length should not come more than 250 bytes.
 

If you have a varchar(50) but usually only use 10 bytes in those fields,
MySQL still counts the unused 40 toward the total byte count of the row,
so you must count them.

An empty varchar(50) field still uses 50 bytes.

Also, I believe text and blob fields are always counted as 255 bytes
regardless of your settings.

Ordered indexes are 10 bytes per column (in the index) per row.  I
believe there is also some paging overhead, so generally take your
calculated row size and multiply by 1.1.
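As a back-of-the-envelope check, the thread's own numbers can be combined (purely illustrative: the 600-byte worst-case data figure from the post, an assumed two indexed columns, and the ~10% paging overhead mentioned above):

```shell
# rough per-row size estimate in bytes, integer arithmetic
data_bytes=600            # worst-case data portion from the post
index_bytes=$((2 * 10))   # assume two indexed columns at ~10 bytes per row each
total=$(( (data_bytes + index_bytes) * 11 / 10 ))   # ~10% paging overhead
echo "${total} bytes per row"
```

This lands at 682 bytes, in the same ballpark as the 686-byte average the original poster observed, which suggests the worst-case varchar accounting is what inflates the MySQL figure relative to Oracle.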

 sujay
 
 -Original Message-
 From: Jeff [mailto:[EMAIL PROTECTED] 
 Sent: Friday, September 23, 2005 1:12 AM
 To: mysql@lists.mysql.com
 Subject: RE: Avg row length is varying a lot from oracle to MySQL
 
  -Original Message-
  From: Sujay Koduri [mailto:[EMAIL PROTECTED]
  Sent: Thursday, September 22, 2005 15:23
  To: mysql@lists.mysql.com
  Subject: Avg row length is varying a lot from oracle to MySQL
  
  
  hi ,,
  
  we are converting our oracle DB to MySQL DB. One problem i 
 see is that
  the
  
  avg row length in MySQL is much higher compared to that of Oracle.
  
  In oracle it is around 180 bytes and in MySQL it is around 
 686 bytes. 
  So as
  
  a result, MySQL is taking more space to store the same number of
  records.
  
  Can someone please explain me if this is the intended behaviour or i
  am
  
  missing out something. I am also including the o/p of desc 
 table_name
  of the
  
  same table on both the databases.
 
  Probably due to the way the two databases store data and how 
  much space they reserve for specific column types.
  
  For a way to calculate row size, see this link.  Realize you 
  must also calculate the size of all indexes.
 
http://dev.mysql.com/doc/mysql/en/storage-requirements.html









RE: MyISAM to InnoDB

2005-09-22 Thread Jeff
 -Original Message-
 From: Devananda [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, September 22, 2005 16:14
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: Re: MyISAM to InnoDB
 
 
 Jeff wrote:
  True, is there a way to tell a slave to not replicate 
 certain queries 
  like alter table or would I need to get creative and stop 
 replication 
  and all writes to the main database, then issue the alter table 
  statement, then restart replication with a  set global 
  slave_sql_skip_counter=1 so that it skips the alter statemtent?
 
 There's a much easier way - issue the statement SET 
 SQL_LOG_BIN = 0; 
 before issuing any ALTER TABLE statements. This will cause all 
 statements for the duration of that session to not be written to the 
 binlog. See http://dev.mysql.com/doc/mysql/en/set-option.html 
 for more 
 information.
 

First off, thanks for the help to you and Bruce both!

When you say here, "for the duration of that session", does that mean
that only queries I issue with my connection skip the binlog?  Or do all
queries during that time skip the binlog?  In other words, when I SET
SQL_LOG_BIN = 0; should I first stop all applications writing to the
database to prevent missing data in the slaves?

 
 You may want to look at a few pages in the docs, for 
 information about 
 InnoDB / MyISAM differences. If your code relies on one table 
 type (or 
 features only available with that table type, like transactions for 
 InnoDB or SELECT COUNT(*) for MyISAM), you may run into 
 some problems. 
 Here are a couple links to try to help. 
 http://dev.mysql.com/doc/mysql/en/innodb-and-mysql-replication
.html
http://dev.mysql.com/doc/mysql/en/converting-tables-to-innodb.html
http://dev.mysql.com/doc/mysql/en/innodb-auto-increment-column.html
http://dev.mysql.com/doc/mysql/en/replication-features.html (towards the

bottom it talks about replication of transactions and MyISAM engine)


Best Regards,
Devananda vdv






RE: Circular Replication

2005-09-21 Thread Jeff
 -Original Message-
 From: Bruce Dembecki [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, September 20, 2005 23:05
 To: Jeff
 Cc: mysql@lists.mysql.com
 Subject: Re: Circular Replication
 
 
 On Sep 16, 2005, at 11:07 AM, Jeff wrote:
 
  There shouldn't be a problem if:
 
  server A is ver 4.0.x
  server B is ver 4.1.x
 
  should there?
 
 
 There will definitely be a problem here... The 4.1 server will take the  
 4.0 feed without issue. The 4.1 server however puts all sorts of  
 information into the binary log which isn't in the original query,  
 like what sort of collation to use, and which character set it uses  
 and so on... 4.0 doesn't understand such commands and lots of things  
 break in this situation.
 
 As a side note we deploy servers in pairs, with circular 
 replication.  
 We did three and four server circles, but it gets messy if  
 replication stops somewhere, the data becomes unpredictably  
 inconsistent (assuming all the servers in the circle are getting  
 production updates). Now we do simple two way replication between a  
 pair, and we hang a third server off the pair somewhere just  
 reading... the third server we use for backups, data dumps, reports  
 and other non production issues. Essentially it is something like 
 A-B-C, where A and B have two way replication and C is used for 
 backups/reports etc... anything that changes the data happens 
 on A or B.
 

I assume you then need to start server B with --log-slave-updates?
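For B's replicated changes to propagate to C, B does indeed need
log-slave-updates (plus its own binlog); otherwise C would only see writes
made directly on B, not the statements B applied as a slave of A. A my.cnf
sketch for server B (the server-id value is illustrative):

```ini
[mysqld]
server-id         = 2          # must be unique across the topology
log-bin           = mysql-bin  # B must write its own binary log
log-slave-updates              # also log statements replicated from A
```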

 We do some other black magic to manage the replication on C so it's  
 perpetually behind the master servers by between 15 minutes and 2  
 hours... that way if we have a stupid operator error or some other  
 data corrupting event we can stop replication on the backup server  
 before it executes and start from there rather than having to 
 go back  
 to yesterdays backup or something.
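Bruce doesn't describe the exact mechanism, and MySQL of that era had no
built-in delayed replication, so one common approximation is a scheduled job
that pauses only the SQL thread; the I/O thread keeps copying events from
the master, so nothing is lost while the slave sits behind. A sketch of the
idea, not Bruce's actual setup:

```sql
-- Run periodically (e.g. from cron) to hold the backup server back:
STOP SLAVE SQL_THREAD;   -- stop applying relay-log events; I/O thread keeps fetching

-- ...then, after the desired delay has elapsed:
START SLAVE SQL_THREAD;  -- apply everything buffered in the relay log
```

(Modern MySQL can do this natively with CHANGE MASTER TO MASTER_DELAY = N,
but that option arrived long after the 4.x servers discussed here.)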
 
 Best Regards, Bruce
 

Bruce,

Thanks for the info.  All my downstream servers are 4.1 and my masters
are all 4.0.  They will all be brought up to the latest when I can
figure out a production schedule that will allow it.

So for now what I'll have is:

A - B - C
|\ 
D E

Where A and B (4.0.16) are masters in circular replication, and C, D, E
(4.1.13) are backups and other read-only slave servers.

The reason for needing the circular replication is we are moving our
production systems to another data center and I need to be able to
quickly switch the applications over to the new data center and then
back again if something goes south.

After that, however, I'll probably keep the two db servers at the new
data center in circular replication for redundancy.
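One classic hazard with two-way replication is colliding AUTO_INCREMENT
values when both masters accept writes. MySQL 5.0 and later (so not the
4.0.16 servers discussed here) added two variables to interleave the
sequences; a sketch for a two-master pair:

```ini
# my.cnf on server A (MySQL 5.0+)
[mysqld]
auto_increment_increment = 2   # both masters step by 2
auto_increment_offset    = 1   # A generates 1, 3, 5, ...

# my.cnf on server B (MySQL 5.0+)
# [mysqld]
# auto_increment_increment = 2
# auto_increment_offset    = 2 # B generates 2, 4, 6, ...
```

On 4.0.x the usual workaround is to make sure any given table only ever
takes inserts on one of the two masters.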

I am interested in how you go about doing a delayed replication to
protect against operator error.  We've already fallen victim to that
situation here.

Thanks,

Jeff





