RE: www.innodb.com

2006-11-10 Thread George Law



I'm seeing a parked page:

    innodb.com
    Here are some related websites for: innodb.com

    Sponsored Links

    MySQL database design - Data modelling, architecture. Terabyte
    database designs. www.shoreview.com

    Easy Event Log Compliance - Consolidate event logs to a DB. MsSQL,
    MySQL & Oracle are supported. www.eventsentry.com

    SQL Server Monitoring - Free White Paper: Discover what drives SQL
    server performance. www.heroix.com/SQL_monitoring

    MySQL Manager - Download - Admin, Import/Export, Data Sync, Query
    Builder, Backup, Report! www.mysqlfrontend.com

    100% Linux Native Backup - Postgres, MySQL, AS400, DB2. Send your
    data securely offsite. www.vaultlogix.com

    New Database Query Tool - Easy-to-Use DB Data Management, Query &
    Design Tool. Free Trial! www.Altova.com/DatabaseSpy

    Back up with spBackup 1.2 - PHP script for backing up web site files
    and MySQL databases. skypanther.com/spbackup.php

    MySQL Database Design - Free Database Development Quotes from
    Multiple Companies. Start Now! www.BuyerZone.com/Database_Design

    EntitySpaces Architecture - Hierarchical Object Models, LINQ
    Transactions, Dynamic Queries, more. www.entityspaces.net

    Aqua Data Studio - OSX - Download Now! Mac Query Tool. Oracle, DB2,
    SQL Server, Sybase.

-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 10, 2006 7:43 AM
To: mysql@lists.mysql.com
Subject: Re: www.innodb.com

Bill,

we are moving the DNS of innodb.com from Capnova to Oracle Corp. I can
now view http://www.innodb.com through my ISP, Elisa. Does anyone still
have problems accessing http://www.innodb.com?

If you cannot see some web page, you can resort to Google's cache to
view it. I hope that we will not get more disruption of service this
weekend.

Best regards,

Heikki
Oracle Corp./Innobase Oy

InnoDB - transactions, row level locking, and foreign keys for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up
MyISAM tables
http://www.innodb.com/order.php

 What happened to the Innodb web pages? What comes up for me is a
 search page with a bunch of related links on it. I wanted to pull down
 a copy of the ibbackup documentation and it isn't there anymore.

 Bill
 +---
 | Bill MacAllister, Senior Programmer
 | 10030 Foothills Blvd
 | Roseville, CA 95747

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



RE: bin-log with expire_logs_days

2006-10-19 Thread George Law
Thanks Dan.

According to the docs, the BEFORE option was introduced in 4.1.   

I just tried the purge with the TO option:

 PURGE MASTER LOGS TO 'db1-bin.002';
Query OK, 0 rows affected (0.01 sec)

so I think I will just purge a couple of log files at a time until I can
get the disk usage down to a more manageable level.
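The step-at-a-time purge can be sketched in shell. This is only a
sketch: the "db1-bin.NNN" file naming comes from the PURGE statement
above, but the starting number and step size here are made-up
placeholders you would match against SHOW MASTER LOGS output.

```shell
#!/bin/sh
# Walk the binlogs forward a few files at a time, so each pass frees a
# bounded amount of disk space and the PURGE stays short.
oldest=2   # suffix of the oldest log currently on disk (placeholder)
step=5     # how many logs to purge per pass (placeholder)
target=$((oldest + step))
stmt=$(printf "PURGE MASTER LOGS TO 'db1-bin.%03d';" "$target")
echo "$stmt"
# When ready, feed it to the server:  echo "$stmt" | mysql
```

Re-running the script with an updated `oldest` repeats the cycle until
the backlog is gone.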

The previous DBA had told me that the last time he purged the logs, it
took several minutes - but I can only assume he tried to purge too much
at once.

Thanks again!

--
George


-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 18, 2006 3:28 PM
To: George Law
Cc: mysql@lists.mysql.com
Subject: Re: bin-log with expire_logs_days

I haven't used the server variable you refer to, but instead have always
used an external command piped in via cron:

PURGE BINARY LOGS BEFORE date

and I just use a DATE_SUB function to subtract X days from today's date.
http://dev.mysql.com/doc/refman/5.0/en/purge-master-logs.html
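A minimal sketch of that cron-driven purge, assuming a 45-day retention
window (the window and the cron schedule are assumptions, not from the
thread; DATE_SUB is evaluated by the server, not the shell):

```shell
#!/bin/sh
# Build the statement Dan describes: purge binlogs older than N days.
days=45
sql="PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL ${days} DAY);"
echo "$sql"
# A cron entry might look like (credentials via ~/.my.cnf):
# 0 3 * * * mysql -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 45 DAY)"
```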

It's a pretty quick command to run, generally a fraction of a second.
Since you have 132 files it might be a few seconds but I would not
expect longer than that.

I don't know whether MySQL will go back and delete the old logs if you
set that variable and restart - presumably it would, but I'm not
certain.

Dan



On 10/18/06, George Law [EMAIL PROTECTED] wrote:
 Hi All,

 I have a **high traffic** mysql 4.0.18-standard-log server 
running with
 bin-logging enabled.

 Right now, this must be using a default setting for expire_logs_days.
 I do not see it anywhere in
 show variables or show status


 $ echo show variables | sql |grep bin
 binlog_cache_size   32768
 log_bin ON
 max_binlog_cache_size   4294967295
 max_binlog_size 1073741824


 # echo show status | sql |grep bin
 Com_show_binlog_events  0
 Com_show_binlogs9

 Right now, I have 132 bin-logs, each at 1 GB. the logs go back to
 2/11/2006

 If I were to add 'expire_logs_days = 45' to my.cnf and restart mysql,
 is mysql going to attempt to purge the logs > 45 days old, and if so,
 how long does it typically take? We cannot afford to restart if it's
 going to take any significant amount of time for it to purge the logs
 and restart.

 thanks!


 George Law
 [EMAIL PROTECTED]
 MSN: [EMAIL PROTECTED]
 Phone: 864-678-3161


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



bin-log with expire_logs_days

2006-10-18 Thread George Law
Hi All,

I have a **high traffic** mysql 4.0.18-standard-log server running with
bin-logging enabled.  

Right now, this must be using a default setting for expire_logs_days. I
do not see it anywhere in
show variables or show status


$ echo show variables | sql |grep bin
binlog_cache_size   32768
log_bin ON
max_binlog_cache_size   4294967295
max_binlog_size 1073741824


# echo show status | sql |grep bin
Com_show_binlog_events  0
Com_show_binlogs9

Right now, I have 132 bin-logs, each at 1 GB. the logs go back to
2/11/2006

If I were to add 'expire_logs_days = 45' to my.cnf and restart mysql, is
mysql going to attempt to purge the logs > 45 days old, and if so, how
long does it typically take? We cannot afford to restart if it's going
to take any significant amount of time for it to purge the logs and
restart.

thanks!


George Law
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]
Phone: 864-678-3161
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



purging bin logs

2006-10-10 Thread George Law
Hi All,

I have a question on purging some old bin-logs.

What's the best way to do it?

This is a fairly old version - 4.0.18-standard-log.  I have 128 1 GB
files out there, going back 8 months.  

I think the correct syntax is : 
 PURGE BINARY LOGS TO 'mysql-bin.010'; 

but according to the previous admin, from whom I inherited this setup,
it locks up the whole database while it's purging the logs.

Are there any low-impact solutions?

This is a fairly high traffic DB, so locking up the database really is
not an option.

Thanks!


George Law
glaw at ionosphere.net

 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Question

2006-10-03 Thread George Law
show processlist gives you an abbreviated list of queries.

show full processlist gives you the full queries.
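For scripting, the same commands work non-interactively through the
mysql client (a sketch; it assumes the client is on PATH and credentials
come from an option file):

```shell
#!/bin/sh
# Non-interactive variants of the two SHOW commands above.
# \G prints each row vertically, one column per line - handy for
# long query text.
cmd_short='mysql -e "SHOW PROCESSLIST"'
cmd_full='mysql -e "SHOW FULL PROCESSLIST\G"'
echo "$cmd_short"
echo "$cmd_full"
```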

 

-Original Message-
From: Feliks Shvartsburd [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 03, 2006 1:57 PM
To: mysql@lists.mysql.com; [EMAIL PROTECTED]
Subject: Question

Does anybody know how can I see what queries are currently being
executed?


Thanks



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



perl/dbi - insert into server2.db2.table2 select * from server1.db1.table1

2006-09-29 Thread George Law
Hi All,

I have 2 separate mysql servers and need to import data from a table on
server1 to a table on server2. This would need to be done in Perl.

The script in question already has open handles to both servers, so I
know I can select all the rows from server1, process them one by one,
and insert into the table on server2. But I was wondering if there was a
simpler way to do this that would let me transfer the data from one
database handle directly to another.

Looking on google, the closest example I can find is something like :

#!/usr/bin/perl
use DBI;

my $dbh1 = DBI->connect(...);   # handle to server1
my $dbh2 = DBI->connect(...);   # handle to server2

my $sql = "select a,b,c,d,... from table1 where condition='$value'";
my $sth1 = $dbh1->prepare($sql);
$sth1->execute;

while (my @results = $sth1->fetchrow_array) {

    # build placeholders based on the number of fields
    my $placeholders = join(',', ('?') x @results);

    my $sth2 = $dbh2->prepare("INSERT INTO table2 VALUES ($placeholders)");
    $sth2->execute(@results);
    $sth2->finish;
}

$sth1->finish;
$dbh1->disconnect();
$dbh2->disconnect();
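If the tables can have the same definition on both servers, a
shell-level pipe avoids the row-by-row loop entirely. A hedged sketch:
the host and database names are placeholders from the post, credentials
would come from option files, and note that the dump recreates table1 on
the destination rather than inserting into a differently named table.

```shell
#!/bin/sh
# Stream a dump of table1 on server1 straight into server2's db2.
src="mysqldump --host=server1 db1 table1"
dst="mysql --host=server2 db2"
cmd="$src | $dst"
echo "$cmd"
# Run with: eval "$cmd"
```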




George Law
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]
Phone: 864-678-3161
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Mysql and Perl

2006-09-25 Thread George Law
Someone correct me if I am wrong.

DBI is the generic database interface module for Perl.

DBD::mysql is the MySQL-specific driver module that DBI relies upon to
talk to MySQL.

cpan> i /mysql/

 

-Original Message-
From: Jørn Dahl-Stamnes [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 25, 2006 1:59 PM
To: MySQL List
Subject: Mysql and Perl

I try to write a perl script that will update a mysql database, but when
running the script I get:

install_driver(mysql) failed: Can't locate DBD/mysql.pm

I have installed DBI 1.52, but that did not help. Now I have spent an
hour searching the internet for this well-hidden file mysql.pm, but
without luck. I have found a lot of references etc., but not a single
link to the file...

Can some kind soul please guide me to this file? Or has it been lost for
mankind? ;-)

-- 
Jørn Dahl-Stamnes
homepage: http://www.dahl-stamnes.net/dahls/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Mysql and Perl

2006-09-25 Thread George Law
damn outlook!

fire up perl's CPAN interface:

perl -MCPAN -e shell

then, inside the cpan shell:

cpan> i /mysql/
...
Bundle  Bundle::DBD::mysql (C/CA/CAPTTOFU/DBD-mysql-3.0007.tar.gz)
...
cpan> install Bundle::DBD::mysql



-Original Message-
From: Jørn Dahl-Stamnes [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 25, 2006 1:59 PM
To: MySQL List
Subject: Mysql and Perl

I try to write a perl script that will update a mysql database, but when
running the script I get:

install_driver(mysql) failed: Can't locate DBD/mysql.pm

I have installed DBI 1.52, but that did not help. Now I have spent an
hour searching the internet for this well-hidden file mysql.pm, but
without luck. I have found a lot of references etc., but not a single
link to the file...

Can some kind soul please guide me to this file? Or has it been lost for
mankind? ;-)

-- 
Jørn Dahl-Stamnes
homepage: http://www.dahl-stamnes.net/dahls/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Degrading write performance using MySQL 5.0.24

2006-08-30 Thread George Law
I see the same type of slowdowns using 5.0.18.

I am using LOAD DATA INFILE to load CSV files.

With clean tables, I see fairly quick inserts (i.e. instant):

2006-08-30 12:07:15 : begin import into table1
2006-08-30 12:07:15 : end import into table1 records (10962)

From earlier this morning, before I rotated my tables:

2006-08-30 09:02:01 : begin import into table1
2006-08-30 09:05:07 : end import into table1 records (10082)

I've posted about this before - one person will say that it's my indexes
getting rebuilt, others have said it's disk I/O. I can never get a solid
answer.

If I disable the keys, do the import, then re-enable the keys, it takes
just as long, if not longer.

I have just about given up on finding a solution for this and just
rotate my tables out regularly once the imports take over 5 minutes to
process roughly 10,000 records.

--
George





-Original Message-
From: Jay Pipes [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 30, 2006 12:06 PM
To: Phantom
Cc: mysql@lists.mysql.com
Subject: Re: Degrading write performance using MySQL 5.0.24

On Wed, 2006-08-30 at 08:31 -0700, Phantom wrote:
 We have an application that stores versioned data in MySQL. Every time
 a piece of data is retrieved and written to, it is stored in the
 database with a new version and all old versions are subsequently
 deleted. We have a request rate of 2 million reads per hour and 1.25
 million writes per hour. What I am seeing is that as the DB grows, the
 performance on the writes degrades substantially. When I start with a
 fresh database, writes are at 70ms. But once the database reaches
 around 10GB, the writes are at 200ms. The DB can grow up to 35GB. I
 have tried almost all performance-related tuning described in the
 MySQL documentation page.

 What do I need to look at to start addressing this problem, or is this
 how the performance is going to be?

Before getting into server parameters, is it possible to take a look at
your schema and a sample of your SQL queries from the application? That
would help immensely. 70ms for an UPDATE seems very slow... and 200ms
is very slow.

Cheers,
-- 
Jay Pipes
Community Relations Manager, North America, MySQL, Inc.
[EMAIL PROTECTED] :: +1 614 406 1267


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Degrading write performance using MySQL 5.0.24

2006-08-30 Thread George Law
| called_party_on_dest_num_type   | int(1)   | YES  |     | 0       |       |
| called_party_from_src_num_type  | int(1)   | YES  |     | 0       |       |
| call_source_realm_name          | char(3)  | YES  |     | NULL    |       |
| call_dest_realm_name            | char(3)  | YES  |     | NULL    |       |
| call_dest_crname                | char(50) | YES  |     | NULL    |       |
| call_dest_custid                | char(20) | YES  |     | NULL    |       |
| call_zone_data                  | char(20) | YES  |     | NULL    |       |
| calling_party_on_dest_num_type  | int(1)   | YES  |     | 0       |       |
| calling_party_from_src_num_type | int(1)   | YES  |     | 0       |       |
| original_isdn_cause_code        | int(1)   | YES  |     | 0       |       |
+---------------------------------+----------+------+-----+---------+-------+


-Original Message-
From: Jay Pipes [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 30, 2006 1:44 PM
To: George Law
Cc: mysql@lists.mysql.com
Subject: RE: Degrading write performance using MySQL 5.0.24

What type of data are you inserting?  What storage engine are you
inserting into?  What is the average row size?

On Wed, 2006-08-30 at 12:32 -0400, George Law wrote:
 I see the same type of slowdowns using 5.0.18.

 I am using LOAD DATA INFILE to load CSV files.

 With clean tables, I see fairly quick inserts (i.e. instant):

 2006-08-30 12:07:15 : begin import into table1
 2006-08-30 12:07:15 : end import into table1 records (10962)

 From earlier this morning, before I rotated my tables:

 2006-08-30 09:02:01 : begin import into table1
 2006-08-30 09:05:07 : end import into table1 records (10082)

 I've posted about this before - one person will say that it's my
 indexes getting rebuilt, others have said it's disk I/O. I can never
 get a solid answer.

 If I disable the keys, do the import, then re-enable the keys, it
 takes just as long, if not longer.

 I have just about given up on finding a solution for this and just
 rotate my tables out regularly once the imports take over 5 minutes to
 process roughly 10,000 records.
 
 --
 George
 
 
 
 
 
 -Original Message-
 From: Jay Pipes [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, August 30, 2006 12:06 PM
 To: Phantom
 Cc: mysql@lists.mysql.com
 Subject: Re: Degrading write performance using MySQL 5.0.24
 
 On Wed, 2006-08-30 at 08:31 -0700, Phantom wrote:
  We have an application that stores versioned data in MySQL. Every
  time a piece of data is retrieved and written to, it is stored in
  the database with a new version and all old versions are
  subsequently deleted. We have a request rate of 2 million reads per
  hour and 1.25 million writes per hour. What I am seeing is that as
  the DB grows, the performance on the writes degrades substantially.
  When I start with a fresh database, writes are at 70ms. But once
  the database reaches around 10GB, the writes are at 200ms. The DB
  can grow up to 35GB. I have tried almost all performance-related
  tuning described in the MySQL documentation page.

  What do I need to look at to start addressing this problem, or is
  this how the performance is going to be?

 Before getting into server parameters, is it possible to take a look
 at your schema and a sample of your SQL queries from the application?
 That would help immensely. 70ms for an UPDATE seems very slow... and
 200ms is very slow.
 
 Cheers,
 -- 
 Jay Pipes
 Community Relations Manager, North America, MySQL, Inc.
 [EMAIL PROTECTED] :: +1 614 406 1267
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Degrading write performance using MySQL 5.0.24

2006-08-30 Thread George Law
Jay,

Here you go - at least according to mysqldump 


CREATE TABLE `table1` (
  `start_time` char(19) default NULL,
  `start_time_epoch` int(10) default '0',
  `call_duration` char(9) default NULL,
  `call_source` char(15) default NULL,
  `call_source_q931sig_port` int(5) default '0',
  `call_dest` char(15) default NULL,
  `undef1` char(1) default NULL,
  `call_source_custid` char(20) default NULL,
  `called_party_on_dest` char(32) default NULL,
  `called_party_from_src` char(32) default NULL,
  `call_type` char(2) default NULL,
  `undef2` tinyint(1) default NULL,
  `disconnect_error_type` char(1) default '',
  `call_error_num` int(4) default '0',
  `call_error` char(24) default NULL,
  `undef3` char(1) default NULL,
  `undef4` char(1) default NULL,
  `ani` char(32) default NULL,
  `undef5` char(1) default NULL,
  `undef6` char(1) default NULL,
  `undef7` char(1) default NULL,
  `cdr_seq_no` int(9) NOT NULL default '0',
  `undef8` char(1) default NULL,
  `callid` char(50) NOT NULL default '',
  `call_hold_time` char(9) default NULL,
  `call_source_regid` char(20) default '',
  `call_source_uport` int(1) default '0',
  `call_dest_regid` char(20) default '',
  `call_dest_uport` int(1) default '0',
  `isdn_cause_code` int(3) default '0',
  `called_party_after_src_calling_plan` char(32) default NULL,
  `call_error_dest_num` int(4) default '0',
  `call_error_dest` char(25) default NULL,
  `call_error_event_str` char(20) default '',
  `new_ani` char(32) default NULL,
  `call_duration_seconds` int(5) default '0',
  `incoming_leg_callid` char(1) default NULL,
  `protocol` enum('sip','h323') default NULL,
  `cdr_type` enum('start1','start2','end1','end2','hunt') default NULL,
  `hunting_attempts` int(1) default '0',
  `caller_trunk_group` int(3) default NULL,
  `call_pdd` int(5) default '0',
  `h323_dest_ras_error` int(2) default '0',
  `h323_dest_h225_error` int(2) default '0',
  `sip_dest_respcode` int(3) default '0',
  `dest_trunk_group` char(1) default NULL,
  `call_duration_fractional` decimal(8,3) default '0.000',
  `timezone` char(3) default '',
  `msw_name` char(10) default NULL,
  `called_party_after_transit_route` char(1) default NULL,
  `called_party_on_dest_num_type` int(1) default '0',
  `called_party_from_src_num_type` int(1) default '0',
  `call_source_realm_name` char(3) default NULL,
  `call_dest_realm_name` char(3) default NULL,
  `call_dest_crname` char(50) default NULL,
  `call_dest_custid` char(20) default NULL,
  `call_zone_data` char(20) default NULL,
  `calling_party_on_dest_num_type` int(1) default '0',
  `calling_party_from_src_num_type` int(1) default '0',
  `original_isdn_cause_code` int(1) default '0',
  PRIMARY KEY  (`callid`,`cdr_seq_no`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=1;

 

-Original Message-
From: Jay Pipes [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 30, 2006 3:06 PM
To: George Law
Cc: mysql@lists.mysql.com
Subject: RE: Degrading write performance using MySQL 5.0.24

Hi!

Could you please post a SHOW CREATE TABLE table1 \G

thanks!

On Wed, 2006-08-30 at 14:32 -0400, George Law wrote:
 data is all alphanumeric - any char fields are all fixed lengths, no
 varchars
 
 
Name: table1
  Engine: MyISAM
 Version: 10
  Row_format: Fixed
Rows: 330344
  Avg_row_length: 624
 Data_length: 206134656
 Max_data_length: 2680059592703
Index_length: 18638848
   Data_free: 0
  Auto_increment: NULL
 Create_time: 2006-08-30 09:50:23
 Update_time: 2006-08-30 14:17:17
  Check_time: NULL
   Collation: latin1_swedish_ci
Checksum: NULL
  Create_options: max_rows=1
 Comment: 
 
 
 
 +---------------------------+------------+------+-----+---------+-------+
 | Field                     | Type       | Null | Key | Default | Extra |
 +---------------------------+------------+------+-----+---------+-------+
 | start_time                | char(19)   | YES  |     | NULL    |       |
 | start_time_epoch          | int(10)    | YES  |     | 0       |       |
 | call_duration             | char(9)    | YES  |     | NULL    |       |
 | call_source               | char(15)   | YES  |     | NULL    |       |
 | call_source_q931sig_port  | int(5)     | YES  |     | 0       |       |
 | call_dest                 | char(15)   | YES  |     | NULL    |       |
 | undef1                    | char(1)    | YES  |     | NULL    |       |
 | call_source_custid        | char(20)   | YES  |     | NULL    |       |
 | called_party_on_dest      | char(32)   | YES  |     | NULL    |       |
 | called_party_from_src     | char(32)   | YES  |     | NULL    |       |
 | call_type                 | char(2)    | YES  |     | NULL    |       |
 | undef2                    | tinyint(1) | YES  |     | NULL

RE: Finding field in all tables

2006-08-22 Thread George Law
Love it when that happens :)

The fastest way I can think of is dumping out the structure of the
database with:

mysqldump -d database > database.sql

and then searching the output to see where those columns appear.


-Original Message-
From: Jerry Schwartz [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 22, 2006 2:22 PM
To: mysql@lists.mysql.com
Subject: Finding field in all tables

Is there any way to find a column name (or, better yet, a partial column
name) in all tables within a database?

I inherited a complex and totally undocumented database, and need to
find out (for example) which tables have a column name like
xxx_exported.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



sorting datafile for load data infile

2006-08-18 Thread George Law
Hello All,

An ex-coworker of mine recently sent over a link to a mysql article
about speeding up mass imports. Unfortunately, I deleted the email
before I had time to actually digest it, and now I am wondering if it
could help with my imports.

The article suggested sorting the data ahead of time to prevent mysql
from having to jump around as much. What I have is a raw data file that
I pre-process to create 3 separate tab-delimited files, which are then
loaded into mysql using LOAD DATA INFILE...


I am working with a table with a primary key consisting of 2 fields:

| cdr_seq_no | int(9) unsigned | NO | PRI | 0 |   |
| callid     | char(33)        | NO | PRI |   |   |


show index from comp_cdr;

+----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table    | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| comp_cdr |          0 | PRIMARY  |            1 | callid      | A         |     3454606 |     NULL | NULL   |      | BTREE      |         |
| comp_cdr |          0 | PRIMARY  |            2 | cdr_seq_no  | A         |     3454606 |     NULL | NULL   |      | BTREE      |         |
...

some sample data is :
| callid   | cdr_seq_no |
+--++
| 002644e5f21f4abaa204120fad41 |   89630624 |
| 004644e5f3db5af88248020fad41 |   89627398 |
| 008344e5ef975e6eafe0020fad41 |   89630801 |
| 009a44e5f2694aabb6cc12115a45 |   89614941 |
| 00e044e5f2e94dd45074020fad41 |   89619029 |
| 0007bb54c8691110800100201c0060ff |   89616484 |
| 002a401ae58711d382f98183346766e7 |   89621314 |
| 007f2ad6df2cdb118dd2c879d6db9673 |   89631061 |
| 008724bcc7691110800100201c0144ff |   89628873 |
| 008bbf9dc9691110800100201c00edff |   89627287 |

Any suggestions on exactly how to sort those fields before loading?
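For what it's worth, sort(1) can order the file to match the primary key
(callid first, then cdr_seq_no). This is a sketch under assumptions: the
field positions (callid in column 1, cdr_seq_no in column 2) are made up
here, and the three sample rows are taken from the data above.

```shell
#!/bin/sh
# Build a tiny tab-delimited sample, then sort it lexically on callid
# and numerically on cdr_seq_no - the same order as the primary key.
printf '%s\t%s\n' \
  009a44e5f2694aabb6cc12115a45 89614941 \
  002644e5f21f4abaa204120fad41 89630624 \
  004644e5f3db5af88248020fad41 89627398 > /tmp/cdr.tsv
tab=$(printf '\t')
sort -t "$tab" -k1,1 -k2,2n /tmp/cdr.tsv > /tmp/cdr.sorted.tsv
first=$(head -n 1 /tmp/cdr.sorted.tsv | cut -f1)
echo "$first"
```

The sorted file can then be fed to LOAD DATA INFILE as usual.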


Thanks!!!

--
George Law
[EMAIL PROTECTED]
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: sorting datafile for load data infile

2006-08-18 Thread George Law
his exact email:

As you know, when you are doing mass inserts of millions of
rows of data it can take hours.  Well, these guys said that if you sort
the rows of data 1st by the information that will be inserted in to the
database primary key before you do the inserts then the total insert
time will take a fraction of the time.  The reason being that then the
database doesn't have to jump back and forth in the TREE structure to
insert each row of data.  One row will be inserted immediately after the
previous row that was inserted and so it takes a lot less database
processing time/overhead.



Makes sense!  I thought you might be interested in this theory.
They claim it makes a world of difference!


Now I know he references doing straight inserts, not using load data,
so I am working on rewriting my code to do inserts.


I am working on loading everything into hashes in perl, keyed off the
callid field. Then, realistically, I should be able to sort the hash on
the index and process the inserts with the sorted data.



-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Friday, August 18, 2006 1:40 PM
To: George Law
Cc: MYSQL General List
Subject: Re: sorting datafile for load data infile

George, I've not heard of that technique - sorting so that mysql
doesn't have to jump around as much.

What I am aware of, and what can make a difference, is disabling
indexes prior to starting the load.  Your load goes quite a bit
faster, in general, but then there is some time at the spent updating
indexes.   Overall it can be faster.  From
http://dev.mysql.com/doc/refman/5.0/en/load-data.html :


If you use LOAD DATA INFILE on an empty MyISAM table, all non-unique
indexes are created in a separate batch (as for REPAIR TABLE).
Normally, this makes LOAD DATA INFILE much faster when you have many
indexes. In some extreme cases, you can create the indexes
even faster
by turning them off with ALTER TABLE ... DISABLE KEYS before loading
the file into the table and using ALTER TABLE ... ENABLE KEYS to
re-create the indexes after loading the file. See Section 7.2.16,
Speed of INSERT Statements.
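The manual passage above, as a statement batch one might pipe to the
mysql client (a sketch; the table name, file path, and database name are
placeholders):

```shell
#!/bin/sh
# Disable non-unique index maintenance, bulk-load, then rebuild keys.
sql="ALTER TABLE table1 DISABLE KEYS;
LOAD DATA INFILE '/tmp/table1.tsv' INTO TABLE table1;
ALTER TABLE table1 ENABLE KEYS;"
echo "$sql"
# Run with: echo "$sql" | mysql dbname
```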


Dan


On 8/18/06, George Law [EMAIL PROTECTED] wrote:
 Hello All,

 An ex-coworker of mine recently sent over a link to a mysql article
 about speeding up mass imports. Unfortunately, I deleted the email
 before I had time to actually digest it, and now I am wondering if it
 could help with my imports.

 The article suggested sorting the data ahead of time to prevent mysql
 from having to jump around as much. What I have is a raw data file
 that I pre-process to create 3 separate tab-delimited files, which are
 then loaded into mysql using LOAD DATA INFILE...


 I am working with a table with a primary key consisting of 2 fields:

 | cdr_seq_no | int(9) unsigned | NO | PRI | 0 |   |
 | callid     | char(33)        | NO | PRI |   |   |

 show index from comp_cdr;

 | Table    | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
 | comp_cdr |          0 | PRIMARY  |            1 | callid      | A         |     3454606 |     NULL | NULL   |      | BTREE      |         |
 | comp_cdr |          0 | PRIMARY  |            2 | cdr_seq_no  | A         |     3454606 |     NULL | NULL   |      | BTREE      |         |
 ...

 some sample data is :
 | callid   | cdr_seq_no |
 +--++
 | 002644e5f21f4abaa204120fad41 |   89630624 |
 | 004644e5f3db5af88248020fad41 |   89627398 |
 | 008344e5ef975e6eafe0020fad41 |   89630801 |
 | 009a44e5f2694aabb6cc12115a45 |   89614941 |
 | 00e044e5f2e94dd45074020fad41 |   89619029 |
 | 0007bb54c8691110800100201c0060ff |   89616484 |
 | 002a401ae58711d382f98183346766e7 |   89621314 |
 | 007f2ad6df2cdb118dd2c879d6db9673 |   89631061 |
 | 008724bcc7691110800100201c0144ff |   89628873 |
 | 008bbf9dc9691110800100201c00edff |   89627287 |

 Any suggestions on exactly how to sort those fields before


 Thanks!!!

 --
 George Law
 [EMAIL PROTECTED]


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   
http://lists.mysql.com/[EMAIL PROTECTED]


 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: sorting datafile for load data infile

2006-08-18 Thread George Law
Dan,

Right now I am using load data to load into my database.

I am inserting 10,000 - 20,000 rows (X3) every 10 minutes. 

One table currently has 17866472 rows

Just using the date command from bash, the rough estimates for the
inserts via load data... are :

2006-08-18 15:39:23 : begin import into table1
2006-08-18 15:41:33 : import into table1 records: (18183) deleted:(0)
skipped: (0) warnings:(0)

130 seconds for 18182 rows... as the number of rows in the table
increases, this import time increases.
Every once in a while I rotate this table out, and when doing the same
type of 'load data' it takes < 1 second.

I have tried disabling the keys, but if I remember correctly, it took
just as long if not longer.  
This table has a primary key based on the 2 fields I described earlier,
plus indexes on 4 other fields.  
All alphanumeric fields are fixed width char fields.  

So, I am eager to see if this sorting idea helps any.

--
George

 

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 18, 2006 3:53 PM
To: George Law
Cc: MYSQL General List
Subject: Re: sorting datafile for load data infile

George, for raw speed into a MyISAM table, I think you'll find it hard
to beat LOAD DATA INFILE, especially if you disable keys before and
re-enable afterwards (which is not unlike what your friend proposes -
creating the index in a more efficient fashion). I'd be interested to
hear how you get on with perl vs. LOAD DATA INFILE, if you do any
comparative benchmarks.

Dan


On 8/18/06, George Law [EMAIL PROTECTED] wrote:
 his exact email:

 As you know, when you are doing mass inserts of millions of rows of
 data it can take hours. Well, these guys said that if you sort the
 rows of data 1st by the information that will be inserted into the
 database primary key before you do the inserts, then the total insert
 time will take a fraction of the time. The reason being that then the
 database doesn't have to jump back and forth in the TREE structure to
 insert each row of data. One row will be inserted immediately after
 the previous row that was inserted, and so it takes a lot less
 database processing time/overhead.



 Makes sense!  I thought you might be interested in 
this theory.
 They claim it makes a world of difference!


 Now I know he references doing straight inserts, not using 
load data,
 so I am working on rewriting my code to do inserts.


 I am working on loading everything into hashes in perl, 
keyed off the
 callid field.
 then, realistically, I should be able to sort the hash on 
the index, and
 process the inserts with the sorted data.



 -Original Message-
 From: Dan Buettner [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 18, 2006 1:40 PM
 To: George Law
 Cc: MYSQL General List
 Subject: Re: sorting datafile for load data infile
 
 George, I've not heard of that technique - sorting so 
that mysql
 doesn't have to jump around as much.
 
 What I am aware of, and what can make a difference, is disabling
 indexes prior to starting the load.  Your load goes quite a bit
 faster, in general, but then there is some time at the end 
spent updating
 indexes.   Overall it can be faster.  From
 http://dev.mysql.com/doc/refman/5.0/en/load-data.html :
 
 
 If you use LOAD DATA INFILE on an empty MyISAM table, 
all non-unique
 indexes are created in a separate batch (as for REPAIR TABLE).
 Normally, this makes LOAD DATA INFILE much faster when 
you have many
 indexes. In some extreme cases, you can create the indexes
 even faster
 by turning them off with ALTER TABLE ... DISABLE KEYS 
before loading
 the file into the table and using ALTER TABLE ... ENABLE KEYS to
 re-create the indexes after loading the file. See 
Section 7.2.16,
 Speed of INSERT Statements.
 
 
 Dan
 
 
 On 8/18/06, George Law [EMAIL PROTECTED] wrote:
  Hello All,
 
  An ex-coworker of mine recently sent over a link to a mysql
 article about
  speeding up mass imports.
  unfortunately I deleted the email before I had time to
 actually digest
  it and now I am wondering if it could help with my imports.
 
  The article suggested sorting the data ahead of time to
 prevent mysql
  from having to jump around as much.
  What I have is a raw data file that I pre-process to
 create 3 separate
  tab delimited files, which are then loaded into mysql
 using load data
  infile...
 
 
  I am working with a table with a primary key consisting of
 2 fields.
  | cdr_seq_no | int(9) unsigned | NO | PRI | 0 |   |
  | callid     | char(33)        | NO | PRI |   |   |
 
 
  show index from comp_cdr;
 
 
 +-----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
 | Table     | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type |

RE: forcing leading 0 for numeric fields

2006-08-10 Thread George Law
Mark,

Perfect!  Thanks!!

I altered my table to specify ZEROFILL and that did the trick.



--
George


-Original Message-
From: Mark Leith [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 09, 2006 3:56 PM
To: Jonathan Mangin
Cc: George Law; MYSQL General List
Subject: Re: forcing leading 0 for numeric fields

Jonathan Mangin wrote:
 - Original Message - 
 From: George Law [EMAIL PROTECTED]
 To: MYSQL General List mysql@lists.mysql.com
 Sent: Wednesday, August 09, 2006 3:40 PM
 Subject: forcing leading 0 for numeric fields


 Hello All,

 I have what is probably a simple question.

 I have a table of phone numbers, broken into npa,nxx,station

 So, 8001231234: npa=800, nxx=123, station=1234

 Some queries pull data from this table in the format:

 select * from table where concat(npa,nxx,station)=8001231234


 That is all good.

 The problem I ran into today is where the station column 
is < 1000, 
 ie 8001230123

 station =0123 which gets stored as 123 by mysql



 Is there a quick and easy way to force station to 4 digits 
when I do the
 query
 select * from table where concat(npa,nxx,station)=8001230123

 This query does not work, but select * from table where
 concat(npa,nxx,station)=800123123
 does.

Store them as INT with ZEROFILL:

mysql> CREATE TABLE zeros (i INT(4) ZEROFILL, j INT(4) ZEROFILL, k 
INT(4) ZEROFILL);
Query OK, 0 rows affected (0.09 sec)

mysql> INSERT INTO zeros VALUES (23,3244,0123);
Query OK, 1 row affected (0.01 sec)

mysql> select * from zeros;
+--+--+--+
| i| j| k|
+--+--+--+
| 0023 | 3244 | 0123 |
+--+--+--+
1 row in set (0.08 sec)

Make sure you specify the length of the digits that you would like 
padded to within the INT specification.
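If altering the column is not an option, padding at query time is another way to get the fixed-width form. A sketch, assuming station is stored as a plain INT (table and column names are from the thread):

```sql
-- Pad station to 4 digits in the query instead of in storage.
SELECT * FROM table
WHERE CONCAT(npa, nxx, LPAD(station, 4, '0')) = '8001230123';
```

LPAD works on the string form of the value, so this does not require changing the column definition, at the cost of doing the conversion on every row scanned.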

Best regards

Mark

-- 
Mark Leith, Support Engineer
MySQL AB, Worcester, England, www.mysql.com
Are you MySQL certified?  www.mysql.com/certification



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



forcing leading 0 for numeric fields

2006-08-09 Thread George Law
Hello All,

I have what is probably a simple question.

I have a table of phone numbers, broken into npa,nxx,station

So, 8001231234: npa=800, nxx=123, station=1234

Some queries pull data from this table in the format:

select * from table where concat(npa,nxx,station)=8001231234


That is all good.

The problem I ran into today is where the station column is < 1000, 
ie 8001230123

station =0123 which gets stored as 123 by mysql



Is there a quick and easy way to force station to 4 digits when I do the
query
select * from table where concat(npa,nxx,station)=8001230123

This query does not work, but select * from table where
concat(npa,nxx,station)=800123123
does.

TIA!

--
George




RE: Checkboxes

2006-07-26 Thread George Law
Nick,

It looks like the mysql part has been answered.  I was going to suggest
using an ENUM
field myself.

For the PHP part, if you are trying to keep multiple rows in your HTML
correlated,
i.e. $customer[1] = $mod_type[1], you may not want to use a checkbox field

checkbox fields only pass a field if they are checked - so if you have
information
for customer[0], and the checkbox is not checked, then $mod_type[0]
would NOT be from the first
row, it would be from the first CHECKED row.

The best solution I have found is to use a RADIO button with the value
set to the default. 
This way it always gets a value. I have seen others suggest using a
hidden field
to preset this field so there is always a value -- 
<input type="hidden" name="mod_type_0" value="other" />
<input type="checkbox" name="mod_type_0" value="alternative"
/>Alternative<br />

This way, if the checkbox is unchecked, it gets a value of 'other', but
if it is
checked, it will pass a value of 'alternative'.

Note - I am not sure how this would work with an array (mod_type[]).
PHP might treat
the hidden field as index 0, and if the checkbox is checked, it would be
index 1


-
George

-Original Message-
From: Nicholas Vettese [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 26, 2006 12:04 PM
To: mysql@lists.mysql.com
Subject: Checkboxes

I am trying to allow a user to use checkboxes for multiple 
selections, and I am trying to get the database to record 
the choices made by the user.  I am using a VARCHAR(250), 
but I am guessing that may be wrong.  Also, other than 
adding a [] to the end of the 'name' (e.g. <input 
type="checkbox" name="mod_type[]" value="alternative" 
/>Alternative<br />), what do I need to do?

I hope I am asking this question correctly.  If not, please 
feel free to flame me.

Nick







RE: yes or no checkbox

2006-07-25 Thread George Law
I think if you do it as an ENUM field, it shows as a checkbox in phpMyAdmin

define it as ENUM('Yes','No')
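In SQL terms, that would look something like this (t and flag are placeholder names):

```sql
-- ENUM column restricted to two values, with a default.
ALTER TABLE t ADD COLUMN flag ENUM('Yes','No') NOT NULL DEFAULT 'No';
```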

 

-Original Message-
From: Brian E Boothe [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 25, 2006 11:07 AM
To: mysql@lists.mysql.com
Subject: yes or no checkbox 

I know this is probably a simple question, but if I'm going to 
set up a yes 
or no checkbox on a table field in MySQL through phpMyAdmin,  
how do I do 
that?
thanks





RE: log

2006-07-11 Thread George Law
 
Just a shot in the dark...
how about : \T /home/karl/sql_log_$$


I believe, in bash, $$ will give you the pid of the process, so you
should
get a unique (somewhat) file name every time
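To make the logging survive restarts without retyping \T each session, the mysql client's tee option can also go in an option file; a sketch (the path is the one from the thread):

```ini
# ~/.my.cnf -- applies to the mysql command-line client only
[mysql]
tee = /home/karl/sql_log
```

With this in place the client appends to the log on every start, so the file is not lost when the session ends.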


-Original Message-
From: Karl Larsen [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 11, 2006 2:51 PM
To: MYSQL General List
Subject: log 

I have found that \T /home/karl/sql_log will cause 
everything I do 
to be saved in the file sql_log. Alas, if I turn off mysql as 
I do daily, 
the log is lost and I have to do it again.

Is there a way to get a log like this to be permanent :-)


Karl Larsen





RE: MS SQL TOP Replacement

2006-06-29 Thread George Law

SELECT max(id) FROM menu_links

:)


 -Original Message-
 From: Mike Wexler [mailto:[EMAIL PROTECTED]
 Sent: Thursday, June 29, 2006 1:22 PM
 To: Mike
 Cc: mysql@lists.mysql.com
 Subject: Re: MS SQL TOP Replacement
 
 Mike wrote:
  I have a SELECT statement for MS SQL that is
 
  SELECT TOP 1 id FROM menu_links ORDER BY id desc
 
  and need to convert it to MySQL. I cannot find what will do that.
 
  Help is greatly appreciated.
 
  TIA
 
  Mike
 
 
 
 SELECT id FROM menu_links ORDER BY id desc LIMIT 1
 
 
 
 
 



RE: reclaim disk space

2006-06-21 Thread George Law
I had to do some disk space recovery in mysql last month...

I backed up some of the raw .MYI, .MYD files and deleted them from
the data directory.  Running 5.0.18, I had to shut down mysql and
restart before it freed up the space.
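For MyISAM tables that have merely shrunk (rows deleted, not the whole table dropped), OPTIMIZE TABLE can reclaim the space without touching the files by hand; a sketch (the table name is a placeholder):

```sql
-- Defragments the .MYD data file and sorts the index pages.
OPTIMIZE TABLE some_table;
```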



 -Original Message-
 From: Dan Nelson [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, June 21, 2006 3:58 PM
 To: luiz Rafael
 Cc: mysql@lists.mysql.com
 Subject: Re: reclaim disk space
 
 In the last episode (Jun 21), luiz Rafael said:
  how to reclain the disk space used by an table that was dropped?
 
 For most storage engines, each table is in its own file so a dropped
 table immediately returns space back to the OS.  For InnoDB in
 tablespace mode (i.e. innodb_file_per_table is unset), you will have
to
 back up and drop all your InnoDB tables, delete the tablespace files,
 and reload the tables.
 
 http://dev.mysql.com/doc/refman/5.0/en/adding-and-removing.html
 http://dev.mysql.com/doc/refman/5.0/en/multiple-tablespaces.html
 
 --
   Dan Nelson
   [EMAIL PROTECTED]
 



RE: select 9000000000000.10*19.80 wrong result 178200000000001.97

2006-05-30 Thread George Law
You don't need to take it to 16 digits :

mysql 4 :

select 1/666;
+---+
| 1/666 |
+---+
|  0.00 |
+---+

mysql 5.0.18:
select 1/666;
++
| 1/666  |
++
| 0.0015 |
++
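For exact results, the DECIMAL type avoids binary floating point entirely; a sketch of the original multiplication with explicit precision (the cast widths are assumptions):

```sql
-- DECIMAL arithmetic is exact within the declared precision.
SELECT CAST(9000000000000.10 AS DECIMAL(20,2)) * CAST(19.80 AS DECIMAL(4,2));
```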


George Law

 -Original Message-
 From: Chris W [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, May 30, 2006 4:39 PM
 To: [EMAIL PROTECTED]
 Cc: mysql@lists.mysql.com
 Subject: Re: select 9000000000000.10*19.80 wrong result
178200000000001.97
 
 wang shuming wrote:
 
  Hi,
   select 9000000000000.10*19.80 wrong result 178200000000001.97
   if = 9000.10*19.80 right result 178201.98
 
 
 This is very typical floating point error.  Computers don't have
 infinite precision or it would take infinite time to compute the
 answer.  I have to ask why you need such precise number.  In the real
 world if you can measure or control something to 4 significant digits
 you are doing really good.  If you can do 6 you are doing great.  If
you
 can do 8 you are one step from a miracle worker.  Measuring or
 controlling something to a level of precision of 16 significant digits
 is completely impossible.
 
 --
 Chris W
 KE5GIX
 
 Gift Giving Made Easy
 Get the gifts you want 
 give the gifts they want
 One stop wish list for any gift,
 from anywhere, for any occasion!
 http://thewishzone.com
 
 



RE: SPAMs

2006-05-24 Thread George Law
more than likely someone is harvesting emails off groups.google.com 

http://groups.google.com/group/mailing.database.myodbc

Your posting shows at the top of the list :)



George Law
 -Original Message-
 From: Jørn Dahl-Stamnes [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, May 24, 2006 7:38 AM
 To: mysql@lists.mysql.com
 Subject: Re: SPAMs
 
 On Wednesday 24 May 2006 13:36, Logan, David (SST - Adelaide) wrote:
  Hi Jørn,
 
  I myself haven't had too many spams, I get a few anyway from various
 places
  my email address is placed. I would suggest try unsubscribing using this
  alias and just re-subscribe with your new (known only to you) alias.
 
 They must have found my e-mail address from this list or from some sort
 of
 archive.
 
 I don't know which e-mail system (majordomo, listserver, or) this list
 uses.
 But some of the systems do have commands to change the e-mail address
 without
 doing an unsubscribe and a new subscribe.
 
 --
 Jørn Dahl-Stamnes
 homepage: http://www.dahl-stamnes.net/dahls/
 



RE: Fun with Dates and Incentives.

2006-05-24 Thread George Law
Brian,

Just my 2 cents :)

I always try to use an epoch timestamp for anything time-related.  It's
easier to compare times, and all the functions are built into mysql to
convert to/from a unix epoch timestamp.

select unix_timestamp(NOW());
+---+
| unix_timestamp(NOW()) |
+---+
|1148492137 |


select from_unixtime(1148492137);
+---+
| from_unixtime(1148492137) |
+---+
| 2006-05-24 13:35:37   |


Need the number of minutes between 2 epoch timestamp? subtract and
divide by 60.

select (unix_timestamp(NOW()) - 1148492137)/60;
+-+
| (unix_timestamp(NOW()) - 1148492137)/60 |
+-+
|2.32 |
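MySQL 5.0 can also do the same interval arithmetic directly on DATETIME values; a sketch using TIMESTAMPDIFF (the literal date is the one from the example above):

```sql
-- Minutes between two points in time, without converting to epoch seconds.
SELECT TIMESTAMPDIFF(MINUTE, '2006-05-24 13:35:37', NOW());
```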



George Law

 -Original Message-
 From: Brian Menke [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, May 24, 2006 12:51 PM
 To: mysql@lists.mysql.com
 Subject: Fun with Dates and Incentives.
 
 I'm hoping for some general advice on an approach for the following
 scenario:
 
 
 
 I have a customer who wants to put an incentive program in place for
 students taking learning modules and then completing tests. The
concept is
 simple. Award the first 10 people who complete a test with a score of
 100%... that type of thing. Students are allowed to take tests more
than
 once. Track each time the student takes the test and show the latest
score
 etc. You get the idea. I have the database tables and relationships
 already
 all set up for the tests, but it's the tracking of the dates and times
 that
 I don't have and it got me thinking.
 
 
 
 I need to track down to the day/hour/minute level. Okay, that should
be
 easy
 (I think). I'm going to need to do a lot of date/time calculations.
Would
 it
 be best just to have a default of CURRENT_TIMESTAMP set for a
TIMESTAMP
 field? Or, is their something else I should be using? I have limited
 experience having to munge and crunch date/time info and I want to
make
 sure
 I have the flexibility to do what I need in the future.
 
 
 
 The next gotcha I thought up is what about different time zones.
Obviously
 without this consideration, people on the East coast would have an
unfair
 3
 hour advantage over people on the west coast. I guess I can have a
time
 zone
 field in my student table so I could derive the time difference. Any
 suggestions on a good time zone approach?
 
 
 
 Here are my two tables as they stand now. I'm wondering if these are
set
 up
 in a way to allow me to do all this date time crunching I'm going to
need
 to
 do in the future? Any suggestions are greatly appreciated :-)
 
 
 
 
 
 CREATE TABLE `students` (
 
   `store_id` varchar(6) NOT NULL,
 
   `email` varchar(64) NOT NULL,
 
   `fname` varchar(32) NOT NULL,
 
   `lname` varchar(32) NOT NULL,
 
   `role` char(2) NOT NULL default '5',
 
   `password` varchar(8) NOT NULL,
 
   `phone` varchar(24) default NULL,
 
   `reg_date` date default NULL,
 
   PRIMARY KEY  (`email`),
 
   UNIQUE KEY `email` (`email`)
 
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
 
 
 CREATE TABLE `completed_modules` (
 
   `module_id` char(2) NOT NULL default '',
 
   `email` varchar(64) NOT NULL,
 
   `score` int(2) NOT NULL default '0',
 
   `time` timestamp NOT NULL default CURRENT_TIMESTAMP
 
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
 
 
 Brian Menke
 
 Visual Matter, Inc
 
 1445 Foxworthy Ave., Suite 50-215
 
 San Jose, CA 95118
 
 408 375 9969
 
 
 
 San Jose ~ Los Angeles
 www.visualmatter.com
 
 





RE: Looking for free MySQL Administrator

2006-05-23 Thread George Law
There is also a nice tool for Mac OS X called CocoaMySQL.
http://cocoamysql.sourceforge.net/




George Law

 -Original Message-
 From: Rajesh Mehrotra [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, May 23, 2006 3:58 PM
 To: Rithish Saralaya; mysql@lists.mysql.com
 Subject: RE: Looking for free MySQL Administrator
 
 
 Check out http://www.mysql.com/products/tools/ for some good stuff...
 
 Raj Mehrotra
 [EMAIL PROTECTED]
 
 
 -Original Message-
 From: Rithish Saralaya [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, May 23, 2006 1:21 PM
 To: mysql@lists.mysql.com
 Subject: RE: Looking for free MySQL Administrator
 
 
  I'm looking for a MySQL administrator for 4.x/5.x that will allow me
  to
 
  Any suggestions? TIA
 
 http://www.webyog.com/
 
 Regards,
 Rithish.
 



RE: Returning records in a circle

2006-05-12 Thread George Law
 I ran into this same type of question the other evening at a local
linux group.

I think that once you reach the end of the results set the only way to 
start back at the beginning of that results set is to do the query
again.

One option - do your query and grab all the rows
load them into a data structure - ie - a list of associative arrays


Then all you need to do is incrementally go through the array.
when you hit the end, just reset the index back to 0.
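If the result set is too big to hold in memory, the LIMIT clause can page through it two at a time, with the application wrapping the offset back to 0 at the end; a sketch (urls is the table from the thread; the offset value is a placeholder the application would track):

```sql
-- Fetch two rows starting at a given offset; when fewer than two
-- rows come back, reset the offset to 0 and re-run.
SELECT id, html FROM urls ORDER BY id LIMIT 4, 2;
```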


-Original Message-
From: Steffan A. Cline [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 12, 2006 2:52 PM
To: mysql@lists.mysql.com
Subject: Re: Returning records in a circle

Well, basically it can be done to an extent in some kind of code.
Basically
I am drawing from a table 2 records at a time. I want to make sure that
all
records are pulled at least once. If at all possible not to have 2 from
the
same vendor.

So, in this case.

I have columns id, html, vendor

So I can add columns as needed. I tried adding a flag and after
returning to
the client the 2 records I'd mark it as flag = 1 then search like this
Select id, html from urls order by flag, id desc limit 3
Then after I have those I would then set the last of the 3 to flag = 1
so
that on the next search I get the 2 after. In theory it worked fine but
when
multiple people hit the page simultaneously I had flags in different
places
and not in order. Maybe just mark them as flag = 1 after returned and
then
on search if found is 0 then set all to flag = 0 so they can be seen
again?
This doesn't seem so bad but then I guess I'd use distinct? If I can't
use
distinct with other parameters...
ex: select id,html distinct(vendor) from urls where flag=0 limit 2;
Would it be like :
Select id,html from urls where flag = 0 group by distinct(vendor) limit
2

?  

Thanks

Steffan

---
T E L  6 0 2 . 5 7 9 . 4 2 3 0 | F A X  6 0 2 . 9 7 1 . 1 6 9 4
Steffan A. Cline
[EMAIL PROTECTED] Phoenix, Az
http://www.ExecuChoice.net  USA
AIM : SteffanC  ICQ : 57234309
The Executive's Choice in Lasso driven Internet Applications
  Lasso Partner Alliance Member
---



 From: Rhino [EMAIL PROTECTED]
 Date: Fri, 12 May 2006 14:20:10 -0400
 To: Steffan A. Cline [EMAIL PROTECTED], mysql@lists.mysql.com
 Subject: Re: Returning records in a circle
 
 
 - Original Message -
 From: Steffan A. Cline [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Sent: Friday, May 12, 2006 1:38 PM
 Subject: Returning records in a circle
 
 
 Is there a way I can get a set of records incrementally such as to
get 2
 then the next query get the next 2 then at the end of all records to
get
 the
 2 from the beginning? I need to keep going incrementally by 2 in a
circle.
 
 
 Are you trying to get these rows purely via SQL at the command line or
in an
 SQL script? Or would an application be an option for you?
 
 If you are not willing to consider application code to grab the rows
you
 want, the answer to your question is maybe. SQL has always been
intended
 to return ALL of the rows that satisfy a query with a single
invocation of
 the query, no matter how many rows that is. So if your query says:
 
 select * from mytab;
 
 you will normally get all of the rows that satisfy that query in one
go,
 whether there are 0 rows, 100 rows, or a 100 million rows in the
result.
 
 You _might_ be able to get the results you want by using the LIMIT
clause.
 I'm not sure what version of MySQL you are using but the LIMIT clause
is
 described in the MySQL 3.23/4.0/4.1 manual on this page:
 http://dev.mysql.com/doc/refman/4.1/en/select.html. The problem is
that
 you'll still pretty much need some sort of script in order to keep
executing
 the query to get the next two records and you may need to change the
 parameters of the LIMIT clause at the same time.
 
 If you are willing to write application code, things get a lot easier.
For
 instance, a Java program could easily grab rows from a result set for
you
 two at a time, let you process them, then grab two more, etc. I expect
that
 it would be similarly easy to do the same thing in Perl, PHP and C.
 
 In short, a program gives you a lot more ability to do what you want
to do
 with your database data. But some shops have very little programming
 expertise and prefer to do everything via SQL. If you work for one of
those
 shops, you might not be able to get your records two at a time with
SQL
 alone, unless you can write a script that takes advantage of the LIMIT
 clause.
 
 I don't pretend to know MySQL exhaustively so someone else may have
another
 suggestion for you but the only two approaches I can think of that
might
 meet your needs are to use the LIMIT clause or to write an
application.
 
 --
 Rhino
 
 
 
 

RE: MySQL 5 / phpMyAdmin

2006-05-10 Thread George Law
try a search for old_password

 

-Original Message-
From: Amer Neely [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 10, 2006 1:39 PM
To: MySQL List
Subject: MySQL 5 / phpMyAdmin

I've finally got MySQL 5.0 going on my Win2K machine, but phpMyAdmin 
2.8.0.3 is now giving me grief. It gives me this error:

phpMyAdmin tried to connect to the MySQL server, and the server rejected
the connection.
#1251 - Client does not support authentication protocol requested by 
server; consider upgrading MySQL client

I'm using the same phpMyAdmin config file as when I had a previous 
version of MySQL installed, so I'm stumped on what could be the hangup.

How do I upgrade my MySQL client? phpinfo tells me the API is 3.23.49.

$cfg['Servers'][$i]['auth_type'] = 'config';

I'm not even sure this is a MySQL question, but if someone can provide a
clue for me that would be great.
-- 
Amer Neely
Home of Spam Catcher
W: www.softouch.on.ca
E: [EMAIL PROTECTED]
Perl | MySQL | CGI programming for all data entry forms.
We make web sites work!




RE: Can I set UNIX_TIMESTAMP() as default?

2006-05-10 Thread George Law
 
Nicholas,

just found something on google:
In general, this cannot be done. Default values cannot be the return of
a
MySQL function (as much as I'd love to use NOW() for default values!).

However, there's one loophole. When inserting, not specifying a value
for
the first timestamp field in a table will generate the current
timestamp. 


Best workaround I can think of is to set your field as an int 
and include unix_timestamp(NOW()) in your inserts
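A sketch of that workaround (table and column names are placeholders):

```sql
CREATE TABLE events (
  id      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  created INT UNSIGNED NOT NULL   -- epoch seconds; no function default allowed here
);

-- Supply the timestamp explicitly on every insert.
INSERT INTO events (created) VALUES (UNIX_TIMESTAMP(NOW()));
```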



-Original Message-
From: Nicolas Verhaeghe [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 10, 2006 2:33 PM
To: mysql@lists.mysql.com
Subject: Can I set UNIX_TIMESTAMP() as default?

I would like an integer field to capture the current date as a Unix
Timestamp by default.

But this will not be accepted at all.

I get the error invalid default value for [field name]

Is there a workaround?

Thanks!





RE: inserting server time into mysql

2006-05-09 Thread George Law
 
$query = "INSERT INTO staffs (firstname, lastname, signin) VALUES
('$firstname', '$lastname', NOW())";


I think if you alter the table and make signin a TIMESTAMP column that
defaults to the current time (a plain DATETIME cannot take NOW() as a
default):

ALTER TABLE staffs CHANGE signin signin TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;

then you could just do:
$query = "INSERT INTO staffs (firstname, lastname) VALUES ('$firstname',
'$lastname')";

-Original Message-
From: Alla-amin [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 09, 2006 12:41 PM
To: mysql@lists.mysql.com
Subject: Re: inserting server time into mysql


Thanks for your help,

How can I capture this information from a php form into a mysql table. 
This is what I did:

1. I created the table and the user to access the database the table is
in
create table staffs (
id int not null auto_increment primary key,
firstname varchar(20) not null,
lastname varchar(20) not null,
signin datetime not null
);

2. I created a php form to insert data into this table. The
form works but the datetime field isn't populated

<html>
<head>
<title>Staff Detail</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>

<body>
<?php
if(isset($_POST['add']))
{
include 'config.php';
include 'opendb.php';

$firstname = $_POST['firstname'];
$lastname = $_POST['lastname'];
$signin = $_POST['signin'];

$query = "INSERT INTO staffs (firstname, lastname, signin) VALUES
('$firstname', '$lastname', '$signin')";
mysql_query($query) or die('Error, insert query failed');

include 'closedb.php';
echo "New staff detail added";
}
else
{
?>
<form method="post">
<table width="400" border="0" cellspacing="1" cellpadding="2">
<tr>
<td width="100">First Name</td>
<td><input name="firstname" type="text"></td>
</tr>
<tr>
<td width="100">Last Name</td>
<td><input name="lastname" type="text"></td>
</tr>
<tr>
<td width="100">Sign-In Time</td>
<td><input name="signin" type="text"></td>
</tr>
<tr>
<td width="100">&nbsp;</td>
<td>&nbsp;</td>
</tr>
<tr>
<td width="100">&nbsp;</td>
<td><input name="add" type="submit" id="add" value="Submit"></td>
</tr>
</table>
</form>
<?php
}
?>
</body>
</html>

Am I doing something wrong?

You mentioned that I can insert the now() function into a regular
datetime field, how can I do that?


In the last episode (May 09), Alla-amin said:
 I am trying to capture my server time automatically using php and
 insert it in a mysql table.
 
 Can the timestamp or time data type capture this information
 automatically without having me code anything else?

You can use the 'timestamp' type to automatically insert the current
date/time when inserting or updating, or you can insert now() into a
regular 'datetime' field.

http://dev.mysql.com/doc/refman/5.0/en/date-and-time-type-overview.html
http://dev.mysql.com/doc/refman/5.0/en/datetime.html
http://dev.mysql.com/doc/refman/5.0/en/timestamp-4-1.html




RE: Outfile syntax and out of memory

2006-05-08 Thread George Law
 
Johan,

have you thought about doing this incrementally?
ie - 25% at a time x 4

to show something for NULL, you can use the 
COALESCE function.

ie - COALESCE(column,'nothing') 
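A sketch of how the header row and the NULL handling can be pushed into the query itself, so that SELECT ... INTO OUTFILE produces the needed format (column, table, and file names are placeholders based on the sample output below):

```sql
-- First SELECT emits the header row; COALESCE replaces NULL with 'nothing'.
SELECT 'OrderID', 'Quant', 'OrdrDate', 'code1'
UNION ALL
SELECT OrderID, Quant, OrdrDate, COALESCE(code1, 'nothing')
FROM orders
INTO OUTFILE '/tmp/outfile.txt'
FIELDS TERMINATED BY ', ' LINES TERMINATED BY '\n';
```

INTO OUTFILE writes on the server side without buffering the whole result in the client, which also sidesteps the out-of-memory problem.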


--
George Law
VoIP Network Developer
864-678-3161
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]
 

-Original Message-
From: Johan Lundqvist [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 08, 2006 10:16 AM
To: mysql@lists.mysql.com
Subject: Outfile syntax and out of memory

Hi,

I need to extract some data to a textfile from a big database.


If I try to do it like this:
mysql < queryfile.sql > outfile.txt

outfile.txt it looks something like:
OrderID, Quant, OrdrDate, code1, code2...
10021, 12, 20060412, 23, 95...
10022, 5, 20060412, , 75...

But, I never get a complete file. I get an out of memory error after an 
hour or 2!!


If I instead insert the following code in queryfile.sql:
INTO OUTFILE 'outfile.txt'

Now my outfile.txt doesn't get the first row with the column names, and 
any NULL values are exported as \N.

This is a big problem, cause the import function that exists where I send
the data only accepts the format I get using mysql < queryfile.sql >
outfile.txt.

Any help??! Ideas??

Can I in any way format my output to print the column names and print 
NULL values as 'nothing'??

Regards,
/Johan Lundqvist




RE: How to rename a DB

2006-05-05 Thread George Law


Hardi

I rotate tables out on a monthly basis.  The way I do it is:

RENAME TABLE table1 TO table2;

If you need a new copy of table1, you can do :

CREATE TABLE table1 LIKE table2;
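The same per-table RENAME also works across databases, which is the usual way to "rename" a database one table at a time (olddb and newdb are placeholder names; newdb must already exist):

```sql
-- Moves the table between databases; for MyISAM this is
-- essentially a directory-entry change, so it is fast.
RENAME TABLE olddb.table1 TO newdb.table1;
```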

 
--
George

-Original Message-
From: Hardi OK [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 04, 2006 11:19 PM
To: mysql@lists.mysql.com
Subject: How to rename a DB

Hi,

Does anybody know how I can safely and easily rename a database in MySQL
5.0.19?
I have tried some tips from google results (most of them are for MySQL
4.x)
but no luck till now.
Any help would be greatly appreciated.

Rgds/Hardi




RE: How to rename a DB

2006-05-05 Thread George Law
DOH!

Sorry - I totally misread the question!

A quick google for rename database shows Adrian is correct!


 

-Original Message-
From: Adrian Bruce [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 05, 2006 9:51 AM
To: George Law
Cc: Hardi OK; mysql@lists.mysql.com
Subject: Re: How to rename a DB


Stop the server, go to the MySQL data directory and physically change 
the name of the directory that corresponds to the database.  Restart 
MySQL server and SHOW DATABASES to see the change take effect.

George Law wrote:
 Hardi

 I rotate tables out on a monthly basis.  The way I do it is:

 rename table table1 to table2;

 If you need a new copy of table1, you can do :

 create table table1 like table2

  
 --
 George

 -Original Message-
 From: Hardi OK [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, May 04, 2006 11:19 PM
 To: mysql@lists.mysql.com
 Subject: How to rename a DB

 Hi,

 Anybody know how can i safely and easily rename a database in MySQL
 5.0.19.
 Have tried some tips from google results (most of them are for MySQL
 4.x)
 but no luck till now.
 Any help would be greatly appreciated.

 Rgds/Hardi

   

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: importing a dumpfile from with the mysql client

2006-05-05 Thread George Law
I think what he is saying is that he began the transaction in a
command line client session from one location but was not able to issue
the

 mysql> SET FOREIGN_KEY_CHECKS = 1;
 mysql> COMMIT;

commands in the same session. (ie - I owe, I owe, it's off to work I go
:)  )

This is a good question, one I have wondered about myself.

Is there a way in mysql to attach to a session to issue a commit?




-Original Message-
From: sheeri kritzer [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 05, 2006 3:02 PM
To: Luke Vanderfluit
Cc: MySQL List
Subject: Re: importing a dumpfile from with the mysql client

On 5/4/06, Luke Vanderfluit [EMAIL PROTECTED] wrote:

[snip]

 I started this process remotely then went to the site to finish it.
 But when the dump finished (several hours later) I was not able to
 execute the following commands from my original location.

 mysql> SET FOREIGN_KEY_CHECKS = 1;
 mysql> COMMIT;

What do you mean you were not able?  Did you get an error?  Was the
server hung?  Did the keyboard stick to make you unable to type the
commands?

 My question is:
 Since the import completed the database has grown in size and been
 backed up etc.
 Yet from the original session I have not executed those 2 commands.

 Is it safe to execute them? Or would executing them cause corruption
or
 other unforseen stuff?
 Is it unnecessary to execute them?
 Is it safe to kill that original mysql session?

Those 2 commands, in and of themselves, aren't dangerous.  It's what's
in the script that could be the problem.  However, since you really
didn't mention what error you got, it's not easy to figure out how to
fix it.

It depends what tables you're running these on.  If you're running
them on MyISAM tables, deadlocking cannot happen.  Then again,
transactions are meaningless too.  If you're running on InnoDB you
have the possibility of deadlocking, but MySQL is pretty good about
avoiding that.

If other users/applications are writing to the SAME data then there's
the possibility of inconsistent data, but I wouldn't expect
corruption.

if you post what errors you actually got, I might be able to help. 
Also, are you using the same db and tables that other apps/users are
using?  Does the dumpfile contain DROP TABLE IF EXISTS commands?

-Sheeri

 --
 Luke Vanderfluit.
 Analyst/Programmer.
 Internode Systems Pty. Ltd.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





RE: UPDATE question

2006-05-03 Thread George Law
 
Shawn,

Perhaps :

UPDATE TABLE_NAME SET FIELD4 = concat(FIELD1,FIELD2);




-Original Message-
From: Cummings, Shawn (GNAPs) [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 03, 2006 9:33 AM
To: Mysql General (E-mail)
Subject: UPDATE question


If I have 4 Fields (FIELD1, FIELD2, FIELD3  FIELD4)

I can do this easily;

UPDATE TABLE_NAME SET FIELD4 = FIELD1;

But -- how do I do it so that FIELD4 = FIELD1 & FIELD2 ???  I can't seem
to find any examples online.  Maybe it's just too early in the morning -
I'm drawing a blank! ;)






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





RE: Help with this query. How to do a group by on a datetime just the month/day/year

2006-05-03 Thread George Law
try:

group by substring(timeofclick,1,10)

(the first 10 characters of a DATETIME value are the yyyy-mm-dd date, so
this groups all clicks from the same day together)


-Original Message-
From: Randy Paries [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 03, 2006 11:25 AM
To: mysql@lists.mysql.com
Subject: Help with this query. How to do a group by on a datetime just
the month/day/year

Hello,

Not sure if i can do this.

I have a table with a datetime column

I would like to do group by a day of the month.

if i do something like

select count(*) from MTracking where mallarea=1001 group by timeofclick

every row is listed on its own because of the time portion.

So is this possible?

Thanks
Randy

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





RE: EXPORTING results to CSV

2006-05-03 Thread George Law
try :

select ... into outfile '/tmp/t3.csv' FIELDS TERMINATED BY ',' LINES
TERMINATED BY '\n' from table where ...


This will create a file in the /tmp directory on the DB server itself.

This doesn't do the column headings, and your output file cannot already
exist.
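As an aside: the command line client's batch mode (mysql -B -e "SELECT
...") does print the column names as a first, tab-separated row, and its
output can be redirected anywhere, not just onto the DB server.
Converting tabs to commas then yields rough CSV. A minimal sketch of
that conversion step, with hard-coded sample rows standing in for the
client's output:

```shell
# mysql -B -e "SELECT ..." emits a header row plus tab-separated data rows.
# Hard-coded sample output stands in for a live server here.
printf 'id\tname\n1\tshawn\n2\tgeorge\n' |
    tr '\t' ',' > test.csv   # naive: breaks if the data itself contains tabs or commas
```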



 

-Original Message-
From: Cummings, Shawn (GNAPs) [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 03, 2006 2:45 PM
To: Mysql General (E-mail)
Subject: EXPORTING results to CSV


Is there a way to export the results to a text file (comma-delimited 
preferred)...

ie, SELECT * FROM TABLE > test.txt ; (obviously this doesn't work)  :)




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





RE: Script run how???

2006-05-01 Thread George Law
perl ?


just quickly throwing something together - this is untested.


$done  = 0;
$count = 0;
until ($done) {

    $dbh->do("analyze table t1");
    my $info = $dbh->{mysql_info};
    if ($info =~ /some kind of error message/) {
        $dbh->do("repair table t1");
        my $info2 = $dbh->{mysql_info};
        if ($info2 =~ /another error message/) {
            print "Error while repairing table t1\n";
            last;
        }
    } else {
        $done = 1;
    }

    $count++;
    if ($count > 5) {
        print "unable to repair errors in 5 passes\n";
        last;
    }
}



You would have to figure out what kind of errors may come back and put
those in the if conditions.



-Original Message-
From: Ravi Kumar [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 01, 2006 2:56 PM
To: Sergei Golubchik; Robert DiFalco
Cc: [EMAIL PROTECTED]; mysql@lists.mysql.com
Subject: Script run how???

Hi,
   
  I want to run following command thru script .
  analyze table t1;
  optimize table t1;
  analyze table t2;
  optimize table t2;
   
  Can I do it on linux mysql ?
  I also want to run script like during analysing if we notice any error
related with table then run repair table t1 l repair table t2;
   
  Thanks,
   


-
How low will we go? Check out Yahoo! Messenger's low  PC-to-Phone call
rates.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Import from unknown format (.bdd .ind .mor .ped)

2006-04-27 Thread George Law
Pedro,

I haven't ever seen those file types - but just a thought.  Maybe they
are just flat files and the extension relates to the table contained
therein.

If you have unix shell access, try the file command - ie: file file.bdd

If it reports plain text - chances are it's just a flat file - try
opening it in a text editor.

If the file command reports data - then it is probably some type of
database file.

Just a thought :)
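To illustrate (the .bdd sample below is invented plain text, not Pedro's
real data):

```shell
# A plain-text file hiding behind an unfamiliar extension.
printf 'name;age\nPedro;30\n' > sample.bdd

# file(1) inspects content, not the extension, so a flat file
# reports as text even with an odd suffix.
file sample.bdd
```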

--
George 


-Original Message-
From: Pedro mpa [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 27, 2006 2:05 PM
To: mysql@lists.mysql.com
Subject: Import from unknown format (.bdd .ind .mor .ped)

Greetings.

I need to import data to mysql from a db format I don't know.
Unfortunately
the person in charge of the data won't be reachable for the next 2 weeks
and
I want to continue my work.

Does anyone knows the db format extensions like:
.bdd
.ind
.mor
.ped


Thanks in advance.
Apologies for my bad English.
Pedro.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





RE: Carriage return in query ???

2006-04-21 Thread George Law
'\n' works for me in the command line client

select concat('Co: ABC','\n','Acct: 123','\n','Drive: Summer Special');

+-----------------------------------------------------------------+
| concat('Co: ABC','\n','Acct: 123','\n','Drive: Summer Special') |
+-----------------------------------------------------------------+
| Co: ABC
Acct: 123
Drive: Summer Special |
+-----------------------------------------------------------------+



If you are doing this in PHP, why not just use '<BR>'?
Maybe you are getting extra slashes you need to strip?
 http://us2.php.net/manual/en/function.stripslashes.php


 

-Original Message-
From: John Thorne [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 21, 2006 9:07 AM
To: 'mysql@lists.mysql.com'
Subject: Carriage return in query ???

Hello 

Trying to CONCAT several fields into one with carriage in a
Dreamweaver/php
recordset 

I found an example for MS SQL

SELECT recid, CoName, Drive, Del_Proj,
'Co: ' + CoName + CHAR(13) + 'Acct: ' + Acct + CHAR(13) + 'Drive: ' +
Drive AS Info
FROM data
ORDER BY recid ASC

Desired Result:

Co: ABC 
Acct: 123
Drive: Summer Special

mySQL:
SELECT recid, CoName, Drive, Del_Proj,
CONCAT('Co: ', CoName, what syntax here??, 'Acct: ', Acct, what syntax
here??, 'Drive: ', Drive) AS Info
FROM data
ORDER BY recid ASC

have tried '\n' '\\n' '\r' '\\r'  etc 

help
thanks

jrt



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: WHERE doesn't work

2006-04-20 Thread George Law
After you give the query, try :

echo mysql_error()."<P>".mysql_info();
 

This way, you should be able to see exactly what error is coming back
from PHP/MySQL


-Original Message-
From: Tom Lobato [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 20, 2006 1:55 PM
To: Jay Blanchard; mysql@lists.mysql.com
Subject: Re: WHERE doesn't work


From: Jay Blanchard [EMAIL PROTECTED]
More of a PHP question. Echo the UPDATE statement to the browser so
that
you can see what it looks like.

It is what I did. As I explain in the mail, I executed the command
extracted 
from the browser.
The command is:
UPDATE clientes SET tipo='r', nome_fantasia='cc',estado='24' WHERE id =
'5'
Then, I copyied this from browser and executed in the mysql client, And
all 
worked well. Only in the client, not php. And executed in other clients
too, 
all working well. Only the php mysql client api behave wrong.



Tom 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





RE: Error on T_echo ?? what is this ?

2006-04-12 Thread George Law
PHP is picky on what can and cannot be in quotes... I get this all the
time.

Try :
$sql = "INSERT INTO addvalue (`value1`, `value2`,
`sumfeild`) VALUES ('".$value1."', '".$value2."', '".$sumfield."')";

$result=MYSQL_QUERY($sql);

I think part of the error had to do with you missing a ")" at the end,
closing the MYSQL_QUERY();



-Original Message-
From: Chrome [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 12, 2006 1:37 PM
To: 'Brian E Boothe'; mysql@lists.mysql.com
Subject: RE: Error on T_echo ?? what is this ? 

Looks like you're missing a semicolon (;) at the end of this statement:

$result=MYSQL_QUERY("INSERT INTO addvalue (`value1`, `value2`,
`sumfeild`)"."VALUES ('$value1', '$value2', '$sumfield')

HTH

Dan
 
---
http://chrome.me.uk
 

-Original Message-
From: Brian E Boothe [mailto:[EMAIL PROTECTED] 
Sent: 13 April 2006 05:13
To: mysql@lists.mysql.com
Subject: Error on T_echo ?? what is this ? 

i,m getting the following error on my MySQL Code inserting data into a 
database, .?

   *Parse error*: parse error, unexpected T_ECHO in 
c:\inetpub\wwwroot\projects\testsum\adddata.php on line 30
   
LINE 30 is:   echo "Query Finished";

--here is ALL my code --
  <?
//--php/Mysql Code by Brian E Boothe  //
//throw data from form into MySQL database routine //
//initilize Adddatta to mysql database, //

//if($_POST['submit']) //If submit is hit
//{
   //then connect as user
   //change user and password to your mySQL name and password
   mysql_connect("localhost","root","goobers");
   
   //select which database you want to edit
   mysql_select_db("test");

   //convert all the posts to variables:
  $value1 = $_POST['value1'];
  $value2 = $_POST['value2'];
  $sumfield = $_POST['sumfield'];

   //Insert the values into the correct database with the right
fields
   //mysql table = news
   //table columns = id, title, message, who, date, time
   //post variables = $title, $message, '$who, $date, $time
  // $result=MYSQL_QUERY("INSERT INTO orders
(id,title,message,who,date,time)".
  $result=MYSQL_QUERY("INSERT INTO addvalue (`value1`, `value2`,
`sumfeild`)".
  "VALUES ('$value1', '$value2', '$sumfield')
 //INSERT INTO `orders` (`OrderNo`, `CompanyName`, `BillingAddress`,

`City`, `StateOrProvince`, `PostalCode`, `PhoneNumber`, `FaxNumber`, 
`WebPage`, `ContactFirstName`, `ContactLastName`, `EmailAddress`, 
`Notes`, `Customer`, `Startdate`, `Completedate`, `Biddate`, 
`Bidamount`, `ProjectInfo`, `ElecProjCost`, `ElecProjBill`, `ElecRem`, 
`CtrlProjCost`, `CtrlProjBill`, `CtrlRem`, `OthrProjCost`, 
`OthrProjBill`, `OthrRem`, `BondAm`, `BondBill`, `BondRem`)
//confirm
 
 echo "Query Finished";

?>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





RE: Select a value between dates.

2006-04-12 Thread George Law
This brings up a question I was asked...

 

Which is more efficient?

 

Select ... where date between 'yyyy-mm-dd hh:mm:ss' and 'yyyy-mm-dd
hh:mm:ss'

Or

Select ... where unix_timestamp(date) between
unix_timestamp('yyyy-mm-dd hh:mm:ss') and unix_timestamp('yyyy-mm-dd
hh:mm:ss')

 

 

 



RE: Days in Month

2006-04-04 Thread George Law
Mike,


Maybe : 

LAST_DAY(date)

Takes a date or datetime value and returns the corresponding value for
the last day of the month. Returns NULL if the argument is invalid.

mysql> SELECT LAST_DAY('2003-02-05');
        -> '2003-02-28'
mysql> SELECT LAST_DAY('2004-02-05');
        -> '2004-02-29'
mysql> SELECT LAST_DAY('2004-01-01 01:01:01');
        -> '2004-01-31'
mysql> SELECT LAST_DAY('2003-03-32');
        -> NULL



 -Original Message-
 From: Mike Blezien [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, April 04, 2006 2:48 PM
 To: MySQL List
 Subject: Days in Month
 
 Hello,
 
 is there a MySQL function to determine the total days of any given
month,
 IE if
 current month is April, is there a function or query to find the total
 days of
 the month?
 
 TIA,
 Mike(mickalo)Blezien
 ===
 Thunder Rain Internet Publishing
 Providing Internet Solution that Work
 ===
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





Re: Restoring a database from a complete MySQL dump

2006-03-08 Thread George Law
You should be able to pull each table (create + inserts) out into a stand 
alone file

table1.sql
table2.sql

etc...

and then you can restore table by table.

This might be harder depending on the size of your database tables.  If you 
have a million rows, chances
are your .sql file is huge and it would require a pretty robust text editor 
that could handle such a big file.


In vi you could just locate the 1st line of the CREATE and the last line
of the last insert (before the next create)

and then do:  :1,100 w table1.sql
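For dumps too big for an editor, the same split can be scripted. A
sketch, assuming each table's section starts at its CREATE TABLE line
(real mysqldump output has more decoration around it, so the pattern may
need adjusting):

```shell
# A miniature stand-in for a full mysqldump file.
cat > dump.sql <<'EOF'
-- dump header
CREATE TABLE table1 (id int);
INSERT INTO table1 VALUES (1);
CREATE TABLE table2 (id int);
INSERT INTO table2 VALUES (2);
EOF

# Start a new output file at each CREATE TABLE line; the inserts that
# follow stay in the same file, so each tableN.sql restores one table.
awk '/^CREATE TABLE/ { n++ } n { print > ("table" n ".sql") }' dump.sql
```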



- Original Message - 
From: kent stand [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Wednesday, March 08, 2006 1:17 PM
Subject: Restoring a database from a complete MySQL dump


I have a MySQL 4.1.x installation. It was upgraded to MySQL 5.0.x and then
downgraded again, back to 4.1.14. I took a complete dump of all
databases/tables into a .sql file, and now I would like to restore just
specific databases or tables from it.

Is this possible without restoring everything or without having to restore
everything somewhere else, and then make new back-ups of the specific
tables/databases?


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Movable Type + OSXServer + MySQL issues.

2006-03-04 Thread George Law
Miles,

David is correct - DBI isn't installed in OSX by default.

check out http://www.brandonhutchinson.com/installing_perl_modules.html
for a brief intro on installing from CPAN.

Make sure you do this as root or through sudo

--
George

  - Original Message - 
  From: m i l e s 
  To: MySQL 
  Sent: Friday, March 03, 2006 4:47 PM
  Subject: Movable Type + OSXServer + MySQL issues. 


  Hi,

  Im having a rather nasty time installing MT on my G5XServe.

  IM trying to use MySQL but Im getting a rather NASTY error

  Stock Install of MySQL that came with the G5.

  ++

  Got an error: Unsupported driver MT::ObjectDriver::DBI::mysql: Can't 
  locate DBI.pm in @INC (@INC contains: /Volumes/webserver/
  ~shoreweddings.com/blog/extlib lib /System/Library/Perl/5.8.6/darwin-
  thread-multi-2level /System/Library/Perl/5.8.6 /Library/Perl/5.8.6/
  darwin-thread-multi-2level /Library/Perl/5.8.6 /Library/Perl /Network/
  Library/Perl/5.8.6/darwin-thread-multi-2level /Network/Library/Perl/
  5.8.6 /Network/Library/Perl /System/Library/Perl/Extras/5.8.6/darwin-
  thread-multi-2level /System/Library/Perl/Extras/5.8.6 /Library/Perl/
  5.8.1 .) at lib/MT/ObjectDriver/DBI.pm line 10.
  BEGIN failed--compilation aborted at lib/MT/ObjectDriver/DBI.pm line 10.
  Compilation failed in require at lib/MT/ObjectDriver/DBI/mysql.pm 
  line 10.
  BEGIN failed--compilation aborted at lib/MT/ObjectDriver/DBI/mysql.pm 
  line 10.
  Compilation failed in require at (eval 6) line 1.
  BEGIN failed--compilation aborted at (eval 6) line 1.

  ++

  Anyone seen this before ?  And how best to resolve it ?

  M i l e s.


  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




RE: apostrophe char problem

2006-03-02 Thread George Law
Halid,

I think this is a php/phpmyadmin problem and not really a mysql problem.

PHP has an option called magic quotes that may be causing this.  
 http://us2.php.net/manual/en/function.get-magic-quotes-gpc.php

I have run into this before - I think what happens is that magic quotes
will change a single quote ' to a double '' so it's mysql safe.  Then
when it reads it back in from the database and renders the HTML, it
also displays the doubles.

If you look in the database itself from the mysql command line client,
do the double quotes show?

--
George Law



 -Original Message-
 From: Halid Faith [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, March 02, 2006 11:51 AM
 To: mysql@lists.mysql.com
 Subject: apostrophe char problem
 
 Hello
 
 I use mysql 4.1.16 and php4.3.10 and phpmyadmin2.7 and 
 apache-2.0.53  on
 FreeBSD5.3
 
 I add any text as data with phpadmin on web. it works well. 
 But I have a problem.
 As example if I add a text to mysql table with phpadmin.
 My text contains as below;
 Halid 's car is expensive
 
 I will see above the sentence as Halid ''s car is expensive 
 on web (internet explorer) That is, I see that double ' 
 apostrophe character out
 
 What shall I do ?
 
 Thanks
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: dumping results of a select

2006-03-02 Thread George Law
mysqldump has a -w option:

 -w, --where=  dump only selected records; QUOTES mandatory!

--
George 

 -Original Message-
 From: 2wsxdr5 [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, March 02, 2006 12:30 PM
 To: 'MySQL general mailing list'
 Subject: dumping results of a select
 
 Is there a way with mysqldump to instead of dumping the 
 contents of a table dump the contents of a select so if you 
 import that sql back in you will get a new table that looks 
 like the select?
 
 --
 Chris W
 KE5GIX
 
 Gift Giving Made Easy
 Get the gifts you want 
 give the gifts they want
 One stop wish list for any gift,
 from anywhere, for any occasion!
 http://thewishzone.com
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 




RE: jdbc:mysql

2006-02-28 Thread George Law
Is your mysql server listening just on localhost (127.0.0.1)?

That is different than 192.xxx.xxx.xxx.



 

 -Original Message-
 From: Xiaobo Chen [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, February 28, 2006 11:27 AM
 To: Amaia Anabitarte
 Cc: mysql@lists.mysql.com
 Subject: Re: jdbc:mysql
 
 Not sure yet. Could it be that you're behind the router? And
 you might need to redirect the traffic at the modem to the IP
 of your PC, and your PC's IP should be set static so that
 next time when you reboot, it won't change.
 
 Xiaobo
 
  Hi,
 
  I have a problem connecting to MySQL. I could connect to 
 the database 
  with this sentence:
  jdbc:mysql://localhost:3306/db
 
  but not with this sentence, with the same IP that the localhost:
  jdbc:mysql://192.xxx.xxx.xxx:3306/db
 
  What's wrong?
 
  Thanks for all,
 
  Amaia
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 




Re: Query returns to many results

2006-02-23 Thread George Law

Schalk ,

You need to specify the joining column between your ablb and abm tables.

ie - in your where, add:  and ablb.id = abm.id

Without that join condition you get the cross product of the two tables,
which is why the row count balloons.

Once you get this so it returns expected results, you can run the query
prefaced with explain, and it will give you an idea of the way mysql is
running the query.  This has helped me determine some additional indexes
that greatly speed up my queries.

--
George


- Original Message - 
From: Schalk [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Thursday, February 23, 2006 6:59 AM
Subject: Query returns to many results



Greetings All,

Please have a look at the following query:

SELECT abm.mem_number, abm.first_name, abm.last_name, 
abm.area_represented, abm.age, abm.sex, abm.cup,
ablb.*, ablb.jp + ablb.rc + ablb.fsmgp + ablb.gmc + ablb.saly + ablb.nwgp 
+ ablb.ecgp + ablb.sams + ablb.wcc + ablb.kzngp + ablb.emc + ablb.lmgp + 
ablb.saff + ablb.gmgp + ablb.safy + ablb.mmw + ablb.lc + ablb.mmc + 
ablb.nwmc + ablb.ncc + ablb.samp + ablb.gsc + ablb.wcmgp + ablb.sapm + 
ablb.kznc + ablb.npc + ablb.smc + ablb.ecc + ablb.mgp + ablb.samo + 
ablb.cofc + ablb.cs + ablb.ncmgp + ablb.fsc + ablb.ggp + ablb.tmc + 
ablb.gc + ablb.yotm AS total_points FROM ab_leader_board ablb, ab_members 
abm

WHERE abm.sex = 'Female' AND abm.cup = 'kids'
ORDER BY total_points DESC

Now this query is run over two tables and the ab_members table contains 
around 302 rows. Around 1/3 of these will be where cup=kids. However, when 
this query is run it returns 20,700 results :0 Any idea why this is? Also, 
any help or pointers as to how I can optimize this query will be much 
appreciated. Thank you!


--
Kind Regards
Schalk Neethling
Web Developer.Designer.Programmer.President
Volume4.Business.Solution.Developers



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








RE: mysqlimport, \r\n and \n

2006-02-21 Thread George Law
Daniel,

What about just running your import file through dos2unix before piping
it to mysql?

That way, it would leave \n alone (since that is unix format) and
replace any occurrences of \r\n with plain old \n.
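If dos2unix isn't installed on the box, tr can do the same
normalization; a small sketch on generated sample data:

```shell
# Sample import file with DOS (\r\n) line endings.
printf 'row1\r\nrow2\r\n' > import.dat

# Delete every carriage return: \r\n collapses to plain \n, and files
# that were already \n-terminated pass through unchanged.
tr -d '\r' < import.dat > import.unix.dat
```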

--
George
 

 -Original Message-
 From: Daniel Kasak [mailto:[EMAIL PROTECTED] 
 Sent: Monday, February 20, 2006 11:21 PM
 To: mysql@lists.mysql.com
 Subject: mysqlimport, \r\n and \n
 
 I've got some import scripts that are giving me trouble.
 
 Some MOFOs keep changing the format of the data they give us, 
 and sometimes I lose half the records. When this happens, I 
 change the line terminator from \r\n to \n ... or from \n to \r\n.
 
 It's starting to get to me. Is there any way to ease my pain 
 here, short of importing with one format, counting the number 
 of records, truncating the table, importing with the other 
 format, counting the number of records, and then selecting 
 the format with the most number of records?
 
 --
 Daniel Kasak
 IT Developer
 NUS Consulting Group
 Level 5, 77 Pacific Highway
 North Sydney, NSW, Australia 2060
 T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
 email: [EMAIL PROTECTED]
 website: http://www.nusconsulting.com.au
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 




Re: MySQL and OpenOffice - JDBC

2006-02-19 Thread George Law

Michael,

can you connect using the command line client?  sounds to me like it may be 
the old_password problem.


http://dev.mysql.com/doc/refman/5.0/en/old-client.html

--
George

- Original Message - 
From: Michael Satterwhite [EMAIL PROTECTED]

To: MySQL List mysql@lists.mysql.com
Sent: Sunday, February 19, 2006 3:52 PM
Subject: MySQL and OpenOffice - JDBC



-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I don't really think this is the right place to ask this question, so
please forgive this post. I've tried asking the question over on the
OpenOffice list, but can't get an answer. I'll try to give all the
information to ease things. One of the guru's here has to have done this
already:

I'm running OOo on Ubuntu Gnu/Linux and trying to setup a data source -
so far with no success.

Following the instructions in OOo2's  Help (which seem to be outdated as 
of

2.0), I've downloaded mm.mysql-2.0.4-bin.jar. I went to
Tools->Options->OpenOffice.Org->Java and
(1) Selected the Free Software Foundation JRE
(2) Went to Class Path and added archive mm.mysql-2.0.4-bin.jar
to the list

After restarting OOo, I then went to the File->Wizards->Address Data
Source. The only option available is other external data source (Is this
to be expected?). I select that and at Next press Settings and select
MySQL (JDBC). At next, I enter the database as
mysql://localhost:3306/Magicians.

I check password required, enter the user and test connection. After
entering the password, I get the error driver could not be loaded.

OK, I don't think I left anything out. Would someone be so kind as to
help me get past this? I'd appreciate it greatly.

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.1 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFD+Np1jeziQOokQnARAq8/AKCqNByrqBdIvXM0XSJHRSD3su0vfwCffI2A
nr4xi+9GDU8/+Uhjm65e/8s=
=dIta
-END PGP SIGNATURE-

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








RE: Problems getting MySqlDump to use sub-queries

2006-02-15 Thread George Law
Henry,

Maybe the shell is doing something funky with the parens?

maybe try escaping them with the \ ??


Just a wild guess :)

 

 -Original Message-
 From: Henry Dall [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, February 15, 2006 12:06 PM
 To: mysql@lists.mysql.com
 Subject: Problems getting MySqlDump to use sub-queries
 
 I am having a problem using MySqlDump.  Context: I am having 
 to export some very specific records from our database
 
  
 
 We have a table called BIN which has, amongst others, an ID 
 column and an ACCOUNT_ID column.  When I use this
 
  
 
 mysqldump -u root -p --complete-insert 
 --where=account_id='19444' -t otm bin  
 /otm/reports/datadump/mediaowner_bin.txt
 
  
 
 it does properly create Insert statements matching the 
 results that the where clause should return.
 
  
 
 However, we have another table called BIN_DATA which has ID 
 and BIN_ID columns (the BIN_ID being a link to the BIN table 
 mentioned above).  If I fire off the following query from 
 within MySql:
 
  
 
 select * from bin_data where bin_id in (select id 
 from bin where account_id=19444);
 
  
 
 it works great, returning the rows I'd expect.  However, 
 where I am getting stuck is getting MySqlDump to handle this 
 obviously more complex query statement, it having a 
 sub-query.  I have tried lots of variations on the following:
 
  
 
 mysqldump -u root -p --complete-insert 
 --where=bin_id in (select id from bin where 
 account_id=19444) -t otm bin_data  
 /otm/reports/datadump/mediaowner_bin_data.txt
 
  
 
 Am I not wrapping the where portion of the command 
 correctly (though I have tried every possible wrapping with 
 single-quotes that I could come up with) OR are sub-queries 
 simply not supported OR is there something else that I should know.
 
  
 
 Any help you can give me would be way appreciated.
 
  
 
 Henry Dall
 
 [EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Migration

2006-02-11 Thread George Law

Looks like there is only a partial query:

INSERT INTO `jos_banner` VALUES (1,1,'banner','OSM
1',0,46,0,'osmbanner1.png
','http://www.opensourcematters.org','2004-07-07$

There is no closing ' on that last string, so when the parser reaches
the next statement,

/*!4 ALTER TABLE `jos_banner`

it treats everything up to the next quote as still part of that string,
and effectively runs:

INSERT INTO `jos_banner` VALUES (1,1,'banner','OSM
1',0,46,0,'osmbanner1.png
','http://www.opensourcematters.org','2004-07-07$/*!4 ALTER TABLE
`jos_banner

which chokes on `jos_banner` as incorrect syntax.


- Original Message - 
From: James Dey [EMAIL PROTECTED]

To: Gabriel PREDA [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Saturday, February 11, 2006 1:52 AM
Subject: Re: Migration



Thanks!

The line is:


/*!4 ALTER TABLE `jos_banner` */;
LOCK TABLES `jos_banner` WRITE;
INSERT INTO `jos_banner` VALUES (1,1,'banner','OSM 
1',0,46,0,'osmbanner1.png

','http://www.opensourcematters.org','2004-07-07$
UNLOCK TABLES;
/*!4 ALTER TABLE `jos_banner` ENABLE KEYS */;

For the record I had the same issue with another line, and removed the
line:

ENGINE=MyISAM DEFAULT CHARSET=latin1;

This got rid of the error

All the best!

On 2/10/06, Gabriel PREDA [EMAIL PROTECTED] wrote:


Not the whole... maybe it's humongous... some lines above  line 20 and
some
beneath...

--
Gabriel PREDA
Senior Web Developer


On 2/10/06, sheeri kritzer [EMAIL PROTECTED] wrote:

 James,

 You're going to need to show us the contents of olddbname.sql.

 -Sheeri

 On 2/10/06, James Dey [EMAIL PROTECTED] wrote:
   I am migrating a database from mySQL 4.0 to 4.1.16 and have the
error
   ERROR 1064 at line 21: You have an error in your SQL syntax.  Check
 the
   manual that corresponds to your MySQL server version for the right
 syntax
  to
   use near 'ALTER TABLE `jos_banner` */' at line 20
  









--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Insert and Update together

2006-02-11 Thread George Law

Andre,

I tried this a couple weeks ago... I think you want the on duplicate 
option for the INSERT query.


depends what version you have... I think this was introduced in mysql 4.1

insert into ... values (...) on duplicate key update x=2,y=5;

unfortunately, the server I was testing this on was running 4.0.x so I 
couldn't use it.



http://dev.mysql.com/doc/refman/5.0/en/insert.html
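A minimal sketch of that syntax (MySQL 4.1.0 and up; the table and column names here are invented):

```sql
CREATE TABLE t (id INT PRIMARY KEY, x INT, y INT);

-- Inserts the row, or, if a row with id=1 already exists,
-- updates that row in place.  Note there is no SET keyword
-- after ON DUPLICATE KEY UPDATE.
INSERT INTO t (id, x, y) VALUES (1, 2, 5)
    ON DUPLICATE KEY UPDATE x = 2, y = 5;
```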



- Original Message - 
From: Andre Matos [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Saturday, February 11, 2006 10:54 AM
Subject: Insert and Update together



Hi List,

I would like to know if it is possible to combine Insert and Update in one
SQL instruction. This is what I want to do:

I have two tables: one where I will perform an update replacing m0 by
scr. If MySQL finds an m0, it will need to perform an insert into a log
table including the updated information.

I am trying to avoid writing a php4 program to do this. I am using MySQL 
4.1


Thanks for any help.

Andre

--
Andre Matos
[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Innodb table locked on delete

2006-02-10 Thread George Law
I have also seen the table locking on deletes and even on large selects
with INNODB.  I had converted to INNODB strictly for the row level
locking that is the biggest selling point of using INNODB.

So all the advantages of InnoDB that are advertised (i.e. row-level
locking) are misrepresented?

Right from the mysql website:


 14.2.1. InnoDB Overview

InnoDB provides MySQL with a transaction-safe (ACID compliant) storage
engine that has commit, rollback, and crash recovery capabilities.
InnoDB does locking on the row level and also provides an Oracle-style
consistent non-locking read in SELECT statements. These features
increase multi-user concurrency and performance. There is no need for
lock escalation in InnoDB because row-level locks fit in very little
space.



 -Original Message-
 From: sheeri kritzer [mailto:[EMAIL PROTECTED]
 Sent: Friday, February 10, 2006 9:52 AM
 To: Ady Wicaksono
 Cc: mysql@lists.mysql.com
 Subject: Re: Innodb table locked on delete

 Innodb is not row-level locking -- it's
 memory-page-level-locking.  A memory page is usually small,
 so it's almost like row-level locking, but not quite. 
 Perhaps you're running up against that?

 What does the query log say which queries were running?  How
 do you know it's the delete that's taking the lock, and not
 an update (the error message said an update or a delete)?

 -Sheeri

 On 2/10/06, Ady Wicaksono [EMAIL PROTECTED] wrote:
  So where's the row locking?
 
  I configure my database with Innodb + READ COMMITED, by this
  configuration by using autocommit=1, delete should be done
 on data commited to disk.
  Other thread should be able to insert/update.
 
  CMIIW
 
 
   Tables are locked on delete because, like an update, they are
   changing data.
  
   Imagine issuing a REPLACE statement after a DELETE statement.  If
   the DELETE locks the table, then the REPLACE happens AFTER the
   DELETE, as you wanted.  If the DELETE does not lock the
 table, then
   it's possible the REPLACE will happen before the DELETE, and the
   DELETE will delete the row you just replaced.
  
   -Sheeri
  
   On 2/9/06, Ady Wicaksono
 [EMAIL PROTECTED] wrote:
   Why table locked on delete?
  
  
  
 
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   
 http://lists.mysql.com/[EMAIL PROTECTED]

 



RE: Report Generator

2006-02-06 Thread George Law
Chuck,

Check this out - it's a real **simple** JSP  that just does a query and
dumps out the results to the 
web browser.
http://www.thebook-demo.com/java-server/jsp/Mysql/MysqlExample.jsp

It's been a while since I have worked with JSP - I wrote this example
several years ago (the web site
belongs to a previous employer).  At the time, I think I had to drop the
unpacked jar file for the mysql
driver into the Tomcat source directory. That is about all I remember
about Tomcat.  

--
George Law


 -Original Message-
 From: Chuck Craig [mailto:[EMAIL PROTECTED] 
 Sent: Saturday, February 04, 2006 10:23 AM
 To: MySQL-General
 Subject: Report Generator
 
 Hi, I'm new to the list and not sure whether my question 
 belongs here or not. I'm looking for an open source program, 
 that runs on JSP, to generate reports on data in MySQL 
 databases. I've found a few myself but they run on PHP. Any 
 thoughts or advice would be very appreciated.
 
 -Chuck Craig
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Moving from PowWeb to Rackspace

2006-02-06 Thread George Law
You might get a time out with phpMyAdmin

Pretty much all of the web hosts I have used keep PHP's default
90 second execution time for PHP pages.

I have a zip code database with 50,000 records and had to do this import
through an ssh session on the web server, using mysqldump on the old
server and cat *.sql | mysql ... on the new.

Assuming PowWeb doesn't do shell accounts (very few web hosts do these
days), search Google for telnet.cgi.
This is a CGI script that allows you to run commands on the webserver:
it gives you a textbox to enter your command in; click submit and it
runs the command.

With this, you should be able to run mysqldump to export the database,
pipe it to gzip, and create a file that you can download and upload to
your Rackspace server.

You should be able to do the md5 sum like James suggests using the same
telnet tool

md5sum export.sql.gz

Then after you upload, run the same command to make sure you didn't
lose any bits in the transfer.
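The whole round trip sketched above, with a stand-in file so the example is runnable anywhere (in practice the file would come from `mysqldump ... | gzip > export.sql.gz` on the old host):

```shell
# Stand-in for the real export file.
printf 'INSERT INTO t VALUES (1);\n' > export.sql
gzip -f export.sql                        # leaves export.sql.gz
md5sum export.sql.gz > export.sql.gz.md5  # record the checksum

# After transferring both files, verify on the receiving end;
# prints "export.sql.gz: OK" when the file arrived intact.
md5sum -c export.sql.gz.md5
```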

--
George

 -Original Message-
 From: JamesDR [mailto:[EMAIL PROTECTED] 
 Sent: Monday, February 06, 2006 11:54 AM
 To: mysql@lists.mysql.com
 Subject: Re: Moving from PowWeb to Rackspace
 
 Brian Dunning wrote:
  I have a bunch of databases - some are really big, 2GB - on 
 a number 
  of different accounts at PowWeb. I am buying a Rackspace server and 
  want to move everything over -- hopefully all in one night. 
 Can anyone 
  suggest the best way to do this? Would it be to use the 
 Export command 
  in phpMyAdmin?
  
  --MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
  
  
  
 
 I'm not familiar with phpMyAdmin, but I would dump everything 
 to sql files, using the extended insert option then 
 compressing the resulting sql files. Then create a hash (MD5) 
 and ftp the files over, checking the hash on the remote 
 system, uncompressing, and importing. I do something like 
 this with my backups (all automated, except for the checking 
 of the hash on the remote system, I just store the MD5 in an 
 ascii file.)
 
 --
 Thanks,
 James
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Support between MySQL and PHP

2006-01-30 Thread George Law
 
Phillip,

You probably didn't need to upgrade - just set the mysql password using
OLD_PASSWORD():

FTFM:  

As indicated earlier, a danger in this scenario is that it is possible
for accounts that have a short password hash to become inaccessible to
pre-4.1 clients. A change to such an account's password made via GRANT,
PASSWORD(), or SET PASSWORD results in the account being given a long
password hash. From that point on, no pre-4.1 client can authenticate to
that account until the client upgrades to 4.1.

To deal with this problem, you can change a password in a special way.
For example, normally you use SET PASSWORD as follows to change an
account password:

SET PASSWORD FOR 'some_user'@'some_host' = PASSWORD('mypass');

To change the password but create a short hash, use the OLD_PASSWORD()
function instead:

SET PASSWORD FOR 'some_user'@'some_host' = OLD_PASSWORD('mypass');

OLD_PASSWORD() is useful for situations in which you explicitly want to
generate a short hash. 

So, you could have just updated the password using :

SET PASSWORD FOR 'some_user'@'some_host' = OLD_PASSWORD('mypass');
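Equivalently, if every client on that box is pre-4.1, the server can be told to generate short hashes for all password changes via the old_passwords option (my.cnf fragment; a server restart is required):

```
[mysqld]
old_passwords=1
```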





 -Original Message-
 From: Philip Thompson [mailto:[EMAIL PROTECTED] 
 Sent: Monday, January 30, 2006 9:33 AM
 To: mysql@lists.mysql.com
 Subject: Re: Support between MySQL and PHP
 
 Thanks to everyone's help. Instead of downgrading MySQL, I 
 went ahead and upgraded to PHP5. After I did that, it seemed 
 to fix the connection issue I was having.
 
 ~Philip
 
 
 On Jan 30, 2006, at 4:41 AM, James Harvard wrote:
 
  MySQL versions >= 4.1 use a new, more secure authentication 
 protocol. 
  Probably the version of PHP you are using does not support it.
 
  Ah, here it is:
  http://dev.mysql.com/doc/refman/5.0/en/password-hashing.html
 
  HTH,
  James Harvard
 
  At 1:18 pm -0600 29/1/06, Philip R. Thompson wrote:
  I just installed MySQL5 and currently have PHP 4.3.11 
 installed. I am 
  wanting to connect to the mysql database on localhost, but 
 I get the 
  following results:
 
  --
  ?  $link = mysql_connect('localhost', 'user', 'password');  ? 
  Client does not support authentication protocol requested 
 by server; 
  consider upgrading MySQL client
  --
 
  Well, I have the lastest stable version of MySQL, so I did 
 some more 
  research on what the problem might be. When I checked my 
 information 
  for PHP using phpinfo(), it gave me the Client API version for 
  MySQL was 3.23.49. So, I'm thinking my version of PHP 
 cannot connect 
  to my version of MySQL. I then considered if I installed 
 the MySQLi 
  extension for PHP (supports versions of MySQL >= 4.1), 
 would that help 
  me? Or, if I just upgraded PHP to version 5, would that help me?
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: How can I isolate the integer part of a varchar field and use it in an ORDER BY?

2006-01-24 Thread George Law
Nicolas,  

Not sure when the replace function was introduced into mysql, but I
think it might do...

Use replace in your order by, replacing a-z with null chars, leaving
just your numeric digits, then order by


select * from test;
+-+---+
| uid | name  |
+-+---+
|   1 | george099 |
|   2 | george100 |
|   3 | george101 |
|   4 | george001 |
|   5 | 123bill   |
|   6 | 100bill   |
|  13 | george|
|  14 | darren|
|  15 | joe   |
|  16 | bill  |
+-+---+
10 rows in set (0.00 sec)

mysql select uid,name  from test order by replace(name,'[a-z]','');
+-+---+
| uid | name  |
+-+---+
|   6 | 100bill   |
|   5 | 123bill   |
|  16 | bill  |
|  14 | darren|
|  13 | george|
|   4 | george001 |
|   1 | george099 |
|   2 | george100 |
|   3 | george101 |
|  15 | joe   |
+-+---+

You might need to convert 'name' to uppercase to work with all your part
numbers.   

select uid,name  from test order by replace(upper(name),'[A-Z]','');



--
George


-Original Message-
From: Nicolas Verhaeghe [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 24, 2006 9:13 AM
To: 'Gleb Paharenko'; mysql@lists.mysql.com
Subject: RE: How can I isolate the integer part of a varchar field and
use it in an ORDER BY?

Oh yeah, I forgot to tell... I still use 3.23. Cannot upgrade for the
moment.

So no SP...

Is this possible at all with 3.23?

Thanks for your help!

-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED]
Sent: Tuesday, January 24, 2006 3:41 AM
To: mysql@lists.mysql.com
Subject: Re: How can I isolate the integer part of a varchar field and
use it in an ORDER BY?


Hello.

The brute force way is to create and UDF or store function which can
extract the numeric part from the string and ORDER BY the results of
this function. See:
  http://dev.mysql.com/doc/refman/5.0/en/adding-functions.html
  http://dev.mysql.com/doc/refman/5.0/en/stored-procedures.html
  http://dev.mysql.com/doc/refman/5.0/en/string-functions.html



Nicolas Verhaeghe wrote:
 A client of mine sells motorcycle parts and the motorcycle models are 
 for instance:
 
 YZ85
 YZ125
 WRF450
 YZF450
 
 Etc...
 
 If you know motorcycles, you know that the number is always the 
 displacement in cc.
 
 What I am looking to do here is make it so that the models are sorted 
 properly according to their displacement and not their alphanumerical 
 order.
 
 Currently they are sorted like this:
 WRF450
 YZ125
 YZF450
 YZ85
 
 I would like them sorted like this:
 YZ85
 YZ125
 WRF450
 YZF450
 
 The displacement is not always at the end, sometimes it's at the 
 beginning, for instance: 125SX
 250EXC
 
 (Yes, those are Yamahas and KTMs, for those who are into that type of 
 vehicles).
 
 How can I achieve this goal without creating a specific field in the 
 database?
 
 I tried converting the field to integer, which is something that I can

 do with MS SQL (converting a varchar field to integer extracts the 
 integer part, if any) but the CAST and CONVERT are not the same 
 functions and I have looked for 30 minutes for something that could 
 work with no success.
 
 Thanks a lot for your help!
 


-- 
For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: How can I isolate the integer part of a varchar field and use it in an ORDER BY?

2006-01-24 Thread George Law
Nicolas,

What about just doing your sorting within your code instead of with
mysql?

Depending on how many rows you have that you would need to sort, it
should not be too difficult to build a multidimensional array (add 2
columns, one with the alphabetical part of your key below, the other
with the numeric part), and sort based on these 2 columns.  
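A rough sketch of that idea in shell (model names from the thread; the sed pulls out the first run of digits as a numeric sort key, which is then discarded):

```shell
# Prefix each model with its displacement, sort numerically, drop the key.
sorted=$(printf '%s\n' WRF450 YZ125 YZF450 YZ85 \
  | sed 's/^\([^0-9]*\)\([0-9][0-9]*\)\(.*\)$/\2 \1\2\3/' \
  | LC_ALL=C sort -n \
  | cut -d' ' -f2-)
printf '%s\n' "$sorted"   # YZ85, YZ125, WRF450, YZF450 in that order
```

This also handles models where the digits lead (125SX, 250EXC), since the leading non-digit part is allowed to be empty.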

--
George



-Original Message-
From: Nicolas Verhaeghe [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 24, 2006 11:09 AM
To: mysql@lists.mysql.com
Subject: RE: How can I isolate the integer part of a varchar field and
use it in an ORDER BY?

Thanks, but unfortunately the replace function does not want to work on
a regexp in version 3.23...

I guess I'll have to create a displacement field and populate it from
the admin tool.

Thanks for your help. I will upgrade this server as soon as I can.

-Original Message-
From: George Law [mailto:[EMAIL PROTECTED]
Sent: Tuesday, January 24, 2006 8:14 AM
To: Nicolas Verhaeghe; mysql@lists.mysql.com
Subject: RE: How can I isolate the integer part of a varchar field and
use it in an ORDER BY?


Nicolas,  

Not sure when the replace function was introduced into mysql, but I
think it might do...

Use replace in your order by, replacing a-z with null chars, leaving
just your numeric digits, then order by


select * from test;
+-+---+
| uid | name  |
+-+---+
|   1 | george099 |
|   2 | george100 |
|   3 | george101 |
|   4 | george001 |
|   5 | 123bill   |
|   6 | 100bill   |
|  13 | george|
|  14 | darren|
|  15 | joe   |
|  16 | bill  |
+-+---+
10 rows in set (0.00 sec)

mysql select uid,name  from test order by replace(name,'[a-z]','');
+-+---+
| uid | name  |
+-+---+
|   6 | 100bill   |
|   5 | 123bill   |
|  16 | bill  |
|  14 | darren|
|  13 | george|
|   4 | george001 |
|   1 | george099 |
|   2 | george100 |
|   3 | george101 |
|  15 | joe   |
+-+---+

You might need to convert 'name' to uppercase to work with all your part
numbers.   

select uid,name  from test order by replace(upper(name),'[A-Z]','');



--
George


-Original Message-
From: Nicolas Verhaeghe [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 24, 2006 9:13 AM
To: 'Gleb Paharenko'; mysql@lists.mysql.com
Subject: RE: How can I isolate the integer part of a varchar field and
use it in an ORDER BY?

Oh yeah, I forgot to tell... I still use 3.23. Cannot upgrade for the
moment.

So no SP...

Is this possible at all with 3.23?

Thanks for your help!

-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED]
Sent: Tuesday, January 24, 2006 3:41 AM
To: mysql@lists.mysql.com
Subject: Re: How can I isolate the integer part of a varchar field and
use it in an ORDER BY?


Hello.

The brute force way is to create and UDF or store function which can
extract the numeric part from the string and ORDER BY the results of
this function. See:
  http://dev.mysql.com/doc/refman/5.0/en/adding-functions.html
  http://dev.mysql.com/doc/refman/5.0/en/stored-procedures.html
  http://dev.mysql.com/doc/refman/5.0/en/string-functions.html



Nicolas Verhaeghe wrote:
 A client of mine sells motorcycle parts and the motorcycle models are
 for instance:
 
 YZ85
 YZ125
 WRF450
 YZF450
 
 Etc...
 
 If you know motorcycles, you know that the number is always the
 displacement in cc.
 
 What I am looking to do here is make it so that the models are sorted
 properly according to their displacement and not their alphanumerical 
 order.
 
 Currently they are sorted like this:
 WRF450
 YZ125
 YZF450
 YZ85
 
 I would like them sorted like this:
 YZ85
 YZ125
 WRF450
 YZF450
 
 The displacement is not always at the end, sometimes it's at the
 beginning, for instance: 125SX
 250EXC
 
 (Yes, those are Yamahas and KTMs, for those who are into that type of
 vehicles).
 
 How can I achieve this goal without creating a specific field in the
 database?
 
 I tried converting the field to integer, which is something that I can

 do with MS SQL (converting a varchar field to integer extracts the
 integer part, if any) but the CAST and CONVERT are not the same 
 functions and I have looked for 30 minutes for something that could 
 work with no success.
 
 Thanks a lot for your help!
 


-- 
For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]

Longest substring match

2006-01-24 Thread George Law
Hi All,

I am working with some call processing.

I have one table with call detail records (cdrs) with a phone number in
it : 0111234567890

I have another table with rates in it based on the dial number with the
011 stripped off.  
There may be unique rates for 
1234
1235
1236
1237
1238
1239
1230

Right now, this processing is done by taking the first 8 digits of the
dialed number, doing a query
select * from rates where code=12345678
and seeing if there is a match, then taking 7 digits, seeing if there is
a match, etc.

There is a chance it could come down to 2 digits, so that could be 6
queries per cdr.

Right now, as the rates for one code are found, they are loaded into an
array in perl and the next time that code comes up, the array is first
checked before it does any more queries.


I was just wondering if anyone had a better solution to be able to find
the longest sub string match right in SQL.
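One possibility is to turn the prefix test around and let SQL pick the longest match in a single query. Untested sketch, assuming a rates table with a code column (the rate column name is a guess); note the pattern is built from the column, so this scans the rates table rather than using its index:

```sql
-- Longest rate code that is a prefix of the dialed number
-- (011 already stripped):
SELECT code, rate
  FROM rates
 WHERE '1234567890' LIKE CONCAT(code, '%')
 ORDER BY LENGTH(code) DESC
 LIMIT 1;
```

With a small rates table this is one query per cdr instead of up to six; for a large rates table the existing Perl-side cache may still win.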

Thanks!!

George Law



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Lost connection to MySQL server during query

2006-01-24 Thread George Law
David,

Are you using persistent connections?  Sounds like perhaps a persistent
connection is timing out.  

Maybe a quick workaround would be to call a check status routine (i.e.
do a show status), just to see if the connection is still there.  If
this fails, just do a mysql_connect... before continuing.

--
George


-Original Message-
From: David Godsey [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 24, 2006 4:09 PM
To: mysql@lists.mysql.com
Subject: Lost connection to MySQL server during query

I am getting this error when connecting to mysql with PHP:
Lost connection to MySQL server during query

This happens only when I use this procedure, but it doesn't necessarily
fail when this procedure is called.  The error will happen frequently,
however it is not consistent.  This is my first procedure I've written,
so I'm sure I've done something wrong here.  I assume the error message
means I'm hitting some kind of timeout?

Any ideas would be welcome.  Thanks.

create procedure getElement (IN n VARCHAR(255),IN ptime DOUBLE)
BEGIN
DECLARE mfid INT UNSIGNED;
DECLARE pid INT UNSIGNED;
DECLARE foffset INT UNSIGNED;
DECLARE flength INT UNSIGNED;
DECLARE vid INT UNSIGNED;
DECLARE rfid INT UNSIGNED;
DECLARE tpid INT UNSIGNED;
DECLARE fdata BLOB;
DECLARE fdata_tmp BLOB;
DECLARE fdata_bigint BIGINT UNSIGNED;
DECLARE fdata_signed INT;
DECLARE fdata_unsigned INT UNSIGNED;
DECLARE fdata_float DOUBLE;
DECLARE data_type VARCHAR(20);
DECLARE byte_order VARCHAR(20);
DECLARE conv_param VARCHAR(255);

SELECT major_frame_desc_id, parent_id, frame_offset,
frame_length, version_id, top_level_parent_id
FROM MajorFrameDescription
WHERE name=n
INTO mfid,pid,foffset,flength,vid,tpid;

SELECT attribute_value FROM MajorFrameAttributes
WHERE major_frame_desc_id=mfid AND
attribute_name="NormalizedType"
INTO data_type;

SELECT attribute_value FROM MajorFrameAttributes
WHERE major_frame_desc_id=mfid AND
attribute_name="ConvParams"
INTO conv_param;

SELECT attribute_value FROM MajorFrameAttributes
WHERE major_frame_desc_id=mfid AND
attribute_name="ByteOrder"
INTO byte_order;

SELECT MAX(raw_major_frame_id)
FROM RawMajorFrames
WHERE major_frame_desc_id=tpid
INTO rfid;

IF rfid > 0 THEN

SELECT payload_time,
SUBSTR(BINARY(frame_data),
FLOOR(foffset/8)+1,
CEIL((flength + (foffset %8 ))/8))
FROM RawMajorFrames
WHERE raw_major_frame_id=rfid
INTO ptime,fdata;

call toBigInt(fdata,fdata_bigint);
IF (foffset %8) > 0 THEN
   SET @mask_off=foffset%8;
call
mask_data(fdata,@mask_off,fdata_bigint);
END IF;
IF (8-((flength+(foffset%8)) %8)) > 0 THEN
SELECT (fdata_bigint >>
(8-((flength+(foffset%8)) %8)))
INTO
fdata_bigint;
END IF;
CASE data_type
WHEN "Float"
THEN
call
toFloat(fdata_bigint,fdata_float);
IF(!ISNULL(conv_param))
THEN
call
polyConv(fdata_float,conv_param,fdata_float);
END IF;
SET
@fdata_converted=fdata_float;

WHEN "Double"
THEN
call
toFloat(fdata_bigint,fdata_float);
IF(!ISNULL(conv_param))
THEN
call
polyConv(fdata_float,conv_param,fdata_float);
END IF;
SET
@fdata_converted=fdata_float;

WHEN "Signed"
THEN
call
toSigned(fdata_bigint,fdata_signed);
SET
@fdata_converted=fdata_signed;
WHEN "Unsigned"
THEN
  

Database backups

2006-01-23 Thread George Law
Just a quick question regarding mysql backups.

I know MyISAM tables are portable from machine to machine, but I know
there are some differences with InnoDB tables. 

I am running mysql 5.0.18 on suse linux 10.  I have innodb set up so it
stores each table in its own
.ibd file.  

I've read that innodb tables are not portable from server to server, my
question is if I grab the whole
mysql/data directory, can it be restored back on the same computer in
the event of a crash.  Do I need
to enable binlog to do this?


I plan on giving myself about a 1 hour maintenance window where all my
import scripts skip importing and
then just copying the entire mysql/data directory to a back up server
where I will tar/gzip the data and push
it out to a back up directory so it will get dumped to tape.
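A sketch of that copy step (paths invented; note that with innodb_file_per_table the shared ibdata files and ib_logfile* still hold data the .ibd files depend on, so they must travel together, ideally with mysqld stopped or writes quiesced for the window):

```shell
# Stand-in data directory so the example runs anywhere; in practice
# this would be the real datadir (e.g. mysql/data).
DATADIR=$(mktemp -d)
touch "$DATADIR/ibdata1" "$DATADIR/ib_logfile0" "$DATADIR/comp_cdr.ibd"

BACKUP="$DATADIR.tar.gz"
tar czf "$BACKUP" -C "$DATADIR" .   # archive the whole directory
tar tzf "$BACKUP"                   # list what was captured
```

A directory snapshot taken this way is restorable on the same machine and MySQL version; the binlog is not required for that, though enabling it lets you roll forward past the snapshot point.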

--
Geo


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Suse Version 10 Documentation for MYSQL.

2006-01-15 Thread George Law

Andrew,

did you install mysql when you installed suse?

you might have to fire up YaST and install the rpms.

this is a little old... but should help: 
http://www.novell.com/coolsolutions/feature/595.html
this is for 9.1, but the procedure should have remained pretty much the same 
with suse 10.



--
George

- Original Message - 
From: Andrew Burrows [EMAIL PROTECTED]

To: 'MYSQL General List' mysql@lists.mysql.com
Sent: Sunday, January 15, 2006 9:05 PM
Subject: Suse Version 10 Documentation for MYSQL.



Hi MYSQL users.



I have just install SuSE Version 10 and need to start working with MYSQL
could someone help me out regarding

Documentation . Don't seem to be able to find any in the help files.



Andrew






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



innodb select

2006-01-13 Thread George Law

Hi All,

I just had a question on selects on a innodb table.

Looking on Google, I find that there are different types of locking
reads: SELECT ... LOCK IN SHARE MODE and SELECT ... FOR UPDATE.

I am administering a database with a fairly large innodb table.  I am 
running into
problems with one of my users insisting on using selects on a datestamp 
field that is not indexed.  Moreover, she is doing selects/updates like :


select ... from ... where left(start_time,10) = '2006-01-10'
update xxx set ... where left(start_time,10) = '2006-01-10'
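(As an aside: wrapping the column in LEFT() means no index on start_time could ever be used, even if one were added. A sketch of the same day-filter as a range test, which is index-friendly, with names as in the post:)

```sql
SELECT ... FROM xxx
 WHERE start_time >= '2006-01-10 00:00:00'
   AND start_time <  '2006-01-11 00:00:00';
```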

I also cannot get through to her to use BEGIN/COMMIT on her updates.

While her perl scripts are running, I am finding that I have imports running 
in the background to load data infile into the same table she is reading 
from/updating.  These updates happen 6 times an hour.  Probably 1 in 6 
import fails, presumably because one of her queries has the table locked.


Other times I have seen this happen when she is doing a bulk select and 
importing into another table.


What is the default mode for SELECT statements with INNODB table.  I had 
assumed that it would always allow for inserts to happen elsewhere in the 
table (ie - SHARED mode).  That was my big reason for using INNODB for this 
table.  It just doesn't seem right that the whole table seems to get locked 
on the select.


Does this have anything to do with the variable:

| tx_isolation| REPEATABLE-READ|


Thanks!

George Law





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Question regarding running 'mysql' from a bash script

2006-01-12 Thread George Law
Mark,  

 

Sql is an alias to mysql -u.  

What about something like :

 

declare -a TEAMS=(`echo "query" | sql | sed 's/^/"/g' | sed 's/$/",/g' | sed 's/",$/"/'`)

 

since the query returns the results 1 per line, the first sed prefixes
each line with a quote

second sed replaces the newline with quote comma, turning it into 1 line
string.  Last sed drops off the last quote

 

I gave this command and bash didn't complain :-) so I assume it worked.
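An alternative sketch that keeps multi-word team names intact without any quote-juggling: read each result row into its own array element. The here-document stands in for the real `mysql -B --disable-column-names` output.

```shell
TEAMS=()
while IFS= read -r team; do
    TEAMS+=("$team")          # one array element per result row
done <<'EOF'
Red Sox
Chicago Colleens
Athletics
Kenosha Comets
EOF
echo "${#TEAMS[@]} teams, second is: ${TEAMS[1]}"
```

Because the loop consumes whole lines, embedded spaces survive, and no quoting has to be injected into the query output at all.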

 

 

 

-Original Message-
From: Mark Phillips [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 12, 2006 12:44 PM
To: MYSQL List
Subject: Question regarding running 'mysql' from a bash script

 

There is a column in my table with string values, but the strings have
spaces 

in them. I want to read the unique column values into an array in my
bash 

script, so I can use these values in a separate query on this table. How
do I 

get the values into an array?

 

For example, the column teams in my table

 

| team |.

Red Sox

Chicago Colleens

Athletics

Kenosha Comets

Red Sox

 

and I want to create the equivalent statement, but using what I find in
the 

table

 

declare -a TEAMS=("Red Sox" "Chicago Colleens" "Athletics" "Kenosha
Comets")

 

I tried the following:

declare -a TEAMS=(`mysql -u $USER -h $HOST --password=$PASSWORD -D


$DATABASE -B --disable-column-names --exec "select distinct team from 

registered order by team"`)

 

but I get the following array (using the data above)

 

("Red" "Sox" "Chicago" "Colleens" "Athletics" "Kenosha" "Comets")

 

How do I either (1) add quotes around each row entry returned so I get
the 

right array, or (2) fill the array in a different way?

 

Thanks!

 

-- 

Mark Phillips

Phillips Marketing, Inc

[EMAIL PROTECTED]

602 524-0376

480 945-9197 fax

 

-- 

MySQL General Mailing List

For list archives: http://lists.mysql.com/mysql

To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

 



RE: mysql 5 - disk bound - fixed

2006-01-11 Thread George Law
Hi All,  

 

Just another follow up on my emails.

 

What this came down to was sheer number of records and indices in my
tables causing it to take several minutes to insert 10-20K records via
load data ...

 

table1 has probably 15 million rows, 60 fields per row, 1 index

table2 has 7 million rows, 33 fields per row, 5 indices

This is approx 2 months worth of data

 

 

2006-01-11 06:37:11 : begin import into table1
2006-01-11 06:43:14: end import into table1 records (17315) deleted :
(0) skipped  (0) warnings:(0)



2006-01-11 06:43:42 : begin import into table2
2006-01-11 06:48:19 : import into table2 records: (16197) deleted:(0)
skipped: (0) warnings:(0)

 

 

 

This morning, I created 2 new tables, renamed the original tables and
rotated the new tables in.

 

2006-01-11 08:46:16 : begin import into table1
2006-01-11 08:46:17: end import into table1 records (18853) deleted :
(0) skipped (0) warnings:(0)



2006-01-11 08:46:52 : begin import into table2
2006-01-11 08:46:55 : import into table2 records: (17615) deleted:(0)
skipped: (0) warnings:(0)

 

This is mysql 5.0.18, running on a HP Proliant, dual P4 3 GHZ, 4 GB RAM,
running Suse 9.3, so I do not believe it is hardware related 

 

 

I have to work on an automatic way to rotate these tables every week.
Is there an easy way with SQL to create a new table based on the schema
of an existing table?
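For the schema-clone question, CREATE TABLE ... LIKE does exactly this from MySQL 4.1 on. Sketch, using the table name from the post (the dated suffix is invented):

```sql
-- Clone the schema (columns and indexes, no data), then swap the
-- empty copy in; RENAME TABLE swaps both names in one atomic step.
CREATE TABLE table1_new LIKE table1;
RENAME TABLE table1 TO table1_20060111, table1_new TO table1;
```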

--

Thanks!

 

George Law

 

 



mysql 5 - disk bound ?

2006-01-10 Thread George Law
Hi All,

I have a question for you all.

Working with 2 innodb tables.  One is a table of cdrs (call detail
records) with 33 columns and 7 million + rows.  Second table is a clone
of the first, meant as a work table.

From the mysql command line client, I gave the query :
insert into table2 select * from table1 where time_epoch between xx and
yy; (I did not do a BEGIN/COMMIT)

Time_epoch is a numeric field which is indexed.

This took 13+ minutes for 1,130,000 records.

It seems to me that 13 minutes is a little high.  
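A hedged guess at the stall: with InnoDB, a large INSERT ... SELECT takes shared locks on the rows it reads from table1, which would block concurrent inserts into that range. One common workaround is to copy in smaller slices so each statement (and the locks it holds) stays small. The epoch bounds below are made up for illustration:

```sql
-- Copy one slice at a time instead of 1.1M rows in a single
-- statement; loop over successive ranges in a script:
INSERT INTO table2
SELECT * FROM table1
WHERE time_epoch >= 1136500000
  AND time_epoch <  1136503600;
```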

During this time, a load data infile command into table1 (using
begin/commit) was stalled out waiting for the insert into
table2...

The load data infile command - 1110 
This is a HP 1U server, dual P4 3 Gig, 4 GB of ram, optimized for disk
IO. 
Hyperthreading is enabled. This is running suse 9.3, 2.6.11.4-20a-smp
kernel.


Raw .ibd files:
11G    comp_cdr.ibd        (table1 above)
1.1G   temp_comp_cdr.ibd   (table2 above)

Mysql is running with almost 2 GB ram, no swapping seems to be going
on...

Tasks:  65 total,   1 running,  64 sleeping,   0 stopped,   0 zombie
Cpu(s):  8.7% us,  4.6% sy,  3.1% ni, 75.7% id,  7.9% wa,  0.0% hi,
0.0% si
Mem:   3960896k total,  3845864k used,   115032k free,11260k buffers
Swap:  4200956k total, 2764k used,  4198192k free,  1830060k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND

 4515 mysql 16   0 2258m 1.9g 4296 S 99.9 49.1   1174:55 mysqld   
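The 7.9% 'wa' in the snapshot above hints at some I/O wait. A quick way to confirm the box really is disk-bound while the INSERT ... SELECT runs (iostat needs the sysstat package on SuSE):

```shell
vmstat 5          # 'wa' column: CPU idle, waiting on I/O
iostat -x 5       # per-device %util and await
```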


my.cnf:

[mysqld]
local-infile=1
bulk_insert_buffer_size = 512M
big-tables
port= 3306
socket  = /tmp/mysql.sock
max_connections = 200
skip-locking
key_buffer = 512M
max_allowed_packet = 16M
table_cache = 2048
sort_buffer_size = 8M
join_buffer_size = 8M
read_buffer_size = 2M
myisam_sort_buffer_size = 128M
thread_cache = 32
query_cache_size = 96M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8  # 2 cpu x 2 (hyperthreading) x 2


tmp_table_size = 256M

# Replication Master Server (default)
# binary logging is required for replication
#log-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id   = 1
# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /usr/local/mysql/data
innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data
innodb_log_arch_dir = /usr/local/mysql/data
## You can set .._buffer_pool_size up to 50 - 80 %
## of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 1024M
innodb_additional_mem_pool_size = 256M
## Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 150
#skip-innodb
innodb_thread_concurrency = 8
innodb_file_per_table


any help is appreciated :)

TIA

George





RE: mysql 5 - disk bound ?

2006-01-10 Thread George Law
Replying to my own message.

Both of these tables have several indexes. 

As mentioned before, my myisam_sort_buffer_size is fairly large:
 myisam_sort_buffer_size | 134217728  

alter table xxx disable keys
(But this table is INNODB)


I tried alter table xx disable keys and then enable keys.

On enable keys I get :
|  3 | root | localhost | wholesale | Query   |  810 | Repair by sorting | 
ALTER TABLE incomp_cdr ENABLE KEYS |

810 seconds. As I mentioned before, much slower :(
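Worth noting: ALTER TABLE ... DISABLE KEYS only affects non-unique indexes on MyISAM tables; InnoDB ignores it, and the "Repair by sorting" above is the MyISAM-style index rebuild. The rough InnoDB equivalent is to drop and re-add the secondary indexes around the bulk load. A sketch with a hypothetical index name:

```sql
-- Drop the secondary index before the load:
ALTER TABLE incomp_cdr DROP INDEX idx_time_epoch;

-- ... run the bulk LOAD DATA here ...

-- Rebuild it afterwards in one sorted pass:
ALTER TABLE incomp_cdr ADD INDEX idx_time_epoch (time_epoch);
```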

Any feedback on MySQL support contracts? I might be heading that way.










allocate space for innodb innodb_file_per_table

2006-01-08 Thread George Law

Hi All,

I am running mysql 5.0.18 with an InnoDB table of 9 GB (plus several others) 
on suse linux 9.3 with 4 GB ram.


When doing a show table status command, this table shows Data_free: 0.

I assume this is because of the file-per-table setting, where the 
tables fall outside of the main ibdata files.


The ibdata files are all preallocated when mysql starts up; is there any way 
to preallocate the file-per-table files as well?


Is there a specific innodb list?

Thanks to Gleb's answer and the hint on the bulk load setting in my.cnf - 
that seems to have helped somewhat.
I notice the load data command has a CONCURRENT option - but as 
far as I can tell, that only applies to MyISAM tables?  You also suggested 
disabling the keys.  I think I tried that, but with a table with 7 mil rows, 
that takes longer than the load data :-\


TIA.

George Law


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: General Questions regarding mysql and php

2006-01-08 Thread George Law

Andrew,

I used to be a big redhat fan - but if you are looking to totally bring 
everything up to date, I would suggest openSUSE.


Having used redhat for years, Suse's not a big step - everything is still 
RPM based; however, Suse's admin tool, YaST, kicks butt :)


Suse 10 comes with Apache 2, PHP5, and one of the latest 4.X versions of 
mysql.


There are generic RPMs for mysql 5 on mysql's website - but I haven't used 
them - I am running 5.0.18, but using the binary distro because I needed to 
run both 4.x and 5.x at the same time to migrate some data.


--
George Law




- Original Message - 
From: Andrew Burrows [EMAIL PROTECTED]

To: MYSQL General List mysql@lists.mysql.com
Sent: Sunday, January 08, 2006 8:01 PM
Subject: General Questions regarding mysql and php




Hi MYSQL users,



Just started playing with mysql, apache, php and other tricky stuff, and have
a few questions to get me going after many years.



Was wondering what the best GUI-based administration tool used today is. I
last used phpMyAdmin; is this still used, or are there better applications
available?



Looking for some basic documentation on MYSQL - could someone recommend
something online, or maybe a book?



I have an old system that will probably need upgrading.



Apache 1.3

Mysql 3.22.32

Tomcat 3.1.1

Red Hat 8



Would you recommend upgrading this system or starting from scratch?



Thanks in advance.



Andrew










mysql 5.0 upgrade from 4.1.14/innodb/signal 11

2006-01-06 Thread George Law
terribly wrong...
Cannot determine thread, fp=0xbf43ae84, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x81536e8
0xe420
(nil)
0x8169819
0x816e176
0x8165c3e
0x8165769
0x8164c71
0x40031aa7
0x40166c2e
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and
follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do

resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x9b17fb8 = grant all on wholesale.pending_rates_bak to
'rates'@'10.%' identified by ''
thd->thread_id=2
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
 
Number of processes running now: 0
060106 10:05:05  mysqld restarted
060106 10:05:05  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
060106 10:05:05  InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 4 486610513.
InnoDB: Doing recovery: scanned up to log sequence number 4 489673016
060106 10:05:06  InnoDB: Starting an apply batch of log records to the
database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88
89 90 91 92 93 94 95 96 97 98 99 
InnoDB: Apply batch completed
InnoDB: Last MySQL binlog file position 0 0, file name 
060106 10:05:08  InnoDB: Started; log sequence number 4 489673016
060106 10:05:08 [Note] /usr/local/mysql/bin/mysqld: ready for
connections.
Version: '5.0.18-standard'  socket: '/tmp/mysql.sock'  port: 3306  MySQL
Community Edition - Standard (GPL)
 
 
 

George Law
VoIP Network Developer
864-678-3161
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]

 

 


mysql 5.0 upgrade from 4.1.14/innodb/signal 11 -- PT2

2006-01-06 Thread George Law
Hi All,
 
forgot something in my other post:
 
machine is running suse 9.3,  2.6.11.4-20a-smp kernel.
 
 
Ok, I think I know the answer here... but just to make sure :)
4.1.14 ran with about 10 mysqld processes.
skip-innodb was initially turned on in the my.cnf before the attempted
migration to innodb.
 
 
With innodb enabled on 5.0.18, I am only seeing a single process in a
normal ps.
 
threads, right?
 
ps -elfm shows a little more:
4 - root 26602 1  0   -   - -   668 -  09:40 pts/1
00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe
--datadir=/usr/local/mysql/data --pid-file=/usr/local/mysql/dat
4 S root - -  0  79   0 - - wait   09:40 -
00:00:00 -
4 - root 26822 26386  0   -   - -   491 -  09:58 pts/1
00:00:00 tail -f /usr/local/mysql/data/db.err
4 S root - -  0  76   0 - - -  09:58 -
00:00:00 -
0 - glaw 26833 26326  0   -   - -  1046 -  09:59 pts/3
00:00:00 -bash
0 S glaw - -  0  75   0 - - wait   09:59 -
00:00:00 -
0 - glaw 26854 26326  0   -   - -  1045 -  09:59 pts/4
00:00:00 -bash
0 S glaw - -  0  75   0 - - wait   09:59 -
00:00:00 -
4 - root 26873 26854  0   -   - -  1004 -  09:59 pts/4
00:00:00 su -
4 S root - -  0  79   0 - - wait   09:59 -
00:00:00 -
0 - root 26877 26873  0   -   - -   755 -  09:59 pts/4
00:00:00 -bash
0 S root - -  0  75   0 - - -  09:59 -
00:00:00 -
4 - mysql27009 26602  9   -   - - 512778 - 10:05 pts/1
00:05:15 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql
--datadir=/usr/local/mysql/data --user=mysql --pi
4 S mysql- -  0  76   0 - - -  10:05 -
00:00:02 -
1 S mysql- -  0  76   0 - - 322560 10:05 -
00:00:00 -
1 D mysql- -  0  75   0 - - sync_p 10:05 -
00:00:00 -
1 S mysql- -  0  76   0 - - 322560 10:05 -
00:00:03 -
1 S mysql- -  0  75   0 - - 322560 10:05 -
00:00:30 -
1 S mysql- -  0  76   0 - - -  10:05 -
00:00:03 -
1 S mysql- -  0  76   0 - - -  10:05 -
00:00:03 -
1 S mysql- -  1  76   0 - - 322559 10:05 -
00:00:49 -
1 S mysql- -  0  76   0 - - -  10:05 -
00:00:00 -
1 S mysql- -  1  77   0 - - 363528 10:05 -
00:00:59 -
1 S mysql- -  0  78   0 - - 393791 10:12 -
00:00:11 -
1 D mysql- -  3  77   0 - - sync_p 10:14 -
00:01:46 -
1 S mysql- -  0  75   0 - - 1460   10:17 -
00:00:16 -
1 S mysql- -  1  76   0 - - 322560 10:33 -
00:00:28 -
 
 
 

George Law
VoIP Network Developer
864-678-3161
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]

 

 


Sorry for my n00bie question - mysql 5.0.18 with load data infile

2006-01-06 Thread George Law
Hi All,
Just wanted to apologize for my earlier rambling emails.
I have been working on recovering from an innodb corruption for 2 days now
and was really in panic mode.
 
Everything seems to be running well with 5.0.18, although my server
load is up there.
I am running 5.0.18 on a Suse 9.3 box, P4 3 GHZ, 4 GB RAM
 
 
 16:38:57 up 86 days,  7:20,  4 users,  load average: 4.44, 4.41, 4.51
USER TTYLOGIN@   IDLE   JCPU   PCPU WHAT
root tty1  01Nov05 66days  0.23s  0.23s -bash
 
this has been pretty much sustained all day long.
 
 
I have a perl script that takes some text cdrs, reformats them into .csv
files, and then slams them into mysql using 'load data local'.
Every 10 minutes, I process the raw cdrs, separate the completes and
incompletes, generate 3 .csv files, and do 3 'load data local' commands.
Completed cdrs go into an innodb table; incompletes and raw cdrs go into
a myISAM table. 
 
 
The tables are fixed length - no varchars.  everything is forced to the
full char length in my perl, so my 'load data' generates no warnings on
truncated fields.
 
The reason I did it this way was the incomp cdrs and raw cdrs are all
read only.  The completed cdrs have fields that get updated at a later
time.
There seemed to be a lot of processes out there waiting on table locks
when this was a myisam table.
 
The performance of the 'load data infile' command seems like it is
taking way too long.
 
(dates are just from `date` in perl):
2006-01-06 16:35:21 : begin import into comp_cdr
2006-01-06 16:35:42 : end import into comp_cdr (1361) (0)(0)(0)
 
innodb table: 65 inserts per second.

2006-01-06 16:35:42 : begin import into incomp_cdr
2006-01-06 16:39:10 : import into incomp_cdr2 (15084) (0)(0)(0)
Processed 16445 CDRs in 429.60 seconds - 38.28 cdr/sec 1361 complete
calls, 15084 incomplete calls

the myisam table takes longer... 75 inserts a second.
 
 
My understanding is that 'load data' should be considerably faster than
doing one big loop in perl and doing inserts.  I haven't gone back to
perl for this, but that might be my next step.
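If it comes to that, the usual fast perl-side alternative is not one INSERT per row but the multi-row VALUES form, which cuts round trips and statement parsing. A sketch only; the column names and values below are hypothetical, not from the post:

```sql
-- Batch a few hundred rows per statement rather than one at a time:
INSERT INTO comp_cdr (time_epoch, orig_number, term_number)
VALUES (1136586921, '8645551001', '8645552001'),
       (1136586922, '8645551002', '8645552002'),
       (1136586925, '8645551003', '8645552003');
```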
 
Anyone have any suggestions?  I am not posting any table descriptions
here, but will do so if needed.
--
 
 

George Law
VoIP Network Developer
864-678-3161
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]