Re: error 29, file not found (errcode: 13)

2014-06-23 Thread Scott Helms
I generally drop them into /tmp for easy access and cleanup after the data
load, but you can put them any place that the mysql daemon process has
access to read.
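For what it's worth, the "(Errcode: 13)" in the message is the operating system's errno, not a MySQL code: 13 is EACCES, permission denied. It is mysqld's own user, not your shell user, that must be able to read the file and traverse every parent directory. A rough pre-flight check, sketched in Python (the "world-readable" heuristic is an assumption about a typical mysqld account; ACLs or AppArmor can still deny access):

```python
import errno
import os

# "Errcode: 13" in the MySQL message is the operating system's errno:
print(errno.errorcode[13])   # EACCES, i.e. permission denied

def mysqld_can_probably_read(path):
    """Rough check: mysqld usually runs as its own user, so the file must be
    world-readable and every parent directory world-searchable."""
    if not os.stat(path).st_mode & 0o004:          # world-readable?
        return False
    parent = os.path.dirname(os.path.abspath(path))
    while True:
        if not os.stat(parent).st_mode & 0o001:    # world-searchable?
            return False
        if parent == os.path.dirname(parent):      # reached filesystem root
            return True
        parent = os.path.dirname(parent)
```

Dropping the file into /tmp, as above, works precisely because /tmp passes both checks for every user on the system.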


Scott Helms
Vice President of Technology
ZCorum
(678) 507-5000

http://twitter.com/kscotthelms



On Mon, Jun 23, 2014 at 10:52 AM, thufir hawat.thu...@gmail.com wrote:

 Apparently this error is because MySQL can't read my home directory?  Fair
 enough, but I don't quite follow.  Where would be a good location for the
 CSV file, then?

thufir@dur:~$
thufir@dur:~$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 62
Server version: 5.5.37-0ubuntu0.14.04.1 (Ubuntu)
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights
reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input
statement.
mysql>
mysql> LOAD DATA INFILE '/home/thufir/make_year_model.csv' INTO
TABLE vehicles.vehicles FIELDS TERMINATED BY ',' LINES TERMINATED BY
'\n';
ERROR 29 (HY000): File '/home/thufir/make_year_model.csv' not found
(Errcode: 13)
mysql>
mysql> quit
Bye
thufir@dur:~$
thufir@dur:~$ cat /home/thufir/make_year_model.csv
make1,model1,2012,604,buy now
make2,model2,2013,780,need to sell
make3,model3,2001,780,cheap
thufir@dur:~$



 thanks,

 Thufir




Select w/ group by question

2010-07-14 Thread Scott Mullen
I'm having trouble formulating a query to gather the following data.  I can
do this via a script, but now it is more or less just bothering me if there
is an easy/efficient way to gather the following data from a single query.

Example Tables


Products
Type    Cost   Vendor_id
------------------------
apple   1      1
apple   3      2
apple   7      3
pear    2      1
pear    4      2
pear    2      3

Vendors
Vendor_id   Vendor_name
-----------------------
1           Walmart
2           Target
3           Kmart


I would like to obtain the least cost of each product type and its
associated vendor.

So...I would like to see a result similar to the following:

Type  CostVendor_id Vendor_name
apple 1   1  Walmart
pear   2   1  Walmart
(Note: both vendors 1 and 3 have the same cost which is the lowest.  I'm not
really concerned with which vendor is chosen in the result set here.)


If I do:  select a.type, min(a.cost), a.vendor_id, b.vendor_name from
products a join vendors b on a.vendor_id = b.vendor_id group by a.type,
a.vendor_id, b.vendor_name  then all rows are returned, because each
type/vendor_id/vendor_name combination is unique. If you remove the
vendor_id and vendor_name from the group by, you get a single row with the
lowest cost for each product, but the vendor_ids and vendor_names are
incorrect because you are not grouping by them.

Is there a way to do this from a single query?  I know I can concat things
together and embed a select in my where clause to get the result I want, but
this is horribly inefficient.
My real tables have somewhere around 30 million rows in them.

Thanks

Scott


Re: Select w/ group by question

2010-07-14 Thread Scott Mullen
Peter

Thanks for the link.  I've never run across this page before, but it has
tons of useful information as well as several answers on how to implement
what I was trying to do.

Ended up going with a solution similar to this example (from the page you
referenced):

SELECT
  item,
  SUBSTR( MIN( CONCAT( LPAD(price,6,0), supplier ) ), 7 )    AS MinSupplier,
  LEFT(   MIN( CONCAT( LPAD(price,6,0), supplier ) ), 6 )+0  AS MinPrice,
  SUBSTR( MAX( CONCAT( LPAD(price,6,0), supplier ) ), 7 )    AS MaxSupplier,
  LEFT(   MAX( CONCAT( LPAD(price,6,0), supplier ) ), 6 )+0  AS MaxPrice
FROM products
GROUP BY item;
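The trick above can be exercised end to end without a MySQL server. Here is a sketch in Python against SQLite (a stand-in: SQLite's printf() plays the role of MySQL's LPAD(), and the sample rows are invented to mirror the item/price/supplier schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (item TEXT, price INTEGER, supplier TEXT);
INSERT INTO products VALUES
  ('apple', 1, 'Walmart'), ('apple', 3, 'Target'), ('apple', 7, 'Kmart'),
  ('pear',  2, 'Walmart'), ('pear',  4, 'Target'), ('pear',  2, 'Kmart');
""")
# Zero-pad the price so string MIN() orders numerically, append the
# supplier, then split the two fields back out after aggregation.
rows = conn.execute("""
SELECT item,
       SUBSTR(MIN(printf('%06d', price) || supplier), 7)      AS MinSupplier,
       SUBSTR(MIN(printf('%06d', price) || supplier), 1, 6)+0 AS MinPrice
FROM products
GROUP BY item
ORDER BY item
""").fetchall()
print(rows)   # [('apple', 'Walmart', 1), ('pear', 'Kmart', 2)]
```

Note that for pear the tie between vendors at cost 2 is broken by the supplier string ('Kmart' sorts before 'Walmart'), which matches the original poster's "not concerned with which vendor is chosen" caveat.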

Pretty straightforward, and it does not require another join back to the same
table with 30+ million rows.

Thanks

Scott

On Wed, Jul 14, 2010 at 10:35 AM, Peter Brawley peter.braw...@earthlink.net
 wrote:

  Scott,


  I would like to obtain the least cost of each product type and its
 associated vendor.


 See Within-group aggregates at
 http://www.artfulsoftware.com/infotree/queries.php.

 PB

 -


 On 7/14/2010 9:25 AM, Scott Mullen wrote:









Error 1064

2010-06-08 Thread Brad Scott


I am receiving this error:

ERROR 1064: You have an error in your SQL syntax.  Check the manual that 
corresponds to your MySQL server version for the right syntax to use near '16 
varchar(255) NOT NULL default 'Not applicable', 17a varch

I have version 4.0.16 and have a dump file that I'm trying to recreate into a 
new database.

The following is the table that is trying to be created along with some of the 
fields:


CREATE TABLE assessment_child_psych (
  id int(11) NOT NULL auto_increment,
  form_version smallint(6) NOT NULL default '1',
  client_id int(11) NOT NULL default '0',
  assessment_date datetime NOT NULL default '0000-00-00 00:00:00',
  user_id int(11) NOT NULL default '0',
  status varchar(20) NOT NULL default 'Pending',
  6a varchar(255) NOT NULL default 'Not applicable',
  6b varchar(255) NOT NULL default 'Not applicable',
  6c varchar(255) NOT NULL default '',
  7a varchar(255) NOT NULL default 'Not applicable',
  7b varchar(255) NOT NULL default '',
  7c varchar(255) NOT NULL default '',
  8a varchar(255) NOT NULL default 'Not applicable',
  9a varchar(255) NOT NULL default 'Not applicable',
  9 varchar(255) NOT NULL default 'Not applicable',
  9b varchar(255) NOT NULL default 'Not applicable',
  10a varchar(255) NOT NULL default 'Not applicable',
  10 varchar(255) NOT NULL default 'Not applicable',
  10b varchar(255) NOT NULL default '',
  10_describe varchar(255) NOT NULL default '',
  11a varchar(255) NOT NULL default 'Not applicable',
  11b varchar(255) NOT NULL default '',
  12a varchar(255) NOT NULL default 'Not applicable',
  12b varchar(255) NOT NULL default '',
  13a varchar(255) NOT NULL default 'Not applicable',
  13b varchar(255) NOT NULL default '',
  13b_other varchar(255) NOT NULL default '',
  14a varchar(255) NOT NULL default 'Not applicable',
  14b varchar(255) NOT NULL default '',
  14c varchar(100) NOT NULL default 'Not applicable',
  14cexplain varchar(255) NOT NULL default '',
  15a varchar(255) NOT NULL default 'Not applicable',
  15b varchar(255) NOT NULL default '',
  16 varchar(255) NOT NULL default 'Not applicable',
  17a varchar(255) NOT NULL default 'Not applicable',
  17b varchar(255) NOT NULL default '',

Any line beginning with just a number (i.e. 9, 10, 16) causes a failure.  What am 
I missing?
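The likely cause: a MySQL identifier may contain digits and may even begin with one (which is why 6a and 17a parse fine), but an unquoted name consisting entirely of digits (9, 10, 16) is read as a numeric literal, not a column name. Backtick-quoting those names should fix the dump. A small sketch, using SQLite as a stand-in since it accepts the same backtick quoting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Unquoted, an all-digit name like 16 would parse as the number 16;
# backticks force it to be treated as an identifier.
conn.execute("CREATE TABLE assessment (`6a` TEXT, `9` TEXT, `16` TEXT)")
conn.execute("INSERT INTO assessment VALUES ('a', 'b', 'c')")
row = conn.execute("SELECT `9`, `16` FROM assessment").fetchone()
print(row)   # ('b', 'c')
```

In the dump file itself, that means editing the offending lines to read, e.g., `9` varchar(255) ... and `16` varchar(255) ... (and quoting those names in any queries that touch them).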


  

RE: Join Suddenly Failing

2010-01-22 Thread Scott Swaim
I ran into this when I upgraded from 4.0.xx to 5.0.xx. There was a change in
the join logic: the comma operator now has lower precedence than JOIN, so an
ON clause can only see the last comma-joined table (i.e. teamperson). I
determined that the FROM clause needs to be in parentheses, i.e.
FROM (team, person, teamperson) -- this restores the old behavior and allows
all of the fields in all of the tables to be used in the JOIN conditions.
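An alternative that sidesteps the precedence question entirely, and runs the same on old and new servers, is to rewrite the comma joins as explicit JOINs. A sketch in Python against SQLite (table and column names taken from the failing query; the single sample row per table is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE team          (TeamID INTEGER);
CREATE TABLE person        (PersonID INTEGER, PeopleTypeID INTEGER);
CREATE TABLE teamperson    (TeamID INTEGER, PersonID INTEGER);
CREATE TABLE personcontact (PersonID INTEGER, ContactID INTEGER);
CREATE TABLE contactinfo   (ContactID INTEGER, ContactTypeID INTEGER,
                            contactdata TEXT);
INSERT INTO team VALUES (22);
INSERT INTO person VALUES (1, 5);
INSERT INTO teamperson VALUES (22, 1);
INSERT INTO personcontact VALUES (1, 9);
INSERT INTO contactinfo VALUES (9, 2, 'coach@example.com');
""")
# Each ON clause only references tables already joined to its left,
# so precedence no longer matters.
rows = conn.execute("""
SELECT contactinfo.contactdata, contactinfo.ContactID
FROM team
JOIN teamperson ON team.TeamID = teamperson.TeamID
JOIN person     ON teamperson.PersonID = person.PersonID
LEFT JOIN personcontact ON person.PersonID = personcontact.PersonID
LEFT JOIN contactinfo   ON personcontact.ContactID = contactinfo.ContactID
WHERE team.TeamID = 22
  AND person.PeopleTypeID = 5
  AND contactinfo.ContactTypeID = 2
""").fetchall()
print(rows)   # [('coach@example.com', 9)]
```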

Scott Swaim
I.T. Director
Total Care / Joshua Family Medical Care
(817) 297-4455 
Website: www.totalcareclinic.com
 
 
NOTICE: The information contained in this e-mail is privileged and
confidential and is intended for the exclusive use of the recipient(s) named
above. If you are not the intended recipient or his or her agent, you are
hereby notified that you have received this document in error and that any
use, disclosure, dissemination, distribution, or copying of this message is
prohibited. If you have received this communication in error, please notify
the sender immediately by e-mail, and delete the original message
-Original Message-
From: Albert Padley [mailto:ap3des...@gmail.com] 
Sent: Friday, January 22, 2010 11:37 AM
To: mysql@lists.mysql.com
Subject: Join Suddenly Failing

I have a website that gets used once a year for a soccer tournament. It has
been working fine since 2006. No script changes since it was last used in
2009. All of a sudden the following script started throwing an error.

SELECT contactinfo.contactdata, contactinfo.ContactID FROM team, person,
teamperson
LEFT JOIN personcontact ON person.PersonID = personcontact.PersonID
LEFT JOIN contactinfo ON personcontact.ContactID = contactinfo.ContactID
WHERE team.teamID = 22
AND team.TeamID = teamperson.TeamID
AND teamperson.PersonID = person.PersonID
AND person.PeopleTypeID =5
AND contactinfo.ContactTypeID =2

Error: Unknown column 'person.PersonID' in 'on clause' (1054)

There are several of these type scripts and all are giving a similar error.

The server version is 5.0.87. I suspect the hosting company may have
upgraded to a new version of mysql.

Thanks.

Albert


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Happy Christmas

2009-12-25 Thread peter scott

Hope you are all having a Happy Christmas.

Regards

Peter




Re: How to Backup the Database using Script

2009-10-29 Thread Scott Haneda
Here is how I have been doing it for years; I have restored from these
dumps, and it has always worked for me.


http://dl.getdropbox.com/u/340087/Drops/10.29.09/mysql_backup-0196a0c2-013914.txt

A quick explanation, and some caveats:
First, set the u and p variables to a mysql user and password that has  
read access to all databases.  You only need read access, no more, no  
less.


I then set the permissions on this script itself to read/write/execute  
for root only.  There is a small window in which the data is being  
dumped in which the permissions of the dump files are a little more  
open than I wish them to be.  I need to look into how to get MySql to  
create the dump with a fixed set of owner, group, and permissions.   
There are also ways to do this without exposing a password in a file,  
but for my system, this was sufficient.


Here is how it works:
1) Variable for your MySql data dump storage location is set.
2) "show databases" is passed to MySql; in my case the binary is called
   mysql5, so you may need to adjust the name and add a path.
3) With a list of all your databases, the script can now iterate through
   them all, writing out the per-database .sql files.
4) Right after the .sql files are dumped, permissions are reduced.
5) The entire batch of databases is then compressed, with a datestamp
   as the name.
6) The archive file is then reduced to lesser permissions.
7) The original .sql files are removed, now that there is one archive file.

You can hook this to cron, or on OS X to launchd, and have it run on a  
schedule.  There are a lot of improvements that could be made.  For  
example, it may be possible to have the dump pipe to a compression on  
the fly, which would then only require a quick tar of the data when  
done.


Permissions certainly could be dealt with better.  I was thinking to  
`touch database.sql` with correct permissions, and then have MySql  
overwrite that, but still not sure the most elegant and sane way to do  
this.


The data could probably be appended to an archive set, saving the step  
of removing all the .sql files.  You could also locate files older  
than x days, and remove them, only storing a fixed number of backups.


I needed backups, and this works for me.  My main criteria was that I  
wanted all databases backed up, and did not want to have to think  
about modifying a script every time I added a new database.  This  
script is limited by the speed of your database, and the drives you  
are dumping to, and will of course affect the performance of the  
database as the dumps are happening.


I generally try to run this on a replication server, so I am not  
hitting the live database.  This way, you can have multiple MySql  
machines all replicated to one machine, and then the dumps happen on a  
non public facing machine.  Add in some RAID and other backup  
strategies, and you should be fine.


Feel free to modify the script or comment on improvements, I would  
love to make it a better script. The most important thing to me is  
that I have been able to use it to recover from.


Script is below:
#!/bin/sh
# Backup all MySql databases
# [10/29/2009 01:37:35 AM] sc...@hostwizard.com

# I set permissions on this file like so:
# -rwx------  1 root  wheel  864 Oct 20 23:33 mysql_backup

# set date and time
time=`date +%m-%d-%y_%I-%M%p`
u=username
p=password

# set path to final destination, needs trailing slash
location=/backups/mysql/

# set db_list to the list of databases
db_list=`echo "show databases" | /opt/local/bin/mysql5 -N -u$u -p$p`

for db in $db_list;
do
  echo "dumping $db to $location$db.sql"
  /opt/local/bin/mysqldump5 -u$u -p$p --opt $db > $location$db.sql
  chown root:wheel $location$db.sql
  chmod 0 $location$db.sql
done

echo "changing to directory $location"
cd $location
echo "Now in: `pwd`"

echo "begin tarballing"
tar cvfz $time.tgz *.sql

# set permissions on the final file
chown root:wheel $time.tgz
chmod 0 $time.tgz

echo "removing:"
ls -la $location*.sql
rm $location*.sql

echo "All your MySql Database are Belong to Us"
echo $location$time.tgz
--
Scott * If you contact me off list replace talklists@ with scott@ *

On Oct 28, 2009, at 10:33 PM, Ganeswar Mishra wrote:


Hi Everyone,
I am trying to back up a database regularly, without using the
Administrator tool in mysql.
Can anyone help with writing a script to back up the database?







Performance tuning a group by with percentage

2009-10-16 Thread Scott Haneda
Running MySql 5.0.85, I need to be as efficient as possible about a  
few queries. If I could get a little review, I would appreciate it.


I collect data in the millions, and need the top 50 grouped by one  
field, with a percentage of how much those top 50 occupy.


Here is what I have come up with. 1) I have a feeling I can be more
efficient, perhaps with a join. 2) How can I get the percentage expressed
to two decimal places, i.e. multiplied by 100 so that .07 becomes 7.00?
I get SQL errors if I write (percentage * 100).


SELECT user_agent_parsed, user_agent_original,
       COUNT( user_agent_parsed ) AS thecount,
       COUNT( * ) / ( SELECT COUNT( * ) FROM agents ) AS percentage
FROM agents
GROUP BY user_agent_parsed
ORDER BY thecount DESC LIMIT 50;

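On the precision question: a column alias such as percentage cannot be referenced elsewhere in the same SELECT list, which is the usual source of those SQL errors; instead, multiply inside the expression and round there. A sketch in Python against SQLite as a stand-in (sample user-agent rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE agents (user_agent_parsed TEXT);
INSERT INTO agents VALUES ('Firefox'), ('Firefox'), ('Firefox'), ('Safari');
""")
# Multiply by 100.0 (forcing float math) and ROUND(..., 2) inside the
# select expression rather than referencing the alias afterwards.
rows = conn.execute("""
SELECT user_agent_parsed,
       COUNT(*) AS thecount,
       ROUND(100.0 * COUNT(*) / (SELECT COUNT(*) FROM agents), 2) AS pct
FROM agents
GROUP BY user_agent_parsed
ORDER BY thecount DESC
LIMIT 50
""").fetchall()
print(rows)   # [('Firefox', 3, 75.0), ('Safari', 1, 25.0)]
```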
Second issue, once a day I need to archive the result of the above.  
Any suggestions on how to best to do that? I can schedule with cron,  
or in my case, launchd, unless someone has a better suggestion.


Would you think that a simple 'SELECT (the above) INTO foo' would  
suffice? ( I will add a date stamp as well )




Thanks all.

--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: insert random number into table

2009-10-16 Thread Scott Haneda
In addition to what Gavin said: you seem to want some form of key,
perhaps to verify the authenticity of your contest winner. An
auto-increment ID won't be very secure for that, but you still need it.

Take a known combination of perhaps the key, name, email address, etc,  
and run a hashing function on it. Md5 will often suffice.


This will give you a unique hash that only you know how it was made.  
You can inject some 'salt' into a known location to add more  
difficulty in guessing your hashing input.
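A minimal sketch of that idea in Python (the salt value and the particular fields hashed are invented for illustration; any stable combination of fields you keep private works the same way):

```python
import hashlib

# The salt and field choice here are illustrative, not prescriptive.
SALT = "not-actually-secret"

def entry_token(entry_id, name, email):
    """Deterministic token: same inputs always produce the same hash,
    but without the salt an outsider cannot forge one."""
    raw = f"{SALT}:{entry_id}:{name}:{email}"
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

token = entry_token(42, "Ray", "winner@example.com")
print(token)        # 32 hex characters
short = token[:8]   # optionally shortened for hand-typing (weaker, as noted)
```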


This hash will be asking too much of the user to hand type in, so I'm  
assuming you will accept it via a link in a URL.


Or ask for whatever you used to make the hash and rehash that and  
compare your internal stored result.


If not, for your needs you may be able to safely remove ambiguous  
characters, and take a substring of it to make it shorter.


It's also quite easy to create a dictionary of words and create a  
function that can make word1-xx-word2 and end up with red-56-car.


--
Scott
Iphone says hello.

On Oct 16, 2009, at 11:25 AM, Gavin Towey gto...@ffn.com wrote:

Don't try to give them a random number, instead use a table with a  
primary key that is AUTO_INCREMENT.  Then you just insert the  
record, and afterwards SELECT LAST_INSERT_ID(); to retrieve the id  
for the record created.


With random numbers, you're going to have more collisions when you  
add more records.


Regards,
Gavin Towey

-Original Message-
From: Ray [mailto:r...@stilltech.net]
Sent: Friday, October 16, 2009 8:43 AM
To: mysql@lists.mysql.com
Subject: insert random number into table

Hello All,
I am trying to insert a unique random number* with each row when  
inserting
into a table. I can do it in the application layer (php) but just  
wondering if

there is an easier way in the database layer.
Thanks
Ray

* for example, when registering a person for a contest, I want to  
give them a

unique, random identifier.


Re: insert random number into table

2009-10-16 Thread Scott Haneda
I always maintain a timestamp in my random numbers. As long as my  
precision is higher than my requests per second, wouldn't I be safe  
from collisions? Assuming a time machine is not invented.


--
Scott
Iphone says hello.

On Oct 16, 2009, at 11:29 AM, Jerry Schwartz jschwa...@the-infoshop.com 
 wrote:


[JS] Just remember that the pseudo-random numbers that come out of a
computer, regardless of where or how they are generated, are not
guaranteed to be unique. Although the odds of a collision are slim, my
philosophy has always been: if it's possible, it will happen - in the
middle of your vacation.





mysql.socket location problem

2009-10-05 Thread Scott Wagner
Hi
I just did a new installation of Suse 11.2 and found that I couldn't
connect to mysql from a php script because mysql.sock was
in /var/run/mysql/mysql.sock rather than /var/lib/mysql/mysql.sock.  I
changed all the lines in /etc/my.cnf for the socket
to /var/lib/mysql/mysql.sock.  Now my php scripts connect but I can't
connect to  mysql from the command line.  I get the error can't connect
through socket /var/run/mysql/mysql.sock. What do I have to change to
make the konsole use /var/lib/mysql/mysql.sock?
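The command-line client reads the [client] group of /etc/my.cnf, so the usual fix is to make every group agree on one socket path. A sketch of the relevant my.cnf fragment, assuming you standardize on the path your PHP scripts now use:

```ini
# /etc/my.cnf -- server and all clients must agree on the socket path
[mysqld]
socket = /var/lib/mysql/mysql.sock

[client]
socket = /var/lib/mysql/mysql.sock
```

If the command-line client still looks in /var/run/mysql, its compiled-in default is winning; `mysql --socket=/var/lib/mysql/mysql.sock` confirms the path works before you commit it to the config.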

Thanks,
Scott 





RE: SELECT by day

2009-09-29 Thread Scott Swaim
Try this

SELECT * FROM `table_anw` WHERE DAYOFWEEK(datum) = 3;
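The same pattern can be sanity-checked without a MySQL server. A sketch in Python against SQLite (a stand-in: SQLite has no DAYOFWEEK(), so strftime('%w', ...) is used, which numbers days 0=Sunday..6=Saturday, making Tuesday '2' where MySQL's DAYOFWEEK() returns 3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_anw (datum TEXT);
INSERT INTO table_anw VALUES ('1998-02-03'), ('1998-02-04'), ('1998-02-10');
""")
# Filter rows whose date falls on a Tuesday ('2' in SQLite's %w scheme).
rows = conn.execute(
    "SELECT datum FROM table_anw WHERE strftime('%w', datum) = '2'"
).fetchall()
print(rows)   # [('1998-02-03',), ('1998-02-10',)]
```

Either way, the key point matches Scott's answer: apply the day-of-week function to the column in the WHERE clause, not to a bare number.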

Scott Swaim
I.T. Director
Total Care / Joshua Family Medical Care
(817) 297-4455 
Website: www.totalcareclinic.com
 
 

-Original Message-
From: Kerstin Finke [mailto:kerstinfi...@hotmail.com] 
Sent: Tuesday, September 29, 2009 8:04 AM
To: mysql@lists.mysql.com
Subject: SELECT by day

Hi,
I want to search all rows with datum = 'tuesday' for example, something 
like:

SELECT * FROM `table_anw` WHERE datum=DAYOFWEEK(3);
'3' as tuesday.

I found in the manual I can do:
SELECT DAYOFWEEK('1998-02-03');
and will get '3' for Tuesday.

But thats not what I want. I want to select only tuesdays.
Any ideas?

Thanks in advance
Kerstin







Re: Getting mySQL to ignore case sensitivity of field names

2009-08-27 Thread Scott Haneda

http://dev.mysql.com/doc/refman/5.0/en/case-sensitivity.html
You need to set the field format to a non binary one, and case  
insensitive will be the default.


On Aug 27, 2009, at 2:57 PM, Adrian Aitken wrote:


Hi, Google isn't my friend :-(
How can I tell mySQL 5.0 to ignore the case of field names i.e.  
FullName

should also be able to be referenced as fullname,fullNAME etc ?
I'm running it on a linux box at home but my copy at work (running on
Windows 2000 server) has this by default - I certainly never set any
options.
When I Google all I seem to get are hits about ignoring case in  
select query

values not field names.


--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Getting mySQL to ignore case sensitivity of field names

2009-08-27 Thread Scott Haneda

Oops, sorry about that.  My understanding is this is OS dependent:
http://dev.mysql.com/doc/refman/5.0/en/identifier-case-sensitivity.html

You may be able to set lowercase table names, but then you would always
have to use lowercase.  I would just stick to entering the correct case,
as the other methods seem prone to trouble.


On Aug 27, 2009, at 3:04 PM, Scott Haneda wrote:






--
Scott * If you contact me off list replace talklists@ with scott@ *





Logins, php, legacy, old password

2009-07-19 Thread Scott Haneda

Hello. It is time for some long overdue service consolidation on my end.

I have a production machine ready to go, running the latest MySql 5  
series.


I have a legacy machine running MySql 4. As you know, version 4 used  
an older user account password system.


I also will be up against some data import/export changes.  
Specifically I think the timestamp column changed.


I want to move the data from 4 to 5. Here is my problem.

The http server that talks to MySql is ancient, unable to be updated.  
Http will remain on this legacy machine talking to MySql, using an  
equally ancient scripting language that does not, and can not support  
the new password format. This http server does not support MySql, so a  
local copy is not an option.


I believe I will need to set old style passwords. I'm trying to avoid  
multiple machines or multiple instances of MySql on the same database  
server.


What are my options? I'll do the work to convert the data. This means  
scripting code that displays and formats timestamp data is going to  
break. I can dig deep into my long term memory and change the  
scripting code to solve the timestamp issues.


Are there other major issues besides timestamps? All inserts used
NOW(), so I'm ok there. But display of timestamp to the user needs
editing. It's controlled by a function so I should have a pretty easy
go.


But what about the passwords? I hear PHP 5 will not do old style
passwords at all. So I can not just set old style passwords to
accommodate my legacy systems.  Don't really want to either.


What I would like is to set old style passwords on a per account  
basis, but have the default be new methods. I suspect by nature of how  
this works that is not possible.


I'm really trying to consolidate. Yes, I can run a second server, but  
I would rather manage and maintain one, giving me the ability to  
retire a machine.

--
Scott
Iphone says hello.




Client deleted database, semi high priority master slave question

2009-07-08 Thread Scott Haneda
A client deleted their database and did not have a slave in place for  
backup. No scheduled dumps either.


However, I have set up a slave to the master for one table. These  
tables hold DNS data for database driven DNS server features.


The master table is empty, the slave rightly saw the deletes and  
carried them out.


But I believe the replication logs on the slave, and also perhaps on  
the master should still be there. I'm not clear on how large these  
get, if they roll/truncate, etc.


Is there any way to restore the master up to the point the data was  
deleted? I can then set the slave to resync and they should be back  
where the left off.


There are a lot of records, but not a lot of data, since DNS records are
so small. If I could restore the master it would prove quite helpful.
Currently, all servers are down since DNS is not working, as a result
of DNS being backed by MySql via DLZ/named.


Thanks for any suggestions.
--
Scott
Iphone says hello.




Re: Client deleted database, semi high priority master slave question

2009-07-08 Thread Scott Haneda

On Jul 8, 2009, at 12:49 PM, Daevid Vincent wrote:


Ouch.

I know it comes a little too late, but as a preventive measure going
forward, I offer up my daily crontab backup script:

http://www.daevid.com/content/examples/daily_backup.php



They move a lot of data, so scheduled dumps were not really an  
option.  I wrote a proposal to use two databases, one is the primary  
acting as a DNS 1, and the second is the slave, acting as DNS 2.  I  
then wanted to add a 3rd slave server, that was a hot standby, in case  
the primary went down.  It would hold copies of all databases.


This would be on a raid, and on a schedule, it would also do dumps,  
which could take as long as they want, since this machine was not  
being hit at all.


This was not implemented for some reason.  I will look at your  
scripts, thanks again.

--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Client deleted database, semi high priority master slave question

2009-07-08 Thread Scott Haneda

On Jul 8, 2009, at 12:49 PM, Daevid Vincent wrote:


I know it comes a little too late, but as a preventive measure going
forward, I offer up my daily crontab backup script:

http://www.daevid.com/content/examples/daily_backup.php



FYI, this link does not work in Safari.
--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Can a MyISAM DB hold /everything/?

2009-05-28 Thread Scott Haneda
The times I have heard "this is just a test", "hack it together", or
"this will never see significant load" are more than I care to count.
Worse, the times those statements ended up being false, and a rigged
and hacked demo code base became production, have taught me to treat
all work as though it will become production.


That small rant aside...

Your question is hard to answer without knowing how you plan on
displaying the results of the data that comes out of the database. If
this is a case where you pull the data out and show it in a web
browser, then without question, less access time would be to store the
images in the file system. There is no access cost at all if there is
no request to pull the image out of the database.


Even if you are not using a browser, I am having a hard time thinking  
of many cases in which storing an image in the database is a good  
idea.  Then again, I am thinking jpg, png, gif etc.  Could be by image  
you mean a highly structured bitmap pattern, or pure vector  
coordinates that render an image out to SVG.


I do have one question. If you can make the statement that scaling
will never be an issue, then why does it matter? In a non
performance-driven environment, you should pick the most convenient
method to manage, and not worry about performance or scaling, if you
have that luxury.


On May 27, 2009, at 9:10 PM, mugisha moses wrote:


what if you had no issues of scaling, which would have less access
time,  file system or database?

On Thu, May 28, 2009 at 1:54 AM, Arthur Fuller fuller.art...@gmail.com 
 wrote:
I second that emotion (don't store the images in the data file:  
just store
the paths to said images). Why? Because in the event of an updated  
image it
is more hassle than it is worth, to drop the old image and replace  
it with
the new one; if what you store is a simple pointer to an image  
file, lots of

this work goes away.


--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Can a MyISAM DB hold /everything/?

2009-05-28 Thread Scott Haneda
A bit of a confusing set of replies was in the previous thread...  
Doing my best to answer the issue at hand, please reference past posts  
if my reply is not clear...


On May 28, 2009, at 6:04 AM, PJ wrote:

Could you clarify/expand on this a bit - I am setting up a site  
where I

expect to have a lot of images, both still and flvs, and a lot of
recipes (including ingredients, procedures, and text; images).


I would put some thought into how you store the recipes.  You mention  
XML below.  This will all depend on the structure of the recipes  
data.  In a simple case, you have quantity and item for each recipe.   
A simple relationship of something like the recipe parent, where that  
holds the title, and maybe description and instructions, linked to a  
second table with quantity and item columns would work fine.


That is also very rigid, and you never know where your data needs are  
going to deviate from that form.  Put some thought into this.  Get a  
good sampling of your recipe data to make sure you build this out in a  
way that is forward flexible.



I am
storing the images in /images directory. If the amount of images gets
rather large, you are suggesting to store them on another server,  
right?


I would first start by researching how your OS deals with large
quantities of images in one single directory.  Write a script to copy  
1 image 100,000 times over into the same directory.  Test how fast you  
can grab a random image, test how fast you can delete a random image.
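A rough sketch of that kind of stress test, assuming a throwaway /tmp directory (paths and the small N are illustrative; scale N up toward 100,000 on a real test box):

```shell
# Fill one directory with many copies of a single file, then see how
# listing and deleting behave.  N is kept small here.
N=500
DIR=/tmp/imgtest
mkdir -p "$DIR"
printf 'fake image data' > "$DIR/seed.jpg"
i=1
while [ "$i" -le "$N" ]; do
    cp "$DIR/seed.jpg" "$DIR/img$i.jpg"
    i=$((i + 1))
done
# Wrap these with time(1) on your own box to measure:
ls -la "$DIR" > /dev/null          # a bare listing over many files
rm "$DIR/img$((N / 2)).jpg"        # deleting one file among many
ls "$DIR" | wc -l                  # 500 remain (499 copies + seed)
```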


Different OSes will behave differently under different file counts.

Even from a pure CLI management perspective, a simple `ls -la` will  
take some time to finish on 100,000 images.


I generally do something along the lines of:
images/$user-id/$year/$month/$day/$image-name-$random.ext
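A sketch of that kind of partitioned naming (the helper function and its exact layout are illustrative, not from the post):

```python
import os
import secrets
from datetime import date

def image_path(user_id, original_name):
    """Build a path like images/<user>/<year>/<month>/<day>/<name>-<rand>.<ext>."""
    base, ext = os.path.splitext(original_name)
    today = date.today()
    rand = secrets.token_hex(4)  # short random suffix to dodge name collisions
    return "images/{u}/{y}/{m:02d}/{d:02d}/{b}-{r}{e}".format(
        u=user_id, y=today.year, m=today.month, d=today.day, b=base, r=rand, e=ext)

print(image_path(42, "photo.jpg"))
```

Thousands of files then spread across many small directories instead of piling into one.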

Back to your question.  Just because the amount of images gets large,  
does not mean you need to use multiple servers. It is only if the load  
of requests for those images gets more than the server can handle that  
you may want to look into distributing that load.


Distributing image load over http, if that is what you are doing, is  
almost trivial.  Starting with round robin DNS and rsync to more  
advanced load balancing, or looking into a CDN like what Amazon  
offers.  Solutions abound in this area.



Now, with a lot of recipes, I understand that I should be storing them
also on another server; and perhaps using XML to store the recipes.  
Does

that sound like I have understood your advice?


I am not so sure.  Recipes are text only, take little space.  A simple  
phpBB forum may have GB's of data in them, on a shared database  
server.  You may want to have a second database for replication, as a  
hot backup, and then have other things in place for cold backups.


This all depends on a lot of factors about your end plans which have  
not yet been shared.

--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Can a MyISAM DB hold /everything/?

2009-05-27 Thread Scott Haneda
Unless you have very good reason to store binary data like an image in  
your database, do not.  It may work well for a time, but always be  
prepared that your system will grow.  If it grows a good deal relative  
to your hardware, and users, and connections etc, you will always be  
in a race to throw more hardware at the problem.


In simplistic terms, an image is a file, and files should be stored on
a file system.


You can do some very nice and simple things later on to distribute
image loads.  If you store the path to the image, for example
images/date/time.jpg, and then need to grow your serving needs, you
could adjust your code in the http request to call images.example.com
as the hostname.  You could set images.example.com to return round
robin DNS records.


Now all you have to do is mirror or sync your images directory to any  
number of high performance http servers.


That is just one example of how I have avoided dealing with binary  
file load in MySql, or any database for that matter.  Even a relatively  
small image stored in a database can become a lot of data very fast.   
You will also find it can be inefficient to fetch that data and show  
it to the end user.  At the very least, you have to request the data,  
read it, rebuild it into a form that can be displayed to a user, which  
is a lot of work, compared to no work at all if just calling it from  
the file system.
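A sketch of that approach (table and column names are invented for illustration):

```sql
-- Keep the bytes on disk; store only a pointer to them
CREATE TABLE images (
  id       INT NOT NULL AUTO_INCREMENT,
  part_no  VARCHAR(32)  NOT NULL,
  path     VARCHAR(255) NOT NULL,   -- e.g. 'images/2009/05/27/part-abc.jpg'
  PRIMARY KEY (id),
  KEY (part_no)
) ENGINE=MyISAM;

-- Fetching an image is then just a path lookup; the web server streams the file
SELECT path FROM images WHERE part_no = 'ABC123';
```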


To your question about which is faster: on-disk binary storage always
will be, with the exception of deletes.  Deletes will happen in near
the same time, though you do have to locate the record to delete.
This probably still has no impact, since you will locate on an index.


Hope that was helpful.

On May 27, 2009, at 9:50 AM, Pete Wilson wrote:

I am new to MySQL and just laying out what I hope will be just one  
db that holds all the info for a list of forged or machined metal  
parts.


Let's say there are 10,000 such parts, each with a unique part  
number. That part number is the primary index.


Each part has at least one, and maybe several, accompanying photos.

And each part has at least one, and perhaps as many as five, human- 
language descriptions. These descriptions might be as long as 5,000  
characters each.


I believe I have the choice of:

1. Storing everything -- photos and descriptions included -- in the  
database; or


2. Holding some info in the database; and storing photos and  
descriptions in normal-type disk files apart from the database, with  
names based on part numbers.


So my questions:

1. Which scheme is faster and less resource-hogging in reading,  
updating, adding, and deleting?


2. I understand there are problems in storing photos in the db. Can  
one overcome these problems somehow?


--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Upgrade Mysql

2009-05-20 Thread Scott Haneda

On May 20, 2009, at 1:27 AM, Webmaster Studio Informatica wrote:


I need to upgrade Mysql 4 to Mysql 5 on Linux.


Sometimes


I will uninstall version 4 and install version 5.

With uninstallation, the database files usually remain in /var/lib/mysql/

I want to know if with the installation of Mysql 5 those database  
will be recognized and imported to work with the new version  
automatically.


In most cases, but you should have a database dump of all of them just  
in case.  You do not specify what version of 4 you are at.


You need to at least read this:
http://dev.mysql.com/doc/refman/5.0/en/upgrading-from-4-1.html

Specifically look at the Incompatible change notes; there are quite a
few.  Most will not bother you.  For me, the biggest issue was my use of  
timestamp, and how that changed a bit.  Luckily I had a function that  
I used in my code to format that timestamp value, so it was just a  
matter of going through all my code and updating one function to all  
sites.


This was also only a display issue for me and did not change my data.

You do have to know your code.  If you do not, I would use a staging  
server, and do them one database at a time, test, make sure it works,  
and go from there.

--
Scott * If you contact me off list replace talklists@ with scott@ *





Stored Procedure Data Types

2009-05-20 Thread W. Scott Hayes
Hello,

I would like to do a select on a table to get back the IDs of some of
the records.

Then take those IDs and do a single update using a WHERE clause like
(recordID IN (2,44,21))

My question is:

Can I build a string using a cursor that has all of the IDs and then
issue an update using the string as part of the WHERE clause?

Are there functions that facilitate this better?  I'm wondering if
there is some sort of column function that will grab the IDs from the
initial select.

Below is my code.  Thanks for any advice.

DELIMITER $$

DROP PROCEDURE IF EXISTS sp_getNextQueueBlock$$

CREATE PROCEDURE sp_getNextQueueBlock()
BEGIN
DECLARE l_LinkQueueID INTEGER;
DECLARE no_more_queue_items INT DEFAULT 0;
DECLARE l_updateString VARCHAR(2000) DEFAULT '';
DECLARE queue_csr CURSOR FOR
SELECT LinkQueueID FROM linkqueue WHERE Completed <> 0 LIMIT 200;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_queue_items=1;

START Transaction;
OPEN queue_csr;
queue_loop:LOOP
FETCH queue_csr INTO l_LinkQueueID;
IF no_more_queue_items=1 THEN
LEAVE queue_loop;
END IF;
SET l_updateString=CONCAT(l_updateString,', ',l_LinkQueueID);
END LOOP queue_loop;

IF LENGTH(l_updateString) > 2 THEN
SET l_updateString=SUBSTRING(l_updateString,3,LENGTH(l_updateString)-2);
END IF;

UPDATE linkqueue SET Completed = 0 WHERE (LinkQueueID IN (l_updateString));
commit;
END$$

DELIMITER ;
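For reference (this is not from the thread, and it assumes the cursor's WHERE clause selects not-yet-completed rows): MySQL 5 offers two ways to avoid building the list with a cursor at all. The second answers the literal question, since PREPARE/EXECUTE is allowed inside stored procedures; mind that GROUP_CONCAT output is capped by group_concat_max_len.

```sql
-- Option 1: skip the ID list entirely; MySQL's UPDATE accepts LIMIT
UPDATE linkqueue SET Completed = 0 WHERE Completed <> 0 LIMIT 200;

-- Option 2: build the comma list server-side, then run it as a prepared statement
SELECT GROUP_CONCAT(LinkQueueID) INTO @ids
  FROM (SELECT LinkQueueID FROM linkqueue WHERE Completed <> 0 LIMIT 200) AS t;
SET @sql = CONCAT('UPDATE linkqueue SET Completed = 0 WHERE LinkQueueID IN (',
                  @ids, ')');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```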




Re: Replication config

2009-05-16 Thread Scott Haneda

On May 16, 2009, at 12:28 AM, Simon J Mudd wrote:


I also, in section [mysqld]
# Begin slave config 05/14/2009
server-id = 2
master-host = ip.add.re.ss
master-user = user-replicate
master-password = xx
master-port = 3306
# End slave config


No. not necessary as the information is stored in the master info  
file.



Thanks Simon, I will test.  Looks like, if this is the case, 99% of
the tutorials out there are wrong, as they all set this in duplicate,
along with the CHANGE MASTER SQL command.

--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Replication config

2009-05-15 Thread Scott Haneda

On May 12, 2009, at 11:11 PM, Simon J Mudd wrote:


1. Ensure binlogging is enabled on the master.
2. Ensure you setup grant permissions so the slave can connect to  
the master.
3. Configure on the slave the replication (which databases need to  
be replicated)

4. Get the master and slave in sync (via rsync, load/dump or whatever)
5. Run show master status on the master (assuming binlogging is  
enabled)

  to get the current position on the master
6. use CHANGE MASTER TO on the slave providing the appropriate  
permissions.

7. Run: START SLAVE
8. Use: show slave status\G to check how the replication is working,  
and

  and adjust as necessary.



Also, how do I set the slave to be read only?  I set read-only in  
my.cnf and it made all databases read only.  I want to limit just the  
replicated database to be read only.  The rest of them are production  
databases in normal use by clients.


I suppose just not having a login and pass to the replicated database  
is more than enough?

--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Replication config

2009-05-14 Thread Scott Haneda

On May 12, 2009, at 11:11 PM, Simon J Mudd wrote:


talkli...@newgeo.com (Scott Haneda) writes:


Hello, I am confused about replication setup.  Reading a config file,
and the docs, leads me to believe this is an either/or choice, pick
#1 or #2.  If that is the case, why would I want to use #1 over #2?

My confusion comes from several online references where there is a
combination of #1 and #2 going on:

# To configure this host as a replication slave, you can choose  
between

# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our  
manual) -

#the syntax is:
#
#CHANGE MASTER TO MASTER_HOST=host, MASTER_PORT=port,
#MASTER_USER=user, MASTER_PASSWORD=password ;
#
#where you replace host, user, password by quoted strings  
and

#port by the master's port number (3306 by default).


Use this method. it works and is the correct way to do things. It  
also will keep
working if you stop and restart the server with replication carrying  
on

from where it left off.

The procedure is quite simply:

1. Ensure binlogging is enabled on the master.
2. Ensure you setup grant permissions so the slave can connect to  
the master.
3. Configure on the slave the replication (which databases need to  
be replicated)

4. Get the master and slave in sync (via rsync, load/dump or whatever)
5. Run show master status on the master (assuming binlogging is  
enabled)

  to get the current position on the master
6. use CHANGE MASTER TO on the slave providing the appropriate  
permissions.

7. Run: START SLAVE
8. Use: show slave status\G to check how the replication is working,  
and

  and adjust as necessary.

It's true that initial mysql replication setup is a bit fiddly, but  
once you've

done it once or twice it's not so hard.



I have it set up and working in test.  I will redo it again once I get  
a better handle on it.  I am still a little confused on one aspect.   
In the mysql sample cfg file, the section that has:

#Replication Slave, there is a very clear OR to use either #1 OR #2.

I did the suggestions of #2, issuing
#CHANGE MASTER TO MASTER_HOST=host, MASTER_PORT=port,
#MASTER_USER=user, MASTER_PASSWORD=password ;
on the slave.

I also, in section [mysqld]
# Begin slave config 05/14/2009
server-id = 2
master-host = ip.add.re.ss
master-user = user-replicate
master-password = passw3rd
master-port = 3306
# End slave config

Am I correct in that this is not needed?  I know for certain I need
server-id, but is that all I need, and do I have redundant data?  I
figure it is also better not to have a raw user and pass in a cnf file
if it is not needed.


I would say 99% of the examples on the internets are using both  
methods, the MySql docs are not entirely clear to me on this specific  
issue.  Conflicting data on the comments for sure.


I also have seen a good deal more options specified than I have.  To  
be clear, I am only looking for one way replication, the most basic, a  
master that is read/write by clients, and a slave that is neither read/ 
write by clients, but only reading in data from the master, syncing it.


Here is a example of other options I am seeing, comments interspersed...
 server-id = 1
 log-bin = /usr/local/mysql/var/bin.log
I take it I do not need this if I am ok with default data dir?
 log-slave-updates
I am pretty sure I do not need this, since I am only doing
master to slave, and no cascading replication, or circular rep.
 log-bin-index = /usr/local/mysql/var/log-bin.index
Same as log-bin, if I am ok with default data dir?
 log-error = /usr/local/mysql/var/error.log
Again, if I am ok with default data-dir?

 relay-log = /usr/local/mysql/var/relay.log
I do not seem to have this file anywhere.

 relay-log-info-file = /usr/local/mysql/var/relay-log.info
 relay-log-index = /usr/local/mysql/var/relay-log.index
I do not see that I have these on the master; I have them on the
slave.  Maybe all these logs are best to be defined.  Perhaps
if they are not, the hostname may be used in the naming
of the logs, and if a hostname ever changes, I would
have issues getting replication to find the logs?

 auto_increment_increment = 10
 auto_increment_offset = 1
I am definitely not clear on these.  I want a 100% replica.
Why do I need to worry about key id collisions at all?  Or is
this n/a to a simple master/slave setup?

 master-host = other master hostname
 master-user = replication username
 master-password = replication password
These I wonder if are needed at all, if I use the sql CHANGE

 replicate-do-db = somedbname1
 replicate-do-db = somedbname2
Good to know; wondered how to add more, or if you comma-separate them

Re: Replication config

2009-05-13 Thread Scott Haneda

Thanks, a few comments inline below...

On May 12, 2009, at 11:11 PM, Simon J Mudd wrote:


talkli...@newgeo.com (Scott Haneda) writes:


Hello, I am confused about replication setup.  Reading a config file,
and the docs, leads me to believe this is an either/or choice, pick
#1 or #2.  If that is the case, why would I want to use #1 over #2?

My confusion comes from several online references where there is a
combination of #1 and #2 going on:

# To configure this host as a replication slave, you can choose  
between

# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our  
manual) -

#the syntax is:
#
#CHANGE MASTER TO MASTER_HOST=host, MASTER_PORT=port,
#MASTER_USER=user, MASTER_PASSWORD=password ;
#
#where you replace host, user, password by quoted strings  
and

#port by the master's port number (3306 by default).


Use this method. it works and is the correct way to do things. It  
also will keep
working if you stop and restart the server with replication carrying  
on

from where it left off.

The procedure is quite simply:

1. Ensure binlogging is enabled on the master.


Done, I can see the binlogs as well.

2. Ensure you setup grant permissions so the slave can connect to  
the master.


From the remote MySql I have mysql -u user -h example.com -p
It allows me in so I think I am good there.

3. Configure on the slave the replication (which databases need to  
be replicated)


This is where I need a little clarification, is the only thing I need  
to do is adjust my.cnf to have in the [mysqld] section

server-id = 2


4. Get the master and slave in sync (via rsync, load/dump or whatever)


Is this mandatory?  There is not a lot of data, a hundred rows or so;
can I use LOAD DATA FROM MASTER; ?


Seems most instructions say to use a dump.  This does not make a lot  
of sense to me, I am setting up replication, is it not the point to be  
able to pull the data down?  Why does it need priming like this?


5. Run show master status on the master (assuming binlogging is  
enabled)

  to get the current position on the master


I can do this now, gives back a position.  It seems to change over  
time.  Since it is a moving target, if I am using LOAD DATA FROM  
MASTER; I take it I need to lock the tables while the first load is  
happening?


6. use CHANGE MASTER TO on the slave providing the appropriate  
permissions.

7. Run: START SLAVE


Will do, thanks.

8. Use: show slave status\G to check how the replication is working,  
and

  and adjust as necessary.


Thanks.

It's true that initial mysql replication setup is a bit fiddly, but  
once you've

done it once or twice it's not so hard.



While not part of my plan, if the master goes down and I want to start  
using the slave as the master while I am fixing the master server


What is the best way to do this?  Can the slave be treated like a  
master by just pointing any client to the slave assuming I set a user  
to allow it?


With the slave temporarily becoming the master, the data will of  
course change.  When I bring the master back online, what is the best  
way to reverse sync and get back to where I was?  Probably take the
entire thing down, copy the database from the current temp live
slave that has been used as a master, and go from there?


Thank you.
--
Scott * If you contact me off list replace talklists@ with scott@ *





Replication config

2009-05-12 Thread Scott Haneda
Hello, I am confused about replication setup.  Reading a config file,  
and the docs, leads me to believe this is an either/or choice, pick  
#1 or #2.  If that is the case, why would I want to use #1 over #2?


My confusion comes from several online references where there is a  
combination of #1 and #2 going on:


# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
#the syntax is:
#
#CHANGE MASTER TO MASTER_HOST=host, MASTER_PORT=port,
#MASTER_USER=user, MASTER_PASSWORD=password ;
#
#where you replace host, user, password by quoted strings and
#port by the master's port number (3306 by default).
#
#Example:
#
#CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
#MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, in case you choose this method,  
then
#start replication for the first time (even unsuccessfully, for  
example
#if you mistyped the password in master-password and the slave  
fails to

#connect), the slave will create a master.info file, and any later
#change in this file to the variables' values below will be  
ignored and
#overridden by the content of the master.info file, unless you  
shutdown

#the slave server, delete master.info and restart the slave server.
#For that reason, you may want to leave the lines below untouched
#(commented) and instead use CHANGE MASTER TO (see above)

--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Help with mysql query, multiple list

2009-05-10 Thread Scott Haneda
What about sub selects. As I see it you only care about the highest  
and lowest order of results in each list.


Sorry, I am on a mobile so I cannot make a test case, and this will  
be pseudo-SQL.


Select * from table where start <= (select foo) and end >= (select foo) ...

Also look at the between and in keywords.
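As a concrete version of that min/max idea (purely illustrative; table and column names are invented, and `end` is backticked because it is a reserved word in MySQL):

```sql
-- If only the envelope of the timestamp list matters,
-- two scalar subqueries cover it
SELECT *
  FROM events
 WHERE start <= (SELECT MAX(ts) FROM stamps)
   AND `end` >= (SELECT MIN(ts) FROM stamps);
```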

Perhaps your list of timestamps is not in a database; can you put them
in?


Either way unless I am reading you wrong you only need the greatest  
and lowest time in your comparison, not a huge list.

--
Scott
Iphone says hello.

On May 8, 2009, at 9:26 AM, Abhishek Pratap abhishek@gmail.com  
wrote:



Hi All

I am kind of stuck with this query, can't expand my thinking.  Maybe
this is a limitation.  Here it is:

I have a database with many cols, two of which are the start and end
position for an event.

Now I have a list of event time stamps.  I want to find all the info
once the current event time stamp is >= the start time of the event
and <= the end time of the event.


something  like this

select * from table_name where start <= (LIST of time stamps) AND
end >= (list of time stamps).

Clearly the above query accepts only one item in the list.  Is there a
way to do this for multiple items in the list?  I can't think of
anything at this moment.


Thanks,
-Abhi





Query log for just one database

2009-05-07 Thread Scott Haneda
I am trying to monitor a specific issue, and I know it is related to  
only one database.  There is a lot of other noise in the logs if I  
enable query logging.


Is there any way to limit query logging to just one database?
--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Now() : SQL syntax error. But why?

2009-04-30 Thread Scott Haneda
Always echo out your SQL string; it will make the problem a lot more
obvious.  You want to see the result.  A PHP-concatenated string can
be confusing at times.


Also, you are not escaping your data, so if you had a word of 'stops,  
here' that would break it as well.


So in your case, you very well may break it by changing the data you  
put in.  You could also do something like stuffing "drop database  
foo;" into your data, and be in for real fun.


Pass every string to http://us2.php.net/mysql_real_escape_string

On Apr 30, 2009, at 9:27 PM, Antonio PHP wrote:

I thought I couldn't have missed ','(comma) before. But today  
somehow it works... ;;

I wasted hours figuring this out, but you saved me!

Maybe I'm still a complete newbie!


--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Now() : SQL syntax error. But why?

2009-04-29 Thread Scott Haneda


On Apr 29, 2009, at 11:29 AM, Antonio PHP wrote:


This is MySQL data structure. - I underlined where it causes the error
message. (datetime)
`id_Company` smallint(6) unsigned NOT NULL AUTO_INCREMENT,
`Name` varchar(50) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
`Revenue` mediumint(6) NOT NULL,
`Company_Size` mediumint(6) NOT NULL,
`Ownership` tinyint(1) NOT NULL,
`Homepage` varchar(50) CHARACTER SET utf8 COLLATE utf8_unicode_ci  
NOT NULL,
`Job_Source` varchar(100) CHARACTER SET utf8 COLLATE utf8_unicode_ci  
NOT

NULL,
`Updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
*`Created` datetime NOT NULL,
*PRIMARY KEY (`id_Company`),
KEY `Ownership` (`Ownership`)
)
ENGINE=InnoDB  DEFAULT CHARSET=utf8
FOREIGN KEY (`Ownership`) REFERENCES `ownership` (`id_Ownership`) ON  
DELETE

CASCADE ON UPDATE CASCADE;


Next time can you include unmodified SQL, so it is a copy and paste
for me, rather than debugging which changes you made are causing the
error.



Here is php script -
$sql = INSERT INTO company SET
Name='$Name',
Revenue='$Revenue',
Company_Size='$Company_Size',
Ownership='$Ownership',
Homepage='$Homepage',
Job_Source='$Job_Source'
*Created=NOW() // if I remove this line it works fine.
*;
mysql_query ($sql) or die (mysql_error());


Same here, as I am not sure whether your edits are just edits, or the
lack of a comma after the Job_Source line is the issue.


This works on my end:

 $Name = 'Tom';
 $Revenue  = '100';
 $Company_Size = '500';
 $Ownership= 'partner';
 $Homepage = 'example.com';
 $Job_Source   = 'friend';


 $sql = INSERT INTO mailing SET
 Name='$Name',
 Revenue='$Revenue',
 Company_Size='$Company_Size',
 Ownership='$Ownership',
 Homepage='$Homepage',
 Job_Source='$Job_Source',
 Created=NOW();

 echo $sql;

 mysql_query ($sql) or die (mysql_error());

--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Now() : SQL syntax error. But why?

2009-04-28 Thread Scott Haneda
We need to see your entire query and the table structure.  timestamp  
fields can have options set to auto update them, where order matters,  
and only one field can support that feature.


Please supply more data.

On Apr 28, 2009, at 2:18 PM, Antonio PHP wrote:

You have an error in your SQL syntax; check the manual that  
corresponds to
your MySQL server version for the right syntax to use near 'Created  
= NOW(),

Updated = NOW()' at line 8

'Created' and 'Updated' are set to datetime (InnoDB).

The same syntax works for some newly created tables... and gives no  
error.


It's very strange. 'Now()' works for some tables, and it doesn't for  
some.

(All set in phpmyadmin...)

What could have caused this? Any similar experience?


--
Scott * If you contact me off list replace talklists@ with scott@ *





Information Schema

2009-04-28 Thread Scott Haneda
I have been meaning to find out about this since I moved to mysql 5.
In version 4, I never saw the information_schema database.  With it
being in version 5, I assumed it was something only the root user, or
a higher-level user, could see.


I now know that it shows up under any account.  I will certainly go  
read more in the docs about what this table is for.  However, since it  
seems to be important, I would assume you do not want database users  
to be able to update, insert, or delete against it.


Is the default set up in a way that database/tables are protected?
--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Auto increment?

2009-04-02 Thread Scott Haneda

On Apr 2, 2009, at 12:51 AM, Steve Edberg wrote:

At 9:35 AM +0200 4/2/09, Andreas Pardeike wrote:

Hi,

I have a table 'test'

+---------+--------------+------+-----+---------+----------------+
| Field   | Type         | Null | Key | Default | Extra          |
+---------+--------------+------+-----+---------+----------------+
| id      | int(11)      | NO   | PRI | NULL    | auto_increment |
| foo     | varchar(255) | NO   |     | NULL    |                |
| bar     | varchar(255) | NO   |     | NULL    |                |
| payload | longblob     | YES  |     | NULL    |                |
+---------+--------------+------+-----+---------+----------------+


where 'payload' size usually is quite big!

Now I want to manipulate a number of rows in a non-trivial way and  
at the
same time set 'id' for those rows to a new number in the sequence.  
I.e.


UPDATE test SET id = NEXT_INSERT_ID(), foo = some_function(foo)  
WHERE bar ...


My motivation is that this table has several consumers in other  
applications
that keep track of the biggest 'id' they have seen between polls  
and thus

can keep track of new and changed rows.

Right now, I solve this problem by copying the rows to a temporary  
table,
then delete them and insert them from the temporary table but this  
moves my

huge payload around which I really want to avoid.

How can I solve this in a better way?

/Andreas Pardeike



Add a column of type timestamp which, by default, will be updated  
every time a record is inserted or updated.  Then the other  
applications can simply select records with timestamp >  
last_poll_time.



My same suggestion as well.  I go as far as to have at least three  
standard fields in any table I make:


CREATE TABLE IF NOT EXISTS `foobar` (
  `id` int(11) NOT NULL auto_increment,
  `updated` timestamp NOT NULL default '0000-00-00 00:00:00' on  
update CURRENT_TIMESTAMP,

  `created` timestamp NOT NULL default '0000-00-00 00:00:00',
  PRIMARY KEY  (`id`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=0 ;

You never know when you will want the time and date, and this makes it
brainless.  For any new record you add, set created = NOW(); outside
of that, never worry about the updated field, as it gets set
automatically any time there is any change to the record.
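With those columns in place (a sketch; it assumes the column is spelled `updated` and carries the ON UPDATE CURRENT_TIMESTAMP attribute shown above), the polling consumers from the original question need only remember their last poll time:

```sql
INSERT INTO foobar (created) VALUES (NOW());        -- created set by hand, once
UPDATE foobar SET created = created WHERE id = 1;   -- no real change: `updated` stays put
SELECT id FROM foobar WHERE updated > '2009-04-01 00:00:00';  -- new/changed since then
```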


I am near certain there is one small thing to note: if you run
"update foobar set something = 'test' where id = 1" and something was
already 'test', then since no real update/change happened, the
timestamp is not going to get updated.


* There is a version of MySQL 4 that, going forward, changed how
`timestamp` was defined.  If your old data is in version four, and
you plan to move to a new version, look out for that issue.


Thanks for welcome :)
--
Scott * If you contact me off list replace talklists@ with scott@ *





Re: Auto increment?

2009-04-02 Thread Scott Haneda
I read your other replies about the timestamp not working.  I still  
think adding the updated and created fields is a good idea in general,  
to any table.  I have some questions about the below since the  
original suggestion would not work for you.


On Apr 2, 2009, at 12:35 AM, Andreas Pardeike wrote:

+---------+--------------+------+-----+---------+----------------+
| Field   | Type         | Null | Key | Default | Extra          |
+---------+--------------+------+-----+---------+----------------+
| id      | int(11)      | NO   | PRI | NULL    | auto_increment |
| foo     | varchar(255) | NO   |     | NULL    |                |
| bar     | varchar(255) | NO   |     | NULL    |                |
| payload | longblob     | YES  |     | NULL    |                |
+---------+--------------+------+-----+---------+----------------+


where 'payload' size usually is quite big!

Now I want to manipulate a number of rows in a non-trivial way and  
at the
same time set 'id' for those rows to a new number in the sequence.  
I.e.


UPDATE test SET id = NEXT_INSERT_ID(), foo = some_function(foo)  
WHERE bar ...


My motivation is that this table has several consumers in other  
applications
that keep track of the biggest 'id' they have seen between polls and  
thus

can keep track of new and changed rows.


I am a little stumped on this: since id is auto_increment, do you  
start to see gaps in your ids?  Is that not undesirable to you?


Right now, I solve this problem by copying the rows to a temporary  
table,
then delete them and insert them from the temporary table but this  
moves my

huge payload around which I really want to avoid.

How can I solve this in a better way?


How about changing the initial insert logic.  The above table would  
keep the id but be used as your key.  Add a second id of payload_id.   
Create a new table with id, payload_id, and payload.  Store just the  
payload in a separate table, connected with the id = payload_id.  Now  
you are never touching your payload data, which is too large.
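
A sketch of that split (all table and column names are hypothetical):

```sql
-- Small, frequently-renumbered metadata rows.
CREATE TABLE test_meta (
    id         INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    foo        VARCHAR(255) NOT NULL,
    bar        VARCHAR(255) NOT NULL,
    payload_id INT NOT NULL
);

-- The large blob, written once and then left alone.
CREATE TABLE test_payload (
    id      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    payload LONGBLOB
);
```

Renumbering ids then rewrites only the narrow test_meta rows; the longblob pages are never copied.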


Also, I have been in cases with batch uploads where performance is an  
issue, and used the INSERT DELAYED features of mysql, where the the  
insert is sent in one command, but the server will do it when there is  
idle time to deal with it.  You have to be a little careful to  
anticipate a server crash or connection failure, but rollback can  
solve that easily.


There does not seem to be an UPDATE DELAYED syntax, but I did just  
learn of this:

The UPDATE statement supports the following modifiers:

	• If you use the LOW_PRIORITY keyword, execution of the UPDATE is
	  delayed until no other clients are reading from the table. This
	  affects only storage engines that use only table-level locking
	  (MyISAM, MEMORY, MERGE).


Maybe you can use the LOW_PRIORITY keyword in your update commands to  
your advantage?
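
Syntactically, that would look like the following sketch (the new value and WHERE clause are placeholders):

```sql
-- Waits for readers to finish before running (table-locking engines only).
UPDATE LOW_PRIORITY test
SET foo = CONCAT(foo, '-v2')
WHERE bar = 'some value';
```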

--
Scott * If you contact me off list replace talklists@ with scott@ *





Multiple batch files

2009-03-19 Thread David Scott
Heya
I have folders of sql files which, when there were a few, I executed
using source d:\folder\file.sql. Now I have 100's, so is there a way to do
source d:\folder\*.sql (which sadly does not work)?
How would you go about automating this?

The DB is running on a windows box.

Thanks for any suggestions you may have.
--
David Scott


Re: Multiple batch files

2009-03-19 Thread David Scott
Thanks for responding Gary
I get:

---
 Volume in drive D has no label.
 Volume Serial Number is C45F-3341

 Directory of D:\data\03\09

03/19/2009  05:45 PM   389,305 sessions-20090309-00.sql
03/19/2009  05:45 PM   402,264 sessions-20090309-01. sql
03/19/2009  05:45 PM   417,579 sessions-20090309-02. sql
03/19/2009  05:45 PM   402,084 sessions-20090309-03. sql

etc

220 File(s)  50,149,425 bytes
0 Dir(s)  116,891,242,496 bytes free
---

How should I edit this?
--
Dave


2009/3/19 Gary Smith g...@primeexalia.com

 Dir *.sql > my.bat

 Edit batch file for each line to append | mysql

 Call batch file


 Sent via BlackBerry by ATT





Re: Multiple batch files

2009-03-19 Thread David Scott
Thanks, much better. I appended '| mysql' to each line but I am not sure what
you mean by 'then call it'. I ran the batch file and tried using source
batch.bat but no joy. What am I missing?

Thanks again
--
Dave
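
For what it's worth, a single loop in the Windows command shell can replace the generated batch file entirely. This is only a sketch: the credentials and database name are hypothetical, and it assumes you run it from the folder of .sql files:

```
:: inside a .bat file; from an interactive prompt use %f instead of %%f
for %%f in (*.sql) do mysql --user=root --password=secret mydb < "%%f"
```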



2009/3/19 Gary Smith g...@primeexalia.com

 Dir /b
 Sent via BlackBerry by ATT

 




Auto increment without uniquness

2009-03-19 Thread Scott Haneda
I think ideally I would like to create an auto increment column that  
has no requirement for uniqueness.  So if 6 was the last entry, and  
there are 10 of them, 7 would still be the next, is this possible?


I am assuming it is not?

I am working in a case where data is needing to be de-normalized, and  
I have never had to do this before.  In the past, this would be a join  
away.


Now I have one table, it will have a single parent record, with x  
children records.  I start with an `id` primary key auto inc field,  
standard stuff.


Keeping it simple, lets say I have:
id, group_id, foo, bar, baz

I will do a batch of inserts, where the first record is the parent,  
and the rest are children.  The first parent record is what is getting  
me stuck, I need to give it a group_id, but all the children will need  
to have the same group id.


I could do one insert on the parent, get the returned id, and then  
update the group_id on the parent to the same id, but that is an  
insert and an update, I want to avoid the update.


Can I insert into table set foo = 'test', `group_id` = `id`?
Or is that too soon in the insert transaction to be noticed?

I can allow the parent to have an empty group_id, and use the id as  
what I search on, but it seems messy.
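
One common pattern uses LAST_INSERT_ID(), at the cost of a single primary-key update on the parent, which is cheap. Table and column names follow the thread; the rest is hypothetical, and it assumes group_id may be NULL or defaulted for the instant between the two statements:

```sql
-- Parent row first; its auto_increment id becomes the group key.
INSERT INTO t (foo) VALUES ('parent');
SET @grp = LAST_INSERT_ID();

-- One indexed, single-row update makes the parent self-referencing...
UPDATE t SET group_id = @grp WHERE id = @grp;

-- ...then all children carry the same group_id.
INSERT INTO t (group_id, foo) VALUES (@grp, 'child 1'), (@grp, 'child 2');
```

Referencing `id` on the right-hand side of the same INSERT, as asked above, is not reliable: in practice the column reads as its pre-assignment default, not the generated auto_increment value.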


Suggestions?
Thank you all
--
Scott * If you contact me off list replace talklists@ with scott@ *





Installing MySQL client into workstations using GPO

2009-02-05 Thread Scott Hamm
My objective is to have all Windows XP workstations (300+) report their 
activities, using the mysql client to export CSV data to our Active 
Directory's MySQL server.  Is it possible to install just the MySQL client 
over GPO?

Thanks,


Scott Hamm
Technology Coordinator
MOT Charter School
1156 Levels Rd
Middletown, DE 19709
scott.h...@mot.k12.de.usmailto:scott.h...@mot.k12.de.us

--And the mome raths outgrabe...


Re: stuck commits

2009-01-13 Thread Scott Edwards
On Tuesday 13 January 2009 07:23:52 am Krishna Chandra Prajapati wrote:
 Hi Scott,

 I believe something wrong with innodb parameters. It should be optimum. In
 your case it might be too high or too low. Take a look at log file size.
 Please send your show variables and show status data to reach at
 conclusion.



Okay. This is not during a time of incident.  Another server is handling the 
load from the queries and database.  So far we don't notice any issues with 
stuck commits on the new server, but it's only handling minimal load outside 
of amavis queries.  We would like to run this on the original system, because 
it has raid1+drbd+heartbeat (2 nodes), whereas the temporary solution only 
uses raid1.

Variable_name   Value
auto_increment_increment1
auto_increment_offset   1
automatic_sp_privileges ON
back_log50
basedir /usr/
binlog_cache_size   32768
bulk_insert_buffer_size 8388608
character_set_clientlatin1
character_set_connectionlatin1
character_set_database  latin1
character_set_filesystembinary
character_set_results   latin1
character_set_serverlatin1
character_set_systemutf8
character_sets_dir  /usr/share/mysql/charsets/
collation_connectionlatin1_swedish_ci
collation_database  latin1_swedish_ci
collation_serverlatin1_swedish_ci
completion_type 0
concurrent_insert   1
connect_timeout 5
datadir /var/www/mysql/
date_format %Y-%m-%d
datetime_format %Y-%m-%d %H:%i:%s
default_week_format 0
delay_key_write ON
delayed_insert_limit100
delayed_insert_timeout  300
delayed_queue_size  1000
div_precision_increment 4
engine_condition_pushdown   OFF
expire_logs_days10
flush   OFF
flush_time  0
ft_boolean_syntax   + -><()~*:""&|
ft_max_word_len 84
ft_min_word_len 4
ft_query_expansion_limit20
ft_stopword_file(built-in)
group_concat_max_len1024
have_archiveYES
have_bdbNO
have_blackhole_engine   NO
have_compress   YES
have_crypt  YES
have_csvYES
have_dynamic_loadingYES
have_example_engine NO
have_federated_engine   YES
have_geometry   YES
have_innodb YES
have_isam   NO
have_merge_engine   YES
have_ndbcluster DISABLED
have_opensslDISABLED
have_query_cacheYES
have_raid   NO
have_rtree_keys YES
have_symlinkYES
init_connect
init_file
init_slave
innodb_additional_mem_pool_size 1048576
innodb_autoextend_increment 8
innodb_buffer_pool_awe_mem_mb   0
innodb_buffer_pool_size 268435456
innodb_checksumsON
innodb_commit_concurrency   0
innodb_concurrency_tickets  500
innodb_data_file_path   ibdata1:10M:autoextend
innodb_data_home_dir
innodb_doublewrite  ON
innodb_fast_shutdown1
innodb_file_io_threads  4
innodb_file_per_table   OFF
innodb_flush_log_at_trx_commit  1
innodb_flush_method
innodb_force_recovery   0
innodb_lock_wait_timeout50
innodb_locks_unsafe_for_binlog  OFF
innodb_log_arch_dir
innodb_log_archive  OFF
innodb_log_buffer_size  1048576
innodb_log_file_size5242880
innodb_log_files_in_group   2
innodb_log_group_home_dir   ./
innodb_max_dirty_pages_pct  90
innodb_max_purge_lag0
innodb_mirrored_log_groups  1
innodb_open_files   300
innodb_rollback_on_timeout  OFF
innodb_support_xa   ON
innodb_sync_spin_loops  20
innodb_table_locks  ON
innodb_thread_concurrency   8
innodb_thread_sleep_delay   1
interactive_timeout 28800
join_buffer_size3141632
key_buffer_size 50331648
key_cache_age_threshold 300
key_cache_block_size1024
key_cache_division_limit100
language/usr/share/mysql/english/
large_files_support ON
large_page_size 0
large_pages OFF
lc_time_names   en_US
license GPL
local_infileON
locked_in_memoryOFF
log OFF
log_bin ON
log_bin_trust_function_creators OFF
log_error
log_queries_not_using_indexes   OFF
log_slave_updates   OFF
log_slow_queriesON
log_warnings1
long_query_time 3
low_priority_updatesOFF
lower_case_file_system  OFF
lower_case_table_names  0
max_allowed_packet  16776192
max_binlog_cache_size   18446744073709551615
max_binlog_size 104857600
max_connect_errors  1
max_connections 250
max_delayed_threads 20
max_error_count 64
max_heap_table_size 209715200
max_insert_delayed_threads  20
max_join_size   18446744073709551615
max_length_for_sort_data1024
max_prepared_stmt_count 16382
max_relay_log_size  0
max_seeks_for_key   18446744073709551615
max_sort_length 1024
max_sp_recursion_depth  0
max_tmp_tables  32
max_user_connections0
max_write_lock_count18446744073709551615
multi_range_count   256
myisam_data_pointer_size6
myisam_max_sort_file_size   9223372036854775807
myisam_recover_options  OFF
myisam_repair_threads   1
myisam_sort_buffer_size 8388608
myisam_stats_method nulls_unequal
ndb_autoincrement_prefetch_sz   32
ndb_force_send  ON
ndb_use_exact_count ON
ndb_use_transactions

stuck commits

2009-01-12 Thread Scott Edwards
All too frequently, I see commits stuck in this database.  What can I do to 
speed that up?  Or abort if it takes more than 40 seconds?  This query, for 
example, appears to have taken 443 seconds so far.

From mysqladmin processlist:

Id| User | Host | db | Command |Time | State | Info
14010 | amavis | mx:53008 | amavis | Query   | 443  | | commit

mysqld  Ver 5.0.32-Debian_7etch8-log for pc-linux-gnu on x86_64 (Debian etch 
distribution)

I recompiled it once, but the debug symbols are still missing.  The build 
transcript didn't include -g during compile.  I'm looking into redoing that 
now.

Thanks in advance,


Scott Edwards

---




Locking database when 'creating sort index'

2009-01-07 Thread David Scott
When we run a large query other queries start to back up when the large one
gets to the 'creating sort index' phase, this lock seems to affect the whole
server, all databases... does anyone know what may be causing this?
Thanks in advance
--
David Scott


Re: Locking database when 'creating sort index'

2009-01-07 Thread David Scott
1) InnoDB
2) 5.0.51 on Linux
3) No, a Select with a bunch of Joins, a Where, group and order
4) 37 seconds
5) Yes
6) Show Processlist does not show anything, just the user, what are you
looking for?

2009/1/7 mos mo...@fastmail.fm

 At 11:20 AM 1/7/2009, you wrote:

 When we run a large query other queries start to back up when the large
 one
 gets to the 'creating sort index' phase, this lock seems to affect the
 whole
 server, all databases... does anyone know what may be causing this?
 Thanks in advance
 --
 David Scott


 David,
   Can you provide us with more info?

 1) Is this an InnoDb table or  MyISAM?
 2) What version of MySQL are you using?
 3) Are you using Create Index or Alter Table? Can you give us the syntax
 you are using?
 4) How long does it take? Can you give us the table structure  # of
 indexes?
 5) Are these queries that are backed up, referencing the table you are
 building the index on?
 6) Can you provide us with a Show Process List?

 This should help the members of this list give you a better more informed
 answer.

 Offhand I suspect your key_buffer_size may be too low and MySQL is
 attempting to build the index on disk rather than in memory. If the index
 can be built in memory it will be 10x faster than building the index on
 disk. That is why adding as much ram as possible to your server will help.

 This is set in your my.cnf file:

 # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
 # Do not set it larger than 30% of your available memory, as some memory
 # is also required by the OS to cache rows. Even if you're not using
 # MyISAM tables, you should still set it to 8-64M as it will also be
 # used for internal temporary disk tables.
 key_buffer_size=500M

 If you increase your key_buffer size from the default value to 30% of your
 memory, you should get indexes built faster.

 Mike






Re: Locking database when 'creating sort index'

2009-01-07 Thread David Scott
Oh, and we increased key_buffer_size to 1200M (30% of RAM); no change.






Hypothetically looking for Kevin Bacon

2008-12-18 Thread Scott Haneda
Hello mysql list, I am looking for a little guidance on a potential  
query.  I am sure most people have heard of the limited degrees of  
separation one has to become connected to another person.  This is  
much like the myspace in your extended network idea, though I would  
like my implementation to not be broken.


If person A exists in mysql, as well as persons B through Z, and some  
are related or known to each other, they have a relationship.  Very  
much a family tree.  While I can see how one would start at person A,  
and look for the relationship to another person, thereby drilling down  
to other known links...


I would like to take it one step further.  I think celebrities are the  
best example, so I would be able to enter in myself, and ask the  
question: show me a tree of how I am closest to x celebrity.


Since this is not a real project, and just a hypothetical question,  
assume that the links are known to be valid, and there is some proven  
way to key one person to another that is accurate.  For example, in  
this case, we would use an honor system where Person A logs in, and  
selects all the other persons that they know.


What would be the query to find out all the relationships of person A  
to x celebrity?  How expensive is this query?


I am looking to build something as a real project that will use this  
idea as a feature, albeit in a different way, but the end result will  
be the same.  I get the feeling this is one, that if it is a used and  
popular service, the database design before it is made will be highly  
important.


I smell a bit more recursion in this idea that I am used to, and also  
feel that doing most of the work in the DB layer, and not in the  
application layer, is going to give me the best performance.


Does anyone have suggestions, or perhaps examples of this in working  
use?  Maybe there is a known algorithm or function that gets close to  
this already.
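
For the honor-system link table, here is a sketch of a schema and a fixed-depth query (all names are hypothetical). Each additional degree of separation is one more self-join; MySQL 5 has no recursive queries, so arbitrary depth means iterating hop by hop in the application, a breadth-first search over the link table:

```sql
CREATE TABLE person (
    id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

-- One row per directed "A knows B" link; insert both directions.
CREATE TABLE knows (
    person_id INT NOT NULL,
    knows_id  INT NOT NULL,
    PRIMARY KEY (person_id, knows_id)
);

-- Everyone exactly two hops from person 1 (friends of friends):
SELECT DISTINCT k2.knows_id
FROM knows k1
JOIN knows k2 ON k2.person_id = k1.knows_id
WHERE k1.person_id = 1
  AND k2.knows_id <> 1
  AND k2.knows_id NOT IN (SELECT knows_id FROM knows WHERE person_id = 1);
```

On cost: each extra hop multiplies the candidate set by the average number of links per person, so a six-degree search over a popular network is expensive in the database layer too, which is worth weighing before committing to pure SQL.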


Thank you
--
Scott





Re: SELECT locking tables.... in other databases

2008-11-27 Thread David Scott
show full processlist
userX is the user the site is using to connect
databaseX is the database in question

1976156, 'userX', 'localhost', 'databaseX', 'Sleep', 13, '', ''
13508974, 'dave', 'IPX:29212', 'databaseX', 'Sleep', 0, '', ''
13759139, 'sen', '1 IPX:32775', '', 'Sleep', 160, '', ''
13775621, 'sen', 'IPX:43603', 'databaseX', 'Sleep', 0, '', ''
13821806, 'dave', 'IPX:55885', '', 'Query', 0, '', 'show full processlist'
13821878, 'userX', 'localhost', 'databaseX', 'Sleep', 0, '', ''

Show innodb status:

'
=
081127 11:12:38 INNODB MONITOR OUTPUT
=
Per second averages calculated from the last 1 seconds
--
SEMAPHORES
--
OS WAIT ARRAY INFO: reservation count 164975, signal count 155476
Mutex spin waits 0, rounds 7441650, OS waits 120688
RW-shared spins 37873, OS waits 17328; RW-excl spins 24776, OS waits 4966

TRANSACTIONS

Trx id counter 0 25118320
Purge done for trx's n:o  0 25118124 undo n:o  0 0
History list length 89
Total number of lock structs in row lock hash table 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 19166, OS thread id 1096026448
MySQL thread id 13833472, query id 57762790 IPX dave
SHOW INNODB STATUS
---TRANSACTION 0 25063765, not started, process no 19166, OS thread id
1182529872
MySQL thread id 13508974, query id 57762327 IPX  dave
---TRANSACTION 0 0, not started, process no 19166, OS thread id 1176140112
MySQL thread id 13775621, query id 57762659 IPX  sen
---TRANSACTION 0 0, not started, process no 19166, OS thread id 1181997392
MySQL thread id 13759139, query id 57665031 IPX  sen
---TRANSACTION 0 25117242, not started, process no 19166, OS thread id
1188387152
MySQL thread id 1976156, query id 57760643 localhost beta_cc

FILE I/O

I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
28 OS file reads, 12527564 OS file writes, 12361532 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 79.92 writes/s, 79.92 fsyncs/s
-
INSERT BUFFER AND ADAPTIVE HASH INDEX
-
Ibuf: size 1, free list len 0, seg size 2,
0 inserts, 0 merged recs, 0 merges
Hash table size 2310107, used cells 513846, node heap has 859 buffer(s)
34.97 hash searches/s, 99.90 non-hash searches/s
---
LOG
---
Log sequence number 1 501773721
Log flushed up to   1 501773721
Last checkpoint at  1 500074343
0 pending log writes, 0 pending chkp writes
12329746 log i/o's done, 79.92 log i/o's/second
--
BUFFER POOL AND MEMORY
--
Total memory allocated 1201497898; in additional pool allocated 558592
Buffer pool size   64000
Free buffers   3328
Database pages 59813
Modified db pages  253
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 65, created 59748, written 573841
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000
--
ROW OPERATIONS
--
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
Main thread process no. 19166, id 1171347792, state: sleeping
Number of rows inserted 17853779, updated 10095603, deleted 18, read
948444635
13.99 inserts/s, 65.93 updates/s, 0.00 deletes/s, 98.90 reads/s

END OF INNODB MONITOR OUTPUT

'


I hope that helps
--
Dave


2008/11/27 Ananda Kumar [EMAIL PROTECTED]

 can u please do show full processlist  when the update is happening,  and
 if its innodb

 please do SHOW INNODB STATUS, which will give complete activity on innodb
 engine, including lock information.

 Please show use the output of these.

 regards
 anandkl


 On 11/27/08, David Scott [EMAIL PROTECTED] wrote:

 Hi list.We have 2 tables, both have a few inserts, many updates and the
 occasional select.
 When running a select joining the 2 tables (which can take upto 20 seconds
 to complete, they are large tables) all updates are blocked and the
 maxconnections is quickly reached.

 We tried copying the data to a 2nd database (in the same MySQL install) to
 run the select on that but for some reason that still prevents the updates
 on the original database, we watch the connections and we see them build
 up,
 when the select finishes they quickly clear.

 My question is how can we prevent this backing up of updates when running
 a
 select and why would doing a select on one database cause connections on
 another to back up?

 Thanks
 --
 David





SELECT locking tables.... in other databases

2008-11-27 Thread David Scott
Hi list. We have 2 tables; both have a few inserts, many updates and the
occasional select.
When running a select joining the 2 tables (which can take up to 20 seconds
to complete, they are large tables) all updates are blocked and
max_connections is quickly reached.

We tried copying the data to a 2nd database (in the same MySQL install) to
run the select on that but for some reason that still prevents the updates
on the original database, we watch the connections and we see them build up,
when the select finishes they quickly clear.

My question is how can we prevent this backing up of updates when running a
select and why would doing a select on one database cause connections on
another to back up?

Thanks
--
David


Re: SELECT locking tables.... in other databases

2008-11-27 Thread David Scott
Thanks for the improved query. The indexing didn't help much, and still the
main problem is that it locks all updates to the tables while it executes...
even if I am executing it on a copy of the tables in a different database.
--
Dave

2008/11/27 Chandru [EMAIL PROTECTED]

 Hi David,
  please create an index on the games_sessions_levels table on the column
 startTime, and the query can be rewritten as

 gSL.starttime BETWEEN unix_timestamp('2008-11-26') AND
 unix_timestamp('2008-11-27') AND gSL.endTime > 0

 gSL.startTime > 0 is redundant since the value that you verify is already
 between unix_timestamp('2008-11-26') and unix_timestamp('2008-11-27'),

 replacing:

 gSL.starttime > unix_timestamp('2008-11-26')
 AND gSL.startTime > 0 AND gSL.endTime > 0
 AND gSL.starttime < unix_timestamp('2008-11-27')
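
Spelled out, the suggested index looks like this (the index name is made up; the table and column are from the thread):

```sql
CREATE INDEX idx_starttime ON games_sessions_levels (starttime);
```

With roughly 4.2 million rows in that table, this is what lets the range predicate on starttime avoid the full scan visible in the EXPLAIN output later in the thread.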

 Regards,
 Chandru
 www.mafiree.com
 On Thu, Nov 27, 2008 at 5:50 PM, David Scott 
 [EMAIL PROTECTED] wrote:

 Ah yes, sorry, here it is:
  SELECT gS.gameid, g.gamename, COUNT(DISTINCT(gS.sessionid)) 'sessions',
 SUM(gSL.endTime - gSL.startTime)/COUNT(DISTINCT(gS.sessionid))/60 'average
 SESSION time',
 SUM(gSL.totalTime)/COUNT(DISTINCT(gS.sessionid))/60 'average PLAY time',
 SUM(gSL.totalTime)/60
 FROM databaseX.games_sessions_levels gSL
 JOIN databaseX.games_sessions gS ON gS.sessionid = gSL.sessionid
 JOIN databaseX.games g ON g.id = gS.gameid
 WHERE gSL.starttime > unix_timestamp('2008-11-26')
 AND gSL.startTime > 0 AND gSL.endTime > 0
 AND gSL.starttime < unix_timestamp('2008-11-27')
 GROUP BY gS.gameid
 ORDER BY SUM(gSL.totalTime) DESC



 CREATE TABLE  `databaseX `.`Xgames_sessions` (
   `sessionid` int(99) NOT NULL auto_increment,
   `playerid` varchar(32) NOT NULL,
   `gameid` int(99) NOT NULL,
   `starttime` int(20) NOT NULL,
   `zone` varchar(255) NOT NULL,
   `host` varchar(255) NOT NULL,
   `loadref` varchar(50) NOT NULL,
   PRIMARY KEY  (`sessionid`)
 ) ENGINE=InnoDB AUTO_INCREMENT=604907 DEFAULT CHARSET=latin1
 ROW_FORMAT=DYNAMIC;


 CREATE TABLE  `databaseX`.`Xgames_sessions_levels` (
   `id` int(99) NOT NULL auto_increment,
   `sessionid` int(99) NOT NULL,
   `levelnumber` int(99) NOT NULL,
   `levelname` varchar(50) default NULL,
   `starttime` int(20) NOT NULL,
   `endtime` int(20) NOT NULL,
   `totaltime` int(20) NOT NULL default '0',
   `info` int(11) NOT NULL,
   `score` int(99) NOT NULL,
   `done` tinyint(1) NOT NULL default '0',
   `zone` varchar(50) NOT NULL,
   PRIMARY KEY  (`id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=4213995 DEFAULT CHARSET=latin1
 ROW_FORMAT=DYNAMIC;

   --
 Dave

 2008/11/27 Chandru [EMAIL PROTECTED]

 Hai david,
without seeing the query i cant comment if the index that is created
 is efficient or not. Please send the query if you need more help.

 Regards,
 Chandru
 www.mafiree.com

   On Thu, Nov 27, 2008 at 5:36 PM, David Scott 
 [EMAIL PROTECTED] wrote:

 We only indexed the id on the table as it has many, many updates and very
 rarely do we select; we assumed this would be more efficient? --
 Dave

 2008/11/27 Chandru [EMAIL PROTECTED]

  Hi David,
   I find that the query is going for a full table scan. i think u need
 to optimize the query. Can you please send the original query and also the
 output of  show index from tablename;

 Regards,
 Chandru.
 www.mafiree.com

   On Thu, Nov 27, 2008 at 5:18 PM, David Scott 
 [EMAIL PROTECTED] wrote:

 We are using innodb.
 EXPLAIN on the select:
  1, 'SIMPLE', 'gSL', 'ALL', '', '', '', '', 4210688, 'Using where;
 Using temporary; Using filesort'
 1, 'SIMPLE', 'gS', 'eq_ref', 'PRIMARY', 'PRIMARY', '4',
 'databaseX.gSL.sessionid', 1, ''
 1, 'SIMPLE', 'g', 'eq_ref', 'PRIMARY', 'PRIMARY', '4',
 'databaseX.gS.gameid', 1, ''

 We are baffled by how doing it on one DB can lock updates in another
 --
 Dave


 2008/11/27 Pradeep Chandru [EMAIL PROTECTED]

  Hi David,
   can you please let me know what is the select query and the update
 query along with the explain plan of the same.
can you please let me know if you are using innodb storage engine?

 Regards,
 Chandru
 www.mafiree.com


mysqlimport failed to import

2008-09-19 Thread Scott Hamm
I created a csv file entitled 'disposed.csv' and placed it in 
computer_inventory data folder with the following inside:

1087
1046
1086
1161
1049
1178
1029
1030
1224
1044
1106

Now I created the table 'disposed' as follows:

CREATE TABLE disposed (
mot_id INT(4) UNIQUE NOT NULL
);

Then, inside the computer_inventory data folder, I issued:

mysqlimport -u root -p computer_inventory disposed.csv

And got the error:

mysqlimport: Error: Data truncated for column 'mot_id' at row 1, when using 
table: disposed


What am I doing wrong?
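
One guess worth ruling out (an assumption, not something visible in the post): a CSV written on Windows ends each line with \r\n, and the trailing \r cannot be stored in an INT column, which surfaces as exactly this truncation error at row 1. Telling mysqlimport the real line terminator is a cheap test:

```
mysqlimport -u root -p --lines-terminated-by="\r\n" computer_inventory disposed.csv
```

Note that mysqlimport derives the target table name from the file name ('disposed' from disposed.csv), so the file/table pairing above is already correct.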



Anyone know what to do about OS X 10.5 PPC

2008-08-08 Thread Scott Haneda
I can not find a pkg installer for OS X 10.5 on PPC, any suggestions?   
Would I want the 64 or 32 bit version of this as well?


I also need to know a simple php 5 installer that works on 10.5 for  
PPC, I am growing very tired of building that one out all the time,  
and Entropy seems to have fallen off the planet.


Thanks
--
Scott
[EMAIL PROTECTED]





Re: Anyone know what to do about OS X 10.5 PPC

2008-08-08 Thread Scott Haneda
I think I got this worked out, you can use the 10.4 installer for OS X  
on PPC.  Use the 32 bit one for G4, and the 64 bit one for G5, or you  
can still use the 32 bit one on the G5 if you want to.


Just install the app, do not install the startup item or the prefpane,  
they do not work, and need modifying.  Since this is a server, you  
want it to run all the time anyway, so I see no need for on demand.


The repaired prefpane I have seen, still uses a StartUp Item, which is  
deprecated as of 10.4 as far as I know.


Solution:
in /Library/LaunchDaemons
create com.mysql.mysqld.plist

Contents should be:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
   <key>Disabled</key>
   <false/>
   <key>GroupName</key>
   <string>_mysql</string>
   <key>KeepAlive</key>
   <true/>
   <key>Label</key>
   <string>com.mysql.mysqld</string>
   <key>Program</key>
   <string>/usr/local/mysql/bin/mysqld</string>
   <key>ProgramArguments</key>
   <array>
   <string>--user=_mysql</string>
   <string>--lower_case_table_names=2</string>
   </array>
   <key>RunAtLoad</key>
   <true/>
   <key>Umask</key>
   <integer>7</integer>
   <key>UserName</key>
   <string>_mysql</string>
   <key>WorkingDirectory</key>
   <string>/usr/local/mysql</string>
</dict>
</plist>

I am not sure about the array item for lower_case_table_names, but I  
know OS X is case insensitive, so I know forcing a 0 would be bad, and  
not sure about a 1, but the 2 is set with a warning anyway, so I just  
wanted the warning out of my logs.


now in terminal just type
sudo launchctl load /Library/LaunchDaemons/com.mysql.mysqld.plist
mysql will start, you can use unload to stop it.

It will start back up again after a reboot, and even after a crash it will  
start back up again.


The default php install on 10.5 is junk, and needs a lot of messing  
with to get to work, so I am still looking for a way to solve that  
right now.


I can not find a pkg installer for OS X 10.5 on PPC, any  
suggestions?  Would I want the 64 or 32 bit version of this as well?


I also need to know a simple php 5 installer that works on 10.5 for  
PPC, I am growing very tired of building that one out all the time,  
and Entropy seems to have fallen off the planet.


Thanks


--
Scott
[EMAIL PROTECTED]





improving random record selection

2008-05-17 Thread Scott Haneda
I posted this a month or so ago, and was helped a little, but I am now  
back.


Currently I use select x, y, z from images where (condition) order by  
rand() limit 1;


As most know, it is slow; depending on the record set, and what I  
compare it to, it can be from one order of magnitude slower to several.


I have cobbled together a solution, but on occasion it returns an  
empty result set, which is causing me grief. I should mention that there  
are holes in my id column, and I need to select rows based on a  
condition.


I do not fully understand the SQL below either; if someone could step  
me through it, I would certainly appreciate it, though my main goal is  
to figure out why I get an empty set on occasion.


$sql = "SELECT storage_path, image_md5, t.id
    FROM images AS t
    JOIN (SELECT CEIL(MAX(id)*RAND()) AS id FROM images) AS x
      ON (t.id = x.id) AND (t.approved = 1) AND (t.ip_address != '$exclude_ip')
    LIMIT 1";

* I could almost live with the slow speed of an order by rand() but I  
find it has a less than even distribution.  While it indeed may be  
very random, I am looking for a pretty flat response in distribution  
of returned records over time.	
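One gap-tolerant variant (a sketch, not a definitive fix for the poster's schema): pick a random id, then take the first qualifying row at or above it with `>=` plus `ORDER BY id LIMIT 1`, so a hole can never yield an empty set. Caveat relevant to the flat-distribution goal above: rows sitting just after a large hole get picked more often. Demonstrated here with SQLite and invented table contents:

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, approved INTEGER)")
# Deliberate holes in the id sequence, mimicking the poster's table:
conn.executemany(
    "INSERT INTO images (id, approved) VALUES (?, 1)",
    [(i,) for i in (1, 2, 5, 9, 40)],
)

max_id = conn.execute("SELECT MAX(id) FROM images").fetchone()[0]
pick = random.randint(1, max_id)  # may land in a hole
# ">=" plus ORDER BY id LIMIT 1 slides forward off a hole instead of
# returning an empty set the way the equality join does:
row = conn.execute(
    "SELECT id FROM images WHERE approved = 1 AND id >= ? "
    "ORDER BY id LIMIT 1",
    (pick,),
).fetchone()
```

Because `pick` never exceeds `MAX(id)` and that row qualifies, the query always returns a row.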

--
Scott
[EMAIL PROTECTED]





Re: Age old order by rand() issue

2008-05-02 Thread Scott Haneda

I have seen nicer fast random row implement, but that will work.



Do you happen to have a snip of it, the one I have seems to lean  
pretty heavy as far as I can tell, and on occasion, though rare, also  
sends me an empty result set.

--
Scott
[EMAIL PROTECTED]





Age old order by rand() issue

2008-05-01 Thread Scott Haneda

List search seems to return 0 results, and I am a bit stumped.

Getting a more optimized order by random with 1 record...
I found a snip online that works, but seems to return an empty on  
occasion, and I am not sure why:


SELECT storage_path, image_md5, id
FROM images
JOIN (SELECT CEIL(RAND() * (SELECT MAX(id) FROM images WHERE approved = 1)) AS id) AS r2
  USING (id)
WHERE approved = 1;

I really do not get this; SELECT CEIL(RAND()) will always return 1, will  
it not?  Any idea why I get an empty result set at times?


I then managed to rig this together:
SELECT * FROM images AS t
 JOIN (SELECT CEIL(MAX(id)*RAND()) AS id
  FROM images WHERE approved = 1) AS x ON t.id = x.id LIMIT 1;

This works, but I get an odd result, in that the id column is listed  
twice, once at the beginning, where it is in the table, and once at  
the end.  Duplicate values of course.


If I narrow the * to a real called select, such as
SELECT id, storage_path, image_md5 FROM images AS t
 JOIN (SELECT CEIL(MAX(id)*RAND()) AS id
  FROM images WHERE approved = 1) AS x ON t.id = x.id LIMIT 1;

ERROR 1052 (23000): Column 'id' in field list is ambiguous

I can not seem to get past that error, and would like to call just the  
columns I want.  Or, if someone has a better random record return  
select, I am all ears.
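The ambiguity comes from both `images AS t` and the derived table `x` exposing a column named `id`; qualifying it as `t.id` in the SELECT list resolves the error. A minimal reproduction in SQLite (invented sample rows, same query shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE images (id INTEGER PRIMARY KEY, approved INTEGER);
INSERT INTO images (id, approved) VALUES (1, 1), (2, 1), (5, 1);
""")
# Both the base table (t) and the derived table (x) expose a column
# named id, so a bare "id" in the SELECT list is ambiguous; writing
# it as t.id resolves the error:
row = conn.execute("""
    SELECT t.id FROM images AS t
    JOIN (SELECT 2 AS id) AS x ON t.id = x.id
    LIMIT 1
""").fetchone()
```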


Thanks so much.

--
Scott
[EMAIL PROTECTED]





Re: A use for REGEXP, I think.

2007-12-10 Thread David Scott
I came up with a clunky solution: REGEXP
'lvlscore5:[6-9][0-9][0-9][0-9][0-9]';
This gives me all scores over 6000. I then add on another [0-9] and change
the [6-9] to [1-9] to do 10k+
As this was a one off and only returned 6 out of thousands of records it
was good enough for a quick and dirty manual edit.

I would of course have gone the separate table route to hold the name/value
pairs, but the main reason I went the string route is because the process that
reads and writes the string does all the work; there should never have been a
need for MySQL to know what it contains or to process it in any way. Also, as
there can be 20+ bits of data in this string and the table is going to be
pretty huge, it would have made for an even larger 2nd table. (By huge I
mean 1 million+ records.)

With this in mind would it still be best to have gone the 2nd table route?
--
Dave

On 10/12/2007, Rob Wultsch [EMAIL PROTECTED] wrote:

 On Dec 9, 2007 9:17 PM, Rob Wultsch [EMAIL PROTECTED] wrote:
 
  On Dec 9, 2007 10:53 AM, David Scott [EMAIL PROTECTED]
 wrote:
   I have a field in a DB that contains something like
   lvlscore4:4493,lvlscore5:3232,lvlscore6:6128
   I need to select all records where the lvlscore5: is greater than
 3000.
   I think this can be done using REGEXP but I have read the docs and am
 non
   the wiser for it. Can anyone point me in the right direction?
   --
   Thanks
   David Scott
  
  Insert lecture about correctly normalizing data.
 
  I think what you suggested is the wrong path for a successful outcome.
   As far as I can tell the regex engine in mysql only returns a
  booleans. http://dev.mysql.com/doc/refman/5.0/en/regexp.html . Someone
  correct me if I am wrong.
 
  This would work:
 
  SELECT 'lvlscore4:4493,lvlscore5:3232,lvlscore6:6128',
  SUBSTRING('lvlscore4:4493,lvlscore5:3232,lvlscore6:6128',
 
 LOCATE('lvlscore5','lvlscore4:4493,lvlscore5:3232,lvlscore6:6128')+length('lvlscore5')+1,
  4
 
  ) as 'lvlscore5'
 
  4 is  a magic number that will not work if the score is not always of
  length 4. If that is the case replacing the last argument of substring
  with a bit more logic will sort this problem.
 

 To be clear the regex should work, but it will probably take more work
 than what I just suggested. Using what I suggested in the where clause
 rather than as shown in the select clause will probably be the easiest
 solution.
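The magic-number problem in the LOCATE/SUBSTRING approach disappears if the extraction is done with a capture group in application code instead; a sketch in Python, using the sample string from the thread:

```python
import re

s = "lvlscore4:4493,lvlscore5:3232,lvlscore6:6128"
# Capture however many digits follow "lvlscore5:", so no
# fixed-length magic number is needed:
m = re.search(r"lvlscore5:(\d+)", s)
score = int(m.group(1)) if m else None
over_3000 = score is not None and score > 3000
```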




A use for REGEXP, I think.

2007-12-09 Thread David Scott
I have a field in a DB that contains something like
lvlscore4:4493,lvlscore5:3232,lvlscore6:6128
I need to select all records where the lvlscore5: is greater than 3000.
I think this can be done using REGEXP but I have read the docs and am non
the wiser for it. Can anyone point me in the right direction?
--
Thanks
David Scott


Re: Trigger problem

2007-11-13 Thread Scott

On Thu, 2007-11-08 at 17:56 -0800, Lucky Wijaya wrote:
 Yes, the trigger code is works. Many thanks !!
 Now I understand the use of delimiter command. Thanks again... =)
 
 My next question is, do we able to view the triggers that has been created ? 
 And how ?
 
 David Schneider-Joseph [EMAIL PROTECTED] wrote: My apologies, try this:
 
  DELIMITER ;;
 
  CREATE TRIGGER DEL_TB_INSTRUKSI AFTER DELETE ON Tb_Instruksi
  FOR EACH ROW
  BEGIN
   DELETE FROM Tb_Stuffing WHERE No_Instruksi = OLD.No_Instruksi;
  END;
  ;;
 
  DELIMITER ;
 
 To answer your question:
 
 The DELIMITER statement tells MySQL to use a different set of  
 characters to terminate statements.  This is necessary when you want  
 to use a ; in your actual statement.  In this case, the entire  
 trigger definition is considered one statement, but the ; in the  
 DELETE... line is being interpreted as the termination of it.
 
 Yes, it's dumb.
 
 On Nov 7, 2007, at 2:53 AM, Lucky Wijaya wrote:
 
  No, I didn't set the delimiter. But, it still have an error after I  
  set delimiter in my trigger as your example. By the way, what's  
  delimiter mean ? And what it's for ?
 
  Thanks to you Mr. David.
 
  David Schneider-Joseph  wrote: Lucky,
 
  Did you make sure to set your delimiter before and after the CREATE
  TRIGGER statement?  e.g.:
 
  DELIMITER ;;
 
  CREATE TRIGGER DEL_TB_INSTRUKSI AFTER DELETE ON Tb_Instruksi
  FOR EACH ROW
  BEGIN
   DELETE FROM Tb_Stuffing WHERE No_Instruksi = OLD.No_Instruksi;
  END;;
 
  DELIMITER ;
 
  On Nov 6, 2007, at 11:11 PM, Lucky Wijaya wrote:
 
  Hi, my name is Lucky from Indonesia. I build an database application
  using Delphi 7  MySQL as the RDBMS. Now, I'm having problem in
  creating trigger in MySQL. Here is the code of the trigger:
 
  CREATE TRIGGER DEL_TB_INSTRUKSI AFTER DELETE ON Tb_Instruksi
  FOR EACH ROW
  BEGIN
  DELETE FROM Tb_Stuffing WHERE No_Instruksi = OLD.No_Instruksi;
  END;
 
  It results an error message that the SQL syntax (on delete command)
  is incorrect. I didn't find yet the incorrect part of my SQL syntax.
  Could somebody help my problem ? Thank you very much.
 
  Note: I'm already using MySQL v. 5.0.41 and using GUI in creating
  the trigger. I also have tried to create the trigger through mysql
  command line, but it result the same error message.
 
  __
  Do You Yahoo!?
  Tired of spam?  Yahoo! Mail has the best spam protection around
  http://mail.yahoo.com
 
 
 
 
 
 
 
 
 
 

mysql> SHOW TRIGGERS;
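The trigger body itself is ordinary SQL once the client-side DELIMITER issue is out of the way; a sketch of the same cascading delete in SQLite (whose parser handles the BEGIN...END block without any delimiter juggling), reusing the thread's table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Tb_Instruksi (No_Instruksi INTEGER PRIMARY KEY);
CREATE TABLE Tb_Stuffing  (No_Instruksi INTEGER);

CREATE TRIGGER DEL_TB_INSTRUKSI AFTER DELETE ON Tb_Instruksi
FOR EACH ROW
BEGIN
    DELETE FROM Tb_Stuffing WHERE No_Instruksi = OLD.No_Instruksi;
END;

INSERT INTO Tb_Instruksi VALUES (1);
INSERT INTO Tb_Stuffing  VALUES (1);
DELETE FROM Tb_Instruksi WHERE No_Instruksi = 1;
""")
# The AFTER DELETE trigger should have removed the dependent row:
remaining = conn.execute("SELECT COUNT(*) FROM Tb_Stuffing").fetchone()[0]
```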





Re: SAN and ibdata files

2007-10-08 Thread Scott Tanner
Rajan,
  In a SAN configuration, the lun/volume should appear just like a
locally attached physical disk on your server. MySQL will not need any
special configuration to access it; just point the datadir to the SAN
disk.

Regards,
Scott Tanner


On Mon, 2007-10-08 at 11:06 +0530, Ace wrote:
 Thanks Dan! We do know the permission problem. We tried chown but got Not a
 owner error. That is because of root squashing in which root becomes
 nobody when you cd to NFS directory.
 To overcome this we tried adding no_root_squash to export file. but
 still the error.
 
 I will overcome this somehow.but my Biggest worry now is - can I
 configure Mysql to use SAN location? if not, why?
 
 Thanks,
 Rajan
 On 10/7/07, Dan Nelson [EMAIL PROTECTED] wrote:
 
  In the last episode (Oct 06), Ace said:
   Hi Experts,
  
 You all know the size problems with ibdata and log files. We plan
  to
   move those files to SAN to have maximum storage possible. We did so with
   configuration changes in my.cnf files to save log and ibdata files on
  SAN
   location. But got following error -
  
   InnoDB: Unable to lock /opsolnfs/mysql/ibdata1, error: 13
   InnoDB: Check that you do not already have another mysqld process
   InnoDB: using the same InnoDB data or log files.
  
  Any concrete steps to overcome this problem? Your input will add
   value in solving our problem.
 
  $ perror 13
  OS error code  13:  Permission denied
 
  Check permissions on your data directory.
 
  --
 Dan Nelson
 [EMAIL PROTECTED]
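Dan's point, spelled out: the number after `error: 13` (or `Errcode: 13` elsewhere in the archive) is the operating-system errno rather than a MySQL error code, and `perror` simply translates it. Python can do the same lookup:

```python
import errno
import os

# The number after "error: 13" / "Errcode: 13" is the OS errno,
# not a MySQL error code; this is what the perror tool translates:
name = errno.errorcode[13]   # EACCES
message = os.strerror(13)    # Permission denied
print(name, message)
```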
 





Re: SAN and ibdata files

2007-10-08 Thread Scott Tanner
Rajan,
  I believe you are talking about a NAS setup, which is different from
the SAN configuration I was referring to. Check out 
http://www.nas-san.com/differ.html for a quick comparison.

  With NFS volumes, you're adding several layers of overhead when
accessing the disks, which has a drastic effect on performance. I'm not
sure what the exact locking issue would be, but I'd imagine it would
have to do with the added latency.

  I have heard of a few people using NFS on small, mostly read databases
though. So you may not have any issues depending on the type and amount
of usage your server would be getting.

Regards,
Scott

On Mon, 2007-10-08 at 18:06 +0530, Ace wrote:
 Yes Scott! Only thing that confused and still confusing me is the following
 note on mysql site -
 
 *Note*
 It is not a good idea to configure InnoDB to use datafiles or logfiles on
 NFS volumes. Otherwise, the files might be locked by other processes and
 become unavailable for use by MySQL. 
 
 http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html
 
 Why other processes will keep files locked if its on NFS volumes? and which
 are the processes they are referring?
 
 Thanks,
 Rajan
 
 On 10/8/07, Scott Tanner [EMAIL PROTECTED] wrote:
 
  Rajan,
  In a SAN configuration, the lun/volume should appear just like a
  locally attached physical disk on your server. MySQL will not need any
   special configurations to access it, just point the datadir to the SAN
  disk.
 
  Regards,
  Scott Tanner
 
 
  On Mon, 2007-10-08 at 11:06 +0530, Ace wrote:
   Thanks Dan! We do know the permission problem. We tried chown but got
  Not a
   owner error. That is because of root squashing in which root becomes
   nobody when you cd to NFS directory.
   To overcome this we tried adding no_root_squash to export file.
  but
   still the error.
  
   I will overcome this somehow.but my Biggest worry now is - can I
   configure Mysql to use SAN location? if not, why?
  
   Thanks,
   Rajan
   On 10/7/07, Dan Nelson [EMAIL PROTECTED] wrote:
   
In the last episode (Oct 06), Ace said:
 Hi Experts,

   You all know the size problems with ibdata and log files. We
  plan
to
 move those files to SAN to have maximum storage possible. We did so
  with
 configuration changes in my.cnf files to save log and ibdata files
  on
SAN
 location. But got following error -

 InnoDB: Unable to lock /opsolnfs/mysql/ibdata1, error: 13
 InnoDB: Check that you do not already have another mysqld process
 InnoDB: using the same InnoDB data or log files.

Any concrete steps to overcome this problem? Your input will
  add
 value in solving our problem.
   
$ perror 13
OS error code  13:  Permission denied
   
Check permissions on your data directory.
   
--
   Dan Nelson
   [EMAIL PROTECTED]
   
 





Design decision

2007-10-03 Thread Scott Haneda
I have a table of objects; attached to those objects are keywords.
Users submit the keywords to the objects.

Currently, I chose to keep a hit count on the keywords, so if a duplicate
keyword is supplied, a counter is incremented.

I thought this was a good idea, as it keeps the number of rows in the
keywords table to a minimum.

However, this is a user login based system, and with the above, I lose the
ability to track which users sent in which keywords.

So I can move to the keywords table storing duplicate keywords, and each
keyword will get a user_id attached to it, but, that table will grow.  Fast.

1000 objects, each with 10 keywords only = 10,000, I could hit many millions
very fast.

I could toss in a third table, and relate that to the user_id and keyword.
However, I am inserting in one go, and that could be upwards of 30 or more
inserts, taking too long.

Anyone got any suggestions?  Thanks.
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: Design decision

2007-10-03 Thread Scott Haneda
 Scott Haneda wrote:
 I have a table of objects; attached to those objects are keywords.
 Users submit the keywords to the objects.
 
 Currently, I chose to keep a hit count on the keywords, so if a duplicate
 keyword is supplied, a counter is incremented.
 
 I thought this was a good idea, as it keeps the number of rows in the
 keywords table to a minimum.
 
 However, this is a user login based system, and with the above, I lose the
 ability to track which users sent in which keywords.
 
 Why do you need this info? Is there a reporting need or something else
 you need to keep this information for? Ie when are you going to need to
 know who inserted a particular keyword?

Because it is a semi-public system, every user is allowed to supply keywords
to other users objects.  If some not so nice person decides to paste in a
list of words to 'game' that object up the ranks, I want to know who, and be
able to take action.

Looking over the php now, thanks.
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






single line inserts with on duplicate key

2007-10-03 Thread Scott Haneda
Is it possible, in single-line inserts, with on duplicate key clauses, to
get back a list of last insert id's for what was inserted?

I get strange results: just one single insert id, which makes sense from the
perspective of what was just inserted. However, I need to know what the
returned insert id is for each of an arbitrary number of single-line
inserts.

Defining what I am calling single-line inserts, I mean:
INSERT INTO foo (a, b, c) VALUES ('x', 'y', 'x'), ('x', 'y', 'x'), ('x',
'y', 'x'), etc etc
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Query to find less than 3 char string

2007-09-20 Thread Scott Mulder
Just wondering if anyone out there may have a workaround to being stuck on a 
shared server with the default string matching set to more than 3 chars.  I 
know that Navicat searches for 3 and under chars on my catalog when it's 
connected and that offered me some form of hope.

On top of that I'm wondering if anyone may have a link to what they think the 
best search algorithm out there may be for a shopping website that needs to 
search three columns (product name, brands and categories) with plurals and 
possessives added and removed (meaning es, 's, and s).

Thanks so much!

Scott

From: Jerry Schwartz [EMAIL PROTECTED]
Sent: Thursday, September 20, 2007 1:28 PM
To: 'thomas Armstrong'; 'Michael Dykman'
Cc: mysql@lists.mysql.com
Subject: RE: Query to find foo within (foo)

If you want to match only (johnie) or  johnie , then you could use a
regular expression test. They can get as complicated as your brain will
tolerate.
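A "johnie as a standalone word, whatever punctuation surrounds it" pattern can be prototyped outside the database first; the sketch below is Python, and something similar should translate to MySQL's REGEXP (worth verifying against your server's regex dialect):

```python
import re

# Anything that is not a letter or digit (or the string edge) may
# appear on either side of the word:
pat = re.compile(r"(^|[^a-z0-9])johnie([^a-z0-9]|$)", re.IGNORECASE)

matches = [bool(pat.search(s)) for s in
           ["(johnie)", "johnie-sullivan", "John (Johnie)",
            "johnies", "aljohnier"]]
# First three match; "johnies" and "aljohnier" do not.
```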

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com
www.giiexpress.com
www.etudes-marche.com


 -Original Message-
 From: thomas Armstrong [mailto:[EMAIL PROTECTED]
 Sent: Thursday, September 20, 2007 3:52 AM
 To: Michael Dykman
 Cc: mysql@lists.mysql.com
 Subject: Re: Query to find foo within (foo)

 Thank you Michael for your answer.

 On 9/19/07, Michael Dykman [EMAIL PROTECTED] wrote:
  The whitespace counts...  try LIKE '%johnie%' (or better
 '_johnie_' ..
   the underscorematches any single character).
 I want to match '(johnie)' and not 'johnies' or 'aljohnier', what it's
 the query does with '_'.
 If I include whitespaces, I match only the word 'johnie'.

 But I've got some texts like:
 * (johnie)
 * johnie-sullivan
 * johnie.sullivan
 and I want to find them. I suppose the right way is create
 another condition:
 firstname LIKE '%(johnie %' OR '% johnie)%' OR '%(johnie)%'
 OR '%.johnie %' ...

  Bear in mind that, with
  a LIKE variable as the first element in your string, this query will
  do a full table scan every time.  If you get a lot of
 records in here,
  that's going to kill you.
 You're right, but I didn't find another better way. What do you think
 it's better?

 
   - michael
 
 
  On 9/19/07, thomas Armstrong [EMAIL PROTECTED] wrote:
Hi.
  
   I've got this table in mySQL:
  
   item 1:
   -- firstname: John (Johnie)
   -- phone: 555-600-200
  
   item 2:
   -- firstname: Peter
   -- phone: 555-300-400
  
   I created this SQL query to find 'johnie':
   
   SELECT friends.id FROM friends WHERE ((friends.firstname LIKE '%
   johnie %' OR friends.firstname LIKE 'johnie %' OR
 friends.firstname
   LIKE '% johnie' OR friends.firstname LIKE 'johnie' OR
 friends.phone
   LIKE '% johnie %' OR friends.phone LIKE 'johnie %' OR
 friends.phone
   LIKE '% johnie' OR friends.phone LIKE 'johnie')) ORDER BY
   friends.firstname LIMIT 0, 
   
  
   But it doesn't match anything, because it's considers
 (johnie) as a
   single word. Is there any way to make mySQL consider (johnie) as
   johnie.
  
   I know I can create another condition within my query:
   firstname LIKE '(johnie)' OR firstname LIKE '(johnie%' OR
 firstname
   LIKE '%johnie)'
   but I also might consider other characters like '  - *
  
   Any suggestion?
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
  
  
 
 
  --
   - michael dykman
   - [EMAIL PROTECTED]
 
   - All models are wrong.  Some models are useful.
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]









Re: Slave Misbehavin'

2007-09-19 Thread Scott Tanner
   Maybe I missed this in the text below, but are you trying to daisy
chain the slaves (master - slave 1 - slave 2) or have multiple slaves
connecting to one master?

  Is slave 1 configured with log-slave-updates?


Regards,
Scott 


On Wed, 2007-09-19 at 12:31 -0500, [EMAIL PROTECTED] wrote:
 Hi,
 
 [EMAIL PROTECTED] wrote:
  Howdy,
  
  I'm trying to add a second slave, slave2, running MySQL 5.0.22 on CentOS 5,
  to our system that currently has one master and one slave, slave1, running
  4.0.24, and somehow slave2 ends up with too many records in many
  of the 30 tables in the database.
  
  Steps taken:
  
  1. Stopped new records from being inserted into the master, and 
 confirmed 
  with count(*)'s that both master and slave1 were in a static state.
  
  2. Stopped mysqld and commented out in my.cnf the master connection 
  parameters (user, host, password, port) on slave2.
  
  3. Deleted master.info, all mysql-bin and relay-bin files from the mysql 
 
  data directory on slave2. 
  
  4. Deleted all .MYD, .MYI, and .frm files from the replication database 
  directory on slave2. 
  
  5. rsync'd the .MYD, .MYI, .frm files from slave1 to slave2.
 
 - And in the meantime, slave1's data is being changed because the master 
 - is sending it replication events, no?  You need to run STOP SLAVE on 
 - slave1 before rsyncing it.  After STOP SLAVE, run SHOW SLAVE STATUS and 
 - record the output, then rsync, then START SLAVE on slave1 again.
 
 I don't think so. I stopped all activities on the master (step 1), and 
 slave1 
 therefore shouldn't have any changes made to it. I should have noted that 
 only
 inserts are done on the master -  no updates or deletes. 
 
  6. Restarted mysqld on slave2 (now not running as a slave).
  
  7. Confirmed that record counts were consistent across master, slave1 
 and 
  slave2.
  
  8. Stopped mysqld on slave2, uncommented master connection parameters in 
 
  my.cnf, and restarted mysqld.
  
  9. Got log file and log position parameters with 'show master status' on 
 
  the master.
 
 - TOO LATE.  The horse has left the barn and you're closing the door 
 - behind it!  You should instead get the replication coordinates from 
 - slave1 with SHOW SLAVE STATUS during step 5.  You're cloning slave2 from 
 
 - slave1, so slave2 tells the truth, not the master, which has done a 
 - whole bunch of work while you were going through these steps.
 
 No, slave1 can't do any work except as directed by the master, which has 
 had
 all activities stopped on it. 
  
  10. Ran 'Change master to... with all fields filled in.
  
  11.  Ran 'slave start' on slave2.
  
  12. Rechecked record counts on slave2, and they were too large and out 
 of 
  sync with slave1 and master.
  
  I poked around in the data on slave2 and found a number of records had 
  been duplicated, and that accounted for the higher record counts. 
  
  After starting the application that inserts data into the master, I 
  determined that new records are being inserted correctly into slave2.
  
  Seriously out of ideas here.
  
  Thanks,
  
  David 





OS X binary installer

2007-09-19 Thread Scott Haneda
Looking for the OS X 4.1 binary package installer, I can not seem to locate
this on the site, I just find a source style, and a tar style.  I am pretty
sure in the past, I was able to have a double clickable installer, and it
had a preference pane for starting and stopping the service.
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Hypothetical design question regarding keyword searching

2007-07-19 Thread Scott Haneda
I have been looking at stock photo sites lately, and started wondering how they
are doing their keyword searches. Given a potential for millions of images,
each with x keywords, I have come up with two approaches...

Approach one
Images table, with a parent id
Keywords table, each keyword would be its own row, and also link back to the
parent id.

You could then search for the keyword, get the parent ids, and do an IN
(...) search against the images, thereby pulling up the images that have
those keywords.

Potentially problematic: assuming millions of images, and an average of 10
keywords per image, you end up with a keyword table that has tens of
millions of rows, along with lots of duplicate keywords.

Approach two
Use a link table; this resolves the duplicate keyword issue, and I am sure
there will be many dupes.  However, it adds a third table, which would
complicate the JOIN query.

Are there other approaches?  Which approach would yield the best performance
as growth becomes an issue?
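Approach two, sketched end to end in SQLite (table and column names invented): the link table keeps each keyword stored exactly once, and the three-table JOIN is shorter than it sounds:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE images   (id INTEGER PRIMARY KEY);
CREATE TABLE keywords (id INTEGER PRIMARY KEY, word TEXT UNIQUE);
CREATE TABLE image_keywords (
    image_id   INTEGER REFERENCES images(id),
    keyword_id INTEGER REFERENCES keywords(id),
    PRIMARY KEY (image_id, keyword_id)
);
INSERT INTO images   VALUES (1), (2);
INSERT INTO keywords VALUES (1, 'sunset'), (2, 'beach');
INSERT INTO image_keywords VALUES (1, 1), (1, 2), (2, 1);
""")
# One JOIN query answers "which images carry this keyword":
ids = [r[0] for r in conn.execute("""
    SELECT i.id
    FROM images AS i
    JOIN image_keywords AS ik ON ik.image_id = i.id
    JOIN keywords AS k ON k.id = ik.keyword_id
    WHERE k.word = 'sunset'
""")]
```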
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






User variables in update statement

2007-07-10 Thread Scott Haneda
Is it possible to use user variables in an update statement, I can find only
scant docs on it, but they do not pertain to what I am trying to do.

Given this select:

SELECT user_id, order_status, quantity_chosen, month_price, each_price,
sales_tax, sales_tax_rate,
@NEW_each_price:=(each_price + .06) as NEW_each_price,
@NEW_month_price:=(quantity_chosen * @NEW_each_price) as NEW_month_price,
@postage_rate:=(0.30) as postage_rate,
@cost_of_postage:=(quantity_chosen * @postage_rate) as postage,
@taxable_price:=(@NEW_month_price - @cost_of_postage) as taxable,
(@taxable_price * sales_tax_rate) as NEW_sales_tax

As you can see, I am using variables to make some calcs, sure, I can do it
long hand, but it gets long and confusing.

Given the above, I end up with some vars like NEW_month_price,
NEW_each_price etc, I would like to update ... SET price = NEW_each_price,
but of course, that does not work.

Suggestions?
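If the UPDATE cannot be made to see the variables, one fallback is to do the chained arithmetic long-hand in application code and send plain values to the UPDATE. The same dependency chain as the SELECT above, with made-up numbers (none of these values come from the thread):

```python
# Illustrative inputs only:
each_price      = 1.00
quantity_chosen = 10
sales_tax_rate  = 0.08

new_each_price  = each_price + 0.06                  # @NEW_each_price
new_month_price = quantity_chosen * new_each_price   # @NEW_month_price
postage_rate    = 0.30                               # @postage_rate
cost_of_postage = quantity_chosen * postage_rate     # @cost_of_postage
taxable_price   = new_month_price - cost_of_postage  # @taxable_price
new_sales_tax   = taxable_price * sales_tax_rate     # NEW_sales_tax
```

Each computed value is then available by name for the SET clause of the UPDATE.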
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Import a text file

2007-07-06 Thread Scott Swaim
I have a text file that is in the Panasonic DBS format.  It is a log file
that comes from my phone system.  I have a program that monitors the serial
port and writes the input to a file.  I have no control over the format of
the file.  I would like to shut down the logging program on a nightly basis
and then upload this file to MySQL.  What I don't know how to do is have
the file split into the correct fields.


Here is a sample of the file


MO 07/03 13:58:00 00:00:34 103 1817295978204
MO 07/03 13:58:00 00:00:24 103 8175177982 05
MO 07/03 13:59:00 00:00:23 103 8175177982 04
MO 07/03 13:53:00 00:07:25 105 18004445445000 06
MI 07/03 14:00:00 00:01:55 100 8177740741 01



It is a fixed format, so I could probably just write a script that parses the
file, or one that adds a comma in the correct locations.

Does anyone have any suggestions or already have something that does this?
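One option is a small parser that splits each line before loading. The sketch below assumes the columns are direction, date, time, duration, extension, number, and an optional trailing code (names invented from the sample, not from any DBS documentation); lines where the number runs into the code would need true fixed-width slicing instead of whitespace splitting:

```python
SAMPLE = """\
MO 07/03 13:58:00 00:00:34 103 1817295978204
MI 07/03 14:00:00 00:01:55 100 8177740741 01
"""

def parse_line(line):
    # Assumed column meanings; the trailing code is optional:
    parts = line.split()
    return {
        "direction": parts[0],
        "date":      parts[1],
        "time":      parts[2],
        "duration":  parts[3],
        "extension": parts[4],
        "number":    parts[5],
        "code":      parts[6] if len(parts) > 6 else None,
    }

rows = [parse_line(l) for l in SAMPLE.splitlines()]
```

The resulting dicts can be written out as CSV for LOAD DATA INFILE, or inserted directly.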


Thanks in Advanced
Scott Swaim






How do you reference custom column names

2007-06-19 Thread Scott Haneda
SELECT 1+1 as foo, 2 as bar, foo+bar

This will not work, but I think you can see what I am trying to do.  I need
to run a pretty hefty update on a database, and there are some pretty heavy
calculations I will be doing.  The result of many of those, needs to be
further used to make updates on other columns.

Can someone suggest a method to do this so my SQL is not so non-manageable?

Something like this:

SELECT price*tax_rate+something-other

Then I need to use the result of the above calc right away in some other
field, which would be:

SELECT price*tax_rate+something-other, ((price*tax_rate+something-other)+.6)

Where what I really want to do is:

SELECT price*tax_rate+something-other as foo, (foo+.6)
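One way to reuse a computed value by name is to compute it in a derived table, which makes the alias a real column the outer query can reference; a minimal SQLite demonstration of the shape (the 1+1/0.6 numbers are just placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An alias defined in a SELECT list is not visible to sibling
# expressions, but a derived table turns the alias into a real
# column name the outer query can reuse:
row = conn.execute(
    "SELECT foo, foo + 0.6 FROM (SELECT 1 + 1 AS foo) AS d"
).fetchone()
```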

-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: MySql Host through Heartbeat

2007-06-13 Thread Scott Tanner
  There's a 'report-host' option that can be set in the conf file to
mask the host name. Sounds like this may be set.

  If you want to get the server's actual host name from within mysql,
how about running a system command: 
   mysql> \! hostname

   or

   mysql> \! cat /etc/hostname    (Debian)
   mysql> \! cat /etc/hosts       (CentOS/RHEL)
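
As Ben confirms later in the thread, MySQL 5.0.38 and later also expose the
machine's host name directly as a server variable, so no shell escape is
needed:

```sql
SELECT @@hostname;
SHOW VARIABLES LIKE 'hostname';
```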



Regards,
Scott



On Wed, 2007-06-13 at 17:46 +0100, Ben Clewett wrote:
 What I know is that:
 
 Heartbeat with MySQL uses two IP's.  That of the server, and that of the 
 resource MySql.  The former is fixed, the latter moves with MySQL when 
 it's moved to another server.
 
 The one I need is the hostname of the physical server, not the resource.
 
 I've installed 5.0.41 and have found that the 'hostname' variable does 
 report the hostname of the physical server.  I have no idea how it does 
 it :)
 
 I have my solution, thanks for the help,
 
 Ben Clewett.
 
 
 
 Baron Schwartz wrote:
  Gerald L. Clark wrote:
  Baron Schwartz wrote:
  Gerald L. Clark wrote:
 
  Ben Clewett wrote:
 
  Dear MySQL,
 
  I'm running 5.0.26 through Heartbeat.  Which seems to work well, 
  even as a replication slave and Heartbeat continously stopping and 
  starting the server.
 
  The Heartbeat moves MySQL around from server to server when a 
  failure occures.  I am trying to find a way for MySQL to report the 
  server host name on which it's currently sitting.  Without any luck.
 
  Would any kind members know of a way of getting this information 
  from MySQL?
 
  Many thanks,
 
  Ben
 
 
  Heartbeat moves the IP address around as well as the services.
  Your hostname should not change.
 
 
  DNS won't change, but the server's /etc/hostname will, right?
 
  Disclaimer: I'm no expert on this... I didn't even know the IP 
  address moved too.  I should read about Heartbeat.
 
  Baron
  Do you actually have /etc/hostname?
  RHEL and Centos do not.
  They do have an entry in /etc/sysconfig/network
  
  Debian and Gentoo have /etc/hostname and /etc/conf.d/hostname, 
  respectively.  I would think this is what /usr/bin/hostname uses, and 
  probably where the hostname server variable gets set from in MySQL 5.0.41.
  
  Baron
  
 
 





Translation of sql into mysql

2007-06-12 Thread David Scott


I am trying to update the column CreditCode in a table extract using data 
from another table CreditCodes, which has columns CreditCode and 
Consumer_No.


I have been given the following sql which works on another database (not 
sure which and it is late at night so I can't ring up and ask)


update extract
set CustomerCreditCode = b.CreditCode
from extract a
inner join CreditCodes b
on a.ConsumerNO = b.Consumer_No;

This gives an error in mysql:
ERROR 1064: You have an error in your SQL syntax; check the manual that 
corresponds to your MySQL server version for the right syntax to use near 
'from extract a

inner join CreditCodes b
on a.ConsumerNO = b.Cons

Can anyone translate it into correct mysql syntax for me?
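
The statement above is SQL Server / Sybase style (UPDATE ... SET ... FROM with
a join). MySQL's multiple-table UPDATE puts the join before SET and drops the
FROM clause entirely; it requires MySQL 4.0.4 or later. A direct translation:

```sql
UPDATE extract a
INNER JOIN CreditCodes b ON a.ConsumerNO = b.Consumer_No
SET a.CustomerCreditCode = b.CreditCode;
```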

Thanks

David Scott

_
David Scott Department of Statistics, Tamaki Campus
The University of Auckland, PB 92019
Auckland 1142,NEW ZEALAND
Phone: +64 9 373 7599 ext 86830 Fax: +64 9 373 7000
Email:  [EMAIL PROTECTED]

Graduate Officer, Department of Statistics
Director of Consulting, Department of Statistics





Add column from another table

2007-06-08 Thread David Scott


I would like to add a column to a table where the column to be added is in 
another table. The column entries are to be matched using a particular 
column from each table.


I have one table detailing phone calls, indexed by CallID. I have 
created another table with the same index and a column CallNumber which I 
want to add to the phone call table.


I have looked at documentation and in the book I have available without 
success. Can anyone suggest appropriate syntax for this?
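
A common two-step approach is to add an empty column with ALTER TABLE and then
fill it with a multiple-table UPDATE (MySQL 4.0.4+). The table names below
(calls for the phone-call table, call_numbers for the second table) are
assumptions based on the description:

```sql
ALTER TABLE calls ADD COLUMN CallNumber INT;

UPDATE calls c
INNER JOIN call_numbers n ON c.CallID = n.CallID
SET c.CallNumber = n.CallNumber;
```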


David Scott

_
David Scott Department of Statistics, Tamaki Campus
The University of Auckland, PB 92019
Auckland 1142,NEW ZEALAND
Phone: +64 9 373 7599 ext 86830 Fax: +64 9 373 7000
Email:  [EMAIL PROTECTED]

Graduate Officer, Department of Statistics
Director of Consulting, Department of Statistics





Re: Diff between restore from tar and recover from power outage

2007-06-04 Thread Scott Tanner
   If mysql has been shut down, and given enough time for the tables to
be flushed to disk, then backing up the data dir using tar is fine. You
can tar the data dir without shutting down, providing all transactions
have been stopped and the tables have been flushed to disk. This is how
we backup our servers, and we haven't had an issue yet (restoring daily
to a testing environment).
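
The "transactions stopped and tables flushed" condition can be enforced
explicitly rather than assumed. A minimal sketch for a tar backup of MyISAM
tables; the session that takes the lock must stay connected for the whole tar
run:

```sql
FLUSH TABLES WITH READ LOCK;
-- From a second shell, while this session keeps the lock:
--   tar czf /backup/mysql-data.tar.gz /var/lib/mysql
UNLOCK TABLES;
```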

Regards,
Scott Tanner  

On Mon, 2007-06-04 at 14:57 -0700, Wm Mussatto wrote:
 On Mon, June 4, 2007 14:21, murthy gandikota said:
  Hi
When restoring a DB from a tar file, I had to repair some of the tables.
  Whereas when the DB is started after a power failure, everything works
  fine. I noticed this on RedHat linux and MySQL5.
 
Any thoughts?
 
Thanks
Murthy
 
 What was in the tar file?  If you tar'ed the data directory, it is almost
 guaranteed to fail since only part of the updated information may be
 actually written to disk.  If the power failure occurred on a system with
 a correctly configured UPS, mysql probably got a shut down command so the
 disk files were in a stable state.
 
 Normally you should dump the databases and then tar that.  (see mysqldump
 command for details).
 
 ---
 
 William R. Mussatto, Senior Systems Engineer
 http://www.csz.com
 Ph. 909-920-9154 ext. 27
 FAX. 909-608-7061
 
 





Obtaining the first or second instance of an event

2007-06-03 Thread David Scott


I am looking at data from a telephone call centre.

I have a table giving data on calls made including time and date with the 
name CallDateTime. Each call has a number, CallId and each customer has a 
number CustomerNo. Each row represents a different call.


I would like to create a column which identifies the first call made by a 
customer in a particular month. That is if a particular call is the first 
call made by that customer in that month, there is a 1 in the column, 
otherwise there is a zero.


I would also like to identify the second call (if any) made by the 
customer in a particular month.


I am quite inexperienced with MySQL and SQL in general and would 
appreciate any help which you can offer.
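
One way to do this without any procedural code is a correlated subquery that
counts the customer's earlier calls in the same month, which requires MySQL
4.1 or later. The table name calls below is an assumption (the real name is
not given); call_rank is 1 for the first call of the month and 2 for the
second, and ties between calls with identical timestamps are not broken:

```sql
SELECT c.CallId,
       c.CustomerNo,
       c.CallDateTime,
       (SELECT COUNT(*)
          FROM calls p
         WHERE p.CustomerNo = c.CustomerNo
           AND YEAR(p.CallDateTime)  = YEAR(c.CallDateTime)
           AND MONTH(p.CallDateTime) = MONTH(c.CallDateTime)
           AND p.CallDateTime < c.CallDateTime) + 1 AS call_rank
FROM calls c;
```

The requested 0/1 indicator columns then fall out as (call_rank = 1) and
(call_rank = 2).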


Thanks

David Scott

_
David Scott Department of Statistics, Tamaki Campus
The University of Auckland, PB 92019
Auckland 1142,NEW ZEALAND
Phone: +64 9 373 7599 ext 86830 Fax: +64 9 373 7000
Email:  [EMAIL PROTECTED]

Graduate Officer, Department of Statistics
Director of Consulting, Department of Statistics





Re: SAN backend for DB's

2007-05-31 Thread Scott Tanner
  I think you'll need to do a lot of testing to yourself to find the
right answer to that. The number of disks, type of disks, and raid
configuration will have the most effect on performance.

  Personally, we had roughly 15% increase in performance from ditching
our EMC clarion and going with external scsi arrays for each server. For
us it was cost driven - 8 scsi disks (in raid 10 config) and the arrays
were cheaper than 4 fiber disks. The extra heads and lower latency made a
noticeable difference - our database has a really high write rate.

Regards,
Scott   


On Thu, 2007-05-31 at 09:15 -0400, B. Keith Murphy wrote:
 So here is the brief situation.  We have a coraid (www.coraid.com) SAN 
 unit - the 1520 I believe.   It is ATA-over-ethernet.
 
 Right now we have a about 500 gigs of data spread across five servers.  
 To simplify things I would like to implement the coraid on the backend 
 of these servers.  Then all the data is served up out of the same 
 place.  Of course I would like to improve I/O throughput also.
 
 Googling shows that these units have good read speed but the write speed 
 doesn't seem to be that impressive.
 
 Does anyone have any experience with these?  Good? Bad?  Maybe other SAN 
 suggestions?  Am I barking up the wrong tree?
 
 Thanks,
 
 Keith
 





Re: mysql creating lots of processes (not threads, linux processes)

2007-05-30 Thread Scott Tanner
  I know I've run into this before, and it was a build issue. We build
from source on redhat linux, so it may not be related to your issue.

 Anyways, I just did a test and built mysql from source with the flag
--with-mysqld-ldflags=-all-static. I end up with the process spawning
mysqld:

ps -eaf | grep -c my
12

 I removed that flag and rebuilt again, and now I have 2 processes. 

 Just for reference: 
./configure --prefix=/usr/pkg/mysql5_test \
   --with-libwrap=/usr/pkg/tcpd \ 
   --with-mysqld-user=mydaemon \
   --without-debug \ 
   --with-unix-socket-path=/var/mysql/mysql.sock \
   --with-client-ldflags=-all-static \
   --enable-shared \
   --enable-thread-safe-client \
   --with-extra-charsets=none



Scott


On Wed, 2007-05-30 at 18:29 +0200, Quentin Gouedard wrote:
 Oh and by the way mysql works just fine on that machine. You can run
 queries without any problems.
 Only it keeps spawning new processes over again. It takes about 3h
 before the machine starts having problems due to memory getting full
 of useless mysql processes. 
 I just got a new machine for my site, and it's happenning there too
 (although on none of my 5 other boxes), which is good cause i can do
 some more extensive testing before i actually use it.
 Another thing is, mysql won't stop. 
 /etc/init.d/mysql stop just hangs.
 There are some processes that need to be killed manually in the end.
 I'm guessing this could have to do with the process owner being root.
 
 Some similar problems are reported here though: 
 http://forums.gentoo.org//viewtopic-t-544730-highlight-mysql.html
 
 
 On 5/30/07, Quentin Gouedard [EMAIL PROTECTED] wrote:
 Thanks Geoffroy,
 starting from the command prompt shows:
 
 # /usr/sbin/mysqld --console --verbose --basedir=/usr
 --datadir=/var/lib/mysql --pid-file=/var/run/mysqld/mysqld.pid
 --socket=/var/run/mysqld/mysqld.sock
 070530 18:01:28 [Note] /usr/sbin/mysqld: ready for
 connections.
 Version: '5.0.38-log' socket: '/var/run/mysqld/mysqld.sock'
 port: 3306 Gentoo Linux mysql-5.0.38 
 
 
 However running a ps already shows multiple mysqld processes.
 Something i had not noticed so far:
 
 # ps -ef | grep mysqld
 mysql25752 27831  0 18:01 pts/2
 00:00:00 /usr/sbin/mysqld --console --verbose --basedir=/usr
 --datadir=/var/lib/mysql --pid-file=/var/run/mysqld/mysqld.pid
 --socket=/var/run/mysqld/mysqld.sock 
 root 16560 25752  0 18:01 pts/2
 00:00:00 /usr/sbin/mysqld --console --verbose --basedir=/usr
 --datadir=/var/lib/mysql --pid-file=/var/run/mysqld/mysqld.pid
 --socket=/var/run/mysqld/mysqld.sock
 root 16560 25752  0 18:01 pts/2
 00:00:00 /usr/sbin/mysqld --console --verbose --basedir=/usr
 --datadir=/var/lib/mysql --pid-file=/var/run/mysqld/mysqld.pid
 --socket=/var/run/mysqld/mysqld.sock 
 root 23390 16560  0 18:01 pts/2
 00:00:00 /usr/sbin/mysqld --console --verbose --basedir=/usr
 --datadir=/var/lib/mysql --pid-file=/var/run/mysqld/mysqld.pid
 --socket=/var/run/mysqld/mysqld.sock(etc.)
 
 the first process created is the only one with mysql owner. 
 All the subsequent processes are owned by root, and have as
 parent one of the previously created processes (not always the
 first one). Not sure if that helps in understanding this
 though.
 
 Thanks anyway for your help. 
 
 
 On 5/30/07, Geoffroy Cogniaux [EMAIL PROTECTED]
 wrote:
 Try to start it with mysql_safe instead or try to
 start mysqld manually
 within a command prompt, without fork, to see what
 happen.
 ./mysqld --console --verbose --your_options
 
 Can you at least connect to mysql with a remote client
 on this server or 
 not?
 
 Have a look on this page about starting issues:
 
 http://dev.mysql.com/doc/refman/5.0/en/unix-post-installation.html#starting-
 server
 
 Geoffroy
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of Quentin
 Gouedard
 Sent: Wednesday, 30 May 2007 09:02
 To: Scott Tanner
 Cc: mysql@lists.mysql.com
 Subject: Re: mysql creating lots of processes (not
 threads, linux processes)
 
 Nope, I'm using 5.0.38 on Gentoo, built via emerge in
 the exact same manner.
 Thanks for your answers guys.
 
 On 5/30/07, Scott Tanner

Data types and reading in data

2007-05-30 Thread David Scott


First of all I am very much a newbie with MySQL.

I am trying to create some tables and then read in some data from a .csv 
file using load data infile 'filename.csv'


The data in the .csv file has actually come from another database system.

I have two problems which I haven't been able to find my way around.

In many cases there are missing values for particular fields. If I declare 
that field or column to be CHAR(m) then I can read the data in using the 
load command. However in a number of cases I want the field to be INT of 
some sort. If in the create table statement I declare the column to be INT 
however, I am unable to successfully read the data in. This is curious 
because I know it was written out from the source database with an INT 
declaration.


I have some dates which are in the incorrect format, as dd/mm/yyyy. Is 
there any way of having them recognised as dates?


For both of these problems my intended workaround is to create a new 
column from the problem column which has the attributes I want. I am not 
sure how possible that is at this point, being very inexperienced with 
MySQL.
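
Rather than post-processing a CHAR copy of each problem column, both issues
can often be handled inside LOAD DATA itself: read the awkward fields into
user variables and convert them in a SET clause. User variables in the column
list require MySQL 5.0.3 or later and STR_TO_DATE requires 4.1.1 or later; the
table and column names below are invented:

```sql
LOAD DATA INFILE 'filename.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ','
(name, @qty, @d)
SET qty      = NULLIF(@qty, ''),                 -- empty field -> NULL, not a failed INT
    call_day = STR_TO_DATE(@d, '%d/%m/%Y');     -- dd/mm/yyyy -> DATE
```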


Any advice would be most welcome, if only a pointer to the appropriate 
location in the documentation.


David Scott



_
David Scott Department of Statistics, Tamaki Campus
The University of Auckland, PB 92019
Auckland 1142,NEW ZEALAND
Phone: +64 9 373 7599 ext 86830 Fax: +64 9 373 7000
Email:  [EMAIL PROTECTED]

Graduate Officer, Department of Statistics
Director of Consulting, Department of Statistics





Re: mysql creating lots of processes (not threads, linux processes)

2007-05-29 Thread Scott Tanner
Sounds like you're not using threaded libraries. Was mysql built
differently, or are you using a different RPM on this server?


Scott



On Tue, 2007-05-29 at 22:49 +0200, Quentin Gouedard wrote:
 No, I have just collectd+mrtg, but i don't even use them to monitor mysql.
 I launch mysql via /etc/init.d/mysql start , and the script is the exact
 same as on the other servers. Even just after startup there's already 15-20
 processes created.
 
 On 5/29/07, Geoffroy Cogniaux [EMAIL PROTECTED] wrote:
 
  Hi,
 
It looks like automatic start-up called by a monitoring process (Nagios,
  ...). Have you such tools on your servers ?
 
  Geoffroy
 
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of Quentin
  Gouedard
  Sent: Tuesday, 29 May 2007 16:41
  To: mysql@lists.mysql.com
  Subject: mysql creating lots of processes (not threads, linux processes)
 
  Hi,
  So I use mysql as the DB for a large site (up to 1 concurrent users at
  peaks).
  I have a front server as a reverse proxy and multiple (7) backend machines
  serving the site.
  Each machine has data strictly similar in nature and quantity.
 
  On 6 of these machines, I have 1 single mysqld process (process in linux
  terms):
  # ps -ef | grep mysqld | wc -l
  2
 
  There are generally 5-8 threads (processes as mysql means it) running when
  i
  do a show processlist;
 
 
  Now, on one of those machines there are huge number of processes for
  mysql.
  # ps -ef | grep mysqld | wc -l
  34
  Running just ps shows for each of these processes:
  mysql25952 10073  0 16:25 ?00:00:02 /usr/sbin/mysqld
  --defaults-file=/etc/mysql/my.cnf --basedir=/usr --datadir=/var/lib/mysql
  --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock
 
  This machine has no particular data, is doing nothing different than the
  others.
  The show processlist command also returns 5-8 processes.
 
  So where are these myqsld processes from ? There's like 20 at startup
  (instantly after launching mysql), but it keeps increasing, until i
  restart
  mysql or the server runs out of memory. I have compared the mysql
  configuration of this machine and the 6 other, variable by variable, and
  they are strictly identical.
  How come this server behaves differently ? What can I do to have the
  single-process behaviour on that machine too ?
 
  Thanks,
  Quentin
 
 
 





NOT NULL = Not Working?

2007-05-09 Thread Scott Baker
If I create the following table, and then try and insert the following
data both inserts work. It looks like the second one works (it shouldn't
because Last is NULL) because it assumes Last = ''. Is there a way I can
make it NOT assume that? If Last is not specified it should reject that
command. Is that possible?

---

DROP TABLE IF EXISTS foo;

CREATE TABLE foo (
   ID INTEGER PRIMARY KEY AUTO_INCREMENT,
   First VarChar(30),
   Last VarChar(30) NOT NULL,
   Zip INTEGER
);

INSERT INTO foo (First, Last, Zip) VALUES ('Jason','Doolis',97013);
INSERT INTO foo (Last) VALUES (17423);




Re: NOT NULL = Not Working?

2007-05-09 Thread Scott Baker
I knew it was an option somewhere... Perfect!

Thanks

JamesDR wrote:
 JamesDR wrote:
 Scott Baker wrote:
 If I create the following table, and then try and insert the following
 data both inserts work. It looks like the second one works (it shouldn't
 because Last is NULL) because it assumes Last = ''. Is there a way I can
 make it NOT assume that? If Last is not specified it should reject that
 command. Is that possible?

 ---

 DROP TABLE IF EXISTS foo;

 CREATE TABLE foo (
ID INTEGER PRIMARY KEY AUTO_INCREMENT,
First VarChar(30),
Last VarChar(30) NOT NULL,
Zip INTEGER
 );

 INSERT INTO foo (First, Last, Zip) VALUES ('Jason','Doolis',97013);
 INSERT INTO foo (Last) VALUES (17423);

 In your last insert example, Last is inserted as 17423, which is not null.

 
 Yup, empty string, the manual says this...
 http://dev.mysql.com/doc/refman/5.0/en/data-type-defaults.html
 
 However it does say that to enforce NOT NULL you would have to change
 the sql_mode
 http://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html
 EG:
 SET SESSION sql_mode='STRICT_ALL_TABLES';
 INSERT INTO foo (zip) VALUES (12345);
 SET SESSION sql_mode='';
 
 I get an error on the insert statement:
 Field 'Last' doesn't have a default value.
 
 You may need to set sql_mode to STRICT_ALL_TABLES before the insert or
 do it in my.cnf or as a command line parameter.
 
 





Re: Replace, Substitute, Delete

2007-05-09 Thread Scott Haneda
 
 STEP 1: Create the root of the image name
 update staff set imgName = Lower(CONCAT(Left(last,6),Left(first,2)));
 
 STEP 2: 
 How do I delete spaces in the imgName?
 
 STEP 3: 
 update staff set webLine = CONCAT("<td><img src='images/", imgName, ".jpg'
 width='100' height='125'><br clear='all' />", first, " ", last, "</td>");
 
 Thanks in advance for your time spent in reading or responding.

Personally, I would move your html and string parse logic into php, rather
than in mysql, but that is up to you.  If you want to do this in mysql..

SELECT REPLACE('De Long', ' ', '');
+-----------------------------+
| REPLACE('De Long', ' ', '') |
+-----------------------------+
| DeLong                      |
+-----------------------------+
1 row in set (0.00 sec)


You could also do..

SELECT REPLACE('De Long', ' ', '%20');
+--------------------------------+
| REPLACE('De Long', ' ', '%20') |
+--------------------------------+
| De%20Long                      |
+--------------------------------+
1 row in set (0.00 sec)


Which will url encode the space, which will allow perfectly for spaces in
filenames on a web server.
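
The SELECT forms above only display the cleaned-up value. To actually store it
back into the table, which is what STEP 2 asks for, the same REPLACE goes in
an UPDATE:

```sql
UPDATE staff SET imgName = REPLACE(imgName, ' ', '');
```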
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: Replace, Substitute, Delete

2007-05-09 Thread Scott Haneda
 (1) I thought it might be quicker than Perl or PHP. Correct me if I'm wrong.
 
 (2) I have read a short description of Triggers, and I thought these
 three lines of code might be an excellent AFTER INSERT trigger. (I
 don't know enough about Triggers yet to know if they'll even take multiple
 lines of code however.) (A) Could this be a Trigger? and (B) Would it be worth
 doing?
 
 (3) I've written plenty of Perl and PHP code that concatenates fields
 and builds XHTML cells and rows. I thought it might be interesting to
 build the rows inside the database table and have my PHP do nothing but
 count MySQL records in order to know when to open and close the XHTML
 table rows. 

I think things just start to get messy over time.  One day will come when
you want to search something in the database, and you will then have to
accommodate that search with more hacks in order to ignore the html.

Further, as time goes on, say you wanted to make some design changes to your
html, you are now faced with constructing a rather complicated method for
updating all your old records to the new html.  If the html logic is pulled
out into the php/web front end, you change in one spot, the site sees those
changes on all pages.

Databases are used to store data, I do not really see html as data so to
speak, at least not important data.  Even storing a path to an image is
something I would stay away from, just store the image name in a field, and
call the path out in your html.

This in large part is my opinion, but I do tend to find most will agree with
it.
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: Replace, Substitute, Delete

2007-05-09 Thread Scott Haneda
 Instead of individual replacements, as in ...
 
 SELECT REPLACE('De Long', ' ', '');
 
 would this global approach work?
 
 SELECT REPLACE(imgName,' ','') FROM staff WHERE imgName REGEXP ' ';

I just used that as an example. What you are doing is fine; you put the
field name in the first argument of the REPLACE() function.  You do not need
to add the WHERE imgName REGEXP part at all.

SELECT REPLACE(imgName,' ','') FROM staff
That should suffice, unless you have some other limiting factor you want to
toss in like :

SELECT REPLACE(imgName,' ','') FROM staff WHERE imgName LIKE '%d'
You get the idea, that would select all image names that end in 'd'
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: Bin logs and mysql 4

2007-04-30 Thread Scott Tanner
On Fri, 2007-04-27 at 09:19 -0500, [EMAIL PROTECTED] wrote:
 So if one is doing a full mysqldump every night, all bin-logs can be 
 deleted after this? 

  On the slave - Yes. In fact I would highly recommend it before
starting the slave processes again. This will reset the bin log's
'position' back to 0, giving you a clear starting point that will
correspond to the backup.  For the master server (if your backing up
from the slave) you'll only want to purge the bin logs back to your
slowest slave.


 If bin-logging is disabled, will master/slave syncing still occur?
 
No, if you disable the bin-logs, replication won't happen.


Regards,
Scott





Bin logs and mysql 4

2007-04-26 Thread Scott Haneda
Running mysql 4, just poked into data and see I have gigs and gigs of
hostname-bin.xxx log files.

How does one maintain these, can someone point me to relevant data on what
to do about drive space being lost to these?

thanks
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: Bin logs and mysql 4

2007-04-26 Thread Scott Haneda
 In the short term, see the manual page for PURGE MASTER LOGS.  In the
 long term, write a cron job.
 
 innotop (http://sourceforge.net/projects/innotop) also has a new
 feature, unreleased because I just wrote it a few hours ago, which will
 help you figure out which binlogs can be purged safely with a single
 keystroke :-)

I don't quite get this: SHOW SLAVE STATUS shows an empty result set, and I
am just running one server, not a master + slave setup at all, so it's really
rather simple.

So, how would I ever know what logs I can safely delete or purge?

Do I really need to use mysql to purge them or can I just `rm` them?

I guess I could push this to cron?
PURGE MASTER LOGS BEFORE DATE_SUB( NOW( ), INTERVAL 31 DAY);

My question is, what are these logs really good for, I assume restoration,
and from what I read, but how do I know how far back I should keep?

thanks
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: Bin logs and mysql 4

2007-04-26 Thread Scott Haneda
 Yes -- sorry for being so  general.  You can use the binlogs for a)
 replication b) replaying changes since your last backup so you get
 point-in-time recovery.  If you have no replication slaves, just delete
 everything older than your latest backup.  You  can just use 'rm'.  If
 you use PURGE MASTER LOGS BEFORE, it's a bit easier than cron because
 you can do it across all platforms easily.  On UNIX of course, you'd use
 something like
 
 find /var/lib/mysql/data -name '*.bin' -mtime +7 -exec rm {} \;
 
 (Quote the glob so the shell does not expand it, and terminate -exec with
 \; . Double-check the path and name pattern before running anything that
 deletes files.)
 
 But if you do it via SQL, you don't have to mess with this.

Thanks!
So, I take it since I do not have a slave at all, I could safely just
disable this feature altogether?

If I do not need point in time recovery, and the once every 12 hour dump I
do across all databases is ok with me, I suppose I can just disable said
feature?  Heck, some of these boogers are a GB each :-)
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: Bin logs and mysql 4

2007-04-26 Thread Scott Tanner
 Issuing a 'reset master' will purge all of the logs as well. I wouldn't
just rm them, as they are being tracked in the index file.  

 If you aren't running a slave, then these files are only good for data
recovery purposes. Say a DBA goes crazy and deletes all of the databases
mid-day (too much stress). You could restore the previous nights backup
and run these bin logs up to the point of the delete command - a little
bit of editing would be needed to do this, but you get the idea.

 For this to work smoothly, you need to reset the logs after every
backup. If you're using mysqldump, just add the --delete-master-logs
option. 

  If you want to turn the logs off, remove log-bin and log-bin-index
from the conf file.


Regards,
Scott Tanner



On Thu, 2007-04-26 at 09:47 -0700, Scott Haneda wrote:
  In the short term, see the manual page for PURGE MASTER LOGS.  In the
  long term, write a cron job.
  
  innotop (http://sourceforge.net/projects/innotop) also has a new
  feature, unreleased because I just wrote it a few hours ago, which will
   help you figure out which binlogs can be purged safely with a single
   keystroke :-)
 
 I don't quite get this, if SHOW SLAVE STATUS shows empty result set, and I
 am just running one server, not a master + slave setup at all, its really
 rather simple.
 
 So, how would I ever know what logs I can safely delete or purge?
 
 Do I really need to use mysql to purge them or can I just `rm` them?
 
 I guess I could push this to cron?
 PURGE MASTER LOGS BEFORE DATE_SUB( NOW( ), INTERVAL 31 DAY);
 
 My question is, what are these logs really good for, I assume restoration,
 and from what I read, but how do I know how far back I should keep?
 
 thanks
 -- 
 -
 Scott HanedaTel: 415.898.2602
 http://www.newgeo.com Novato, CA U.S.A.
 
 
 





Re: MySQL e-zine

2007-04-26 Thread Scott Haneda
 Hey everyone,
 
 I have been considering putting together a e-zine for MySQL as my way to
 give back to the community.  I was curious as to what type of interest
 there would be in this.  I am thinking something quarterly to start with
 and probably 15 - 20 pages.  Nothing huge.  Topics would vary from
 everything like standard DBA stuff like backups, replication, system
 architecture and such to stuff for the developers out there..php, perl,
 python whatever.  My background is more of a DBA so I would definitely
 need some input/articles for the developer side of things.
 
 It is just funny that in this day and age there is no magazines
 specifically for MySQL.  I even saw a php magazine at the bookstore the
 other day! 
 
 I can do the layout in QuarkXpress and generate pdfs that I can host on
 my website. 
 
 Any thoughts?  Any desires to contribute?

Anything on the topic of actual queries I would skip, those are narrow and
finite questions, usually specific to a certain application, and quickly
answered on the list.

I would cover what is more mysterious to most, which is the underbelly of
mysql.  Today there was good dialogue on the bin logs, that I think is stuff
that helps people.  Demystifying all the .cnf options, how to optimize for
load, test the load etc.

Steer clear of high end features like replication and the like, things that
generally, if you are worried about them, your app is doing well enough you
should be around knowledgeable people already.

For me at least, it would be more about internals than the step-by-step of
how to do a join.
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: Database Replication Fallover

2007-04-02 Thread Scott Tanner
Ben,
 
  Check out Linux-ha ( http://linux-ha.org). We are using version 1 at my 
company, which is fairly easy to set up and there are a lot of good articles on 
it.  With this you can create a virtual address to fail-over between systems, 
run commands / scripts during a fail over, and even kill the other node 
(stonith) if you have supporting hardware. 

  For our setup, we have the client apps accessing the db through the virtual 
IP address. We have 2 mysql config files, one for the master and one for the 
slave (actually we have one m4'd config file, but you get the idea). During 
failover, we restart mysql on the slave using the master config file and 
transfer the virtual IP over. 

  Here are a couple of key points for setting this up:

  1. Slave must be running with the 'log-slave-updates' option.
  2. After you run a backup, issue a RESET MASTER on the slave server. 
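
The promotion steps above boil down to a couple of statements on the
promoted slave plus one config line; a rough sketch (the comments describe
the intent, not the poster's exact scripts):

```sql
-- In the slave's my.cnf, long before failover:
--   log-slave-updates
-- At failover time, on the slave being promoted:
STOP SLAVE;     -- stop applying events from the dead master
RESET MASTER;   -- start this server's binary logs fresh before clients write
```

The virtual-IP move and the config-file swap happen outside MySQL, in the
Linux-HA resource scripts.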


Hope this helps,
Scott Tanner



On Mon, Apr 02, 2007 at 08:43:35AM +0100, Ben Clewett wrote:
 I forgot to mention that I am running Linux.
 
 If anybody has some idea of software which can do this, I'd be very 
 interested.
 
 Regards,
 
 Ben
 
 Ben Clewett wrote:
 Dear MySql,
 
 I'm looking into availability and wonder if any member might be able to 
 help me.
 
 I have two databases, one Primary and one full Replication.
 
 Normally the primary is used for data input, reports are drawn from the 
 replication.
 
 If I loose the Primary, do any members have any software they can 
 recommend which:
 
 - Stops the replication daemon.
 - Sets the replication server to Read/Write.
 - Shuts down the primary.
 - Routes traffic to the replication.
 
 Any advise or ideas would be very useful...
 
 Regards,
 
 Ben Clewett.
 
 
 
 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Possible in sql, or just move it to a language

2007-03-15 Thread Scott Haneda
This is a one-off I need to do.

Table1 has email addresses in it; table2 also has email addresses in it.
Table1 represents bounced emails that need to be purged from the database.

What I would like to do is mark a field in table2, called needs_purging,
as true when there is a match from table1.

Should I just run a loop in my language of choice, or is there a join I am
not seeing here?
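
There is a join for this: MySQL's multi-table UPDATE (available since the
4.0 series). A sketch, assuming the matching column in both tables is
called email and needs_purging is a flag column (both names are guesses at
the poster's schema):

```sql
-- flag every table2 row whose address appears in the bounce table
UPDATE table2
  JOIN table1 ON table1.email = table2.email
SET table2.needs_purging = 1;
```

No loop in an outside language is needed.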
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Problem Installation

2007-02-27 Thread Scott Johnson
Hi;
I had a working installation of MySQL, and then I zapped it. I found out the 
hard way that when you delete symlinks, you also delete the files to which they 
are linked :( I wiped my entire /usr/local/include dir. I'm using ports on 
FreeBSD 6.2. As it happens, I had a complete copy of the identical include dir 
on another machine, so I tarballed it and copied it over. Unfortunately, that 
didn't seem to help. I rebuilt MySQL from ports and everything looks fine. But 
when I try to fire it up I discover it's not up. When I run

/usr/local/bin/mysqld_safe  

it starts up and immediately dies. What log file would tell me what happened? 
Also, strangely, if I move the 

/usr/local/include/mysql

folder and rebuild MySQL, it *doesn't* rebuild that folder! I would have 
assumed that folder would have been built in the installation of MySQL. Please 
advise.
TIA,
Scott




 

No need to miss a message. Get email on-the-go 
with Yahoo! Mail for Mobile. Get started.
http://mobile.yahoo.com/mail 


mysqlimport problem with , inside fields

2007-02-18 Thread Scott Hamm

I've been trying to import fields that contain the comma character ','
inside double quotes '"', with the results following:

code

mysqlimport --fields-optionally-enclosed-by= --fields-terminated-by=,
--lines-terminated-by=\r\n --ignore-lines=1 --user=root --password shark
c:\documents and settings\shamm\desktop\result.csv

Enter password: **

mysqlimport: Error: Row 48 was truncated; it contained more data than there
were input columns, when using table: result


Line 48:

48, 14.729606, 10.1.1.22, 10.182.167.209, TCP, pop3   [SYN,
ACK] Seq=0 Ack=1 Win=16384 Len=0 MSS=1460



mysql> desc result;
+-+--+--+-+-+---+
| Field   | Type | Null | Key | Default | Extra |
+-+--+--+-+-+---+
| No  | int(10) unsigned | NO   | | |   |
| Time| text | NO   | | |   |
| Source  | text | NO   | | |   |
| Destination | text | NO   | | |   |
| Protocol| text | NO   | | |   |
| Info| text | NO   | | |   |
+-+--+--+-+-+---+
6 rows in set (0.03 sec)

C:\Documents and Settings\shamm> mysql --version
mysql  Ver 14.12 Distrib 5.0.26, for Win32 (ia32)

/code
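
The "Row 48 was truncated" error is what you get when the enclosing double
quote never takes effect (either missing from the option value or eaten by
the shell), so the commas inside the Info field are taken as column
separators. The same load written as a LOAD DATA statement makes the
crucial clause visible (the file path is illustrative):

```sql
-- OPTIONALLY ENCLOSED BY '"' is what keeps "pop3 ... [SYN, ACK] ..." in
-- one column despite the embedded commas
LOAD DATA LOCAL INFILE 'result.csv'
INTO TABLE result
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
```

From the Windows shell, the equivalent mysqlimport option needs the quote
character escaped (e.g. --fields-optionally-enclosed-by=\") so cmd.exe
passes it through to the program.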

--

`Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.



loading files onto MySQL

2007-01-11 Thread Scott Yamahata
Newbie question here.  Using Mac OSX Server 10.4.4.  Do most MySQL tasks in 
terminal window.
Recently purchased software that tells me to go to Control Panel (which I 
don't think I have--though I have MySQL tools) and load their file onto the 
MySQL database.


I'm not sure what this means.  If I create a database called test, is there 
a way to load their file onto that database in the terminal window?
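
If the purchased file is a SQL dump (a .sql file of CREATE/INSERT
statements), it can be loaded from the mysql prompt; a sketch, with the
file name as a stand-in for whatever the vendor actually shipped:

```sql
CREATE DATABASE test;
USE test;
-- SOURCE is a mysql client command that replays the statements in the file
SOURCE /path/to/vendor_file.sql;
```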


Thanks.  I'm probably asking a very stupid question.

_
Type your favorite song.  Get a customized station.  Try MSN Radio powered 
by Pandora. http://radio.msn.com/?icid=T002MSN03A07001



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Choose a random record from a list of duplicates

2007-01-09 Thread Scott Haneda
 zv Green wrote:
 Hello all,
 
 What  I want to do is select all the records from the table but where
 there
 are duplicate entries (based on say, the surname and postcode fields)
 pick a random record and then ignore the rest.
 
 If you want to pick a random record, you can do ORDER BY RAND() LIMIT 1
 or so. (warning, untested)

And I think you will want to group by surname and postcode in order to
filter out the duplicates.
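
A sketch of the combined idea, with the table and column names guessed from
the thread (people, surname, postcode):

```sql
-- in MySQL of this era (no ONLY_FULL_GROUP_BY), this returns one
-- arbitrary row per surname/postcode pair, discarding the other duplicates
SELECT *
FROM people
GROUP BY surname, postcode;
```

Which row "wins" per group is indeterminate, which is effectively the
random pick asked for; wrapping the table in a subquery with ORDER BY
RAND() would shuffle the candidates first.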
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



SQL syntax

2006-12-28 Thread Scott Yamahata

Hi, I'm getting the following error message:

You have an error in your SQL syntax; check the manual that corresponds to 
your MySQL server version for the right syntax to use near ' enabled = '1'' 
at line 3

INSERT INTO clf_cities SET cityname = 'Santa Barbara', countryid = , 
enabled = '1'


and can't quite figure out what it means.  Any help is greatly appreciated.
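
The error points just past where the parser gave up: countryid has no value
(countryid = ,), most likely because a variable was empty when the
statement was built. With a value supplied, the statement is valid (the 1
here is purely illustrative):

```sql
-- a well-formed version of the failing statement
INSERT INTO clf_cities
SET cityname = 'Santa Barbara', countryid = 1, enabled = '1';
```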

Thanks,

Scott

_
Experience the magic of the holidays. Talk to Santa on Messenger. 
http://clk.atdmt.com/MSN/go/msnnkwme008001msn/direct/01/?href=http://imagine-windowslive.com/minisites/santabot/default.aspx?locale=en-us



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Complete newbie (OSX Server - MySQL)

2006-12-23 Thread Scott Yamahata

Complete newbie here.
Purchased software for my OSX Server 10.4.8 to use with MySQL and PHP.  I 
get an installation window that opens and asks for Host: (with a default of 
localhost).

User:  site
Password:  (the one the person who sold me the script provided).
Database:  site
License key:  (the one the person who sold me the script provided).

I hit the save button and get the following:

Warning: mysql_connect(): Access denied for user 'site'@'localhost' (using 
password: YES) in /Library/WebServer/Documents/!install/install.php on line 
298

Could not connect to the Database

Questions:
1.  I don't know if I have to do anything to MySQL to add the User and 
Database or whether the script's supposed to do it for me.
2.  I don't know if I'm supposed to use localhost or greenheartworld.com, 
which was the domain that I supplied to him (but want to change).
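
For question 1: install scripts usually cannot create their own MySQL
account, so the user and database generally have to exist first. A sketch
of the grants, using the MySQL 5.0-era syntax, with the password as a
placeholder for the one the seller provided:

```sql
-- run as the MySQL root user
CREATE DATABASE site;
GRANT ALL PRIVILEGES ON site.* TO 'site'@'localhost'
  IDENTIFIED BY 'the_password';
FLUSH PRIVILEGES;
```

For question 2, 'localhost' is correct when the script and the MySQL server
run on the same machine.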


Any help is greatly appreciated.

Thanks,

Scott

_
Get FREE Web site and company branded e-mail from Microsoft Office Live 
http://clk.atdmt.com/MRT/go/mcrssaub0050001411mrt/direct/01/



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


