Re: Slave Bin Log Question

2007-09-25 Thread Ananda Kumar
After you stop the slave and start mysqldump, execute the following on the slave db: SHOW SLAVE STATUS\G. Note down the Master_Log_File and Exec_Master_Log_Pos values. This is the point from which you need to do the recovery. regards anandkl On 9/25/07, Eric Frazier [EMAIL PROTECTED] wrote: Boyd
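The two fields named above come from the vertical (\G) output of SHOW SLAVE STATUS. A minimal Python sketch of pulling them out of that output, e.g. when scripting the backup; the field names are MySQL's, but the sample values here are made up for illustration:

```python
import re

def parse_slave_status(text):
    """Parse the vertical (\\G) output of SHOW SLAVE STATUS into a dict."""
    fields = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\w+):\s*(.*)$", line)
        if m:
            fields[m.group(1)] = m.group(2)
    return fields

# Example output fragment (values invented for illustration):
sample = """
*************************** 1. row ***************************
          Master_Log_File: mysql-bin.000042
      Exec_Master_Log_Pos: 107356
"""
status = parse_slave_status(sample)
print(status["Master_Log_File"], status["Exec_Master_Log_Pos"])
```

The recorded file name and position are what you would later feed to CHANGE MASTER TO (MASTER_LOG_FILE / MASTER_LOG_POS) when rebuilding the slave.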

Re: bookmarks and keywords

2007-09-25 Thread Sebastian Mendel
Baron Schwartz schrieb: SELECT parent.bookmark_url as pbu, parent.bookmark_keyword as pbk FROM bookmarks AS child JOIN bookmarks AS parent ON parent.bookmark_keyword = child.bookmark_keyword WHERE child.bookmark_url='http://www.redhat.com'; [..] no, you didn't, you just switched

Re: bookmarks and keywords

2007-09-25 Thread Sebastian Mendel
Baron Schwartz schrieb: You have one final problem, which isn't really causing you trouble with THIS query, but will likely bite you in the future: you are selecting non-grouped columns in a GROUP BY query. SELECT DISTINCT will help too, of course (at least in similar cases) only if required
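The hazard Baron describes is selecting columns that are neither grouped nor aggregated; the safe form selects only grouped columns and aggregates. A small sketch of the safe pattern, using sqlite3 as a stand-in for MySQL; the table echoes the thread's bookmarks schema, but the rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookmarks (bookmark_url TEXT, bookmark_keyword TEXT)")
conn.executemany("INSERT INTO bookmarks VALUES (?, ?)",
                 [("http://www.redhat.com", "linux"),
                  ("http://www.kernel.org", "linux"),
                  ("http://www.mysql.com", "db")])

# Safe: every selected column is either in the GROUP BY or aggregated.
rows = conn.execute("""
    SELECT bookmark_keyword, COUNT(*) AS n
    FROM bookmarks
    GROUP BY bookmark_keyword
    ORDER BY bookmark_keyword
""").fetchall()
print(rows)  # [('db', 1), ('linux', 2)]
```

If a non-grouped column (say, bookmark_url) were added to that SELECT, MySQL of this era would silently return a value from an arbitrary row of each group, which is exactly the trap the thread warns about.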

more elegant way to store/find phone numbers

2007-09-25 Thread mysql
hi listers we have a mysql based application, wherein phone numbers may be stored and searched for. it is not the primary goal of this application to handle phone numbers. phone numbers usually are entered in a form like 099 999 99 99 or 099-999-99-99, or substrings thereof. actually, the

RE: more elegant way to store/find phone numbers

2007-09-25 Thread Edward Kay
hi listers we have a mysql based application, wherein phone numbers may be stored and searched for. it is not the primary goal of this application to handle phone numbers. phone numbers usually are entered in a form like 099 999 99 99 or 099-999-99-99, or substrings thereof. actually, the

Re: more elegant way to store/find phone numbers

2007-09-25 Thread Peter Brawley
does anyone have a nicer solution for this? How about comparing ereg_replace('[[:punct:]]', '', $colvalue) with ereg_replace('[[:punct:]]', '', $comparisonvalue)? PB mysql wrote: hi listers we have a mysql based application, wherein phone numbers may be stored and searched for. it is not the
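Peter's suggestion normalizes both sides at comparison time by stripping punctuation. The same idea in a short Python sketch (stripping everything that is not a digit, which also removes the spaces in "099 999 99 99"); a common refinement, assumed here rather than stated in the thread, is to store this digits-only form in a second, indexed column so searches don't have to normalize the whole table per query:

```python
import re

def normalize_phone(raw):
    """Strip everything but digits so differently formatted numbers compare equal."""
    return re.sub(r"\D", "", raw)

# The two entry styles from the original post normalize identically:
a = normalize_phone("099 999 99 99")
b = normalize_phone("099-999-99-99")
print(a, a == b)  # 0999999999 True

# Substring search then works on the normalized form:
print(normalize_phone("999-99") in a)  # True
```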

csv to mysql

2007-09-25 Thread Brian E Boothe
hi all: I'm looking for a solution for my PDA that doesn't have a DB solution installed on it, so I'm having to write to CSV files for my forms. I need a way that, when I sync my PDA with my wireless network, it moves the entire CSV file into a MySQL database. Any suggestions? --

RE: csv to mysql

2007-09-25 Thread Jay Blanchard
[snip] I'm looking for a solution for my PDA that doesn't have a DB solution installed on it, so I'm having to write to CSV files for my forms. I need a way that, when I sync my PDA with my wireless network, it moves the entire CSV file into a MySQL database. Any suggestions?
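Against a real MySQL server the usual tools for this are LOAD DATA INFILE or mysqlimport. A minimal Python sketch of the same CSV-to-table step, using sqlite3 in place of MySQL so it is self-contained; the column names and rows are invented for illustration:

```python
import csv
import io
import sqlite3

# CSV as it might come off the PDA (header and rows are illustrative).
csv_data = io.StringIO("name,phone\nAlice,0999999999\nBob,0888888888\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")

# DictReader yields one dict per CSV row; named placeholders map them
# straight into the INSERT.
reader = csv.DictReader(csv_data)
conn.executemany("INSERT INTO contacts (name, phone) VALUES (:name, :phone)", reader)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
print(count)  # 2
```

With MySQL, the same loop would run through a connector such as MySQLdb with %s-style placeholders, or be replaced entirely by LOAD DATA INFILE for large files.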

Ordering by unrelated column in a GROUP BY

2007-09-25 Thread Rob Wultsch
Suppose I have a table: CREATE TABLE `t1` ( `id` int(11) NOT NULL auto_increment, `data1` varchar(50), `data2` varchar(50), `data3` varchar(50), `occurance` datetime, PRIMARY KEY (`id`) ) And I want to pull the most recent entry of each set of unique combinations of `data1` and

Re: Ordering by unrelated column in a GROUP BY

2007-09-25 Thread Peter Brawley
You might like to compare the performance of ... SELECT t1.data1, t1.data2, MAX(t1.occurance) FROM t1 GROUP BY data1, data2 ORDER BY occurance; with... SELECT t1.data1, t1.data2, t1.occurance FROM t1 LEFT JOIN t1 AS t2 ON t1.data1=t2.data1 AND t1.data2=t2.data2 AND t1.occurance
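Peter's second query is the classic greatest-row-per-group exclusion self-join; the preview cuts off mid-condition, but the standard completion of that pattern is to keep only rows for which no newer row exists in the same group (WHERE t2.id IS NULL). A self-contained sketch of that completed pattern, using sqlite3 as a stand-in for MySQL and Rob's schema from the thread (with invented rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE t1 (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    data1 TEXT, data2 TEXT, data3 TEXT,
    occurance TEXT)""")
conn.executemany(
    "INSERT INTO t1 (data1, data2, data3, occurance) VALUES (?, ?, ?, ?)",
    [("a", "x", "old",  "2007-09-01"),
     ("a", "x", "new",  "2007-09-25"),
     ("b", "y", "only", "2007-09-10")])

# Keep each row only if no row with the same (data1, data2) has a later
# occurance -- i.e. the outer join finds no "newer" partner.
rows = conn.execute("""
    SELECT t1.data1, t1.data2, t1.data3, t1.occurance
    FROM t1
    LEFT JOIN t1 AS t2
      ON t1.data1 = t2.data1 AND t1.data2 = t2.data2
     AND t1.occurance < t2.occurance
    WHERE t2.id IS NULL
    ORDER BY t1.data1
""").fetchall()
print(rows)
# [('a', 'x', 'new', '2007-09-25'), ('b', 'y', 'only', '2007-09-10')]
```

Unlike the GROUP BY version, this returns the non-grouped columns (data3, id) from the genuinely newest row of each group, which is exactly what Rob asks for in the follow-up.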

Ugly sql optimization help?

2007-09-25 Thread Bryan Cantwell
I have the following horrible SQL. I need one result that has all the data in one row. I am currently using 3 subqueries and figure there must be a better way... SELECT 'FS_DEV', ifnull(a.severity, 0) AS aseverity,

RE: Ordering by unrelated column in a GROUP BY

2007-09-25 Thread Rob Wultsch
Peter, Thank you for your reply. MAX(t1.occurance) will pull the max of the occurance column out of the group, but the other columns (like data3 or id) would still be chosen arbitrarily by the GROUP BY. I will try your second solution, but the tables I am working with are thousands of rows and your solution

Ouch! ibdata files deleted. Why no catastrophe?

2007-09-25 Thread Daniel Kasak
Greetings. I've just returned from holidays, and it seems that all but 1 ibdata file (there were 10!) have been deleted by a co-worker. He apparently was able to delete them with Nautilus (he was looking to reclaim some space and these were 1GB files each ... and yes, the Trash was emptied as

Re: Ouch! ibdata files deleted. Why no catastrophe?

2007-09-25 Thread Gary Josack
Did the space become available when they were deleted? Try: lsof | grep deleted. See if the files are still held open by the running process; if so, you might be able to save them. Daniel Kasak wrote: Greetings. I've just returned from holidays, and it seems that all but 1 ibdata file ( there were 10! ) have been deleted

Re: regexp negate string help

2007-09-25 Thread Baron Schwartz
MySQL's regex library doesn't have all those Perl features. You can use the PCRE-compatible extension from http://www.xcdsql.org/MySQL/UDF/, or just use two clauses in the WHERE: one should be col NOT RLIKE 'linux$' Baron Tang, Jasmine wrote: Hi, I need to match anything that starts with
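The two-clause trick replaces a Perl-style negative lookahead, which MySQL's POSIX regex library lacks. A small Python sketch of the equivalent logic; the '^http' prefix pattern is an assumption here, since the original question is truncated before the full pattern appears:

```python
import re

def matches(s):
    """Emulate: col RLIKE '^http' AND col NOT RLIKE 'linux$'.
    Two simple clauses instead of one pattern with a negative
    lookahead such as ^http(?!.*linux$)... which MySQL's regex
    library of this era cannot express."""
    return bool(re.search(r"^http", s)) and not re.search(r"linux$", s)

print(matches("http://example.com/bsd"))    # True
print(matches("http://example.com/linux"))  # False
print(matches("ftp://example.com/bsd"))     # False
```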

Re: Ouch! ibdata files deleted. Why no catastrophe?

2007-09-25 Thread Daniel Kasak
On Tue, 2007-09-25 at 19:27 -0400, Gary Josack wrote: Did the space become available when deleted? try: lsof | grep deleted see if they're still running in memory. if so you might be able to save them. Thanks for the quick response :) They're there: mysqld 5460 mysql 10uW

Re: Ouch! ibdata files deleted. Why no catastrophe?

2007-09-25 Thread Gary Josack
Well, if you can stop all writes to the databases, you should be able to recover them. Each file is going to be in /proc/5460/fd/10-17; the file number corresponds to the fd you see in the lsof output, e.g.: cp /proc/5460/fd/10 ibdata2. This is still risky and I recommend you get a dump
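The trick works because on Linux an unlinked file's data survives as long as some process holds it open, and /proc/<pid>/fd/<n> gives a path back to that open inode. A small self-contained Python demonstration of the same mechanism (Linux-only; it recovers a file this script itself deleted, standing in for mysqld's deleted ibdata files):

```python
import os
import tempfile

# Create a file, keep the descriptor open, then delete the name --
# mimicking the deleted ibdata files that mysqld still holds open.
fd, path = tempfile.mkstemp()
os.write(fd, b"precious innodb data")
os.unlink(path)                      # the directory entry is gone ...

# ... but the data survives while the fd is open, and can be copied
# back out via the /proc symlink (what 'cp /proc/5460/fd/10 ibdata2'
# does in the thread above):
with open(f"/proc/self/fd/{fd}", "rb") as f:
    recovered = f.read()
print(recovered)  # b'precious innodb data'
os.close(fd)
```

This is also why, as noted later in the thread, the deleted files keep growing and the space is never freed until the holding process exits.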

Re: Ouch! ibdata files deleted. Why no catastrophe?

2007-09-25 Thread Daniel Kasak
On Tue, 2007-09-25 at 23:11 -0400, Gary Josack wrote: Well if you can stop all instances of writes to the databases you should be able to recover them. Each file is going to be in /proc/5460/fd/10-17 the file number corresponds to the fd you see in lsof output ex: cp /proc/5460/fd/10

Re: Ouch! ibdata files deleted. Why no catastrophe?

2007-09-25 Thread Gary Josack
For future reference: the files do actually continue to be written to. I see this all the time when people delete log files and the space keeps filling up anyway. Daniel Kasak wrote: On Tue, 2007-09-25 at 23:11 -0400, Gary Josack wrote: Well if you can stop all instances of writes to the