InnoDB “log sequence in the future!” crashing, won't start

2011-11-05 Thread Kevin Wang
I stopped mysql only to find that it wouldn't come back up; /etc/init.d/mysql
start only outputs ". . . . . . failed". I've narrowed it down to an issue
with InnoDB: the database starts with innodb_force_recovery = 5 and nothing
lower. When I CHECK TABLE my MyISAM tables, they check fine, but the
connection is dropped when I attempt the same for my InnoDB tables with
force_recovery set. mysql.err and mysql.log are empty.  Here are the
relevant syslog entries from an attempt to start with
innodb_force_recovery set to zero: http://pastebin.com/jzhEuWFu

and here's my my.cnf file: http://pastebin.com/qn6huZ09

Does anyone have any idea what could be going wrong?  I have a hunch it has
to do with the system datetime, but changing it to different timezones
didn't seem to help either.  If it matters, I'm running Debian Squeeze.

I'd be grateful for any advice that you could give.

Thanks,

--Kevin


Re: How do I increase memory allocated to MySQL?

2011-02-04 Thread Kevin Spencer
2011/2/3 Yannis Haralambous yannis.haralamb...@telecom-bretagne.eu:

 what am I doing wrong?

 the query was just

 SELECT * FROM wasfoundin WHERE yakoright LIKE '%geography%'

When you use a leading wildcard symbol, MySQL will do a full table
scan regardless of any indexes you've created.  If you've got a MyISAM
table, I recommend a FULLTEXT index.

http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html
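A sketch of what that looks like, using the table and column from the query above (note that MATCH ... AGAINST is word-based, so its results differ from a substring LIKE; FULLTEXT in MySQL 5.0 requires MyISAM):

```sql
-- Add a FULLTEXT index on the searched column:
ALTER TABLE wasfoundin ADD FULLTEXT INDEX ft_yakoright (yakoright);

-- This can use the index instead of a full table scan:
SELECT * FROM wasfoundin
WHERE MATCH (yakoright) AGAINST ('geography');
```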

--
Kevin.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: export db to oracle

2010-11-17 Thread Kevin (Gmail)

Hello,

It should be possible to connect Oracle to the MySQL (or other) database
using a database link (via a MySQL ODBC driver); the tables could then be
copied using PL/SQL. Alternatively, you could link directly to Oracle and
copy the data using MySQL procedures or scripts (I have more experience on
the Oracle side, which works quite well as described).

This way you avoid the use of external files, CSV, etc. It is very likely
quicker, since you can use bulk loads or SELECT INTO routines once you have
the right table structures and field types in place. This is a technique
that I have used for ETL and data integration, and it is very manageable.

You can trap errors using cursors if the data has anomalies.

Kevin O'Neill

- Original Message - 
From: Johan De Meersman vegiv...@tuxera.be

To: Shawn Green (MySQL) shawn.l.gr...@oracle.com
Cc: Sydney Puente sydneypue...@yahoo.com; mysql@lists.mysql.com
Sent: Wednesday, November 17, 2010 8:58 AM
Subject: Re: export db to oracle



On Wed, Nov 17, 2010 at 1:43 AM, Shawn Green (MySQL) 
shawn.l.gr...@oracle.com wrote:


On 11/16/2010 15:14, Sydney Puente wrote:


Hello,

How can I export a mysql 5.0.45 db to Oracle? mysql is going to stay, but I
need to pass the data to oracle, just so the data can be transferred.
I have carried out a mysql dump. This seems fine: create table etc., about
20 MB in total.

Any ideas? It is on Redhat if that makes a difference.



I suggest you look at the syntax for SELECT ... INTO OUTFILE, too. Dumps
are usually scripts of SQL statements that Oracle may not read
appropriately.



I'm not quite sure which formats Oracle reads in, although CSV is probably
a good guess.

If you disable mysqldump's extended-insert syntax, however, I think the
actual insert statements should be perfectly fine for almost any database.
You may need to tweak the create statements for data types and syntax,
though; it may be easier to just recreate the empty tables by hand.

I think I also have vague memories of an option to use ANSI-SQL standard
syntax, although that might just as well have been some third-party tool.

And, speaking of third-party tools: tOra can (if well compiled) be used to
manage both MySQL and Oracle; maybe that nice tool can help you.
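For what it's worth, mysqldump does have flags along those lines; a sketch (the database name is a placeholder, and --compatible's exact behavior varies by version):

```
# One INSERT per row (slower to load, but maximally portable), ANSI-flavored SQL:
mysqldump --skip-extended-insert --compatible=ansi mydb > mydb.sql
```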

--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel







Re: Swap data in columns

2010-09-22 Thread Kevin (Gmail)

update mydata set column1 = column2, column2 = column1
(works in SQL Server; I can't try MySQL at the moment)
You can select which rows by adding a WHERE clause, obviously.
I suppose that the field values are copied to a buffer which is then written
to the table at the end of the update (or row by row?)
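One caution for MySQL specifically: its manual notes that single-table UPDATE assignments are generally evaluated left to right, so by the time `column2 = column1` runs, column1 already holds column2's old value and no swap occurs. A commonly cited workaround stashes the value in a user variable (a sketch, untested here):

```sql
UPDATE mydata
SET column1 = (@tmp := column1),  -- save the old column1
    column1 = column2,            -- overwrite column1 with column2
    column2 = @tmp;               -- restore the saved value into column2
```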


- Original Message - 
From: nixofortune nixofort...@googlemail.com

To: mysql@lists.mysql.com
Sent: Wednesday, September 22, 2010 5:29 PM
Subject: Swap data in columns



Hi all.

Sorry for very simple question, just can't figure out the solution.
I need to swap data in column1 with data in column2.


+----+---------+---------+
| id | column1 | column2 |
+----+---------+---------+
|  1 | a       | z       |
|  2 | b       | y       |
|  3 | c       | x       |
|  4 | d       | w       |
|  5 | e       | v       |
+----+---------+---------+

Can you achieve this with a simple query?
so for id 1 column1 = 'z' and column2 = 'a' and so on.

Thanks guys,
Igor







AFTER Delete Trigger question

2010-06-18 Thread Kevin Labecot
Hi,

Is there a way to update the same table from a DELETE trigger? I need to call an UPDATE statement when a delete occurs.

Best regards
--
Kevin Labecot, Innovantic
www.innovantic.fr
Tél. : 05.56.45.60.54



Re: Using RAND to get a unique ID that has not been used yet

2010-05-28 Thread Kevin (Gmail)
The separate table for the IDs is probably the best solution, perhaps
counting on caching of the table, with an index on the id value to speed up
the WHERE clause; this checks what numbers are left instead of what numbers
have been used. The disadvantage is that you have to manage a second table
with a million rows!

You could generate a memory table when you open the session, populate it
with all possible values, and then delete all already-assigned values.
You would have to do this only once, and then all possible unused values
would be available. It shouldn't get slower with time (in fact it might
speed up as the used rows are progressively deleted).

It has the advantage that the random function is called only once, whereas
using a single table requires looping until a unique random value is found,
and as the table fills this will get really slow.
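A sketch of that pool idea in SQL (table and column names are invented; untested):

```sql
-- Session-local pool of candidate IDs (TEMPORARY so each session gets its own).
CREATE TEMPORARY TABLE id_pool (id INT PRIMARY KEY) ENGINE=MEMORY;

-- Populate id_pool with every candidate value 1..999999 (e.g. from a
-- numbers table), then remove the values that are already assigned:
DELETE FROM id_pool WHERE id IN (SELECT rand_id FROM mytable);

-- Each draw calls RAND() once, instead of looping until an unused value turns up:
SELECT id FROM id_pool ORDER BY RAND() LIMIT 1;

-- After using the value, delete it from id_pool so it cannot be drawn again.
```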


- Original Message - 
From: Jerry Schwartz je...@gii.co.jp
To: 'Andre Matos' andrema...@mineirinho.org; 'Steven Staples' 
sstap...@mnsi.net

Cc: mysql@lists.mysql.com
Sent: Friday, May 28, 2010 6:51 PM
Subject: RE: Using RAND to get a unique ID that has not been used yet





-Original Message-
From: Andre Matos [mailto:andrema...@mineirinho.org]
Sent: Friday, May 28, 2010 1:44 PM
To: Steven Staples
Cc: mysql@lists.mysql.com
Subject: Re: Using RAND to get a unique ID that has not been used yet

It seems to be a good approach, although I was trying to get this by
querying the table without creating another table to keep the Ids.

[JS] That would be a VERY bad idea. My predecessor designed our system that
way: it would generate a random key, check to see if that key were in use,
and either use it or try again.

As you would expect, the whole process got slower and slower as we ran out
of unique keys. Eventually the whole application became unusable.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com





Thanks,

Andre

--
Andre Matos
andrema...@mineirinho.org




On 2010-05-28, at 12:15 PM, Steven Staples wrote:


If you wanted to go that route, then why not select a random row with
LIMIT 1 from that table, and then delete that row?

SELECT `column` FROM `table` ORDER BY RAND() LIMIT 1;

On a side note, I would still use the auto-increment field, and store this
random number in another field.

Steven Staples




-Original Message-
From: Jim Lyons [mailto:jlyons4...@gmail.com]
Sent: May 28, 2010 11:49 AM
To: Andre Matos
Cc: mysql@lists.mysql.com
Subject: Re: Using RAND to get a unique ID that has not been used yet

If your specs are that specific (IDs must be between 1 and 999999), then
you could create a 999,999-row table with one integer column and prefill
it with the numbers 1 to 999999 in random order.

Then you could write a function that would select and return the first
number in the table, then delete that record so you would not reuse
it.

Once you've done the work of sorting the 999,999 numbers in random order
(which can be done any which way) it's easy, and you don't have to loop an
indeterminate number of times.  Otherwise you would be looping an
increasing number of times as the table fills up.

Jim

On Fri, May 28, 2010 at 10:38 AM, Andre Matos 
andrema...@mineirinho.org

wrote:

Hi All,

I have a table that uses auto_increment to generate the Id automatically,
and it is working fine. However, I need to create a new table where the Id
must be a number generated randomly, so I cannot use auto_increment.

MySQL has a function RAND, so I could use something like this:

SELECT FLOOR(RAND() * COUNT(*)) AS RandId FROM mytable

But let's suppose that the RandId is a number that was already used in the
table. Then I need to run the SELECT again and again until I find a number
that hasn't been used.

Is there a way to have this SELECT loop until it finds a number that
hasn't been used?

The RandId must be only numbers with a length of 6 (from 1 to 999999). No
other character is allowed.


Thanks for any help!

Andre

--
Andre Matos
andrema...@mineirinho.org












--
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com

















Impossible Out Param Return Value

2010-05-26 Thread Kevin Baynes

Using MySql 5.1, I have a very simple table with 1 row. I have a Stored 
Procedure to select if the row exists. I expect the test of this SP to return 
null, but it returns a value! The value is always returned if the 'path' 
matches, regardless of the other values being tested. If the 'path' does not 
match, it returns NULL as expected. Has anyone seen this before?

Thanks, 
Kevin

--
Full explanation below:
--

Table 'file_detail' with 1 row:

id_file_detail, id_file, id_machine, id_user, path
1 , 1  , 1 , 1  , C:\Program Files\BlueZone

--
Stored Procedure to see if row exists:
--

DROP PROCEDURE IF EXISTS `find_file_detail`$$
CREATE PROCEDURE `find_file_detail`
(
IN id_file int(11),
IN id_machine int(11),
IN id_user int(11),
IN filePath varchar(255),
OUT keyOut int(11)
)
BEGIN

SELECT `id_file_detail` 
INTO keyOut 
FROM `file_detail` 
WHERE (`id_file` = id_file 
AND `id_machine` = id_machine 
AND `id_user` = id_user 
AND `path` = filePath) 
LIMIT 1;

END$$

--
SQL used to test the stored procedure:
--

SET @keyOut = NULL;
CALL find_file_detail(99,99,99,'C:\\Program Files\\BlueZone',@keyOut);
SELECT @keyOut;

--

Notice there is 1 row, the values of 99 do not exist, but the path does exist. 
This test will return @keyOut = 1. How is this possible?

If I change the path string to use 'BlueZone1' (so the paths do not match), 
then this test will return @keyOut = NULL as expected. It seems as though the 
only thing getting matched is the path and the other 3 values are being ignored.

Does this make sense to anyone?

Kevin Baynes
Senior Software Developer
Rocket Software





RE: Impossible Out Param Return Value

2010-05-26 Thread Kevin Baynes


The problem occurs because the parameter names are the same as the column
names. I'm not sure why, since the columns are quoted with backticks, but it does.
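A sketch of that fix applied to the procedure from the original message, with the parameters renamed (the `p_` prefix is just a convention):

```sql
DROP PROCEDURE IF EXISTS `find_file_detail`$$
CREATE PROCEDURE `find_file_detail`
(
    IN  p_id_file    INT,
    IN  p_id_machine INT,
    IN  p_id_user    INT,
    IN  p_file_path  VARCHAR(255),
    OUT p_key_out    INT
)
BEGIN
    -- With distinct names, each comparison is unambiguously column vs. parameter.
    SELECT `id_file_detail`
      INTO p_key_out
      FROM `file_detail`
     WHERE `id_file`    = p_id_file
       AND `id_machine` = p_id_machine
       AND `id_user`    = p_id_user
       AND `path`       = p_file_path
     LIMIT 1;
END$$
```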

Thanks,

Kevin

-Original Message-
From: Kevin Baynes [mailto:kbay...@bluezonesoftware.com] 
Sent: Wednesday, May 26, 2010 10:07 AM
To: mysql@lists.mysql.com
Subject: Impossible Out Param Return Value


Using MySql 5.1, I have a very simple table with 1 row. I have a Stored 
Procedure to select if the row exists. I expect the test of this SP to return 
null, but it returns a value! The value is always returned if the 'path' 
matches, regardless of the other values being tested. If the 'path' does not 
match, it returns NULL as expected. Has anyone seen this before?

Thanks, 
Kevin

--
Full explanation below:
--

Table 'file_detail' with 1 row:

id_file_detail, id_file, id_machine, id_user, path
1 , 1  , 1 , 1  , C:\Program Files\BlueZone

--
Stored Procedure to see if row exists:
--

DROP PROCEDURE IF EXISTS `find_file_detail`$$
CREATE PROCEDURE `find_file_detail`
(
IN id_file int(11),
IN id_machine int(11),
IN id_user int(11),
IN filePath varchar(255),
OUT keyOut int(11)
)
BEGIN

SELECT `id_file_detail` 
INTO keyOut 
FROM `file_detail` 
WHERE (`id_file` = id_file 
AND `id_machine` = id_machine 
AND `id_user` = id_user 
AND `path` = filePath) 
LIMIT 1;

END$$

--
SQL used to test the stored procedure:
--

SET @keyOut = NULL;
CALL find_file_detail(99,99,99,'C:\\Program Files\\BlueZone',@keyOut);
SELECT @keyOut;

--

Notice there is 1 row, the values of 99 do not exist, but the path does exist. 
This test will return @keyOut = 1. How is this possible?

If I change the path string to use 'BlueZone1' (so the paths do not match), 
then this test will return @keyOut = NULL as expected. It seems as though the 
only thing getting matched is the path and the other 3 values are being ignored.

Does this make sense to anyone?

Kevin Baynes
Senior Software Developer
Rocket Software







when calling mysql_real_connect from the c api, I get a malloc() memory corruption error.

2008-10-29 Thread Kevin Stevens
Hello,
I am encountering a problem I just can't seem to figure out and I am out of 
ideas. I can compile and run fine on one linux box running Mysql 5.1.23-rc, but 
as soon as I scp the binary and its required libs to another machine which has 
identical hardware and only a slightly upgraded distro (but also running 
5.1.23-rc), I get a glibc malloc(): memory corruption: *** error, which traces 
back to the mysql_real_connect() call. I ran ldd -v on the binary on both 
machines and there are some differences, but they don't look important. I have 
run this binary on other machines before with no issues.

I can connect to the database on the troubled machine both locally from the 
client and through my program from a different machine, but my program craps 
out when I run it locally on this new box. What could be going wrong? The 
database connection is one of the first things the program does- before we do 
any significant allocation of memory, so I really do not believe that this is a 
problem with my program (it has also been continually tested with many 
different data sets).

I checked the bug database and this forum and could not find any relevant 
information, if you have any ideas, please let me know! Below is the output 
from my program, and the code I am using to connect- am I doing something wrong 
there? If there is any more information I can provide, please let me know.

Thank you,
-Kevin



Program output
==

Starting Engine...
Reading Control Data
Using configfile: /some.ini
Populating Data from Database...
DB Connection Handle is null, going to create a new connection!
We have created the handle... initializing.
We have initialized the handle, connecting...
Connecting with parameters: Server: localhost Login: root Pass: X 
DBDatabase: mismiats
*** glibc detected *** ./ksMismi2.out: malloc(): memory corruption: 
0x011d4b40 ***
=== Backtrace: =
/lib/libc.so.6[0x7f8f6919da14]
/lib/libc.so.6(__libc_malloc+0x90)[0x7f8f6919f360]
libmysqlclient.so.16(my_malloc+0x32)[0x7f8f6a6a5032]
libmysqlclient.so.16(vio_new+0x2f)[0x7f8f6a6cd15f]
libmysqlclient.so.16(mysql_real_connect+0xe43)[0x7f8f6a6c9fe3]
./ksMismi2.out[0x51671e]
./ksMismi2.out[0x51678b]
./ksMismi2.out[0xd5cd68]
./ksMismi2.out[0x57d32e]
./ksMismi2.out[0x57dc41]
./ksMismi2.out[0x49a251]
/lib/libc.so.6(__libc_start_main+0xf4)[0x7f8f691471c4]
./ksMismi2.out(__gxx_personality_v0+0x1c2)[0x4994ea]
=== Memory map: 
0040-01012000 r-xp  fe:00 2048027 /usr/local/engine/bin/ksMismi2.out
01112000-011a9000 rwxp 00c12000 fe:00 2048027 /usr/local/engine/bin/ksMismi2.out
011a9000-011f3000 rwxp 011a9000 00:00 0 [heap]
7f8f6000-7f8f60021000 rwxp 7f8f6000 00:00 0
7f8f60021000-7f8f6400 ---p 7f8f60021000 00:00 0
7f8f64c3-7f8f64c3a000 r-xp  fe:00 1171488 /lib/libnss_files-2.7.so

[ snipped... ]

7fff72b7d000-7fff72b92000 rwxp 7ffea000 00:00 0 [stack]
7fff72bfe000-7fff72c0 r-xp 7fff72bfe000 00:00 0 [vdso]
ff60-ff601000 r-xp  00:00 0 [vsyscall]




Code
==


MYSQL* dbConn::getDBConnectionHandle()
{
    my_bool reconn = 1;

    if (DBconnectionHandle != NULL)
    {
        while ( !mysql_real_connect( DBconnectionHandle, DBServer.c_str(),
                                     DBLogin.c_str(), DBPassword.c_str(),
                                     DBDatabase.c_str(), DBPort, NULL,
                                     CLIENT_MULTI_STATEMENTS ) )
        // Client multi statements used for batched inserts of trades.
        {
            cout << "ERROR: Connecting to MYSQL. ERROR Reported from Server is: "
                 << mysql_error(DBconnectionHandle) << endl;
        }

        mysql_options( DBconnectionHandle, MYSQL_OPT_RECONNECT, (const char*)&reconn );
        // must do twice, bug in mysql lib

        return DBconnectionHandle;
    }
    else
        cout << "DB Connection Handle is null, going to create a new connection!"
             << endl;

    DBconnectionHandle = new MYSQL();
    cout << "We have created the handle... initializing." << endl;

    if ( mysql_init(DBconnectionHandle) == NULL )
    {
        cout << "ERROR: initializing connection handle: Error from Mysql: "
             << mysql_error(DBconnectionHandle) << endl;
        return NULL;
    }

    cout << "We have initialized the handle, connecting..." << endl;
    cout << "Connecting with parameters: Server: " << DBServer << " Login: "
         << DBLogin << " Pass: " << DBPassword << " DBDatabase: " << DBDatabase
         << endl;

    mysql_options( DBconnectionHandle, MYSQL_OPT_RECONNECT, (const char*)&reconn );

    while ( !mysql_real_connect( DBconnectionHandle, DBServer.c_str(),
                                 DBLogin.c_str(), DBPassword.c_str(),
                                 DBDatabase.c_str(), DBPort, NULL,
                                 CLIENT_MULTI_STATEMENTS ) )
    // Client multi statements used for batched inserts of trades.
    {
        cout << "ERROR: Connecting to MYSQL. ERROR Reported from Server is: "
             << mysql_error(DBconnectionHandle) << endl;
    }

    mysql_options( DBconnectionHandle, MYSQL_OPT_RECONNECT, (const char*)&reconn );
    // must do twice, bug in mysql lib
    cout << "We have connected" << endl;
    return DBconnectionHandle;
}


Version Info for mysql: Ver 14.14 Distrib 5.1.23-rc, for redhat-linux-gnu 
(x86_64) using readline 5.1


Re: Appropriate Design

2008-09-17 Thread Kevin Hunter
At 5:46am -0400 on Wed, 17 Sep 2008, Jeffrey Santos wrote:
 I'm developing an application
 that will require information from various sources.  Since what 
 might be considered required information about those sources will 
 vary (occasionally dramatically), I'm unsure as to the correct 
 design for the database tables.  In other words I'm left with (I 
 believe) two choices:
 
 1.   Create a source table that will contain, among other things, 
 every possible field for a source type and sort out what is 
 considered required information at the application level.

 If possible, maybe a general conventional wisdom statement 
 would greatly help my education on these matters!

This is a common pitfall for newcomers to data modeling.  The temptation is
either to try to think of everything up front, which is almost always an
impossible task, or to model only a minimal case to jump-start the coding
process and then bolt on model components later as necessary.

The second pitfall is keeping model logic in the application.  This
violates two similar principles: Single Point of Authority and
Don't Repeat Yourself.

You are using a database so that it can maintain the structure and
integrity of your data.  There is absolutely no other compelling reason
to use a DB.  (If you didn't need integrity and structure, you'd use a
simpler and mucho faster flat file.)  Let the DB do its job and be the
single-point-of-authority.  The application should certainly do things
the right way, putting data where it needs to go, but it should not be
the application's /responsibility/ to keep data integrity and structure.
 If you rely on your application to maintain your structure, you presume
that your programmers are perfect and will think of every little detail.
 (They're not and they won't.  I promise.)

As a general technology, databases receive a lot of work so that
applications developers don't have to sweat the little details.  Like
making sure that every user in a table has a last name.  Like making
sure the user typed a 0 instead of an O.  Like defining constraints so
that developers don't have to make sure an account has enough money to
make a withdrawal.  All they need to know is that it didn't go through.

The other principle of DRY also holds true.  At the point you have your
application try to maintain data constraints, you will inevitably have
lots of repeated or similar code to maintain similar data cases.  That's
a different kind of nightmare.  Similarly, with the DB, it's silly to
define multiple columns for similar data.  That's spreadsheet think.
Like col_toyota, col_ford, col_chevy, col_lexus, col_buick.  No.  Make a
single column as a foreign key to another table.
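A minimal sketch of that last point (all names invented for illustration):

```sql
-- Spreadsheet think: one column per make.
--   CREATE TABLE owners (name VARCHAR(50), col_toyota INT, col_ford INT, ...);

-- Normalized: one column as a foreign key to another table.
CREATE TABLE makes (
    make_id   INT AUTO_INCREMENT PRIMARY KEY,
    make_name VARCHAR(50) NOT NULL UNIQUE
) ENGINE=InnoDB;

CREATE TABLE cars (
    car_id  INT AUTO_INCREMENT PRIMARY KEY,
    make_id INT NOT NULL,
    FOREIGN KEY (make_id) REFERENCES makes (make_id)
) ENGINE=InnoDB;
```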

Some keywords to use with Google:

normalize
normalization
foreign keys
foreign key constraints
innodb

Wikipedia is often a good starting point.

Once you've learned the concept, I can't stress enough that you should
normalize, normalize, normalize.

 2.   Create a few source tables detailing the required information about
 each source type.  The only way I can think of doing this is something along
 the lines of:

 and then pull only the appropriate data from the database at query time
 using JOINs.

This is closer to a better approach.  That said, be more specific about
the problem you're trying to solve and folks may be better able to
provide useful advice.

Kevin




Re: Normalization vs. Performance

2008-08-28 Thread Kevin Hunter
At 2:11pm -0400 on Tue, 26 Aug 2008, John Smith wrote:
 So how bad is this? The mentioned query will be the query which is used
 the most in my application (yes, it is going to be a forum).
 Should I break normalization and save the date of the root in each node row?

My recommendation is no.  Normalization is a Good Thing, and you'll be
hard-pressed to convince me otherwise.  In the long run, you (and the
programmers who follow you) will be thankful if you maintain a normalized
core data model.  Seriously.

But speed and performance are the here-and-now issues while DB
technology catches up to demands.  Have you looked at temporary tables
or materialized views?  These might help you in this endeavor.
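MySQL has no built-in materialized views, but the idea can be emulated with a summary table refreshed periodically; a sketch with invented names, since the poster's schema isn't shown:

```sql
CREATE TABLE thread_summary (
    root_id     INT PRIMARY KEY,
    root_date   DATETIME NOT NULL,
    reply_count INT NOT NULL
) ENGINE=InnoDB;

-- Rebuild (or refresh from a trigger/cron job) so the hot query reads one
-- row per thread instead of re-joining every node to its root:
REPLACE INTO thread_summary (root_id, root_date, reply_count)
SELECT root_id, MIN(created_at), COUNT(*)
FROM nodes
GROUP BY root_id;
```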

Kevin




Re: SELECT N records from each category

2008-08-11 Thread Kevin Waterson
This one time, at band camp, Perrin Harkins [EMAIL PROTECTED] wrote:

 In your case, you might be able to just cheat it with some
 MySQL-specific LIMIT stuff:

Thanks very much Perrin, this is most useful and I can take it from 
here.

Kind regards
Kevin




SELECT N records from each category

2008-08-10 Thread Kevin Waterson
I have 3 tables (schema below) with categories, questions and answers.
Each category can of course have many questions in it. Also, the
answers are multiple choice, so each question can have several
related answers. I am trying to achieve 2 goals.

1) SELECT N questions and the related answers from each category.
The result needs to have say, 5 questions from each category.
Of course, the answers for each of the questions needs to be there also.

2) SELECT N questions and related answers from specified categories.
This time, a simple WHERE test_category_id IN(1,2,3) will do I think.

I can select ALL the questions and related answers, but this is as far
as I get. I think I need some group-wise additions.. or something

SELECT
    test_question_id,
    test_question_text,
    test_answer_id,
    test_answer_text,
    test_answer_correct
FROM test_questions q1
LEFT JOIN
    (SELECT
        test_answer_id,
        test_answer_text,
        test_question_id,
        test_answer_correct
     FROM
        test_answers) AS q2
USING (test_question_id);

The schema looks like this..

CREATE TABLE test_categories (
  test_category_id int(11) NOT NULL auto_increment,
  test_category_name varchar(450) NOT NULL default '',
  PRIMARY KEY  (test_category_id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE test_questions (
  test_question_id int(11) NOT NULL auto_increment,
  test_category_id int(11) default NULL,
  test_question_text varchar(254) NOT NULL,
  test_question_code varchar(1024) NOT NULL,
  PRIMARY KEY (test_question_id),
  FOREIGN KEY (test_category_id) REFERENCES test_categories (test_category_id)
  ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE test_answers (
  test_answer_id int(11) NOT NULL auto_increment,
  test_question_id int(11) NOT NULL,
  test_answer_text varchar(254) NOT NULL,
  test_answer_correct tinyint(4) NOT NULL,
  PRIMARY KEY  (test_answer_id),
  FOREIGN KEY (test_question_id) REFERENCES test_questions (test_question_id)
  ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
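For goal 1, one technique that fits this schema is the user-variable ranking trick (a sketch; MySQL does not guarantee the evaluation order of user variables within a SELECT, so treat this as a starting point, not a definitive solution):

```sql
SET @cat := NULL, @rank := 0;

-- Number the questions within each category, then keep the first 5:
SELECT test_question_id, test_category_id
FROM (
    SELECT q.test_question_id,
           q.test_category_id,
           @rank := IF(q.test_category_id = @cat, @rank + 1, 1) AS rnk,
           @cat  := q.test_category_id AS cat_tracker
    FROM test_questions q
    ORDER BY q.test_category_id, q.test_question_id
) AS ranked
WHERE rnk <= 5;
```

The related answers can then be joined onto those question ids, as in the original query.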

Any help in this matter hugely appreciated,
Kind regards
Kevin




Re: SELECT N records from each category

2008-08-10 Thread Kevin Waterson
This one time, at band camp, Peter Brawley [EMAIL PROTECTED] wrote:

 See Within-group quotas (Top N per group) at 
 http://www.artfulsoftware.com/infotree/queries.php.

Yes, I have seen that; very clever. How does it relate to my situation?
Simply pointing to vague references is not helpful.

Kevin




Re: SELECT N records from each category

2008-08-10 Thread Kevin Waterson
This one time, at band camp, Peter Brawley [EMAIL PROTECTED] wrote:


 Vague? Not in the slightest. General? Indeed, by design.

I have read this before.  It uses a single table; I am using multiple
tables.  I am not selecting the TOP 10 or whatever, and that example has no
relationships, whereas the schema I presented does.

If you don't know the answer, just leave it out, but thanks for your efforts.
If anybody can steer me in the right direction here, that would be great.

Kevin




Re: Adding index to Memory table LOSES ROWS!

2008-07-27 Thread Kevin Hunter
At 12:42p -0400 on Sat, 26 Jul 2008, mos wrote:
 At 09:14 AM 7/26/2008, you wrote:
 The reproducible part is very important, because without it, it's
 suspect to be just your individual case, as with a bug in *your*
 application code, your hardware, or generally something in your setup.
 
 Well, I thought it might be my code too. That's why when the memory
 table was built (without the index), I went to a SqlMgr and counted the
 rows. None of the rows were missing. Then I did an Alter Table and added
 one index to the memory table, and sure enough when it was finished 75%
 of the rows were missing and no error was reported. Only the rows for
 index values A to E were in the table so the indexing lost rows.

That suggests to me a couple of things, both bugs with MySQL:

- an out of memory error - MySQL should *not* fail, but tell you it
  can't complete and return you to a known state.  An RDBMS should
  *never* lose data.  Ever.

- a piece of data in one of the rows of processing that MySQL doesn't
  like, and therefore gives unexpected results.  This is definitely a
  bug as this should not happen to begin with, and An RDBMS should
  *never* lose data.  Ever.

Summary: I don't know what's up and have not encountered this.  But if
you can, create a small test case that can reproduce the error.  Then
fill out a bug at http://bugs.mysql.com/ .  Loss of data is absolutely a
bug, and a critical one.

A quick (< 3 min) perusal of the currently open bugs did not return any
meaningful results.

Kevin




Re: Adding index to Memory table LOSES ROWS!

2008-07-26 Thread Kevin Hunter
At 5:52p -0400 on Fri, 25 Jul 2008, mos wrote:
[Adding index to memory table silently loses data]

First thing, what version are you using?

Second thing, although it would still be broken, did you read the MySQL
docs to make sure that the Memory table type supports the index *type*
you attempted to add?

 1) Why does adding an index lose the rows?
 2) Why wasn't an error reported?

[ squint ]

My first thought is that MySQL and the Memory table type are used by
*lotsa* folks, with success, so I find it difficult to believe that this
wouldn't have been caught well before now.

Are you positive this problem isn't hardware related?  A few bad memory
locations could easily explain this.

If you're certain this is a bug in MySQL, creating a small,
*reproducible* test case and reporting this through the proper channels
(http://bugs.mysql.com/) will go much further to fixing the problem.

The reproducible part is very important, because without it, it's
suspect to be just your individual case, as with a bug in *your*
application code, your hardware, or generally something in your setup.

Kevin




Re: how to Get file modified time and date of file by using builtin function or procedure in sql?

2008-07-24 Thread Kevin Spencer
On Thu, Jul 24, 2008 at 1:17 AM, Sivasakthi [EMAIL PROTECTED] wrote:
 Hi All,

 how to Get file modified time and date of file by using builtin function or
 procedure in sql?

In a related thread from earlier today you were advised that any
interaction with the filesystem should be done via a programming
language of your choice.  Did you have any luck writing a program to
do just that?

Kevin.
--
[EMAIL PROTECTED]




Re: Running 2 versions of MySQL on same server

2008-07-01 Thread Kevin F. O'Riordan

Hi James,

 If I telnet to port 3306, I get the old version as I should and when
 I telnet to the new port I get the new version as I should, but if I
 do a:

 mysqladmin -P (new port) variables

 I get the old variables.

I'd suspect mysqladmin's using the unix socket, and ignoring your -P
option; try adding the option --protocol=TCP.

You may well find a similar issue with phpmyadmin - there's a
configuration option:

  $cfgServers[...]['connect_type']  = 'tcp';

which might solve this.

hth,
Kevin




Re: Looking for a Mysql Guru/DBA

2008-05-22 Thread Kevin Hunter
At 1:43p -0400 on Wed, 21 May 2008, bruce wrote:
 The basic goal of the project is to be able to track the sites that I'm
 visiting via a Firefox extension. I want to be able to implement something
 like the breadcrumbs extension, but I want to be able to go a lot further.

If this is for a one-per-person kind of thing (i.e. only a single Firefox

installation will use a single DB instance at any time), MySQL may be
overkill.  You may want to look towards something smaller and embedded,
like SQLite[1].  It'll be much less overhead, in both installation for
users and memory use for your extension.  In fact, Firefox 3.0
already includes SQLite for the smart url bar they've got going on.

 If you're interested, and you're reasonably good at mysql, and devising
 database structures/schema then let's talk!

You will still want to talk to someone about getting the right schema in
place, however.

Kevin

[1] http://www.sqlite.org/




Re: Table Structure

2008-05-16 Thread Kevin Hunter
At 2:49a -0400 on Fri, 16 May 2008, Ananda Kumar wrote:
 it goes without saying

Eh, not to those who don't know.  Hence the list and question.  :-D

Krishna, the short answer is that it depends on your data, and the
queries against it that you run.  Test/benchmark on your own DB and data
to see what solves your issue.

 1. primary key and unique key would do unique scan which is 
 fastest of all scan

Sort of.

First, let's get terminology correct: keys, indexes.

Terminology: keys are nothing more than data.  Keys are used to identify
rows for manipulation, either directly by a SQL statement/request, or
indirectly through the DB SQL parser.  A key is not necessarily unique.
 For example

SELECT name FROM person WHERE height = 70;

is a request for the names of people who are 5'10" tall.  The
identifying key in this statement then, is height.  All people who are
70 inches tall are keyed for selection.

A special case of a key is a primary key.  This tells the DB that the
data in this column(s) uniquely identifies each row.  Assuming that 'id'
was created as a PRIMARY KEY,

SELECT name FROM person WHERE id = 7;

will return exactly 1 or 0 rows because id is guaranteed to be unique by
the DB.  Further, a primary key is also implicitly guaranteed to be not
null.

A unique key is a bit of a misnomer, and I'm sorry the MySQL
documentation perpetuates it.  It is more accurate to think of the
column as being constrained such that every row in that column(s) is
unique.  Thus, unique is a /property/ of a column, not its defining
characteristic.

Terminology: indexes are an /implementation/ of the DB, usually for
speed and to help maintain data integrity.  When you designate a column
(or columns) as the primary key for the rows of a table, MySQL will
automatically create an unique index on that column.  This way, for
example, when you insert data into the table, it won't have to scan the
entire table to make sure that the new data is unique.  It can just walk
a couple of levels in the index to ensure uniqueness.
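The uniqueness check described above can be watched in miniature with SQLite (used here only because it ships with Python; MySQL behaves the same way for a PRIMARY KEY): the index backing the key rejects a duplicate insert without scanning the table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO person VALUES (7, 'alice')")

try:
    # A second row with id = 7 violates the unique index behind the key.
    conn.execute("INSERT INTO person VALUES (7, 'bob')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```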

For speed however, it depends on what the DB is asked, and how the index
was defined.  MySQL is somewhat limited in that you pretty much get
only the index type the DB has: the b-tree index.  But that's
implementation specific.  Oracle, for example, has other index types,
(hash, bitmap).  Modulo the internal specifics of the MySQL
implementation, the rough idea of a unique not null b-tree index (e.g. a
primary key column) is that the DB ostensibly need only walk O(lg(n))
steps to find a key.  A hash index, then, would presumably be the
fastest at O(1).

 2 Index would do range scan, which would be comparitivly slower.

Not exactly.  This, again, depends on implementation and the data
against which queries are run.  An index is an index.  Remember, a
primary key is implemented by the MySQL engine as a unique not null
b-tree index.  In fact, trying to get low-cardinality data from a table
via indexes could even be faster than the primary key.  Consider a table
with 1,000,001 rows of favorite colors of people.  Let's say 1 million
people like the color blue, but only 1 person likes the color puce.
 An index for that single person would be a huge win.  Depending on the
DB (read: implementation) one could make this particular style of skewed
data even faster.  Postgres, for example, has partial-indexes.

As I said at the beginning of this email, I suggest the OP do some
personal investigation and tests, because understanding indexes /and
your own data/ is not as simple as a checklist.

Kevin

P.S. I highly suggest you Google some of the concepts about which I
wrote.  Wikipedia is fairly good start.  Also, don't be afraid to read
other DB's docs.  I've learned a great deal from comparing different
DBs, documentation sources, and interacting with multiple communities.




Re: Table Structure

2008-05-16 Thread Kevin Hunter
At 5:32a -0400 on Fri, 16 May 2008, Ananda Kumar wrote:
 2 Index would do range scan, which would be comparitivly slower.

 Not exactly.  This, again, depends on implementation and the
 data against which queries are run.  An index is an index.
 Remember, a primary key is implemented by the MySQL engine as
 a unique not null b-tree index.  In fact, trying to get
 low-cardinality data from a table via indexes could even be
 faster than the primary key.  Consider a table with 1,000,001
 rows of favorite colors of people.  Let's say 1 million
 people like the color blue, but only 1 person who likes the
 color puce. An index for that single person would be a huge
 win.  Depending on the DB (read: implementation) one could make
 this particular style of skewed data even faster.  Postgres,
 for example, has partial-indexes.
 
 As stated by you as above. Here puce is a clear case of just one
 record being read, which is very much similar to reading a record
 with a PRIMARY KEY, which is supposed to be fast for HIGH
 CARDINALITY, if there were 1/4 million people who liked puce, then
 the index would do a RANGE SCAN and would be slow.

Eh, again, it's hard to say.  In my contrived example, puce is a clear
case of an index win via the fact that it's unique, but it's *not* like
the primary key because every other key isn't unique.  A primary key
enforces high-cardinality, so selecting individual rows is ostensibly
fast, but now lets select a range of rows.  What if we made ten people
like the color red and SELECTed them?  How about 2,000 green?  100,000
yellow?  An index perusal would still weed out >= 900,000 rows, so is it
a win?  Try it and find out.  I don't know the answer.

Getting data from the index isn't free.  It /can/ be a win when it helps
remove a statistically strong portion of the data, dependent on disk
speed, cpu speed, and a data cardinality.  Sometimes, depending on the
what rows and data it takes to satisfy a query, it's actually better to
ignore the indexes and do a range scan of the table.

The point is that it isn't as simple as snap-your-fingers,-you're-done,
which I gather we both understand.  I suggest again to the OP to do some
tests on /personal data sets/ and see what the answers are.

Besides, who likes puce anyway?  ;-)

Kevin




Re: R: Re: FW: Re: what is a schema? what is a database?

2008-04-07 Thread Kevin Spencer
On Mon, Apr 7, 2008 at 1:24 AM, Moon's Father [EMAIL PROTECTED] wrote:
 Schema is a collection of databases.

A schema is a definition of tables  fields and their relationship.

Kevin.
--
[EMAIL PROTECTED]




Re: Comma's in data?

2008-03-08 Thread Kevin Hunter
At 12:23a -0500 on Sat, 08 Mar 2008, obed wrote:
 Use \ to escape.
 
  mysql> insert into table1 values ('algo\,otra');

As an aside, I'm curious why the parser doesn't understand that the
comma is part of the string in this context?  It's already between two
single quotes ... ?  Shouldn't that be enough?

Kevin




Moving to another hard drive

2008-02-20 Thread Kevin Chen
I am trying to relocate MySQL and a database to another hard drive on  the
same system. According to this page,
http://dev.mysql.com/doc/refman/5.0/en/upgrading-to-arch.html,  I get the
impression that I should be able to do this by copying the data  files to the
new MySQL installation. However, that doesn't seem to be  sufficient as MySQL
does not see the database that was copied over.

   

Re: basic style shema question

2008-01-18 Thread Kevin Hunter

At 11:44a -0500 on 18 Jan 2008, Alex K wrote:

To summarize one table vs. many tables with one to one relations?


As per usual, it depends on your needs.  For most flexibility, and to 
give the DB the best chance of choosing the best plan for the possible 
requests I might make in the future, I generally tend towards 
normalization, not denormalization.


The question is: what do you want to do with the schema?

Kevin




Re: basic style shema question

2008-01-18 Thread Kevin Hunter
Hmm.  If we're talking pure DB theory, then the whole point is to apply 
the DRY principle as much as possible.  At the point you have multiple 
copies of the same data, unless your programmers are perfect (and they 
aren't, I promise), you *will* have stale data.  Better to have only one 
place to update things.


Some other questions/thoughts that come to mind:
- Will it ever be possible for a company to have more than one contact
  email address?
- Do you /really/ want to store the user's password in your DB?  Look
  towards salts and SHA/MD5 sums.
- If you have more than a few people with the same company, are you at
  all worried about disk space?
- Disk contention is often the bottleneck, if your DB can't fit entirely
  in main memory.  Having less places to update means better disk
  access.
- Usage patterns, as you described, are also a concern.  Logging in is
  an issue as is accessing other data.  What's the ratio of
  currently-logging-in-users to other data requests?  No need to pull
  the entire data row if you're not going to use most of the disk pages.
- Will you have mainly INSERT queries, mainly SELECT, UPDATE,
  DELETE, a combination?  DB's and engines are better suited to
  different types of workloads.
- Are you worried about integrity of your data?  How many foreign
  key constraints will you want?

These all tie in together, and generally beg the question of *your* 
usage patterns.  If this is a project of any import, I can almost 
guarantee that what you think will happen will not align with what 
actually happens.  With that in mind, having the agility of multiple 
tables with correct data (read: use foreign key constraints) will likely 
behoove you.


You might want to take a look at some articles on normalization and 
schema design.  Wikipedia is a good starting place.


Kevin

At 12:30p -0500 on 18 Jan 2008, Alex K wrote:

Well the basic information, company description and personalized
options will be selected many times (whenever a user submits a query).
It will basically be show on the result page of the search engine.

The user's login / password well is used to login, then the user may
update the company basic information, description and personalized
options. These updates may happen sporadically though. Once every 3
minutes these fields are selected again in order to update the search
engine index.





Re: MySql at 100% CPU

2008-01-17 Thread Kevin Hunter

At 3:25a -0500 on 17 Jan 2008, Ben Clewett wrote:
I need to know why MySql does this.  If any person knows how I can 
identify the problem, please let me know!


I think this package helped another person out on this list a couple 
days ago:


http://debaday.debian.net/2007/12/26/mytop-a-top-clone-for-mysql/

HTH,

Kevin




Re: Prepared SQL statements - Faster performance?

2008-01-17 Thread Kevin Hunter
I can't speak to the exact internals of MySQL, but in database practice 
one *generally* uses prepared statements for two reasons:


1. Security
2. Speed

If your system is at all exposed to the outside world *and you at all 
value your data*, your biggest concern should /absolutely/ be security. 
 By using prepared statements, you reduce the number of places to 
secure by letting the database do your escaping for you (e.g. database 
developer's know better what to escape than you).  It's actually better 
than that, but that's a simple explanation.


If you're not exposed to the Internet at large, and speed is your 
concern, prepared statements can give a speed improvement, but not 
always.  The reason for the alleged speed improvement is that generally 
one prepares the statement once and then aggregates the cost of 
preparation over more than one execution.  Juxtapose with reparsing and 
executing for every set of arguments, where the plans are largely the 
same: the parsing phase is duplicate work.


But preparation is not always a win.  Say I have a table of stored 
materials.  Picking the perfect plan is highly dependent on what data 
I have, and what data I want.  Take this query as an example


PREPARE( SELECT material FROM stock WHERE mass = ? );

Presumably, just having an index on 'mass' will make things faster, 
right?  That makes an assumption that I have an evenly distributed set 
of data.  What if 5 billion items in my warehouse are 5kg, and 3 items 
are 10kg?  If I plug in 5kg, my indexes are useless, and are in fact a 
loss to use.  If I plug in 10kg, my indexes are a huge gain.  Without 
knowing before hand what data I'll need, the planner will likely make a 
poor decision in favor of the best general decision.
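For readers new to the idea, a parameterized (prepared-style) statement separates the SQL text from its arguments, so the text is parsed once and reused. Sketched below with Python's DB-API and SQLite (any driver looks much the same; the stock table mirrors the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (material TEXT, mass REAL)")

# The statement text is fixed; only the bound arguments change per execution.
rows = [("iron", 5.0), ("lead", 5.0), ("gold", 10.0)]
conn.executemany("INSERT INTO stock (material, mass) VALUES (?, ?)", rows)

# The same parameterized SELECT, reused with different arguments.
query = "SELECT material FROM stock WHERE mass = ?"
heavy = [r[0] for r in conn.execute(query, (10.0,))]
print(heavy)  # ['gold']
```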


To answer what you can expect from planning: YMMV.  I have had code 
bases improve from 90 minutes to 5 minutes on the same hardware.  I have 
also had code bases show decreased performance by implementing planning.


As the previous poster said, the only way you'll know what *your* speed 
gain/loss will be, is to do it and find out.  You may be surprised.


Kevin

At 10:56p -0500 on 15 Jan 2008, Moon's Father wrote:

To know the exact speed improvement, you have to run a test yourself
on your own hardware machine.

On Jan 15, 2008 11:39 PM, mos [EMAIL PROTECTED] wrote:


At 11:25 AM 1/14/2008, Jay Pipes wrote:

Are you using the PREPARE STATEMENT server-side syntax or an emulated
prepared statement like in PDO?

-jay

Jay,
 Currently I'm not using prepared statements at all. Before I switch,
I wanted to know how much of a speed improvement I can expect, if any.






Re: MySql at 100% CPU

2008-01-17 Thread Kevin Hunter

At 12:08p -0500 on 17 Jan 2008, Ben Clewett wrote:
The package you show gives the same output as the 'SHOW PROCESSLIST' 
output, which I included in my previous email :)


Heh, that'll teach me to actually /use/ the package before recommending 
it. :-P  (I just found out about it when that article came out, and 
haven't yet had a chance to try it out.)


Good luck with your problem!

Kevin




Re: Performance problem - MySQL at 99.9% CPU

2008-01-08 Thread Kevin Hunter

At 6:47a -0500  on 08 Jan 2008, Gunnar R. wrote:

Concerning slow queries, it seems there's a couple of different queries
that's being logged.


I haven't tried it yet, but this recently went by on debaday.debian.net:

mytop: a top clone for MySQL

http://debaday.debian.net/2007/12/26/mytop-a-top-clone-for-mysql/

Kevin




Re: Performance problem - MySQL at 99.9% CPU

2008-01-08 Thread Kevin Hunter

At 3:51p -0500 on 08 Jan 2008, Gunnar R. wrote:
That tool tells me 100% of the data is read from memory, not a byte from 
disk... would there still be any point in getting more memory?



Any suggestions to where to go from here?


I dunno.  My hunch is that you could do some query optimization.  Have you 
checked the index usage on your queries?  Although MySQL's internal 
planner is pretty weak for anything more complex than simple statements, 
there are some excellent tools available to help you tune your queries.


http://dev.mysql.com/tech-resources/articles/using-new-query-profiler.html

A quick google with these keywords "mysql index hints query profiler" 
seemed to provide a good jumping off point.


HTH,

Kevin




Re: Select mit Subselect Problem

2007-12-17 Thread Kevin Hunter
At 7:45a -0500 on 17 Dec 2007, Marcus Füg wrote:
 Table Sales (s) with lots of sales data.
 Columns: ArtikelNr, Kassenbon, HändlerID, Datum
 
 Now, for an article that was bought, all articles bought along with
 it should also be listed, i.e. what was purchased together with it.
 
 So far I have done this in PHP, but with 5 million rows of sales
 data PHP gives up.  Hence my question: can I do this more
 efficiently with MySQL?

 [snip]

 Then the results are summed up.
 
 Can this be done more efficiently?

1. This list is largely an English list, so you may have better luck
asking your question in English.  Alternatively, you might ask your
question on this list: http://lists.mysql.com/mysql-de

2. How can you more efficiently use MySQL in this scenario?  It depends
on exactly what you're trying to do.  If you can describe the problem
rather than exactly what you're trying to do, we may be able to better help.

That said, I'm guessing you're looking for GROUP BY and ORDER BY.  Take
a gander at the MySQL docs for the version that you are using.  A
starting point:
http://dev.mysql.com/doc/refman/5.0/en/group-by-functions-and-modifiers.html
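The co-purchase aggregation the poster describes usually takes the shape of a self-join on the receipt number (Kassenbon) plus GROUP BY. A sketch using SQLite via Python, just to show the query shape (column names borrowed from the post, the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (ArtikelNr INTEGER, Kassenbon INTEGER)")
conn.executemany("INSERT INTO Sales VALUES (?, ?)",
                 [(1, 100), (2, 100), (3, 100), (1, 101), (3, 101), (2, 102)])

# Articles bought on the same receipts as article 1, with purchase counts.
sql = """
SELECT o.ArtikelNr, COUNT(*) AS together
FROM Sales s
JOIN Sales o ON o.Kassenbon = s.Kassenbon AND o.ArtikelNr <> s.ArtikelNr
WHERE s.ArtikelNr = 1
GROUP BY o.ArtikelNr
ORDER BY together DESC
"""
result = conn.execute(sql).fetchall()
print(result)  # [(3, 2), (2, 1)]
```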

Kevin




Re: Virtual Hosting Privileges

2007-11-04 Thread Kevin Waterson
This one time, at band camp, Grant Peel [EMAIL PROTECTED] wrote:

 What priveleges should I grant users at setup time?

Perhaps look at the GRANT option

GRANT SELECT, UPDATE, DELETE ON dbname.* TO [EMAIL PROTECTED] IDENTIFIED BY 'password';

you can use GRANT ALL PRIVILEGES or any combination of privileges you like.

Kevin




Re: append on insert

2007-11-03 Thread Kevin Waterson
This one time, at band camp, Baron Schwartz [EMAIL PROTECTED] wrote:

 INSERT INTO item (`key`)
 SELECT CONCAT('my-item',
 (SELECT IF(COUNT(*) = 0, '', CONCAT('-', COUNT(*)))
FROM item WHERE `key` LIKE 'my-item%'));

 Kevin, this design is not first normal form and will cause you trouble. 
   Consider what will happen if you insert my-item, my-item, and then my-ite.

I am aware of the break in 1NF but need to use the item_name as a key. I was hoping
MySQL had some sort of sequence available to auto append to similar keys.

I could of course check if the key exists, and then change the value in the 
application layer to my-item-2, but was hoping to do it in a single query.
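Absent a sequence, the application-layer version Kevin describes can be a small helper that finds the first free suffix. A sketch (the my-item naming is from the thread; the function name is made up here):

```python
def next_key(base, existing):
    """Return base if unused, else base-2, base-3, ... (first free suffix)."""
    if base not in existing:
        return base
    n = 2
    while f"{base}-{n}" in existing:
        n += 1
    return f"{base}-{n}"

keys = {"my-item", "my-item-2"}
print(next_key("my-item", keys))  # my-item-3
```

Note this check-then-insert pattern has a race under concurrent writers unless wrapped in a transaction or retried on a duplicate-key error.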

Kind regards
kevin




append on insert

2007-11-02 Thread Kevin Waterson
I have a table of item
I wish to have the value of the item incremented if there is a duplicate.
I looked at ON DUPLICATE KEY UPDATE but this modifies the existing value.
If the item is my-item and this already exists, I need to make it my-item-2
or even my-item-123 where 123 is the key.

Kind regards
Kevin




Re: append on insert

2007-11-02 Thread Kevin Waterson
This one time, at band camp, Martin Gainty [EMAIL PROTECTED] wrote:

 Kevin-
 
 Declare the column AUTO_INCREMENT
 http://lists.mysql.com/mysql/161930

You can auto increment a non INT field?

Kevin

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Replace on Join

2007-10-29 Thread Kevin Murphy
I'm trying to do a join on two pieces of data that are not quite  
exactly the same. Basic story, I have 2 tables (output from 2  
mainframes) in MySQL where one table has our email addresses as  
@wncc.edu (our old name) and the other has it as @wnc.edu (our new  
name). So here is an abbreviated version of the query that is trying  
to match them, and it doesn't work. Any suggestions?


SELECT * FROM schedule
LEFT JOIN directory ON
(   REPLACE('@wncc.edu','@wnc.edu',schedule.email) = directory.email
)
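One thing worth checking here is the argument order: in MySQL, REPLACE(str, from_str, to_str) takes the subject string first, so the call above passes the column in the wrong position. A quick illustration, using SQLite via Python only because its replace() happens to share the same signature (the sample address is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Wrong order: the literal '@wncc.edu' becomes the subject string,
# so the column value is never touched.
wrong = conn.execute(
    "SELECT replace('@wncc.edu', '@wnc.edu', '[email protected]')").fetchone()[0]

# Right order: subject first, then the substring to find, then its replacement.
right = conn.execute(
    "SELECT replace('[email protected]', '@wncc.edu', '@wnc.edu')").fetchone()[0]

print(wrong)  # @wncc.edu (unchanged)
print(right)  # [email protected]
```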

--
Kevin Murphy
Webmaster: Information and Marketing Services
Western Nevada College
www.wnc.edu
775-445-3326

P.S. Please note that my e-mail and website address have changed from  
wncc.edu to wnc.edu.





Re: [Replication] - load

2007-10-23 Thread Kevin Spencer
On 10/10/07, Ratheesh K J [EMAIL PROTECTED] wrote:
 So every morning all the queries will be slow for the first time on the DB 
 server 2 and thereafter will be served by the query cache as they will be 
 cached and never invalidated until the night.

Sorry for the late reply, I'm trying to get caught up on the posts to
the list.  I *think* your concern is really the first queries of the
day having a cache miss and therefore taking longer than you'd like.
If you know what the queries are going to be in advance (i.e this is a
reporting application and your users typically request the same kind
of thing every day), you could setup a cron job to run in the middle
of the night to execute those queries.  That would seed the query
cache and when users begin to use your system in the morning, they
wouldn't experience a cache miss slowdown.

--
Kevin.




bookmarks and keywords

2007-09-24 Thread Kevin Waterson
I have a table of bookmarks that have keyword searching.
Fields

+-------------+----------------------+--------------------------+------------------+
| bookmark_id | bookmark_title       | bookmark_url             | bookmark_keyword |
+-------------+----------------------+--------------------------+------------------+
|           1 | number one           | http://www.phpro.org     | php              |
|           2 | number 2             | http://www.freshmeat.net | software         |
|           3 | three here           | http://www.php.net       | php              |
|           4 | and four             | http://www.redhat.com    | linux            |
|           5 | the fifth            | http://www.ez.no         | php              |
|           6 | this will do         | http://www.google.com    | search           |
|           7 | something about fish | http://www.youtube.com   | linux            |
|           8 | finally              | http://www.redhat.com    | php              |
+-------------+----------------------+--------------------------+------------------+

I wish to select all bookmark_title and bookmark_url that have the same 
keywords as the bookmark_url 'http://www.redhat.com'.  I do this:

mysql> SELECT child.bookmark_id, child.bookmark_title, child.bookmark_url FROM 
bookmarks AS child JOIN bookmarks AS parent ON parent.bookmark_keyword = 
child.bookmark_keyword WHERE child.bookmark_url='http://www.redhat.com' GROUP 
BY child.bookmark_keyword;

But this returns..
+-++---+
| bookmark_id | bookmark_title | bookmark_url  |
+-++---+
|   4 | and four   | http://www.redhat.com |
|   8 | finally| http://www.redhat.com |
+-++---+

It should also return the results with bookmark_id 1, since the bookmark_url
http://www.redhat.com has two keywords, php and linux, and should match
both.

What should I be looking at here?
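One likely suspect: the WHERE clause pins the *child* side to redhat.com, so only redhat rows can ever come back; filtering the *parent* side instead returns every bookmark sharing a keyword with it. Sketched below with SQLite via Python and a simplified copy of the data, as a guess at the intent rather than a tested fix for the exact schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookmarks (id INTEGER, url TEXT, kw TEXT)")
conn.executemany("INSERT INTO bookmarks VALUES (?, ?, ?)", [
    (1, "http://www.phpro.org", "php"),
    (2, "http://www.freshmeat.net", "software"),
    (3, "http://www.php.net", "php"),
    (4, "http://www.redhat.com", "linux"),
    (5, "http://www.ez.no", "php"),
    (6, "http://www.google.com", "search"),
    (7, "http://www.youtube.com", "linux"),
    (8, "http://www.redhat.com", "php"),
])

# Filter the *parent* on the URL; the child side then yields everything
# sharing any of the parent's keywords (php or linux).
sql = """
SELECT DISTINCT child.id
FROM bookmarks child
JOIN bookmarks parent ON parent.kw = child.kw
WHERE parent.url = 'http://www.redhat.com'
ORDER BY child.id
"""
ids = [r[0] for r in conn.execute(sql)]
print(ids)  # [1, 3, 4, 5, 7, 8]
```

(Note id 6 does not appear, since 'search' is not one of redhat.com's keywords.)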

Kind regards
kevin

-- 
Democracy is two wolves and a lamb voting on what to have for lunch. 
Liberty is a well-armed lamb contesting the vote.




Re: bookmarks and keywords

2007-09-24 Thread Kevin Waterson
This one time, at band camp, Baron Schwartz [EMAIL PROTECTED] wrote:

  I wish to select all bookmark_title and bookmark_url that have the same 
  keywords as
  the bookmark_url 'http://www.redhat.com. I do this..
  mysql SELECT child.bookmark_id, child.bookmark_title, child.bookmark_url 
  FROM bookmarks AS child join bookmarks AS parent ON parent.bookmark_keyword 
  = child.bookmark_keyword WHERE child.bookmark_url='http://www.redhat.com' 
  GROUP BY child.bookmark_keyword;
  
 
 That query is right.

  But this returns..
  +-++---+
  | bookmark_id | bookmark_title | bookmark_url  |
  +-++---+
  |   4 | and four   | http://www.redhat.com |
  |   8 | finally| http://www.redhat.com |
  +-++---+
 
 But, the JOIN table from which those columns come actually has the 
 columns you want from the parent table as well.  It's just that you're 
 selecting the child columns.

I'm not sure I follow; the results should be those with ids of 1, 3, 4, 5, 6, 
7, 8

and if I chose http://www.php.net, which has only the keyword php, then the 
results
would be 1,3,5, and 8

Kind regards
Kevin

-- 
Democracy is two wolves and a lamb voting on what to have for lunch. 
Liberty is a well-armed lamb contesting the vote.




Linking tables

2007-09-15 Thread Kevin Waterson
Hi all, having a spot of bother with a 'keywords' table.
I have a table of articles with an article_id and a table
of categories with a category_id, all is well.
These are linked via a article_link table so that an article
can be a member of multiple categories.

I then have a table of 'keywords' that will relate to each article.
I wish to be able to SELECT all articles associated with the keywords.
No issues so far.

Next, I wish to be able to dynamically add a table to the database, 
eg: a 'cats' table.

If a cat has a keyword of 'tiddles' associated with it. I would then like
to be able to SELECT all related articles, that is, all records in the articles
table, with the keyword 'tiddles'.

MySQL 5 and the db layout looks like this so far.
thanks for looking
Kevin

CREATE table articles (
  article_id int(9) NOT NULL auto_increment,
  user_id int(9) default NULL,
  article_category_id int(11) NOT NULL,
  article_title varchar(50) NOT NULL,
  article_subject varchar(50) NOT NULL,
  article_blurb varchar(250) default NULL,
  article_body text,
  article_update timestamp NOT NULL default CURRENT_TIMESTAMP on update 
CURRENT_TIMESTAMP,
  article_publish_date timestamp NOT NULL default '0000-00-00 00:00:00',
  article_orig_author varchar(50) NOT NULL,
  article_image longblob,
  PRIMARY KEY  (article_id)
);

CREATE table article_categories (
  article_category_id int(11) NOT NULL auto_increment,
  article_category_name varchar(20) NOT NULL,
  article_category_left_node int(11) NOT NULL,
  article_category_right_node int(11) NOT NULL,
  PRIMARY KEY  (article_category_id)
);

CREATE table article_links(
 article_link_id int(11) NOT NULL auto_increment,
 article_id int(11) NOT NULL,
 article_category int(11) NOT NULL,
 PRIMARY KEY (article_link_id)
);

CREATE table keywords(
 keyword_id int(11) NOT NULL auto_increment,
 keyword_name char(20) NOT NULL,
 PRIMARY KEY (keyword_id)
);
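What the layout above still needs is a link between keywords and the things they tag. One common shape is a mapping table carrying the target table's name plus its row id, so a newly added table ('cats', say) needs no schema change. A rough sketch with SQLite via Python, not the poster's exact schema (the keyword_links table and sample ids are made up here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE keywords (keyword_id INTEGER PRIMARY KEY, keyword_name TEXT);
CREATE TABLE keyword_links (
    keyword_id   INTEGER,  -- FK into keywords
    target_table TEXT,     -- e.g. 'articles', 'cats'
    target_id    INTEGER   -- row id inside that table
);
""")
conn.execute("INSERT INTO keywords VALUES (1, 'tiddles')")
conn.executemany("INSERT INTO keyword_links VALUES (?, ?, ?)",
                 [(1, "articles", 42), (1, "cats", 7)])

# All article ids tagged 'tiddles':
sql = """
SELECT l.target_id FROM keyword_links l
JOIN keywords k ON k.keyword_id = l.keyword_id
WHERE k.keyword_name = 'tiddles' AND l.target_table = 'articles'
"""
article_ids = [r[0] for r in conn.execute(sql)]
print(article_ids)  # [42]
```

The trade-off of this polymorphic link is that the database cannot enforce a real foreign key on target_id.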



-- 
Democracy is two wolves and a lamb voting on what to have for lunch. 
Liberty is a well-armed lamb contesting the vote.




Re: Can't fetch result twice in a PHP script

2007-09-15 Thread Kevin Waterson
This one time, at band camp, Mahmoud Badreddine [EMAIL PROTECTED] wrote:


 I have something like
 while($someArr=mysql_fetch_row($result))
 {
 ...some code.
 }
 
 The first time I call mysql_fetch_row , the code inside the while loop gets
 executed, but not the second time. What is the reason behind that.

The internal result pointer is now at the end of the result set.  Rewind it
with mysql_data_seek($result, 0) before looping over the results again.

Kevin

-- 
Democracy is two wolves and a lamb voting on what to have for lunch. 
Liberty is a well-armed lamb contesting the vote.




Re: Blob data

2007-06-26 Thread Kevin Waterson
This one time, at band camp, Warren Young [EMAIL PROTECTED] wrote:

 Storing arbitrary data in a BLOB column is 
 tantamount to trying to turn the database into a file system.  If you 
 want a file system, use a file system.

What is a file system, if not a database?

Kevin




BUG: MySQL 5.1.19 with UNCOMPRESS on *VALID* data fails when generated by 3rd party zlib.

2007-06-12 Thread Kevin Burton

OK I think I've found a bug with MySQL's compression support. :-/

I'm using two Java zlib implementations.  One is jzlib 1.0.7 and the other
is java.util.zip.DeflaterOutputStream.  Both of these are referenced by the zlib
implementation as being compatible.

I can compress/uncompress locally WITHOUT a problem.

When I store the data in the DB the value is stored correctly in a blob and
I can compare the MD5 hashcode with my local array of bytes, and the hashcode
values are identical which for all practical purposes means they're the
same.

The only problem is that UNCOMPRESS won't work...

It returns null and I get:


mysql> SHOW WARNINGS;
+-------+------+---------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message                                                                                                             |
+-------+------+---------------------------------------------------------------------------------------------------------------------+
| Error | 1256 | Uncompressed data size too large; the maximum size is 1047552 (probably, length of uncompressed data was corrupted) |
+-------+------+---------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Sure enough:

mysql> SELECT UNCOMPRESSED_LENGTH(BODY) FROM FOO;
+---+
| UNCOMPRESSED_LENGTH(BODY) |
+---+
| 147577464 |
+---+
1 row in set (0.00 sec)

..

I've tested this on 5.1.19 and 4.1.21 with the same symptoms.

Anyone have any advice here?  Did you guys make any changes with the zlib
implementation you're using?

I'm willing to file this as a bug if necessary.
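A likely culprit for anyone hitting the same wall: MySQL's COMPRESS() does not store a bare zlib stream; it prepends the uncompressed length as a 4-byte little-endian integer, and UNCOMPRESS() reads those bytes back as the expected size. A plain zlib stream from another library therefore yields a nonsense length, exactly like the one above. A sketch of producing MySQL-compatible blobs from Python (used here just to show the framing):

```python
import struct
import zlib

def mysql_compress(data: bytes) -> bytes:
    """Mimic MySQL COMPRESS(): 4-byte little-endian length + zlib stream."""
    return struct.pack("<I", len(data)) + zlib.compress(data)

def mysql_uncompress(blob: bytes) -> bytes:
    """Mimic MySQL UNCOMPRESS(): strip the length header, inflate the rest."""
    (length,) = struct.unpack("<I", blob[:4])
    out = zlib.decompress(blob[4:])
    assert len(out) == length, "length header does not match payload"
    return out

print(mysql_uncompress(mysql_compress(b"hello world")))  # b'hello world'
```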

Kevin

--
Founder/CEO Tailrank.com
Location: San Francisco, CA
AIM/YIM: sfburtonator
Skype: burtonator
Blog: feedblog.org
Cell: 415-637-8078


Why doesn't MySQL support gzip encode/decode functions.

2007-06-12 Thread Kevin Burton

Why does MySQL support only zlib for COMPRESS/UNCOMPRESS, and not gzip?

For network applications zlib is a lot less compatible than gzip.

For example I could send gzip'd content directly from the database within a
larger gzip'd stream.
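The difference is only in framing: both formats wrap the same DEFLATE payload, but zlib (RFC 1950) uses a 2-byte header plus an Adler-32 trailer, while gzip (RFC 1952) uses a 10-byte header starting with the magic bytes 1f 8b plus a CRC-32 trailer. A quick illustration in Python:

```python
import gzip
import zlib

data = b'hello world ' * 20

z = zlib.compress(data)   # RFC 1950: 2-byte header + DEFLATE + Adler-32 trailer
g = gzip.compress(data)   # RFC 1952: 10-byte header (magic 1f 8b) + DEFLATE + CRC-32 trailer

assert z[0] == 0x78            # zlib CMF byte
assert g[:2] == b'\x1f\x8b'    # gzip magic
raw = zlib.decompress(z)       # round-trips to the original bytes
```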

Kevin



Re: MySQL Magazine - Issue 1 available NOW!!!!

2007-06-11 Thread Kevin Hunter

At 5:53p -0400 on 11 Jun 2007, Gordan Bobic wrote:
I don't understand this never-ending fascination with re-inventing  
a square wheel for an application for which the standard round type  
has already been kindly provided since year dot.


I imagine the reason for this never-ending fascination ... is along  
the lines of education.  To the uninitiated, the idea of binding  
variables seems weird and unintuitive.  What?!  I don't want them to  
be bound!  They're variables!  They're supposed to change.  Okay, so  
it's a slight exaggeration, but I think the point is still good.  A  
lot of folks pick up MySQL by fiddling around in their spare time.   
Unless someone (among my friends, it's usually me) enlightens them to  
better ways of doing things, and reasons for doing something in a  
more abstract, not-always-immediately-intuitive way, folks just don't  
know any better.  In that sense, this very discussion is /exactly/  
what the magazine article should be creating.


For those who don't know, binding is a process that does two things:

1. Lets the database pre-parse a query

	This can give enormous boosts in speed because a large amount of the  
time involved in getting information from the database is spent in  
parsing and planning a query.  If you're planning to do the same  
query many times over with only one or two different parameters, why  
not only parse and plan it once, and then aggregate the cost over / 
all/ the executions?


2. as well as let the database do any escaping necessary.

	The database designers ostensibly know best what characters need to  
be escaped, so let them do it.  Besides, no sense in having multiple  
definitions around, or reinventing a wheel, or wasting your time  
doing something mundane.


In lieu of an example, I'll just point to the PHP site:

http://us.php.net/manual/en/function.mysqli-prepare.php
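The same idea in Python's DB-API, shown here with the built-in sqlite3 driver purely for illustration (a MySQL driver works the same way with its own placeholder style):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER, name TEXT)')

# The statement is parsed/planned once; only the bound values vary per execution.
stmt = 'INSERT INTO users (id, name) VALUES (?, ?)'
conn.executemany(stmt, [(1, 'Alice'), (2, "O'Brien")])  # escaping handled by the driver

rows = conn.execute('SELECT name FROM users ORDER BY id').fetchall()
```

Note that the embedded quote in "O'Brien" needs no manual escaping at all; that is point 2 above in action.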

Kevin




Re: Table compression with write (append) support

2007-05-28 Thread Kevin Hunter
At 12:31a -0400 on 28 May 2007, Dan Nelson wrote:
 In the last episode (May 27), Yves Goergen said:
 I'm thinking about using a MySQL table to store an Apache access log
 and do statistics on it. Currently all access log files are stored as
 files and compressed by day. Older log files are compressed by month,
 with bzip2. This gives a very good compression ratio, since there's a
 lot of repetition in those files. If I store all that in a regular
 table, it would be several gigabytes large. So I'm looking for a way
 to compress the database table but still be able to append new rows.
 As the nature of a log file, it is not required to alter previous
 data. It could only be useful to delete older rows. Do you know
 something for that?
 
 You want the ARCHIVE storage engine.
 
 http://dev.mysql.com/doc/refman/5.0/en/archive-storage-engine.html

Huh.  This is the first I've heard of the archive engine.  Cool!

However, I'm curious how the compression offered by OPTIMIZE TABLE and
the zlib library would compare to denormalization of the log schema.  In
particular, I imagine a lot of the HTTP requests would be the same, so
you could create a table to store the requested URLs, and then have a
second table with the timestamp and foreign key relationship into the
first.  Depending on how wide the original rows are and how often
they're requested, I imagine you could get quite a savings.  Anything
else that's repeated as well?  IP's?  Return codes?

Would be curious about the results if you were able to implement both.
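As a rough sketch of the denormalization idea, using SQLite in Python to stand in for MySQL (the schema names are made up; SQLite's INSERT OR IGNORE corresponds to MySQL's INSERT IGNORE):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript('''
CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT UNIQUE);
CREATE TABLE hits (ts INTEGER, url_id INTEGER REFERENCES urls(id));
''')

def log_hit(ts, url):
    # Store each distinct URL once; rows in hits carry only a small key.
    db.execute('INSERT OR IGNORE INTO urls (url) VALUES (?)', (url,))
    (url_id,) = db.execute('SELECT id FROM urls WHERE url = ?', (url,)).fetchone()
    db.execute('INSERT INTO hits VALUES (?, ?)', (ts, url_id))

for t in range(1000):
    log_hit(t, '/index.html')   # 1000 hits, but only one dictionary row

n_urls = db.execute('SELECT COUNT(*) FROM urls').fetchone()[0]
n_hits = db.execute('SELECT COUNT(*) FROM hits').fetchone()[0]
```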

Kevin




Re: Table compression with write (append) support

2007-05-28 Thread Kevin Hunter
At 5:45a -0400 on 28 May 2007, Yves Goergen wrote:
 On 28.05.2007 09:06 CE(S)T, Kevin Hunter wrote:

 In particular, I imagine a lot of the HTTP requests would be the
 same, so you could create a table to store the requested URLs, and
 then have a second table with the timestamp and foreign key
 relationship into the first.
 
 Interesting idea. Inserting would be more work to find the already
 present dictionary rows.

My guess is not /that/ much work, since you should only have a known and
relatively small set in this dictionary, it'd basically be cached the
whole time.  But, that's my guess.  Haven't tried it.  Practice and
theory . . .

 Also, URLs sometimes contain things like
 session IDs. They're probably not of interest for my use but it's not
 always easy to detect them for removal.

Really?  Why wouldn't it be easy to detect them?  You presumably know
what variable you're looking for in the URL string, and applying a
simple regex search-and-replace . . . ?

 IP addresses (IPv4) and especially return codes (which can be mapped to
 a 1-byte value) are probably not worth the reference. Data size values
 should be too distributed for this.

Well, presumably, you'd normalize that part of the table.  That is,
rather than include multiple foreign keys in your data rows, you'd
create a cartesian product of the the return codes with the dictionary
table.  You'd have a slightly more bloated dictionary, but depending on
the number of requests the site(s) get(s), the aggregation would more
than make up for it.

 I could also parse user agent
 strings for easier evaluation, but this takes me the possibility to add
 support for newer browsers at a later time. (Well, I could update the
 database from the original access log files when I've updated the UA
 parser.)
 

Same thought.  If you've only a known set of UA strings, you could
normalize them with the dictionary table as well.

 How large is a row reference? 4 bytes?

I don't know, I'm fairly new to MySQL.  I suppose it'd also matter on
the type of index.  Anyone more knowledgeable wanna pipe up?

Well.  Whatever method works for your needs, cool.  I'm going to check
out both MYISAMPACK and ARCHIVE.  I was unaware of those.  Thanks list!

Kevin




Re: IS NULL Question

2007-05-01 Thread Kevin Hunter

At 9:06p -0400 on 30 Apr 2007 John Kebbel wrote:

I'm having problems understanding NULL. I grasp what a NULL value is,


A NULL value is rather an oxymoron.  It'd be more accurate to say  
that NULL means absence of a value.



but I can't get NULL to perform correctly. For instance, if I do a
Select statement, I see that I have columns with a NULL value.

select first, last, suffix  from persons LIMIT 5;
+---+--++
| first | last | suffix |
+---+--++
| Benjamin  | Page | NULL   |
| Jonathan  | Watson   | NULL   |
| Jose  | Thorson  | NULL   |
| Alejandro | Nickels  | NULL   |
| Griselda  | Richards | NULL   |
+---+--++
5 rows in set (0.01 sec)

Logically, it seems that a Select statement should find these five  
plus

any other NULL values in the suffix column. However, such a select
statment returns an empty set.

mysql> select first, last, suffix from persons where suffix IS NULL;
Empty set (0.00 sec)

Does anyone see what I'm doing wrong? (Thanks in advance for any  
help.)


Caveat emptor: I haven't tested this in about a year.

Are you perchance using a table type of MyISAM?  I seem to recall  
that MyISAM has a hard time actually representing NULL internally.   
[ ... Thinks for a minute ... ]  I remember something about spaces,  
like, I think I did


INSERT INTO myTable VALUES ( val1, val2, NULL );

and was only able to get the tuples back when I did

SELECT * FROM myTable WHERE col3 = ' ';  -- a space character

If this is the case, you might consider using a different table type,  
such as InnoDB.
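Another thing worth ruling out, not raised above: the suffix column may hold the literal four-character string 'NULL' (a common artifact of text imports), which displays exactly like a real NULL in query output. A quick illustration, using SQLite via Python; the behavior is the same in MySQL:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE persons (first TEXT, suffix TEXT)')
db.execute("INSERT INTO persons VALUES ('Benjamin', 'NULL')")  # the string 'NULL'
db.execute("INSERT INTO persons VALUES ('Jonathan', NULL)")    # a real NULL

# IS NULL matches only the real NULL; the string needs an equality test.
real_nulls = db.execute('SELECT first FROM persons WHERE suffix IS NULL').fetchall()
fake_nulls = db.execute("SELECT first FROM persons WHERE suffix = 'NULL'").fetchall()
```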


HTH,

Kevin




Re: IS NULL Question

2007-05-01 Thread Kevin Hunter

Caveat emptor: I haven't tested this in about a year.
Are you perchance using a table type of MyISAM?  I seem to recall   
that MyISAM has a hard time actually representing NULL  
internally.   [ ... Thinks for a minute ... ]  I remember  
something about spaces,  like, I think I did

INSERT INTO myTable VALUES ( val1, val2, NULL );
and was only able to get the tuples back when I did
SELECT * FROM myTable WHERE col3 = ' ';  -- a space character
If this is the case, you might consider using a different table  
type,  such as InnoDB.

HTH,
Kevin


MyISAM has no problem representing NULL.


My mistake.  Obviously old data.

Kevin




Re: MySQL e-zine

2007-04-28 Thread Kevin Waterson
This one time, at band camp, Peter M. Groen [EMAIL PROTECTED] wrote:

  Any thoughts?  Any desires to contribute?
 
 Hi Keith,
 
 Great Idea! 
 I volunteer for the developer side (Mysql and C++)

I have written for php|mag previously, I'll put my hand up for PHP/MySQL

Kevin

-- 
Democracy is two wolves and a lamb voting on what to have for lunch. 
Liberty is a well-armed lamb contesting the vote.




Re: FW: MySQL patches from Google

2007-04-26 Thread Kevin Spencer

On 4/26/07, Mike OK [EMAIL PROTECTED] wrote:

I read the Google blog post regarding these patches.  They admit using
MySQL for some internal data storage needs but not in the general search
system.


Still, that leaves many other applications.  Groups, gmail, reader,
news et al...

--
Kevin.




Re: advice for blob tables?

2007-04-24 Thread Kevin Waterson
This one time, at band camp, [EMAIL PROTECTED] wrote:

 
 I don't feel the implementation direction this article takes is good.  It
 uses single row binary storage, which anyone who has had to deal with
large files knows is a definite issue.

According to your method of storing binary data in BLOB fields, rather than
LONGBLOBs you would have over 11,000 rows for a single 700MB iso.

I am not sure how MySQL handles blobs internally but I would assume it is
like most other databases and adds some sort of delimiter to the data and
has to seek to it to distinguish columns. This means a 700Mb file stored
in 64k chunks would need over 11,000 file seeks per row read.

LONGBLOBs are the way forward here I feel.
A single table containing the relevant metadata along with the binary data.
There is no need for any normalization as this is a one-to-one relationship.

having over 11,000 rows for a single file I dunno, I would like to see
some benchmarks on this.

Kevin






Re: advice for blob tables?

2007-04-20 Thread Kevin Waterson
This one time, at band camp, Michael Higgins [EMAIL PROTECTED] wrote:


 So before I just jump in and go like I usually do, does anyone have any
 quick advice, things to consider, links, must-RTFMs or the like to help a
 newbie out? 


This tutorial deals with images and BLOBs. It should get you on the 
right path.

http://www.phpro.org/tutorials/Storing-Images-in-MySQL-with-PHP.html

Kevin





referal system

2007-04-11 Thread Kevin Waterson
Hi gang, sorta newish to MySQL so please bear with me...

I am putting together a system of referals for my photography company.
The referral system works like many others: a business refers customers
to me, and receives a referral fee when the customer pays a
deposit.

I can create the customers, referals, referers, referer_categories, and jobs
tables, no fuss.
How the system will (hopefully) work is: a customer calls to book a job, and
will have a referral number. This number will be a referer_id in the db so I
know who has referred them. Easy so far.

So I enter the customer details in the customers table, which includes the 
referal_id.
Should I then have a trigger to INSERT the referer_id into the referals table?

The referals table contains
referal_id INT
referer_id INT
referal_date

I then have a table of jobs for when they book the job. venue, date etc.
The payments table also will need to have some sort of mechanism so when a
deposit is taken from the customer, the referer is paid their commission.

Am I going in the right direction here?
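A trigger along those lines could look like the following sketch, written in SQLite syntax via Python for illustration (table and column names come from the post above, the trigger name is made up; MySQL's trigger syntax has no WHEN clause, so the condition would move into an IF inside the trigger body):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript('''
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT, referer_id INTEGER);
CREATE TABLE referals  (referal_id INTEGER PRIMARY KEY, referer_id INTEGER, referal_date TEXT);

-- Record a referral automatically whenever a referred customer is entered.
CREATE TRIGGER record_referal AFTER INSERT ON customers
WHEN NEW.referer_id IS NOT NULL
BEGIN
  INSERT INTO referals (referer_id, referal_date)
  VALUES (NEW.referer_id, date('now'));
END;
''')

db.execute("INSERT INTO customers (name, referer_id) VALUES ('Jane', 42)")
n = db.execute('SELECT COUNT(*) FROM referals WHERE referer_id = 42').fetchone()[0]
```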

Kind regards
Kevin







Potential problems running MyISAM on ram drives?

2007-04-10 Thread Kevin Burton

Hey.

We have the need to have some tables stored in memory for performance
reasons.

We were thinking about just using MEMORY tables but this is non ideal since
it uses a fixed row size.

Using MyISAM would be much better since it supports variable length rows.

Backups would be handled by just using mysqlhotcopy and snapshotting the
tables to disk either on the master or the slave.

Has anyone done this and have they had any problems?

Kevin



Query works; query in function doesn't

2007-04-01 Thread Kevin Chen
I have a puzzling situation where a query works, but only outside of a
function. If I try to abstract away some of the complexity by placing the
subquery inside a function, the db seems to get stuck in an infinite query
and the entire system becomes unresponsive (from the MySQL console I can
ctrl-c to break out, but from the GUI query browser there is no way to stop
it other than hard reset).

Anyway, the db is laid out so that a class of elements is kept in one table,
while another table keeps instances of the elements. The 'entry' number from
the class table is used as a foreign key in the instance table. Makes sense
so far, right? The tricky part is that the 'entry' number is just an
internal value used as a unique key, so the users of the instance table have
to keep looking up the 'name' of the element in order to find out what the
'entry' number is. The idea is to make life easier for them by translating
the 'name' string into the 'entry' number via a function.

So, here's a raw query string, which works ok:

SELECT * FROM tbl1 WHERE id=(SELECT entry FROM tbl2 WHERE name='someguy');
-- returns the correct rows from tbl1

If I take the subquery and put it in a function, which takes the name string
as an argument, then make a simple SELECT query to test the function by
itself, it returns the correct value, like so:

create function getNumber(cName varchar(255)) returns int begin
 return (select entry from tbl2 where name=cName);
end;

SELECT getNumber('someguy'); -- returns the correct value

Now, when I try to use the function in the full query, that's when the
problem state occurs:

SELECT * FROM tbl1 WHERE id=getNumber('someguy'); -- disaster

Can anybody tell me what's wrong?






Re: about limit

2007-03-20 Thread Kevin Hunter

On 20 Mar 2007 at 12:50p -0400, nikos wrote:
Does any body knows how to select not the LIMIT 20 records but the  
rest of them?


MySQL version is 4.1.21-standard and I cant make a VIEW.


In Venn-speak, you are asking for Circle A less(, minus, or subtract)  
Circle B.  In SQL, this would translate to MINUS.  However, last I  
checked, MySQL does not support (or has not yet implemented) the  
MINUS operator.  This Google finding might prove useful to you in  
this endeavor (scroll to the last entry on the page):


http://www.terminally-incoherent.com/blog/category/mysql/
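Another option that avoids MINUS entirely is MySQL's two-argument LIMIT: `LIMIT 20, 18446744073709551615` skips the first 20 rows and returns everything after them (the huge second argument is the documented idiom for "through the end of the result set"). Illustrated below with SQLite's equivalent LIMIT/OFFSET form in Python:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE t (id INTEGER PRIMARY KEY)')
db.executemany('INSERT INTO t VALUES (?)', [(i,) for i in range(1, 31)])

# SQLite spelling; in MySQL: SELECT id FROM t ORDER BY id LIMIT 20, 18446744073709551615
rest = db.execute('SELECT id FROM t ORDER BY id LIMIT -1 OFFSET 20').fetchall()
```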

Kevin




Re: Do sourced queries automatically commit?

2007-03-19 Thread Kevin Chen

João Cândido de Souza Neto [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Could you try to put the start transaction into your .sql file.

Ok, I tried that -- same result. I even tried including the rollback in the
.sql file as well, but the changes are still committed afterwards. The only
queries in the file are simple INSERT statements -- there are no statements
that would force a commit (as per 13.4.3 of the manual).






Do sourced queries automatically commit?

2007-03-18 Thread Kevin Chen
Greetings all, I am trying to track down the cause of some issues I'm
having. I am doing this from the mysql prompt:

start transaction;
source c:/queries.sql;
rollback;

When I examine the db though, it appears that the changes from the sourced
queries have been committed.







Re: what journal options should I use on linux?

2007-03-09 Thread Kevin Burton

Just use XFS. It's a solved problem.

Kevin

On 3/8/07, Christopher A. Kantarjiev [EMAIL PROTECTED] wrote:


I'm setting up mysql on linux for the first time (have been using OpenBSD
and
NetBSD with UFS until now). The default file system is ext3fs, and I don't
mind
that, but it seems really silly to use a journaled file system for the
database
data - doubling my writes.

In particular, I have a couple of use cases where I spend a week or so
creating
a 17GB data (table) file and its 15GB index file, and then do sparse
queries out
of it. I need as much write speed as I can get. I certainly don't want to
have
every data block written twice, once to the journal and once to the file,
along
with the extra seeks.

What do people with this sort of large problem use on Linux?

Thanks,
chris









Re: binary into blob

2007-03-07 Thread Kevin Hunter

On 07 Mar 2007 at 3:57p -0500, Alexander Lind wrote:

imagine a large system where pdf-files are accessed by clients a lot.
say 1 pdf file is access per second on average.

also say that your database is on a machine separate from the 
webserver(s) (as is common).


do you really think its a good idea to pump the pdf data from the db 
each time it needs to be accessed?


it may seem easier to do this as you then would not have to replicate 
the physical files across the webserver machines, but i would not say 
that the slight increase of ease is worth the added load to the db and 
the network.
following your reasoning, you'd also store all images in the db so that 
you don't have to replicate these either?


what is unsafe about having a pdf file on a webserver (not in a place 
where the httpd can access it of course)?


alec

[EMAIL PROTECTED] wrote:

I have to disagree with most, I would store the entire file in the
database, metadata and all.  Better security, if you have a backend
database, it's much harder to get the data than pdf's sitting in a
directory on the webserver.  Plus if you ever want to scale to a
multi-webserver environment, the db storage works great.


On Wed, 7 Mar 2007, Ed wrote:

 

On Wednesday 07 March 2007 21:26, Alexander Lind wrote:
   

I would put the pdf as a regular file on the hd, and store the path to
it in the db.
Meta data could be things like the size of the pdf, author, owner,
number of pages etc.

Storing binary data from pdf:s or images or any other common binary
format is generally not a good idea.

Alec
  
Thank you all for your help.  I get the gist of things... don't store 
the

binnary, store the path to it and details of it.

Thank you all for your quick response.
 -Ed


Grrr.  All you lazy top-posters!  ;)  It seems to me that a case can be 
legitimately made for both methods of handling BLOBs.  On the one hand, 
where speed and/or efficiency (on many different levels) are the top 
priorities, it'd be good to keep the DB as trim as possible.  On the 
other hand, if security is the top priority, you'd better keep all your 
data encrypted and in the DB.  And as I wrote this, colbey explains this 
well in his email of 4:06p.


It seems to me that the point is to use the correct tool for the job at 
hand.  Computers have many tools, so no sense in turning everything into 
a proverbial nail head.


Kevin




Ability to escape varbinary data when sent to the console?

2007-03-06 Thread Kevin Burton

We need to store binary data from time to time in mysql.  To date I've just
base64 encoded the data to avoid having it corrupt the console on  SELECT *

Is there any way to have the mysql command line client automatically do this
for me?  Is there any work around?

base64 adds about 30% data bloat, which I'd like to avoid.

Kevin



Re: Mysql on linux - choosing the right filesystem

2007-02-25 Thread Kevin Burton

On 2/24/07, Jean-Sebastien Pilon [EMAIL PROTECTED] wrote:


Hello,

I would like to get some of your input on file systems to use with
mysql. Should I use a journaling filesystem ? Should I choose a
different one based on what I store (log files, myisam dbs, innodb
datafiles, etc ) ? Is there any file system tweaks you recommend ?

TIA








Real BITs which use 1 bit in 5.1?

2007-02-17 Thread Kevin Burton

A little birdie:

http://forge.mysql.com/wiki/Top10SQLPerformanceTips

notes..

In 5.1 BOOL/BIT NOT NULL type is 1 bit, in previous versions it's 1 byte.

Is this true?

I didn't see a note in the manual..

I assume it would be here

http://dev.mysql.com/doc/refman/5.1/en/storage-requirements.html





Script to perform unattended slave sync with master...

2007-02-16 Thread Kevin Burton

Has anyone built a script to add a new slave into a MySQL replication
setup which can operate (for the most part) unattended?

The set of operations is pretty straight forward but right now it's
mostly a manual step which ends up taking a LONG time.

The script would need to:

* connect to a master or a slave
* FLUSH TABLES WITH READ LOCK
* record master replication position
* take snapshot of myisam via mysqlhotcopy or simply CP the files into
a temp directory
* UNLOCK TABLES
* SCP the files to the target slave
* update replication positions on this box
* setup correct permissions
* startup replication
* assert that the box is functioning correctly
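The sequence above could be sketched as a plan builder before wiring in real command execution (the hostnames and the '...' arguments are hypothetical placeholders, not working commands):

```python
def clone_slave_plan(master, slave, datadir='/var/lib/mysql'):
    """Ordered command plan for seeding a new slave (sketch only;
    the '...' arguments are deliberately elided placeholders)."""
    return [
        f"mysql -h {master} -e 'FLUSH TABLES WITH READ LOCK'",
        f"mysql -h {master} -e 'SHOW MASTER STATUS'",   # record binlog file/position
        "mysqlhotcopy ...",                             # or cp the MyISAM files aside
        f"mysql -h {master} -e 'UNLOCK TABLES'",
        f"scp -r {datadir} {slave}:{datadir}",          # unattended via ssh-agent
        f"mysql -h {slave} -e 'CHANGE MASTER TO ...'",  # replay recorded position
        f"mysql -h {slave} -e 'START SLAVE'",
    ]

plan = clone_slave_plan('db-master', 'db-new-slave')
```

Keeping the plan as data makes it easy to log, dry-run, or resume a failed step before anything destructive happens.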

The transfer could be done unattended with SSH and ssh-agent.  The CPU
would be the bottleneck on gigabit ethernet but since it's unattended
it shouldn't matter as much.

One could even setup rsync with authentication if crypto was really
the bottleneck.

Thoughts?

I don't want to have to write anything because I'm amazingly lazy ;)

Kevin





Merge tables and ON DUPLICATE KEY UPDATE

2007-02-12 Thread Kevin Burton

I want to use a merge table so that I can direct all new INSERTs to a
new merge table and migrate old data off the system by having a
continually sliding window of underlying MyISAM tables.

The problem is that if I do INSERT ... ON DUPLICATE KEY UPDATE and
that value isn't in the leading table where all INSERTs go a *new* row
will be created.

Is there any way around this problem?

Kevin





Re: Merge tables and ON DUPLICATE KEY UPDATE

2007-02-12 Thread Kevin Burton

I thought about it. I was actually going to use merge tables AND
partitions to split the underlying MyISAM tables across two disks and
then partition on top.

It's POSSIBLE to use partitioning the way I want it but I'm going to
have to grok it for a bit more.

Thanks though.

Kevin

On 2/12/07, Jay Pipes [EMAIL PROTECTED] wrote:

Kevin Burton wrote:
 I want to use a merge table so that I can direct all new INSERTs to a
 new merge table and migrate old data off the system by having a
 continually sliding window of underlying MyISAM tables.

 The problem is that if I do INSERT ... ON DUPLICATE KEY UPDATE and
 that value isn't in the leading table where all INSERTs go a *new* row
 will be created.

 Is there any way around this problem?

What about using partitioning in MySQl 5.1+?  Would this work?

http://dev.mysql.com/doc/refman/5.1/en/partitioning.html

Cheers,

Jay








BUG? SHOW SLAVE STATUS blocks on disk full?

2007-02-12 Thread Kevin Burton

Hm. MySQL 4.1.21 seems to have a 'feature' where SHOW SLAVE
STATUS blocks when the disk is full.  Thoughts?

Kevin





A 'Simple' Protocol for Manual MySQL Slave Promotion to Master

2007-02-07 Thread Kevin Burton

Hey.

I should have posted this here earlier, but it just dawned on me that
you guys could have some good feedback:

We've been working on the design of a protocol which would enable
promotion of a slave to a master in a MySQL replication cluster.

Right now, if a MySQL master fails, most people just deal with a
temporary outage. They bring the box back up, run REPAIR TABLEs if
necessary, and generally take a few hours of downtime.

Google, Flickr, and Friendster have protocols in place for handling
master failure but for the most part these are undocumented.

One solution would be to use a system like DRDB to get a synchronous
copy of the data into a backup DB. This would work of course but would
require more hardware and a custom kernel.

You could also use a second master in multi-master replication but
this would require more hardware as well and complicates matters now
that you're using multi-master replication which has a few technical
issues.

A simpler approach is to just take a slave and promote it to the
master. If this were possible you'd be able to start writing to the
new master almost immediately after the old master fails. You'd lose a
few transactions but if you have any critical code that depends on
data insertion you can have it assert that it reached at least one
slave before moving forward.

.

http://www.feedblog.org/2007/02/a_simple_protoc_1.html





Monitoring for corrupt tables and transiently failing master INSERTs

2007-02-05 Thread Kevin Burton

We're trying to write a monitoring process for our master so that if a
table is corrupt it will raise flags which can then trigger
operations.

We can do the basic stuff such as asserting that the port is open and
that we can ping the machine but I want to test if any
INSERT/UPDATE/DELETEs are failing on the master due to table
corruption.

For example, if you have a functioning DB and then deliberately
corrupt the tables (for testing of course) I'd want SOME way to detect
that INSERTs were failing on this table.

There's no way to currently detect this I believe.  SHOW STATUS
doesn't help nor does SHOW TABLE STATUS.

Any pointers?
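One option not mentioned here: run CHECK TABLE periodically and flag any table whose final status row is not OK; its result set has Table, Op, Msg_type, and Msg_text columns. A sketch of the row-filtering part (the sample rows below are made up to mimic one healthy and one crashed table):

```python
def tables_needing_repair(check_rows):
    """check_rows: (Table, Op, Msg_type, Msg_text) tuples as returned
    by CHECK TABLE; a healthy table ends with a 'status' row of 'OK'."""
    return sorted({tbl for (tbl, op, msg_type, msg_text) in check_rows
                   if msg_type == 'status' and msg_text != 'OK'})

# Hypothetical rows mimicking CHECK TABLE output:
rows = [('db.t1', 'check', 'status', 'OK'),
        ('db.t2', 'check', 'error', "Table 'db.t2' is marked as crashed"),
        ('db.t2', 'check', 'status', 'Corrupt')]
bad = tables_needing_repair(rows)
```

A cron job could run this against the master and raise the flag before application INSERTs start failing silently.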





Re: MySQL Administrator problem

2007-01-20 Thread Kevin Spencer

On 1/19/07, Daniel Culver [EMAIL PROTECTED] wrote:

Are you working on a Mac? If so, logging in as root is not good
enough. You must have set up and log into the root user account of
your Mac computer or server.


The OP is talking about managing MySQL accounts with MySQL
Administrator.  MySQL Administrator does not in any way allow you to
login to a host with a shell account, root or otherwise.

--
Kevin.




Re: What is your favorite GUI tool for creating and querying tables in MySQL?

2006-12-21 Thread Kevin Spencer

On 12/21/06, Behrang Saeedzadeh [EMAIL PROTECTED] wrote:

Hi,

What is your favorite GUI tool for working with MySQL. EMS SQL Manager
is feature-rich but sometimes buggy and also very expensive. Navicat
is not very handy: it forces you to switch between mouse and keyboard
repeatedly.

What is your favorite tool?


I've always found the MySQL Query Browser to be rather handy for
creating and querying tables.

http://dev.mysql.com/downloads/gui-tools/5.0.html

--
Kevin.




Re: definition of Created_tmp_files in show status

2006-12-14 Thread Kevin Fries
Thanks for that.  Is replication then the only reason for temporary files?
And if so, should I be worried that thousands of files have been created for
this purpose?  Are you saying that I can tune the size (and thus the number)
of the temp files by adjusting the temporary table size?
My Created_tmp_disk_tables is considerably lower than my Created_tmp_files
value, so it seems Created_tmp_files cannot just be counting the
Created_tmp_disk_tables.

Kevin

----- Original Message -----
From: Visolve DB Team [EMAIL PROTECTED]
To: Kevin Fries [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Wednesday, December 13, 2006 9:26:52 PM
Subject: Re: definition of Created_tmp_files in show status


Hi,

A replication slave needs some of its temporary files to survive a machine 
restart so that it can replicate temporary tables or LOAD DATA INFILE 
operations. If files in the temporary file directory are lost when the 
server restarts, replication fails. MySQL creates temporary files as hidden 
files.

The tmp_table_size variable determines the maximum size of an in-memory
temporary table.  If a table grows beyond it, the server automatically
converts it to a disk-based table.

The server variables,
Created_tmp_tables -The number of in-memory temporary tables created 
automatically by the server while executing statements.
Created_tmp_disk_tables  -The number of temporary tables on disk created 
automatically by the server while executing statements.
Created_tmp_files - How many temporary files mysqld has created.
can be used to examine temporary file and table activity.
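These counters can be inspected at any time; a minimal sketch (GLOBAL, available on 5.0+, distinguishes server-wide counters from per-session values):

```sql
-- Server-wide counters since startup:
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
```

Comparing Created_tmp_files against Created_tmp_disk_tables over time shows how much of the file activity is due to on-disk temporary tables versus other temporary files.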

Thanks
ViSolve DB Team

----- Original Message -----
From: Kevin Fries [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Thursday, December 14, 2006 12:57 AM
Subject: definition of Created_tmp_files in show status


According to the mysql reference manual, the definition of this field is:
"How many temporary files mysqld has created."

Can someone elaborate on this?  What causes mysql to create a temporary 
file? I see something indicating it may be associated with replication.  In 
our environment (which has replication) we see counts for this variable in 
the tens of thousands.

Thanks in advance,
Kevin






 


definition of Created_tmp_files in show status

2006-12-13 Thread Kevin Fries
According to the mysql reference manual, the definition of this field is:
"How many temporary files mysqld has created."

Can someone elaborate on this?  What causes mysql to create a temporary file? I 
see something indicating it may be associated with replication.  In our 
environment (which has replication) we see counts for this variable in the tens 
of thousands.

Thanks in advance,
Kevin


 


Re: Issues with MySQL x86_64 crashing

2006-12-08 Thread Kevin Old

On 12/7/06, David Sparks [EMAIL PROTECTED] wrote:

Kevin Old wrote:
 Hello everyone,

 We have a 4 CPU master server running the 5.0.27 RPM x86_64 version of
 MySQL with a mix of InnoDB and MyISAM tables.

 We normally run at 1500 queries/per second and lately, the server will
 all of a sudden lock up and we are forced to restart mysql.

That isn't a crash.

Can you still connect to the db?  If so, the output of 'show full
processlist\G', etc. would be useful.



Sorry, I shouldn't have worded it as a crash.  Basically what happens
is that queries start stacking up and if we don't start killing
queries, we have to restart mysql before it crashes/restarts itself.

We have spotted a couple of queries that we think might be causing the
problems, but even after adjusting the queries and the tables
(converting from InnoDB to MyISAM) we're still seeing the spike in
queries (not due to actual load).

Here are links to our my.cnf and the output of show innodb status and of
show variables:

http://kold.homelinux.com/bj/my.cnf
http://kold.homelinux.com/bj/innodbstatus.txt
http://kold.homelinux.com/bj/mysqlstats.txt

Maybe some of our settings could be causing the problem.

Thanks for any help,
Kevin
--
Kevin Old
[EMAIL PROTECTED]




Re: Issues with MySQL x86_64 crashing

2006-12-08 Thread Kevin Old

We have 16GB total, but are only using 8GB (according to mysql and our dbadmin).

Kevin

On 12/8/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hi,

How much memory do you have on your system ? (the current setting in your
my.cnf could eat *a lot* of memory)

Thanks,
  Jocelyn Fournier
  www.mesdiscussions.net

 On 12/7/06, David Sparks [EMAIL PROTECTED] wrote:
 Kevin Old wrote:
  Hello everyone,
 
  We have a 4 CPU master server running the 5.0.27 RPM x86_64 version of
  MySQL with a mix of InnoDB and MyISAM tables.
 
  We normally run at 1500 queries/per second and lately, the server will
  all of a sudden lock up and we are forced to restart mysql.

 That isn't a crash.

 Can you still connect to the db?  If so output of 'show full
 processlist\G', etc would be useful.


 Sorry, I shouldn't have worded it as a crash.  Basically what happens
 is that queries start stacking up and if we don't start killing
 queries, we have to restart mysql before it crashes/restarts itself.

 We have spotted a couple of queries that we think might be causing the
 problems, but even after adjusting the queries and the tables
 (converting from InnoDB to MyISAM) we're still seeing the spike in
 queries (not due to actual load).

 Here are links to our my.cnf and the output of show innodb status and of
 show variables:

 http://kold.homelinux.com/bj/my.cnf
 http://kold.homelinux.com/bj/innodbstatus.txt
 http://kold.homelinux.com/bj/mysqlstats.txt

 Maybe some of our settings could be causing the problem.

 Thanks for any help,
 Kevin
 --
 Kevin Old
 [EMAIL PROTECTED]








--
Kevin Old
[EMAIL PROTECTED]




Re: Issues with MySQL x86_64 crashing

2006-12-08 Thread Kevin Old

On 12/8/06, Philip Mather [EMAIL PROTECTED] wrote:

So something like 15G, that's not too bad. I'd run mtop as someone suggested
and see if some query is hammering it; maybe some other process on the machine
is hogging resources or going IO-bound?


Thanks.  We are watching the queries.  The pattern we're seeing now is
that any large query that takes more than a few seconds to execute causes
incoming queries to stack up and not execute, which drives the mysql
load higher.  We've seen a few times where mysql recovered after
a large query caused other queries to stack up.

Keep in mind that we've been running some of these queries that are
now having problems for over a year.  We were running on the same
hardware with the 386 version of mysql and performance was awesome
only using 2GB RAM (the max mysql would allow us to use).  Only after
the switch to the x86_64 version are we seeing these problems.

Thanks for your help,
Kevin
--
Kevin Old
[EMAIL PROTECTED]




Issues with MySQL x86_64 crashing

2006-12-07 Thread Kevin Old

Hello everyone,

We have a 4 CPU master server running the 5.0.27 RPM x86_64 version of
MySQL with a mix of InnoDB and MyISAM tables.

We normally run at 1500 queries/per second and lately, the server will
all of a sudden lock up and we are forced to restart mysql.

This is not related to higher load, as it frequently happens during
our slowest load period (early AM).  This is not related to any
significant code changes.

We have recently converted about 20 of our largest and most frequently
used tables from MyISAM to InnoDB.  The size of our InnoDB tables
collectively is about 95GB.

I can provide my configuration if it is necessary, but was just
wondering if someone else might have experienced this and could
possibly point me in the right direction as to what might be causing
this.

Thanks,
Kevin
--
Kevin Old
[EMAIL PROTECTED]




Re: Re: Issues with MySQL x86_64 crashing

2006-12-07 Thread Kevin Old

No backups (like innodb-backup) running during these times.

We have 16GB of RAM and are currently using about 8GB of it.

We think we might have narrowed it down to a few hellish queries that
are hitting a few tables that we recently converted to InnoDB from
MyISAM.  We're gonna convert them back to MyISAM tonight and see if
that fixes it.

Thanks for your help,
Kevin

On 12/7/06, Michael Bacarella [EMAIL PROTECTED] wrote:

RAM too, how are you on RAM?

Obviously 100GB+ probably isn't going to fit in cache, but the usage pattern
during slower periods might be causing killer thrashing.

On Thu, Dec 07, 2006 at 12:43:17PM -0600, Dan Buettner wrote:
 Another thought is - do you have backups running during this time?
 Something that might be attempting to backup live the InnoDB files?
 We had similar problems with MySQL and backup software a while ago,
 though we used all-MyISAM.

 Dan


 On 12/7/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Hi,
 
 Do you use MySQL with a NPTL or a linuxthread glibc configuration ?
 
 Regards,
   Jocelyn Fournier
   www.mesdiscussions.net
 
  Hello everyone,
 
  We have a 4 CPU master server running the 5.0.27 RPM x86_64 version of
  MySQL with a mix of InnoDB and MyISAM tables.
 
  We normally run at 1500 queries/per second and lately, the server will
  all of a sudden lock up and we are forced to restart mysql.
 
  This is not related to higher load, as it frequently happens during
  our slowest load period (early AM).  This is not related to any
  significant code changes.
 
  We have recently converted about 20 of our largest and most frequently
  used tables from MyISAM to InnoDB.  The size of our InnoDB tables
  collectively is about 95GB.
 
  I can provide my configuration if it is necessary, but was just
  wondering if someone else might have experienced this and could
  possibly point me in the right direction as to what might be causing
  this.
 
  Thanks,
  Kevin
  --
  Kevin Old
  [EMAIL PROTECTED]
 
 
 
 
 
 
 


--
Michael Bacarella [EMAIL PROTECTED]

1-646-641-8662 (cell)

545 Eighth Avenue * Suite 401
New York, NY 10018

http://michael.bacarella.com/
http://netgraft.com/







--
Kevin Old
[EMAIL PROTECTED]




ETA for 5.1.13 ?

2006-11-19 Thread Kevin Burton

What's the ETA for 5.1.13? There are a few critical bugs with NDB that are
fixed in this rev that I'd like to play with.

I'm hoping it's right around the corner :)

Kevin

--
Founder/CEO Tailrank.com
Location: San Francisco, CA
AIM/YIM: sfburtonator
Skype: burtonator
Blog: feedblog.org
Cell: 415-637-8078


Advice on multilingual databases?

2006-10-10 Thread Zembower, Kevin
I'd like some advice on setting up databases that contain entries for
the same item in more than one language. For instance, here's what I
currently do for a table that contains the same topics translated into
English and Arabic:
CREATE TABLE `TOPIC` (
  `TopicID` int(11) NOT NULL auto_increment,
  `Topic-en` text NOT NULL,
  `Topic-ar` text character set utf8 collate utf8_unicode_ci NOT NULL,
  `ParentTopicID` int(11) NOT NULL default '0',
  PRIMARY KEY  (`TopicID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='Topics for text-based
hierarchies' AUTO_INCREMENT=76 ;

In this table, 'Topic-ar' is the Arabic translation of the English
'Topic-en.' If this were required to also be in Spanish and French, I'd
add columns for 'Topic-es' and 'Topic-fr' similar to 'Topic-en' above.

I'm wondering if there are any other ways to store and access
multilingual data. Can anyone suggest other ways they've dealt with this
task, and the pros and cons of their approach compared to mine?
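One commonly used alternative, sketched here only as an illustration (table and column names below are invented, not part of the existing schema), is to keep one row per language in a separate translations table rather than one column per language, so adding a language adds rows instead of columns:

```sql
CREATE TABLE `TOPIC` (
  `TopicID` int(11) NOT NULL auto_increment,
  `ParentTopicID` int(11) NOT NULL default '0',
  PRIMARY KEY (`TopicID`)
) ENGINE=MyISAM;

-- One row per (topic, language); a utf8 column covers all languages at once.
CREATE TABLE `TOPIC_TRANSLATION` (
  `TopicID` int(11) NOT NULL,
  `LangCode` char(2) NOT NULL,          -- 'en', 'ar', 'es', 'fr', ...
  `Topic` text character set utf8 collate utf8_unicode_ci NOT NULL,
  PRIMARY KEY (`TopicID`, `LangCode`)
) ENGINE=MyISAM;

-- Fetching all topics in one language:
SELECT t.TopicID, tr.Topic
FROM TOPIC AS t
JOIN TOPIC_TRANSLATION AS tr
  ON tr.TopicID = t.TopicID AND tr.LangCode = 'ar';
```

The trade-off: queries need a join per language fetched, but the schema never changes when a language is added, and missing translations are simply absent rows rather than empty columns.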

Thank you in advance for your advice and suggestions.

-Kevin

Kevin Zembower
Internet Services Group manager
Center for Communication Programs
Bloomberg School of Public Health
Johns Hopkins University
111 Market Place, Suite 310
Baltimore, Maryland  21202
410-659-6139 




RE: Requesting help with subquery

2006-09-28 Thread Zembower, Kevin
Thanks, again, for folks who suggested solutions to my problem. To help
users searching the archives, I've pasted in a working solution at the
end of this message. Also, I'd like to ask if there is a more efficient
or better way of checking if the language version exist than the six
lines I repeated 8 times below.

Thanks, again.

-Kevin
=========
I originally wrote:
I have a database of publications in different languages. main
categories are organized into sub categories with baseitems of
publications. Each baseitem can be printed in one or more of eight
languages. My SQL query so far is:
snip

Working solution:
[EMAIL PROTECTED]:~$ cat OrderDB-requested.sql
SELECT
   m.title AS 'Main Category',
   s.title AS 'Sub Category',
   b.partno AS 'Part Number',
   (SELECT lv.title
      FROM langversion AS lv
      WHERE lv.langid = 1 # English = 1
      AND b.baseitemid = lv.baseitemid
   ) AS 'English Title',
   CONCAT(
      IF(EXISTS (SELECT *
            FROM langversion AS lv
            WHERE lv.langid = 1 # 1 = English
            AND b.baseitemid = lv.baseitemid
            AND lv.available = 'Y'
         ), 'E', '-'),
      IF(EXISTS (SELECT *
            FROM langversion AS lv
            WHERE lv.langid = 2 # 2 = French
            AND b.baseitemid = lv.baseitemid
            AND lv.available = 'Y'
         ), 'F', '-'),
      IF(EXISTS (SELECT *
            FROM langversion AS lv
            WHERE lv.langid = 3 # 3 = Spanish
            AND b.baseitemid = lv.baseitemid
            AND lv.available = 'Y'
         ), 'S', '-'),
      IF(EXISTS (SELECT *
            FROM langversion AS lv
            WHERE lv.langid = 4 # 4 = Portuguese
            AND b.baseitemid = lv.baseitemid
            AND lv.available = 'Y'
         ), 'P', '-'),
      IF(EXISTS (SELECT *
            FROM langversion AS lv
            WHERE lv.langid = 5 # 5 = Arabic
            AND b.baseitemid = lv.baseitemid
            AND lv.available = 'Y'
         ), 'A', '-'),
      IF(EXISTS (SELECT *
            FROM langversion AS lv
            WHERE lv.langid = 6 # 6 = Swahili
            AND b.baseitemid = lv.baseitemid
            AND lv.available = 'Y'
         ), 'W', '-'),
      IF(EXISTS (SELECT *
            FROM langversion AS lv
            WHERE lv.langid = 7 # 7 = Russian
            AND b.baseitemid = lv.baseitemid
            AND lv.available = 'Y'
         ), 'R', '-'),
      IF(EXISTS (SELECT *
            FROM langversion AS lv
            WHERE lv.langid = 8 # 8 = Turkish
            AND b.baseitemid = lv.baseitemid
            AND lv.available = 'Y'
         ), 'T', '-')
   ) AS 'Lang Avail'
FROM maincategory AS m
JOIN subcategory AS s ON m.maincatid = s.maincatid
JOIN baseitem AS b ON s.subcatid = b.subcatid
WHERE
   b.available = 'Y'
ORDER BY m.title, s.title;
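On the question of whether the eight near-identical EXISTS blocks can be avoided: one alternative, offered only as an untested sketch against this schema, is a single LEFT JOIN with conditional aggregation, which scans langversion once per baseitem instead of running eight correlated subqueries per row:

```sql
-- MAX() ignores NULLs, so MAX(IF(...)) yields the letter if any matching
-- language row exists, and NULL otherwise; IFNULL() turns NULL into '-'.
SELECT b.partno,
       CONCAT(
          IFNULL(MAX(IF(lv.langid = 1, 'E', NULL)), '-'),
          IFNULL(MAX(IF(lv.langid = 2, 'F', NULL)), '-'),
          IFNULL(MAX(IF(lv.langid = 3, 'S', NULL)), '-'),
          IFNULL(MAX(IF(lv.langid = 4, 'P', NULL)), '-'),
          IFNULL(MAX(IF(lv.langid = 5, 'A', NULL)), '-'),
          IFNULL(MAX(IF(lv.langid = 6, 'W', NULL)), '-'),
          IFNULL(MAX(IF(lv.langid = 7, 'R', NULL)), '-'),
          IFNULL(MAX(IF(lv.langid = 8, 'T', NULL)), '-')
       ) AS 'Lang Avail'
FROM baseitem AS b
LEFT JOIN langversion AS lv
  ON lv.baseitemid = b.baseitemid AND lv.available = 'Y'
WHERE b.available = 'Y'
GROUP BY b.partno;
```

The category titles and English title column would join in as before; the point is only that the per-language availability flags collapse into one grouped pass.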





Requesting help with subquery

2006-09-26 Thread Zembower, Kevin
I'm trying to use a subquery for the first time, and am having some
problems. I'm hoping someone can give me some suggestions on ways of
doing what I want, and further suggestions if there's a better way of
going about my task.

I have a database of publications in different languages. main
categories are organized into sub categories with baseitems of
publications. Each baseitem can be printed in one or more of six
languages. My SQL query so far is:

SELECT
   m.title AS Main Category,
   s.title AS Sub Category,
   b.partno AS Part Number,
   (SELECT lv.title
  FROM langversion AS lv
  JOIN baseitem AS b2 ON lv.baseitemid = b2.baseitemid
  WHERE lv.langid = 1 # English = 1
  AND b.baseitemid=lv.baseitemid
   ) as English Title,
   IF(ISNULL(SELECT lv.langversionid
  FROM langversion AS lv
  JOIN baseitem AS b3 ON lv.baseitemid =
b3.baseitemid
  WHERE lv.langid = 1
  AND b.baseitemid = lv.baseitemid
 )), 'Y', 'N') AS Lang Avail
FROM maincategory AS m
JOIN subcategory AS s ON m.maincatid=s.maincatid
JOIN baseitem AS b ON s.subcatid=b.subcatid
WHERE
   b.available = Y
ORDER BY m.title, s.title;

This gives me an error at line 11, "IF(ISNULL(SELECT ...". This should
give me a "Y" if the English version exists, and an "N" otherwise.

If I modify it like this, it works:

SELECT
   m.title AS 'Main Category',
   s.title AS 'Sub Category',
   b.partno AS 'Part Number',
   (SELECT lv.title
      FROM langversion AS lv
      JOIN baseitem AS b2 ON lv.baseitemid = b2.baseitemid
      WHERE lv.langid = 1
      AND b.baseitemid = lv.baseitemid
   ) AS 'English Title',
   (SELECT lv.langversionid
      FROM langversion AS lv
      JOIN baseitem AS b3 ON lv.baseitemid = b3.baseitemid
      WHERE lv.langid = 1
      AND b.baseitemid = lv.baseitemid
   ) AS 'Lang Avail'
FROM maincategory AS m
JOIN subcategory AS s ON m.maincatid = s.maincatid
JOIN baseitem AS b ON s.subcatid = b.subcatid
WHERE
   b.available = 'Y'
ORDER BY m.title, s.title;

I think this demonstrates that the two subqueries are working.

What I'd ultimately like to do is produce a string like "YNNYYN" where "Y"
is printed if the language version of the baseitem exists (is not
null?). I was going to do this by creating a SELECT subquery for each
language version possible and CONCAT the "Y" or "N" results together.

Can anyone suggest where I'm going wrong in my attempts? Is there a
better way overall to produce this information?

Thanks in advance for your help and suggestions.

-Kevin

Kevin Zembower
Internet Services Group manager
Center for Communication Programs
Bloomberg School of Public Health
Johns Hopkins University
111 Market Place, Suite 310
Baltimore, Maryland  21202
410-659-6139 




RE: Requesting help with subquery

2006-09-26 Thread Zembower, Kevin
Thank you, Johan and Chris, for finding my obvious mistakes. Unfortunately, 
even after fixing them, I still get an SQL error:

[EMAIL PROTECTED]:~$ cat OrderDB-requested.sql
SELECT 
   m.title AS Main Category, 
   s.title AS Sub Category, 
   b.partno AS Part Number,
   (SELECT lv.title
  FROM langversion AS lv 
  JOIN baseitem AS b2 ON lv.baseitemid = b2.baseitemid
  WHERE lv.langid = 1 # English = 1
  AND b.baseitemid=lv.baseitemid
   ) as English Title,
   IF((IFNULL(SELECT lv.langversionid 
  FROM langversion AS lv
  JOIN baseitem AS b3 ON lv.baseitemid = b3.baseitemid
  WHERE lv.langid = 1 
  AND b.baseitemid = lv.baseitemid
   ), 0), 'Y', 'N') AS Lang Avail
FROM maincategory AS m 
JOIN subcategory AS s ON m.maincatid=s.maincatid 
JOIN baseitem AS b ON s.subcatid=b.subcatid 
WHERE 
   b.available = Y 
ORDER BY m.title, s.title;
[EMAIL PROTECTED]:~$ mysql orderDB < OrderDB-requested.sql
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the 
manual that corresponds to your MySQL server version for the right syntax to 
use near 'SELECT lv.langversionid 
  FROM langversion AS lv
  JO' at line 11
[EMAIL PROTECTED]:~$

-Kevin

-Original Message-
From: Chris Sansom [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 26, 2006 12:45 PM
To: Zembower, Kevin; mysql@lists.mysql.com
Subject: Re: Requesting help with subquery

At 11:40 -0400 26/9/06, Zembower, Kevin wrote:
IF(ISNULL(SELECT lv.langversionid
   FROM langversion AS lv
   JOIN baseitem AS b3 ON lv.baseitemid =
b3.baseitemid
   WHERE lv.langid = 1
   AND b.baseitemid = lv.baseitemid
  )), 'Y', 'N') AS Lang Avail

Looks to me as if your parentheses don't balance 
here - you have an extra ')' in that last line.

-- 
Cheers... Chris
Highway 57 Web Development -- http://highway57.co.uk/

I once preached peaceful coexistence with Windows.
You may laugh at my expense - I deserve it.
-- Jean-Louis Gassé, former Apple executive ( Be CEO)




Re: Announce: LBPool 1.0 beta1 (Load Balancing JDBC Connection Pool)

2006-07-19 Thread Kevin Burton

There was a thread about this before... this is much better than
Connector/J's load balancing.

You can take machines out of production, add them back in, it's MySQL slave
aware, etc...

On 7/19/06, Christopher G. Stach II [EMAIL PROTECTED] wrote:


Kevin Burton wrote:
 Hey Gang.

 I wanted to get this out on the list and facilitate some feedback.

 http://www.feedblog.org/2006/07/announce_lbpool.html


What does this have over MySQL Connector/J's load balancing?

--
Christopher G. Stach II





--
Founder/CEO Tailrank.com
Location: San Francisco, CA
AIM/YIM: sfburtonator
Skype: burtonator
Blog: feedblog.org


Announce: LBPool 1.0 beta1 (Load Balancing JDBC Connection Pool)

2006-07-18 Thread Kevin Burton

Hey Gang.

I wanted to get this out on the list and facilitate some feedback.

http://www.feedblog.org/2006/07/announce_lbpool.html

I CC'd both lists because this might be of interest to the larger MySQL
community as the techniques I used here could be implemented in other
languages.

==

The lbpool project provides a load balancing JDBC driver for use with DB
connection pools. It wraps a normal JDBC driver providing reconnect
semantics in the event of additional hardware availability, partial system
failure, or uneven load distribution. It also evenly distributes all new
connections among slave DB servers in a given pool. Each time connect() is
called it will attempt to use the best server with the least system load.

The biggest scalability issue with large applications that are mostly READ
bound is the number of transactions per second that the disks in your
cluster can handle. You can generally solve this in two ways.

  1. Buy bigger and faster disks with expensive RAID controllers.
  2. Buy CHEAP hardware on CHEAP disks but lots of machines.

We prefer the cheap hardware approach and lbpool allows you to do this.

Even if you *did* manage to use cheap hardware, most load-balancing hardware
is expensive, requires a redundant balancer (in case it fails), and seldom
has native support for MySQL.

The lbpool driver addresses all these needs.

The original solution was designed for use within MySQL replication
clusters. This generally involves a master server handling all writes with a
series of slaves which handle all reads. In this situation we could have
hundreds of slaves and lbpool would load balance queries among the boxes. If
you need more read performance just buy more boxes.

If any of them fail it won't hurt your application because lbpool will
simply block for a few seconds and move your queries over to a new
production server.

While currently designed for MySQL this could easily be updated to support
PostgreSQL or any other DB that supports replication.


--
Founder/CEO Tailrank.com
Location: San Francisco, CA
AIM/YIM: sfburtonator
Skype: burtonator
Blog: feedblog.org


how to clear buffer cache

2006-07-12 Thread Kevin Wilkinson
In order to run benchmarks, it is useful to be able to clear the buffer
cache.  I'm using InnoDB.  Is there a way to clear the buffer cache, i.e.,
something equivalent to Oracle's "alter system flush buffer pool"?

thanks,

kevin




Re: help with locate() function and index

2006-06-24 Thread kevin vicky

Hi,

The table has ~2 million records and 3 columns, one of which is a text
field with on average 2000 characters (English alphabets; spaces are
present for now, but can be eliminated if required).

eg: adfastsdfasgadfdfdsaffagdasfd adfsadfadsgdsfdfsdfsgsdfdsg fg adfafg
adfddfgadsfdsgfghfghjgafedsf.

The query I am trying is like this...

Select pid, locate('affagdasfd adfsadf', txtfield) from tablename where
locate('affagdasfd adfsadf', txtfield) > 10;

The substring I am searching for is always 30-50 characters in length.  The
query takes more than 30 seconds to execute and I would like to improve this.
I tried a FULLTEXT index on this field hoping that locate() would be faster,
but it didn't make any difference.  It would be a great help if you could
help solve this problem.
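One thing worth trying, offered only as a hedged sketch: it assumes the spaces in txtfield split it into tokens at least as long as MySQL's full-text minimum word length (ft_min_word_len, default 4), so the FULLTEXT index can prune candidate rows in boolean mode before locate() does the exact positional check:

```sql
-- The MATCH clause can use the FULLTEXT index to prune rows;
-- LOCATE() then computes the exact position only on the survivors.
SELECT pid, LOCATE('affagdasfd adfsadf', txtfield) AS pos
FROM tablename
WHERE MATCH(txtfield) AGAINST('+affagdasfd +adfsadf' IN BOOLEAN MODE)
  AND LOCATE('affagdasfd adfsadf', txtfield) > 10;
```

A plain LOCATE() predicate on its own can never use an index, which is why adding the FULLTEXT index alone made no difference: the optimizer only uses it when the query contains a MATCH ... AGAINST clause.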

thanks


On 6/24/06, John Hicks [EMAIL PROTECTED] wrote:


kevin vicky wrote:
 Hi,

 I am trying to use locate() function to get position of a substring in a
 text field which on average has 2000 characters and the search substring
is
 30 -50 characters long. The table has around 2 million records and
looking
 for a efficient way to do the search. I tried fulltext index but dint
see
 any difference between using it or not. The text field is random
characters
 with space or sometimes no space, does this make the fulltext index not
 much
 use? I also wonder about character set and collation, the text field
will
 contain only english alphabets and no special characters so is there a
 special character set to use rather than the default?

What is the problem you are trying to solve?

What problem do you have when you use locate()?

Are you running locate on all 2 million records?

 Also since the search string will be between 30-50 characters is there
 any parameters to make the index effective?

I believe locate operates on the text field argument. So it doesn't use
an index.

It sounds like you are confused (or one of us is confused :) so why
don't you just back up and tell us what you are trying to accomplish.




help with locate() function and index

2006-06-23 Thread kevin vicky

Hi,

I am trying to use the locate() function to get the position of a substring
in a text field which on average has 2000 characters; the search substring is
30-50 characters long.  The table has around 2 million records and I am
looking for an efficient way to do the search.  I tried a fulltext index but
didn't see any difference between using it or not.  The text field is random
characters with spaces or sometimes no spaces; does this make the fulltext
index of little use?  I also wonder about character set and collation: the
text field will contain only English alphabets and no special characters, so
is there a special character set to use rather than the default?  Also, since
the search string will be between 30-50 characters, are there any parameters
to make the index effective?

If my questions are not clear, please let me know and I will try to explain
them better.

thanks,
Kevin


Disaster with dash on mysql cli interface

2006-06-21 Thread Kevin Old

Hello everyone,

I had a horrible thing happen to me this morning and wanted to make it
known to the community.

I needed to delete a record from a very large table (yes, it was
backed up) and I like the cli interface of mysql.  I ran this query:

delete from tablename where id - 12345;

Notice that I accidentally hit the dash (-) instead of the equal (=).
It proved to be disastrous as it deleted all the records from that
table.

Lucky for me I had a backup from last night and not too many records
were added since then and I was able to restore.

For the record, I am aware of the select before delete method, but
didn't use it in this one instance and it meant a few hours restoring
data.

Just wanted to throw this out and see if others had possible solutions
for working with the mysql cli interface for maybe setting up rules
for it to cancel a query if it contains a certain character (like the
dash).  Fat chance there is, but I thought I'd ask.
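For what it's worth, the mysql client does have a related safeguard: the --safe-updates option (also spelled --i-am-a-dummy) makes the server refuse UPDATE and DELETE statements whose WHERE clause cannot use a key and that have no LIMIT. A typo like the one above could not use the index on id, so it would likely be refused rather than executed (offered as a hedged sketch; tablename is the placeholder from the original query):

```sql
-- Start the client with the safeguard on:
--   shell> mysql --safe-updates
-- or toggle it inside an existing session:
SET SQL_SAFE_UPDATES = 1;

-- With it enabled, a DELETE whose WHERE clause cannot use a key and
-- that has no LIMIT is refused with an error instead of being run:
DELETE FROM tablename WHERE id - 12345;
```

It is not a complete defense (a LIMIT makes any statement pass), but it turns a full-table wipe into an error message.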

Hope this helps someone,
Kevin
--
Kevin Old
[EMAIL PROTECTED]




Re: Disaster with dash on mysql cli interface

2006-06-21 Thread Kevin Old

On 6/21/06, Barry [EMAIL PROTECTED] wrote:

Kevin Old schrieb:
 Hello everyone,

 I had a horrible thing happen to me this morning and wanted to make it
 known to the community.

 I needed to delete a record from a very large table (yes, it was
 backed up) and like the cli interface of mysql.  I ran this query:

 delete from tablename where id - 12345;

 Notice that I accidentally hit the dash (-) instead of the equal (=).
 It proved to be disasterous as it deleted all the records from that
 table.

 Lucky for me I had a backup from last night and not too many records
 were added since then and I was able to restore.

 For the record, I am aware of the select before delete method, but
 didn't use it in this one instance and it meant a few hours restoring
 data.

 Just wanted to throw this out and see if others had possible solutions
 for working with the mysql cli interface for maybe setting up rules
 for it to cancel a query if it contains a certain character (like the
 dash).  Fat chance there is, but I thought I'd ask.

 Hope this helps someone,
 Kevin

On this one, use LIMIT.
If you want to delete specific rows, always use LIMIT.

Even if you f**k up, you have only deleted one row.

If you are lucky it is an old one and easily restorable.



Hi Barry,

So if I understand you correctly, I'd do the following:

delete from tablename where id - 12345 limit 1;

Is that correct?
--
Kevin Old
[EMAIL PROTECTED]




Password problems using Mac OS X

2006-06-14 Thread Kevin Felix

Hi all,


I've been using MySQL for over a year now and I didn't install  
anything special recently, yet I suddenly find myself locked out of  
MySQL.


I'm using version 3.23.49 and I'm running Mac OS 10.4.6 fully up-to-date.
I normally connect using the root-user but this afternoon I
first got a socket error. Not the first time so I just reboot the  
server, socket error is gone but now I'm getting "Access denied for
user 'root'@'localhost'". I restarted the server a few times and
even rebooted my system.


I can't connect using php, terminal, MySQL Administrator, ...

After a quick search on the
http://dev.mysql.com/doc/refman/4.1/en/resetting-permissions.html page,
I tried that but alas, no change...


This might be a good time to update to MySQL 4 if I can get my data  
out, but I don't see that happening without me getting access to the  
server once again.


Does anyone have a solution?


Thank you for your time
Kevin Felix
[EMAIL PROTECTED] (e-mail)
[EMAIL PROTECTED] (MSN-Messenger)
ekefster (AIM)

Re: Password problems using Mac OS X

2006-06-14 Thread Kevin Felix

Dan,

A thousand times thanks for the fast reply, I just reset my password  
with MySQL Administrator now, everything is back the way it was!


On a side note: I also saw the version reported as MySQL 5.0.17-max via  
socket, while version 3 was reported through phpinfo(). My PHP install and  
other MySQL clients all use the same database though. Does it matter what  
version phpinfo() is giving me?



Kevin Felix

On 14 Jun 2006, at 05:18, Dan Buettner wrote:


Kevin -

You can start up the MySQL server without password protection using  
the --skip-grant-tables option.  Note that will leave your server  
wide open, so you may also want to use the --skip-networking option  
and access it through the socket on the same machine.


Something like this I think:
mysqld_safe --skip-grant-tables --old-passwords --user=root --skip-networking
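For completeness, the usual follow-up once the server is running with
--skip-grant-tables is to reset the password by hand and reload the grant
tables (a sketch assuming the pre-4.1 password format implied by
--old-passwords; 'newpass' is a placeholder):

```sql
-- Connect with: mysql -u root   (no password needed under --skip-grant-tables)
UPDATE mysql.user SET Password = PASSWORD('newpass') WHERE User = 'root';
FLUSH PRIVILEGES;   -- reload the grant tables so the new password takes effect
```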


I know that Apple has packaged at least one MySQL update into the  
OS X 10.4.x updates.  Is it possible that you got upgraded to MySQL  
4.x during an Apple update?  Perhaps your 3.23 installation still  
lives somewhere but now 4.x gets started up by the OS?


Dan


Kevin Felix wrote:

Hi all,
I've been using MySQL for over a year now and I didn't install  
anything special recently, yet I suddenly find myself locked out  
of MySQL.
I'm using version 3.23.49 and I'm running Mac OS 10.4.6 fully  
up-to-date. I normally connect using the root-user but this afternoon  
I first got a socket error. Not the first time so I just rebooted  
the server; the socket error is gone but now I'm getting "Access  
denied for user 'root'@'localhost'". I restarted the server a few  
times and even rebooted my system.

I can't connect using php, terminal, MySQL Administrator, ...
After a quick search I tried the steps on the  
http://dev.mysql.com/doc/refman/4.1/en/resetting-permissions.html page,  
but alas, no change...
This might be a good time to update to MySQL 4 if I can get my  
data out, but I don't see that happening without me getting access  
to the server once again.

Does anyone have a solution?
Thank you for your time
Kevin Felix
[EMAIL PROTECTED] (e-mail)
[EMAIL PROTECTED] (MSN-Messenger)
ekefster (AIM)










Re: Get the record with the latest date

2006-05-28 Thread Kevin F. O'Riordan

Hi Brian,

 Man, this took me two hours to figure out, but in the end, this worked!

 SELECT  ... max(date_time) ...

It /appeared/ to work, but with different test data you should see
that it isn't guaranteed to.  Try inserting:

  INSERT INTO completed_modules
(module_id, email, score, date_time)
VALUES (1, '[EMAIL PROTECTED]', 5, '2006-05-29 11:11:00');

When I insert this line, then run your SELECT query, I get

  +-----------+-------------------+-------+---------------------+
  | module_id | email             | score | max(date_time)      |
  +-----------+-------------------+-------+---------------------+
  |         1 | [EMAIL PROTECTED] |     8 | 2006-05-29 11:11:00 |

  [remaining results snipped]

The 'max(date_time)' is as you're looking for; the 'score' is not.

Your use of 'max(date_time)' will select the most recent value for
'date_time', but the value selected for 'score' is independent of
this.

To solve your original problem correctly:

  I'm trying to get a list of each persons completed test
  (module_id), and the score, but only for the latest test result
  (date_time), not all of them.  ...  I just want the latest results
  (even if the score was worse).

You'll have to use Michael's suggestion:

 For each email-module_id combination, you want the row with the
 latest (maximum) date.  This is actually a FAQ, with solutions in
 the manual
 http://dev.mysql.com/doc/refman/4.1/en/example-maximum-column-group-row.html.
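That manual pattern, adapted to the table in this thread (a sketch;
assumes MySQL 4.1+ for subqueries — the self-join variant in the manual
page works on older servers):

```sql
SELECT cm.module_id, cm.email, cm.score, cm.date_time
FROM completed_modules cm
JOIN (SELECT email, module_id, MAX(date_time) AS latest
        FROM completed_modules
       GROUP BY email, module_id) m
  ON cm.email     = m.email
 AND cm.module_id = m.module_id
 AND cm.date_time = m.latest;
```

Note that if two rows share the same latest date_time, both are returned.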

good luck,
Kevin




Re: can not read from pipe

2006-05-18 Thread Kevin F. O'Riordan

Hi Baynaa,

 I am trying to install syslog-ng + phpsyslogng ... It pipes all the
 log messages to a pipe file mysql.pipe in sql statement format.

 But now the problem is mysql can not read from the pipe file.  When
 I give the command:

 $ mysql -u sysloguser -p syslog < mysql.pipe

Does the mysql command return, or hang?

If it returns:

  You could test mysql reading from a pipe in a more controlled
  manner:

  shell> mkfifo sql.fifo
  shell> echo "INSERT INTO whatever_table ..." > sql.fifo

  and from another shell run your original command but reading from
  the new sql.fifo .


If it hangs:

  Sounds like there's nothing to read from the pipe.
  What does 'cat mysql.pipe' print?
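To make the "nothing to read" case concrete, here is a tiny self-contained
sketch of named-pipe behaviour, using cat as a stand-in for mysql (the
INSERT text is just a placeholder):

```shell
fifo=$(mktemp -u)            # pick an unused path for the named pipe
mkfifo "$fifo"
# A writer blocks until some reader opens the pipe, so background it:
echo "INSERT INTO whatever_table VALUES (1);" > "$fifo" &
# The reader (cat here, mysql in the original setup) unblocks the writer:
line=$(cat "$fifo")
echo "$line"
wait
rm "$fifo"
```

If no writer is attached, the reader hangs on open — which is exactly the
symptom of mysql waiting forever on an empty mysql.pipe.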


HTH,
Kevin




Insert speed on table with 7M rows but small row size.

2006-04-07 Thread Kevin Burton
I have a fairly small table WRT the data size.  It's about 300M of  
data.  Right now it has about 6M rows.


The schema is pretty simple.  It has one 64-bit ID column.  Basically  
it's for checking the existence of an object in our DB and is designed  
to work very fast.


When the table was FIRST created I could do inserts very fast.  Now  
that it's grown in size, inserts are taking a LOT longer.  Sometimes 60  
seconds to 2-3 minutes.


I've migrated to using bulk inserts of 1k rows or more but in  
production this might be a bit difficult.
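For reference, "bulk inserts of 1k rows" means the multi-row VALUES form,
where one statement replaces many single-row round trips (a sketch with a
hypothetical table name):

```sql
INSERT INTO object_ids (id) VALUES
  (1001), (1002), (1003), (1004);   -- ... up to ~1000 rows per statement
```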


Is there any way I can tune MySQL to improve this operation?

Kevin

Kevin A. Burton, Location - San Francisco, CA
  AIM/YIM - sfburtonator,  Web - http://www.feedblog.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04






Using Network Block Device on Linux to build HUGE/cheap memory-base MySQL boxes.

2006-02-19 Thread Kevin Burton
I was talking to a friend tonight about how they use NBD to run a  
single system image in memory.


NBD (Network Block Device) allows one Linux box to export a block  
device and lets you mount it on another machine.   For the  
memory component they just use a ram disk.


More info here:

http://www.ussg.iu.edu/hypermail/linux/kernel/9704.3/0492.html

Basically they just buy cheap 1U boxes with 4-8 gig and then mount  
them... this way they let the process allocate as much memory as  
it wants; it will then start swapping, but instead of using disk it  
starts using the remote memory.  Since gigabit ethernet is now FASTER  
than most disk installs in terms of throughput, this would seem like a  
win/win.
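A rough sketch of the classic (circa-2006) nbd-tools setup being
described — the hostname, port, and paths are made up, and flags vary
between nbd versions, so treat this as illustrative only:

```shell
# On the memory box: export a ramdisk-backed file over port 2000
#   nbd-server 2000 /mnt/ramdisk/nbd-backing-file
# On the MySQL box: attach it as a local block device and use it as swap
#   nbd-client mem-box 2000 /dev/nbd0
#   mkswap /dev/nbd0
#   swapon /dev/nbd0
```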


Here's the idea I had though.

MySQL (except for MySQL cluster) doesn't scale if you need to run an  
image across 2 boxes.  For example you can't currently take two boxes  
and run your dataset on BOTH boxes at the same time for double  
scalability.


What if you booted a MySQL install and told it to use NBD-mounted  
memory?  Theoretically you could build MUCH cheaper and MUCH faster  
clusters.  Your DB writes would still go to the local (RAID)  
filesystem, but your InnoDB buffer pool and other buffers would be  
running out of swap and into your network memory subsystem.


This would allow you to have a HUGE buffer for MySQL.  Buffer your  
whole damn database in MEMORY.


The main downside I can see is fault tolerance if the ethernet port  
was pulled.  The box would fail.  Of course at this point its a bit  
like pulling a SCSI cable out.


If this turns out to be a good way to scale MySQL someone could just  
pay to have NBD enhanced to support fault tolerance with mirror nodes.


Thoughts?

Kevin

Kevin A. Burton, Location - San Francisco, CA
  AIM/YIM - sfburtonator,  Web - http://www.feedblog.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04






Re: Drop all tables?

2006-01-02 Thread Kevin Spencer
On 1/2/06, JJ [EMAIL PROTECTED] wrote:
 Is there an easy way to drop all (MyISAM) tables of a MySQL 3.23 database
 besides deleting it and then recreating an empty database?

Read up on mysqldump.  For 3.23, something like the following should
do the trick:

mysqldump databasename --add-drop-table --no-data > your_output_file.sql

And then import it:

mysql databasename < your_output_file.sql

--
Kevin.



