Re: MIT-Pthreads

2002-09-07 Thread Tod Harter

On Friday 06 September 2002 05:39 pm, Ilyas Keser wrote:
 What I also don't understand is that one can also compile MySQL with
 MIT-pthreads. What is MIT-pthreads? Where can I read more about this?

 Thanks

 ilyas

'PThreads' is short for POSIX Threads, the POSIX API specification for 
multi-threading. Threading can be implemented in a wide variety of ways, 
and one implementation of the pthreads API was written at MIT, hence 'MIT 
pthreads'. 

Since every pthreads implementation in theory implements the same API, they 
SHOULD be interchangeable. In theory... In practice nothing of the sort is 
true, for various reasons, but MySQL does claim to link and run against 
several thread libraries.
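For illustration, here is a minimal program written against the pthreads API. This is generic POSIX threads code, not anything specific to the MIT implementation, and the function names are mine:

```c
#include <pthread.h>

/* Each thread adds its argument into a shared counter, guarded by a mutex. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;

static void *worker(void *arg) {
    int n = *(int *)arg;
    pthread_mutex_lock(&lock);
    counter += n;
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Spawn four threads, wait for them all, and return the final counter. */
int run_demo(void) {
    pthread_t threads[4];
    int args[4] = {1, 2, 3, 4};
    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, &args[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return counter;  /* 1+2+3+4 = 10 regardless of scheduling order */
}
```

Any conforming pthreads library (LinuxThreads, MIT pthreads, a native vendor library) should run this identically; the differences show up in performance and corner cases, not in the API.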

My guess would be that you are almost always better off using whatever the 
default threading library is for your platform. It would take a deep knowledge 
of the implementations to predict what linking against some other thread 
library will do. At the very best you might get better performance or avoid 
some bugs; at worst you get a broken, flaky, and/or inefficient binary. 
Leave it to the experts! 



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: newbie help

2002-09-07 Thread Tod Harter

On Friday 06 September 2002 05:50 pm, Marcelo Simon wrote:

Well, I'm not 100% certain since I don't know what environment you're working 
in, but a typical example from Perl might be something like

my $query = "INSERT INTO foo VALUES(" . $cgi->param('myintfield') . ",'" . $cgi->param('mydatefield') . "')";

Now if the integer field is empty, then the result of 
$cgi->param('myintfield') is going to be an empty string, and the query becomes

INSERT INTO foo VALUES(,'whatever was in datefield')

which is NOT legal SQL... 

You would need to check your input parameter and, if it is an empty string, 
replace it with the string 'NULL' (if you want a NULL there). As for the 
date field, once again, an empty quoted string will not insert properly into a 
date column.

NOW having said all of this, DO NOT EVER INSERT FORM FIELD VALUES DIRECTLY 
INTO A DATABASE THIS WAY WITHOUT FIRST PREPROCESSING THEM. You are creating a 
GIANT security hole in your system if you do!

If you're using PHP, basically the exact same comments etc apply.

One simple way to fix your problem, in Perl at least, is to use placeholders 
and bind parameters. It's much safer and much easier.
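A minimal sketch of the placeholder approach with DBI. The connection details are placeholders, and the table layout is taken from the example above:

```perl
use CGI;
use DBI;

my $cgi = CGI->new;

# Connection details are placeholders; adjust for your setup.
my $dbh = DBI->connect('DBI:mysql:database=test', 'user', 'password',
                       { RaiseError => 1 });

my $int  = $cgi->param('myintfield');
my $date = $cgi->param('mydatefield');

# Empty form fields become NULL: DBI sends undef as SQL NULL.
$int  = undef if defined $int  && $int  eq '';
$date = undef if defined $date && $date eq '';

# The ? placeholders let the driver quote and escape the values safely,
# closing the injection hole described above.
my $sth = $dbh->prepare('INSERT INTO foo VALUES (?, ?)');
$sth->execute($int, $date);
```

Note that this solves both problems at once: the empty-field/NULL issue and the unescaped-input security hole.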



 Hi,

 First time here, hope someone can understand me and help me.

 I have a field which is of type int(4), and I'm updating it from a form.
 It works OK unless the form field is empty. When it's empty I'm
 receiving an SQL error number 51().

 In the db the field is set up to receive empty values.
 In the query the field is not surrounded by ' ' (since it's a number).

 Can anyone explain to me what is happening? How can I leave that
 field empty on the form?

 Also the same thing is happening with a date field! Any clues?

 TIA
 Marcelo Simon






Re: Euro sign mysql

2002-09-02 Thread Tod Harter

On Monday 02 September 2002 09:27 am, Derk van der Harst wrote:
 Hello everyone,

 how can I insert records with the Euro sign into the mysql database?
 When i try to insert the euro sign i get the err message 'query is empty'.
 Can somebody help?

 thanks
 Derk

What happens if you escape the string first, for instance with DBI's quote() 
method? Not knowing what encoding you're using for your string, I would have 
to guess that it contains what MySQL is interpreting as an illegal 
character. 

Still, the error message is a bit odd. Perhaps it isn't even MySQL's fault... 
How are you connecting?
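A quick way to test the escaping theory from Perl DBI. The connection details, table name, and `$string_with_euro_sign` variable are all placeholders:

```perl
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=test', 'user', 'password',
                       { RaiseError => 1 });

# quote() escapes troublesome characters and wraps the value in quotes,
# so the euro sign reaches the server as ordinary string data.
my $safe = $dbh->quote($string_with_euro_sign);
$dbh->do("INSERT INTO prices (label) VALUES ($safe)");
```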






Re: innob question

2002-08-21 Thread Tod Harter

On Wednesday 21 August 2002 10:40 am, Egor Egorov wrote:

Well, loosely...

SET AUTOCOMMIT=0

does not START a transaction; what it does is signal the database that from 
now on transactions will be explicit, not implicit. That means you NEED a 
transaction to get anything to stick in the database: any work done while 
autocommit is 0 that is not part of an explicit transaction is simply not 
going to be stored in the database AT ALL. 

In other words, when autocommit is 1 it is as if every statement is wrapped in 
a separate transaction. When autocommit is 0 you must use transactions 
explicitly, THEY ARE NOT OPTIONAL! So really the two things are not 
equivalent; in fact, to run a transaction you would say

SET AUTOCOMMIT=0;
BEGIN WORK;
...
COMMIT; (or ROLLBACK;)

Note, however, that MySQL will 'toggle' out of autocommit mode if you initiate 
an explicit transaction, so in fact you can simply BEGIN WORK while at 
autocommit=1 and things will be fine. Once you COMMIT, the database will go 
back to the standard autocommitting behaviour on its own.
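To sketch that toggling behaviour (the accounts table here is just an illustration):

```sql
-- autocommit is 1 (the default); BEGIN WORK implicitly suspends it
BEGIN WORK;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- autocommit behaviour resumes after this
```

Either both updates stick or, on ROLLBACK (or a crash), neither does.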


 Randy,
 Tuesday, August 20, 2002, 7:49:09 PM, you wrote:

 RJ  in a innodb table type using mysql max

 RJ  is a begin statement the same as set autocommitt = 0??

 Yes.





Re: Constraint Hell

2002-08-20 Thread Tod Harter

On Tuesday 20 August 2002 08:00 am, Jim Bailey wrote:

http://www.mysql.com/doc/en/SEC447.html

RTFM, it couldn't be clearer: the constrained columns in both the referencing 
and the referenced table need to be indexed, and the FOREIGN KEY and 
REFERENCES clauses name those indexed columns. It's 

FOREIGN KEY (col_in_this_table) REFERENCES other_table (col_in_other_table)

This is, I believe, entirely standard SQL anyhow; it works the same way in 
other databases. 
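A minimal pair of InnoDB tables wired up this way might look like the following. The table and column names are made up, and TYPE=InnoDB is the MySQL 3.23/4.0-era spelling (later versions use ENGINE=InnoDB):

```sql
CREATE TABLE parent (
    id INT NOT NULL,
    PRIMARY KEY (id)
) TYPE=InnoDB;

CREATE TABLE child (
    id INT NOT NULL,
    parent_id INT NOT NULL,
    PRIMARY KEY (id),
    INDEX idx_parent (parent_id),          -- the FK column must be indexed
    FOREIGN KEY (parent_id) REFERENCES parent (id)
) TYPE=InnoDB;
```

With this in place, inserting a child row whose parent_id does not exist in parent fails with a constraint error.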


 sql, Query
 Hark ye experts!

 Here I stand an expert grunt.
 Trying to understand,
 The syntax of constraint.

 I was directed to the constraint doc.
 And found the constraint doc doth suck.

 Here's my ploy, renewed, refined.
 Please see if you can help.

 To create my Foreign table I used the following script

 CREATE TABLE IIM_InventoryItemMaster(
 IIM_InventoryItemMaster_ID MEDIUMINT(8) NOT NULL,
 IIM_ItemNm VARCHAR(64),
 IIM_ItemDescCD VARCHAR(1) DEFAULT 'U',
 IIM_GenderCD VARCHAR(1),
 IIM_ItemPN VARCHAR(36),
 IIM_Spec1Nm VARCHAR(48),
 IIM_Spec2Nm VARCHAR(48),
 IIM_RecordCreatedBY VARCHAR(24),
 IIM_RecordCreatedDT DATETIME,
 IIM_LastEditBY VARCHAR(24),
 IIM_LastEditDT DATETIME,
 PRIMARY KEY(IIM_InventoryItemMaster_ID),
 KEY(IIM_GenderCD),
 KEY(IIM_ITEMNm),
 KEY(IIM_ItemDescCD),
 key(IIM_ItemPN)
 )
   Type = INNoDB;

 Then SHOW CREATE TABLE told me the table type
 is MyISAM. Don't I need InnoDB to use constraints?

 Does INNoDB cause it woe?
 ===
 Now for some clear and Poignant questions about MySQL hell?
 ===
 From the following URL:http://www.mysql.com/doc/en/CREATE_TABLE.html
 I found Constraint docs
 ===
 [CONSTRAINT symbol] FOREIGN KEY [index_name] (index_col_name,...)
 [reference_definition: REFERENCES tbl_name [(index_col_name,...)] [MATCH
 FULL | MATCH PARTIAL] [ON DELETE reference_option] [ON UPDATE
 reference_option]
 ===

 Question I.
 If there is a Foreign Key in a Foreign Table, then there must be a Native
 key in a Native Table.
 What is the correct technical name of the {Native} Field?

 Question II
 I understand CONSTRAINT symbol to be the name of the constraint.
 true or false?

 Then there is FOREIGN KEY [index_name]
   What Index_name, The name of the Foreign Key field's Index
 (located in the Foreign Table?)
   So this is the name of an index in the Foreign Table?

 Question III.
 Then comes (index_col_name,...)
 I guess this means the name of the Foreign Key Field(s)?
 true or false;

 Question IV.
 Next comes REFERENCES tbl_name [(index_col_name,...)]
 The name of the Native Table or the Foreign Table?
 [(index_col_name)], this must mean the name of the Foreign Key Field(s)
 AGAIN?
 true or false

 Question V.
 I suppose I should include the Constraint syntax in the SQL that creates
 the Native Table and not in the SQL that creates the Foreign Table?
 True or false?

 Thanks for your help
 Jim Bailey











Re: MySQL over NFS

2002-08-20 Thread Tod Harter

On Tuesday 20 August 2002 06:39 am, Luis Calero wrote:

NFS locking is ALWAYS problematic. You might have no problems; you might 
also be instantly in hell. 

It's actually really hard to answer your question. NFS implementations vary 
greatly in quality. In addition, it depends on the NICs you have: some 
NICs create a lot more CPU load than others. 

The first thing to do would be to test with a dummy database and see if the 
setup works at all. Then you'd need to load-test your NFS server, tune it, and 
see what throughput you can get out of it. Finally you'd have to test the 
actual setup and find out whether it really works better.

Needless to say, most sysadmins find this sort of thing too much trouble 
and just upgrade their hardware! 

   Hi... I've got the folowing question, our servers are running pretty
 busy these days and our main DB server is taking high load peaks (memory
 is OK but the cpu has almost no idle time). We have another spare server
 and I'm thinking about mounting the database over NFS (100mb LAN) to the
 spare server and using both as frontends to the DB. Both servers are
 supposed to do reads and updates to the DB, but i'm concerned with the
 updates of the server using NFS.

   Are NFS locks safe enough to run this kind of setup? Is this going to be
 an advantage or will suffer from other kinds of problems? Both boxes are
 PIII dual 1Ghz / 1Gb ram, Linux 2.4.16, MySQL 3.23.52

   Thanx





Re: How to share a MyISAM table among different databases

2002-08-20 Thread Tod Harter

On Tuesday 20 August 2002 06:27 am, Sergei Golubchik wrote:

The key quote from the manual is...

"All used tables must be in the same database as the MERGE table itself."

which makes sense. Fundamentally, a database is an independent unit of data 
storage; you can't share data between two separate databases. I know some 
systems approach this differently, but that seems to be the philosophy in 
MySQL, and getting away from it would pretty much throw the grant system 
out the window.
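For reference, the single-table MERGE trick from the reply below looks roughly like this (table names invented; the MERGE table must have the same column layout as the underlying MyISAM table):

```sql
-- In database db2, create a MERGE "alias" over a single table:
CREATE TABLE alias_table (
    id INT NOT NULL,
    name VARCHAR(32)
) TYPE=MERGE UNION=(real_table);
```

After that, the hack is to hand-edit the generated .MRG file so the path it contains points at the real table in the other database, which is exactly the cross-database sharing the manual says is unsupported, so treat it as an unsanctioned workaround.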

 Hi!

 On Aug 20, Dmitry Kuznetsov wrote:
  Hi, Sergey!
 
   As stated in description:
  A MERGE table is a collection of identical MyISAM tables that can be
  used as one. , i.e. content of these tables, being merged, is  merged
  also. I need the opposite - one physical table being represented in
  multiple DBs as if these replicas are regular tables.

 Well, a collection can consist of only one table :)

 So, you create a MERGE table that merges the table you want to alias,
 and put this MERGE table in whatever database you want. Then, edit .MRG
 file to add a proper path there. Voila.

 Regards,
 Sergei





Re: Utility for importing Paradox tables to MYSQL

2002-08-20 Thread Tod Harter

On Tuesday 20 August 2002 05:52 am, Nick Lazidis wrote:

It's relatively straightforward to do with a Perl script.

Just set up an ODBC data source for the Paradox database and install Perl DBI 
along with the DBD::ODBC and DBD::mysql drivers. Then you can simply read 
records from Paradox via ODBC and write them to your MySQL database. Naturally 
you can also do some conversion along the way if you like. 

There are other ways, of course, but I've always found this to be one of the 
easiest. 
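A rough sketch of such a script. The DSNs, credentials, and table and column names are all placeholders:

```perl
use DBI;

# Source: Paradox, via an ODBC DSN named 'paradox_src' (set up beforehand).
my $src = DBI->connect('DBI:ODBC:paradox_src', '', '', { RaiseError => 1 });
# Target: MySQL.
my $dst = DBI->connect('DBI:mysql:database=target_db', 'user', 'password',
                       { RaiseError => 1 });

my $read  = $src->prepare('SELECT id, name FROM source_table');
my $write = $dst->prepare('INSERT INTO target_table VALUES (?, ?)');

$read->execute;
while (my @row = $read->fetchrow_array) {
    # Any per-record conversion would go here.
    $write->execute(@row);
}
```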

 Hi,

 I wonder if there is a utility for transferring Paradox tables to MySQL.

 Nickos







Re: Count(*) using a join?

2002-08-20 Thread Tod Harter

On Monday 19 August 2002 09:12 pm, Jocelyn Fournier wrote:

except, leave off the 'f.' on 'f.count(*)'. Remember, there is only one count 
of records, the count of rows returned by the join, so there is no need to 
qualify it with a table identifier... (and I think it is a syntax error if 
you do).

Also, about the GROUP BY: if you group by s.name, you will have one row in 
your result set for each unique s.name, and COUNT(*) will return the number 
of joined rows for that name. Table f won't really do much, except that any 
row in s whose id has no match in table f will simply not appear at all.
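With that one correction, the grouped query becomes (table names are from the original post):

```sql
SELECT COUNT(*), s.name
FROM first f, second s
WHERE f.id = s.id
GROUP BY s.name;
```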

My suspicion is that you want to know how many times there is an f row with 
a given id for each unique value of s.name. That is a tougher one... Really 
you will have to group on some other column in f which is unique to each row 
in f. You could then have something like

SELECT COUNT(*) AS fcount, s.name, f.uniquecol
FROM first f, second s
WHERE f.id = s.id
GROUP BY s.name, f.uniquecol

and I believe that would accomplish what you want (though you will end up 
with a column in your result you don't really care about, but at least the 
count is there).

 Hi,

 Try:
 SELECT f.count(*), s.name FROM first f,second s WHERE f.id = s.id GROUP BY
 s.name;

 Regards,
   Jocelyn

 - Original Message -
 From: Mark S Lowe [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, August 20, 2002 3:02 AM
 Subject: Count(*) using a join?


 I need to do a join with a count... but for some reason I can't figure out
 how to do it using MySQL. I'm hoping that this pseudo code can help you give
 me an answer:
  select
 
  f.count(*),
  s.name
 
  from
 
  first f, second s
 
  where
 
  f.id = s.id

 How do I get a sql result like:

 CountName
 ---
 12343Test

 I've attempted adding the group by function as documented, but MySQL has
 some demands I don't understand.


 Thank you in advance!

 Mark







Re: web application test tool

2002-08-20 Thread Tod Harter

On Tuesday 20 August 2002 01:06 pm, bin cai wrote:

Check out www.owasp.org; there are some good links there to various tools, 
mostly for correctness and security testing, but it should give you some good 
starting points. Check out www.freshmeat.net as well; I'm pretty sure some 
relevant tools have shown up there at some point.

 Hi,
 I have completed a web-based application using java
 servlet with mysql as the backend database and tomcat
 as the server. I try to test my application
 performance using a free test tool. i found one called
 Microsoft Web Application Stress tool) it is free. but
 it's test script is used by AsP. I am not quite sure
 if there is any web test tool using java servlet?
 Thank you
 bin






Re: Transaction Question CONFUSED

2002-08-20 Thread Tod Harter

On Tuesday 20 August 2002 01:26 pm, Randy Johnson wrote:
 I am confused. (innodb table type)

I'm really not so sure about the 'lock in share mode' thing, but to the best 
of my knowledge if you do a

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

and then start a transaction in which you read data from a row and then update 
it, there is an absolute guarantee (if the database properly honors the 
isolation level) that no two transactions can act in such a fashion that 
either one interferes with the other. In practical terms, that means the same 
code run from client 2 will block as soon as it attempts the read, until the 
transaction started in client 1 is 100% complete.

Now, there may be more efficient ways to get this result. InnoDB uses 
multi-versioning and that has some subtle effects on transactions and 
concurrency. 

A larger question, however, is this: why do you care about reading the old 
balance? If you aren't going to use it to calculate the new one, then it's 
irrelevant...

In other words the scenario you outline reduces (as far as the db is 
concerned) to just

update table set balance=100

and since that is an atomic operation it requires no transaction. In fact, in 
theory, ACID never requires a transaction for any operation involving only a 
single row. For instance, if you were incrementing the balance by 100 it 
would STILL be an atomic operation:

update table set balance=balance+100

It is in fact only when you get to multi-row or multi-table situations that 
transactions are required.

Consider again your example: since no matter what order the two operations are 
performed in, the result is the same (balance is 100), there is no point in 
caring what sequence occurs, especially since script 1 cannot care whether 
script 2 ever runs or not, and vice versa (or else they'd be one script...).

You can satisfy yourself that the same is true for increment, decrement, or 
ANY other single-row scenario that can possibly be invented. This is in fact 
a theorem of transactions...

Why then were transactions invented? Suppose you had THREE rows you needed to 
update with a single update statement:

update table set balance = balance+1 where id = 1 or id = 2 or id = 3

NOW you might need a transaction, because it might be a really bad idea for 
script 2 to come along and do 

select balance from table where id = 1 or id = 2 or id = 3

and end up with the incremented balance for row 1 and the unincremented 
balances for rows 2 and 3, which is quite possible.

In that case, running the first query in a transaction would in fact be quite 
necessary. 

Now you know what keeps db design guys up late at night...
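A sketch of that multi-row case wrapped in a transaction, assuming InnoDB and the SERIALIZABLE level set above (the accounts table name is made up):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN WORK;
UPDATE accounts SET balance = balance + 1 WHERE id IN (1, 2, 3);
COMMIT;
-- A concurrent SELECT on those three rows now sees either all of the
-- increments or none of them, never a mixture.
```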

 Client 1.
 starts transaction
 selects balance from table where id=1 for update
 update table set balance=100
 #At this point, from what I have read, the balance could be selected by
 another user.

 commit



 I have read that a select balance from table where id=1 lock in share mode
 will wait for the commit statement, but client 2 would be wanting to
 update the balance the same way client 1 does, so I do not see how client 2
 could use lock in share mode, because the script is the same for client
 2 as it is for client 1.

 So how would I ensure that client 2 waits for client 1 to commit before
 processing their select and update?



 Randy

 sql,quary







Re: High volume HEAP table

2002-08-19 Thread Tod Harter

On Sunday 18 August 2002 04:17 pm, Rick Robinson wrote:
 Hi all,
 I'm testing using a HEAP table to maintain transient state in a high
 volume TP environment.  So far, it looks promising, and I want to ask
 anyone about similar experiences and solicit some input.

 Some background:
 * I have one table; the row length is ~200 bytes and the table will
 typically never have more than 10,000 rows (more like 7,000 on average).
 * The table is accessed via one of two keys.  I have an index on each.
 The EXPLAIN output indicates these are used.

To the best of my knowledge, HEAP tables can only have one index, and it has 
to be the primary key. In other words, they're basically an in-memory 
hashtable accessible via SQL. 

Your application is pretty typical of session-state management. I doubt 
you'll have any problems, but I'd look into those keys. 
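A session-state HEAP table might be declared like this. The columns are invented, and TYPE=HEAP is the 3.23-era spelling of what later became ENGINE=MEMORY:

```sql
CREATE TABLE sessions (
    session_id CHAR(32) NOT NULL,
    user_id INT NOT NULL,
    state VARCHAR(200),
    PRIMARY KEY (session_id)      -- hash index; lookups are by equality
) TYPE=HEAP MAX_ROWS=10000;
```

MAX_ROWS caps the table so a runaway session store cannot eat all of RAM.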





Re: A question about mm.mysql and the GPL

2002-08-19 Thread Tod Harter

On Monday 19 August 2002 07:21 am, Nicolas Ivering wrote:

Several things spring to mind here...

1st of all when you use JDBC, you're coding to the JDBC interface, not to any 
proprietary interface (IE, your application could just as well call a 
PostgreSQL JDBC driver and it would not need to be recompiled or even 
relinked). In that situation its a bit hard to see how the GPL could be 
construed to apply to your work.

Second, even assuming the GPL DID apply to your work, it does NOT apply to 
output generated by the work. If it did, then every graphic produced with the 
GIMP would be GPL'ed, and every byte of code emitted by gcc would have to be 
GPL'ed as well! Clearly this is not the case, so your data is always 'safe'.

Third, the GPL really only deals with DISTRIBUTION of an application. In 
other words, if you gave someone the binary of your application, you might 
have to give them the source. Giving them the URL of a web server which runs 
GPL'ed code does not in any way oblige you to distribute anything to anyone. 
This is a point many people miss: you can keep your GPL'ed code hidden away 
from the world, and then for all practical purposes the provisions of the GPL 
are powerless.

Finally, I'm not at all sure the notion of a derived work is 'hazy'. It may 
be complex, but it seems extraordinarily precisely defined by the license. If 
there's real money riding on it, hand it to a lawyer and get an opinion. If 
not, then don't worry about it.

 
 Now if I am using the mysql database as a web site
 back end, the web site is totally independent of the
 database and cannot be considered a derived work
 of the mysql database. (keep in mind, the notion of
 derived work is *very* hazy under the GPL).
 
 But if the JDBC driver itself is GPL'ed and I am
 using the driver in my web site's jsp/servlet code,
 then it's concievable that the entire web site's
 contents
 and code become GPL'ed too. Of course, I wouldn't want
 to GPL by web site/code just because I am using a
 driver (keep in mind, using, *not* extending or
 customizing).
 
 Well, the question is, does in fact the use of the
 JDBC driver GPL my java server pages ? That is, does
 the entire web site become a derived work ?
 
 Any offical response from either Mr. Mathews of
 MySQL AB would be appreciated...
 
 Best regards,
 
 --j
 
 




Re: Query slowing down, need advice

2002-08-16 Thread Tod Harter

On Friday 16 August 2002 12:43 pm, Brian Moon wrote:
 No, it should be and is scanning through the 500,000+ rows that meet the
 key of (approved, datestamp).  The table has 1M+ in it. My question is, how
 can it be faster?  Can it not be faster?  Is MySQL on my size server just
 not going to handle scanning 500K rows?  That seems like a low number to
 have problems with.  The query is not just slow, it does not return.

There's no RDBMS on earth that would be fast for this query. If the string 
'myphorum' is a constant expression which is always being searched for, then 
I would simply flag every record in which it occurs. 

I would generally try to redesign the application so this sort of query is 
not needed. Or else maybe there are other fields you can index on as well, to 
cut down the number of candidate records.

I mean, consider this: I don't know your schema, but if you have record 
lengths of, say, 256 bytes, then 1 million records is going to be 256 
megabytes, and I suspect your data is more like a gig. Given that most disk 
subsystems in realistic use get about 5 MB/sec of sustained transfer, you're 
talking about a one-minute READ time for the entire table. And your index is 
likely not helping a bit; in fact I'm surprised MySQL bothers with it. At a 
hit rate of 50%, practically every page of the table has to be loaded anyhow, 
so the index is practically worthless. It may even be slowing your query 
down, so try dropping it.

My other advice would be to get lots of RAM and increase the size of the 
database's buffers. If you can get the entire table buffered in RAM you 
might get some decent performance.


 Brian.

 - Original Message -
 From: Thomas Spahni [EMAIL PROTECTED]
 To: Brian Moon [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Friday, August 16, 2002 10:51 AM
 Subject: Re: Query slowing down, need advice

 | On Fri, 16 Aug 2002, Brian Moon wrote:
 |  I ran into a problem when the queries started to call for 500,000+
 |  rows. The query would not return.  Here is the query and explain:
 | 
 |  select SQL_NO_CACHE id from single_table where approved='Y' AND

 datestamp =

 |  '2001-08-16 00:00:00' AND  ( ( author LIKE '%myphorum%'  OR  subject

 LIKE

 |  '%myphorum%'  OR  body LIKE '%myphorum%' ) ) order by datestamp desc;
 |
 | Brian,
 |
 | this query, having a joker at the beginning of each search string, scans
 | through the whole table. It can't possibly be fast on large tables.
 |
 | Regards,
 | Thomas





Re: Persisten Connections

2002-08-14 Thread Tod Harter

On Wednesday 14 August 2002 09:54 am, Thomas Seifert wrote:

I disagree entirely

Persistent connections have little or nothing to do with increasing load!

Given that you mention you are using PHP, I'll assume you have mod_php running 
in Apache. Each Apache child process in this configuration will maintain ONE 
open database handle. At 100k pageviews per day, I would expect you might max 
out at something like 30k in one hour, or around 10 per second, so you might 
top out at roughly 100 Apache child processes at any one time, and thus 100 
database connections.

Each DB connection is not a huge overhead, but creating and destroying 10 
database handles PER SECOND is a large overhead! Remember, every time MySQL 
creates a connection it has to run internal queries against the grant tables. 
I don't know exactly what the overhead of that is going to be, but ANYTHING 
that adds 10 extra queries per second is putting some strain on your database 
server!

One of the main goals of using Apache modules for scripting was to allow 
persistent database connections. There is really NO reason to give up that 
advantage. Remember, MySQL is multi-threaded, with one thread per connection, 
so the resources for a database connection are on the order of under 100 KB 
of memory per connection.

 it will reduce the load for sure!
 MySQL is very fast in opening and closing connections at least if the
 database-server and webserver are on the same machine.
 I don't know how it will perform on different machines.


 Thomas

 On Wed, 14 Aug 2002 14:40:31 +0100

 John Wards [EMAIL PROTECTED] wrote:
  I am running a website which generates around 100,000 pageviews a day and
  I am wondering if I stop using persistent conections to the MySQL
  database and use
  normal open and close conections this would reduce the load onto my
  server?
 
  Most conections are either made through my main file or the phorum
  message board system.
 
  Thanks in advance
  John Wards





Re: Regarding Relation Amg Tables - Urgent()

2002-08-14 Thread Tod Harter

On Wednesday 14 August 2002 07:03 am, Balaji  Nallathambi wrote:

I think your expectation is simply wrong.

In MySQL recent versions if you use InnoDB tables you can create a 'foreign 
key' constraint. All this means is that you cannot create a record in a 
dependent table where the same value in the key does not exist in the master 
table. SQL has no mechanism for doing automatic updates to related tables 
(except triggers, which are not part of MySQL and not very standardized 
anyway). It is true that MS Access has such a feature, but you won't find it 
in SQL databases. You just have to write 2 queries, 1 to update each table.

I would advise you, though, that if you need to change values in primary keys, 
your database is not properly designed. I would switch to a synthetic key 
design and use that key as the foreign key constraint. You should never have 
to change the values of those keys in a well designed database.
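
A minimal sketch of both points, using the poster's tbset/tbprocess names (the 
value column and key names here are invented for illustration); syntax as of 
MySQL 3.23 with InnoDB:

```sql
-- Synthetic keys: the id values never need to change, so the
-- "update must propagate to the slave table" problem disappears.
CREATE TABLE tbset (
    id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    val VARCHAR(40)
) TYPE=InnoDB;

CREATE TABLE tbprocess (
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    set_id INT UNSIGNED NOT NULL,
    INDEX (set_id),
    FOREIGN KEY (set_id) REFERENCES tbset(id)
) TYPE=InnoDB;

-- To "change" master data, update the non-key column; the
-- relationship through set_id is untouched:
UPDATE tbset SET val = 'new value' WHERE id = 1;
```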

 Hi all,

 Though this is a Visual C++ group, I hope that I
 will get an answer to my question.
 I want to create a one-to-one relationship among
 tables using SQL. How can I do this? I created a
 relation but it does not meet my needs. For example,
 there are two tables, say tbset and tbprocess. I set a
 one-to-one relation between the two tables in which
 the master table is tbset. If I change the values in
 the tbset table the change should be reflected in the
 slave table, that is tbprocess. I think I can do this
 in MS Access, but I changed to MySQL, where I need to
 write the query (there is no GUI for MySQL as in
 MS Access). Please reply to me regarding this.








Re: Need help how to make Directory system in MySQL with 6.5 mill subscribers ?

2002-08-14 Thread Tod Harter

On Tuesday 13 August 2002 06:50 pm, Steinar Kolnes wrote:

Just create indexes on first and last; that should improve the speed of your 
query drastically. It will of course be a BIG index. You might experiment with 
limiting the width of the index: 8 or 10 characters may be enough to get 99% 
of the benefit at a fraction of the size. You might also experiment with 
indexing last names only; the query optimizer should then use that index first 
and only has to do a linear search through the resulting candidate rows.

I think the design of your table is fine. Note that adding NOT NULL to a 
column's definition saves you a small amount of space per row, and you 
probably don't want NULLs anyway. 
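
A hedged sketch of what those indexes might look like against the subscriber 
table below (index names invented, prefix widths just a starting point to 
experiment with):

```sql
-- Prefix indexes: only the first N characters of each column go
-- into the index, keeping it much smaller than full-width keys.
ALTER TABLE subscriber ADD INDEX idx_last_first (last(10), first(8));

-- Or index last names alone and let the optimizer narrow on that first:
ALTER TABLE subscriber ADD INDEX idx_last (last(10));
```

Note that trailing-wildcard patterns such as LIKE 'kolnes%' can use these 
indexes; a leading wildcard (LIKE '%kolnes') cannot.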

 Hi there,

 I have to make a large 6.5 million names and numbers database in
 MySql(maybe not so
 large for some of you). Yet it is very simple, here is my sql file:

 create table subscriber
 (
 id        bigint unsigned not null auto_increment primary key,
 subscr_id bigint unsigned,
 telco_id  int unsigned,
 first     char (80),
 last      char (40),
 address   char (40),
 postcode  int unsigned
 );

 NB I also merged first and middle names into one first;

 All the above should be searchable.
 I have a separate table that take cares of postcodes and post names.

 However the search is very slow. It takes more than 3 minutes for a query
 to search for
 first and last name;

 Example:
 select * from subscriber where first like 'steinar%' and last like
 'kolnes%';


 Is there any out there that have an suggestion how I can speed things up,
 even if I increases the size to more than 10 mill.

 I planned to have separate tables for first, last and street addresses,
 however is this a good idea ?

 Rgs
 Steinar Kolnes








Re: Speed issues...

2002-08-02 Thread Tod Harter

On Wednesday 31 July 2002 01:41 pm, Richard Baskett wrote:
 Ok here is the EXPLAIN query:

OK, the first thing this is telling you is that you aren't using any index on 
your master table, Employers. Probably this just means there isn't any field 
in that table that is part of the search (I can't be 100% sure of that without 
knowing the schema). If one of the columns in Employers IS part of the search 
criteria, then by all means index it; that will cut down the 46,175 rows you 
need to select to start with.

See, what's happening is that first you get 46,175 rows from Employers (the 
entire table), then you attempt to join them to Jobs, which matches 3 rows, 
meaning 46,175 TIMES 3 comparisons to do the join. Then the result of THAT is 
compared with JobsLocation, which does another bunch of comparisons (we don't 
know exactly how many, but at least 46,175 TIMES 581), and so on.

It looks like you have indexes on everything you can, but as someone else 
suggested it may be beneficial to get rid of 'left join' so MySQL can reorder 
the joins for maximum efficiency. I THINK if it can join to JobsCategory 
first it will cut down the work done substantially. 

Check out the MySQL manual section on EXPLAIN as well, there are probably 
some subtle things I've missed. Sometimes just tinkering with your query can 
help. I would try removing a few joins just to see which tables are slowing 
you down the most. You might be able to denormalize a bit based on that.
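
As a sketch of the "get rid of LEFT JOIN" suggestion: since the WHERE clause 
already discards unmatched rows (JobTitle IS NOT NULL, the Category and 
Location filters), the same query with inner joins should return the same 
rows while letting the optimizer pick the join order. Column correspondences 
are assumed from the USING clauses:

```sql
SELECT DISTINCT Location.SortID AS Loc, LocName, JobTitle AS Job,
       Company AS Comp, Jobs.JobID, Employers.EmpID
FROM Employers
INNER JOIN Jobs ON Jobs.EmpID = Employers.EmpID
INNER JOIN JobsLocation ON JobsLocation.JobID = Jobs.JobID
INNER JOIN Location ON Location.LocID = JobsLocation.LocID
INNER JOIN JobsCategory ON JobsCategory.JobID = Jobs.JobID
INNER JOIN Category ON Category.CatID = JobsCategory.CatID
WHERE Valid = 1 AND JobTitle IS NOT NULL
  AND LocName LIKE 'US-Alabama-Birmingham%'
  AND Category.CatID IN (2)
ORDER BY Loc ASC LIMIT 0,50
```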

 table, type, possible_keys, key, key_len, ref, rows, Extra
 Employers, ALL, NULL, NULL, NULL, NULL, 46175, where used; Using temporary;
 Using filesort
 Jobs, ref, EmpID, EmpID, 4, Employers.EmpID, 3, where used
 JobsLocation, ref, PRIMARY, PRIMARY, 4, Jobs.JobID, 581, Using index
 Location, eq_ref, LocID, LocID, 2, JobsLocation.LocID, 1, where used
 JobsCategory, ref, PRIMARY, PRIMARY, 4, Jobs.JobID, 581, Using index;
 Distinct
 Category, eq_ref, PRIMARY,CatID, PRIMARY, 1, JobsCategory.CatID, 1, where
 used; Using index; Distinct

 Hopefully you can make out what it says in all of that jumble! :)

 Thanks so much for everyone's help!

 I am using PHP by the way.. And the reason those things that do not have to
 be quoted are.. Is.. Well.. Just because I quote everything, less likely to
 run into errors, but if it would speed things up.. By all means I will take
 them out! yes?

 Cheers!

 Rick

 When the solution is simple, God is answering. - Albert Einstein

  From: Tod Harter [EMAIL PROTECTED]
  Organization: Giant Electronic Brain
  Date: Wed, 31 Jul 2002 09:57:20 -0400
  To: Richard Baskett [EMAIL PROTECTED], MySQL [EMAIL PROTECTED]
  Subject: Re: Speed issues...
 
  On Wednesday 31 July 2002 05:17 am, Richard Baskett wrote:
  This is my first post here.. So be nice! ;)
 
  OK, no flames ;o).
 
  I'd want a bit more information in order to really diagnose this. The
  first thing I would tell you to do is to EXPLAIN this query. To do that
  just tack the keyword EXPLAIN onto the beginning of it, this will return
  you an explanation of what MySQL's query optimizer has decided to do with
  the query. Most likely one or more of your tables are being joined on
  columns with no indexes. Generally all the columns mentioned in USING
  should be indexed. You might also experiment with indexing of the LocName
  and JobTitle columns.
 
  Beyond that there are many possibilities. Your table structure is
  obviously fairly elaborate, and I suspect fully normalized. You might be
  forced to denormalize it some, in other words maybe the stuff in tables
  like Location will just have to be merged into the parent table even
  though it means possibly some duplication of data (but I suspect that at
  the level of actually managing your data you probably duplicate it in the
  child table already anyhow...).  Denormalization can reduce the numbers
  of joins enough to possibly get you a speed improvement.
 
  BTW, why is the 1 in ... VALID = '1' ... quoted? If it's an integer then
  get rid of the quotes, and the same for CatID IN ('2'); you're just forcing
  the database engine to do conversions. If, as I suspect, you are using Perl
  DBI then try using bind parameters and explicitly setting the SQL_TYPE.
  Perl DBI is pretty dumb about figuring out what data types to use.
 
  I have a database with a little over 60,000 records and when I do a
  search on that db it takes a little over a minute and a half to return
  results. This is very unacceptable, but Im just not sure what I can do
  to remedy this.  Here is what the query looks like when just using
  location and category in the search..
 
  SELECT DISTINCT Location.SortID AS Loc, LocName, JobTitle AS Job,
  Company AS Comp, Jobs.JobID, Employers.EmpID
  FROM Employers
  LEFT JOIN Jobs USING (EmpID)
  LEFT JOIN JobsLocation USING (JobID)
  LEFT JOIN Location USING (LocID)
  LEFT JOIN JobsCategory ON Jobs.JobID=JobsCategory.JobID
  LEFT JOIN Category USING (CatID)
  WHERE Valid = '1' AND JobTitle IS NOT NULL

Re: Querying for XML content in the database ?

2002-08-02 Thread Tod Harter

On Friday 02 August 2002 12:47 pm, [EMAIL PROTECTED] wrote:

Lars: I think what you are trying to do is pound a square peg into a round 
hole...

MySQL really wasn't designed for this sort of usage. Your selects are 
brute-force scans over unindexed data, which is exactly why they are so 
slow...

I would advise you to look into using an XML or XML Enabled RDBMS. There are 
a lot of possible considerations here as to what you might want. 
Considerations would include:

XML databases generally have NO idea of referential integrity whatsoever, nor 
transactions, etc. They also generally use some sort of query language that 
is either totally proprietary or based to some degree on XQuery, XPath, 
XPointer, etc. Expect the query language to be unfamiliar to SQL users and 
non-portable from database to database. On the flip side they can give you 
excellent performance for this type of application and have other desirable 
features like support for Schemas etc.

XML enabled databases now include most of the major vendor's offerings, 
though they often need an add-on module. I'm not really incredibly familiar 
with these, not having worked with one closely, but they generally are a 
standard RDBMS which has added features to allow mapping of XML to a 
relational schema, and even sometimes ad-hoc XML support (though generally 
its not indexed in that case). 

The other option is to keep using MySQL but abandon the idea of storing XML in 
a blob. In other words you would have to create a relational mapping of your 
XML schema to an RDBMS schema. Then your application would accept XML, 
extract the necessary data from it, and create appropriate INSERT statements. 
For output it would be necessary to reverse the process. If it's a fairly 
simple application this is not a big deal. If it's a complicated application 
with a complex and changing schema, then you probably want to find a 
commercial tool that supports this sort of thing; there are several good 
ones out there that will make the mappings easier.
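
A hedged sketch of such a relational mapping for the article-info documents 
quoted below (table and column names are invented; the point is that the 
language/name search becomes an indexed lookup instead of a regexp scan):

```sql
CREATE TABLE article (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
);

-- One row per language variant, extracted from the XML on insert:
CREATE TABLE article_text (
    article_id INT UNSIGNED NOT NULL,
    lang       CHAR(2) NOT NULL,    -- 'en', 'de', ...
    pn         VARCHAR(255),        -- product name element
    pd         TEXT,                -- product description element
    INDEX (lang, pn)                -- makes the lang+name search indexed
);

-- The slow regexp query then becomes something like:
SELECT a.* FROM article a, article_text t
WHERE t.article_id = a.id AND t.lang = 'de' AND t.pn LIKE 'deutsches%';
```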

Hope this is helpful. You might want to check out O'Reilly's www.xml.com web 
site, it has a lot of interesting articles discussing this sort of thing.


 Heyho folks,

 I hope I am in the correct mailing list here.

 We do have a performance problem with mySQL.

 In the database we have :

 INSERT INTO art_xml VALUES ('0', '<article-info><pn lang=en>english
 product nr 0 </pn><pd lang=en>and went away, and two  technicians
 remained with instrument and started  to\r and </pd><pn lang=de>deutsches
 product nr 0 </pn><pd lang=de>and protected meaning of civil life.\r and
 </pd></article-info>');

 lots of these entries...

 and querying happens with :

 SELECT pr.*, SUBSTRING(xml_description, LOCATE('', xml_description) +
 length('') ) as jezus FROM art_xml as pr WHERE pr.xml_description regexp
 '[[:space:]]*pn[[:space:]]*lang=de[[:space:]]*[[:alnum:][:space:]]*de[[:alnum:][:space:]]*'
 ORDER BY jezus

 Well this works.. but.. it is f***cking slow... really slow.. serious
 slow...

 ( 50,000 entries need much more than 3 minutes to process this select)

 So the question is - do we make a major mistake here or is it just so slow
 ?

 Thnx folks :)

 dodger








Re: Speed issues...

2002-07-31 Thread Tod Harter

On Wednesday 31 July 2002 05:17 am, Richard Baskett wrote:
 This is my first post here.. So be nice! ;)

OK, no flames ;o).

I'd want a bit more information in order to really diagnose this. The first 
thing I would tell you to do is to EXPLAIN this query. To do that just tack 
the keyword EXPLAIN onto the beginning of it, this will return you an 
explanation of what MySQL's query optimizer has decided to do with the query. 
Most likely one or more of your tables are being joined on columns with no 
indexes. Generally all the columns mentioned in USING should be indexed. You 
might also experiment with indexing of the LocName and JobTitle columns.

Beyond that there are many possibilities. Your table structure is obviously 
fairly elaborate, and I suspect fully normalized. You might be forced to 
denormalize it some, in other words maybe the stuff in tables like Location 
will just have to be merged into the parent table even though it means 
possibly some duplication of data (but I suspect that at the level of 
actually managing your data you probably duplicate it in the child table 
already anyhow...).  Denormalization can reduce the numbers of joins enough 
to possibly get you a speed improvement.

BTW, why is the 1 in ... VALID = '1' ... quoted? If it's an integer then get 
rid of the quotes, and the same for CatID IN ('2'); you're just forcing the 
database engine to do conversions. If, as I suspect, you are using Perl DBI then try 
using bind parameters and explicitly setting the SQL_TYPE. Perl DBI is pretty 
dumb about figuring out what data types to use.
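
A before/after sketch of the literal-quoting point (the "..." stands for the 
rest of the query):

```sql
-- Quoted: the engine must convert each string literal before comparing
SELECT ... WHERE Valid = '1' AND CatID IN ('2');

-- Unquoted: integer literals compare directly against integer columns
SELECT ... WHERE Valid = 1 AND CatID IN (2);
```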


 I have a database with a little over 60,000 records and when I do a search
 on that db it takes a little over a minute and a half to return results.
 This is very unacceptable, but Im just not sure what I can do to remedy
 this.  Here is what the query looks like when just using location and
 category in the search..

 SELECT DISTINCT Location.SortID AS Loc, LocName, JobTitle AS Job, Company
 AS Comp, Jobs.JobID, Employers.EmpID
 FROM Employers
 LEFT JOIN Jobs USING (EmpID)
 LEFT JOIN JobsLocation USING (JobID)
 LEFT JOIN Location USING (LocID)
 LEFT JOIN JobsCategory ON Jobs.JobID=JobsCategory.JobID
 LEFT JOIN Category USING (CatID)
 WHERE Valid = '1' AND JobTitle IS NOT NULL
 AND (LocName LIKE 'US-Alabama-Birmingham%')
 AND Category.CatID IN ('2')
 ORDER BY Loc ASC LIMIT 0,50

 Add on the following when using keywords in the search, these are the jobs
 to search in since these had the keywords within them:

 AND Jobs.JobID IN ('2345','6578')

 Like I said it's taking between 45-100 seconds to execute these queries and
 that's just not acceptable.  I was told to use another table that would
 save the query, save a timestamp, and save the results.  Then whenever that
 same query came up just show the stored results.  While this is a
 possibility, I see the likelihood of people doing the exact same search as
 very slim.  So hopefully there is a better way of doing my query.. Or if
 there is something I can tell my host to do with mysql or the hardware that
 it's on... Im desperate.. Anything that can speed things up will be
 appreciated!

 Cheers!

 Rick

 The intuitive mind is a sacred gift and the rational mind is a faithful
 servant.  We have created a society that honors the servant and has
 forgotten the gift. - Albert Einstein






Re: Windows CE MySQL

2002-07-17 Thread Tod Harter

On Wednesday 17 July 2002 09:09, Cal Evans wrote:
 I can see where accessing the server from a handheld would be cool but I
 question the concept of actually RUNNING the server on a handheld.  Write a
 cool lightweight client with the library and leave the server on beefier
 HW.

There are actually legitimate reasons for wanting a lightweight server, but 
really what you want is simply an application with embedded MySQL in it. In 
other words you don't need a client/server database; you need access to 
tables, and you want to reuse client code which already works with a MySQL 
database, so what you do is compile the embedded server into your program. 
Effectively it has the same API as the client library, except it does all the 
work directly itself instead of talking to a server.

I don't know how much in the way of resources it requires, though, nor what 
functionality the platform has to support in order to compile it. My guess is 
you could fairly easily do this on a Linux-based PDA like the Zaurus, and it 
may well work on CE, but I doubt you'd have much luck with things like Palms, 
since they have a pretty weird set of OS functions that's nothing like what 
you get on a normal desktop OS (i.e. you don't really have a regular C 
standard library).


 =C=

 *
 * Cal Evans
 * The Virtual CIO
 * http://www.calevans.com
 *


 -Original Message-
 From: Nick Miles [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, July 17, 2002 3:34 AM
 To: [EMAIL PROTECTED]
 Subject: Windows CE  MySQL


 Hi,

 Has anyone tried building a MySQL server for Windows CE ?

 I have found a really good client for CE called DALP
 (http://www.kalpadrum.com/dalp/) but no server.

 It would be really good to have a cut down MySQL server on PDAs.  I have
 had a go at compiling the server using Embedded Visual C++ 3 but my C
 skills are no where near good enough to get anywhere.


 Thanks

 --
 Nick Miles - [EMAIL PROTECTED]
 Database Developments Ltd
 Client/Server  Web Database Apps
 Tel +44 1453 861155Fax +44 1453 861166
 --






Re: Best Journaling file system?

2002-07-11 Thread Tod Harter

On Thursday 11 July 2002 04:20 am, Iago Sineiro wrote:

I vote for Reiserfs personally. I've been running it for a year; it's totally 
stable and reasonably fast. ext3 is OK as well, but it's slower and, at least 
in theory, does not provide an absolutely guaranteed consistent file system 
100% of the time the way Reiserfs does. On the other hand ext3 works with most 
ext2 tools. The xfs and jfs file systems are still a bit green. xfs is good 
with large files, but some features are still not working in Linux, and it's 
not recommended as a filesystem for general use (i.e. don't use it for system 
partitions; in fact I don't even think lilo will boot off it at all). jfs 
was totally unstable last I checked... 

In the long run xfs will probably be best for things like InnoDB table spaces 
because it's designed for very large contiguous data accesses, whereas 
Reiserfs was designed from the start to support very high speed access to 
filesystems containing millions of small files. 

 Hi all.

 Which is the best journaling file system for mysql and innodb?

 Ext3, XFS or Reiserfs?

 Innodb needs a journaling file system or is better use ext2?

 Iago.







Re: mysql query for next unique ID

2002-07-01 Thread Tod Harter

On Monday 01 July 2002 10:29, W. Enserink wrote:
 Hi all,

You would just do something like

SELECT max(idcolumn)+1 as nextvalue FROM tablename

where idcolumn is the name of the column and tablename is the name of your 
table, but remember that you may need to serialize this, since probably the 
NEXT thing you're going to do is insert into another table using that id 
number, and you don't want someone else to grab it in the meantime. The 
traditional way is to issue a lock on the table, but with InnoDB tables you 
should be able to do it with a transaction.
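
A hedged sketch of the table-lock version, with the poster's placeholder 
names:

```sql
-- Nothing else can insert between the SELECT and your INSERT:
LOCK TABLES tablename WRITE;
SELECT max(idcolumn)+1 AS nextvalue FROM tablename;
-- ... INSERT the new row using nextvalue here ...
UNLOCK TABLES;
```

With an AUTO_INCREMENT column the usual alternative is to let MySQL assign 
the id on insert and read it back with LAST_INSERT_ID(), avoiding the lock 
entirely.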



 Does anybody know if there is a function to retrieve the next unique ID
 number for a record to be made in the future?


 thx. Wilbert

 mysql query for next unique ID

 -
 Pas de Deux
 Van Mierisstraat 25
 2526 NM Den Haag
 tel 070 4450855
 fax 070 4450852
 http://www.pdd.nl
 [EMAIL PROTECTED]
 -





Re: MySQL dump/recovery probable bug

2002-06-21 Thread Tod Harter

On Friday 21 June 2002 09:54, Keith C. Ivey wrote:
 On 21 Jun 2002, at 15:43, Stefano Incontri wrote:
  ERROR 1064: You have an error in your SQL syntax near 'Load (IL,Item_ID)
  ) TYPE=MyISAM COMMENT='SCM Internal Loads table'' at line 12

 LOAD is a reserved word in MySQL, so you need backticks around it.
 Have you tried using the --quote-names option in mysqldump?

 http://www.mysql.com/doc/m/y/mysqldump.html

On top of what Keith says, it is a lot more efficient to create the index 
AFTER you load the data anyway. Given that the data is a restore of 
known-good values which shouldn't ever violate the unique key constraint 
there is no advantage in creating the index before loading. With large 
amounts of data the difference in performance could be considerable. I'm only 
working from memory here, but I think there are also some parameters related 
to transactions and InnoDB tables you want to look at as well when you load 
up large amounts of data.
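
A sketch of the load-then-index order, using the Load table from the error 
message (the file path is invented; note the backticks, since LOAD is a 
reserved word):

```sql
CREATE TABLE `Load` (
    IL      INT,
    Item_ID INT
) TYPE=MyISAM COMMENT='SCM Internal Loads table';

LOAD DATA INFILE '/tmp/load.txt' INTO TABLE `Load`;

-- Build the unique index once, after the data is in:
ALTER TABLE `Load` ADD UNIQUE (IL, Item_ID);
```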





Re: @@IDENTITY

2002-06-21 Thread Tod Harter

On Friday 21 June 2002 06:55, Arul wrote:

No, but last_insert_id() is only valid until you do another query on the same 
database handle. The only explanation I can think of is that somehow your 
code is interjecting some other query between the insert and the call to 
last_insert_id(). Maybe a commit? I'd make sure that the very next thing you 
do after the insert is to get that id. 
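
The safe pattern, sketched with the poster's table (LAST_INSERT_ID() is 
per-connection, so other clients' inserts cannot disturb it, and it works the 
same for InnoDB and MyISAM):

```sql
INSERT INTO user (name) VALUES ('AA');
-- Must be the very next statement on the SAME connection:
SELECT LAST_INSERT_ID();
```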

 Even the last_insert_id() returns 0
 is it coz the table type is INNODB


 - Original Message -
 From: Jan Peuker [EMAIL PROTECTED]
 To: Arul [EMAIL PROTECTED]
 Sent: Friday, June 21, 2002 4:16 PM
 Subject: Re: @@IDENTITY

  You are using M$-Syntax, in MySQL you should use:
  SELECT last_insert_id()
  http://www.mysql.com/doc/e/x/example-AUTO_INCREMENT.html
 
  regards,
 
  jan
 
 
  - Original Message -
  From: Arul
  To: MySQL
  Sent: Friday, June 21, 2002 12:17 PM
  Subject: @@IDENTITY
 
 
 Hi
 
  I m using mysql 3.23.51 on win 2k
 
  Well..I have a user table where i have the userid column as primary key

 and

  auto_increment column and name as varchar(25).
 
  I insert values in the table such as
 
  Insert into user (name) values ('AA');
 
  Then When i do select @@Identity in the next time , it tells me as 0
 
  Why is this happenning..
 
  I need to get the last inserted value of the primary key ..right
 
  -Arul
 
 
 
 
 




Re: locking on row level ...

2002-06-21 Thread Tod Harter

On Thursday 20 June 2002 17:50, Cal Evans wrote:

What you want to do is wrap the update in a transaction with the transaction 
isolation level set to SERIALIZABLE. This should ensure that whichever client 
is first to initiate the transaction will have effectively exclusive access 
to the record in question. Thus something like:

SET AUTOCOMMIT=0
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN
SELECT 
UPDATE 
COMMIT

If both clients do this, then I believe whichever client gets to the database 
first will be able to read the record and update it before the other client 
even gets to see the data. Note however that effectively a lock is being set 
on ALL the rows in the SELECT clause, so every other access to the table will 
block until the transaction completes. It can, needless to say, have a 
negative effect on database performance...
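
For finer-grained locking, InnoDB also supports locking just the rows you 
read with SELECT ... FOR UPDATE inside a transaction; a hedged sketch with 
invented table and column names:

```sql
SET AUTOCOMMIT=0;
BEGIN;
-- Other writers block on these rows only, not the whole table:
SELECT qty FROM stock WHERE item_id = 42 FOR UPDATE;
UPDATE stock SET qty = qty - 1 WHERE item_id = 42;
COMMIT;
```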


 Short answer:
 You don't. That's not the way SQL databases work.  When you hear of 'row
 level locking' it means something different than you are used to if you are
 coming from Access/Foxpro.

 If you are using InnoDB tables then you can:
 BEGIN TRANS
   statement
   statement
   statement
 COMMIT or ROLLBACK


 Long answer:
 If you are coming from a FoxPro background (as I am) then you are used to
 being able to issue a RLOCK() and the database will place a lock on that
 record not allowing anyone else to touch it into you are done with it. (If
 your background is not FoxPro then I'm sure the desktop DB you are using
 has something similar).

 You don't do that in SQL.

 However, in times past, when I've felt the overwhelming urge to regress, I
 have built systems that have a LOCK field. (i.e. userLock varChar(10)) 
 Then when someone wanted to 'lock' the record, I would write their userID
 in the lock field with an update statement.

 Then before granting someone else a lock on the record, I would check to
 make sure that someone else didn't already have it locked. The downside to
 this methodology is that you are increasing your hits on the database
 server.  If your application is not busy then it's not a major deal. If,
 however, you have a busy server, this will cause it to be busier.

 Once the user with the lock issues their UPDATE statement then I issue a
 second UPDATE to clear the lock (or if you want to get fancy, clear it in
 the first UPDATE with a userLock='')

 Anyhow, if you study how databases are built in SQL you won't find a lot of
 this type of programming.  I now rely much more on transactions to keep
 things straight. Yes, this means that two users could issue updates almost
 at the same time.  Last one to the database wins.

 HTH,
 =C=

 -Original Message-
 From: Silmara Cristina Basso [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, June 19, 2002 1:10 PM
 To: [EMAIL PROTECTED]
 Subject: locking on row level ...


 I'm a MySQL newbie and I'm using MySQL-max 4.0.1 (InnoDB), with one
 application developed in Delphi connected through MyODBC. The question is:
 how can I lock one row while it is being edited in the application?


 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail [EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php







Re: Using LDAP with mysql.

2002-05-23 Thread Tod Harter

openLDAP is capable of using various back end data stores, including an SQL 
database.

You have to set up MySQL (obviously), and set up MyODBC (which probably means 
you will need to install an odbc manager like iodbc or unixodbc), and create 
a data source. Then you can reconfigure slapd so it will utilize that 
backend. 

The docs are available on the openLDAP site, but be warned that it isn't a 
really straightforward process. The way LDAP structures its schemas is not 
really well adapted to an RDBMS, so there is some ugly black magic you have 
to do. Basically you have to create a little configuration which tells 
slapd's back_sql how to map from LDAP attributes to columns, and what columns 
of which tables the tree is linked together with. 

I experimented a bit with all of this for reasons similar to yours, but I 
never got as far as getting the whole thing to work entirely. It seems it's not 
too tough to get slapd talking to mysql, but the real fun is creating the 
proper table structures and configuration for your schema. My advice would be 
to get someone who's an expert on LDAP to give you a hand.

On Thursday 23 May 2002 02:01, ally wrote:
 Dear all,

 my company is currently using mysql database with php. Now there's this
 company which is using LDAP with Oracle and we need to be able to link them
 together.

 They suggest that we use LDAP with MySQL. I've tried looking for information
 about using LDAP and MySQL, but in vain.

 Can you please enlighten and advise me?

 thank you very much

 ally.






Re: how to design mysql clusters with 30,000 clients?

2002-05-23 Thread Tod Harter

On Wednesday 22 May 2002 18:44, Dave Watkins wrote:
 At 16:02 22/05/2002 +0800, Patrick Hsieh wrote:
 Hello list,
 
 I am expecting to have 30,000 http clients visting my website at the
 same time. To meet the HA requirement, we use dual firewall, dual
 Layer-4 switch and multiple web servers in the backend. My problem is,
 if we use the user-tracking system with apache, php and mysql, it will
 surely bring a huge amount of database traffic. How can I balance mysql
 load among multiple mysql servers yet assure the data consistency among
 them? My idea is:
 

Actually you don't need to do clustering at all, at least not for the systems 
doing the actual id tracking. Use Apache's mod_unique_id (which will generate 
unique ids even across a cluster of web servers with no need for them to 
communicate) and then each web server can have its own user tracking 
database, there will be no danger of id collisions. 
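
The idea can be sketched in a few lines of Python (this is not mod_unique_id's actual algorithm, just the principle: fold host identity, process id, timestamp, and a counter into each id so no cross-server communication is needed):

```python
import os
import socket
import threading
import time

# Each server mints ids from values only it knows (hostname, pid) plus
# a timestamp and an in-process counter, so ids can never collide
# across the cluster without any coordination between machines.
_counter = 0
_lock = threading.Lock()

def unique_id():
    global _counter
    with _lock:  # the counter is the only shared state; guard it
        _counter += 1
        n = _counter
    return "%s-%d-%d-%06d" % (socket.gethostname(), os.getpid(),
                              int(time.time()), n)

a, b = unique_id(), unique_id()
print(a != b)  # True - ids differ even within one process
```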

If you need to do actual session management where you would have any of the 
pool of servers needing to recover per-session data, then you're stuck back 
where you were before, though at least you have globally unique ids now. 

One way to deal with that issue is to provide some form of session affinity, 
so once a given user session hits one particular apache, it always gets 
directed there. Layer 4 switching may allow you to do that.

Another idea to throw into the mix is reverse proxying. Most likely you have 
a mix of dynamic and static content, and the static stuff couldn't care 
less where it gets served from, so set up your cluster so that each system 
has a stripped Apache running on port 80 which can serve static content and 
reverse proxy dynamic content handling off to a 2nd instance. Then you can 
write your proxy rewrite rules such that they take into account the user's 
session state, and you end up with effectively session affinity at the 
dynamic server, which is really the only place it would matter. Then you have 
no need for clustering at all, you can just replicate for reliability 
purposes and munge your data together from each server when you need to do 
reports.
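
As a hedged sketch of the affinity idea (the backend names below are hypothetical; in practice this logic would live in the front-end Apache's rewrite/proxy rules): hash the session id to pick a backend, so a given session always lands on the same dynamic server:

```python
import hashlib

# Hypothetical pool of dynamic-content servers behind the front ends.
BACKENDS = ["app1.internal:8080", "app2.internal:8080", "app3.internal:8080"]

def backend_for(session_id):
    # Hash the session id and map it onto the backend list; the same
    # session id always yields the same backend, which is exactly the
    # session affinity the front-end proxy rules need to provide.
    digest = hashlib.md5(session_id.encode("ascii")).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# Repeated requests for the same session map to the same backend.
print(backend_for("abc123") == backend_for("abc123"))  # True
```

Note this simple modulo scheme reshuffles sessions if the backend list changes; it's only meant to show the routing principle.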





Re: GUI for mySQL

2002-04-22 Thread Tod Harter

Watch out with using the Jet database engine this way. MS Access tries to 
keep linked external table views in sync, which means that with large tables 
and several PCs running Access linked to MySQL over the ODBC driver it will bring 
your network to its knees. The Jet engine will constantly be scanning tables 
on the database server, generating GIGANTIC amounts of traffic. I've seen 
100Mb Ethernet completely swamped by just 3 or 4 clients. There are probably 
work-arounds for that, but you should definitely test first, MS Access is a 
very badly behaved piece of KAKA.

On Monday 22 April 2002 07:29, Hathaway, Scott L wrote:
 I would not look at front ends to mySQL.  Instead, I would let them keep
 using access to do this and link them to the mySQL database or get an SQL
 query tool that allows for graphical queries.

 Scott

 -Original Message-
 From: Pierre du Parte [mailto:[EMAIL PROTECTED]]
 Sent: Saturday, April 20, 2002 3:06 AM
 Cc: MySQL
 Subject: GUI for mySQL


 I am gradually winning the war in promoting opensource to a number of my
 key customers. Notably a couple have made the move to OpenOffice.org,
 albeit on Windows. I am currently re-writing a key application to use
 mySQl (previously Access), so in all I'm pleased.

 However, the head bean counter in one office tackled me on this - In
 Access I can quickly create queries and export the results in all manner
 of ways. What do I do if we let you talk us into mySql?

 Where indeed? Are there any nice, user friendly GUI available for mySQL
 ad-hoc query and report design?

 Thanks





Re: joins vs excess fields

2002-04-22 Thread Tod Harter

I disagree. Joins can be expensive. Now PERHAPS Tom's suggestion is a good 
one, but it's hard to say. Remember, excess fields can always be left out of a 
SELECT, so they don't necessarily cause a performance problem. Some things 
are true though: fixed-length records are more efficient, so if you have 
varchar fields it might be more efficient to put those in a separate table. 

Remember though, when you denormalize like this you risk data integrity and 
you increase the number of queries your database will handle, which can 
itself cause a bottleneck. Disk drives for instance can only do so many reads 
per second, so many small requests for data can cause an I/O bottleneck long 
before the bandwidth of IDE or SCSI is reached. 

The only real way to tell if denormalization will help is to try it: build a 
test database both ways and run some realistic load tests. 
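
As a toy illustration of "build it both ways" (using Python's sqlite3; the table and column names are invented), the two layouts should at least answer the same questions before you start timing them under realistic load:

```python
import sqlite3

# The same data stored denormalized (one wide table) and normalized
# (a compact searchable table joined to a table holding the big column).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE wide (id INTEGER PRIMARY KEY, title TEXT, body TEXT);
CREATE TABLE slim (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE bodies (id INTEGER PRIMARY KEY, body TEXT);
INSERT INTO wide VALUES (1, 'hello', 'long article text');
INSERT INTO slim VALUES (1, 'hello');
INSERT INTO bodies VALUES (1, 'long article text');
""")

# Denormalized: one query, but every row carries the big column.
denorm = db.execute("SELECT title, body FROM wide WHERE id = 1").fetchone()

# Normalized: search the compact table, then join for the big column.
norm = db.execute(
    "SELECT s.title, b.body FROM slim s JOIN bodies b ON b.id = s.id "
    "WHERE s.id = 1").fetchone()

print(denorm == norm)  # True - both layouts answer the same question
```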

On Monday 22 April 2002 02:22, Toomas Vendelin wrote:
 Hello olinux,

 If I were in your shoes, I would put not only paths to images into a
 separate table, but also a field called content and probably title,
 summary, author and linked all this stuff together via ID-like fields which
 are primary keys in those new tables. Then articles table can be searched
 through much faster, and title, summary, author data can be quickly
 retrieved using primmary keys ONLY when this data is needed. So you first
 search a very compact table where most of the fields are
 indexes or parts of indexes, and THEN, using primary keys, retrieve big
 pieces of data which you do not have to search through  (and nothing, I
 guess,
  works faster in MySQL than retrieval of a single record by its primary
  key). Run your queries with EXPLAIN, it will probably explain it
  less verbosely than me :).

 Regards,

 Tom

 Monday, April 22, 2002, 6:29:52 AM, you wrote:

 o Hello,

 o I have a table with a large number of news articles.
 o Articles of type 'feature' (identified in the
 o article_type field) will have the path to an image
 o associated with the article stored. The question is
 o whether to 1) add a field to the database that most
 o records will not use OR 2) add a table to hold image
 o path names and require a JOIN in order to retrieve.

 o I am thinking that the join query will be best. Thanks
 o for any help.

 o olinux

 o Table setup is:

 o CREATE TABLE articles (
 o   id int(12) NOT NULL auto_increment,
 o   type varchar(15) NOT NULL default '',
 o   region smallint(1) NOT NULL default '0',
 o   date varchar(35) NOT NULL default '0',
 o   author varchar(30) NOT NULL default '',
 o   title varchar(90) NOT NULL default '',
 o   summary varchar(255) NOT NULL default '',
 o   content text NOT NULL,
 o   keywords varchar(150) NOT NULL default '',
 o   filename varchar(30) NOT NULL default '',
 o   PRIMARY KEY  (id)
 o ) TYPE=MyISAM;






Re: MySQL Power ?

2002-04-08 Thread Tod Harter

On Friday 05 April 2002 11:32, Steve Rapaport wrote:
 I'm currently running MySQL for a big, fast app without
 problems.  BUT:

 I'm in the middle of specifying a new application with a high
 load, and I'm consideing looking for alternatives to MySQL
 because without InnoDB, it gets really slow on tables
 with frequent updates and reads (no row locking).

 We have, for example, a session table that records
 all the incoming requests for holding state.  Since
 it's constantly being updated and read, it is frequently
 locked, and there are often instances where 50 reads
 will stack up while a lock is held.  This slows down
 the whole database.

Only if you insist on locking your tables. If you're talking about a webapp, 
Apache only really will be servicing 1 request at a time, generally. I have 
yet to be forced to lock my session table. Maybe you rely too much on this 
one table? Also you might try using a HEAP table for session state. It's much, 
much faster if you can afford the memory (but again, keep your session state 
small and simple!). 


 With InnoDB, I'm sure this problem goes away, but as soon as we
 go to InnoDB, we have to pay for backups and support,
 which means we start looking around at 'pay' solutions.

I guess I don't understand why you see InnoDB as being more expensive than 
using standard myisam. Its just a table handler, there are not that many 
fundamental differences in how things work, and 99% of the existing MySQL 
infrastructure works fine with any table type. Backups are basically the same 
as ever: just back up the tablespaces! Admittedly InnoDB uses the disk a bit 
differently, but it has caused us no real problems.

 Is there something I'm missing?

 Steve

  However, my impression is that while the answer, for the very highest
  volumes, is that Oracle is better, the point at which Oracle
  betters MySQL
  is *much* higher than doubters might think. So, if anybody
  give the reply
  that Oracle is best at the high end, please could they also
  try to quantify
  the point at which MySQL begins to run out of steam - and
  what it is it
  can't do and Oracle can at that point. (For example, MySQL
  can handle high
  read loads by use of replication, but would bottleneck on
  high write loads
  - I think).





Re: MySQL vs. MS SQL

2002-04-08 Thread Tod Harter

On Friday 05 April 2002 02:25, David Williamson wrote:
 Hi there,

 I will shortly be installing a MySQL server at my place of work, the box I
 will be installing it on currently has MS SQL server running as well. (I
 believe its a wintel box).
 Anyway, I am wondering if there are any known problems having MS SQL and
 MySQL running on the same machine.
 My personal opinion is no - but I need to find some info to reassure the
 network guys:)

 Any information on this subject will be greatly appreciated.

 Regards,

 David

Should be fine. I never ran both at high load together, but they will at 
least peacefully coexist, and on a dual 1 Ghz PIII with 1 gig ram and some 
fast SCSI drives both seemed pretty snappy (well SQL Server is NEVER snappy, 
but you know what I mean... ;o)).







Re: Best hardware for a very large MySQL server? looking at x86

2002-04-04 Thread Tod Harter

Well, it's tough to compare system configurations in a very general way. I've 
run any number of different systems. I can tell you that Sun has some very 
nice boxes. A 4 way SMP server with a couple gigs of ram and 1-2 internal 36 
gig drives can be had in the 25k price range. They're perfectly nice boxes 
and will run Linux fine as far as I know. IBM makes a whole line of similar 
PPC servers. 

I think you'll find that the RISC systems have fewer processors and run at 
lower clockspeeds for the same total performance. 100k is a HUGE amount of 
money to drop on a system. You could get 2 full racks of high performance 1U 
systems, including everything, for less, but whatever!

On Wednesday 03 April 2002 22:49, JW wrote:
 Trying to send this again... SPAM filter messing with me...

 this is a query about what hardware might make for a really good sql
 server

 There :-p

 Hello,
 
 I need some advice.
 
 We are about to purchase a huge system for use as a DB/web application
  server (mostly DB).
 
 I'd like to point out that upper management (not me) has decided to do
  this... please don't tell me that I don't need something that big, or
  that I should use an x86 cluster - that's already out of the question,
  and out of my hands.
 
 The server they had already decided to get is a Dell PowerEdge 8450 with
  an external PowerVault storage array. See details here: PowerEdge 8450:
 http://configure.us.dell.com/dellstore/config.asp?customer_id=04keycode=6
 W300order_code=PE8450cfgpg=1#updatepriceNS PowerVault 22xS
 http://configure.us.dell.com/dellstore/config.asp?order_code=PV22XScustom
 er_id=04keycode=6W300family_id=9171
 
 The server is an eight-way PIII Xeon , 32GB of RAM, price approx. $99,000
  USD - let's say $100,000 In some configurations we've gotten higher.
 
 
 PowerVault approx. $7,500 USD for five-disk RAID 5, 36GB 15k SCSI disks.
 
 Management asked me and another tech. to figure out exactly what we need.
  We called Dell, and the Dell tech said this would be going head on with
  RISC based systems.
 
 Which got me to thinking I am personally not fond of x86, and don't
  want to pass up an opportunity to get a RISC system, like an Alpha, SPARC
  or PPC.
 
 Management has given me permission to make a comparison, I'm hoping
  someone here has experience with RISC systems.
 
 I'd _really_ like to have a RISC system but I've got no idea how to go
  about comparing them.
 
 For me it's like trying to compare apples and oranges when you don't even
  know what an orange is =)
 
 Does anyone here know how much an Alpha, SPARC, or PPC system that has
  comparable power would be?
 
 Or even _what_ systems are comparable? As I said, I've never dealt with
  anything besides x86 and Apple PPC before, so I'm venturing into totally
  new territory.
 
 If someone can give me a clue, I'd really appreciate it. Apparently we are
  mostly after processing power (CPU+RAM), my boss said we wouldn't even
  need a GigaBit NIC (though of course he may be wrong).
 
 I basically need to find a RISC system that produces the same amount of
  power for less $$$, or at LEAST more power for the same amount of $$$.
 
 It must run Linux of course, much preferably SuSE. I know SuSE Enterprise
  edition runs on SPARC, PPC ({i|p|z}Series and Itanium/IA64 ( I really
  don't want the latter, though if someone gives me a convincing argument,
  I'll consider it), and Professional 7.1 runs on Alpha, 7.3 runs on
  PowerPC
 
 Any help would be greatly appreciated.
 
 Thanks!
 
 
 
 
 Jonathan Wilson
 System Administrator
 Clickpatrol.com
 Cedar Creek Software http://www.cedarcreeksoftware.com

 
 Jonathan Wilson
 System Administrator
 Clickpatrol.com
 Cedar Creek Software http://www.cedarcreeksoftware.com






Re: PERL/SQL, again? - Re: Procedures

2002-04-03 Thread Tod Harter

One thing to consider with Perl is that perl 5 and perl 6 are totally 
different beasts.

Perl 6 is built on top of Parrot, a general-purpose register-based virtual 
machine which is optimized to execute perl and similar languages. There is 
already a small test language running on top of Parrot (though perl 6 itself 
is still in the design stage). Embedding the parrot engine (which will also 
run perl 5 as well, and most likely java, javascript, etc) would be the most 
logical course of action. Then one could simply drop in whatever front-end 
language support is provided for parrot, and all that would remain would be 
the language modules you would need in order to interface your script to the 
SQL engine.

There are other possible languages. GNU FORTH would be an excellent choice. 
It is small, fast, and powerful. It is also pretty easy to mix FORTH syntax 
with other languages (FORTH basically HAS almost no syntax itself). 

On Wednesday 03 April 2002 10:15, Russell E Glaue wrote:
 Actually, I like this proposal of being able to plug in any language into
 MySQL.  I also want to make a note that Greg Cope has a very good point.
 I believe it to be good to be able to plug in any language, but the mySQL
 community may want to make a standard recomendation of a certain language.
 And going from what Greg mentioned in an earlier e-mail in this thread,
 the embedded language of choice really needs to be a small and compact
 language. Perhaps it is feature rich, but still small.
 A big language like java or perl will most likely slow down the database.

 What language could MySQL try to standardize on that would not cause the
 server to slow down or become bulky in memory?  I don't know if Python or
 PHP are necessarily better choices than PERL or Java under these
 considerations. I cringe at offering up TCL as an option (don't hit me!).
 I might go along with using LISP.

 Does anyone on this list have a suggestion as what might be the best
 language to embed into MySQL?  Be sure to consider memory efficiency,
 threading, speed of the language, and compactness (compared to code and
 library bulkiness).  I'd like to hear what other people think?

 As far as creating a language just to embed in MySQL... this may be
 tempting, but I think it to be far better to not do this. The problems
 with inventing a language just for MySQL are staleness in advancing the
 language, and slower innovation and optimization. There are already lots of
 people in existing communities to improve current program languages.

 -RG

 On Tue, 2 Apr 2002, Jeremy Zawodny wrote:
  On Tue, Apr 02, 2002 at 06:20:41PM -0600, Russell E Glaue wrote:
   On the mysql-internals mail list we had a thread going about this. I
   was suggesting something like embedding PERL into MySQL to produce
   something like PERL/SQL (similar to PL/SQL in oracle).
   Unfortunately, although promising and liked among people of the
   list, there are no plan right now to develop anything like
    this. At least no one has taken the initiative to look into this.
 
  Actually, there are plans.  I've been in at least 2 discussions (1 was
  in person) about just how it will be done.
 
  The plan, last I heard, is to provide a general API for plugging in a
   language on the back-end.  So if you want to use Python, you can.  If you
  want to use PHP, you can.  Java?  Probably.  And so on.
 
  The only problematic languages are those with threading problems--like
  Perl.
 
  Jeremy
  --
  Jeremy D. Zawodny, [EMAIL PROTECTED]
  Technical Yahoo - Yahoo Finance
  Desk: (408) 349-7878   Fax: (408) 349-5454   Cell: (408) 685-5936
 
  MySQL 3.23.47-max: up 54 days, processed 1,508,672,191 queries (319/sec.
  avg)





Re: PERL/SQL, again? - Re: Procedures

2002-04-03 Thread Tod Harter

That assumes you believe the myth that Java is really the language of choice 
in the back rooms of large enterprise IT shops. It isn't. In the financial 
industry perl is ubiquitous. Upper management will tell you Java is god, but 
if you actually look at the deployed systems that are out there doing real 
work and being developed, esp the SUCCESSFUL ones, they are mostly written in 
perl.

A few people have actually written articles on this topic, but it's not in the 
interests of ISVs' and IT shops' marketing arms to promote a free 
language that has free tools. 

My team regularly runs rings around groups struggling to do the same stuff 
with Java. All our systems are already implemented as stacks of SOAP services 
for instance. The Java guys aren't even close to that yet. We can do in a 
week what it takes them 3 months to do, and well written perl is every bit as 
good as Java (plus perl seems to be 3-4 times faster for most real world 
applications).

On Wednesday 03 April 2002 02:50, pwxgao wrote:
 I have been using languages like Smalltalk, Perl and Java for about 15
 years. I am a huge fan of putting one in the DB for embedded server
 procedures. I also believe the best strategic option for MySQL to continue
 its adoption in the Enterprise IT world is to choose Java as this language.
 Perl is great. But so was Smalltalk. We develop everything in our shop now
 in Java. Continuing to leverage Java for server DB procedures makes sense.
 Using yet another language, even if it is academically better suited, goes
 against the mainstream.

 regards, Jon
 This message was posted using eunum --- http://www.cn.eunum.com/
 To interact with a real-time, threaded interface to this e-mail list, go
 visit the link MySQL below: http://www.cn.eunum.com/discussions/2-261.html
 .





Re: storing files in mySQL

2002-03-05 Thread Tod Harter

On Tuesday 05 March 2002 10:05, Thomas Spahni wrote:

There are some good reasons for wanting to store data directly in the 
database, sometimes.

For instance I built an application recently that used blobs. The reason is 
simplicity and security. In a web application it is nice to just dump the 
files into a directory and point to them, but consider that if you are 
building NOTHING but a database server, you don't want to have to support and 
manage security for another protocol, like ftp or http. In addition in my 
case it would have made the client code much more complicated since the 
client could already deal with an ODBC data source. I would have had to code 
in an entire new piece of functionality to deal with the ftp or http 
protocol, then figure out how to engineer it so the client code would 
automatically go and get the data, etc. Plus now I can't just manage data 
security with SQL GRANT instructions.

On top of that, huge directories are very slow in most file systems. We had 
over 24 million records! That would mean that some elaborate scheme would 
have to be developed, like MD5 hash the filename and walk a series of 
directory levels down based on characters in the hash, etc. Since data is 
getting added and removed constantly there would then have to be a way to 
manage that tree to keep it from getting unbalanced etc. In the final 
analysis this would have been pretty close to the equivalent of building our 
own b-tree index! 
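
For what it's worth, the hashing scheme itself is only a few lines; a rough Python sketch (the root path, depth, and fan-out are arbitrary choices here, not anything our system actually used):

```python
import hashlib
import os

def shard_path(root, filename, levels=2):
    # MD5 the filename and use its leading hex characters as nested
    # directory levels, so files spread evenly and no single directory
    # ever grows huge.
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    parts = [digest[i] for i in range(levels)]
    return os.path.join(root, *parts, filename)

p = shard_path("/data/blobs", "report.pdf")
print(p)  # e.g. /data/blobs/<hex>/<hex>/report.pdf
```

The point of the paragraph above stands: this is easy to write, but keeping such a tree balanced as records come and go is where the real work hides.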

Given that database already have highly optimized indexes, it seemed more 
sensible to use the existing functionality of blobs and indexes. I think what 
all this points out is that MySQL developers might want to consider how to 
better support this type of application in the future. Currently HUGE tables 
of large blobs ARE very awkward. Our blob tables are terabytes in size, and 
the corruption of one record can force the tape restore from lower regions... 
Not to mention the horrors of backing the thing up, or any operation that 
mysql tries to perform by copying the table (of which there are several I 
believe). Maybe a special table handler module could be written that would 
cater to the needs of this type of application, plus a few special tools for 
doing repairs and such. Just an idea :o). 

 On Mon, 4 Mar 2002, Dang Nguyen wrote:
  I'd like to know how to load files, such as MS-Word documents or PDF
  documents, in a mySQL database.  I've setup a blob type in a table, but
  where do I go from there?
 
  The purpose of this is to store files uploaded from a web page and
  processed with Java servlets.  Then, the files should be retrievable
  (displayed or downloaded) back to a client browser.  My environment:
  Apache Web server 1.3.x on Solaris 2.8 with Java servlets environment.

 MS-Word and PDF files are essentially binary files (very much like
 images). I suggest leaving them as they are in a separate directory.

 These MS-Word and PDF files may be converted to plain ASCII using filters
 (mswordview, pdftotext). The resulting ASCII versions are then loaded into
 a table with one column of type TEXT along with a VARCHAR pointer to the
 filename. Create a fulltext index on the TEXT field.

 A fulltext search for some keywords will return a sorted list of
 filenames. These files can be returned to the client.

 This scenario works well for me on a collection of some 15'000 HTML
 documents (I used w3m as a filter to convert to ASCII). All conversions
 are done with a simple shell script calling the filters.

 Thomas Spahni






Re: Finding overlapping intervals efficiently

2002-02-22 Thread Tod Harter

On Friday 22 February 2002 00:03, Colin Dewey wrote:

This is an example of a class of problem that crops up in a lot of 
applications, like GIS systems all the time.

Unfortunately B-tree indexes, of the sort RDBMS systems generally use, are just 
not well adapted to this type of query. I know Informix had a Data Blade 
for Universal Server that provided some indexing strategies for this type of 
thing, but that is a pretty expensive solution... 

There are several ways to build indexes for this sort of stuff, but in MySQL 
you would essentially have to design a new table handler. Given MySQL's 
flexibility in that respect it is conceivable that someone will provide 
products like that in the future.

For now I think you have the best solution you can get. The next best in 
theory I guess would be a table that was a cross join on t1 and t2 holding 
every possible t1.start - t2.end and t1.end - t2.start value. I suspect it 
would be HUGE! You might however construct such a table and then reduce it to 
only a few records of interest, but if your record set is being updated often 
that won't do you a lot of good either. 

Another possibility would be to try to use other criteria to reduce the work 
the slow query needs to do. Maybe there are entire sets of t1's such that 
they cannot possibly overlap any t2 (for instance any t1 whose start is greater 
than max(t2.end)). Those need not be considered at all. Depending on your 
data, that may eliminate a lot of work. 
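That pruning idea can be sketched in a few lines of Python (the interval data is invented for illustration; in SQL the same filter would just be an extra WHERE clause on t1.start):

```python
# Sketch of the pruning idea: any t1 interval starting past max(t2.end)
# can never overlap anything in t2, so drop it before the pairwise scan.
# Interval data here is invented for illustration.
def overlapping_pairs(t1, t2):
    """Return (i, j) pairs where t1[i] overlaps t2[j]; intervals are
    (start, end) tuples, closed on both ends."""
    if not t2:
        return []
    max_end = max(end for _, end in t2)
    candidates = [(i, iv) for i, iv in enumerate(t1) if iv[0] <= max_end]
    return [(i, j)
            for i, (s1, e1) in candidates
            for j, (s2, e2) in enumerate(t2)
            if s1 <= e2 and e1 >= s2]       # the standard overlap test

pairs = overlapping_pairs([(1, 4), (10, 12), (50, 60)], [(3, 6), (11, 20)])
```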

 Given two tables (t1 and t2) with fields start and end, what is the
 most efficient method of finding the all the intervals in the first table
 that overlap any interval in the second table?

 Right now, I use a query like:

 SELECT t1.name, t2.name FROM t1, t2
 WHERE t1.start <= t2.end AND t1.end >= t2.start;

 I use a key (start, end) in each table.

 Is there anything else that I can do to optimize this type of query (i.e.
 add keys, change the select statement)?  The tables typically have
 thousands of records, so optimizing this query is very important.

 Thanks in advance for any help you can give me,

 Colin

 _
 Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp.






Re: Fake interval data type

2002-02-21 Thread Tod Harter

On Thursday 21 February 2002 03:06, Heikki Tuuri wrote:

Oh, Duh. The bigger Oops was mine. Doesn't help much to have a transaction if 
you don't ask for a lock! ;o). Thx.

 Oops,

 the syntax is

 SELECT ... FROM ... WHERE .. LOCK IN SHARE MODE;

 Heikki

 -Original Message-
 From: Heikki Tuuri [EMAIL PROTECTED]
 To: [EMAIL PROTECTED] [EMAIL PROTECTED]
 Date: Thursday, February 21, 2002 9:27 AM
 Subject: Re: Fake interval data type

 Hi!
 
 Its kind of a subtle point, but innodb's locking not only locks rows, it
 locks the ABSENCE of rows. If I understand correctly this means that
 something like:
 
 BEGIN WORK
  SELECT COUNT(*) FROM table WHERE starttime < ? AND endtime > ?
 (check the count)
 INSERT INTO table  (if count is zero)
 COMMIT
 
 SHOULD do the trick. The transaction should block anything else that
  tries
 
 to
 
 do an insert into the same range before you do your commit.
 
 I must add you have to use a LOCKING read in the SELECT. Either
 
 SELECT ... FROM ... WHERE .. FOR UPDATE;
 
 which sets exclusive locks, or
 
 SELECT ... FROM ... WHERE .. FOR LOCK IN SHARE MODE;
 
 which sets shared locks on the rows (to be precise, index records) it
 encounters.
 
 Without these additional clauses the default SELECT mode of InnoDB is the
 consistent non-locking read of Oracle.
 
 Locking reads in InnoDB also lock the absence of rows, as Tod states.
 
 Best regards,
 
 Heikki Tuuri
 Innobase Oy
 ---
 InnoDB - transactions, row level locking, and foreign key support for
  MySQL See http://www.innodb.com, download MySQL-Max from
  http://www.mysql.com





Re: Query Help

2002-02-20 Thread Tod Harter

On Wednesday 20 February 2002 09:20, Web boy wrote:

You want to use UNION. If I remember correctly it's a fairly new feature for 
MySQL, but it's a standard SQL thing.  From the manual:

SELECT ...
UNION [ALL]
SELECT ...
  [UNION
   SELECT ...]

UNION is implemented in MySQL 4.0.0.

UNION is used to combine the result from many SELECT statements into one 
result set.

The SELECT commands are normal select commands, but with the following 
restrictions:

* Only the last SELECT command can have INTO OUTFILE.
* Only the last SELECT command can have ORDER BY.

If you don't use the keyword ALL for the UNION, all returned rows will be 
unique, like if you had done a DISTINCT for the total result set. If you 
specify ALL, then you will get all matching rows from all the used SELECT 
statements.
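A quick way to see the UNION vs. UNION ALL difference is a throwaway sqlite3 session in Python (table and band names are invented; MySQL 4.0 behaves the same way for this case):

```python
import sqlite3

# Throwaway sqlite3 session showing the UNION vs UNION ALL distinction
# described above; table and band names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE gigs_2001 (band TEXT);
    CREATE TABLE gigs_2002 (band TEXT);
    INSERT INTO gigs_2001 VALUES ('The Tubes'), ('Rush');
    INSERT INTO gigs_2002 VALUES ('Rush'), ('Yes');
""")
distinct_rows = conn.execute(
    "SELECT band FROM gigs_2001 UNION SELECT band FROM gigs_2002").fetchall()
all_rows = conn.execute(
    "SELECT band FROM gigs_2001 UNION ALL SELECT band FROM gigs_2002").fetchall()
# UNION drops the duplicate 'Rush'; UNION ALL keeps both copies.
```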



 Now how in the world to I take run this query against
 both sets of tables and then combine the data and
 bring back a result?

 Thanks---

 =
 Want to know when a band is playing or want to see what bands are playing
 at your favorite bar go to www.buffalogigs.com






Re: Question on maximum record input frequency and table lengths.

2002-02-20 Thread Tod Harter

On Wednesday 20 February 2002 02:06, Anvar Hussain K.M. wrote:
 Hello,

 I don't feel the the table sizet will be a problem. But how would the
 database function with such a huge number of queries per second.

 If I read right, you will be taking a maximum (presently) of 42 channels
 each taking data at the rate of 360/sec. That would be 42x360 =
 15120 queries to the database per second. Can MySQL withstand this?

My concern wouldn't be so much whether MySQL can handle 15120 queries per second 
as whether your disk drives can... Even with some pretty high-end hardware 
that's a tall order. Consider: a top-of-the-line SCSI drive spins at 10,000 RPM, 
which is only about 167 revolutions per second, and since the platter has to 
come around for each independent I/O operation, no matter what size it is, that 
bounds the rate at which scattered reads and writes can complete. Given that many 
operations span multiple cylinders that is just the optimum rate of the hardware, 
and it doesn't account for seek delays, which add several milliseconds more per 
operation. Of course the controller, the OS, and MySQL can all help by coalescing 
many small operations into fewer larger ones, but fundamentally the limit is 
still there. Disk striping, and to a somewhat lesser extent other forms of disk 
load balancing, will also alleviate the bottleneck, but you are still talking 
about a very high level of write frequency. I'd be loath to claim that any 
particular system would handle that load without testing that hypothesis. It's 
certainly doable, but might take a bit more hardware than one would think at 
first blush.
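The arithmetic behind these rates is easy to sanity-check (channel count and sample rate are from the thread; the 100-row batching figure follows the suggestion quoted below):

```python
# Sanity-check of the rates discussed above: 42 channels at 360 samples/sec,
# and the effect of batching 100 rows per multi-row INSERT.
channels = 42
samples_per_sec = 360

raw_inserts_per_sec = channels * samples_per_sec      # one-row INSERTs
batched_per_sec = raw_inserts_per_sec / 100           # 100-row multi-INSERTs

# A 10k RPM spindle makes only ~167 revolutions per second, which is why
# the raw rate is disk-bound long before it is CPU-bound.
revs_per_sec = 10_000 / 60
```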

 One solution is to insert multiple rows in a single query. Possibly
 inserting only after a monitor has taken 100 samples (an optimal value can
 be found by trial). Now the number of queries would be reduced to 3.6 x 42
 = 151. Something manageable for the database engine. It would do good to
 keep the number of indices to the minimum required, and keep them simple.

 Anvar.

 At 07:29 PM 19/02/2002 -0800, you wrote:
 On the largest plants we would expect to monitor, we would have about 42
 separate analog channels (tables) taking data at the 360 or more times per
 second. Each of these tables would have two fields - a record number
 (index) autoincrementing, and a numeric (probably a smallint in most cases)
 which are change, and a couple of static identifier fields, so each record
 would be on the order of 100 bytes or less.

 Michael Mitchell,
 Principal,
 DGRI Systems






Re: Fake interval data type

2002-02-20 Thread Tod Harter

On Wednesday 20 February 2002 05:51, Sasa Babic wrote:
 On Wed, Feb 20, 2002 at 11:38:31AM +0100, [EMAIL PROTECTED] wrote:
  sql,query

 The above is for antispam filter.



 I have a need for an interval data type. It would contain start and end
 of a certain time event. Since it is not possible to have two different
 events at the same place and at the same time, I want to ensure that
 there cannot exist two entries for which time interval would overlap
 with another entry's interval.

 I've done a simple routine in PHP which checks if there is an overlapping
 entry in the database, and it inserts a new entry only if it's 'safe'.
 Obviously there is a problem if there are two entries, let's call them
 A and B, and A's entry is made after B has done the checking but before B
 has made a new record.

 Any suggestions? Even suggestions for another free open source db would
 do. Sorry for my broken English. Thanks.

Transactions would be the answer. Use an InnoDB table and wrap your 
check/insert code in a transaction. The select which does the check will lock 
the table in such a way that any other attempt to insert within the range you 
tested for should end up blocked. 

It's kind of a subtle point, but InnoDB's locking not only locks rows, it 
locks the ABSENCE of rows. If I understand correctly this means that 
something like:

BEGIN WORK
SELECT COUNT(*) FROM table WHERE starttime < ? AND endtime > ?
(check the count)
INSERT INTO table  (if count is zero)
COMMIT

SHOULD do the trick. The transaction should block anything else that tries to 
do an insert into the same range before you do your commit.
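Here is a hedged Python/sqlite3 analog of that check-then-insert pattern. sqlite's BEGIN IMMEDIATE grabs the write lock up front and so plays the role of the locking read; with InnoDB the SELECT would carry FOR UPDATE, as the corrections in this thread note. Table and column names are invented.

```python
import sqlite3

# Hedged sqlite3 analog of the check-then-insert pattern above.  BEGIN
# IMMEDIATE takes the write lock before the check, so no competing
# booking can slip in between the SELECT and the INSERT.
def book_interval(conn, start, end):
    conn.execute("BEGIN IMMEDIATE")
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM booking WHERE starttime < ? AND endtime > ?",
        (end, start)).fetchone()
    if count == 0:
        conn.execute("INSERT INTO booking VALUES (?, ?)", (start, end))
        conn.execute("COMMIT")
        return True
    conn.execute("ROLLBACK")
    return False

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual txn control
conn.execute("CREATE TABLE booking (starttime INTEGER, endtime INTEGER)")
first = book_interval(conn, 1, 5)    # free: accepted
second = book_interval(conn, 3, 8)   # overlaps [1, 5): rejected
third = book_interval(conn, 5, 9)    # half-open, only abuts [1, 5): accepted
```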






Re: Please have a look on this group by

2002-02-19 Thread Tod Harter

On Tuesday 19 February 2002 10:07, Oliver Heinisch wrote:

What would you expect the database to return?

You asked it to GROUP all the records with the same preis, which means it has 
to choose a single value for each column of each resulting record, correct? So 
which of the (for example eleven, 'elf' for the Deutsche ;o)) different values 
of database.lfd did you want to have returned in your one resulting record? 
Obviously the database cannot stuff 11 values into one field.

You need something like SELECT MIN(preis), MAX(lfd) ... or some other 
summary function, OR you need to GROUP BY preis, lfd. You simply cannot have 
it both ways. In fact (Monty, are you listening!!!) MySQL ought to throw an 
error on this sort of query, because it is relationally incorrect and the 
database engine cannot fulfill it correctly.

For some odd reason, human brains have a really hard time 
understanding this concept. If you want to really understand it, pretend to 
be a database and process a GROUP BY of the form he's having problems with; 
then you'll understand. ;o)
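Pretending to be the database takes only a few lines of Python (the rows are invented for illustration):

```python
from collections import defaultdict

# Literally "pretend to be a database": group invented (preis, lfd) rows
# by preis and look at what is left over for lfd.
rows = [(9.99, 1), (9.99, 2), (9.99, 3), (14.50, 4)]

groups = defaultdict(list)
for preis, lfd in rows:
    groups[preis].append(lfd)

# Three candidate lfd values survive for preis 9.99; a bare
# "SELECT preis, lfd ... GROUP BY preis" gives the engine no rule for
# choosing among them, which is the ambiguity described above.
candidates = groups[9.99]
```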

 Hi Folks,
 now sitting here for 5 hours trying to get these stupid sql-request
 running, I decide to ask
 the list.

 structual information of the database:
 database.preis = decimal(12,2),
 database.lfd=int(11),
 database.typ=varchar(30),
 database.marke=varchar(50)

 The following requests works
 select min(preis) from database
 also
 select min(preis) from database where marke='marke' and typ not like 'R%'
 and typ not like 'X%'

 but I have to get another value from the db so i choose

 select min(preis),lfd from database where marke='marke'
 and typ not like 'R%'
 and typ not like 'X%'
 and typ not like 'C%'
 group by preis

 this doesn't bring the expected pair preis / lfd but all available
 preis/lfd pairs, grouped ascending.

 Any idea? Even though I'm sure the select worked tonight.
 TIA Oliver






Re: double outer joins in mysql?

2002-02-19 Thread Tod Harter

On Tuesday 19 February 2002 00:48, Todd Goldenbaum wrote:
 hi,

 I'm wondering if anyone knows if it's possible to do a double-outer join in
 mysql.  by that I mean outer-joining on two tables instead of one (both
 from the same original table).  in other words, whereas a normal outer join
 might go something like this:

 select b.date, u.name from billing b outer join users u on
 b.user_id=u.user_id

 ...I'm trying to do something more like this:

 select b.date, u.name, l.city from billing b outer join users u on
 b.user_id = u.user_id, locations l on b.location_id = l.location_id

 (that syntax is bogus) but does anyone know if this is even possible?

 thanks,
 todd

As I recall you should be able to do

SELECT * FROM a LEFT JOIN b ON a.x=b.x LEFT JOIN c ON c.z=b.z WHERE ...

and if I understand you correctly that gets you what you want (namely ALL records 
in a, even those with no corresponding b's or c's). NOTE that the second ON 
might want to compare columns of a and c rather than b and c, 
depending on exactly what you need (i.e. in my example a missing b value would 
mean no c is returned for that a, even if one existed). You might even need 
something like ON c.z=b.z OR a.y=c.y, depending on your needs.
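A small sqlite3 session in Python shows a chained LEFT JOIN doing exactly this; the schema follows the billing/users/locations example from the question, and the data is invented:

```python
import sqlite3

# sqlite3 demo of the chained LEFT JOIN above; every billing row survives
# even where the joined users/locations rows are missing.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE billing   (user_id INTEGER, location_id INTEGER, date TEXT);
    CREATE TABLE users     (user_id INTEGER, name TEXT);
    CREATE TABLE locations (location_id INTEGER, city TEXT);
    INSERT INTO billing   VALUES (1, 10, '2002-02-19'), (2, 99, '2002-02-20');
    INSERT INTO users     VALUES (1, 'alice');
    INSERT INTO locations VALUES (10, 'Buffalo');
""")
rows = conn.execute("""
    SELECT b.date, u.name, l.city
    FROM billing b
    LEFT JOIN users u     ON b.user_id = u.user_id
    LEFT JOIN locations l ON b.location_id = l.location_id
""").fetchall()
# Both billing rows come back; missing users/locations appear as NULL.
```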






Re: Porting from MsSQL to MySQL (VERRY XML RELATED)

2002-02-18 Thread Tod Harter

On Monday 18 February 2002 13:05, Dan Nelson wrote:
 In the last episode (Feb 18), Marcelo Iturbe said:
  Hello,
  Currently I have an aplication which stores the information in an MsSQL
  database in a straight forward manner, insert into blah blah
 
  However, I retrieve the information in XML format using the FOR XML
  EXPLICIT statements that come with MsSQL 2000.

 You'll have to generate the XML in your application.   Here are a
 couple links (picked from a google search on mysql xml select) to get
 you started:

 http://www.xml.com/pub/a/2000/12/13/perlxmldb.html
 http://www.nusphere.com/products/library/mysql_with_xml.pdf
 http://home.online.no/~runeberg/hauler/hauler.html

You might also look into the xml-dbms project, which is a good toolset for 
some purposes, though it may not be applicable to your needs. One of these 
days someone will get around to embedding a procedural language into MySQL, 
then stuff like this will be pretty easy...

I vote for perl myself ;o).
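Until then, generating the XML in the application is only a few lines either way; here is a minimal Python sketch (table and tag names are invented for illustration):

```python
import sqlite3
import xml.etree.ElementTree as ET

# Minimal sketch of doing FOR XML-style serialization in the application:
# run an ordinary SELECT, then wrap each row in elements.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER, name TEXT);
    INSERT INTO customer VALUES (1, 'acme'), (2, 'globex');
""")
root = ET.Element("customers")
for cid, name in conn.execute("SELECT id, name FROM customer ORDER BY id"):
    row = ET.SubElement(root, "customer", id=str(cid))
    row.text = name
xml_out = ET.tostring(root, encoding="unicode")
```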





Re: re-ordering rows

2002-02-14 Thread Tod Harter

On Thursday 14 February 2002 09:54, Bryan McCloskey wrote:
 Greetings,

 How do I get the rows in a table to be in a different
 order? I know I can sort a SELECT statement with an
 ORDER BY clause, but how do I make this a permanent
 adjustment to the table, so that all future SELECTs
 will produce ordered data? How does MySQL know what
 orders the rows are in (how are they actually stored),
 and how can I change that?

 Here's what's happening: say I have a table with an ID
 field like this:

 ID
...

Well, in most RDBMSs you would use a VIEW to accomplish that, but as MySQL has 
no views, you simply MUST use an ORDER BY whenever you want a sorted result set! 
No database guarantees any "natural" order in the way it stores rows in 
tables. Some products do let you declare an index as defaulting to ascending 
or descending, but you STILL have to say ORDER BY to get sorting in the first 
place.
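The point is easy to demonstrate against any engine; a tiny Python/sqlite3 example with the insertion order deliberately scrambled:

```python
import sqlite3

# Physical insertion order is not a contract; sorted output always goes
# through ORDER BY.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(3,), (1,), (2,)])
ordered = [row[0] for row in conn.execute("SELECT id FROM t ORDER BY id")]
```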





Re: SQL Help, Please...

2002-02-14 Thread Tod Harter

On Thursday 14 February 2002 07:58, Carl Shelbourne wrote:
 Hi

 I am trying to write an auction script that uses mysql as its backend. Each
 auction can have upto 25 sub auctions(cells) taking place.

 I'm trying to query the DB to give me a list of all the successfull bids
 for each cell, for each auction...

 SELECT b.auctionId, b.cellId, MAX(b.bid) as bid, b.bidderId FROM sa_bid as
 b, sa_auction AS a WHERE (a.state='active' AND b.auctionId=a.Id) GROUP BY
 auctionId,cellId ORDER BY bidTime DESC

 This is further complicated in so much that multiple MAX bids may exist at
 the same value, but, only the earliest should be returned for each cell.

 Which is returning some of the columns correctly, namely auctionid, cellid
 and bid, but it does not return the bidderId correctly.

 Can anybody help?

Your query is simply NOT relationally correct... The database has NO way to 
know WHICH bidder id to return in a given group. Suppose that for a given 
auctionid and cellid there might be 12 different bidders. You are telling the 
database engine to return ONE record for that group of 12 rows, so which 
bidderId will it use? The correct behaviour would be for MySQL to reject the 
query, since it simply cannot be properly processed. Unfortunately I've found that 
MySQL doesn't behave correctly in these cases; instead it just returns one of 
the possible bidderId values more or less at random. 

Your query would be technically correct if you used a summary function on 
bidderId, like MAX(b.bidderId) or some such. The rule is that the columns returned 
by a GROUP BY query must either be mentioned in the GROUP BY clause 
itself, OR be the results of a summary function. Any other 
use is incorrect, for the reason stated above.

In other words, you need to rewrite your application logic. Most likely you 
will need to add the b.bidderId to the GROUP BY and have the program walk 
through the result set and do further summarization on its own. Alternately 
you might be able to craft an SQL statement that gets you what you want, but 
without correlated subqueries it is going to be difficult or impossible. I've 
had this same sort of problem myself... 
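That walk-the-result-set summarization might look like this in Python: highest bid per (auctionId, cellId), with the earliest bidtime breaking ties. The data loosely follows the sa_bid dump in the question, plus one invented tie-breaking row.

```python
# Keep the highest bid per (auctionId, cellId); on equal bids the
# earliest bidtime wins, per the requirement above.
def winning_bids(bids):
    """bids: (auctionId, cellId, bid, bidderId, bidtime) tuples."""
    best = {}
    for auction, cell, bid, bidder, when in bids:
        key = (auction, cell)
        cur = best.get(key)
        if cur is None or bid > cur[0] or (bid == cur[0] and when < cur[2]):
            best[key] = (bid, bidder, when)
    return best

best = winning_bids([
    (1, 5, 1, 1, 20020128225421),
    (1, 5, 3, 4, 20020213215649),
    (1, 5, 3, 2, 20020213215000),   # same max bid, earlier time: wins
    (1, 17, 15, 2, 20020213214856),
])
```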

 Cheers

 Carl


 #
 # Table structure for table `sa_auction`
 #

 CREATE TABLE sa_auction (
   id int(11) NOT NULL auto_increment,
   start datetime NOT NULL default '0000-00-00 00:00:00',
   end datetime NOT NULL default '0000-00-00 00:00:00',
   state enum('waiting','active','expired') NOT NULL default 'waiting',
   PRIMARY KEY  (id)
 ) TYPE=MyISAM;

 #
 # Dumping data for table `sa_auction`
 #

 INSERT INTO sa_auction (id, start, end, state) VALUES (1, '2002-01-23
 21:42:50', '2002-04-30 11:30:00', 'active'); INSERT INTO sa_auction (id,
 start, end, state) VALUES (2, '2002-01-23 21:42:50', '2002-02-09 06:30:00',
 'expired'); INSERT INTO sa_auction (id, start, end, state) VALUES (3,
 '2002-03-23 21:42:50', '2002-07-05 09:00:00', 'waiting'); INSERT INTO
 sa_auction (id, start, end, state) VALUES (4, '2002-03-23 21:42:50',
 '2002-08-01 11:30:00', 'waiting'); #
 

 #
 # Table structure for table `sa_bid`
 #

 CREATE TABLE sa_bid (
   id int(11) NOT NULL auto_increment,
   auctionId int(11) NOT NULL default '0',
   cellId tinyint(4) NOT NULL default '0',
   bid int(11) NOT NULL default '0',
   bidderId mediumint(9) NOT NULL default '0',
   bidtime timestamp(14) NOT NULL,
   PRIMARY KEY  (id),
   UNIQUE KEY id (id),
   KEY id_2 (id)
 ) TYPE=MyISAM;

 #
 # Dumping data for table `sa_bid`
 #

 INSERT INTO sa_bid (id, auctionId, cellId, bid, bidderId, bidtime) VALUES
 (1, 1, 5, 1, 1, 20020128225421); INSERT INTO sa_bid (id, auctionId, cellId,
 bid, bidderId, bidtime) VALUES (2, 1, 5, 2, 2, 20020128225424); INSERT INTO
 sa_bid (id, auctionId, cellId, bid, bidderId, bidtime) VALUES (3, 1, 17,
 15, 2, 20020213214856); INSERT INTO sa_bid (id, auctionId, cellId, bid,
 bidderId, bidtime) VALUES (4, 1, 5, 3, 4, 20020213215649);



 
 This e-mail and any attachments are confidential.  If you are not the
 intended recipient, please notify us immediately by reply e-mail and then
 delete this message from your system. Do not copy this e-mail or any
 attachments, use the contents for any purpose, or disclose the contents to
 any other person: to do so could be a breach of confidence.


Re: Distributed Fulltext?

2002-02-13 Thread Tod Harter

On Thursday 07 February 2002 14:53, Brian DeFeyter wrote:
 Has anyone made a suggestion or thought about ways to distribute
 databases which focus on fulltext indexes?

 fulltext indexes do a good job of indexing a moderate amount of data,
 but when you get a lot of data to be indexed, the queries slow down
 significantly.

 I have an example table, with about 90 million rows.. and has a fulltext
 index on a varchar(100) field. A single-word query which would return
 approx 300k results takes an average of 15 seconds. A query with smaller
 results (~ 10k) can be as quick as 1 sec.. which I would consider
 acceptable.

 Has any thought about splitting the data into distributed files or even
 machines? ie: something as simple as 'words' starting with 'X' are split
 into a-h, i-p, q-z... or something more advanced? (maybe mysqld could
 automatically split results based on (#results per unique 'word' /
 desired # of 'split files/machines') Would such a system give any
 advantages to searching speed and concurrenct query scalability? I
 haven't looked at the fulltext internals.. so I don't know if such
 query routing could take place or not.

 If nothing else, does anyone else have experience with a table of this
 size or even larger? What kind of tuning have you done?

 Thanks,

  - Brian

Wouldn't be too tough to write a little query routing system if you are using 
perl. Use DBD::Proxy on the web server side, and just hack the perl proxy 
server so it routes the query to several places and returns a single result 
set. Ordering could be achieved as well. I'm sure there are commercial 
packages out there as well. I don't see why the individual database servers 
would need to do anything special.
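A toy version of such a query router, with in-memory functions standing in for the per-shard database servers (everything here is invented for illustration; a real version would issue the query over per-server connections, e.g. via DBD::Proxy):

```python
import heapq

# Toy scatter-gather router.  Each "shard" is a plain function returning
# (score, filename) hits already sorted best-first.
def shard_a(term):
    return [(0.9, "a1.html"), (0.4, "a2.html")]

def shard_b(term):
    return [(0.7, "b1.html"), (0.1, "b2.html")]

def routed_query(term, shards):
    """Fan the query out, then merge the sorted per-shard result sets
    into one globally ordered list (descending score)."""
    partials = [shard(term) for shard in shards]
    return list(heapq.merge(*partials, key=lambda hit: -hit[0]))

hits = routed_query("mysql", [shard_a, shard_b])
```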










Advice

2002-01-29 Thread Tod Harter

Hi Guys

I've been using MySQL for a few years now, but always on medium sized 
projects. I think the biggest thing I ever did has maybe 200k records in it. 

Now all of a sudden I have a client with a requirement for a database that we 
estimate will be in the range of 1 to 4 TERABYTES of data... Now this data is 
largely static (though there will be a few updates and some insertions) 
non-transactional, and needs to be searched on several indexes. Furthermore 
it's image data, plus metadata (which the searching is done on).

Anyone have experience with this kind of application? Is MySQL going to 
handle it? What are the upwards limits on scalability? Ideally from a system 
design perspective I'd like to have ALL the data in the database itself and 
in a single set of tables. Unfortunately the images themselves are up to 10 
megs each. Is it even POSSIBLE to use blobs in that context? I know someone 
is going to suggest putting the image data outside the database, but for 
various reasons I consider that a last resort, not the least of which is that 
the security requirements are so high that running other protocols to access 
data outside the tables is problematic. 

I'd love to hear about other's experience in this area. Thanks ahead of time 
:o).





Re: Advice

2002-01-29 Thread Tod Harter

On Tuesday 29 January 2002 12:40, Sinisa Milivojevic wrote:
 Tod Harter writes:
  Hi Guys
 
  I've been using MySQL for a few years now, but always on medium sized
  projects. I think the biggest thing I ever did has maybe 200k records in
  it.
 
  Now all of a sudden I have a client with a requirement for a database
  that we estimate will be in the range of 1 to 4 TERABYTES of data... Now
  this data is largely static (though there will be a few updates and some
  insertions) non-transactional, and needs to be searched on several
  indexes. Furthermore its image data, plus metadata (which the searching
  is done on).
 
  Anyone have experience with this kind of application? Is MySQL going to
  handle it? What are the upwards limits on scalability? Ideally from a
  system design perspective I'd like to have ALL the data in the database
  itself and in a single set of tables. Unfortunately the images themselves
  are up to 10 megs each. Is it even POSSI BLE to use blobs in that
  context? I know someone is going to suggest putting the image data
  outside the database, but for various reasons I consider that a last
  resort, not the least of which is security requirements are so high that
  running other protocols to access data thats outside tables is
  problematic.
 
  I'd love to hear about other's experience in this area. Thanks ahead of
  time
 
  :o).

 Hi!

 A number of our registered customers has database of that size.

 But in order to optimise things, I do not think it is wise to store
 such huge images in tables.

What would be the performance considerations? Considering it theoretically: if 
I (for instance) set things up with an FTP server for the images and stored 
filenames in a column, then as far as raw data transfer goes it's TCP either 
way. I guess the question becomes how the database deals with blobs internally. 
Given that the data is images, it is essentially opaque to the RDBMS (i.e. 
there would never be any reason to search the blob columns themselves, just to 
retrieve them based on searches of the other, indexed, columns).

I guess the other related question is how big a deal it is to have such huge 
indexes. In a practical sense it's one data set, so I really 
would like to have indexes covering the whole thing. The number of records 
runs up into the low millions. Queries will need to be done on maybe up to 4 
or 5 columns in that data set. Can I expect a query like that to complete in 
under a minute? What kind of hardware would be recommended to 
get that sort of performance? We will be building out the entire system, so 
essentially it's up to us to define that. I'd considered something on the 
order of a top-of-the-line dual-processor Pentium 4 class system with a 
couple of SAN boxes and fibre channel. I expect it's no challenge to throw a 
few gigs of RAM at it, but naturally there is NO way to put enough RAM in a 
box to even come close to holding the complete indexes...
