Re: innodb

2002-01-28 Thread ryc

Just add another data file entry to the innodb_data_file_path line in my.cnf,
separated by semicolons... You can add as many separate files as you like
(within reason of course; eventually, if the line gets longer than 1024
characters or so, you can't add any more). Once you edit the file, restart
mysql and InnoDB will detect the new file entry and create the file.
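
For concreteness, a minimal sketch of what the my.cnf line looks like; the directory, file names, and sizes here are hypothetical:

```ini
# my.cnf excerpt (illustrative): a second data file appended to the
# existing entry. On restart, InnoDB creates any listed file that does
# not yet exist.
[mysqld]
innodb_data_home_dir = /usr/local/mysql/data
innodb_data_file_path = ibdata1:1000M;ibdata2:1000M
```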

ryan

- Original Message -
From: "Michal Dvoracek" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, January 28, 2002 4:12 AM
Subject: innodb


> Hello,
>
> is it possible to change the size of InnoDB tables "on the fly"?
>
> I created a 1G data file, but it is too small and I need to enlarge it. Can
> I change the number in my.cnf, or is something else needed?
>
> S pozdravem
> Michal Dvoracek  [EMAIL PROTECTED]
> Capitol Internet Publisher, Korunovacni 6, 170 00 Prague 7, Czech Republic
> tel.: ++420 2 3337 1117, fax:  ++420 2 3337 1112
>
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>






Re: Fulltext build 2 days and counting...

2002-01-20 Thread ryc

My experience (at least with the 3.23 series) is that fulltext indexing is
not worth the hassle if you have an existing HUGE database... I let mysql
run for about a day or two on a 20 GB database (I don't remember exactly how
many rows it had) before giving up (note this was using 3.23.39). I found
that on such a large dataset the fulltext indexing didn't do very well
(queries took 15-20 seconds on a PIII 700 MHz with 512 MB of RAM dedicated
to mysql).
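
A sketch of the kind of query being timed above, assuming a table with a FULLTEXT index; the table and column names are hypothetical:

```sql
-- The sort of fulltext lookup that took 15-20 seconds on the hardware
-- described above (schema illustrative).
SELECT id, title
FROM articles
WHERE MATCH (body) AGAINST ('mysql');
```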

Anyhow... good luck, hope things turn out better for you than they did for
me.

ryan


- Original Message -
From: "Steve Rapaport" <[EMAIL PROTECTED]>
To: "MySQL List" <[EMAIL PROTECTED]>
Sent: Sunday, January 20, 2002 5:25 PM
Subject: Fulltext build 2 days and counting...


> Okay, does anyone know how long this will take?
> 22 million records, fulltext index on a single field, first 40 chars.
> record length 768 chars, variable.
>
> It's been running for 2 days, processlist quotes time at around 10.
> Index file still growing occasionally, up to 3Gb.
>
> Should I let it continue or give up?  Will this take another 2 days?
> another week?  Anyone else with experience to get a rule of thumb?
>
> I'm using mysql 3.23.37 on a dual-processor 700 MHz Intel box running
> Red Hat, with 1 GB of RAM.
>
> --
> Steve Rapaport
> still at large
>
>






Re: java.sql.SQLException: Lost connection to server during query theerror is here

2002-01-15 Thread ryc

There are many different things that could cause this error. Did you check
the mysql error log for helpful information?

Most likely the problem is not with Apache; it's either mysql or your
application. One source of that error message is mysql crashing and
burning... I have seen it a few times on servers that were rather flaky and
crashed under high load (I am not saying this is the case here, but it is
something to look into).

good luck,
ryan

- Original Message -
From: "shashidhara n" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, January 15, 2002 12:20 AM
Subject: java.sql.SQLException: Lost connection to server during query the
error is here


> hello
>
> we are running a web application using apache as the web server and mysql
> as the database. After running the server for 6-8 hours it gives the
> following error:
>
> java.sql.SQLException: Lost connection to server during query the error is
> here
>
> if we restart the apache server, it works again
>
> I want to know why this problem is occurring: is it a problem in the mysql
> server or the apache server?
> please mail  to
> [EMAIL PROTECTED]
> [EMAIL PROTECTED]
>
>
>
>
>
>
>






Re: Best text query implementation?

2002-01-10 Thread ryc

The problem with using a regular expression (or even a simple wildcard
search) is that it is very slow. I have looked for a better way to search
through a mysql database but didn't find anything that worked great with
mysql. If speed is important then you are pretty much out of luck without
upgrading to mysql 4.0 (if you find something, please share it with the list
=) ).

It is pretty easy to convert something like "tty*" to a query, however...
select * from table1 where field1 LIKE 'tty%' will do it... albeit slowly.
You can extend the where clause to do more, e.g. "where field1 LIKE 'tty%'
OR field1 LIKE 'something else'", but this would definitely be slow. There
are probably better ways of doing something like this that I don't know off
the top of my head.
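
As a concrete sketch, the boolean query from the original message could be translated into a WHERE clause along these lines; the table and column names are hypothetical, and the leading-wildcard LIKEs cannot use an index, so this scans every row:

```sql
-- Rough translation of: (login AND root) OR ("invalid login" AND tty*)
-- Each AND-ed term becomes its own LIKE condition; '%' is the SQL
-- wildcard standing in for '*'.
SELECT *
FROM table1
WHERE (field1 LIKE '%login%' AND field1 LIKE '%root%')
   OR (field1 LIKE '%invalid login%' AND field1 LIKE '%tty%');
```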

Good luck,
ryan



- Original Message -
From: "Eric Mayers" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, January 10, 2002 5:14 PM
Subject: Best text query implementation?


Hi Everybody,

I'd like to know how people are forming text searches with MySQL.  I'd
like to be able to support a query something like:

(login AND root) OR ("invalid login" AND tty*)

a code snippet that could turn a query like this into a MySQL Regular
expression or other query of some kind would be ideal.

I'm not able to use MySQL 4.0.1 FULLTEXT binary mode searches because I
can't use alpha code at this point.


Thanks,

Eric Mayers
Software Engineer
Captus Networks








Re: INSERTing into joined tables?

2002-01-09 Thread ryc

> So I designed my "files" and "people" tables without any direct
> relationship with one another, thinking to link them with the SELECT
> statement.
>
> What I completely forgot, up until this point, was that I would need to
> INSERT these records (from pre-written HTML/PHP forms), and there is no
> WHERE clause in the INSERT statement to keep everything together.  In my
> scenario, a user might add a record to "files" and wish to associate
> that record to some of the records in "people", either new or
> pre-existing (typed into an HTML text input form or something).  How
> should SQL code be arranged to "link" these records over the foreign key
> table?

Don't fret too much looking for a complicated solution; the solution is just
as easy as it sounds. You must insert into each table individually. So, for
instance... someone enters a new person: you insert it into the person
table. Someone enters a new file: you insert it into the files table. Now,
when someone wants to associate a file with a person, the user selects the
person he wants (so you have the people_id) and the file he wants (so you
have the file_id), and then you insert a row into the filespeople table.
There is no single command to do this for you; it will take 3 separate
inserts.
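
A minimal sketch of those three inserts, with an entirely hypothetical schema and ids:

```sql
-- A new person and a new file are inserted independently.
INSERT INTO people (name) VALUES ('ryan');    -- say this row gets people_id 1
INSERT INTO files  (name) VALUES ('file1');   -- say this row gets file_id 1

-- The association row ties together the two ids the user selected.
INSERT INTO filespeople (people_id, file_id) VALUES (1, 1);
```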

> but... to keep it all together... is lost on me... and then later to
> have UPDATE statements to do the same thing!  Although I suspect this
> may be easier as I can use the WHERE clause in an UPDATE statement.

Update works in the same fashion... If person 'ryan' is working on 'file1'
but is moved to 'file2', do an update to the corresponding row in the
filespeople table (or you can delete the old row and insert the new one,
but updating yields better performance).
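
Sketched with hypothetical ids (say 'ryan' is people_id 1, 'file1' is file_id 1, and 'file2' is file_id 2):

```sql
-- Move the person from file1 to file2 by updating the association row
-- in place rather than deleting and re-inserting it.
UPDATE filespeople
SET file_id = 2
WHERE people_id = 1 AND file_id = 1;
```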

It sounds like you have set up the correct select statement but don't
entirely understand why it works. I would suggest that you sit down with pen
and paper, 'draw' the three tables with some mock data, and see how you can
use the 'filespeople' table to join the other two tables.

> If anyone has a link to a tutorial on this very concept, that would be
> greatly appreciated as well!

Devshed ( http://www.devshed.com/ ) has some good simple tutorials that can
help you understand what is going on with this SQL. There are also a lot of
other good tutorials on the site (the ones I am referring to can be found in
the mysql section, I believe).

Good luck,
ryan






Re: InnoDB ibd file hit 400MB default

2001-12-30 Thread ryc

InnoDB allows you to specify multiple data files and will use them
automatically, so to stay under the 2 GB limit just keep adding 2 GB files
as needed. You can see how much space is left in the InnoDB data files with
the query: show table status from dbname like 'tablename' (the free space
is reported in the Comment column).
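
A sketch of that check; the database and table names are hypothetical:

```sql
-- In 3.23 the remaining InnoDB tablespace is reported in the Comment
-- column of the output, e.g. "InnoDB free: 1024000 kB".
SHOW TABLE STATUS FROM mydb LIKE 'mytable';
```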

ryan

- Original Message -
From: "Sam Lam" <[EMAIL PROTECTED]>
To: "MySQL List" <[EMAIL PROTECTED]>
Sent: Sunday, December 30, 2001 12:00 PM
Subject: InnoDB ibd file hit 400MB default


> I've been using InnoDB for the last week or so with good results & then
> all of a sudden I started getting errors : table  is full. I finally
> tracked it down to the size of the IBD file which I had at the 400MB
> default.
>
> Does an IBD file expand out columns to their data type max size ? I have
> some TEXT columns that are sparsely used that I'm thinking are the cause
> of large space wasting.
>
> What can I do when my IBD file reaches the 2GB Linux file size limit ?
> Which alternate file system should I use for > 2GB or should I switch to
> BSD ?
>
> Is there any way to have separate IBD files for each MySQL table or at
> least DB ?
>
>






Re: column maximum?

2001-11-17 Thread ryc

I am curious: what is your reasoning for being a fan of single-table
databases?

The number of columns a table may have depends on the table type you are
using; this should be in the documentation. Your decision to put 20
judges... 3 rounds... 17 categories into a single table probably is (well,
almost definitely is) a bad choice. You might want to consider breaking
that into a table for judges (containing name, court, address, blah blah), a
table for round information, and a table for categories... putting it all
into one table makes very little sense.
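
One possible normalized layout, sketched with hypothetical names and types; each score becomes a row instead of a column:

```sql
-- Judges get their own table with their descriptive attributes...
CREATE TABLE judges (
    judgeid INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name    VARCHAR(100)
);

-- ...and the 1020 single-digit scores collapse into one narrow table:
-- one row per (judge, round, category) instead of 1020 columns.
CREATE TABLE scores (
    judgeid  INT NOT NULL,
    round    TINYINT NOT NULL,    -- 1..3
    category TINYINT NOT NULL,    -- 1..17
    score    TINYINT NOT NULL,    -- the single-digit value
    PRIMARY KEY (judgeid, round, category)
);
```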

ryan

- Original Message -
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, November 17, 2001 8:59 PM
Subject: column maximum?


> I'm creating a table, and because I'm a fan of single-table databases, it's
turning out to have many columns (it will have over 1100).  Will this be
possible?  Will this cause problems?  A note: 1020 of the columns will be a
numeric value less than 10.
>
> I'm creating a database that keeps track of a group of judges scores for
something.  20 judges...3 rounds...17 categories...1020 single digit
entries.
>
> Thanks,
> Kurt
>
>
>






Re: Field Naming Standards

2001-10-20 Thread ryc

I am not aware of any document that describes naming conventions, but
through my use of databases and design I have found a scheme that works for
me. I follow these rules:

1. The autoincrement field (or unique id) for a table is named "tablename"
+ "id". So if the table is "books", the unique identifier for the row is
"bookid".
2. When an attribute in one table references another table, it keeps the
same name: in the table "books" I might want to reference a row in the
"author" table, in which case I would name the attribute in the books table
"authorid".
3. I use lowercase names for both tables and attributes. The reason is I am
not a big fan of uppercase letters unless they serve a purpose (like in OO,
where the class name is upper case while methods on the class are
lowercase). In SQL there isn't much of a need to differentiate between a
table and an attribute, because you can easily determine which one a name is
from the context in which it is used in the query. But I think this is a
preference thing; I could see someone finding the distinction very useful.
4. When naming an attribute I try to reflect the meaning of the data as much
as possible in the attribute name. Because SQL doesn't let you easily convey
to the client what a field means, you must do it in the attribute name. So
instead of using something like "aptno" for a field, I would choose
"apartmentnumber".
5. If you use "apartmentnumber" in one table, make sure to call other
references to it "apartmentnumber" as well... Don't change it to "aptno".
This follows from rule 2; I just thought I would say it again in another
way. :P The same goes for code that uses these attributes. If a variable
holds the value from the "apartmentnumber" attribute, don't call it "aptno".
It might be clear in this example where that data came from, but that may
not hold for other examples... and it will definitely help someone going
over your code who is not familiar with the database (or you, if you haven't
looked at the code in a really long time).
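
A quick sketch of rules 1-4 applied to a hypothetical pair of tables:

```sql
-- rule 1: the unique id is "tablename" + "id"; rule 3: all lowercase.
CREATE TABLE authors (
    authorid INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name     VARCHAR(100)
);

CREATE TABLE books (
    bookid   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    authorid INT NOT NULL,      -- rule 2: same name as in authors
    title    VARCHAR(200)       -- rule 4: descriptive, not "ttl"
);
```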

That is about it for now... anyone care to add?

ryan

> I've been developing in MySQL for some time now, but as I go along I've
> noticed that my naming conventions have changed dramatically, and I was
> wondering if there is like a set of guidelines or something I could take
> a look at. A set of standards, if you will, for naming conventions of
> fields in tables in MySQL databases.
>
> For example, using all lowercase for field names, separating words with
> underscores, or not, and whether to use a descriptive auto_increment id
> (such as userid, newsid) or if to just use "id" for every table.
>
> Any discussion on this is much appreciated.
>
> Mike







Re: Sharing pws between UNIX and MySQL?

2001-09-16 Thread ryc

[snip]
From: "Jonas Larsson" <[EMAIL PROTECTED]>
> But wouldn't it be possible to let MySQL use MD5-hashed pw like UNIX? Why
> isn't that implemented? Then it would be easy to transfer the hash...
>

Yes, it definitely would be nice, and it would have been easier to implement
than the current Password() implementation (just call crypt()!!). I found
that Postgresql uses the built-in system crypt() while Mysql uses its own
hash, which causes some problems when porting a database between the
systems.
ryan






Re: Fulltext indexing libraries (perl/C/C++)

2001-09-14 Thread ryc

Your original message got through the first time, but your email bounced.

I think what you are looking for is called mifluz; it is the indexing
library that htdig uses. The link is http://www.gnu.org/software/mifluz/ .

If you develop any kind of bindings to use mifluz to index a mysql database,
let me know; I would definitely be interested.

ryan

- Original Message - 
From: "Christian Jaeger" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, September 14, 2001 12:43 AM
Subject: Fulltext indexing libraries (perl/C/C++)


> Hello
> 
> [ It seems the post didn't make it through the first time ]
> 
> While programming a journal in perl/axkit I realize that the problems 
> of both creating useful indexes for searching content efficiently and 
> parse user input and create the right sql queries from it are sooo 
> common that there *must* be some good library already. :-) So I 
> headed over to CPAN, but didn't really find what I was looking for.
> 
> It should create indexes that are efficiently searchable in mysql, 
> i.e. only  queries, not "%abc%". 
> Allow to search for word parts (i.e. find "fulltext" when entering 
> "text"). Allow for multiple form fields (i.e. one field for title 
> words, one for author names, etc.) at once. Preferably allow for some 
> sort of query rules (AND/NOT/OR or something).
> Preferably do some relevance sorting. Preferably allow to hook some 
> numbers (link or access counts etc) into the relevance sorting.
> 
> I think there are 3 tough parts which are needed:
> 1. creation of sophisticated index structures (inverted indexes)
> 2. somehow recognize sub-word boundaries to split words on. Maybe use 
> some form of thesaurus? Or syllables? (I suspect it should be the 
> same rules as for splitting words on line boundaries)
> 3. user input parser / query creator
> 
> Why not:
> 
> - use mysql's fulltext indexes? Because I think that currently they 
> are too limited (i.e. see user comments about them 
> www.mysql.com/doc/) (should be better in mysql-4, I read, but we need 
> it in a few weeks already...). And they are also not supported in 
> Innodb which we want to use.
> 
> - use indexing robots? Because we work with XML documents, and would 
> like to both keep the index up to date immediately, as well as split 
> the XML contents into several parts (i.e. there's a title, byline, 
> etcetc, which should be searchable or weigted differently). We want a 
> *library*, not a finished product.
> 
> There's Lucene (www.lucene.com) in Java that I think does exactly 
> what I want. Anyone who helps me port that to perl or 
> C(++)/perl-bindings (-; ? (It should be ready in a few weeks, and 
> it's about 500k source code :-().
> 
> (Something in C/C++ that would be loaded as UDF or so would be nice 
> too, but as I understand (from recent discussion about stored 
> procedures) it's not possible since these UDF's would have to start 
> other queries (i.e. to insert each word fragment into an index 
> table).)
> 
> Like Daniel Gardner has pointed out to me, one could maybe use 
> Search::InvertedIndex as a basis and complement it with Lingua::Stem 
> (only english) or Text::German (german) (both seem to be quite 
> imperfect though) or with some word list processing. (I don't 
> understand Search::InvertedIndex enough yet.) I think it would still 
> be much work.
> 
> 
> Has someone finished something like this? More info about mysql4?
> 
> Thx
> Christian.
> 






Re: MySQL+InnoDB

2001-08-27 Thread ryc

Yes: delete all ib_ files (the log files) and all data files specified in
my.cnf, and then run mysql again; InnoDB will recreate everything from
scratch.
ryan

- Original Message -
From: "alexus" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, August 27, 2001 5:16 PM
Subject: MySQL+InnoDB


> hi
>
> i'm trying to setup mysql w/ InnoDB
>
> i followed all steps from http://www.mysql.com/doc/I/n/InnoDB_start.html
>
> and I end up with this in my log every time I run mysql; in addition,
> mysqld ends. Here is the log file from the first time I ran mysql:
>
> 010827 17:07:07  mysqld started
> InnoDB: The first specified data file
> /usr/local/mysql/var/innodb/data/ibdata/ibdata1 did not exist:
> InnoDB: a new database to be created!
> InnoDB: Setting file /usr/local/mysql/var/innodb/data/ibdata/ibdata1 size
to
> 2097152000
> InnoDB: Database physically writes the file full: wait...
> InnoDB: Data file /usr/local/mysql/var/innodb/data/ibdata/ibdata2 did not
> exist: new to be created
> InnoDB: Setting file /usr/local/mysql/var/innodb/data/ibdata/ibdata2 size
to
> 2097152000
> InnoDB: Database physically writes the file full: wait...
> InnoDB: Error: all log files must be created at the same time.
> InnoDB: If you want bigger or smaller log files,
> InnoDB: shut down the database and make sure there
> InnoDB: were no errors in shutdown.
> InnoDB: Then delete the existing log files. Edit the .cnf file
> InnoDB: and start the database again.
> 010827 17:11:13  Can't init databases
> 010827 17:11:13  mysqld ended
>
> any ideas?
>
> thank you in advance
>
>
>






innobase problem

2001-08-25 Thread ryc

As stated on the bugs-and-fixes page at http://www.innodb.com/bugfixes.html
there is a problem with the latest innodb code that causes corruption when
doing updates... I believe I have run into that problem. First mysql would
hang and do nothing. After killing it and restarting it, the error log
reads:

InnoDB: Doing recovery: scanned up to log sequence number 10 2051892005
InnoDB: Starting an apply batch of log records to the database...
InnoDB: Apply batch completed

But it hangs there again and mysql doesn't seem to come back up. I don't
care about the data being rolled back; I just want to get the database back
up and running. Is there a way I can delete the rollback logs safely? Does
anyone know how I can get the database back up quickly?

Thanks,
ryan






Re: experiences with table corruption-should this be in a faq?

2001-08-23 Thread ryc

I think you have bigger issues going on with your setup than killing a perl
script causing corruption. I have experienced very little (almost no)
corruption with any of my mysql tables, using various versions of mysql for
extended periods of time (the times the tables were corrupted, it was user
error). The first thing I would recommend is that you upgrade to mysql
3.23.41 and then try your application again to see if the corruption
continues. If it does, send a message back to this list with more
information regarding your setup (OS, hardware, mysql compile options,
etc.). Then perhaps we can help find the cause of your problem.
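
If the corruption does recur, the checks can also be run from SQL instead of shelling out to myisamchk; the table name below is borrowed from the poster's grocery example and otherwise hypothetical:

```sql
-- CHECK TABLE reports corruption; REPAIR TABLE attempts recovery of a
-- MyISAM table while the server is running.
CHECK TABLE bananas;
REPAIR TABLE bananas;
```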

ryan

> These are some remarks on mysql(3.23.32) with regard to
> my experience
>  with data crash and recovery.  It is not meant to be
> negative in any sense
>  and I am actually very thankful there is a database
> like Mysql around.
>  On the other hand, if  these experiences are not due
> to some mistake on my part and if my workarounds are
> reasonable, it might be worth
>  including them in some documentation of faqs.
>
> First, it is fairly EASY to get table corruption .
>  I am using a perl/DBI script to parse
>  some data files and store them in Mysql tables.
>  It looks like it
>  is a little safer to close the Database handle before
> attempting
>  say a mysqldump.
>  To optimize queries I am storing the data in tables
> according
>  to some attribute. For example if this were a grocery
> shop
>  inventory, the tables would looks like:
> Table: Bananas
> item_no  quantity price
>
> Table: Oranges
>
> item_no quantity price
>
> etc..
>
> This is because I have a lot of data and I am mostly
> interested in issuing
>  queries on each species separately. Needless to say,
> this creates a very large
>  number of tables(more than ls or rm can handle at a
> time and of course also
>  more than what myisamchk can repair in a shot)
>
> I have found that if DBI dies for some reason, then
>   there is an almost certainity that at least some
> tables are
>  corrupted.
>  In most cases myisamchk is able to only recover part
> of the records
> (from 50 to 80-90%)
>
> For recovery,the perl script  uses a system call to
>   mysqldump every N files that are parsed.
> Unfortunately, this gives some problems too. Sometimes
>   mysqldump does not complete and sometimes
>  even though it completes, one cannot subsequently
> recover
>  all data because mysql ...  about duplicate
>  primary keys. This appears to be traceable to the
> creation
>  of files /var/lib/mysql/host-name-bin.xxx  . These
> files
>  appear to contain sql statements like what is produced
> by mysqldump
> Not only can these files get quite big, taking a lot of
> space
>  in /var/lib/mysql(this is a mount point in a logical
> volume group)
>  but I also found that removing the last one was
> essential for being able to
>  get rid of the duplicate primary keys message.
> Removing these
>  files(at least the last one) allowed the recreation of
> the database tables
>  from the mysqldump obtained backups
>
> Another issue is the temporary files in TMPDIR;
>  According to the manual one can control where these
> will go
>  by editing TMPDIR in safe_mysqld; I have not found any
> such line in safe_mysqld
>
> I am also worried about a reported 8 hr limit(I need to
> run in
>  continuous mode, i.e. get data continuously), but I
> have not yet experienced
>   corruption that was definitely attributable to this.
>
> Any comments?
>
> Thanks,
> S.Alexiou
>
>
>
>






Re: Setup own DB for each user?

2001-08-22 Thread ryc

Sure, this is possible: take a look at the 'grant' command. For each local
user, add a mysql account giving them access to their own database only;
then users will not be able to read any db but their own.
ryan

- Original Message -
From: "Ulv Michel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, August 22, 2001 4:53 AM
Subject: Setup own DB for each user?


> Hi all,
>
> I am running a webhosting server using SuSE 7.2 / mysql 3.23 / php 4. We
> are hosting a small number of virtual servers, allowing our customers to
> use php and mysql. My problem is that every user sees every other user's
> databases. Is there a way to set up mysql so that every local user has his
> own empty database? Please tell me how this has to be set up.
>
> Thanks
>
> Ulv
>
>






Re: NuSphere final [sic] speaks out

2001-08-15 Thread ryc

[snip]

> I'm very sad this is all happening. But however you may feel
> about NuSphere, I find it a bit unsettling that MySQL AB, as
> an open-source organization, would claim a trademark in the
> first place. From what I've seen, it looks like both sides
> are being equally childish.

I don't think there is a correlation between open source and the use (or
non-use) of trademarks to protect one's brand. I think it is very important
for an open-source company/group/product to trademark their name; otherwise
what would stop someone from taking the source code and releasing it under
the same name? (I'm just throwing this point in; no need for what-ifs in
further threads.) Also, don't forget that another open source project,
linux(tm), is trademarked to protect the name from being used in a 'bad'
way.

Just some ideas to think about. I don't mean to further the thread any more
than it already has been.

ryan






Re: Table size limitations...

2001-08-15 Thread ryc

Look into MAX_ROWS, e.g.:

alter table mytable max_rows = 1
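
(The row limit in the archived command appears truncated; a typical invocation, with a purely illustrative ceiling, would look like this:)

```sql
-- MAX_ROWS is an upper bound used to size the table's internal
-- row pointers; the value 1000000000 here is only an example
ALTER TABLE mytable MAX_ROWS = 1000000000;
```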

ryan

- Original Message -
From: "Nathanial Hendler" <[EMAIL PROTECTED]>
To: "MySQL" <[EMAIL PROTECTED]>
Sent: Wednesday, August 15, 2001 12:19 PM
Subject: Table size limitations...


>
> I have a table that holds a lot of information.  I tried to INSERT
something
> into it, and received...
>
> DBD::mysql::st execute failed: The table 'fancy_big_table' is full at
> ./tom_to_mutt.pl line 156.
>
> The table is 4G is size.  The table's Max_data_length = 4294967295 (4G)
>
> I'm running the MySQL server on a FreeBSD system, and this is not a file
> system limitation.  I am using MyISAM tables.
>
> So, I'm not sure what to do.  I could easily double the size of that table
in
> the next few months.  After reading the CREATE TABLE page in the manual, I
> thought that increasing the AVG_ROW_LENGTH would help since it says
> that...
>
> "MySQL uses the product of max_rows * avg_row_length to decide how big the
> resulting table will be"
>
> I ran "ALTER TABLE fancy_big_table AVG_ROW_LENGTH = 9482;" and after a
long
> time, it finished without error, but inspecting the Avg_row_length and the
> Max_data_length afterwards showed no change.  Maybe this is a red herring,
> and totally down the wrong path.  I'm not sure.
>
> So, if one of you could help me figure out how to cram more crap into my
> table, I'd be awefully appreciative.
>
> Thanks,
> Nathan Hendler
> Tucson, AZ USA
> http://retards.org/
>
>






innobase .39b problems

2001-08-09 Thread ryc

After successfully upgrading to .39b, things seemed to be running
great... However, for the past few days I have been having problems with
queries not completing. This only seems to occur during higher loads, and I
am not sure where to begin debugging. I was hoping someone had some ideas...

Here is the output of mysqladmin processlist... From the time field you can
see those queries have been running for a very long time; under normal
circumstances they finish in under a second.

| 27  | forums | localhost | forums | Query   | 20014 | statistics   |
SELECT m.
messageid,groupid,m.userid,subject,postdate,replies,realname,replace(publice
mail
,'\'','\\\' |
| 44  | forums | localhost | forums | Query   | 19941 | statistics   |
SELECT m.
messageid,groupid,m.userid,subject,postdate,replies,realname,replace(publice
mail
,'\'','\\\' |
| 55  | forums | localhost | forums | Query   | 20090 | statistics   |
SELECT m.
messageid,groupid,m.userid,subject,postdate,replies,realname,replace(publice
mail
,'\'','\\\' |
| 77  | forums | localhost | forums | Query   | 6736  | statistics   |
SELECT *
FROM users WHERE username = LOWER('xxx') AND password = PASSWORD('xxx')
|
| 82  | forums | localhost | forums | Query   | 19981 | statistics   |
SELECT
m.messageid,groupid,m.userid,subject,postdate,replies,realname,replace(publi
cemail,'\'','\\\' |
| 84  | forums | localhost | forums | Query   | 20434 | Sending data |
SELECT
m.messageid,m.parentmsgid,groupid,m.userid,subject,postdate,replies,realname
,publicemail FROM |
| 85  | forums | localhost | forums | Query   | 20186 | statistics   |
SELECT
m.messageid,groupid,m.userid,subject,postdate,replies,realname,replace(publi
cemail,'\'','\\\' |
| 89  | forums | localhost | forums | Query   | 19826 | statistics   |
SELECT
m.messageid,groupid,m.userid,subject,postdate,replies,realname,replace(publi
cemail,'\'','\\\' |
| 90  | forums | localhost | forums | Query   | 19773 | statistics   |
SELECT
m.messageid,groupid,m.userid,subject,postdate,replies,realname,replace(publi
cemail,'\'','\\\' |
| 91  | forums | localhost | forums | Query   | 19622 | Sending data |
SELECT
m.messageid,m.parentmsgid,groupid,m.userid,subject,postdate,replies,realname
,publicemail FROM |
| 92  | forums | localhost | forums | Sleep   | 11129 |  |
|
| 95  | forums | localhost | forums | Sleep   | 17126 |  |
|
| 99  | forums | localhost | forums | Query   | 6498  | statistics   |
SELECT * FROM users WHERE username = LOWER('xxx') AND password =
PASSWORD('xxx')|
| 116 | forums | localhost | forums | Query   | 108   | statistics   |
SELECT * FROM users WHERE username = LOWER('xxx') AND password =
PASSWORD('xxx')|
| 118 | forums | localhost | forums | Sleep   | 3350  |  |
|
| 122 | forums | localhost | forums | Query   | 1625  | statistics   |
SELECT
m.messageid,groupid,m.userid,subject,postdate,replies,realname,replace(publi
cemail,'\'','\\\' |
| 126 | root   | localhost || Query   | 0 |  | show
processlist

Here is the query that is being 'locked':
SELECT
m.messageid,groupid,m.userid,subject,postdate,replies,realname,replace(publi
cemail,'\\'','\\'') AS publicemail FROM messages AS m LEFT JOIN users AS
u ON (m.userid = u.userid) WHERE m.groupid = $groupid AND m.parentmsgid = 0
AND m.clean = 1 ORDER BY postdate DESC limit 50;

and also:
SELECT * FROM users WHERE username = LOWER('xxx') AND password =
PASSWORD('xxx')

In both cases the connection is using autocommit = 1, so I don't think it
can be a deadlock problem. I am fairly certain that no inserts have been
done to either table in a while (a while being a day or so). Below are the
CREATE TABLE definitions in case you need to take a look at them. Do
you have any idea what could be causing this? Under the pre-large-blob
Innobase code this worked without any problems. The only things that have
changed since then are the upgrade to the .39b code and a little more data
inserted into the table.

Thanks,
ryan

CREATE TABLE `messages` (
  `messageid` bigint(20) NOT NULL auto_increment,
  `msgid` varchar(255) NOT NULL default '',
  `groupid` int(10) unsigned NOT NULL default '0',
  `userid` int(10) unsigned NOT NULL default '0',
  `parentmsgid` int(10) unsigned NOT NULL default '0',
  `subject` text NOT NULL,
  `body` text NOT NULL,
  `closedflag` char(1) NOT NULL default '',
  `postdate` int(10) unsigned NOT NULL default '0',
  `adddate` int(10) unsigned NOT NULL default '0',
  `replies` int(10) unsigned NOT NULL default '0',
  `ipaddr` int(10) unsigned NOT NULL default '0',
  `clean` char(1) NOT NULL default '0',
  `msent` char(1) NOT NULL default '',
  PRIMARY KEY  (`messageid`),
  UNIQUE KEY `usenetmsgid` (`msgid`),
  KEY `parentmsgid` (`parentmsgid`,`clean`),
  KEY `MSGSELECT` (`groupid`,`clean`,`parentmsgid`,`postdate`),
  KEY `adddatestats` (`adddate`)
) TYPE=InnoDB MAX_ROWS=1;

CREATE TABLE `users` (
  `userid` bigint(20) unsigned NOT NULL 

Re: Optimising for 4096MB RAM

2001-08-09 Thread ryc

You might want to look into what the ulimit settings are for the process
starting mysqld... usually this can be checked with 'ulimit -a', but see
your shell's man page for more info. You could have a memory limit set for
the process, so MySQL is not able to use the amount of memory you specify.

Keep in mind that MySQL doesn't actually allocate all the memory you allow
it until it needs it, so utilities like 'top' won't show the full amount of
memory MySQL will use until it has been up for a while and sufficiently
'used'.

Also, although your computer has 4096MB of RAM, you shouldn't let MySQL use
absolutely all of it; keep some for the OS plus whatever else runs on the
machine (if anything).

ryan

- Original Message -
From: "Corin Hartland-Swann" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, August 09, 2001 1:41 PM
Subject: Optimising for 4096MB RAM


>
> Hi there,
>
> I am trying to optimise MySQL for use on a machine with:
>
>   Linux 2.4.7-ac9 (only kernel to work with the disk controller)
>   Dual Pentium III 1000 MHz
>   4096 MB memory
>   RAID-1 Mirror over two 76GB 7200 RPM UDMA disks
>   Reiserfs partition for MySQL
>
> As a reference point, I am running the benchmarking tests to measure the
> performance. I plan on measuring the performance with our particular
> queries after this.
>
> After reading the section of the manual dealing with performance tuning, I
> used the following in my.cnf:
>
> [mysqld]
> set-variable = join_buffer=2048M
> set-variable = key_buffer=2048M
> set-variable = table_cache=1024
> set-variable = max_allowed_packet=16M
> set-variable = max_connections=200
> set-variable = record_buffer=2048M
> set-variable = sort_buffer=2048M
> set-variable = tmp_table_size=2048M
>
> The problem is that these settings don't seem to have practically any
> effect on the amount of memory that MySQL uses when running the
> benchmarking suite - I expected it to chew up large amounts of memory with
> these settings. MySQL is currently only taking up 25MB of RAM - not the
> 512MB plus I was expecting.
>
> This machine is going to be a dedicated database server for a number of
> web servers - I expected MySQL to take up about half the memory, and disk
> caching the other half. I want it to run like a beast posessed :)
>
> Does anyone have any real world examples and/or suggestions for how to
> increase the performance? Any other variables I should be looking at?
>
> When the machine is operational, the biggest table I expect to be using
> has 55 million rows and takes up about 2G disk space (compressed, packed,
> everything).
>
> Please could you CC: any replies to me, since I am no longer on the list.
>
> Regards,
>
> Corin
>
> /+-\
> | Corin Hartland-Swann   |Tel: +44 (0) 20 7491 2000|
> | Commerce Internet Ltd  |Fax: +44 (0) 20 7491 2010|
> | 22 Cavendish Buildings | Mobile: +44 (0) 79 5854 0027|
> | Gilbert Street | |
> | Mayfair|Web: http://www.commerce.uk.net/ |
> | London W1K 5HJ | E-Mail: [EMAIL PROTECTED]|
> \+-/
>
>
>






Re: Speed (char or integer)

2001-08-07 Thread ryc

A numeric value will be faster in most cases. The reason is that comparing
two numbers takes at most one machine comparison, while comparing two CHAR
values takes up to n comparisons (n being the length of the field/string).
A numeric value also takes less space, so all around it is a good deal
faster.
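
A sketch of the two layouts (the table and column names here are made up for illustration, not taken from the original post):

```sql
-- CHAR primary key: every key comparison walks up to 30 bytes,
-- and the 30-byte key is repeated in every referencing table
CREATE TABLE users_char (
    username CHAR(30) NOT NULL,
    PRIMARY KEY (username)
);

-- INT surrogate key: a single 4-byte comparison, and smaller
-- indexes everywhere the key is referenced
CREATE TABLE users_int (
    userid   INT UNSIGNED NOT NULL AUTO_INCREMENT,
    username CHAR(30) NOT NULL,
    PRIMARY KEY (userid),
    UNIQUE KEY username_idx (username)
);
```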

ryan

- Original Message -
From: "Goran Krajacic" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, August 07, 2001 9:33 AM
Subject: Speed (char or integer)


I have a table with (char)Username attribute which is the primary key of the
table and also acts as a  foreign key in many other tables.

My question is if the queries would be faster if i used a numeric value for
the primary key?







Re: Thinking of switching from MyISAM to InnoDB

2001-08-04 Thread ryc

If you want a MyISAM table to grow larger than 4GB you have to do

alter table tablename max_rows=1

(or maybe it's 'maxrows'...)

Once you do that, you will be able to stuff the table as full as the OS
will let you.

Innobase lets you get around this by combining a bunch of smaller files
into a larger table space... but keep in mind the max size for a blob there
is 4GB (yeah, that is pretty huge). I'm not sure what the max size for a
MyISAM blob is.

Hope this helps.

ryan

- Original Message -
From: "Nick Seidenman" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, August 04, 2001 9:54 AM
Subject: Re: Thinking of switching from MyISAM to InnoDB


> On Saturday 04 August 2001 09:25, Sinisa Milivojevic wrote:
>
> > Nick Seidenman writes:
> > > Apparentlly there's a 4 GB limit to a MyISAM file when the table it
> > > contains has VARCHAR, TEXT, or BLOB columns.  In order to get around
this
> >
> > There is no 4 Gb limit in MyISAM with later 3.23 versions.
>
> I an running version 3.23.32.
>
> > This limit is imposed by a filesystem only.
>
> Don't think so.  When I do a SHOW TABLE STATUS I have several tables that
are
> well in excess of 4 GB (2^32-1), as well as those that show exactly 4GB.
The
> difference is that the larger tables are of fixed type while the smaller
ones
> are of dynamic type.  It is one of the dynamic tables that repeatedly runs
> into space problems.  This one happens to have a TEXT column in it the
values
> for which can be (and often are) in excess of 2 KB.
>
> 
>  Nick Seidenman, CISSP
>  Director of Software Development
>  Hyperon, Inc.
>  www.hyperon.com
>
>






Re: Sort-of theoretical db design question

2001-07-28 Thread ryc

When you are designing a database and you find yourself thinking about
creating a comma-delimited list, that is a good sign that you need to
rethink your design. Bitfields are an option, but if you ever need to add
elements to the bitfield (i.e. bitfield A can signify the presence of 4
elements, but now you want 5) then bitfields aren't the way to go.

This is what I would suggest, because it allows for some very flexible
querying:

Artists table:
- artistid (auto increment field)
- name
- birthday
- whatever...
Album table:
 - albumid (auto increment)
 - artistid (index into the artists table for the artist of the album)
 - number of tracks
 - total length
 - record label
 - etc...
Track table:
 - trackid (auto increment)
 - artistid
 - albumid
 - title
 - length, etc...
Playlist table:
 - playid (auto increment)
 - trackid (the track that was played)
 - DJ who played it
 - time of play
 - etc...
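
As a rough SQL sketch of that layout (the column names and types are guesses for illustration, not taken from the original post):

```sql
CREATE TABLE artists (
    artistid INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name     VARCHAR(100) NOT NULL,
    PRIMARY KEY (artistid)
);

CREATE TABLE tracks (
    trackid  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    artistid INT UNSIGNED NOT NULL,
    albumid  INT UNSIGNED NOT NULL,
    title    VARCHAR(200) NOT NULL,
    length   SMALLINT UNSIGNED NOT NULL,  -- track length in seconds
    PRIMARY KEY (trackid),
    KEY (artistid),
    KEY (albumid)
);

CREATE TABLE playlist (
    playid   INT UNSIGNED NOT NULL AUTO_INCREMENT,
    trackid  INT UNSIGNED NOT NULL,      -- which track was played
    dj       VARCHAR(50)  NOT NULL,      -- who played it
    playtime DATETIME     NOT NULL,      -- when it aired
    PRIMARY KEY (playid),
    KEY (trackid),
    KEY (playtime)
);
```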

By setting up a table for each of these different things, not only would
your database be normalized (i.e. the length of a song would never be
stored twice), but it allows for some interesting querying. For instance,
you can select all of the tracks that were played during the month of April
from artist X, or all of the songs from album Y. This is accomplished by
joining the tables together on trackid/artistid/albumid.
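
For example, the "all tracks played during April from artist X" query could look roughly like this (assuming tables shaped as in the list above, with a DATETIME play-time column; all names are illustrative):

```sql
SELECT a.name, t.title, p.playtime
FROM playlist AS p
     JOIN tracks  AS t ON p.trackid  = t.trackid
     JOIN artists AS a ON t.artistid = a.artistid
WHERE a.name = 'Artist X'
  AND p.playtime >= '2001-04-01'
  AND p.playtime <  '2001-05-01';
```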

Hope this gets the juice flowing.

ryan

> Hello all!
>
> I have a question for all of you... I would very much appreciate your
> input.
>
> I'm building a database for a radio station.  The database must allow
> the DJ to enter what they play and when, and allow the program director
> to create weekly reports for the record labels.
>
> I'm wrestling with how to design the database.  I have already
> determined, via beginnings of normalization, that I need separate tables
> for Albums, DJs, Genres, and so forth... The problem I'm having is how
> to store the track data.  I have two ideas:
>
> First, to maintain a single table with every bit of track data there is
> (ie, title, artist, length, etc) and store this data into the albums
> table via either a bitfield (ie, binary additions, etc) or via a comma
> (or other) delimited list... ie "32897,39823,1234,29844" etc.
>
> The problem with this is that there are probably nearly 250,000 tracks
> among all of our assets...
>
> The other thought would be to have an album table that would contain
> things like the label, artist, number of tracks, etc, and another table
> that would contain the track data for that album... ie "1238_tracks".
>
> I'm rather new to all of this, so I don't know the relative
> benefits/detriments of these two options... can you give some advice?
>
> Thanks,
> Ben
>
>
>






long query: when to give up

2001-07-27 Thread ryc

To sum it all up, I am trying to import data from text files into an
Innobase table (about 12M rows of data stored in text files created by
doing a SELECT INTO OUTFILE). First I tried dropping the indexes before
doing LOAD DATA INFILE, but I found that I didn't have enough table space
to recreate the indexes, since apparently that makes a copy of the entire
table. So I decided to leave the indexes on the empty table and start
loading the text files. The first one loaded fine (a 4GB text file, loaded
in 5 hours). I noticed that CPU usage stayed at 14-20% for the first 4
hours, followed by about 45 minutes of 99% CPU usage before it finished.
The step I am on now is importing the largest file (12GB of text), and it
has been going for about 26 hours. The processor has been at 14-20% usage
for most of that time, and for the last 5 hours or so it has been at 99%.
So if it is following the pattern that occurred when I loaded the smaller
file, I assume it is finishing up. But like I said, 26 hours have passed
already. What if another 5/10/15 hours pass before it finally finishes?
What if it is stuck in a loop and is never going to finish?

I would like to minimize downtime as much as possible (with limited
hardware; if I had more disk space it would be much easier!), but so far it
has been 31 hours total. Does that seem long for importing 16GB of text
files into a table with 4 indexes (one primary, one unique, plus two more)?
Is there something else going on here, or a better method of doing what I
need to do?

Thanks for any advice you give.

ryan

in case anyone cares to read more background on what I am trying to do, here
it is:

I have been using Innobase tables with a 64KB page size in order to fit
around 30K per row into my table. As many of you know, Innobase 39b was
recently released with support for blobs of up to 4GB. Naturally I wanted
to upgrade to this code ASAP so I wouldn't have to worry about keeping my
row length under 30K or so. But because my Innobase table space was created
with a 64KB page size, I had to dump the data out of the db to a file of
some sort and then reimport it. This is where the trouble comes in.

I decided to split this 15GB table into three different text files using
SELECT INTO OUTFILE. This went without a hitch fairly quickly (12M rows or
so). I then deleted the old InnoDB files, upgraded the server, and
recreated the tablespace. So now the time comes to get the data back into
the db.

Originally I dropped all the indexes from the table, imported the old data
using LOAD DATA INFILE (took around 4-5 hours), and started to recreate the
indexes. However, I found that I didn't have enough table space to recreate
an index because it had to make another copy of the table (and I don't have
enough disk space to increase the size of my table space that much). So
that leads me to the story above... I imported the data with the indexes
already on the table.






Re: Question on design

2001-07-25 Thread ryc

I would suggest using an INT column type to store the IP address. This uses
a lot less space than a varchar column (when dealing with the magnitude you
describe), and the smaller the column, the faster the lookups. The
following functions make this very easy:

INET_NTOA and INET_ATON

They are described at
http://www.mysql.com/doc/M/i/Miscellaneous_functions.html

Because the entire address is stored in one INT field, if you want to do
any kind of select based on subnet you can use bitwise operations.
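
For instance, selecting every host in a /24 might look like this (the table and column names are hypothetical):

```sql
-- each address is stored once, as an unsigned 32-bit integer
CREATE TABLE hosts (
    ip INT UNSIGNED NOT NULL,
    PRIMARY KEY (ip)
);

-- all hosts in 192.168.1.0/24: mask off the host bits and compare
SELECT INET_NTOA(ip)
FROM hosts
WHERE (ip & INET_ATON('255.255.255.0')) = INET_ATON('192.168.1.0');
```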

Hope this helps.

ryan

- Original Message -
From: "Paul DuBois" <[EMAIL PROTECTED]>
To: "David Lott" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, July 25, 2001 9:00 PM
Subject: Re: Question on design


> At 7:53 PM -0600 7/25/01, David Lott wrote:
> >(please reply directly)
> >
> >
> >I'm new to mysql and I'm working on a design for an application that
> >will require a LOT of lookups on IP addresses.  Data set is on the
> >order of millions of addresses.  In my reading, I have not
> >encountered a TYPE of ip_address or some such.  Did I miss it?
>
> No.  There's no such type in MySQL.
>
> >
> >If there is no such type, what would be the best way to setup the
> >addressing for the fastest possible locate?
> >Use varchar (15) and put the whole address in?
> >Use 4 fields A, B, C & D all as tinyint to represent A.B.C.D?
>
> Or convert the address to a 32-bit integer and store it in an INT column.
> If you use separate columns, there are probably network mask operations
and
> such that you'll find more difficult to do than if you store the address
> in a single column.  Depends on what kind of things you want to do with
the
> addresses.
>
> >
> >My assumption is that an integer search would be faster than a text
> >search - am I correct?
>
> A 4-byte integer like INT would certainly be faster than a 15-char string.
>
> >What works best?  How would you tackle this?
> >
> >
> >
> >Thanks,
> >David Lott
>
>
> --
> Paul DuBois, [EMAIL PROTECTED]
>
>






innodb unrecognized option

2001-07-25 Thread ryc

I think mysqld may have a problem parsing long lines in my.cnf. I receive
the following error when trying to start mysqld:

/usr/local/libexec/mysqld: unrecognized option `--r3/ibdata31:2G'


Here is the line that offends mysqld (it is all one line in my.cnf). If I
remove or shorten this line in any way, mysqld runs fine.

innodb_data_file_path =
dr1/ibdata0:2G;var/mysql/ibdata1:2G;var/mysql/ibdata2:2G;var/mysql/ibdata5:2
G;var/mysql/ibdata6:2G;
dr3/ibdata7:2G;dr3/ibdata8:2G;dr3/ibdata9:2G;dr3/ibdata10:2G;dr3/ibdata11:2G
;dr3/ibdata12:2G;dr3/ibdata13:2G;
dr3/ibdata14:2G;dr3/ibdata15:2G;dr3/ibdata16:2G;dr3/ibdata17:2G;dr3/ibdata18
:2G;dr3/ibdata19:2G;dr3/ibdata20:2G;
dr3/ibdata21:2G;dr3/ibdata22:2G;dr3/ibdata23:2G;dr3/ibdata24:2G;dr3/ibdata25
:2G;dr3/ibdata26:2G;dr3/ibdata27:2G;
dr3/ibdata28:2G;dr3/ibdata29:2G;var/mysql/ibdata30:2G;dr3/ibdata31:2G

Does anyone know what is going on? Is this a long-line problem or something
else? Thanks.

ryan






Re: AUTO_INCREMENT problem in table

2001-07-25 Thread ryc

If this is a MyISAM table you can use myisamchk to set the auto_increment
value. Check out the man page.
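
Since the real cause in the quoted thread is the SMALLINT ceiling (65535), another option worth considering is widening the column; a sketch, assuming the `cust` table from the example below:

```sql
-- give id room to grow past 65535
ALTER TABLE cust MODIFY id INT UNSIGNED NOT NULL AUTO_INCREMENT;

-- on MyISAM you can also reset the counter directly
-- (the value 21 is just an example)
ALTER TABLE cust AUTO_INCREMENT = 21;
```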

ryan

- Original Message -
From: "Stefan Hinz" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; "Peter Wiherkoski"
<[EMAIL PROTECTED]>
Sent: Wednesday, July 25, 2001 12:15 PM
Subject: Re: AUTO_INCREMENT problem in table


> Dear Peter,
>
> > Even if i delete the record where id=65535
> > DELETE FROM cust WHERE id=65535
> > AUTO_INCREMENT starts with 65535 and i cant add any more records.
>
> Sad, but true, MySQL "remembers" the biggest number inserted into an
> auto_increment column. If you delete the row in question, the next
> insert will add the biggest number again.
>
> The only workaround I can see is to copy the table (without the problem
> id row) to another table with the same structure. Maybe someone has a
> better solution?!
>
> Regards,
>
> --
>   Stefan Hinz
>   Geschäftsführer / CEO iConnect e-commerce solutions GmbH
>   #  www.js-webShop.com www.iConnect.de
>   #  Gustav-Meyer-Allee 25, 13355 Berlin
>   #  Tel: +49-30-46307-382  Fax: +49-30-46307-388
>
> - Original Message -
> From: "Peter Wiherkoski" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Wednesday, July 25, 2001 4:47 PM
> Subject: AUTO_INCREMENT problem in table
>
>
> > Hi, got a problem:
> >
> > My table (part of it) looks like this:
> > CREATE TABLE cust (
> > id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
> > name CHAR(30) NOT NULL,
> > PRIMARY KEY (id)
> > );
> >
> > When i type
> > INSERT INTO cust VALUES(NULL,"Peter");
> > the value of id will increase by 1 every time i insert a record.
> >
> > By "misstake" i typed
> > INSERT INTO cust VALUES(20,"Maria");
> > the id (20) then was "downsized" to the top limit of SMALLINT -
> 65535.
> >
> > Next time i type
> > INSERT INTO cust VALUES(NULL,"Joe");
> > the AUTO_INCREMENT tries to increase id by 1, but it cant because
> > AUTO_INCREMENT for id starts with 65535 and cant go any higher.
> >
> > Even if i delete the record where id=65535
> > DELETE FROM cust WHERE id=65535
> > AUTO_INCREMENT starts with 65535 and i cant add any more records.
> >
> > How do i get out of this mess?
> >
> > /Peter
> >
> >
> >
>
>
>






Re: Running multiple MySQL servers on one machine

2001-07-19 Thread ryc

I think the main disadvantage would be memory... Running two servers means
using twice the amount of memory. So say you are using 256MB for the key
cache: running two servers would use 512MB (not the exact moment the
servers start, but after a prolonged period of time, of course). This
situation gets even worse if you are depending on a larger key cache for
performance.
ryan

- Original Message -
From: "Jeremy Zawodny" <[EMAIL PROTECTED]>
To: "William Chi" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, July 19, 2001 3:59 AM
Subject: Re: Running multiple MySQL servers on one machine


> On Mon, Jul 16, 2001 at 12:40:24PM +1000, William Chi wrote:
> > Hi,
> >
> > I was wondering if anyone has any opinions on the advantages and
> > disadvantages of running multiple servers on one machine.  Each
> > server would be used to access different databases.
>
> Off the top of my head, since I haven't seen a reply yet...
>
> Benefits:
>
>   * If one crashes, the other does not.
>
>   * You could run them under different userids.
>
>   * You could run different versions of MySQL.
>
> Disadvantages:
>
>   * You'll probably use more memory that way.
>
>   * Client applications have to be configured to use different port
> numbers.  This may or may not be trivial.
>
>   * You have more servers to monitor.
>
>   * Usernames and passwords are not shared across the servers.
>
> Jeremy
> --
> Jeremy D. Zawodny, <[EMAIL PROTECTED]>
> Technical Yahoo - Yahoo Finance
> Desk: (408) 349-7878   Fax: (408) 349-5454   Cell: (408) 685-5936 <-- NEW
>
> MySQL 3.23.29: up 33 days, processed 257,419,842 queries (90/sec. avg)
>
>






Re: help connection erroe

2001-07-16 Thread ryc

The problem is just as it states: "Access denied". You can also see from
the message that the username is 'root' and a password was supplied, so the
first thing to check is the password the script is using to connect. Once
that is corrected it should work fine.
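
If the password itself is what needs fixing, it can be reset from a session that does have access (the user, host, and password here are placeholders):

```sql
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new_password');

-- or, equivalently, via the grant table:
UPDATE mysql.user SET Password = PASSWORD('new_password')
 WHERE User = 'root' AND Host = 'localhost';
FLUSH PRIVILEGES;  -- needed after editing the grant table directly
```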

Hope that helps.

ryan

- Original Message -
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Monday, July 16, 2001 2:53 AM
Subject: help connection erroe


> When am trying to connect to mysql am getting this error
>
>
> "Warning: MySQL Connection Failed: Access denied for user:
'[EMAIL PROTECTED]'
> (Using password: YES) in /var/www/html/accounts/include/useradmin.php on
line
> 144 Unable to select database"
>
> What is the problem, Please do assist thanks
>
> Regards
>
>
> Martin W. Kuria
> Web Programmer
> Bsc. Computer Science
> Kenyaweb.com Ltd
> www.kenyaweb.com
> mail.kenyaweb.com
>
>
>






Re: Compiling in BSD

2001-07-16 Thread ryc

First off, what BSD? There are a few of them... and what version?

Sometimes --with-low-memory doesn't seem to work... What I have done in the
past to get it to compile is edit the Makefile in that directory, find the
line that compiles sql_yacc.cc, and add the compiler flag -O0. Sure, that
turns optimizations off, but at least the file will compile. Another
solution is to allow the compiler to use more than 256MB of RAM, but that
isn't an option on every server.

Hope that helps.

ryan


> Hello All.
> I try compile in BSD from source.
> I have an account but NOT root access.
>
> 1-At myisam subdir says 'don't know how create mi_test_all'
> erasing proper lines at Makefile, solves the problem, but
> why says this message?
>
> 2-the infamous file sql_yacc.cc says 'virtual memory exhausted'...
>..and YES, I use the --with-low memory (-fno-inline) option...
> any solution?3.21 compiles fine. 3.22 have same problem...
> some 3.22.XX compiles OK but currently are not offered in FTP...
>
> James A. Puig
> [EMAIL PROTECTED]
>
> - Original Message -
> From: "Rabia MEDDOURI - SERMA/LEPP" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Monday, July 16, 2001 10:54 AM
> Subject: 
>
>
> > >Description:
> >
> > Hello
> > I want to installe MySQL, but i'm not root (my user is: meddouri).
> > I have only change the location where 'mysqlacess' script expect to find
> the 'mysql' client.
> > I have create grant tables with the scripts mysql-install_db.
> > But i can't set  a password for the mysql root user.
> > I try to do this with :
> > ./bin/mysqladmin -u meddouri -h protee -P 23 -p password 'new-password'
> > then
> > Enter password: **
> >
> > The script have finished white the follow message :
> >
> > ./bin/mysqladmin: connect to server at 'protee' failed
> > error: 'Lost connection to MySQL server during query'
> >
> > thanks for your help
> >
> > >How-To-Repeat:
> >
> > >Fix:
> >
> >
> > >Submitter-Id: 
> > >Originator:
> > >Organization:
> >
> > >MySQL support: [none | licence | email support | extended email
support ]
> > >Synopsis:
> > >Severity:
> > >Priority:
> > >Category: mysql
> > >Class:
> > >Release: mysql-3.23.39 (Official MySQL binary)
> > >Server: ./mysqladmin  Ver 8.21 Distrib 3.23.39, for sun-solaris2.7 on
> sparc
> > Copyright (C) 2000 MySQL AB & MySQL Finland AB & TCX DataKonsult AB
> > This software comes with ABSOLUTELY NO WARRANTY. This is free software,
> > and you are welcome to modify and redistribute it under the GPL license
> >
> > Server version 3.23.39
> > Protocol version 10
> > Connection Localhost via UNIX socket
> > UNIX socket /tmp/mysql.sock
> > Uptime: 1 hour 52 sec
> >
> > Threads: 1  Questions: 14  Slow queries: 0  Opens: 0  Flush tables: 2
> Open tables: 0 Queries per second avg: 0.004
> > >Environment:
> >
> > System: SunOS protee 5.7 Generic_106541-12 sun4u sparc SUNW,Ultra-5_10
> > Architecture: sun4
> >
> > Some paths:  /usr/local/bin/perl /usr/xpg4/bin/make /usr/local/bin/gmake
> /usr/local/bin/gcc /product/SUNWspro/bin/cc
> > GCC: Reading specs from
> /usr/local/lib/gcc-lib/sparc-sun-solaris2.7/2.95.2/specs
> > gcc version 2.95.2 19991024 (release)
> > Compilation info: CC='gcc'  CFLAGS='-O3 -fno-omit-frame-pointer'
> CXX='gcc'
>
>
CXXFLAGS='-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -
> fno-rtti'  LDFLAGS=''
> > LIBC:
> > -rw-r--r--   1 bin  bin  1694308 Jul 25  2000 /lib/libc.a
> > lrwxrwxrwx   1 root root  11 Oct 22  1999 /lib/libc.so ->
> ./libc.so.1
> > -rwxr-xr-x   1 bin  bin  1115940 Jul 25  2000 /lib/libc.so.1
> > -rw-r--r--   1 bin  bin  1694308 Jul 25  2000 /usr/lib/libc.a
> > lrwxrwxrwx   1 root root  11 Oct 22  1999
/usr/lib/libc.so ->
> ./libc.so.1
> > -rwxr-xr-x   1 bin  bin  1115940 Jul 25  2000 /usr/lib/libc.so.1
> > Configure command: ./configure  --prefix=/usr/local/mysql
> '--with-comment=Official MySQL
>
binary' --with-extra-charsets=complex --with-server-suffix= --enable-assembl
> er --disable-shared
> > Perl: This is perl, version 5.004_04 built for sun4-solaris
> >
> >
> >
>
>
>



innobase > 4gb rows

2001-07-14 Thread ryc

I have a table that was created using Innobase compiled with a 64 KB page
size. Now I want to upgrade to the newer version of Innobase that supports
larger row sizes, but as the Innobase .39b download page
( http://www.innodb.com/download.html ) mentions, I must export the tables
and import them again after upgrading to the new InnoDB code. The table has
more than 10 million rows and takes up 15-20 GB of disk space; I don't think
mysqldump will work so well on it. Does anyone have a suggestion?
Thanks!

Also, I have been trying to find a data set that reproduces the two-column
index bug that was discovered a few weeks back, but I have not been able
to... or rather, I haven't been able to commit enough time. I should be able
to in the following week.

Thanks again,
ryan






Re: Installing DBI

2001-07-11 Thread ryc

Perhaps give a little more info as to what is on your screen before the
error?

ryan

- Original Message -
From: "sherzod ruzmetov" <[EMAIL PROTECTED]>
To: "MySql Mailing List" <[EMAIL PROTECTED]>
Sent: Wednesday, July 11, 2001 1:22 AM
Subject: Installing DBI


>
> I can install other moduels successfully. But when it comes to DBI, and
> DBD::mysql, it keeps throwing out the following error:
>
> make - not ok
> make returned bad status
>
>
> I am lost :-(. What could that be?
>
> -sherzodOR
>
>
>






Re: Perl DBI to MySQL -- Passing info to query

2001-07-10 Thread ryc

I was not aware of placeholders and the benefits of using them instead of
$dbh->quote(). It doesn't make sense that DBI's quote() isn't as thorough as
the quoting done behind placeholders. But anyhow, I have a few questions as
to how this works. Here is an example from the Perl DBD::mysql docs:

my @names = ['Flaherty', 'Jones', 'Smith'];
my $sth = $dbh->prepare("UPDATE contact
 SET phone = '555-1212'
 WHERE last_name = ?");
$sth->execute(@names);
$sth->finish;

So presumably this query will update three rows, one for each last name in
the array. Does this mean you cannot use more than one placeholder per query?
What if the WHERE clause were "WHERE last_name = ? AND first_name = ?" and
you called $sth->execute(@lnames, @fnames)? That would not work, because as
far as execute() is concerned the two arrays are flattened into a single
argument list (to pass them as separate arguments you would have to pass
references instead). Anyhow, any tips on how placeholders for multiple
variables can be used? Thanks.
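For what it's worth, a minimal sketch of the usual pattern ($dbh is assumed
to be an already-connected DBI handle, and the contact table/columns are
illustrative): execute() binds exactly one scalar per placeholder, in order,
so multiple value sets are handled by looping rather than by passing arrays:

```perl
# Sketch only: $dbh is assumed; two placeholders take two scalar values.
my $sth = $dbh->prepare(
    "UPDATE contact SET phone = ? WHERE last_name = ? AND first_name = ?"
);

# Arrays in the argument list are flattened, so to cover several
# (last, first) pairs, execute once per pair.
for my $pair ( [ 'Jones', 'Tom' ], [ 'Smith', 'Ann' ] ) {
    $sth->execute( '555-1212', @$pair );
}
$sth->finish;
```

The prepared handle is reused across the loop, which is where the
performance benefit of placeholders comes from on databases that cache
query plans.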

ryan

> r> Seems that you are not taking advantage of Perl. This is what you can
do:
>
> No, he is taking advantage of placeholders. It is much better to use
> placeholder for value substitution that substitute values directly
> into query with Perl because there is no need to escape values (do you
> know that $dbh->quote doesn't quote reliably?) and $sth can be reused
> for simular queries (which can give perfomance gain on some SQL
> databases).
>
> However usually placeholders can be used only for value
> substitutuion. 'DESC' cannot be substituted since it is a part of
> query.
>
> r> $parentid = x;
> r> $orderby = 'DESC';
> r>my $sth = $dbh -> prepare (qq{
> r>  SELECT message.name, contents, user.name, message.id
> r>  FROM message, user
> r>  WHERE folder='N' and parentid=$parentid
> r>  GROUP BY message.id
> r>  ORDER BY time $orderby
> r>  }) || die $dbh->errstr;
>
> r> You can put any variable inside the string... Note that I changed the
q{
> r> ... } to qq{ ... }
>
> r> $string = q{ string } is the same as $string = 'string';
> r> $string = qq{ string } is the same as $string = "string";
>
> r> Note that variables inside of strings enclosed by '' will not be
translated,
> r> only strings enclosed by "" (or qq{} =) ).
>
> --
>  -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> | Ilya Martynov (http://martynov.org/)
|
> | GnuPG 1024D/323BDEE6 D7F7 561E 4C1D 8A15 8E80  E4AE BE1A 53EB 323B DEE6
|
> | AGAVA Software Company (http://www.agava.com/)
|
>  -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>
>






Re: String Size Limit?

2001-07-10 Thread ryc

Most likely your size limitation is coming from the data type used for
'notes'. Check the documentation for the sizes of the TEXT types (a TEXT
column holds up to 64 KB, MEDIUMTEXT up to 16 MB); MEDIUMTEXT may be more
appropriate for you.
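A sketch of the change via DBI (an open $dbh is assumed; the table and
column names are taken from the original post):

```perl
# MEDIUMTEXT raises the column limit from 64 KB (TEXT) to 16 MB.
$dbh->do("ALTER TABLE field CHANGE notes notes MEDIUMTEXT");
```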

ryan

>
> I'm running into an odd string size limitation. I've traced it down as far
> as I can get, and I didn't run across anything in the docs or on any
> searches I've run so I figured I'd try here. The field in the database is
a
> TEXT type column. I'm appending to the front of the text field each time
> using:
>
> UPDATE field SET notes=CONCAT("newtext",notes) WHERE ...
>
> However the field is cutting off after 4096 bytes. With a hunch that
CONCAT
> may have been causing the limit, I switched to doing the concatenation
> inside of the application and setting the full field. The same cut-off
> occurs in storage, however the SQL executes correctly so it's not like the
> query is getting cut off and not seeing the WHERE clause.
>
> Seeing as this happens in CONCAT which should be server side I suspect I'm
> running into a server-side limitation. My app is PHP3 over IIS on NT4.0
> using MyODBC 2.50.3300 to a Linux Server running MySQL 3.22.32 precompiled
> (when you get 100 day 24/7 uptime no problem you don't upgrade!). What
> incredibly obvious piece of documentation am I missing, is this a "you
need
> to upgrade thing", or what would be a reasonable work-around in this
rather
> abstracted environment?
>
> Thanks,
> Sean
>
>






Re: Perl DBI to MySQL -- Passing info to query

2001-07-09 Thread ryc

Seems that you are not taking advantage of Perl. This is what you can do:

$parentid = x;
$orderby = 'DESC';
   my $sth = $dbh -> prepare (qq{
 SELECT message.name, contents, user.name, message.id
 FROM message, user
 WHERE folder='N' and parentid=$parentid
 GROUP BY message.id
 ORDER BY time $orderby
 }) || die $dbh->errstr;

You can put any variable inside the string... Note that I changed the q{
... } to qq{ ... }

$string = q{ string } is the same as $string = 'string';
$string = qq{ string } is the same as $string = "string";

Note that variables inside strings enclosed by '' will not be interpolated,
only those in strings enclosed by "" (or qq{} =) ).

Hope that helps. For more info go to www.perl.com

ryan


> I'm having some trouble passing some information from my Perl script to
> the DBI query interface.
>
> A short version of my query would look like:
>
>   my $sth = $dbh -> prepare (q{
> SELECT message.name, contents, user.name, message.id
> FROM message, user
> WHERE folder='N' and parentid=?
> GROUP BY message.id
> ORDER BY time ?
> }) || die $dbh->errstr;
>
> Then, I'm calling the query:
>
>   $sth->execute($parent, "DESC");
>
> The "DESC" is being ignored. I can hard-code it into the query and it
> works fine. The closest I've been able to achive in having the query
> recognize the second ? is to have it crash with an error reporing that
> it's there, but doing nothing about it
>
> go figure.
>
> any help is greatly appreciated.
> --
> *===| http://bigpig.org/
> *===|
> *===| It's about Freedom!
> |
>
>






Re: Easiest way to create a duplicate of a db?

2001-07-09 Thread ryc

If you want a copy of the DATABASE and not just a table, I would suggest
(on Unix) copying the database directory inside the MySQL data directory,
with the server stopped so the files are consistent:

cp -R /var/mysql/products /var/mysql/products1

where the data directory is the one that contains all the databases; in my
setup I usually make it /var/mysql, but often it is elsewhere.

ryan

- Original Message -
From: "Victor Spång Arthursson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, July 09, 2001 12:51 PM
Subject: Easiest way to create a duplicate of a db?


> For example if I want a copy of products named products1?
>
> Sincerely Victor
>
> query
>
>
>






Re: help meeeeeeeee

2001-07-03 Thread ryc

> Hi,
>
> I am working on a banking project.
>
> we are planning to take our tour into vb/mysql combination.
>
> Please help me sending your comments.

Where would you like help sending my comments to? heh.

It sounds like you are asking for comments regarding your choice of
VB/MySQL... If that is the case, sure, it can be done. The only problem I can
see is that if you plan on having end users run this program, they would have
to install and configure MySQL on their computers. That might be a bit of a
burden for them. But that is for you to decide.

The list can offer more advice if you ask more specific questions.
ryan







Re: ERROR 1062 at line 46: Duplicate entry '4' for key 1

2001-06-29 Thread ryc

Duplicate entry 'x' for key y means that there is a unique key (here, key 1)
on a certain column that already contains the entry '4'. You are trying to
insert another row with '4', so it complains, since that key says values must
be unique.

To find out which line is #46: it is the 46th line from the top of the dump
file.

ryan

> Hi;
>
> I'm trying to learn MySQL, having some difficulties:
>
> mysql  Ver 9.38 Distrib 3.22.32, for pc-linux-gnu (i686)
>   for pc-linux-gnu (i686)
>
> Getting the following error:
>
> ERROR 1062 at line 46: Duplicate entry '4' for key 1
>
> What I did was:
>
> mysqldump -u jimjones -p XxXJimmy > lovingdata-Db-test.txt
>
> Deleted all the Db files in /var/lib/mysql/lovingdata
>
> mysql --user=jimjones --password=XxXJimmy lovingdata <
lovingdata-Db-test.txt
>
> So there must be an error in the database. So how do I fix the error?
> How do I figure out which line is #46?
> How do I use the isamchk utility for an entire database, not just tables?
>
> Where is the documentation (besides man pages) for
> mysql  Ver 9.38 Distrib 3.22.32, for pc-linux-gnu (i686)
>
> Joe
>
>
>






Re: Re:innobase bug w/ 3.23.29???

2001-06-28 Thread ryc

> Heikki> Hi!
> Heikki> I checked that in the COUNT(*) below InnoDB does
> Heikki> not fetch the whole row if it does the query through
> Heikki> a secondary index. It may sometimes have to
> Heikki> fetch it to determine if the row should be visible
> Heikki> in the current consistent read, the trx id is
> Heikki> stored in the whole row in the primary key index.
> Heikki> But Ryan said that queries were CPU-bound,
> Heikki> and fetching a million rows through a primary key
> Heikki> should at most take 10 s.
>
> Heikki> Regards,
>
> Heikki> Heikki
> Heikki> http://www.innodb.com
>
> This sounds like we would need a test case to solve this...
>
> Ryan, any change you could provide us with a script that would
> generate data for a testcase ?
>

I tried before with a small test case with no success. I will attempt again
with a larger dataset and hopefully it will pop-up.

ryan






Re: Integrity checks

2001-06-28 Thread ryc

When you delete a row from a table that may be referenced in another table,
you must also delete all references to it from the other tables. This is done
the same way you deleted the first row...
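As a sketch in Perl/DBI (the schema and variable names are hypothetical, and
$dbh is assumed to be connected): delete the referencing rows first, and wrap
both deletes in a transaction if your table type supports one, so you are
never left with dangling references:

```perl
# Hypothetical schema: orders.customer_id references customer.id.
local $dbh->{AutoCommit} = 0;
local $dbh->{RaiseError} = 1;
eval {
    # Child rows first, then the parent row.
    $dbh->do( "DELETE FROM orders   WHERE customer_id = ?", undef, $cust_id );
    $dbh->do( "DELETE FROM customer WHERE id = ?",          undef, $cust_id );
    $dbh->commit;
};
$dbh->rollback if $@;   # undo both deletes if either failed
```

On table types without transactions the same two DELETEs work, but a failure
between them can leave orphaned child rows.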

ryan

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> As long as MySQL doesn't manage the foreign keys, how do you manage
> the deletion of a record if it is referenced in another table's
> record?
> I am working in ASP, but I will be intersted by solution in any
> programming language.
>
> Thanks,
>
> Nicolas.
>
> passpharse : database,mysql
>
> -BEGIN PGP SIGNATURE-
> Version: PGP 7.0.4
>
> iQA/AwUBOzrqB4iKIkRfAqJVEQKt1ACglEJufod2+1A6jHOWh+pDUqLhUJ0AoNTC
> 3UMqv+qnk9VUCtJkLznjH7w5
> =tmED
> -END PGP SIGNATURE-







Re: Re:innobase bug w/ 3.23.29???

2001-06-28 Thread ryc

> ryc> 
> Heikki> Well, mystery solved :). Except that why MySQL uses hours to
> Heikki> sieve away the extraneous rows in the first query.
> >>
> >> To be able to answer the question why:
> >>
> >> SELECT COUNT(*) FROM table1 WHERE col1=0 and col2=0
> >>
> >> Is slow, I would need to get an EXPLAIN for this query.
>
> ryc> mysql> explain select count(*) from messages where clean=0 and parentmsgid=0;
> ryc>
> ryc> +----------+-------+---------------+-------------+---------+------+---------+-------------------------+
> ryc> | table    | type  | possible_keys | key         | key_len | ref  | rows    | Extra                   |
> ryc> +----------+-------+---------------+-------------+---------+------+---------+-------------------------+
> ryc> | messages | index | parentmsgid   | parentmsgid |       5 | NULL | 7207120 | where used; Using index |
> ryc> +----------+-------+---------------+-------------+---------+------+---------+-------------------------+
>
> How many rows satisfies the query?  Is it about 7207120 rows ?
>
> Heikki, could it be that InnoDB will read the original row through the
> primary key even if MySQL has marked that it would be happy if InnoDB
> only returned the found keys from the secondary index ?
>

A total of 5 rows match this query.

ryan






Re: Re:innobase bug w/ 3.23.29???

2001-06-27 Thread ryc



> Heikki> Well, mystery solved :). Except that why MySQL uses hours to
> Heikki> sieve away the extraneous rows in the first query.
>
> To be able to answer the question why:
>
> SELECT COUNT(*) FROM table1 WHERE col1=0 and col2=0
>
> Is slow, I would need to get an EXPLAIN for this query.

mysql> explain select count(*) from messages where clean=0 and parentmsgid=0;

+----------+-------+---------------+-------------+---------+------+---------+-------------------------+
| table    | type  | possible_keys | key         | key_len | ref  | rows    | Extra                   |
+----------+-------+---------------+-------------+---------+------+---------+-------------------------+
| messages | index | parentmsgid   | parentmsgid |       5 | NULL | 7207120 | where used; Using index |
+----------+-------+---------------+-------------+---------+------+---------+-------------------------+

It appears that it is using the index, but the query still takes a very long
time to complete.

Thanks,
ryan







Re: Tables exceeding 2gb

2001-06-26 Thread ryc

The OS may not have a 2 GB limit, but the table MAY. If you run myisamchk on
the table it will tell you whether you are running out of space in the table.
If you are, you need to do something like ALTER TABLE tablename
MAX_ROWS=1000000000; (it's in the manual).
ryan

> > I have a Linux machine (Linux cartman 2.4.2-2smp #1 SMP Sun Apr 8
> > 20:21:34 EDT 2001 i686 unknown) that is running MySQL 3.23.38-log and
> > now I have a table that is getting VERY near to the ext2 filesystem
> > limitation of 2gb.
>
> Ext2 doesn't have a 2 GB limit. You seem to be running Red Hat Linux
> 7.1, and both kernel and C library support large files on this system.
>
> For more information, I recommend a look at
> http://www.mysql.com/documentation/mysql/bychapter/manual_Table_types.html
> --
> Trond Eivind Glomsrød
> Red Hat, Inc.







Re: $sth->{mysql_insertid} not working in 3.23.38?

2001-06-23 Thread ryc

> >heh, my ($id) = $dbh->selectrow_array("SELECT MAX(id) FROM tbl;");
>
> You're in for a VERY nasty surprise as soon as two of these happen to
> run in parallel.

I agree; another insert may happen before that select occurs, and you would
end up with the wrong id. You should use SELECT LAST_INSERT_ID(), which
returns the last auto-increment value generated on your own connection.

Two things you might want to try: first, download and install the newest
Msql-Mysql DBD drivers and check what the documentation there says; second,
keep changing which attribute you are trying to access, and maybe you will
find one that works but isn't documented.
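A minimal DBI sketch ($dbh is assumed connected; the table name is
illustrative). LAST_INSERT_ID() is evaluated per connection, so concurrent
inserts from other clients cannot interfere with the value you read back:

```perl
# Insert a row into a table with an AUTO_INCREMENT primary key, then
# fetch the id that THIS connection just generated.
$dbh->do( "INSERT INTO tbl (name) VALUES (?)", undef, 'example' );
my ($id) = $dbh->selectrow_array("SELECT LAST_INSERT_ID()");
```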

ryan

>
> Our code uses $sth->{mysql_insertid}. We don't have any 3.22 servers,
> though -- they can't do (some? I forget) binary operations in a
> SELECT right, and our customers _do_ have some reasonable
> expectations about SQL compatibility.
>
> --
> Matthias Urlichs
>
>






Re: $sth->{mysql_insertid} not working in 3.23.38?

2001-06-22 Thread ryc

Woops sorry about that last message.

Try using the following:

my $id = $sth->{insertid};

That is what I had to change it to in order to get it working, despite what
the documentation says.

GL.

ryan

> > My ISP recently updated one of their servers to 3.23.38, and all of a
> sudden
> > a BUNCH of my code stopped working.   I use a number of insert queries
> (new
> > items, new search caches, etc), where I insert a blank record, and then
> take
> > notes of its id (usually "int unsigned auto_increment primary key"), to
be
> > used in a subsequent UPDATE statement (or INSERT, to a related table).
> >







Re: $sth->{mysql_insertid} not working in 3.23.38?

2001-06-22 Thread ryc

I had the same problem when upgrading... and I found that using


> My ISP recently updated one of their servers to 3.23.38, and all of a
sudden
> a BUNCH of my code stopped working.   I use a number of insert queries
(new
> items, new search caches, etc), where I insert a blank record, and then
take
> notes of its id (usually "int unsigned auto_increment primary key"), to be
> used in a subsequent UPDATE statement (or INSERT, to a related table).
>
> Most of my (Perl) code uses $sh->{insertid}, which errors out in 3.23 and
> tells me to replace it with $sh->{mysql_insertid}.  So I wrote my code to
> autodetect the version (since $sh->{mysql_insertid} doesn't work in 3.22,
> which is still installed on most of their servers, where most of our
clients
> are set up).  Anyway, it doesn't work.  All of the 3.22 sites still work
> great, but when I move the same code to 3.23 (with the minor change noted
> before), I get all kinds of errors because $sh->{mysql_insertid} seems to
> return a null value (or maybe a nullstring), which breaks INSERT
statements
> (and makes UPDATE statements stop working).
>
> Does anyone know anything about this?
>
> Thanks,
>
> Chris
>
>
>






innobase bug w/ 3.23.29???

2001-06-22 Thread ryc

Keep in mind that the version of Innobase I am using is compiled with a page
size of 64 KB.

I have a table of about 10 million rows (give or take; select count(*) from
tablename is slow on Innobase tables, so I don't run it often). Every day I
need to run a query that selects rows marked for processing... something
like:

select * from tablename WHERE col1=0 and col2=0 limit 100 (there is an
index (col1,col2) on the table).

The problem: say there are 88 rows that match that query and it is executed.
mysql will peg the processor at 100% and run pretty much forever (i.e. it ran
for many, many hours and still had not stopped, so I killed it). However, if
I run the query with a limit smaller than the number of rows that match the
WHERE clause, it executes in no time at all (as expected)... but once the
limit exceeds the number of matching rows, it seems to loop forever (no disk
activity, it just eats away at the CPU).

I tried creating a small test case to reproduce this but did not succeed. I
guess this is more a message for Heikki, but maybe someone else will benefit
from reading it.

Thanks,
ryan







begin and commit w/ perl

2001-06-18 Thread ryc

When beginning and committing a transaction using Perl and DBI, does the
commit need to come from the same statement handle as the begin, or does it
just need to come from the same database connection handle?

Thanks.

ryan






innodb admin questions

2001-06-16 Thread ryc

My application needs to insert between 200-500k rows about once a day. When
using MyISAM tables, I determined that the best way to get this done (while
still allowing users to perform SELECT statements on the table) was to use
mysqlimport with --low-priority. That way all the inserts get bundled into
large groups, and the low priority allows SELECT statements to continue.
However, I have now switched the table type to InnoDB and I am not sure what
the best way to insert these rows would be. There is:

a) continue using mysqlimport (without --low-priority??)... the question here
is whether mysqlimport will group the inserts into one large begin/commit
block, or whether each insert gets its own;
b) create the begin/insert/commit statements myself.

Which way would be the fastest and least abrasive on the server?
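Option (b) can be sketched as follows ($dbh, the table layout, and @rows are
assumed); wrapping the whole batch in a single transaction lets InnoDB write
one commit instead of one per row:

```perl
# One BEGIN ... COMMIT around the whole batch of inserts.
$dbh->{AutoCommit} = 0;
my $sth = $dbh->prepare("INSERT INTO tablename (col1, col2) VALUES (?, ?)");
$sth->execute(@$_) for @rows;   # @rows = ([v1, v2], [v1, v2], ...)
$dbh->commit;
$dbh->{AutoCommit} = 1;
```

For very large batches, committing every few thousand rows keeps the
transaction (and its rollback cost) bounded.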

Another question I have is regarding memory usage: will Innobase use any of
the key-buffer memory MySQL uses for MyISAM tables, or is the only memory
Innobase uses defined by innodb_buffer_pool_size?

Thanks!!

ryan






Re: Just a quick question about data conversions?

2001-06-16 Thread ryc

Yes, the server must be able to open the file. I don't know how you would
write a file path that accesses another machine, heh.

ryan


> In order to add data to mysql, using LOAD INTO, am I right in thinking
> that this file has to be local to the machine.  Therefore I have to ftp
> the file I want inserted to the machine running the MySQL server;
> Correct?
>







innodb text fields

2001-06-13 Thread ryc

I am aware that currently InnoDB tables cannot have arbitrarily large
text/varchar fields, and that support for this will be released sometime in
the month of June. The application I am working with requires text fields
(usually less than 64k), and I desperately need row-level locking. Either I
continue using MySQL, with InnoDB tables, or I have to switch to PostgreSQL.

I need to do something right away about this performance problem; I cannot
wait until InnoDB tables are released with the unlimited row length. I tried
to find out the maximum row length for an InnoDB table, but did not find it
in the documentation. If it isn't too much smaller than 64k I could switch
now and not have to worry about using PostgreSQL. Does anyone know what the
limit is? Any pointers would be appreciated.

ryan






Re: encrypt wierdness

2001-06-09 Thread ryc

For passwords it is good to use the SQL function PASSWORD(). This is done
like this:

to insert:
INSERT INTO users (username, password) VALUES ('username1234', PASSWORD('userspassword'));

to load:
SELECT * FROM users WHERE username = 'username1234' AND
password = PASSWORD('userspassword');

Hope this helps.

ryan

> Hi,
>
> > > > So where does mysql get its salt from? Is it a random salt?
> > This confused the hell our of me for around an hour!
>
> > > You should look MySQL manual not C crypt manpage ;). And yes, this is
> > > random salt and makes life little bit more secure.
>
> > Ok, so you can obtain a random result (thought that was what random()
> > was for), but still cannot understand how this could be usefull.
>
> If you take another look at the man page for the crypt() system call,
you'll
> notice that it says that "the first two characters represent the salt
> itself" when mentioning what constitutes the returned value.
>
> So, given this, you can consistently re-encrypt a string to compare
against
> the original by taking the first two characters and using them as the
salt.
> The example below demonstrates this.
>
> mysql> select encrypt('blahblah');
> +-+
> | encrypt('blahblah') |
> +-+
> | IIRggo.uD7.Xk   |
> +-+
> 1 row in set (0.00 sec)
>
> mysql> select encrypt('blahblah', 'II');
> +---+
> | encrypt('blahblah', 'II') |
> +---+
> | IIRggo.uD7.Xk |
> +---+
> 1 row in set (0.00 sec)
>
> > I use
> > encrypt to store password info in a database, but how do you compare the
> > user entered password with the one in the database if the results vary
> > the whole time? Please give me an application for this behaviour and I
> > will be happy :-)
>
> In your case, when comparing the password the user has entered against
what
> is in the database (an encrypted value) you first need to get the first
two
> characters of what is already in the database for that user. Something
along
> the lines of this should do the trick:
>
> SELECT * FROM users_table WHERE username = 'johndoe' AND passwd =
> ENCRYPT('secretpasswd', LEFT(passwd, 2));
>
> Regards,
>
> Basil Hussain
> ---
> Internet Developer, Kodak Weddings
> E-Mail: [EMAIL PROTECTED]
>
>






Re: Bug: mysql uses temp filesort instead of index

2001-06-03 Thread ryc

> does not and is quite fast.  Somehow the optimizer is getting confused
> by the "sid = 16".  I was thinking that perhaps there is some magic
> way of rephrasing the problem that MySQL would understand better.
> Failing that, what is the recommended way of making this query
> quicker?  Should I insert another index, for instance called
> negative_id that is always (4294967296 - id) or something like that,
> so I can ORDER BY negative_id instead of ORDER BY id DESC?
>
> Thank you for any advice or suggestions you might have.

Perhaps ORDER BY (column * -1), if the column is numeric, would be
sufficient. But that might also make the optimizer less willing to use the
index for sorting. Also, I think MySQL will use a filesort when the returned
result is large, but I could be wrong.

ryan






Re: Indexing problem with large database

2001-05-25 Thread ryc

With the correct indexes this query should run in less than a second.
Basically, you should create a composite index covering the columns used in
the WHERE clause. For instance, if you had a query ... WHERE A=X AND B=Y
AND C=Z (A, B, and C in the same table), you would create an index (A,B,C),
and it would make this query much faster for large data sets. It is a good
idea to put the most restrictive column first in the index. By restrictive I
mean the column with the largest number of distinct values: if we had the
index (A,B,C) and column A were mostly filled with X's, putting A first
would not narrow the search down much, because MySQL would have to check
every index entry that begins with X. Hope that was a clear enough
explanation.
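A minimal sketch of this advice, using SQLite via Python purely so the snippet is self-contained and runnable (the table and column names are invented; in MySQL you would add the index with ALTER TABLE ... ADD INDEX (A,B,C), which has the same shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER)")
# composite index, most restrictive (most distinct values) column first
conn.execute("CREATE INDEX idx_abc ON t (a, b, c)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 2, 3), (1, 2, 4), (5, 6, 7)])

# all three equality conditions can be satisfied by the one composite index
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 1 AND b = 2 AND c = 3"
).fetchall()
print(plan)
```

The query plan names idx_abc, showing the WHERE clause being answered through the composite index rather than a table scan.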

ryan


> **Example
> Query***
> SELECT DISTINCT badge_history.xact_date AS xact_date,
> badge_history.xact_time AS xact_time, badge_history.last_name AS
> last_name, badge_history.bid AS bid, badgests.cond_desc AS status,
> department.description AS department,
> badge_history.reader_desc AS reader_desc, area.description AS area,
> badge.expired_date AS expired, badge_history.xact_type AS
> xact_type, badge_history.tzcorr AS tzcorr FROM badge_history, badgests,
> department, area, badge LEFT JOIN
> smccm_user_vs_permitted_department ON badge_history.dept =
> smccm_user_vs_permitted_department.permitted_department
> WHERE smccm_user_vs_permitted_department.smccm_user = '1' AND
> badge_history.dept = department.id AND
> badge_history.area = area.id AND badge_history.status = badgests.id AND
> badge_history.source_host = area.source_host AND
> badge_history.bid = badge.bid
> End Example
> Query**
>
> Now, if we describe the statement:
>
> |badgests  |ALL|id|NULL|NULL|NULL|4|Usingtemporary|
>
> |badge_history |ref|
> ixbid,ixarea,ixdept,ixstatus|ixstatus|4|badgests.id|1048640||
>
> |department|eq_ref| ixid|ixid|4|badge_history.dept|1||
>
> |area  |eq_ref|
> ixid|ixid|68|badge_history.area,badge_history.source_host|1||
>
> |badge |ref|ixbid| ixbid|18|badge_history.bid|1||
>
> |smccm_user_vs_permitted_department |ref |permitted_department|
> permitted_department
> |4|badge_history.dept|16|whereused;Distinct|
>
> Any ideas on how to make this faster, it needs to run in less than 5
> minutes.  The machine is a dual pentiup 8XX Mhz with 900 MB Ram.  Is it
> possible to speed this up somehow...
>
> Thanks, and please reply to [EMAIL PROTECTED], as I don't subscribe to
> the list.
>
> --
> *
> Mike W. Baranski
> [EMAIL PROTECTED]
> [EMAIL PROTECTED]
>
> Web  : http://www.secmgmt.com
> Phone: 919-788-9200
> Fax  : 919-510-0037
>
> P.O. box 30099
> Raleigh, N.C. USA 27622
> *
>






Re: db index / theory OT(?)

2001-05-24 Thread ryc


I believe what you are talking about is an inverted text index, where a
word is used to look up the documents it occurs in. To create this index you
would write a program that takes a "document" (be it a row in the database,
a file, or whatever), splits it up into words, and inserts those words into
a table (assigning each unique word an index number). Then, for each word
index, it stores the list of documents the word occurs in. Getting results
back is just the inverse of this.
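As a rough sketch of that scheme (SQLite via Python just to keep it runnable; the table and column names here are my own invention):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tokens (token_id INTEGER PRIMARY KEY, token TEXT UNIQUE);
    CREATE TABLE doc_index (token_id INTEGER, doc_id INTEGER, word_count INTEGER);
""")

docs = {1: "the quick brown fox", 2: "the lazy dog"}
for doc_id, text in docs.items():
    words = text.split()
    for word in set(words):                       # one index row per unique word
        conn.execute("INSERT OR IGNORE INTO tokens (token) VALUES (?)", (word,))
        tid = conn.execute("SELECT token_id FROM tokens WHERE token = ?",
                           (word,)).fetchone()[0]
        conn.execute("INSERT INTO doc_index VALUES (?, ?, ?)",
                     (tid, doc_id, words.count(word)))

# inverse lookup: which documents contain the word 'the'?
hits = [r[0] for r in conn.execute("""
    SELECT doc_id FROM doc_index
    JOIN tokens USING (token_id)
    WHERE token = 'the' ORDER BY doc_id""")]
print(hits)  # [1, 2]
```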

One webpage I can refer you to is http://www.cs.mu.oz.au/mg/. Managing
Gigabytes is a book that covers full-text indexing, and to aid readers the
authors created source code that illustrates the points in the book. It is a
bit more complicated than what I described above.

For hit highlighting, usually a straight search/replace is done, wrapping
each search word in bold tags (e.g. replacing searchword with
<b>searchword</b>).

Hope this helps. I would be interested in your findings; could you please
send me the information when you finish gathering it? Thanks.

ryan



> Could anyone suggest some (specific) resources (books, articles,
> urls, whatevers) about indexing a database? I think what I'm looking
> for is what do you store in the index; how does it get in there and
> then how do you get the information out again?
>
> For example:
> How does hit highlighting (your keywords in bold on the search result
> page) work? I understand that you look up a word in the index and the
> index tells you that the word is found somewhere in some document,
> but how does the results page know how to make your word bold?
>
> Again, my apologies if this is off-topic!! Thanks for all of your
> insight to date. I've learned a lot from reading other peoples'
> questions. :)
>
> emma
> PS All of my findings are going into a summary document that I'm
> going to be giving to my team at work. Let me know if you're
> interested in receiving a copy when I'm done. :)








Re: count(*) on different tables

2001-05-21 Thread ryc

As far as I know, they should be the same. The one difference to be aware of
is that COUNT(colname) does not count rows where that column is NULL, while
COUNT(*) counts every row.
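The NULL behavior can be checked directly; this is standard SQL, so SQLite (used here via Python for a self-contained demo) and MySQL agree on it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (None,), (3,)])

star = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # counts all rows
col = conn.execute("SELECT COUNT(x) FROM t").fetchone()[0]   # skips NULL x
print(star, col)  # 3 2
```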

ryan

> Would count(*) show the same performance as count(1)???
>
> Siomara
>
> >Actually, as far as performance goes, it depends on table types.  Most of
> >the MySQL table handlers keep an internal count of record numbers.  Thus,
a
> >select count(*) from table; is as fast as it gets.
> >
> > > Well ...
> > >
> > > I don't know enough about mysql but it must allow you somehow to
specify
> >the
> > > column by number as in Oracle and other DBs
> > >
> > > SQL> select count(*) from product;
> > >
> > >   COUNT(*)
> > > --
> > > 10
> > >
> > > SQL> select count(1) from product;
> > >
> > >   COUNT(1)
> > > --
> > > 10
> > >
> > > Once you just want to count the rows the performance of your select
will
> >be
> > > much better if you specify a column rather than *.
> > >
> > > Siomara
> > >
> > > >Hi,
> > > >
> > > >is it possible to get *one* quick result with the rowcount of each
> >table
> >in
> > > >one database, without knowing the column-names?
> > > >
> > > >this does *not* work:
> > > >  select count(t1.*), count(t2.*)
> > > >from table1 t1, table2 t2
> > > >
> > > >Manual doesn't say anything about that.
> > > >
> > > >Greetings,
> > > >Ansgar Becker
> > > >







Re: Speed question

2001-05-16 Thread ryc

Changing to C++ is not likely to give you a noticeable speed difference,
because your bottleneck is not the code but the queries. With a proper
database design you should be able to achieve those results with one query,
and it should be fast given the right indexes.
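One way the single query could look for the film-archive example below, sketched with SQLite via Python so it runs on its own (the schema and the sample counts are illustrative, not the poster's actual data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prints (Title TEXT, MM TEXT, Archive TEXT)")
rows = ([("Green Mile", "35mm", "TRLR")] * 2
        + [("Green Mile", "16mm", "BCAM")]
        + [("Cast Away", "35mm", "")] * 3)
conn.executemany("INSERT INTO prints VALUES (?, ?, ?)", rows)

# one GROUP BY query replaces a per-title loop:
# counts per (Title, MM, Archive) for the whole list of titles at once
result = conn.execute("""
    SELECT Title, MM, Archive, COUNT(*) AS n
    FROM prints
    WHERE Title IN ('Green Mile', 'Cast Away')
    GROUP BY Title, MM, Archive
""").fetchall()
print(result)
```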

ryan


> I am searching to speed up an operation I have, which works - it just
> takes minutes to finish. The current script is written in pike (a C like
> scripting language). I believe that most of the overhead is the multiple
> queries. Would using C++ be significantly faster? (I'd have to learn
> some C before I could do this, so, the speed increase needs to be worth
> the effort).
>
> Let me explain the situation:
>
> Table 1 is defined as:
> MM;varchar(4)
> Title;varchar(48)
> Archive;varchar(4)
> record_num;int(8) unsigned zerofill
>
> I am given a list of Titles. With these Title's I need to search Table 1
> for the Title and output as follows:
>
> Green Mile 10 35mm TRLR  6 16mm BCAM
> Cast Away 4 35mm 20 BETA
>
> In the above example, Green Mile=Title 35mm=MM TRLR=Archive etc. So we
> have 10 35mm TRLR of Green Mile.
> Currently I am querying several times to get the result. This is slow
> because the list could be 50 Titles.
>
> Thanks for any advance you can give on this.
>
>
>
>
>






Re: Re[2]: MySQL FullText improvements in V4.0

2001-05-15 Thread ryc

It wasn't my original post; however, the problem with the full-text engine
built into MySQL is that, at the current time, it is very slow with many
records. I was inquiring whether this other method would be any faster.

> Excuse me but what do you mean by FullText ? Is this a full-text search
> engine in MySQL ???
>
> > -Original Message-----
> > From: ryc [SMTP:[EMAIL PROTECTED]]
> > Sent: Tuesday, May 15, 2001 3:29 PM
> > To: James Treworgy; Chris Nichols
> > Cc: [EMAIL PROTECTED]
> > Subject: Re: Re[2]: MySQL FullText improvements in V4.0
> >
> > What kind of speed can you expect to get from this after tbl_doc_index
> > fills
> > up with tens of millions of rows? Is this scheme suitable for that
> > magnitude
> > of data?
> >
> > Also, if you wish to generate a query that generates all documents that
> > contain token x or token y, would mysql neglect to use the keys? I
> > remember
> > reading that field1=x OR field2=x wont use the index because a general
way
> > of optimizing that query hasnt been found. Correct me if im wrong. AND
> > queries on the other hand should work well.
> >
> > thanks,
> > ryan
> >
> > > You can do this yourself, pretty easily by building an index in
> > > advance. I've been using the following technique to implement full
text
> > > searching since before it existed in MySQL in any form. Tokenize
> > > each unique word in a document and store the word & count in tables:
> > >
> > > tbl_tokens:
> > > token_id int
> > > token varchar
> > >
> > > tbl_doc_index:
> > > token_id int
> > > doc_id int
> > > word_count int
> > >
> > > Populate the tables from your document database either what documents
> > > are saved or in some other offline process. When someone searches on
> > > words, first convert to tokens using the first table and look up in
> > > the 2nd table using whatever search/join technique works best in
> > > your situation.
> > >
> > > Jamie
> > >
> > > At Monday, May 14, 2001 on 2:18:38 PM, you wrote:
> > >
> > > > I too am curious!  I think one feature that I'd really like to see
is
> > the
> > > > ability to tell the number of times a string appears inside another
> > > > string.  This would help a lot when trying to do search results
> > weighting.
> > >
> > > > -Chris
> > >
> > >
> > >






Re: Varchar binary field auto-trimmed??

2001-05-15 Thread ryc

VARCHAR columns, even with the BINARY attribute, have trailing spaces
trimmed when the value is stored; BLOB columns are stored byte-for-byte,
which is why TINYBLOB works here.

ryan

> Hi & Help!
> 
> I would like to store some short binary data (encrypted). I used a
> varchar(24) binary field for this.
> But it seems that MySQL cuts white-spaces from the end when storing the
> data in the table. When the encrypted string now has a white-space at
> end, MySQL will cut it and this will result in a wrong decryption (next
> time when the string is read and decrypted).
> 
> When using TinyBlob instead of varchar binary, all works fine. Is this a
> "feature" of varchar binary? In my opinion, binary data shouldn't be
> trimmed!
> 
> Please answer me via mail as I don't receive the mailing list.
> Thanks for your support!
> 
>  ... tobias wiersch
> 
> 
> 






Re: Re[2]: MySQL FullText improvements in V4.0

2001-05-15 Thread ryc

What kind of speed can you expect to get from this after tbl_doc_index fills
up with tens of millions of rows? Is this scheme suitable for that magnitude
of data?

Also, if you wish to write a query that returns all documents containing
token x or token y, would MySQL neglect to use the keys? I remember reading
that field1=x OR field2=x won't use the index, because a general way of
optimizing that query hasn't been found. Correct me if I'm wrong. AND
queries, on the other hand, should work well.

thanks,
ryan

> You can do this yourself, pretty easily by building an index in
> advance. I've been using the following technique to implement full text
> searching since before it existed in MySQL in any form. Tokenize
> each unique word in a document and store the word & count in tables:
>
> tbl_tokens:
> token_id int
> token varchar
>
> tbl_doc_index:
> token_id int
> doc_id int
> word_count int
>
> Populate the tables from your document database either what documents
> are saved or in some other offline process. When someone searches on
> words, first convert to tokens using the first table and look up in
> the 2nd table using whatever search/join technique works best in
> your situation.
>
> Jamie
>
> At Monday, May 14, 2001 on 2:18:38 PM, you wrote:
>
> > I too am curious!  I think one feature that I'd really like to see is
the
> > ability to tell the number of times a string appears inside another
> > string.  This would help a lot when trying to do search results
weighting.
>
> > -Chris
>
>
>






Re: Auto_increment value

2001-05-14 Thread ryc

Refer to the man page for myisamchk; its --set-auto-increment[=value] (-A)
option can do this.

ryan

> Hi,
>
> how I can change AUTO_INCREMENT value for one concrete table in mysql
> database? (e. g., this value you can get from 'show table status'). Thanx!
>
> oSup
>
>
> 
> e-mail: [EMAIL PROTECTED]
> ICQ: 10798264
> tel.: 0608/974782
> SMS: [EMAIL PROTECTED]
> 
>
>
>
>






Re: Hom many columns is too many?

2001-05-11 Thread ryc

Just because one table has many columns does not mean it is not normalized.
The original message stated that there are all kinds of different pieces of
information about a contact, from their first/last name to the high school
they graduated from. Given enough information about a person it would be
easy to fill 75 columns with DIFFERENT data. Sure, there may be a column to
store a phone number, but that does not mean that in order to add another
phone number for a particular contact you would insert a whole new row.
Either a) you decide not to keep more than one phone number per contact, and
use an UPDATE to change the phone number for that contact, or b) you create
a phone number table so you can have a one-to-many relationship (one row in
the contact table can have multiple rows in the phone number table).
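Option b) can be sketched like this (SQLite via Python for a runnable example; the table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contact (contact_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE phone (contact_id INTEGER, number TEXT);
""")
conn.execute("INSERT INTO contact VALUES (1, 'Joe Smith')")
# one contact, many phone numbers -- no duplicated contact row needed
conn.executemany("INSERT INTO phone VALUES (?, ?)",
                 [(1, '123-123-1222'), (1, '555-0100')])

numbers = [r[0] for r in conn.execute("""
    SELECT number FROM contact
    JOIN phone ON contact.contact_id = phone.contact_id
    WHERE name = 'Joe Smith'""")]
print(numbers)
```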

In case b, yes, some information is taken out of the contact table and put
into another table; however, that is just one column. For things like
mother's maiden name and other unique, SINGLE data elements, putting the
data in the contact table does not denormalize it. The only situations in
which I can think to create a separate table for this sort of information
are a) one contact may have many of column y, or b) a majority of the
contacts don't have information for column y, so you create a separate table
for column y to store the few rows you do have information for, so that your
contact table doesn't have a bunch of empty fields.

Hope this helps.

ryan

> Well with tables not normalized you will more then likely at some point in
> get redundant data at some point.  For instance I enter Joe smith with
phone
> of 123-123-1222 then Joe Smith buys a new home keeps the other home and
now
> has two active phone number with on table you would have to reenter
> information already in the database on Joe Smith unless you keep adding
more
> columns to accommodate each new phone number(this would be very poor
> database design).  Its basic normalizing of tables that will keep this
from
> happening.  Its is very dependent on the nature of the data on whether you
> need to break that table up or not.  I would find it very hard to believe
> that all the information needed for customer/contact data
> from there name all the way to where they went to high school could be
> stored in one table logically or more importantly efficiently.
>
> -Original Message-
> From: ryc [mailto:[EMAIL PROTECTED]]
> Sent: Friday, May 11, 2001 11:58 AM
> To: 'Richard Reina'; [EMAIL PROTECTED]
> Subject: Re: Hom many columns is too many?
>
>
> I dont think the problem was redudant data. He is talking about 75
distinct
> fields that no other table contains and ALL data is related to the
contact.
>
> It makes sence that a table with more columns would have a higher overhead
> when parsing the queries, but but other than that I dont _think_ there
> should be a major problem having a table with that many columns. However
75
> does seem like it is a very obscene amount for one table, so you may want
to
> consider creating one table that stores the contacts primary information
> (name, address, phone #, business name, ect... ) that is imparative to
> working with the contact, and then have another table that contains more
> detailed background information. This way if you need the information from
> the second table you can left join it, and if you dont need it you dont
have
> to. Or another scheme is only insert detailed data for the contacts you
have
> that information on, and leave the rest blank (in this case you wont be
able
> to use left join because if the column you join on is missing from the
> details table the query will return empty, even though the contact is in
the
> primary table). Either way there are trade offs, decide what is best based
> on how much detail information you have from the contacts, and how you
want
> to lay out your code.
>
> Hope that helps,
> ryan
>
> > With relational database design you can more then likely break that 75
> > column table down into many different tables.  Look up information on
> NORMAL
> > FORM  this will help you design an effective database.  Usually from
what
> I
> > know and have been taught 3rd normal form is as low a level as you want
to
> > go.  But this also depends on things such as how large your db will be.
> > Sometimes redundant data is not always a bad think.  In that case a
lower
> > normal form would be fine too.  Its an art not a science:)
> >
> > RH
> >
> > -Original Message-
> > From: Richard Reina [mailto:[EMAIL PROTECTED]]
> > Sent: Friday, May 11, 2001 1:26 PM
> > To: [EMAIL PROTECTED]
> > Subject: Hom many columns is too many?
> >
> >
> > I am designing an in house database app. for sales/contact management.

Re: Hom many columns is too many?

2001-05-11 Thread ryc

I don't think the problem was redundant data. He is talking about 75
distinct fields that no other table contains, and ALL the data is related to
the contact.

It makes sense that a table with more columns would have a higher overhead
when parsing queries, but other than that I don't _think_ there should be a
major problem having a table with that many columns. However, 75 does seem
like a very large number for one table, so you may want to consider creating
one table that stores the contact's primary information (name, address,
phone #, business name, etc.) that is imperative to working with the
contact, and then another table that contains more detailed background
information. This way, if you need the information from the second table you
can LEFT JOIN it, and if you don't need it you don't have to. Another scheme
is to insert detailed data only for the contacts you have that information
on, and leave the rest blank (in this case you will want a LEFT JOIN rather
than a plain join: a plain join returns nothing for a contact that has no
row in the details table, while a LEFT JOIN still returns the contact, with
NULLs for the missing columns). Either way there are trade-offs; decide what
is best based on how much detailed information you have on the contacts and
how you want to lay out your code.

Hope that helps,
ryan

> With relational database design you can more then likely break that 75
> column table down into many different tables.  Look up information on
NORMAL
> FORM  this will help you design an effective database.  Usually from what
I
> know and have been taught 3rd normal form is as low a level as you want to
> go.  But this also depends on things such as how large your db will be.
> Sometimes redundant data is not always a bad think.  In that case a lower
> normal form would be fine too.  Its an art not a science:)
>
> RH
>
> -Original Message-
> From: Richard Reina [mailto:[EMAIL PROTECTED]]
> Sent: Friday, May 11, 2001 1:26 PM
> To: [EMAIL PROTECTED]
> Subject: Hom many columns is too many?
>
>
> I am designing an in house database app. for sales/contact management.
> We've identified out about 75 things to know about a customer/contact
> from there name all the way to where they went to highschool.  Should
> all these attributes be in one table since they all describe the contact
> or should they be broken up into seperate tables like education,
> professional backround,  business backtound special interests?
>






Re: sophisticated DELETE query

2001-05-06 Thread ryc

If you wanted to do this with a SELECT you would use a LEFT JOIN or an
equivalent join. Glancing over the DELETE syntax in the documentation,
DELETE does not appear to support any such mechanism; in 3.23 it is not
possible to involve more than one table in a single DELETE query.
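A common two-step workaround on 3.23 (which also lacks subqueries) is to collect the matching keys with a join and then delete by key. A sketch, using the column names from the question quoted below:

```sql
-- Sketch for MySQL 3.23, which has no multi-table DELETE (that
-- arrived in 4.0) and no subqueries. Column names are taken from
-- the question quoted below.
SELECT t2.account_id
FROM table2 t2, table1 t1
WHERE t2.account_id = t1.account_id
  AND t1.active = 'n';

-- Collect the ids returned above in your application, then:
DELETE FROM table2 WHERE account_id IN (...);  -- list filled in by the app
```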

ryan

> I have two tables: table1 and table2.
> Both tables share the same 'account_id'
> column. In table1 there is a column
> 'active' which is always either 'y' (account
> active) or 'n' (account inactive).
>
> Now here is what I want to do...
>
> I want to delete all entries from table2
> with account_id matching account_id in
> table1 and having table1's active='n'.
>
> Is it possible to perform such a delete
> with one query (I know it is with SELECT)?
>
> What's the best approach here?
>
> Thanks for your help!
>
> -- Simon
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Performance Problems with Huge Table

2001-05-05 Thread ryc

I would suggest creating a new table to hold vendor information. Then remove
the varchar vendor field in the parts table and replace it with an integer
that references the vendor id in the vendor table you just created. This
should speed things up considerably. You can do a LEFT JOIN any time you
want vendor information included in queries involving the parts table.
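A sketch of the split (the vendor table name and columns here are invented; tbl_Parts is from the message below):

```sql
-- Hypothetical vendor table; tbl_Parts.Vendor varchar(30) becomes
-- a VendorID int referencing it.
CREATE TABLE tbl_Vendors (
  VendorID int NOT NULL AUTO_INCREMENT,
  Name varchar(30) NOT NULL,
  PRIMARY KEY (VendorID)
);

-- The slow SELECT DISTINCT(Vendor) turns into a scan of a tiny table:
SELECT Name FROM tbl_Vendors ORDER BY Name;

-- And the vendor name can be joined back in when needed:
SELECT p.PartNumber, v.Name
FROM tbl_Parts p LEFT JOIN tbl_Vendors v ON p.VendorID = v.VendorID;
```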

ryan

- Original Message -
From: "Carlos Fernando Scheidecker Antunes" <[EMAIL PROTECTED]>
To: "MySQL List" <[EMAIL PROTECTED]>
Sent: Saturday, May 05, 2001 4:53 PM
Subject: Performance Problems with Huge Table


I am having performance problems with my server with larger tables.

I Have a 512MB Pentium III Red Hat Linux 7 server running MySQL version
3.23.32 packaged by Red Hat

This server's /etc/my.cnf is as follows :

[mysqld]
pid-file=/var/run/mysqld/mysqld.pid
datadir=/var/lib/mysql
port=3306
socket=/var/lib/mysql/mysql.sock
set-variable=max_connections=2000
skip-locking
set-variable = key_buffer=384M
set-variable = max_allowed_packet=1M
set-variable = table_cache=512
set-variable = sort_buffer=2M
set-variable = record_buffer=2M
set-variable = thread_cache=8
set-variable = thread_concurrency=8 # Try number of CPU's*2
set-variable = myisam_sort_buffer_size=64M
log-bin
server-id = 1

[mysql.server]
user=mysql
basedir=/var/lib

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I have a table structure like this, including indexes :

# Table structure for table 'tbl_Parts'
#

CREATE TABLE tbl_Parts (
  Vendor varchar(30) NOT NULL default '',
  PartNumber varchar(20) NOT NULL default '',
  Suplier varchar(20) NOT NULL default '',
  Quantity int(11) default '0',
  Prico float default '0',
  Description varchar(50) NOT NULL default '',
  DateHour datetime NOT NULL default '-00-00 00:00:00',
  PRIMARY KEY (Vendor,PartNumber,Suplier),
  KEY index_data(DateHour),
  KEY index_Company(Suplier,DateHour),
  KEY index_description(Description,Vendor),
  KEY index_Vendor(Vendor)
) TYPE=MyISAM;

As you can see I have also an Index for the Vendor.

The problem is that when I do an statement (that I use pretty much) :

SELECT DISTINCT(Vendor) from tbl_Parts order by Vendor;

It takes up to 52 seconds to return it since my table tbl_Parts has
1.130.300 records.

This SQL statement is always ran with a PHP script so that the user can
Select the vendor and type the PartNumber he/she is looking for.

Based on my config, structure and situation, is there anyone who could
kindly help me on boost its performance? 52 seconds to return the SELECT
DISTINCT statement is very long.

By the way, my system has an SCSI HD which is quite fast for it.

Thank you all,

Carlos Fernando Scheidecker Antunes.






-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: windoze and batch files

2001-05-02 Thread ryc

Perhaps you can type the query in Notepad and then paste it into the mysql
client's window.

If that is not suitable and you still want the batch file scheme, the DOS
redirection just goes the other way: run "mysql -u ... < batchfile", or
equivalently "type batchfile | mysql -u ...".
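For example, with a hypothetical queries.sql:

```sql
-- queries.sql: statements to run as a batch from the DOS prompt with
--   type queries.sql | mysql -u user -p
-- (or: mysql -u user -p < queries.sql)
USE mydb;
SELECT COUNT(*) FROM mytable;
```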

ryan

> Hi all,
> unfortunately, I am forced to do some background mysql development work on a
> windoze box, which for reasons I won't go into here requires that I use the
> mysql console client.  As many would know this does not allow any commands
> to be edited in the console as does linux.
>
> I decided that it would be easier to write a batch file to do some work but
> can't find out how to get mysql to process the batch file.  Issuing the
> mysql > batch file in dos mode won't work.
>
> Any help appreciated as I'm getting sick and tired of entering long queries
> at the mysql prompt and then retyping the whole thing if I make a small
> syntax error.
>
> Stan Bordeaux
> Injury Prevention Policy Officer
> Public & Environmental Health Service
> 1/34 Davey St
> Hobart 7000
> ph: (03)6233 3774
> fax: (03)6223 1163
> email: [EMAIL PROTECTED]
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Problem with mysqldump....

2001-05-02 Thread ryc

To import the dump on server_B you should feed the file to the mysql client,
not mysqldump (mysqldump only writes dumps; it cannot load them):

cat filename | mysql -uusername -ppassword databasename

ryan

- Original Message -
From: "Rachel-MY" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, May 01, 2001 9:38 PM
Subject: Problem with mysqldump


Hi everyone,

I'm trying to backup all the databases from Server_A to Server_B.
I'd issue "mysqldump" command in server_A to backup every single database,
as below :-
> mysqldump -uusername -ppassword databases_name > file_name

In server_B, i create a new database using the command below :-
> mysqladmin -uusername -ppassword create database_name

Then... i issued another "mysqldump" command to put everything back to
server_B, as below:-
> mysqldump -uusername -ppassword databases_name < file_name

The problem is that the restored databases are all empty!! No tables were
copied!!
Anyone have an idea on this?



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: ORDER BY DESC optimization

2001-05-01 Thread ryc

> > I have a fairly large table (greater than 4 mil rows) on which I would
> > like to perform a query like:
> >
> > SELECT col1,col2 FROM table WHERE a=1 AND c=2 ORDER BY b DESC LIMIT 50;
> >
> > I have an index on the table INDEX1( a,b,c );
> >
> > When running the query as is, it takes around 4 seconds. If I omit the "DESC"
> > part the query runs in a fraction of a second.
> >
> > I would like the query to run faster when I use DESC. I looked at
> > myisamchk -R to sort by the 'b' index but I want to be sure it will speed up
> > my query since it may take a while to sort all 4 million rows.
>
> To improve things I'd suggest you drop the (a,b,c) index and create two new ones:
> one on (a,c)
> and another on (b)
> and then do the myisamchk -R on the second index
> You might also consider the --sort-records=# option

The reason I decided to use (a,b,c) as the index is that I read in
"How MySQL uses indexes" (http://www.mysql.com/doc/M/y/MySQL_indexes.html)
that with the index (a,b,c), if the WHERE clause has a=constant and you
ORDER BY b (the key_part2), it will use the index.

I may be wrong, but if I have an index just on (a,c) and a separate index on
(b), the b index will span the whole table (greater than 4 mil rows) and
thus the cardinality is higher, while (a,b,c) would only span the rows that
I am looking for, making the query faster. If this isn't right, I will go
ahead and try the other index method. The only reason I wouldn't just do it
is that changing the indexes on the table takes quite a while.

What I forgot to include in my first post regarding the optimization of the
ORDER BY: when the query contains "ORDER BY b DESC", EXPLAIN says it will
use filesort (and hence the query takes around 4 seconds). When I drop the
"DESC" part from the ORDER BY clause, EXPLAIN no longer says it will use
filesort and the query takes .1 seconds or so. I would like the DESC case
to be faster, as that is the query I need to use. Is --sort-records the
myisamchk option I want, or --sort-index?
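(A workaround sometimes used on 3.23, where ORDER BY ... DESC falls back to a filesort as described above: store a negated copy of b and order on it ascending. A sketch; neg_b, idx_desc and the table name are invented:)

```sql
-- Keep a negated copy of b so that DESC on b becomes ASC on neg_b,
-- which the index can satisfy without a filesort.
ALTER TABLE mytable ADD neg_b INT, ADD INDEX idx_desc (a, neg_b, c);
UPDATE mytable SET neg_b = -b;

-- Equivalent to ORDER BY b DESC:
SELECT col1, col2 FROM mytable
WHERE a = 1 AND c = 2
ORDER BY neg_b
LIMIT 50;
```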

Thanks for the help.

ryan


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




ORDER BY DESC optimization

2001-04-30 Thread ryc

I have a fairly large table (greater than 4 mil rows) on which I would like
to perform a query like:

SELECT col1,col2 FROM table WHERE a=1 AND c=2 ORDER BY b DESC LIMIT 50;

I have an index on the table INDEX1( a,b,c );

When running the query as is, it takes around 4 seconds. If I omit the "DESC"
part the query runs in a fraction of a second.

I would like the query to run faster when I use DESC. I looked at
myisamchk -R to sort by the 'b' index, but I want to be sure it will speed up
my query, since it may take a while to sort all 4 million rows.

Does anyone have guidance on how to accomplish this? Is myisamchk -R what I
want?

Thanks.

ryan


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Berkelely & MYSQL

2001-04-23 Thread ryc

MySQL itself will run just fine; you only need the patched version of
Berkeley DB if you want transaction (BDB table) support.

ryan


> Hi there
>
> I have managed to download the "Berkeley for Windows" installation, however
> I have noted that MYSQL will not run without a patched version of
> berkeley's. Where can I obtain such an installation to run on Windows?
>
> Kind Regards
> Warren
>
>
> ~
> Warren van der Merwe
> Software Director
> PRT Trading (Pty) Ltd t/a RedTie
> Cell (083) 262-9163
> Office (031) 767-0249
>
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Error codes

2001-04-23 Thread ryc

Type "perror error_number" at the shell prompt; it prints what the error
codes mean.

ryan


> Hello,
>
> A few days ago I asked if anyone knows the numeric values of the
> Mysql error codes.  Nobody responded, so I looked through the source
> code to find the answer.  This info would be valuable if, for
> example, you wanted to programmatically handle certain types of
> errors.
>
> In the source distribution I found the file Docs/mysqld_error.txt,
> which I'm including here just in case anybody else had the same
> question.
>
> =
>
>
> /* Copyright Abandoned 1997 TCX DataKonsult AB & Monty Program KB & Detron
HB
> This file is public domain and comes with NO WARRANTY of any kind */
>
> #define ER_HASHCHK 1000
> "hashchk",
> #define ER_NISAMCHK 1001
> "isamchk",
> #define ER_NO 1002
> "NO",
> #define ER_YES 1003
> "YES",
> #define ER_CANT_CREATE_FILE 1004
> "Can't create file '%-.64s' (errno: %d)",
> #define ER_CANT_CREATE_TABLE 1005
> "Can't create table '%-.64s' (errno: %d)",
> #define ER_CANT_CREATE_DB 1006
> "Can't create database '%-.64s'. (errno: %d)",
> #define ER_DB_CREATE_EXISTS 1007
> "Can't create database '%-.64s'. Database exists",
> #define ER_DB_DROP_EXISTS 1008
> "Can't drop database '%-.64s'. Database doesn't exist",
> #define ER_DB_DROP_DELETE 1009
> "Error dropping database (can't delete '%-.64s', errno: %d)",
> #define ER_DB_DROP_RMDIR 1010
> "Error dropping database (can't rmdir '%-.64s', errno: %d)",
> #define ER_CANT_DELETE_FILE 1011
> "Error on delete of '%-.64s' (errno: %d)",
> #define ER_CANT_FIND_SYSTEM_REC 1012
> "Can't read record in system table",
> #define ER_CANT_GET_STAT 1013
> "Can't get status of '%-.64s' (errno: %d)",
> #define ER_CANT_GET_WD 1014
> "Can't get working directory (errno: %d)",
> #define ER_CANT_LOCK 1015
> "Can't lock file (errno: %d)",
> #define ER_CANT_OPEN_FILE 1016
> "Can't open file: '%-.64s'. (errno: %d)",
> #define ER_FILE_NOT_FOUND 1017
> "Can't find file: '%-.64s' (errno: %d)",
> #define ER_CANT_READ_DIR 1018
> "Can't read dir of '%-.64s' (errno: %d)",
> #define ER_CANT_SET_WD 1019
> "Can't change dir to '%-.64s' (errno: %d)",
> #define ER_CHECKREAD 1020
> "Record has changed since last read in table '%-.64s'",
> #define ER_DISK_FULL 1021
> "Disk full (%s). Waiting for someone to free some space",
> #define ER_DUP_KEY 1022
> "Can't write, duplicate key in table '%-.64s'",
> #define ER_ERROR_ON_CLOSE 1023
> "Error on close of '%-.64s' (errno: %d)",
> #define ER_ERROR_ON_READ 1024
> "Error reading file '%-.64s' (errno: %d)",
> #define ER_ERROR_ON_RENAME 1025
> "Error on rename of '%-.64s' to '%-.64s' (errno: %d)",
> #define ER_ERROR_ON_WRITE 1026
> "Error writing file '%-.64s' (errno: %d)",
> #define ER_FILE_USED 1027
> "'%-.64s' is locked against change",
> #define ER_FILSORT_ABORT 1028
> "Sort aborted",
> #define ER_FORM_NOT_FOUND 1029
> "View '%-.64s' doesn't exist for '%-.64s'",
> #define ER_GET_ERRNO 1030
> "Got error %d from table handler",
> #define ER_ILLEGAL_HA 1031
> "Table handler for '%-.64s' doesn't have this option",
> #define ER_KEY_NOT_FOUND 1032
> "Can't find record in '%-.64s'",
> #define ER_NOT_FORM_FILE 1033
> "Incorrect information in file: '%-.64s'",
> #define ER_NOT_KEYFILE 1034
> "Incorrect key file for table: '%-.64s'. Try to repair it",
> #define ER_OLD_KEYFILE 1035
> "Old key file for table '%-.64s'; Repair it!",
> #define ER_OPEN_AS_READONLY 1036
> "Table '%-.64s' is read only",
> #define ER_OUTOFMEMORY 1037
> "Out of memory. Restart daemon and try again (needed %d bytes)",
> #define ER_OUT_OF_SORTMEMORY 1038
> "Out of sort memory. Increase daemon sort buffer size",
> #define ER_UNEXPECTED_EOF 1039
> "Unexpected eof found when reading file '%-.64s' (errno: %d)",
> #define ER_CON_COUNT_ERROR 1040
> "Too many connections",
> #define ER_OUT_OF_RESOURCES 1041
> "Out of memory;  Check if mysqld or some other process uses all
> available memory. If not you may have to use 'ulimit' to allow mysqld
> to use more memory or you can add more swap space",
> #define ER_BAD_HOST_ERROR 1042
> "Can't get hostname for your address",
> #define ER_HANDSHAKE_ERROR 1043
> "Bad handshake",
> #define ER_DBACCESS_DENIED_ERROR 1044
> "Access denied for user: '%-.32s@%-.64s' to database '%-.64s'",
> #define ER_ACCESS_DENIED_ERROR 1045
> "Access denied for user: '%-.32s@%-.64s' (Using password: %s)",
> #define ER_NO_DB_ERROR 1046
> "No Database Selected",
> #define ER_UNKNOWN_COM_ERROR 1047
> "Unknown command",
> #define ER_BAD_NULL_ERROR 1048
> "Column '%-.64s' cannot be null",
> #define ER_BAD_DB_ERROR 1049
> "Unknown database '%-.64s'",
> #define ER_TABLE_EXISTS_ERROR 1050
> "Table '%-.64s' already exists",
> #define ER_BAD_TABLE_ERROR 1051
> "Unknown table '%-.64s'",
> #define ER_NON_UNIQ_ERROR 1052
> "Column: '%-.64s' in %-.64s is ambiguous",
> #define ER_SERVER_SHUTDOWN 1053
> "Server shutdown in progress",
> #define ER_BAD_FIELD_ERROR 1054
> "Unknown column '%-.64s' in '%-.64s'",
> #define ER

auto_increment dissapearing

2001-04-22 Thread ryc

I have an auto_increment field in a table, and after a few operations the
flag has gone away. This is using mysql 3.23.36 on OpenBSD. Here is how it
happened:

I am using perl with DBI to interface with mysql. I have been inserting
"NULL" into this field to allow mysql to fill in the auto_increment value
for me. Eventually I started using mysqlimport to put records into this same
table. When the text file had "NULL" for the auto_increment field,
mysqlimport would count that as a warning. Wanting to get mysqlimport to run
warning free I tried changing the values from NULL to 0 for the
auto_increment field. This worked great.

Sometime a week later, after many, many imports, I changed the perl code to
insert 0 instead of "NULL". Instead of working as advertised, the insert
resulted in a row with id set to "0" rather than the next auto_increment
value. I deleted the row with id 0 and tried running mysqlimport again, only
to find it will not insert anything because of a duplicate key entry "0"
(even though the row has been deleted). I ran myisamchk and it said the
table was fine. After running myisamchk I noticed the column no longer lists
"auto_increment" when doing a "describe table". It just went away.

When I try to do "select * from table where id = 0" I get error 127,
record-file is crashed.

Does anyone know how I can fix this without losing any data or changing the
values in the auto_increment field? Did I do something wrong to screw up the
table? How can this be avoided in the future?

Thanks in advance.
ryan


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: 64MB Limit on MyISAM indexes?

2001-04-20 Thread ryc

You need to raise the "MAX_ROWS" option on the table; it is covered under
ALTER TABLE in the documentation. Something like "ALTER TABLE tablename
MAX_ROWS=1000000000" is needed for tables that grow larger than the defaults
allow.
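A sketch (tablename is a placeholder; see ALTER TABLE in the manual for the exact options):

```sql
-- Raising MAX_ROWS makes MyISAM use bigger internal pointers, so the
-- data and key files can grow past the default limits.
ALTER TABLE tablename MAX_ROWS=1000000000;
```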

ryan

> I've got tables in ISAM format whose indexes are way over 64 MB.  However,
> when I try to create a MyISAM table with an index that big I get errors and,
> confirming that with myisamchk, I get:
>
> myisamchk: warning: Keyfile is almost full,   67107839 of   67107839 used
>
> Which is essentially 64MB.  Are MyISAM tables able to handle large indexes?
> I hope I don't have to go back to ISAM tables...
> Any suggestions?
>
> Thanks,
> Jeff Brewer
>
> Spam filter bypass: (database,sql,query)
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Efficiency of MEDIUMTEXT columns?

2001-04-18 Thread ryc

> I have some VARCHAR columns (name, e-mail address and subject) that are
> used in every query, though. Should I change them to CHAR(255) so that I
> can do this? (On a related note, does anyone know whether the RFC for
> e-mail messages limit the subject length/name length/e-mail address length
> to a certain number of characters? What is this number?)

I would suggest creating a separate table for user info only, and keeping a
userid field in the message table that corresponds to the user-info row
holding the name and email. That way, if someone posts 100 messages, you
only store their email address once. The subject field is a bit more tricky.
You COULD change it to CHAR(255), but that would waste a lot of space,
seeing that most subjects are much shorter. You can instead move the subject
into the table that holds the varchar bodies. This won't hurt anything,
because that table already has variable-length rows from the body column;
adding the subject won't change that.

> Also, is it a good idea to make a field called size and use that instead
> of calling length(message) (message is MEDIUMTEXT) when I don't need to
> retrieve message, or does MySQL have length(message) cached?

Yes, definitely. Although it is tempting to calculate the length/count/other
stats on the fly using database functions, it is not scalable. For any
stat-type field like length (in your case, size), ask yourself how often it
changes. If it doesn't change often, or at all, it is much better to
calculate it once and 'cache' the result in the database. Do the same for
the number of replies (i.e. don't calculate them on the fly). Otherwise you
are going to find yourself with 500k rows, and counting replies isn't so
fast anymore.
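A sketch of the 'cache it in a column' idea (column and table names invented):

```sql
-- Store the stats once instead of recomputing them per query.
ALTER TABLE messages ADD size INT, ADD reply_count INT DEFAULT 0;

-- Set size when the message is inserted...
UPDATE messages SET size = 10240 WHERE messageid = 1;

-- ...and bump the counter as replies arrive, instead of COUNT(*):
UPDATE messages SET reply_count = reply_count + 1 WHERE messageid = 1;
```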

> So it would appear that retrieving a message (the message that I used in
> this test is 300k, about the largest that is in the archive) from the
> database is faster than loading it from a disk file. However, when I have
> to split() the message into a list (one entry per line), it becomes
> slower.  Any idea how I can split it more efficiently? (My code has to
> process each line individually in order to hilight text from quoted
> replies; e.g. look at the different colors in
> http://qd.anifics.com/read.asp?num=32)

I can think of two solutions. Instead of splitting it into lines on the
newline, you can do a global replace using \n as an anchor: search for "\n>"
and replace it with the same quote marker preceded by your highlighting
markup. I am not sure this would be much faster than doing a split; you can
try it out. But again, thinking scalability, you might want to do this
conversion once and store the result in the database. Then you don't have to
parse it into lines after retrieving it from the db; you can just spit it
out, because it already has the color info in it.

ryan


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Efficiency of MEDIUMTEXT columns?

2001-04-18 Thread ryc

Messages that are around 10k should be just fine, as long as they are the
majority.

A few tips for you... Store all the varchar/text columns for each message in
a separate table, so that the primary message table has only fixed-length
fields (i.e. messageid, size, # of replies, userid, bodyid, ...). Then use
the "bodyid" field as an index into the table that contains the varchar/text
columns (you can do a LEFT JOIN on it when you need the bodies). This way it
will be very fast for mysql to find a row in the primary message table. It
will also help you remember not to transfer the body unless you need it (try
to make your queries return the fewest rows you can live with, because it
reduces the strain on the server).

One thing to keep in mind about using mysql for something like this under
heavy load: inserts lock the table, so you cannot perform selects during
that time (same with updates). So if you are dumping/posting a lot of
messages into the table, select performance is going to suffer. If you make
the inserts low priority and have a bunch of load, they may not get inserted
for quite a while. The Innobase tables don't have this problem, but I don't
think they support MEDIUMTEXT column types because they have a row size
limit (correct me if I'm wrong).
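A sketch of that layout (names are illustrative, not a prescribed schema):

```sql
-- Fixed-length rows only in the primary table...
CREATE TABLE message (
  messageid INT NOT NULL PRIMARY KEY,
  userid    INT NOT NULL,
  size      INT NOT NULL,
  replies   INT NOT NULL DEFAULT 0,
  bodyid    INT NOT NULL
);

-- ...all variable-length columns off to the side.
CREATE TABLE message_body (
  bodyid  INT NOT NULL PRIMARY KEY,
  subject VARCHAR(255) NOT NULL,
  body    MEDIUMTEXT NOT NULL
);

-- Pull the body in only when it is actually needed:
SELECT m.messageid, b.subject, b.body
FROM message m LEFT JOIN message_body b ON m.bodyid = b.bodyid
WHERE m.messageid = 42;
```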

Hope this helps.

ryan


> Is it efficient to store things in MEDIUMTEXT columns?
>
> I am developing a web based system (using perl and MySQL) for viewing an
> archive of mailing list messages. In my database, I have one row per
> message. The column called "message" is of type MEDIUMTEXT. My perl code
> retrieves it using $sth->fetchrow_hashref and then prints it. Message size
> is typically ~10K; sometimes it's as high as a few hundred K.
>
> Here's an example URL from my site:
>
> http://qd.anifics.com/browse.asp?y=1995&m=12&d=30
>
> I'm wondering if it's a good idea to use MEDIUMTEXT like this, or should I
> put the message body into individual separate files? It runs fine right
> now, but I'm worried that it won't scale up under significant load.
>
> -Philip Mak ([EMAIL PROTECTED])
>
>
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




query memory usage

2001-04-12 Thread ryc

When performing a query that will return a large dataset (by large I mean
100k+ rows), is it more efficient (memory-wise) to use the results as you
fetch them, or to fetch all the results into an array, free the statement
handle, and then process from the array? What about performance-wise? I am
using perl w/ DBI, but I assume it would be the same if I were using the C
API as well.

I have tried doing "$sth->{"mysql_use_result"} = 1" to see how using
mysql_use_result changes the memory/performance usage but I get odd errors
like:

Issuing rollback() for database handle being DESTROY'd without explicit
disconnect()

and

DBI::db=HASH(0x14652c)->disconnect invalidates 1 active statement handle
(either destroy statement handles or call finish on them before
disconnecting)

despite the code working fine without the mysql_use_result line above. Is
this a known mysql DBI bug?

Does anyone have advice on how to process such large queries?

Thanks,
ryan


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: SELECT Calculation

2001-04-12 Thread ryc

What you wrote will work just fine. It is very easy to create a test case
and see whether it works, which saves you the trouble of posting to the list.

mysql> create table test (a int, b int, c int);
Query OK, 0 rows affected (0.09 sec)

mysql> insert into test values (2,3,5);
Query OK, 1 row affected (0.00 sec)

mysql> insert into test values (3,3,5);
Query OK, 1 row affected (0.00 sec)

mysql> select * from test where a+b=c;
+------+------+------+
| a    | b    | c    |
+------+------+------+
|    2 |    3 |    5 |
+------+------+------+
1 row in set (0.00 sec)

> Can someone tell me if it is possible to have a calculation in a SELECT
> statement.
>
> EG
>
> SELECT * FROM (table_name) where (col_1 + col_2 = col_3)
>
> I am using MySQL 3.23.xx and PHP4
>
> I only want to display those fields where the sum is true.
> Kindest Regards,
>
> Steve Lowe
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: How to loada 28MB LONGTEXT data?

2001-04-12 Thread ryc

# perror 139
Error code 139:  Unknown error: 19
139 = Too big row (>= 16 M)

This means the 28 MB row you tried to load is bigger than the maximum
allowed packet size. You will want to raise the max_allowed_packet server
variable (e.g. -O max_allowed_packet=32M on the command line, or the
equivalent line in my.cnf).

The duplicate entry error can probably be fixed by running myisamchk on the
table.
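For example, the equivalent my.cnf entry (3.23-era set-variable syntax, matching the config shown earlier in this digest):

```ini
[mysqld]
set-variable = max_allowed_packet=32M
```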

Hope this helps.

ryan

> I have some problem in loading a 28 MB LONGTEXT data into a mysql database
> using "LOAD DATA INFILE ...". At the beginning, the server returns me the
> error message as below:
> message as below:
> ERROR 1030: Got error 139 from table handler
>
> I tried the same command again, then I got another error message:
> ERROR 1062: Duplicate entry 'Chr1_A_thaliana' for key 1
>
> But, there is no entries in the table. Could you give me any suggestion?
>
> Thank you in advance.
>
> Wei Zhu
> _
> Get your FREE download of MSN Explorer at http://explorer.msn.com
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Best way to add over 5,000 mp3 file names to database

2001-04-10 Thread ryc

You can write a script that formats all the file names into a text file and
use mysqlimport or "LOAD DATA INFILE" to import them into the database. Or
you can use a scripting language like perl to gather the file names and
insert them into the database directly.
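A sketch of the first approach (paths, the table name and the column layout are all illustrative): generate one tab-separated line per file with find and awk, then feed the resulting file to mysqlimport.

```shell
# Sketch: build a LOAD DATA-ready text file from a /mp3/artist/album
# tree. Paths, table name and column layout are illustrative.
set -e
top=$(mktemp -d)
mkdir -p "$top/mp3/Artist/Album"
touch "$top/mp3/Artist/Album/song1.mp3" "$top/mp3/Artist/Album/song2.mp3"

# One tab-separated line per file: artist, album, filename.
find "$top/mp3" -name '*.mp3' | sort |
  awk -F/ '{ printf "%s\t%s\t%s\n", $(NF-2), $(NF-1), $NF }' \
  > "$top/songs.txt"

cat "$top/songs.txt"

# The file can then be loaded with mysqlimport (table "songs" assumed,
# since mysqlimport derives the table name from the file name):
#   mysqlimport --fields-terminated-by='\t' dbname "$top/songs.txt"
```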

hope that helps.

ryan


- Original Message -
From: "David" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, April 09, 2001 7:49 PM
Subject: Best way to add over 5,000 mp3 file names to database


> I have a hard drive dedicated to mp3 files that are stored in the
> following format: /mp3/artist/album/song1.mp3.  What is the best way to
> add these file names to a database.  I have been adding a few via a web
> interface but can only do one at a time which is not going to work.  I
> also tried just listing the files in each directory and redirecting
> output to a text file and loading the database from the text file.
> There has to be a better way.
>
> Running Linux with latest MySQL rpm
>
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Importing

2001-04-02 Thread ryc

I believe the problem is with your use of "LOCAL". With LOCAL, the file is
read by the client and sent over the connection, which only works if
local-infile support is enabled on both the client and the server. Without
LOCAL, the server reads the file from its own filesystem, so the file must
exist on the machine running mysqld. Make sure the file lives on whichever
machine is doing the reading; if it is already on the database server, try
removing the "LOCAL" part. You might also want to enable compression,
depending on how fast the link between the two computers is.

ryan


> I'm tring to import a text file to mysql. The problem I am having is
> that the mysql database is located on a seperate server from the web
> site. I can do query's and inserts but I can not get the mysql server to
> grab a txt file from the web server and load it into the database. I
> have been using the following code:
>
> $query = "LOAD DATA LOCAL INFILE '$tmp\\$file1_name' ";
>  $query .= "INTO TABLE usr FIELDS ";
>  $query .= "TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n'
> (name,email,login,active,defaultpw)";
>
> The site is located on an NT server and the database is on a UNIX
> server. Not sure if that matters or not.
>
> Any help would be great..
>
> Ian
>






Re: Strange error messages

2001-03-10 Thread ryc

I have been getting the same error message, but using Perl and DBI. One
possible cause, though I haven't been able to pinpoint it, is a client
program dying without closing its connection cleanly (for example because of
a bad query, or because the mysql server got a SIG11). I don't know for sure.

Anyone else have this problem? Anyone know what causes it or if it can be
safely ignored?

ryan


> Hello
>
> I get a lot of these in my hostname.err file.  When the load is high i
> get around 10 every minute.
> 010310 13:01:51  Aborted connection 215 to db: 'XXX' user: 'XXX' host:
> `XXX' (Got an error reading communication packets)
>
> What does it mean? Everything seems to work ok, except this strange
> error message.  The selects and updates works fine.
>
> MySql: 3.23.32
> I am connecting from a Java with the mm driver. (ver 2.0.2)
>
> Is it a MySql or a JDBC problem?
> Would it help to upgrade the server to the most recent? If yes, whats
> the best way to upgrade the server? It's in production and I can't stop
> the server for more than a few seconds...
>
> T2






Re: changed column type

2001-03-09 Thread ryc

You are expecting an awful lot from mysql to remember data that it never
recorded. When you inserted something longer than 50 characters into the
varchar(50) column, it silently dropped the rest of the string so that it
would fit. Changing the column size will not bring back the data.

ryan


> I had a column that was set to varchar(50) which ended up not being enough
> space. So I went in and did a:
> alter table main change description description varchar(255) not null;
> It said everything went ok, I reimported the text file, but the words stop
> after 50 characters still did I miss something or do I have to totally
> delete the column and readd it
>
> Thanks,
> Scott







Re: How can i duplocate a database???? i 've tried mysqldump can getit to work ANy ideas??

2001-03-09 Thread ryc

Try
mysqldump -u username -p --opt databasename > dumpfile.sql

To get it back into another mysql server, first create the database you want
to put it in, and then:
cat dumpfile.sql | mysql -u username -p databasename

Hope that helps.

ryan


- Original Message -
From: "Jorge Cordero" <[EMAIL PROTECTED]>
To: "mysql" <[EMAIL PROTECTED]>
Sent: Thursday, March 08, 2001 9:30 PM
Subject: How can i duplocate a database i 've tried mysqldump can get it
to work ANy ideas??


I am trying to make several copies of a database i have tried
 mysqldump -databasename can some one throw a piece of code please;

Thanks







query with OR

2001-03-07 Thread ryc

I am having trouble getting this query to run faster:

SELECT * FROM t1 WHERE col1 = 'text' OR col2 = 'text';

Here is the output of the explain for that query

+-------+------+---------------+------+---------+------+------+------------+
| table | type | possible_keys | key  | key_len | ref  | rows | Extra      |
+-------+------+---------------+------+---------+------+------+------------+
| users | ALL  | col1,col2     | NULL |    NULL | NULL | 1321 | where used |
+-------+------+---------------+------+---------+------+------+------------+

Here is the "show index from t1":

+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
| users |          1 | col1_2   |            1 | col1        | A         |      124114 | NULL     | NULL   |         |
| users |          1 | col1_2   |            2 | col2        | A         |      124114 | NULL     | NULL   |         |
| users |          1 | col2     |            1 | col2        | A         |      124114 | NULL     | NULL   |         |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+

If I split the query into two, i.e.:
SELECT * FROM t1 WHERE col1 = 'text'; SELECT * FROM t1 WHERE col2 = 'text';
both complete in no time at all; however, the query with the OR takes a few
seconds.

How can I make the query take advantage of the indexes better?
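One rewrite worth knowing about is to split the OR into two index-friendly SELECTs joined with UNION, so each branch can use its own single-column index. Note that MySQL only gained UNION in version 4.0, so this would not have helped on the 3.23 servers of this era; the sketch below uses SQLite via Python purely to show the shape of the rewrite (table and index names are invented).

```python
import sqlite3

# In-memory stand-in for the table above; SQLite here only demonstrates
# the rewrite, the same SQL shape applies to MySQL 4.0+.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT)")
conn.execute("CREATE INDEX idx_col1 ON t1 (col1)")
conn.execute("CREATE INDEX idx_col2 ON t1 (col2)")
conn.executemany("INSERT INTO t1 (col1, col2) VALUES (?, ?)",
                 [("text", "other"), ("other", "text"), ("other", "other")])

# Each SELECT can use its own index; UNION removes duplicates, so a row
# matching both conditions still appears only once.
rows = conn.execute("""
    SELECT * FROM t1 WHERE col1 = 'text'
    UNION
    SELECT * FROM t1 WHERE col2 = 'text'
""").fetchall()
print(len(rows))  # -> 2
```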

Thanks for the help!

ryan






Re: Error Out of Nowhere?

2001-03-05 Thread ryc

You might want to check whether the server is still running; it could have
crashed and burned, in which case you would get that error when trying to
connect.

ryan

> I was looking through my website and everything was fine and then all of a
> sudden I got this error:
>
> ERROR 2002: Can't connect to local MySQL server through socket
> '/tmp/mysql.sock' (111)
>
> It was working fine, and now for some reason won't connect? Any
> suggestions how to fix this and why it would happen? Any help appreciated.
> Thanks in advance.







Re: [Q] DBI connect with MySQL

2001-03-01 Thread ryc

You will get errors like these if your script does not print a header before
emitting any other output. It is usually best to print the header at the very
beginning of the script so you do not have to worry about it; you do not have
to wait until you are about to print some HTML:

$output->header();

Also, if your script has parse errors or something similar, you will get the
same error.
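The same rule applies in any CGI language: the header lines plus a blank line must reach the web server before any body output. A tiny Python sketch of the idea (the helper name is made up):

```python
def cgi_response(body):
    """Build a CGI response: header lines, a blank line, then the body.

    If anything is printed before the header, the web server sees a
    malformed response and reports a 500 Internal Server Error.
    """
    header = "Content-Type: text/html\r\n\r\n"
    return header + body

page = cgi_response("<html><body>hello</body></html>")
```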

Hope this helps.

ryan



> Hi,gurus,
> I am a novice at MySQL. When I try to run a sample program in the book of
> "MySQL & mSQL" I get a error message from my apache server as follows,
> *
> Internal Server Error
> The server encountered an internal error or misconfiguration and was
> unable to complete your request.
>
> Please contact the server administrator, [EMAIL PROTECTED] and inform
> them of the time the error occurred, and anything you might have done that
> may have caused the error.
>
> More information about this error may be available in the server error
> log.
> *
>
> The sample program is
>
> #!/usr/bin/perl -w
>
> use strict;
> use CGI qw(:standard);
> use CGI::Carp;
> use DBI;
> CGI::use_named_parameters(1);
>
> my ($db);
> my $output = new CGI;
>
> $db = param('test') or die("Could not find database!");
> my $dbh = DBI->connect("DBI:mysql:$db:$server",root,passward );
>
> if (not $dbh) {
> print header, start_html('title'=>"Information on $host => $db",
> 'BGCOLOR'=>'white');
>
> print <<END_OF_HTML;
> $host
> $db
> For the following reasons the connection failed.
> $DBI::errstr
>
> END_OF_HTML
> exit(0);
> }
>
> print header, start_html('title'=>"Information on $host => $db",
> 'BGCOLOR'=>'white');
> print <<END_OF_HTML;
> $host
> $db
>
> Tables
>
> END_OF_HTML
>
> my @tables = $dbh->func( '_ListTables' );
> foreach (@tables) {
>  print "$_\n";
> }
> print <<END_OF_HTML;
>
> END_OF_HTML
> exit(0);
> 
>
> Could someone tell me what is wrong? Thanks a lot.
>
> Regards,
>
> --Wen
> [EMAIL PROTECTED]
>






Re: mysql is really a slow rdbms

2001-02-25 Thread ryc

It is not that mysql is slow; it's your database design and the fact that you
have 2.5 million records in the table.

It is not clear from your description what you are trying to do, so giving
advice is that much harder. If you are creating an index on the column hari,
it will take a long time; however, that time is negligible because you only
pay it once, and never again. And if you are talking about a query that uses
the column hari being slow, making that column a key in the table will speed
up your query considerably.

Hope this helps.
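For reference, here is the idea in miniature (SQLite in Python as a stand-in; the trans table name and the day values are invented). The commented-out line shows the MySQL spelling of the same index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trans (id INTEGER PRIMARY KEY, hari TEXT)")
conn.executemany("INSERT INTO trans (hari) VALUES (?)",
                 [("day%d" % (i % 30),) for i in range(1000)])

# MySQL equivalent of the line below:
#   ALTER TABLE trans ADD INDEX idx_hari (hari);
conn.execute("CREATE INDEX idx_hari ON trans (hari)")

# With the index in place the filter is an index lookup, not a table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM trans WHERE hari = 'day5'"
).fetchall()
n = conn.execute(
    "SELECT COUNT(*) FROM trans WHERE hari = 'day5'").fetchone()[0]
```

Building the index on 2.5 million existing rows is still a one-time cost; only the queries after that get fast.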

> This table contains more than 2.5 million records. I
> created index on column  hari. It needs more than 3797
>  secs, and have not finished yet.
>
> Anybody has advice?








Re: Random record

2001-02-25 Thread ryc

Instead of matching with equals, i.e. "SELECT * FROM table WHERE id =
$php_random_variable", you can do this instead: "SELECT * FROM table WHERE id
<= $php_random_variable LIMIT 1". As long as the random value is at least as
large as the smallest id in the table, you will get back a result even when
the exact id has been deleted. Hope that helps.
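A quick sketch of that trick (SQLite in Python as a stand-in; the ORDER BY id DESC is my addition, which makes the <= lookup pick the nearest surviving row rather than leaving the choice to the server):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
# Insert rows with a gap (id 3 was "deleted"), the case that breaks id = rand.
conn.executemany("INSERT INTO records (id, name) VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (4, "d"), (5, "e")])

lo = conn.execute("SELECT MIN(id) FROM records").fetchone()[0]
hi = conn.execute("SELECT MAX(id) FROM records").fetchone()[0]
rand_id = random.randint(lo, hi)  # may land on the deleted id 3

# id <= rand always finds a row as long as rand >= the smallest id;
# ORDER BY id DESC returns the closest id at or below the random value.
row = conn.execute(
    "SELECT id, name FROM records WHERE id <= ? ORDER BY id DESC LIMIT 1",
    (rand_id,)).fetchone()
```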

ryan

- Original Message -
From: "joe" <[EMAIL PROTECTED]>
To: "MySQL" <[EMAIL PROTECTED]>
Sent: Sunday, February 25, 2001 5:39 PM
Subject: Random record


Does anyone have any good ideas on how to pull a random record? I've been
using PHP's random function to create a random number to pull, but the
problem is that if I delete a record, it could still try to pull it.  Is
there a better way?  I'd really like to be able to just pull it in the SQL
statement instead of pulling a bunch of records and then sorting through that.

Thanks,

JOE







how to optimize for large databases

2001-02-24 Thread ryc

First let me explain what I am trying to do. I want to archive many, many
messages into a mysql database. The format is pretty simple: either a
message is a parent, or it is a reply. The problem is that some of the
queries are taking 5-10 seconds, which is not acceptable for my application.
The size of the table is always going to be 250k+ rows (about 2k of text per
row). Below is the query that I need to make faster, and the table layout.
Does anyone have any ideas on how I could make this query perform better?

Query (with 250k rows, this is taking between 5 and 20 seconds):

SELECT m.messageid,catid,m.userid,subject,postdate FROM messages AS m LEFT
JOIN users AS u ON (m.userid = u.userid) WHERE m.catid = 1 AND m.parentmsgid
= 0 ORDER BY postdate LIMIT 50;

This query selects all the top-level messages within the category specified
by the 'catid' field.

The 'parentmsgid' field is 0 if it is a parent, and if it is a reply it
contains the messageid of its parent.


Here is the layout of the tables. I made sure that the two fields in the
'messages' table that I will be using in queries are keys in that table.
However, I suspect that mysql is using the index to find the location of all
the messages with that catid and parentmsgid = 0, but is then spending a lot
of time sorting those records (each category can have 20-30k messages). Would
adding a key (catid, parentmsgid, postdate) help? Does adding a key on what
you sort by help?
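For the record, the composite key floated above would be added as shown in the comment below, and the idea can be checked with a small SQLite stand-in (whether MySQL 3.23's optimizer actually uses it to avoid the sort is not something this sketch can prove):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE messages (
    messageid INTEGER PRIMARY KEY,
    catid INTEGER NOT NULL,
    parentmsgid INTEGER NOT NULL,
    postdate INTEGER NOT NULL)""")
conn.executemany(
    "INSERT INTO messages (catid, parentmsgid, postdate) VALUES (?, ?, ?)",
    [(1, 0, 1000 - i) for i in range(200)] + [(2, 0, 50)])

# MySQL equivalent:
#   ALTER TABLE messages ADD KEY cat_parent_date (catid, parentmsgid, postdate);
# The two equality columns come first, then the ORDER BY column, so one
# index can satisfy both the WHERE filter and the sort.
conn.execute(
    "CREATE INDEX cat_parent_date ON messages (catid, parentmsgid, postdate)")

rows = conn.execute("""
    SELECT messageid, postdate FROM messages
    WHERE catid = 1 AND parentmsgid = 0
    ORDER BY postdate LIMIT 50""").fetchall()
```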

CREATE TABLE category (
  catid int(10) unsigned NOT NULL auto_increment,
  name varchar(255) NOT NULL default '',
  PRIMARY KEY (catid)
);

CREATE TABLE messages (
  messageid int(10) unsigned NOT NULL auto_increment,
  catid int(10) unsigned NOT NULL default '0',
  userid int(10) unsigned NOT NULL default '0',
  parentmsgid int(10) unsigned NOT NULL default '0',
  subject text NOT NULL,
  body text NOT NULL,
  postdate int(10) unsigned NOT NULL default '0',
  replies int(10) unsigned NOT NULL default '0',
  PRIMARY KEY (messageid),
  KEY catid(catid),
  KEY parentmsgid(parentmsgid)
) TYPE=MyISAM;

CREATE TABLE users (
  userid int(10) unsigned NOT NULL auto_increment,
  username varchar(30) NOT NULL default '',
  password varchar(30) NOT NULL default '',
  email varchar(255) NOT NULL default '',
  PRIMARY KEY (userid),
  KEY username(username,password),
  KEY email(email)
) TYPE=MyISAM;


I am open to any type of suggestion, even if it means restructuring my
layout. How can I make this query faster?

Thanks,
ryan


