RE: Exporting utf-8 data problems

2006-01-04 Thread Lopez David E-r9374c
dave

I believe GROUP is a reserved word. Change the column name to grp, or backtick-quote it.
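For example (a sketch based on the table in the quoted dump; either workaround should do):

```sql
-- Option 1: rename the reserved column
CREATE TABLE event_groups (
  egid  int(11) NOT NULL AUTO_INCREMENT,
  grp   int(11) NOT NULL default '0',
  event int(11) NOT NULL default '0',
  PRIMARY KEY (egid)
) TYPE=MyISAM;

-- Option 2: keep the name, but backtick-quote it everywhere it appears:
--   `group` int(11) NOT NULL default '0',
```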

david 

-Original Message-
From: Dave M G [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 04, 2006 8:55 AM
To: mysql@lists.mysql.com
Subject: Re: Exporting utf-8 data problems


Thank you for the advice.

In order to isolate the problem, I have checked to see if the
text file that I have exported from the MySQL server on my virtual host
is, in fact, in UTF-8.
By opening it in OpenOffice and selecting UTF-8 encoding, it
displays correctly.
Not entirely without problems: most of the Japanese text shows
up correctly, but about 10% of it shows like this:
プル・リーフで
I believe this is because some of the Japanese text was probably
entered some time before the server's MySQL was upgraded to 4.1.
But, ignoring the 10% of "legacy" text, it seems that at the
very least I do have a proper UTF-8 encoded text file to
import into my home MySQL server.

Instead of importing the data as an SQL file (which imported
successfully, but with faulty Japanese characters), I copied the text
and pasted it in as a straight SQL query.

But it returns an error. Can anyone enlighten me as to why the
file would import as an SQL file, but the same text won't work when
pasted in as a query?

Here is the error output:

SQL query:

# phpMyAdmin MySQL-Dump
# version 2.3.3pl1
# http://www.phpmyadmin.net/ (download page)
#
# Host: localhost
# Generation Time: Jan 04, 2006 at 10:04 AM
# Server version: 3.23.37
# PHP Version: 4.3.11
# Database : `signup`
#

#
# Table structure for table `event_groups`
#

CREATE TABLE event_groups(
egid int( 11 ) NOT NULL AUTO_INCREMENT ,
GROUP int( 11 ) NOT NULL default '0',
event int( 11 ) NOT NULL default '0',
PRIMARY KEY ( egid )
) TYPE = MYISAM


MySQL said:

#1064 - You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use
near 'group int(11) NOT NULL default '0',
  event int(11) NOT NULL de

(the error message cuts off abruptly, as shown here)

Any advice would be much appreciated.

--
Dave M G


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





RE: really big key_buffer_size (> 4GB)?

2005-11-10 Thread Lopez David E-r9374c
chris

I chased this down a while back.
you are correct: 4G is the max on the 4.0 & 4.1 versions
of mysql. I don't know about 5.0 though.

I looked at the source: sql/mysqld.cc, line 4170,
where the value is capped at UINT_MAX32.

someone else found docs saying Monty had done this since
MyISAM tables had issues deep inside the code base; to prevent
those issues, the value was simply clamped to 32 bits. I hope
he has had time to change it for 5.0.

maybe you could post if you find the answer for 5.0.

david

 

-Original Message-
From: Chris Kantarjiev [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 10, 2005 9:55 AM
To: mysql@lists.mysql.com
Cc: [EMAIL PROTECTED]
Subject: really big key_buffer_size (> 4GB)?

I'm running 4.0.25 on NetBSD 3 on an amd64. It appears that the
key_buffer_size is being truncated to 4GB, even though I've set it
larger.

Could someone give me a hint about where to look? I'm a little
suspicious of ha_resize_key_cache() which is using longs internally to
hold the keybuff_size, but I suspect this is not quite the source of the
problem (there's no logic here to truncate the input value).

Thanks,
chris




problems with key_buffer_size > 4 GBytes

2005-10-20 Thread Lopez David E-r9374c
solaris box with 16 GBytes of memory,
OS at 5.8, mysql at 4.0.20 64-bit.

with key_buffer_size set to 6144M, mysqld crashes
and re-starts (via mysqld_safe) under heavy load (1 day to 1 week).
prior to this, key_buffer_size was set to 1.5G with no crashes since
early 2004.

the process size stays at 2G even though I specify key_buffer_size
as 6144M. I backed off to 4096M and the process size is correctly
set to 4G; it has been stable ever since.

tried mysql version 4.1.15 64-bit with the same issue: it will not
grow the process size any larger than 4G.

anyone have ideas on how to track down this issue?

(checked ulimit for csh shell and vmemory is unlimited)

appreciate help.




RE: Importing Excel Data in MySql

2004-10-21 Thread Lopez David E-r9374c
bertrand

i went with tab-delimited fields for the same reason:
commas inside text are common, tabs are rare.

david
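For instance, loading such a tab-delimited export might look like this (file and table names are hypothetical; LOAD DATA actually defaults to tab-separated fields anyway):

```sql
LOAD DATA LOCAL INFILE 'clients.txt'
  INTO TABLE clients
  FIELDS TERMINATED BY '\t'   -- tab-delimited fields
  LINES TERMINATED BY '\n';
```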

> -Original Message-
> From: Bertrand Gac [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 21, 2004 12:09 AM
> To: mysql
> Subject: Importing Excel Data in MySql
> 
> 
> Hello 
> 
> We have clients sending us xls files.
> We export them as csv files, using commas or semi-colons as
> delimiters, and then import them into mysql.
> Problem: some of the fields containing text sometimes already
> have a comma or a semi-colon in them... and when we import, of
> course MySQL thinks the portion between 2 commas, for example,
> is a separate table field...
> 
> What can we do? We can't control what the clients have typed,
> nor tell them "Don't use , or ;", nor check and modify every
> xls file...
> 
> Thanks for any idea
>  
> 




sql statement: "optimize TABLE tablename" taking forever

2004-09-17 Thread Lopez David E-r9374c
the statement is taking 3 hours and counting.

  "optimize TABLE tablename"

normally this takes 30 minutes or so.

boss wants to kill this thread. 
does that mean the table will end up corrupt?

any input helpful.

david


hardware: solaris 4 cpu, running version 8
mysql version 4.0.20 64-bit
table is 15 million rows, type=myisam, fixed-length rows
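If it does come to killing it, the usual sequence is something like the following (the thread id is hypothetical). My understanding - hedged - is that OPTIMIZE on a MyISAM table rebuilds into a temporary file and swaps it in at the end, so killing it mid-rebuild should leave the original table intact, but verify afterwards:

```sql
SHOW PROCESSLIST;        -- find the Id of the OPTIMIZE thread
KILL 12345;              -- hypothetical thread id
CHECK TABLE tablename;   -- verify the table afterwards
```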




RE: Mysql growing pains, 4 days to create index on one table!

2004-07-16 Thread Lopez David E-r9374c
matt

> 
> the issue is insert speed, I get 150k-1M records daily, of 
> these, only 
> 5-10 % are new records, I load the EBCDIC file into a temp table, and 
> then do "insert ignore into historytable select * from temp table"
> 
Since you have a temp table created (no keys I assume), use the command

  mysqldump -v -e -n -t dbname tablename > filename.sql

This should create INSERT statements with many value lists per
insert. Then use the client program to load them into your db.

	mysql -u matt -p dbname < filename.sql

This is a very fast way to insert rows. It speeds up insertion by at
least 10x on my large tables (11 million rows). I've seen someone
do it with 1000x more rows w/o problems.
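The -e (extended insert) flag is what makes the difference: the dump then contains one multi-row statement per batch instead of one INSERT per row. Roughly (hypothetical values, historytable from the quoted message):

```sql
-- one INSERT, many value lists:
INSERT INTO historytable VALUES
  (1,'alpha'),(2,'beta'),(3,'gamma');
```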

BTW, the docs say forcing the table to have packed keys will
slow your insertion. Maybe not by much, I don't know.

David




RE: Mysql growing pains, 4 days to create index on one table!

2004-06-30 Thread Lopez David E-r9374c
matt

1) inserts using this format are much faster:
     INSERT INTO table (col1, col2) VALUES (val1,val2), (val3,val4)
   Much faster than single-row inserts; my experience is
   2.5 hrs vs. 36 hrs.

2) The PACK_KEYS=1 may be hurting you. I've never used it.

3) There may be a cache somewhere that's too small. You'll
   have to do some digging in this area.

4) dup key ignore - what does that mean exactly?

5) what is your OS & rev, and mysql rev?

Please post any suggestions that you find valuable so we
can all learn.

david
  

> -Original Message-
> From: matt ryan [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, June 30, 2004 11:51 AM
> To: [EMAIL PROTECTED]
> Subject: Mysql growing pains, 4 days to create index on one table!
> 
> 
> Rebuilding index takes 3 1/2 days!!!  Growing pains with mysql..
> 
> I've got 2 dedicated servers, each with a slave, all run 32gig 15k rpm
> raid 5 on u320 perc raid cards, dell 2600/4600's with single channel
> backplanes (new ones will have dual channel)
> 
> All have 2 gig of ram, but I've never seen mysql use more than 600mb
> of ram.
> 
> The servers handle huge loads, each day there are 30 1-2 gig files
> loaded into large tables, total mysql data size is 96 gig, the large
> tables are 2-6 gig.
> 
> Inserts are done on dupe key ignore, this takes hours on the large
> files, it barely keeps up with input files.
> 
> At the bottom of this post I've got the mysql.ini config lines, any
> suggestions are welcome, I'm already beyond the mysql "huge" sample
> they used to include in the program.
> 
> Sample table that I load is as follows.
> 
> each day I get 40 % new records on the text file, the input file is
> normally 20mb, once a week I get one that's 1-2 gig, these take all
> day to load.
> 
> I need more multiple-column indexes, as some queries return millions
> of rows that must be scanned, but the index size already exceeds the
> table size, and the combinations I need would result in an .MYI file
> that's 5x larger than the data itself.
> 
> Here's an example of the speed problem. The index was corrupt, so I
> dropped all indexes and recreated them, rather than doing a myisam
> repair. I think 3 days is a little excessive for a table that's only
> 3.428 gig, with a 2.729 gig index. I can't remove the primary key, as
> it keeps duplicates out of the system. The input files are from old
> databases; we use mysql to store the data for the web frontend, mostly
> done in ASP. Most queries take less than a second, but unfortunately
> we have big queries that take way more than the IIS timeout setting
> all the time, with no way around it - I can't add more indexes
> without making it even slower :(
> 
> 
> 
> I can't tell if it's mysql that's the problem, or the hardware.
> Here's a screenshot of the disk IO: if I copy a file while mysql is
> doing the build index, the IO shoots way up, which tells me mysql is
> NOT maxing out the drives, and it's also not maxing out the memory.
> 
> Unless it's doing lots and lots of seeks on the drive, which is
> harder to test using perfmon. Are there any mysql test setups that
> would help identify where the bottleneck is?
> 
> 
> screenshot of disk io usage
> 
> http://www.geekopolis.com/pics/diskio.jpg
> 
> I'm all out of ideas, other than switching to another db, and
> splitting the table & indexes across drives - maybe a 2-channel
> setup, 4 drives per channel, each group of 4 a separate raid 5 set,
> one holding data and one holding indexes - can't do this with mysql
> though.
> 
> mysql> alter table hood_stat add primary key
> (dic,niin,fr_ric,don,suf,dte_txn,sta) , add index `don` (`don`), add
> index `niin` (`niin`), add index `stor` (`stor`), add index `dic`
> (`dic`), add index `ctasc` (`ctasc`);
> 
> Query OK, 45449534 rows affected (3 days 19 hours 6 min 34.94 seconds
> Records: 45449534 Duplicates: 0 Warnings: 0
> 
> 
> CREATE TABLE `hood_stat` (
> `dic` char(3) NOT NULL default '',
> `fr_ric` char(3) NOT NULL default '',
> `niin` char(11) NOT NULL default '',
> `ui` char(2) NOT NULL default '',
> `qty` char(5) NOT NULL default '',
> `don` char(14) NOT NULL default '',
> `suf` char(1) NOT NULL default '',
> `dte_txn` char(5) NOT NULL default '',
> `ship_to` char(3) NOT NULL default '',
> `sta` char(2) NOT NULL default '',
> `lst_sos` char(3) NOT NULL default '',
> `esd` char(4) NOT NULL default '',
> `stor` char(3) NOT NULL default '',
> `d_t` char(4) NOT NULL default '',
> `ctasc` char(10) NOT NULL default '',
> PRIMARY KEY (`dic`,`niin`,`fr_ric`,`don`,`suf`,`dte_txn`,`sta` ),
> KEY `don` (`don`),
> KEY `niin` (`niin`),
> KEY `stor` (`stor`),
> KEY `dic` (`dic`),
> KEY `ctasc` (`ctasc`)
> ) TYPE=MyISAM MAX_ROWS=10 PACK_KEYS=1
> 
> skip-locking
> set-variable=delay_key_write=ALL
> set-variable= key_buffer_size=1500M
> set-variable=join_buffer=512M
> set-variable= max_allowed_packet=256M
> set-variable= table_cache=512
> set-variable= sort_buffer=256M
> set-variable=tmp_table_size=400M
> set-variable= record_buffer=512M
> set-variable= thread_

Solaris install of 4.0.20 - Only 20 connections allowed

2004-06-19 Thread Lopez David E-r9374c
All

Hardware: Solaris 4 cpu, 16G ram, 900MHz, ver 5.8
MySQL: 4.0.20 binary install, 32 bit version

Admin install, I maintain the my.cnf, and do db admin.

Installed a production db (10 million rows) with no
problems. However, I can only open 20 or so connections
to mysqld. The error is:

  ERROR 1135: Can't create a new thread (error 11).
  If you are not out of available memory, you can 
  consult the manual for a possible OS-dependent bug

The command "perror 11" returns "Resource temporarily
unavailable".

The my.cnf config is based on my-huge.cnf with the
key_buffer set to 4000M. The max number of connections
is set to 100. That's standard.

Looked at file descriptors using the command "pfiles pid";
the process has over 1k of them.

I get the error whether using the mysql client or the
normal client (java jdbc).

I have an older development solaris box with 4.0.18
installed (64-bit version) and it can do connections
up to 100 with no problems. This was confirmed by
creating a java program and doing connections in
a loop.

Does anyone have any ideas as to what to check or what
else to do?

Kinda desperate

David




RE: Query question

2004-05-24 Thread Lopez David E-r9374c
John

Try

select field, count(*)
 from db.table
 group by field;

David
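If the goal is the combined count from the original question (each zero counted individually, each distinct non-zero value counted once), a sketch - assuming the column really is named field - would be:

```sql
SELECT SUM(field = 0)                               -- zeros, counted each
     + COUNT(DISTINCT IF(field > 0, field, NULL))   -- non-zeros, once per value
       AS total
  FROM db.table;
-- for the sample data this should give 3 + 3 = 6
```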

> -Original Message-
> From: John Nichel [mailto:[EMAIL PROTECTED]
> Sent: Monday, May 24, 2004 10:37 AM
> To: MySQL List
> Subject: Query question
> 
> 
> Hi,
> 
> I have a table which I want to select data from (obviously).  In
> this table, I have a field which is an integer, and defaults to 0.
> What I would like to do is count all rows in that table which equal
> 0 for the field, plus count each distinct non-zero value once.
> 
> idfield
> 1 0
> 2 0
> 3 7
> 4 8
> 5 7
> 6 0
> 7 6
> 8 7
> 9 8
> 
> For the above example, my count should return 6.  Three zeros count
> as 3, three sevens count as 1, two eights count as 1, and one six
> counts as 1.
> 
> I've tried...
> 
> SELECT COUNT(*) FROM db.table WHERE ( field = 0 || ( field > 0 && 
> DISTINCT field ) )
> 
> But it still returns the count of all the rows.
> 
> -- 
> John C. Nichel
> KegWorks.com
> 716.856.9675
> [EMAIL PROTECTED]
> 



RE: Index not functioning

2004-03-24 Thread Lopez David E-r9374c
Jack

you need a compound index on your table:

  ALTER Table ifInOctets_137 add INDEX i_id_dtime(id,dtime);

In your SELECT statement, change
   "USE INDEX (dtime)"
to
   "USE INDEX (i_id_dtime)"

Let us know how it works.

David

> -Original Message-
> From: Jack Coxen [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 24, 2004 11:51 AM
> To: '[EMAIL PROTECTED]'; MySQL List (E-mail)
> Subject: RE: Index not functioning
> 
> 
> I tried using 'USE INDEX' and 'BETWEEN' but it didn't change anything.
> 
>   EXPLAIN SELECT counter, UNIX_TIMESTAMP(dtime) FROM ifInOctets_137
>   USE INDEX (dtime) WHERE id=2809 AND dtime BETWEEN FROM_UNIXTIME(107397)
>   AND FROM_UNIXTIME(1076734799) ORDER BY dtime;
>   
> +----------------+------+---------------+------+---------+------+---------+-----------------------------+
> | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra                       |
> +----------------+------+---------------+------+---------+------+---------+-----------------------------+
> | ifInOctets_137 | ALL  | dtime         | NULL | NULL    | NULL | 9279150 | Using where; Using filesort |
> +----------------+------+---------------+------+---------+------+---------+-----------------------------+
> 1 row in set (0.00 sec)
> 
> Is my syntax wrong?   
> 
> The resultset size should be around 8640 rows - 5 minute 
> interval data for
> 30 days - 12 X 24 X 30 = 8640
> 
> -Original Message-
> From: Victor Pendleton [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 24, 2004 1:22 PM
> To: Jack Coxen; MySQL List (E-mail)
> Subject: RE: Index not functioning
> 
> 
> Optimizers work on a thirty percent rule: if the resultset will
> contain thirty percent of the table, then a table scan will be
> performed. If you want to `force` the use of an index you can use
> the `USE INDEX` syntax.
> You should also look at your query. You have three criteria in your
> where clause:
> id=2809
> dtime>FROM_UNIXTIME(107397)
> dtime<=FROM_UNIXTIME(1076734799)
> A compound index containing id and dtime should be created.
> Can you use a BETWEEN statement?
> WHERE id = 2809 AND ( dtime BETWEEN start AND stop)
> ...
> Also, what is the sum of your `>` and `<=` resultsets? This value
> may be larger than you expect.
> 
> On Wed, 24 Mar 2004, Jack Coxen wrote:
> > Is there a way of changing what it believes?  The resultset 
> size is only
> > a few thousand lines.  
> > 
> > And, btw, why does it believe that a table scan is more 
> efficient than
> > using an index? 
> > 
> > Jack 
> > 
> > -Original Message- 
> > From: Victor Pendleton [ mailto:[EMAIL PROTECTED]
> >  ] 
> > Sent: Wednesday, March 24, 2004 12:24 PM 
> > To: Jack Coxen; MySQL List (E-mail) 
> > Subject: Re: Index not functioning 
> > 
> > 
> > The optimizer is informing you that `it` believes a table scan is
> > more efficient than using an index due to the resultset size.
> > 
> > On Wed, 24 Mar 2004, Jack Coxen wrote:
> > > I have a series of tables that were created by: 
> > > 
> > >   CREATE TABLE ifInOctets ( 
> > > id int(11) NOT NULL default '0', 
> > > dtime datetime NOT NULL default '0000-00-00 00:00:00', 
> > > counter bigint(20) NOT NULL default '0', 
> > > KEY ifInOctets_idx (dtime) 
> > >   ); 
> > > 
> > > When I run a query against any of the tables, the index 
> isn't used.
> > The 
> > > query syntax is: 
> > > 
> > >   SELECT counter, UNIX_TIMESTAMP(dtime) FROM 
> ifInOctets_137 WHERE 
> > > id=2809 AND dtime>FROM_UNIXTIME(107397) AND 
> > > dtime<=FROM_UNIXTIME(1076734799) ORDER BY dtime; 
> > > 
> > > Running an EXPLAIN of that command gives: 
> > > 
> > > +----------------+------+---------------+------+---------+------+---------+-----------------------------+
> > > | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra                       |
> > > +----------------+------+---------------+------+---------+------+---------+-----------------------------+
> > > | ifInOctets_137 | ALL  | dtime         | NULL | NULL    | NULL | 9279150 | Using where; Using filesort |
> > > +----------------+------+---------------+------+---------+------+---------+-----------------------------+
> > > 1 row in set (0.00 sec)
> > > 
> > > I'm running on a Sun E250 w/RAID running Solaris 8.  I'm running
> > > MySQL 4.0.16 precompiled Solaris binary.  My my.cnf is essentially
> > > the my-huge.cnf file.  Other people are running this application
> > > (RTG - http://rtg.sourceforge.net) on various platforms and MySQL
> > > versions and ARE NOT having this pr

processlist: state is "Opening table"

2004-03-09 Thread Lopez David E-r9374c
filter: select, mysql

Solaris 3.23.40  connections using perl, jdbc, odbc.

Yesterday, we experienced a rare mysqld failure where all connections
were in the state "Opening table". Normally our 200+ connections
are in state "sleep". The Time field from the command
  "show processlist"
showed each connection accumulating time, which is not the case
when a connection is in the sleep state.

The number of connections was nearly 400 when we normally have
just over 200. Threads running was about equal to the number of
connections. Threads created was at 1240 - normally it is roughly
the number of connections.

When I attempted a "mysql.server stop", it timed out, and I had to
use "kill -9". I used a combination of myisamchk (db offline)
and CHECK TABLE (db online) to verify that all tables were ok,
and repaired the tables that were flagged as improperly closed,
probably due to the "kill -9".

The Info column from "show processlist" showed the command
each user or program was executing. All of the connections were
accounted for, including the extra ~170 connections, which
accumulated from programs connecting between the time the server
froze and the time of restart.

To me, it appears that mysqld accepted connections but then
froze with all connections stuck in the state "Opening table".
This is confirmed by programs making connections without failing;
it appears all were waiting for mysqld to process their queries.

Just in case this matters, there are ~75 tables in this installation.
On a normal day, open_tables=149 and table cache is set to 256. 
Prior to restarting mysqld, opened_tables was at 831.
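Those numbers come from the server counters; for anyone wanting to watch the same thing, something like:

```sql
SHOW VARIABLES LIKE 'table_cache';    -- 256 here
SHOW STATUS LIKE 'Open_tables';       -- ~149 on a normal day
SHOW STATUS LIKE 'Opened_tables';     -- 831 just before the restart
```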

Any suggestions welcome.

David






RE: Syntax Check: Group BY with Joins

2004-01-27 Thread Lopez David E-r9374c
Eric

Try moving the Order_id condition into the WHERE clause; HAVING is
meant for conditions on aggregates. (Clause order is WHERE, GROUP BY,
HAVING, ORDER BY, LIMIT.)

David
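One possible rewrite - just a sketch, since a non-aggregated column like Order_id is usually better filtered in WHERE than in HAVING:

```sql
SELECT MAX(DATE_ADD('1970-1-1', INTERVAL Lastupdate/3600/24 DAY)) AS BaskDate,
       ofm_baskets.Code,
       SUM(ofm_baskets.Quantity) AS TotalQty
FROM (ofm_basklist INNER JOIN ofm_basklupd
        ON ofm_basklist.Session_id = ofm_basklupd.Session_id)
     INNER JOIN ofm_baskets
        ON ofm_basklist.Basket_id = ofm_baskets.Id
WHERE ofm_basklist.Order_id = 0          -- moved out of HAVING
GROUP BY ofm_baskets.Code
ORDER BY BaskDate;
```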

> -Original Message-
> From: Eric Ellsworth [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, January 27, 2004 8:01 AM
> To: [EMAIL PROTECTED]
> Subject: Syntax Check: Group BY with Joins
> 
> 
> Hi,
> I'm having trouble with this query:
> 
> SELECT Max(DATE_ADD('1970-1-1',INTERVAL Lastupdate/3600/24 
> DAY)) as BaskDate, ofm_baskets.Code, 
> Sum(ofm_baskets.Quantity) AS TotalQty
> FROM (ofm_basklist INNER JOIN ofm_basklupd ON 
> ofm_basklist.Session_id = ofm_basklupd.Session_id) INNER JOIN 
> ofm_baskets ON ofm_basklist.Basket_id = ofm_baskets.Id
> GROUP BY ofm_baskets.Code
> HAVING (((ofm_basklist.Order_id)=0))
> ORDER BY Max(DATE_ADD('1970-1-1',INTERVAL Lastupdate/3600/24 DAY));
> 
> I get the following error:
> ERROR : Invalid use of group function
> 
> Can someone give me an idea of what's wrong with this query?  
> I'm using MySQLd version 4.0.16.
> 
> Thanks,
> 
> Eric
> 
> 
> 




RE: large table performance for WHERE queries

2004-01-15 Thread Lopez David E-r9374c
Gregory

mysql,select,query

I agree with Joe: use a multiple-column index. It's much more
efficient - all of these queries should take 5-10 seconds or less.

David
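As a sketch against the table in the quoted post (column names taken from the example query; the exact column order should lead with the most selective equality criteria):

```sql
ALTER TABLE inv0web02
  ADD INDEX i_term_doc (termid, docid, whichpara, offset);
```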

> -Original Message-
> From: Gregory Newby [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 15, 2004 11:53 AM
> To: [EMAIL PROTECTED]
> Subject: large table performance for WHERE queries
> 
> 
> I'm using MySQL for an information retrieval application where word
> occurrences are indexed.  It seems that performance is not as good as
> I would expect (it seems nearly linear with the number of rows).
> Any advice would be welcome.  I'll lay out a lot of detail.
> 
> Some details follow.  The basic scenario is:
> - there are over 498M rows.  This seems to qualify it as a
>   rather large table
> - the table has 6 fixed-length fields
> - I have created indexes on each field
> - I set MAX_ROW_SIZE to 550,000,000
> - I've tried this with both MyISAM and INNODB tables, with very
>   similar performance
> - Slower inserts are not really a problem, if I could get 
> faster searching
> - I've run ANALYZE, OPTIMIZE, myisamchk etc. (see below)
> - Using a socket for local communication; MySQL++ for apps, but I've
>   also confirmed benchmarking results using the mysql> command line 
> - I'm not presenting a lot of variety in queries and response 
> times below,
>   but can send more if needed.  
> - Due to the patterns in the data, there are many many duplicate
>   values in the indexes.  I suspect this might be important.
> - This is not a results transport issue.  I'm able to retrieve
>   about 1000 rows/second, which is OK.  It's generating the 
> result set,
>   prior to results transport, which is slow.
> 
> Queries are definitely disk-bound, with 5-40% of CPU utilization
> during a long-running query.  (I did a little experimentation with a
> ramfs to confirm this...too bad I don't have 40GB of RAM).
> 
> The application: I want to run queries that will identify particular
> DOCUMENTS (via "docid") that have particular TERMS (via "termid").
> This is your garden variety task for a search engine or other
> information retrieval system.  I often submit additional criteria,
> such as adjacent terms (offset A - offset B=1) or terms in a
> particular tag (term A in  and term B in ) or proximity
> (paragraph A == paragraph B).  Being able to do such powerful queries
> is the whole point of using MySQL, rather than trying to manage things
> myself (using BerkeleyDB or home-grown B-trees, as I've done in the
> past).
> 
> I have made a quickie 2000-document, then a longer 100,000 document,
> then the full 1.2M document database (input is about 20GB of HTML).
> The challenge is that on the 1.2M document dataset with nearly 1/2 a
> billion rows (term occurrences), I just can't seem to get a quick
> response for combinations with terms occurring in a lot of rows.
> 
> Maybe the answer is that this is the best I can expect due to
> duplicate key values and other factors (that is, there are many terms
> with each docid... many many terms with each paragraph id... many many
> many terms with each offset, etc.).  But I hope there might be other
> tuning parameters or tips that will speed things up.
> 
> If I look for terms in only a few hundred rows, no problem.  But for
> terms with thousands of rows, it takes a very long time (several
> minutes for the first 1000 rows to an hour for all rows) to get the
> query output, even when I order by and limit to get a smaller output
> set.
> 
> Concrete example:
> 
>   select * from inv0web02 as t0,inv0web02 as t1 where 
> ((t0.termid=35)
>   and (t1.termid=141) and (t0.docid=t1.docid) and
>   (t0.whichpara=t1.whichpara) and ( (t1.offset - 
> t0.offset) = 1)) order
>   by t0.weight_in_subdoc limit 1000
> 
> This takes about 3 minutes where term35="web" and term141="page" (very
> common terms).  (Timing depends slightly on whether I employ a Result
> or ResUse in my MySQL++ code - but I don't think the MySQL++ is the
> controlling factor here since I've also experimented with mysql on the
> command line).  This is on a RedHat 9 box: Dell 4600 with 12GB RAM,
> 2x2.8Ghz Xeon (hyperthreaded to look like 4 CPUs) and 10KRPM drives.
> It's a pretty speedy system.  I'm using mysql-max-4.0.17-pc-linux-i686
> (binary build).
> 
> With less common terms, I'm able to get a response in just a few
> seconds.  The subtraction in the offset is not a deciding factor;
> performance is comparable without it.  Clearly, the issue is the merge
> within the same table using WHERE criteria.
> 
> The "order by" doesn't matter much; the "limit" speeds things up quite
> a bit since the response set is smaller.  (I use these so that the
> program that uses the results only needs to look at the "best," and to
> limit the response set size).  If I remove the "limit 1000", the query
> can take well over 1/2 hour (even without actually retrieving the
> rows).
> 
> I have my.cnf tuned to use a lot of memory, so we're not doing a lot
> of swap

Replication on one slave and two different masters

2003-12-10 Thread Lopez David E-r9374c
mysql, query

I have a situation come up where we want one slave to act
as backup for two different databases located in two 
different hosts. Can this be done?

The master setup is easy; the slave setup is unclear.
Can I have two sets of master-host, master-user,
master-password, master-port, and master-connect-retry
w/o mysqld getting confused?

My experience is one slave, one master. No FAQ I can find
covers multiple masters on one slave.

I'm running mysql version 3.23.49 on Solaris.

Thanks in advance

David

---
David E Lopez
Org:   SPS->WMSG->AT->SOC->CSAM
Phone: 480-413-6251
email: [EMAIL PROTECTED]




RE: Lotus Notes/Script...

2003-09-15 Thread Lopez David E-r9374c
Jonathan

We are using odbc and jdbc to link up from notes to mysql.

David

> -Original Message-
> From: Jonathan Villa [mailto:[EMAIL PROTECTED]
> Sent: Monday, September 15, 2003 1:43 PM
> To: [EMAIL PROTECTED]
> Subject: Lotus Notes/Script...
> 
> 
> Has anyone every connected Lotus Notes/Script to MySQL?
> 
> Someone asked me about it and I can't seem to find any information on
> Google... and I don't know much about MySQL...
> 
> Would I have to have built MySQL with ODBC support?  I installed the
> binary, so wouldn't this already be available...
> 
> 



JDBC is not surviving a mysql reboot

2003-06-06 Thread Lopez David E-r9374c
Dudes & Dudetts

filter: mysql, select, sql
mysql version: 3.23.49
jdbc version: 2.0.14
development os: win2k
production os: solaris

Problem:

During a reboot of mysqld, the perl connections survived
and the jdbc connections did not. My definition of "survived" is
that the connection waited until mysqld came back up and
automatically reconnected.

Discussion
--

None of the jdbc connections survived. Back to the books I went:
I did not have the properties that control this behavior set:
  autoReconnect, maxReconnects, initialTimeout

I changed my url + db + properties to:
jdbc:mysql://localhost/db1?user=david&password=secret&\
autoReconnect=true&maxReconnects=5&initialTimeout=10

(The trailing backslash is simply for legibility - not in the code)

I believe this translates to:
  If the connection is lost, retry 5 times, waiting 10 seconds
  between retries.

Result:
---

If I start a test application that simply selects a bunch of
records from a local mysql db and writes them to a file,
then terminate mysqld in the middle of the run,
I get this exception:

  Error: findFeature
  java.sql.SQLException: null: Server shutdown in progress

If I start the application program with mysqld not running, I get
this exception:

  java.sql.SQLException: Cannot connect to MySQL server on localhost:3306. 
  Is there a MySQL server running on the machine/port you are trying to 
  connect to?  (java.net.ConnectException)

Question:
--
How can I keep my application program running without
getting these exceptions?

Confused as usual

David





RE: # of connected user

2003-02-01 Thread Lopez David E-r9374c
Mustafa

Try this command from the client window: 

  show processlist;

You must be logged in as root to list all connections.
If you are logged in as a normal user, only those connections
for which you have privileges are listed.

David
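If only the count is needed, the status counter is quicker (a sketch; available at least on 3.23/4.x servers):

```sql
SHOW STATUS LIKE 'Threads_connected';
```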

> 
> How do I list the connected (active) users on a mysql server?
> 

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: 3.23.54a Instability

2002-12-18 Thread Lopez David E-r9374c
kees

How do you measure spiked queries/s? 

All I see is average queries/s from the status command.

I can see the calculation based on uptime in seconds and
total queries in that time. But that's the average. My boss
wants both the average and the spike on a web site.

David

> -Original Message-
> From: Kees Hoekzema [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, December 18, 2002 1:54 PM
> To: Lenz Grimmer
> Cc: [EMAIL PROTECTED]
> Subject: RE: 3.23.54a Instability
> 
> 
> > From: Lenz Grimmer [mailto:[EMAIL PROTECTED]]
> > Sent: Wednesday, December 18, 2002 9:43 PM
> 
> > OK, as I already assumed, the Max binary was not linked 
> statically. Doh!
> > Interesting, that this also causes the load to spike, even 
> though it's not
> > statically linked against an unpatched glibc.
> Maybe it is the version of glibc i'm using? it is unstable, 
> the load spikes
> to +50 in 2 minutes.
> (it is the glibc package form Slackware 8.1)
> 
> 
> > Could you please give this one a try? If this one solves the
> > problem, I will publish it on our download pages ASAP:
> >
> > 
> http://work.mysql.com/~lgrimmer/mysql-max-3.23.54c-pc-linux-i6
> 86.tar.gz
> 
> Ok, this one works without the spikes,
> normally the load would jump from ~1 to +50, now it is stable 
> at 1 again.
> And if this one works for me, it'll work for almost everyone ;)
> (kinda loaded server, +1000 queries/s spikes and avg of +350 q/s)
> 
> -kees
> 
> 





ALTER an auto_increment column

2002-12-16 Thread Lopez David E-r9374c
Guru's

Problem is type SMALLINT needs to be MEDIUMINT (MyISAM).

  column: id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT

My solution is to use the following ALTER statement:

  ALTER TABLE messages CHANGE id 
 id MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT

My question is: what happens to the sequence of
numbers in this primary key?

The manual states that if you don't change the
AUTO_INCREMENT column, the sequence numbers will
not be affected. It also says that if you drop
an AUTO_INCREMENT column and then add another
AUTO_INCREMENT column, MySQL will resequence the
new column.

Well I'm kinda in-between these two extremes. I want
to increase the width but keep the sequence. As usual,
this is a primary key which will affect other tables
(foreign keys). The total rows affected is 14 million.

BTW I expected the rows would be around 50. It's now
near 60k. So much for my foresight.

Thanks a ton

David


---
David E Lopez
email: [EMAIL PROTECTED]





Understanding MySQL column type SET

2002-10-24 Thread Lopez David E-r9374c
Gurus

I'm having trouble understanding the column type SET.
To explain my problem, I'll create a table, populate
it, and do selects.


mysql> CREATE TABLE settest( chain SET( "A", "B", "C" ) NOT NULL, 
  UNIQUE INDEX ichain( chain ) );

mysql> INSERT INTO settest SET chain="A,C";

mysql> SELECT * from settest;
+---+
| chain |
+---+
| A,C   |
+---+

mysql> SELECT * FROM settest WHERE chain="A,C";
+---+
| chain |
+---+
| A,C   |
+---+

mysql> SELECT * FROM settest WHERE chain="C,A";
Empty set (0.00 sec)

 or 

mysql> SELECT * FROM settest WHERE FIND_IN_SET("C,A", chain);
Empty set (0.01 sec)

According to the MySQL Reference Manual, this second SELECT statement 
should work. But in version()=3.23.49-nt-log, it does not. 

In my understanding of set theory, if a SET has A,B,C

  A,C == C,A

Can anyone tell me what I'm missing?

BTW, for my application, I'm only interested in unique entries.
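
For what it's worth, FIND_IN_SET() takes a single set member as its
first argument, not a comma-separated list. A sketch of a query that
matches rows containing both members (using the settest table above):

```sql
-- FIND_IN_SET() tests one member at a time, so check each member
-- separately; this matches "A,C" regardless of the order in which
-- the members were written at insert time.
SELECT * FROM settest
 WHERE FIND_IN_SET('A', chain) AND FIND_IN_SET('C', chain);
```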
---
David E Lopez





RE: retyping data

2002-10-20 Thread Lopez David E-r9374c
Hasan

Try typing your select statement in a text editor.
Open a mysql client connection and copy paste. This
gets around the problem.

For multiple sql statements, put them in a file and
redirect it to the mysql client:

shell> mysql -u hasan -p < file.sql

David

> -Original Message-
> From: Hasan / Impex Holidays Maldives [mailto:hasan@;impex.com.mv]
> Sent: Thursday, October 17, 2002 4:22 AM
> To: [EMAIL PROTECTED]
> Subject: retyping data
> 
> 
> hi,
> 
> I am using win-xp and command prompt to access mysql.
> I try to retype a very long insert line into the command 
> prompt, but the
> command prompt would not allow me to type all my insert 
> query. There seems
> to be a limit to the characters I can insert when in mysql.
> I can type any amount of characters when in C: prompt.
> 
> Does anyone know a solution, I have tried changing the properties but
> nothing seems to help.
> 
> Thanks,
> Hasan
> 
> 





RE: Query for multiple rows

2002-10-16 Thread Lopez David E-r9374c

Fibilt

try: 

  SELECT id, concat(lname, fname) as name, count(*) as cnt 
FROM table1 
GROUP BY name 
HAVING cnt > 1;

This is close anyhow.
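
A variant that avoids CONCAT altogether (grouping on the two columns
directly, so that 'ab'+'c' and 'a'+'bc' don't collide into the same
group) might be:

```sql
-- sketch: group on the name columns directly; the table and column
-- names (table1, lname, fname) follow the reply above
SELECT lname, fname, COUNT(*) AS cnt
  FROM table1
 GROUP BY lname, fname
HAVING cnt > 1;
```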

David

> -Original Message-
> From: Phillip S. Baker [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, October 16, 2002 1:17 PM
> To: [EMAIL PROTECTED]
> Subject: Query for multiple rows
> 
> 
> Greetings all,
> 
> I have a users table in the DB.
> I am importing allot of existing data from various places 
> into this users 
> table.
> There is a strong likelihood of the same user getting entered twice.
> Though each record will have a separate and unique ID, I want 
> to be able to 
> query the table and look for duplicate records based on first 
> name last name.
> 
> So for clarity I want to execute a query to one table.
> The results I want displayed are a listing of records that 
> have the same 
> first name and last name of other records in that same table.
> 
> Am I making this clear and what do I need to do?
> 
> Thanx
> 
> Fibilt
> 
> 





RE: killing a hung thread

2002-10-16 Thread Lopez David E-r9374c

Inandjo

Try setting the variable "wait_timeout" to 300 seconds.
That way, mysqld will close the idle connection automatically.

Set the variable in the file "my.cnf".
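
As a sketch, the my.cnf fragment would look like this (assuming
wait_timeout is the variable that applies to your idle clients):

```
[mysqld]
# close connections that have been idle for 5 minutes
wait_timeout = 300
```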

David


> -Original Message-
> From: Inandjo Taurel [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, October 16, 2002 9:54 AM
> To: [EMAIL PROTECTED]
> Subject: killing a hung thread
> 
> 
> hi,
> i'd like to know if there is a way to tell the mysqld to 
> remove a thread 
> that has hang, without human intervention.
> Let say an update query is fired to the engine, and the 
> client just freezes 
> and we have to reboot the machine manually:  how can i make 
> sure that mysqld 
> will remove that thread from the process list thus freeing 
> the table locked 
> during the update??
> 
> SQLSQL
> 
> 
> 
> _
> Broadband? Dial-up? Get reliable MSN Internet Access. 
> http://resourcecenter.msn.com/access/plans/default.asp
> 
> 





RE: time stamp

2002-08-28 Thread Lopez David E-r9374c

Steve

Try: SELECT MAX(field_timestamp) from table1;

David

> -Original Message-
> From: Steve Buehler [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 27, 2002 5:21 PM
> To: mysql; PHP
> Subject: time stamp
> 
> 
> I am using PHP with MySQL and have a timestamp field in my db 
> table.  What 
> would be the easiest way to get the newest timestamp out of the db?
> 
> Thanks in Advance
> Steve
> 
> 
> --
> This message has been scanned for viruses and
> dangerous content by MailScanner, and is
> believed to be clean.
> 
> 





RE: Indexing question

2002-08-27 Thread Lopez David E-r9374c

Ben

It would appear that the deletion of rows may be the problem.
After deleting rows older than 6 months, do you optimize the
table?

As I understand it, MySQL does not physically remove deleted
rows; it only marks each row as deleted. That way, deletes are
fast. However, it slows down queries and inserts. I've tested
this by deleting half of the rows in a table: the before and
after file sizes are equal.

I don't think you need the ORDER BY Account clause. The GROUP
BY Account will automatically sort it. I read it somewhere.
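
A sketch of the maintenance step, using the MessageStatus table from
the quoted question:

```sql
-- reclaim the space left by deleted rows and defragment the data file
OPTIMIZE TABLE MessageStatus;
```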

David

> -Original Message-
> From: Ben Holness [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 27, 2002 1:20 AM
> To: Mysql z_mailing
> Subject: Indexing question
> 
> 
> Hi all,
> 
> I would like to create an index to speed up the following query:
> 
> SELECT Account, Status, count(*) From MessageStatus WHERE sentDate >
> '(variable)' AND sentDate < '(variable)' GROUP BY 
> Account,Status ORDER BY
> Account
> 
> sentDate is a timestamp(14), Account and Status are both varchars.
> 
> The table also contains another couple of columns and already 
> has an index
> on sentDate.
> 
> So two questions:
> 
> 1. What index should I create (my original guess was just 
> sentDate, but now
> I am thinking sentDate(8),Account,Status)
> 
> 2. Does the fact that I created the sentDate index, which is 
> not being used
> as it does not speed up the query, detriment the performance 
> significantly?
> Should I remove that index as it is not being used?
> 
> The table currently has around 800,000 entries in it and 
> grows by between
> 4,000 and 100,000 entries a day.
> 
> Entries are deleted once they are 6 months old.
> 
> Many thanks,
> 
> Ben
> 
> 





RE: Slow queries

2002-07-24 Thread Lopez David E-r9374c

Bhavin

I'm doing queries against one large table joined with 4 smaller tables.
The large table is 12-14 million records. With a single-column index,
the select takes 13-15 seconds. With a compound index,
the delay is under one second. Use EXPLAIN to verify which
index is being used, and read the manual to tune your indexes.

The only time it is slower (4-6 seconds) is when the index
is in swap. Can anyone tell me how to prevent Solaris from
moving part of mysqld to swap? I'm thinking a config file -
but where.

David


> -Original Message-
> From: Bhavin Vyas [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, July 24, 2002 3:54 PM
> To: [EMAIL PROTECTED]
> Subject: Slow queries
> 
> 
> Hello,
> I am running mysql in our production environment, which have 
> a few reporting
> tables with millions of rows. Certain queries cascade 2 such 
> tables that
> have millions of rows and there by, there are extremely slow, 
> sometimes
> taking over 10 minutes.
> However, we are ok with that since the size of the data is 
> extremely large.
> However, the problem that we are facing is that when these queries are
> running, other queries, *unrelated to the 2 large tables* are 
> also put on
> hold, so they can't be executed either and basically mysql becomes
> unresponsive until I kill the SLOW query and then everything 
> is back to
> normal.
> Anybody knows why this happens and if there is a solution for 
> this (like
> maybe, reprioritizing the slow query to a very low priority 
> or something). I
> would think that mysql would independently try to execute the 
> other queries
> but that does not seem to be happening.
> 
> Thanks,
> Bhavin.
> 
> 





RE: actual database filesizes

2002-07-08 Thread Lopez David E-r9374c

mike0

.MYI holds the indexes
.MYD holds the data

Can't think why it would suddenly grow in size.
Try the CHECK TABLE command to get more detail.

David

> -Original Message-
> From: //mikezero/ [mailto:[EMAIL PROTECTED]]
> Sent: Monday, July 08, 2002 12:37 PM
> To: [EMAIL PROTECTED]
> Subject: actual database filesizes
> 
> 
> can anyone think of a reason that the .MYD and .MYI files would 
> overnight grow to sizes over 5 gigs.  and which file holds what data? 
>  any help would be greatly appreciated.  i included a copy of 
> the text 
> returned from ls.
> 
> 12 -rw-rw1 mysqlmysql8662 Sep 21  2001 
> track_log.frm
> 5940028 -rw-rw1 mysqlmysql6076641280 May  4 08:02 
> track_log.MYD
> 5725520 -rw-rw1 mysqlmysql5857197056 May  5 04:02 
> track_log.MYI
> 
> //mikezero/
> //radiotakeover
> 
> 





RE: Auto-increment across multiple tables / Atomicity of update statements

2002-06-12 Thread Lopez David E-r9374c

Andy

2) Locks are by thread. If a thread dies, so does
   its lock.

David

> -Original Message-
> From: Andy Sy [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, June 12, 2002 12:03 PM
> To: [EMAIL PROTECTED]
> Subject: Auto-increment across multiple tables / Atomicity of update
> statements 
> 
> 
> In MySQL,
> 
> I want to use an auto-incrementing value that works 
> across multiple tables. For example I have 3 tables 
> forumA, forumB, forumC, each of which has a primary 
> key field called 'msgnum'. However, if I declare 
> 'msgnum' auto_increment, 'msgnum' will not be
> unique across the 3 tables. 
>  
> Thus, instead of using an auto_increment column, I
> made an auxiliary table which will hold the last used 
> 'msgnum' and update it each time a new record is 
> inserted in any of the 3 tables. Since I plan to use 
> this auxiliary table to hold other values as well and 
> INSERTs to the 3 tables may happen extremely often, I 
> would rather not have the overhead of repeatedly LOCKing 
> and UNLOCKing the table.
>  
> In connection with this, the following 2 issues crop 
> up:
>  
> #1) Is the following statement guaranteed atomic?
>  
> UPDATE TBL SET COL=COL+1
>  
> and is there anyway to retrieve the value of COL
> that was last set by the connection that set it?
>  
> #2) If a thread with a LOCK on a table unexpectedly dies 
> without being able to UNLOCK it, does it automatically 
> relinquish the lock?
> 
> 
> 





RE: insert into multiple tables??

2002-05-15 Thread Lopez David E-r9374c

Guy

No. Requires multiple inserts.

David

> -Original Message-
> From: Defryn, Guy [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 15, 2002 4:55 PM
> To: [EMAIL PROTECTED]
> Subject: insert into multiple tables?? 
> 
> 
> 
> 
> Is it possible to insert data into different tables with one insert 
> statement.
> 
> 
> For example , I have one form on my web page and the data needs to go 
> into different tables( first 5 entries to table A and the 
> rest to table 
> B)?
> 
> Mysql
> 





RE: MySQL running out of Hard drive space

2002-04-30 Thread Lopez David E-r9374c

Mike

Try moving the datadir to your home partition.
This is a variable in the my.cnf config file.
I just did this on an NT box. You will have
to stop mysql, change the config file, move
your current data dir to new location and
restart mysql.

David

-Original Message-
From: Mike Mike [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 30, 2002 7:17 AM
To: [EMAIL PROTECTED]
Subject: MySQL running out of Hard drive space


Hello,
I have made a couple of partitions in Linux.  Somehow
MySQL got installed in a partition that doesn't have a
lot of space and I'm running very low. How do I add
more space to that directory or partition?
I have a lot of space on my user partition and my home
partition. Is there a command in mysql to tell it to
go to my user directory or home directory.
Thanks
  --Mike

__
Do You Yahoo!?
Yahoo! Health - your guide to health and wellness
http://health.yahoo.com






RE: table lock - which connection id owns it

2002-04-23 Thread Lopez David E-r9374c

Monty

BTW, it's a thrill to hear from the CTO.

The procedure for locking a table is more complex, but I will
attempt to describe it further for you. Any of the 150 daemons
can insert into the hash tables. The data in the row can be used
by all 150 daemons. The primary key in each hash table is the
"id" field, which is an auto-increment type. This "id" field
is the hash value inserted into the main table as a foreign key.

When a daemon starts up, it reads the hash table, each
entry being a key-value pair. The value is the "id" field. During
the course of logging info to the main table, it may find
an entry that is not in its local hash in RAM. When this
occurs, it does an SQL select on the hash table. If a row is
returned, it inserts the key-value pair into its hash and
inserts an entry into the main table. 

If no row is returned, then the following happens:
  1) lock table ...
  2) select ...
  3) if no row is returned, 
  4)insert ...
  5)use LAST_INSERT_ID() to get the value of the key
  6) else get the key-value pair
  7) unlock table
  8) put key-value pair in RAM hash
  9) insert row in main table (using new key-value pair)
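
A sketch of steps 1-7 in SQL (the table and column names here are
illustrative, not the actual schema):

```sql
LOCK TABLES hash_tbl WRITE;                     -- step 1
SELECT id FROM hash_tbl WHERE name = 'key1';    -- step 2
-- step 3: if the SELECT returned no row ...
INSERT INTO hash_tbl (name) VALUES ('key1');    -- step 4
SELECT LAST_INSERT_ID();                        -- step 5: the new key
-- step 6: ... otherwise use the id the SELECT returned
UNLOCK TABLES;                                  -- step 7
```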

Since >99% of the time, the data hash table information is
already stored in the daemon hash table on initial start up.
We wanted to lock the table so two different daemons would
not enter the same information in the hash tables. Extremely
unlikely, but the boss is paranoid. Or is it more experienced.

Hope that clears up the insertion process on the hash tables. I'm
glad that if a connection is lost in steps 2-6, mysql will
automatically unlock the tables. That will satisfy the db
specification nicely. The architecture for our db came
from the optimization chapter in the manual. Thanks!

David


-Original Message-
From: Michael Widenius [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 12:59 AM
To: Lopez David E-r9374c
Cc: Mysql List (E-mail)
Subject: table lock - which connection id owns it



Hi!

>>>>> "Lopez" == Lopez David > writes:

Lopez> AntiSpam - mysql, sql, query
Lopez> Version: 3.23.49a, MyISAM, NT, Solaris

Lopez> My app is 150 daemons writing logs to mysql. The main 
Lopez> table is of fixed size but depends on four other tables
Lopez> for foreign keys (hash tables). These tables are uploaded
Lopez> once to the daemon at start-up. Occasionally, a new entry 
Lopez> must be written into these hash tables. The procedure is
Lopez> to lock the table, insert entry, get the key (auto-increment
Lopez> field) and release the lock.

A better solution is to use LAST_INSERT_ID() and not use any locks at all.

Lopez> But what if the connection dies during the update process.
Lopez> If this happens, how can I tell which connection id has the 
Lopez> lock so I can kill it? 

If a connection dies, the server will automatically delete all temporary
tables and all table locks.

Regards,
Monty

-- 
For technical support contracts, goto https://order.mysql.com/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Mr. Michael Widenius <[EMAIL PROTECTED]>
 / /|_/ / // /\ \/ /_/ / /__   MySQL AB, CTO
/_/  /_/\_, /___/\___\_\___/   Helsinki, Finland
   <___/   www.mysql.com





table lock - which connection id owns it

2002-04-22 Thread Lopez David E-r9374c


Version: 3.23.49a, MyISAM, NT, Solaris

My app is 150 daemons writing logs to mysql. The main 
table is of fixed size but depends on four other tables
for foreign keys (hash tables). These tables are uploaded
once to the daemon at start-up. Occasionally, a new entry 
must be written into these hash tables. The procedure is
to lock the table, insert entry, get the key (auto-increment
field) and release the lock.

But what if the connection dies during the update process.
If this happens, how can I tell which connection id has the 
lock so I can kill it? 

I have tried:
  show full processlist; - shows connection ids, but no lock info
  show table status;     - will not display until the lock is removed
  show open tables;      - shows the open table with comment in_use=1,
                           but this could be normal operation.

My understanding is that the lock will be removed once the
connection dies (sql command "kill id" or the server terminates
the connection via timeout). I would like to detect when the lock
has been held for a long period of time.

Any ideas? Suggestions?

David
---
David E Lopez
Org:   SPS->ASP->SOC ATIC->CSAM
Phone: 480-413-6251
email: [EMAIL PROTECTED]





RE: Is there a function to sort the result (after using ORDER BY)?

2002-04-12 Thread Lopez David E-r9374c

Keith

What about placing the output of the select in a
temporary table in mysql space and making
a second selection from the temp table. That would
get you what you want.

  1) CREATE TEMPORARY TABLE tmp ...
  2) INSERT INTO tmp SELECT ...
  3) SELECT ... FROM tmp ...
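
A concrete sketch, using the date/name/importance columns from the
question below (the source table name `events` is assumed):

```sql
CREATE TEMPORARY TABLE tmp
    SELECT date, name, importance
      FROM events
     ORDER BY date DESC
     LIMIT 20;                      -- first sort: the 20 newest rows
SELECT * FROM tmp
 ORDER BY importance DESC;          -- second sort: by importance
```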

Just a thought

David

-Original Message-
From: Keith C. Ivey [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 12, 2002 11:40 AM
To: [EMAIL PROTECTED]
Subject: RE: Is there a function to sort the result (after using ORDER
BY)?


On 12 Apr 2002, at 17:54, SED wrote:

> However, I want to get the result
> and then sort the result by importance e.g.:
> 
> result (date, name, impoartance):
> 
> 20020102, John, 3;
> 20020101, Smith, 5;
> 20010101, Sigmund, 1;
> 2101, Sigmund, 8;
> 
> now I want to sort the result by importance (desc) so the final will be:
> 
> 2101, Sigmund, 8;
> 20020101, Smith, 5;
> 20020102, John, 3;
> 20010101, Sigmund, 1;
> 
> Do you have any way of doing it?

Not in MySQL.  You'll have to use a program written in Perl or PHP
or some other language to sort the results again after you retrieve 
them.  It's a very strange thing to want to do: take the top 20 
records sorted in one way and then sort them by something else.  It 
seems to me that it would be confusing to whoever is looking at the 
results, but maybe I'm missing something.

-- 
Keith C. Ivey <[EMAIL PROTECTED]>
Tobacco Documents Online
http://tobaccodocuments.org






RE: select query optimization

2002-04-10 Thread Lopez David E-r9374c

Katen

The manual has a section on optimization. If you have done
deletes on this table, the table may not be optimized.

Try: OPTIMIZE TABLE RADPOOL1

It does lock the table while doing it. 

Other than that, I don't know what else to do.

David



-Original Message-
From: Steve Katen [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 10, 2002 8:47 AM
To: Lopez David E-r9374c; [EMAIL PROTECTED]
Subject: RE: select query optimization


i made the change, but it looks like it didn't speed the query up at all.

here are the results from the first explain:
mysql> explain select TIME_STAMP, YIADDR from RADPOOL where STATE=0 and 
POOL='GLOBAL-POOL-SJC' ORDER BY TIME_STAMP limit 1;
+---------+------+-----------------------+------------+---------+-------+------+----------------------------+
| table   | type | possible_keys         | key        | key_len | ref   | rows | Extra                      |
+---------+------+-----------------------+------------+---------+-------+------+----------------------------+
| RADPOOL | ref  | RADPOOL_I2,RADPOOL_I3 | RADPOOL_I2 |      20 | const | 6893 | where used; Using filesort |
+---------+------+-----------------------+------------+---------+-------+------+----------------------------+
1 row in set (0.00 sec)

i made the changes to accommodate this create statement:
CREATE TABLE `RADPOOL1` (
`id` int(11) NOT NULL auto_increment,
`STATE` tinyint(4) default NULL,
`TIME_STAMP` int(11) NOT NULL default '0',
`EXPIRY` int(11) default NULL,
`USERNAME` char(35) default NULL,
`POOL` char(20) NOT NULL default '',
`YIADDR` char(15) NOT NULL default '',
PRIMARY KEY  (`id`),
UNIQUE KEY `RADPOOL_I` (`YIADDR`),
INDEX ipoolstate( POOL, STATE )
);

here are the new explain results:
mysql> explain select TIME_STAMP, YIADDR from RADPOOL1 where STATE=0 and 
POOL='GLOBAL-POOL-SJC' ORDER BY TIME_STAMP limit 1;
+----------+------+---------------+------------+---------+-------------+------+----------------------------+
| table    | type | possible_keys | key        | key_len | ref         | rows | Extra                      |
+----------+------+---------------+------------+---------+-------------+------+----------------------------+
| RADPOOL1 | ref  | ipoolstate    | ipoolstate |      22 | const,const | 6011 | where used; Using filesort |
+----------+------+---------------+------------+---------+-------------+------+----------------------------+
1 row in set (0.00 sec)

katen

At 08:35 AM 4/10/2002 -0700, Lopez David E-r9374c wrote:
>Katen
>
>Try using a compound index with STATE and POOL
>
>  INDEX ipoolstate( POOL, STATE )
>
>Use EXPLAIN SELECT  to see what mysql thinks.
>
>David
>
>PS anybody know if KEY is the same as INDEX?
>
>-Original Message-
>From: Steve Katen [mailto:[EMAIL PROTECTED]]
>Sent: Wednesday, April 10, 2002 8:07 AM
>To: [EMAIL PROTECTED]
>Subject: select query optimization
>
>
>i have been working on getting this query as fast as possible and figured
>it was time to come to the mailing list.
>
>the below table currently holds about 43 thousand records with potential to
>grow to around 1,000,000 records.
>
>CREATE TABLE `RADPOOL` (
>`id` int(11) NOT NULL auto_increment,
>`STATE` tinyint(4) default NULL,
>`TIME_STAMP` int(11) NOT NULL default '0',
>`EXPIRY` int(11) default NULL,
>`USERNAME` char(35) default NULL,
>`POOL` char(20) NOT NULL default '',
>`YIADDR` char(15) NOT NULL default '',
>PRIMARY KEY  (`id`),
>UNIQUE KEY `RADPOOL_I` (`YIADDR`),
>KEY `RADPOOL_I2` (`POOL`),
>KEY `RADPOOL_I3` (`STATE`),
>KEY `RADPOOL_I4` (`TIME_STAMP`)
>);
>
>the query below became slow when I added the ORDER BY clause to it.
>
>select TIME_STAMP, YIADDR from RADPOOL where STATE=0 and
>POOL='GLOBAL-POOL-SJC' ORDER BY TIME_STAMP limit 1;
>
>the output received from the mysql client is "1 row in set (0.09 sec)"
>
>is there a way to speed this query up when using the ORDER BY?
>
>thanks in advance,
>
>katen
>
>
>
>
>
>-
>Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
>To request this thread, e-mail <[EMAIL PROTECTED]>
>To unsubscribe, e-mail <[EMAIL PROTECTED]>
>Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php






RE: select query optimization

2002-04-10 Thread Lopez David E-r9374c

Katen

Try using a compound index with STATE and POOL

 INDEX ipoolstate( POOL, STATE )

Use EXPLAIN SELECT  to see what mysql thinks.

David

PS anybody know if KEY is the same as INDEX?
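(On the PS: yes -- in MySQL's CREATE TABLE syntax, KEY is a synonym for INDEX, so the two definitions below build the same non-unique index. A small illustration, not from the original thread:)

```sql
CREATE TABLE t1 (a INT, KEY   k1 (a));
CREATE TABLE t2 (a INT, INDEX k1 (a));
-- SHOW INDEX FROM t1 and SHOW INDEX FROM t2 report an identical index k1.
```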

-Original Message-
From: Steve Katen [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 10, 2002 8:07 AM
To: [EMAIL PROTECTED]
Subject: select query optimization


i have been working on getting this query as fast as possible and figured 
it was time to come to the mailing list.

the below table currently holds about 43 thousand records with potential to 
grow to around 1,000,000 records.

CREATE TABLE `RADPOOL` (
   `id` int(11) NOT NULL auto_increment,
   `STATE` tinyint(4) default NULL,
   `TIME_STAMP` int(11) NOT NULL default '0',
   `EXPIRY` int(11) default NULL,
   `USERNAME` char(35) default NULL,
   `POOL` char(20) NOT NULL default '',
   `YIADDR` char(15) NOT NULL default '',
   PRIMARY KEY  (`id`),
   UNIQUE KEY `RADPOOL_I` (`YIADDR`),
   KEY `RADPOOL_I2` (`POOL`),
   KEY `RADPOOL_I3` (`STATE`),
   KEY `RADPOOL_I4` (`TIME_STAMP`)
);

the query below became slow when I added the ORDER BY clause to it.

select TIME_STAMP, YIADDR from RADPOOL where STATE=0 and 
POOL='GLOBAL-POOL-SJC' ORDER BY TIME_STAMP limit 1;

the output received from the mysql client is "1 row in set (0.09 sec)"

is there a way to speed this query up when using the ORDER BY?

thanks in advance,

katen









RE: select query optimization

2002-04-03 Thread Lopez David E-r9374c

Steve

Have you tried using compound index:

  INDEX( POOL, STATE )

Just a thought.

David

-Original Message-
From: Steve Katen [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 03, 2002 10:13 AM
To: [EMAIL PROTECTED]
Subject: select query optimization


i am running a basic install of mysql version 3.23.49-log and i have a 
pretty simple query that is not fast enough for my needs.  any thoughts on 
optimization would help.

the table description is below:
mysql> desc RADPOOL;
++--+--+-+-++
| Field  | Type | Null | Key | Default | Extra  |
++--+--+-+-++
| id | int(11)  |  | PRI | NULL| auto_increment |
| STATE  | int(11)  |  | MUL | 0   ||
| TIME_STAMP | int(11)  | YES  | | NULL||
| EXPIRY | int(11)  | YES  | | NULL||
| USERNAME   | char(50) | YES  | | NULL||
| POOL   | char(20) | YES  | MUL | NULL||
| YIADDR | char(50) |  | UNI | ||
++--+--+-+-++
7 rows in set (0.00 sec)

the query:
select TIME_STAMP, YIADDR from RADPOOL where POOL='GLOBAL-POOL-SJC' and 
STATE=0 order by TIME_STAMP limit 1

the mysql client shows that it takes .09 seconds
mysql> select TIME_STAMP, YIADDR from RADPOOL where POOL='GLOBAL-POOL-SJC' 
and STATE=0 order by TIME_STAMP limit 1;
++-+
| TIME_STAMP | YIADDR  |
++-+
| 1016494596 | 66.81.70.26 |
++-+
1 row in set (0.09 sec)

how can i optimize the table, the database, or anything else that will 
speed the query up?

katen







RE: auto_increment question

2002-03-29 Thread Lopez David E-r9374c

Jeff

You can make a clean start but it takes work.

 1) create new table clients_tmp - same fields - no indexes.
add a field and make it a simple integer - say idx_int

 2) insert all the fields into clients_tmp from clients.
place the clients.idx field into clients_tmp.idx_int

a) At this point, you will have an idx starting at 1 and
ending at however many rows you have.

b) the idx_int is your cross reference between old and new.

 3) replace the old client id with the new one from 2-b.

 4) rename the clients table and drop the idx_int field.
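A sketch of those steps in SQL (schema guessed from Jeff's message below; untested, and note the caveat on step 3):

```sql
-- Step 1: new table with a fresh AUTO_INCREMENT idx plus a cross-reference column
CREATE TABLE clients_tmp (
  idx     MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
  name    VARCHAR(255) NOT NULL,
  idx_int MEDIUMINT UNSIGNED NOT NULL,   -- holds the OLD idx
  PRIMARY KEY (idx)
);

-- Step 2: copy the rows; idx renumbers them 1..N in insertion order
INSERT INTO clients_tmp (name, idx_int)
SELECT name, idx FROM clients;

-- Step 3: repoint web_hits at the new ids via the cross-reference
-- (multi-table UPDATE needs MySQL 4.0+; on 3.23 this step would be
-- done in application code, as noted above)
UPDATE web_hits, clients_tmp
   SET web_hits.client = clients_tmp.idx
 WHERE web_hits.client = clients_tmp.idx_int;

-- Step 4: swap the tables and drop the helper column
DROP TABLE clients;
ALTER TABLE clients_tmp DROP COLUMN idx_int, RENAME TO clients;
```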

I'm assuming a mixture of sql code and application code.
The above is a lot of work simply to avoid gaps in the index 
numbers. There is probably another way - isn't there always?
No doubt...

David

PS By destroying the client idx field, you lose information.
That is, how many total clients have been to your site. Total
current clients is the number of rows in the clients table. Seems
like management would like to know that. If you have a 
timestamp field, you could track it over time. Hmm... I've
crossed over to the suit side. Sorry.

-Original Message-
From: Jeff Habermann [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 28, 2002 4:45 PM
To: [EMAIL PROTECTED]
Subject: auto_increment question


This is my first time posting to this list, so please forgive me if I am not
doing this correctly.  I will break this post into two parts: what I
want to know and why.  Is there a way to get auto_increment fields to fill
in gaps in the numbering sequence if some records have been deleted?

Here is an example of my problem:
I have two tables in MySQL:
1.web_hits:  date datetime not null, client mediumint unsigned not null, URL
varchar(500) not null
2.clients: idx mediumint unsigned not null unique auto_increment, name
varchar(255) not null

"web_hits" holds information about web traffic on our network.  There are
more fields that what i have listed, but this is all that is needed for this
example.  It records the time of the web hit, the URL, and the ID of person
who made the hit.  The ID can be looked up in the "clients" table.

Now, after a while the web_hits table starts to get really big, because of
this we start deleting the old entries from the web_hits table based on the
date field.  Because of this deletion some clients in the clients table may
not be used any more.  So we have a query that will delete those from the 
clients table as
well.  Now, because of this deletion, there are sequence gaps in the "idx"
field.  We would like to be able to use those numbers again for incoming
clients...Is this possible?

Thank you,
Jeff









Replication failed: TRUNCATE TABLE command

2002-03-27 Thread Lopez David E-r9374c

Gurus

I'm new to replication. Set it up this weekend and has worked
flawlessly for several days. Then I used the command:

  TRUNCATE TABLE tbl;

That worked in master, but was not replicated in slave.
No mention of this in manual. Using version 3.23.49 on
master and slave. Master is Solaris and slave is NT.

The truncate command is much, much faster than

  DELETE FROM tbl;

I can see why some clever person invented it. BTW, no
mention of any problems in ref. manual or books.

Any thoughts?

David

---
David E Lopez
Org:   SPS->ASP->SOC ATIC->CSAM
Phone: 480-413-6251
email: [EMAIL PROTECTED]





RE: How can I Sort by Relevance?

2002-03-27 Thread Lopez David E-r9374c

Walter

Try adding to your select statement: 

  ORDER BY DESCRIPTION ASC

David
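A closer cut at relevance ranking (my own sketch, not from the thread): sort on match-quality expressions before falling back to alphabetical order. MySQL evaluates comparisons as 1/0, so they can be used directly in ORDER BY:

```sql
SELECT *
  FROM PROFILES
 WHERE UCASE(DESCRIPTION) LIKE '%A%'
 ORDER BY
       (UCASE(DESCRIPTION) = 'A') DESC,            -- exact matches first
       LENGTH(DESCRIPTION)
         - LENGTH(REPLACE(UCASE(DESCRIPTION), 'A', '')) DESC,
                                                   -- then by number of A's
       DESCRIPTION;                                -- alphabetical tie-break
```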

-Original Message-
From: Walter D. Funk [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 27, 2002 11:05 AM
To: [EMAIL PROTECTED]
Subject: How can I Sort by Relevance?


I have a query like this

select * from PROFILES where UCASE(DESCRIPTION) like '%A%'

the result of this will be a set in which every member contains at least an
"A", but I'd like to sort them so that the result set looks like this


first: "A"
second:  "AAA"
third:  "any string containing an A"
and so on

I'd like to display the results this way; in this case the first result is much
more relevant than the third one.
But in practice I get all the results mixed up

i will appreciate any help

thanx







RE: question about converting isam to myisam tables (shortcut!)

2002-03-26 Thread Lopez David E-r9374c

Hank

Instead of dropping down to OS cp commands, insert the data from
table TBL_FLAT to TBL_INDX.

 INSERT INTO TBL_INDX SELECT * FROM TBL_FLAT;

This will be valid in mysql. The insert will be fast since it
is a single insert statement. Let the list know if this works
for you, or if your method is valid as well.

David

-Original Message-
From: Henry Hank [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, March 26, 2002 3:37 PM
To: [EMAIL PROTECTED]
Subject: question about converting isam to myisam tables (shortcut!)



Hello all,

  I'm in the process of building a new mysql db server box to migrate a
database off an existing web server.  The webserver is running mysql 3.22.29,
and the new box is running 3.23.  The two biggest tables are about 20 million
records and the data and index files are each just over 1GB, and each have a
primary key and three or four non-unique indexes. 

 I've copied the data files from the old box to the new, but when I use the
ALTER TABLE...TYPE=MYISAM, it takes forever since it is rebuilding the indexes
by keycache.  So this is what I'm doing instead, and I'm asking if there are
any problems associated with this approach:

- the existing ISAM Table is called TBL_ISAM
- Create a new/empty myisam table with no indexes named TBL_FLAT
- Create a new/empty myisam table with all indexes named TBL_INDX

mysql> insert into TBL_FLAT select * from TBL_ISAM;  (runs very quickly - not
updating any indexes)
mysql> flush tables;

the in the OS shell (linux) do the following:

$ cp TBL_INDX.frm TBL_FLAT.frm
$ cp TBL_INDX.MYI TBL_FLAT.MYI

then to rebuild all the indexes via sorting, use myisamchk:

$ myisamchk -r -q TBL_FLAT  

I'm left with a fully populated and indexed TBL_FLAT table, which seems to work
perfectly as a converted copy of the original ISAM table.

And in addition, I did first try "myisamchk -r -k0" to disable the indexes and
do the insert into the TBL_INDX table, but mysql still insists on building the
primary key via the keycache - and it takes way too long.  My method allows me
to populate the data file with no indexes, and then fool mysql into rebuilding
all the indexes at once via sorting.

Does anyone think there are any risks/problems with this approach?

Thanks in advance!
 -Hank









Deleting rows from logging db efficiently

2002-03-04 Thread Lopez David E-r9374c

Using MySQL 3.23.40

App is logging db with 130 clients to a central MySQL db 
with 2 million events per day. Aged events of over 7 days
are no longer required. The central table is myisam type
with fixed size (datetime, enum and foreign keys only).
Selects are done infrequently while inserts are done 
throughout the day. What is the fastest delete possible 
and still keep table optimized?

Here are two ways that were developed. Either algorithm 
is done once a day when server activity is at a minimum. 
The first technique is in the documentation, the second 
technique is twice as fast.

Delete rows and optimize

 DELETE FROM table WHERE etime<"2002-02-25";
 OPTIMIZE TABLE table;

The time to delete 2 million rows is 24 minutes.
The time to optimize is 18 minutes.
Total time is 42 minutes.

Transfer only newest data to no-index temporary table

  LOCK TABLES table WRITE, table_tmp WRITE;
  INSERT INTO table_tmp 
 SELECT * FROM table WHERE etime>"2002-02-25";
  TRUNCATE TABLE table;
  UNLOCK TABLES;
  INSERT INTO table
 SELECT * FROM table_tmp;

The time to insert 10 million rows into temporary table
  is 3 minutes.
The time to truncate table is 5 seconds.
The time to insert from temporary table back to primary
  table is 18 minutes.
Total time is 21 minutes.

Does anyone know of a different approach of deleting 
rows while keeping the table optimized? Would welcome
any comments.
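One further approach, offered as a suggestion rather than something benchmarked here: rotate the live table with an atomic RENAME TABLE so rows are never deleted in place:

```sql
-- Assumes an empty table_new with the same definition has been prepared.
-- The multi-table RENAME is atomic, so clients never see a missing table:
RENAME TABLE table TO table_old, table_new TO table;

-- Copy the rows still worth keeping back into the live table,
-- then discard the old one:
INSERT INTO table SELECT * FROM table_old WHERE etime > "2002-02-25";
DROP TABLE table_old;
```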

David

PS1 By an optimized table, I mean no deletes are done to the
table without a subsequent OPTIMIZE. If deletes and inserts
are done simultaneously, query times go up drastically.
This slowdown is documented.

PS2 Hardware is a 4 cpu Solaris with key_buffer=1024M &
thread_concurrency=8. Only other db in mysql is used 
infrequently.
---
David E Lopez
email: [EMAIL PROTECTED]
