Re: Help, slave won't stay running!

2004-08-13 Thread matt ryan
I reset the master (FLUSH LOGS, RESET MASTER); SHOW MASTER STATUS
shows FINANCE-bin.186 at position 79,
so I started the slave:
CHANGE MASTER TO
MASTER_HOST='192.168.1.168',
MASTER_USER='repl',
MASTER_PASSWORD='Daredevil22',
MASTER_LOG_FILE='FINANCE-bin.186',
MASTER_LOG_POS=79;
start slave;
and I get this error after a few seconds..
040813  8:55:15  Slave SQL thread initialized, starting replication in 
log 'FINANCE-bin.186' at position 79, relay log 
'.\databasebackup-relay-bin.001' position: 4
040813  8:55:15  Slave I/O thread: connected to master 
'[EMAIL PROTECTED]:3306',  replication started in log 'FINANCE-bin.186' 
at position 79
040813  8:55:39  Error reading packet from server: binlog truncated in 
the middle of event (server_errno=1236)
040813  8:55:39  Got fatal error 1236: 'binlog truncated in the middle 
of event' from master when reading data from binary log
040813  8:55:39  Slave I/O thread exiting, read up to log 
'FINANCE-bin.186', position 79

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


Re: Help, slave won't stay running!

2004-08-13 Thread matt ryan
More info..
I don't see anything wrong with the binlog the slave has
E:\mysql\data>mysqlbinlog databasebackup-relay-bin.001
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
# at 4
#691231 19:00:00 server id 1  log_pos 0 Rotate to FINANCE-bin.186  pos:79
# at 46
#691231 19:00:00 server id 1  log_pos 0 Rotate to FINANCE-bin.186  pos:79

E:\mysql\data


Re: Replace delayed locks table

2004-08-13 Thread matt ryan

Replace does a delete followed by an insert.
Ahh, I'm testing InnoDB on our tables with this problem


Re: Help, slave won't stay running!

2004-08-13 Thread matt ryan
[EMAIL PROTECTED] wrote:
Have you considered that a proxy server may be in the way. I have been 
watching this thread but I can't remember if you said anything about 
your network connectivity (sorry!). I have seen several programs make 
what they thought was a connection then fail because they don't know 
they are connecting through a proxy and not the real server. Also if 
your proxy is dropping your session, it could cause the same 
interrupted behavior.

my 2 cents
Both servers are connected to the same switch, no proxy servers between 
them.

The slave will connect, and will process all the way up to the current event.
As soon as it hits the current event it dies; all I have to do is wait 5
minutes for more events to build up, then START SLAVE and it takes off again.

Matt


Re: Replace delayed locks table

2004-08-13 Thread matt ryan
matt ryan wrote:

Replace does a delete followed by an insert.
Ahh, I'm testing InnoDB on our tables with this problem
I've switched to InnoDB but performance isn't very good
while the insert runs, here's what I get for performance
select count(*) from rondon;
1 row in .13 sec
select count(*) from rondon;
1 row in 21.88 sec
select count(*) from rondon;
1 row in 42.47 sec
select count(*) from rondon;
1 row in 1 min 47.69 sec
not sure why the first was so fast, the rest SUCK


Re: Replace delayed locks table

2004-08-13 Thread matt ryan
matt ryan wrote:

Replace does a delete followed by an insert.
Ahh, I'm testing InnoDB on our tables with this problem
Doh, another problem:
InnoDB has no MERGE option. I have too much data, and the only way to 
deal with it is to partition the data and then tie it together with 
MERGE tables.

Unfortunately InnoDB will not work for me :(
Anybody know if SQL Server Desktop supports what I need?  I know Oracle 
does, but the cost is an issue; MaxDB costs too much too.



GROUP BY optimization headscratcher

2004-08-13 Thread Matt Eaton
Hi all.  Got a weird one.  MySQL 4.0.20.  Let's say for the sake of
argument that I've got two tables, T1 & T2.  Here are the two create
statements:
 
CREATE TABLE `T1` (
  `guid` smallint(5) unsigned NOT NULL default '0',
  `qid` smallint(5) unsigned NOT NULL default '0',
  `a` tinyint(2) NOT NULL default '-2',
  `d` tinyint(2) NOT NULL default '-2',
  KEY `IX_FW_qid` (`qid`),
  KEY `IX_FW_d` (`d`)
) TYPE=HEAP 
 
CREATE TABLE `T2` (
  `guid` mediumint(8) unsigned NOT NULL default '0',
  `qid` tinyint(3) unsigned NOT NULL default '0',
  `a` tinyint(4) NOT NULL default '0',
  `d` decimal(1,0) unsigned NOT NULL default '0',
  PRIMARY KEY  (`guid`,`qid`),
  KEY `IX_s23aw_d` (`d`),
  KEY `IX_s23aw_qid` (`qid`)
) TYPE=HEAP
 
So, in T1 there are as many records as there are qids (around 150)
(there's only one user in T1).  In T2 there are as many records as there
are qids * user ids (around 497,964).
 
The weirdness comes when I try to join them and do a group by at the
same time.  The following:
 
SELECT T1.guid, sum(T1.d + T2.d) as theSum
FROM T1, T2
WHERE T1.qid=T2.qid
GROUP BY T1.guid
 
takes 1 second to run.  This seems absurdly long.  EXPLAIN shows that
everything seems fine (although it shows that T2 has a key length of 2,
which is weird, and I don't really understand), and if I do the same
query without the GROUP BY or the SUM, it runs in 0.01 seconds.  So, I
ran the same query without the GROUP BY and the SUM and stored the
result set in a temporary table, and then did a GROUP BY on guid for
that temporary table, and that ran in 0.01 seconds, so I've got a solution
to my problem already, even though it's a two-query solution.  However,
I'm really curious as to why MySQL takes so long on my original query.
It seems like it's creating a temporary table in memory and doing the
GROUP BY on that, which is exactly what I was doing the second time
around, and I find it hard to believe that I'm that much smarter than
the MySQL preprocessor.  So, if anyone has any thoughts on this strange
disparity in time, I'd be interested to hear them!  Thanks a lot!
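The single-query vs. two-query behavior can be sketched on toy data (a rough sketch using Python's sqlite3, with invented rows; the timing gap won't reproduce here, only the equivalence of the two result sets):

```python
import sqlite3

# Toy versions of T1 and T2; row values are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T1 (guid INTEGER, qid INTEGER, a INTEGER, d INTEGER);
CREATE TABLE T2 (guid INTEGER, qid INTEGER, a INTEGER, d INTEGER,
                 PRIMARY KEY (guid, qid));
INSERT INTO T1 VALUES (1, 10, 0, 2), (1, 11, 0, 3);
INSERT INTO T2 VALUES (100, 10, 0, 1), (100, 11, 0, 1), (200, 10, 0, 5);
""")

# One step: join and GROUP BY in a single query (the slow form)
one_step = conn.execute("""
    SELECT T1.guid, SUM(T1.d + T2.d) AS theSum
    FROM T1, T2 WHERE T1.qid = T2.qid
    GROUP BY T1.guid
""").fetchall()

# Two steps: materialize the join, then group (the fast workaround)
conn.execute("""
    CREATE TEMP TABLE joined AS
    SELECT T1.guid AS guid, T1.d + T2.d AS s
    FROM T1, T2 WHERE T1.qid = T2.qid
""")
two_step = conn.execute(
    "SELECT guid, SUM(s) FROM joined GROUP BY guid").fetchall()
```

Both forms return identical results; the disparity reported above is purely in how MySQL 4.0 executes the grouped join.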
 
-Matt


Help, slave won't stay running!

2004-08-12 Thread matt ryan
I can't keep the slave up for more than 10 minutes;
I'm constantly getting these errors:
040812 10:32:25  Error reading packet from server: binlog truncated in 
the middle of event (server_errno=1236)
040812 10:32:25  Got fatal error 1236: 'binlog truncated in the middle 
of event' from master when reading data from binary log
040812 10:32:25  Slave I/O thread exiting, read up to log 
'FINANCE-bin.185', position 284963878

both servers have plenty of free space
Here is the master setup..
skip-locking
set-variable= key_buffer_size=1000M
set-variable=bulk_insert_buffer_size=256M
set-variable=delay_key_write=ALL
set-variable=join_buffer_size=256M
set-variable= max_allowed_packet=256M
set-variable= table_cache=512
set-variable= sort_buffer_size=256M
set-variable=tmp_table_size=256M
set-variable= record_buffer=256M
set-variable= thread_cache=8
set-variable=myisam_sort_buffer_size=256M
myisam-recover=BACKUP,FORCE
set-variable=read_buffer_size=256M
set-variable=interactive_timeout=7200
set-variable=wait_timeout=7200
here is the slave setup
skip-locking
set-variable= key_buffer_size=1500M
set-variable=bulk_insert_buffer_size=512M
set-variable=delay_key_write=ALL
set-variable=join_buffer_size=512M
set-variable= max_allowed_packet=384M
set-variable= table_cache=512
set-variable= sort_buffer_size=256M
set-variable=tmp_table_size=400M
set-variable= record_buffer=384M
set-variable= thread_cache=8
set-variable=myisam_sort_buffer_size=256M
myisam-recover=BACKUP,FORCE
set-variable=read_buffer_size=384M
set-variable=interactive_timeout=7200
set-variable=wait_timeout=7200
I'm out of ideas; I've played with buffer sizes and packet sizes, but I still 
get the same error.

My other master/slave has no problems at all; the slave is the same 
server (one box that's a slave for two sites).



Re: Help, slave won't stay running!

2004-08-12 Thread matt ryan
Victor Pendleton wrote:
Can you reset the slave to read the next event in its relay log? If this is not
possible, is refreshing the data from the master a viable option?
 

I can start the slave, and it runs a little while, then stops again.
I can refresh the data from the master; I've done it 25 times at least, 
3 times a week. The database is 90 gig, so it's not very fun!

Every time I resync them, I'll reset the master first, resync, set the slave 
to start on the new master info, and then start the slave; boom, it fails in 
15 min.



Re: Help, slave won't stay running!

2004-08-12 Thread matt ryan
I deleted every table off the slave and reloaded them; I do this twice 
a week because it won't replicate.

The master server has a check & optimize every Sunday.
I had a similar situation one week ago. Found one of the tables (MyISAM) had a 
corrupt index. After fixing it, everything was fine again.

Regards.
 




Replace delayed locks table

2004-08-12 Thread matt ryan
The SQL is: REPLACE DELAYED INTO table (c1, c2, c3) SELECT c1, c2, c3 FROM 
temp table.

This takes 2 hours to complete; the temp table is rather large.
The table being updated is locked the whole time, all web requests are 
locked, and the pages time out.

Is there any way to get this to run without locking the whole table?  I 
thought with MyISAM it would only lock a table if you delete records and 
then insert records; it locks to fill the gaps.
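As background (my sketch, not from the thread): REPLACE deletes any existing row that collides on a unique key, then inserts the new row. SQLite's REPLACE INTO behaves the same way for this case, so a minimal demonstration in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c1 INTEGER PRIMARY KEY, c2 TEXT, c3 TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'old', 'row')")

# Same primary key: the existing row is deleted, then the new one inserted
conn.execute("REPLACE INTO t VALUES (1, 'new', 'row')")
rows = conn.execute("SELECT c1, c2, c3 FROM t").fetchall()
```

The delete half of each REPLACE is what creates the gaps that defeat MyISAM's concurrent inserts.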

If I need to switch to another table type it's an option, having locked 
tables is NOT an option.

Thanks in advance Matt


Re: Help, slave won't stay running!

2004-08-12 Thread matt ryan
Check it out:
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> start slave;
ERROR 1198: This operation cannot be performed with a running slave, run 
SLAVE STOP first
mysql> start slave;
ERROR 1198: This operation cannot be performed with a running slave, run 
SLAVE STOP first
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

I can start the slave over and over; it does one event, stops, I start it and 
it does one event, then stops, over and over and over.

I just keep running START SLAVE really fast to get through the updates.


Cannot connect via TCP (Lost connection to MySQL server during query)

2004-08-12 Thread Matt Winckler
Hi,
I'm running into trouble trying to connect to MySQL (version 4.0.18) via
TCP/IP. Connecting on the actual machine via Unix sockets works just
fine; it's solid as a rock. But as soon as I attempt to connect via TCP
(from either the local machine or a remote machine), mysqld crashes and I
get "ERROR 2013: Lost connection to MySQL server during query".
This happens to me using both version 4.0.18 and 4.0.20 (I was running .20
and downgraded to .18 to see if that fixed the problem. It didn't, but I
haven't upgraded back to .20 again yet.)
So for instance, after starting mysqld, this works fine:
$ mysql -h localhost
But the following command does not:
$ mysql -h 127.0.0.1
ERROR 2013: Lost connection to MySQL server during query
Running mysqladmin version immediately after getting the error confirms
that the server did crash and come back up. If I telnet to 127.0.0.1 port
3306, it just immediately closes the connection. (Connection closed by
foreign host.) Again, this is due to the server crashing.
I haven't been able to get any useful (to me, anyway) information out of
mysqld.err. I get a backtrace, but the stack trace ends in "New values of
fp=0x81bdea4 failed sanity check, terminating stack trace!" (which doesn't
look normal to me). When I try to follow the instructions at
http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html, nm tells me there
are no symbols for mysqld. I'm not familiar with resolving stack traces, so
treat me as a newbie in that regard.
For that matter, it's entirely possible that I'm making a newbie mistake
somewhere else. If that's the case, please point me to a FAQ and flame away.
:) But I've read the docs and Googled this one pretty thoroughly, and
although I've found people that seem to have the same problem, I haven't yet
found the answer to that problem. Let me know if I can provide anything to
make the problem more clear. Thanks for your help!
Matt Winckler



Re: Cannot connect via TCP (Lost connection to MySQL server during query)

2004-08-12 Thread Matt Winckler
Michael Stassen wrote:
What hardware and OS?
Pentium II 300, 192 MB RAM, almost-brand-new 80GB hard drive, running Gentoo 
Linux (kernel 2.4.25-gentoo).

How did you get and install mysql?  MySQL supplied binary? 3rd party 
binary? Built from source?  If the answer is not MySQL supplied 
binary, my first suggestion would be to try that to see if the problem 
goes away.
I installed it from an ebuild, via Gentoo's portage system, compiled from 
source.

Thanks for the suggestion; I'll give the MySQL binaries a shot.


Re: Cannot connect via TCP (Lost connection to MySQL server during query)

2004-08-12 Thread Matt Winckler
Michael Stassen wrote:
What hardware and OS?
How did you get and install mysql?  MySQL supplied binary? 3rd party 
binary? Built from source?  If the answer is not MySQL supplied 
binary, my first suggestion would be to try that to see if the problem 
goes away.

Michael
That did end up being the problem...MySQL's binaries seem to work fine.
Thanks again for your rapid insight!
Matt


Re: Select non-matching fields

2004-08-06 Thread Matt Warden
On Fri, 06 Aug 2004 13:17:42 -0400, Michael Stassen
[EMAIL PROTECTED] wrote:
 You need a LEFT JOIN:
 
SELECT ticket_number
FROM purchased_items LEFT JOIN purchases
ON purchased_items.ticket_number = purchases.ticket_number
WHERE purchases.ticket_number IS NULL;


No, actually he doesn't.

SELECT ticket_number
FROM purchased_items
WHERE ticket_number NOT IN 
  (SELECT ticket_number FROM purchases);

The above will most certainly be faster than any join, because it is
only a couple projections and a simple selection over ticket_number
(which is almost certainly indexed).
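For what it's worth, the two forms do return the same rows; a quick sketch on toy tables (Python's sqlite3, with invented ticket numbers) confirms the equivalence, whatever the optimizer does with each:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchased_items (ticket_number INTEGER)")
conn.execute("CREATE TABLE purchases (ticket_number INTEGER)")
conn.executemany("INSERT INTO purchased_items VALUES (?)",
                 [(1,), (2,), (3,), (4,)])
conn.executemany("INSERT INTO purchases VALUES (?)", [(2,), (4,)])

# Subquery form: tickets never purchased
not_in = sorted(conn.execute("""
    SELECT ticket_number FROM purchased_items
    WHERE ticket_number NOT IN (SELECT ticket_number FROM purchases)
""").fetchall())

# LEFT JOIN anti-join form: unmatched rows surface as NULLs
left_join = sorted(conn.execute("""
    SELECT purchased_items.ticket_number
    FROM purchased_items LEFT JOIN purchases
      ON purchased_items.ticket_number = purchases.ticket_number
    WHERE purchases.ticket_number IS NULL
""").fetchall())
```

Which one is actually faster depends on the optimizer; the sketch only checks that the results agree.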

Although, I suppose if this is only a maintenance query (I suspect it
is), then it probably doesn't matter. But, the bottom line is: if you
can avoid join, do it. There's only so much the query optimizer can
do.




-- 

Matt Warden
Berry Neuroscience Lab
Department of Psychology
Miami University



This email proudly and graciously contributes to entropy.




Re: Select non-matching fields

2004-08-06 Thread Matt Warden
 I believe that when the query engine executes your statement, for each 
 row of purchased_items data it looks at, it will have to run the query 
 SELECT ticket_number FROM purchases, scan those results, and 
 determine if the current row matches. If it is in the list then it will exclude 
 that row from the final results. 
 
I would find it very silly if mysql's query optimizer decided that the
optimized way to execute the query is to execute SELECT ticket_number
FROM purchases N times, where N is the number of rows in
purchased_items. There is no reason why that query would be executed
any more than one time.

FWIW, there is a correlated subquery version of this query (the
example I gave is uncorrelated), and it would be the following:

SELECT ticket_number
FROM purchased_items a
WHERE NOT EXISTS
  (SELECT * FROM purchases b WHERE a.ticket_number=b.ticket_number);

I cannot say for certain that these two queries are not executed by
mysql in the same manner, but I would be surprised if they were. I say
this because you could alter my query and use an explicit set:

SELECT ticket_number
FROM purchased_items
WHERE ticket_number NOT IN
 (112, 456, 942, 356, 623, 783);

I would find it more likely that the above query is executed in the
same way as my original solution.
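One caveat worth adding (mine, not from the thread): NOT IN and NOT EXISTS diverge when the subquery can produce a NULL, because `x NOT IN (..., NULL)` is never true. A sketch with invented data (Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchased_items (ticket_number INTEGER)")
conn.execute("CREATE TABLE purchases (ticket_number INTEGER)")
conn.executemany("INSERT INTO purchased_items VALUES (?)",
                 [(1,), (2,), (3,)])
# Note the NULL ticket_number in purchases
conn.executemany("INSERT INTO purchases VALUES (?)", [(2,), (None,)])

# Correlated NOT EXISTS: unaffected by the NULL
not_exists = sorted(conn.execute("""
    SELECT ticket_number FROM purchased_items a
    WHERE NOT EXISTS
      (SELECT * FROM purchases b WHERE a.ticket_number = b.ticket_number)
""").fetchall())

# NOT IN: every comparison against NULL is unknown, so no row qualifies
not_in = conn.execute("""
    SELECT ticket_number FROM purchased_items
    WHERE ticket_number NOT IN (SELECT ticket_number FROM purchases)
""").fetchall()
```

With no NULLs in the subquery the two forms agree; with a NULL present, NOT IN silently returns nothing.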

And, Emmet Bishop insightfully commented:

 You're making the assumption that he's using 4.1.x. He
 didn't state which version he's using so your solution
 may be of no use to him.

Good point. I often forget about which features are/were unimplemented
in mysql. My apologies.


-- 

Matt Warden
Berry Neuroscience Lab
Miami University
http://mattwarden.com


This email proudly and graciously contributes to entropy.




Re: Large 30 GB Database

2004-07-28 Thread matt ryan
Should I even attempt this using mysql?
Has anyone played with this much data in mysql?
I've got two 100 gig databases in MySQL, with slave replication on both 
of them; the only time I have a problem is table scans. That much data 
will be slow.



binlog truncated in the middle of event

2004-07-28 Thread matt ryan
One of my slaves has decided to stop replicating
every time I reset it, I get this
040728  8:46:46  Error reading packet from server: binlog truncated in 
the middle of event (server_errno=1236)
040728  8:46:46  Got fatal error 1236: 'binlog truncated in the middle 
of event' from master when reading data from binary log
040728  8:46:46  Slave I/O thread exiting, read up to log 
'FINANCE-bin.185', position 419216838

All slaves share the same config, but this one refuses to work. The 
master server has plenty of drive space. I have made changes to the 
buffer sizes; I thought one of those could play into it.

Anyone have a suggestion for a fix?
Both are at 4.0.17 (found bugs in the current version); the master is on 
Windows 2000, the slave is on Server 2003.

From the main server..
+-++
| Variable_name   | Value  |
+-++
| binlog_cache_size   | 32768  |
| bulk_insert_buffer_size | 536870912  |
| delayed_queue_size  | 1000   |
| innodb_additional_mem_pool_size | 1048576|
| innodb_buffer_pool_size | 8388608|
| innodb_log_buffer_size  | 1048576|
| innodb_log_file_size| 5242880|
| join_buffer_size| 536866816  |
| key_buffer_size | 1572864000 |
| max_binlog_cache_size   | 4294967295 |
| max_binlog_size | 1073741824 |
| max_heap_table_size | 16777216   |
| max_join_size   | 4294967295 |
| max_relay_log_size  | 0  |
| myisam_max_extra_sort_file_size | 268435456  |
| myisam_max_sort_file_size   | 2147483647 |
| myisam_sort_buffer_size | 268435456  |
| query_alloc_block_size  | 8192   |
| query_cache_size| 0  |
| query_prealloc_size | 8192   |
| range_alloc_block_size  | 2048   |
| read_buffer_size| 268431360  |
| read_rnd_buffer_size| 262144 |
| sort_buffer_size| 268435448  |
| thread_cache_size   | 8  |
| tmp_table_size  | 419430400  |
| transaction_alloc_block_size| 8192   |
| transaction_prealloc_size   | 4096   |
+-++
From the slave server..
+-++
| Variable_name   | Value  |
+-++
| binlog_cache_size   | 32768  |
| bulk_insert_buffer_size | 8388608|
| delayed_queue_size  | 1000   |
| innodb_additional_mem_pool_size | 1048576|
| innodb_buffer_pool_size | 8388608|
| innodb_log_buffer_size  | 1048576|
| innodb_log_file_size| 5242880|
| join_buffer_size| 536866816  |
| key_buffer_size | 1572864000 |
| max_binlog_cache_size   | 4294967295 |
| max_binlog_size | 1073741824 |
| max_heap_table_size | 16777216   |
| max_join_size   | 4294967295 |
| max_relay_log_size  | 0  |
| myisam_max_extra_sort_file_size | 268435456  |
| myisam_max_sort_file_size   | 2147483647 |
| myisam_sort_buffer_size | 268435456  |
| query_alloc_block_size  | 8192   |
| query_cache_size| 0  |
| query_prealloc_size | 8192   |
| range_alloc_block_size  | 2048   |
| read_buffer_size| 314568704  |
| read_rnd_buffer_size| 262144 |
| sort_buffer_size| 268435448  |
| thread_cache_size   | 8  |
| tmp_table_size  | 33554432   |
| transaction_alloc_block_size| 8192   |
| transaction_prealloc_size   | 4096   |
+-++


Re: binlog truncated in the middle of event

2004-07-28 Thread matt ryan
Update on this: I found that when the slave stops, all I have to do is 
start the slave and it's good again.

Here's what the log shows.. the only thing I did was START SLAVE and 
it picked right back up:

040728  9:25:13  Error reading packet from server: binlog truncated in 
the middle of event (server_errno=1236)
040728  9:25:13  Got fatal error 1236: 'binlog truncated in the middle 
of event' from master when reading data from binary log
040728  9:25:13  Slave I/O thread exiting, read up to log 
'FINANCE-bin.186', position 171309530
040728  9:29:40  Slave I/O thread: connected to master 
'[EMAIL PROTECTED]:3306',  replication started in log 'FINANCE-bin.186' 
at position 171309530
repeats removed
040728  9:29:56  Error reading packet from server: Lost connection to 
MySQL server during query (server_errno=2013)
040728  9:29:56  Slave I/O thread: Failed reading log event, 
reconnecting to retry, log 'FINANCE-bin.186' position 171309530
040728  9:29:58  Error reading packet from server: Lost connection to 
MySQL server during query (server_errno=2013)
040728  9:29:58  Slave I/O thread: Failed reading log event, 
reconnecting to retry, log 'FINANCE-bin.186' position 171309530
040728  9:30:26  Slave: load data infile on table 'e47wk_in' at log 
position 979433898 in log 'FINANCE-bin.185' produced 601027 warning(s). 
Default database: 'finance'



Re: SHOW INNODB STATUS

2004-07-26 Thread Matt Solnit
Mark,

How is it possible to have a hit rate of 1000/1000?  Doesn't the buffer
get initialized by cache misses?

-- Matt [EMAIL PROTECTED]


Re: INNODB SHOW STATUS 
From: Marc Slemko (marcsznep.com)
Date: Wed Apr 21 2004 - 10:29:44 CDT 

On Tue, 20 Apr 2004, Emmett Bishop wrote: 
 Howdy all, 
 
 Quick question about what I'm seeing in the BUFFER 
 POOL AND MEMORY section... 
 
 I've configured the innodb_buffer_pool_size to be 128M 
 and when I do a show variables like 'innodb%' I see 
 
 | innodb_buffer_pool_size | 134217728 | 
 
 So that looks good. However, I see the following in 
 the BUFFER POOL AND MEMORY section of the output from 
 the innodb monitor: 
 
 -- 
 BUFFER POOL AND MEMORY 
 -- 
 Total memory allocated 152389988; in additional pool 
 allocated 1048576 
 Buffer pool size 8192 
 Free buffers 0 
 Database pages 7947 
 Modified db pages 0 
 Pending reads 0 
 Pending writes: LRU 0, flush list 0, single page 0 
 Pages read 20345325, created 9857, written 763089 
 0.00 reads/s, 0.00 creates/s, 0.00 writes/s 
 Buffer pool hit rate 1000 / 1000 
 
 Why does it say the buffer pool size is only 8M? 
 Shouldn't it be 128M? Also, could someone explain the 
 hit rate? I remember seeing in someone's recent post 
 that the 1000/1000 is good, but I don't know what that 
 means. Can someone suggest a good resouce that 
 explains the contents of Innodb show status in detail. 
 The page on www.mysql.com gives a very cursory 
 overview of the output. 
Buffer pool size, free buffers, database pages, and modified database 
pages are in 16k pages. 
The buffer pool hit rate simply says the fraction of page reads
satisfied 
from the innodb buffer cache, in this case 1000/1000 == 100%. 
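The page arithmetic checks out: the monitor's figures are in 16 KB pages, so the reported pool size does match the configured 128M:

```python
# InnoDB's monitor reports "Buffer pool size 8192" in 16 KB pages,
# not bytes; 8192 such pages is exactly the configured 128M pool.
PAGE_SIZE = 16 * 1024
pages = 8192
pool_bytes = pages * PAGE_SIZE
pool_mb = pool_bytes // (1024 * 1024)
```

The "Total memory allocated 152389988" figure is larger because it includes InnoDB's other structures on top of the buffer pool itself.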
Unfortunately, I'm not really aware of a better reference. Perhaps some 
of this is explained in High Performance MySQL, but I don't have a 
copy yet. 





Re: Mysql growing pains, 4 days to create index on one table!

2004-07-22 Thread matt ryan

Split the myisam table into separate tables.  We will
call each table a bucket.
Create a MERGE table of all of them. For selecting the
data.
When inserting, use a hash function on your primary
key values to determine which bucket to insert into. 
If you almost always select by primary key, then you
can optimize your select queries to only look at the
correct bucket as well, and to only use the merge
table when you aren't selecting by primary key.

This will speed your inserts because instead of 258M
rows to search through for each insert there are only
8M if you use 32 buckets.  The biggest benefit is that
you could also insert in parallel using multiple mysql
connections, because you could calculate the bucket as
an additional column in your temporary table, then do
the inserts for all the buckets the same time.
 

I hit duplicate hashes for unique records, not sure why; I think I used 
aes_encrypt. How do you recommend creating a hash column via SQL?
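One way to get a deterministic bucket (my sketch, not the poster's method): hash the primary-key string with CRC32 and take it modulo the bucket count. MySQL also has a server-side CRC32() function; the Python below mirrors it with zlib.crc32. AES_ENCRYPT is an encryption function rather than a hash, which may be related to the duplicates seen. The key string and bucket count here are made up:

```python
import zlib

N_BUCKETS = 32  # assumption: 32 buckets, as in the quoted suggestion

def bucket(primary_key: str) -> int:
    # CRC32 is deterministic: the same key always lands in the same bucket
    return zlib.crc32(primary_key.encode("utf-8")) % N_BUCKETS

b1 = bucket("ABC|12345")   # hypothetical composite key value
b2 = bucket("ABC|12345")   # same input, same bucket
```

The same expression in SQL would be CRC32(key_expr) % 32, usable both at insert time and when routing point lookups to the right bucket table.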

I already split the data into separate tables and use a merge; it 
really didn't speed things up that much.



Re: Fw: Tuning MySQL for Large Database

2004-07-22 Thread matt ryan

I went over your data. This is what I noticed first:
| Select_full_join | 0|
| Select_full_range_join   | 0|
| Select_range | 1|
| Select_range_check   | 0|
| Select_scan  | 301  |
 

What command will provide this data?


Re: Fw: Tuning MySQL for Large Database

2004-07-22 Thread matt ryan
Hmm,
I'm guessing my stats aren't too good: lots of full table scans, but this 
is to be expected; my users can query any table by any column, and I 
can't index all column combinations.

Variable_name           Value
Select_full_join        0
Select_full_range_join  24
Select_range            145321
Select_range_check      0
Select_scan             29402
Sort_Scan               15360
Key_reads               37811885

and on the other big db..

Variable_name           Value
Select_full_join        535
Select_full_range_join  0
Select_range            1098
Select_range_check      0
Select_scan             10443
Sort_Scan               2464
Key_reads               20282002



Re: Fw: Tuning MySQL for Large Database

2004-07-22 Thread matt ryan
Resend, Firefox did not send the way it looked when I typed it!
I'm guessing my stats aren't too good: lots of full table scans, but this
is to be expected; my users can query any table by any column, and I
can't index all column combinations.
Variable_name           Value
Select_full_join        0
Select_full_range_join  24
Select_range            145321
Select_range_check      0
Select_scan             29402
Sort_Scan               15360
Key_reads               37811885

and on the other big db..

Variable_name           Value
Select_full_join        535
Select_full_range_join  0
Select_range            1098
Select_range_check      0
Select_scan             10443
Sort_Scan               2464
Key_reads               20282002



Re: Fw: Tuning MySQL for Large Database

2004-07-22 Thread matt ryan
I've found the slow query log is useless to me; it's 50 meg right now.
Is there a tool that will identify common queries?  I could probably 
come up with some SQL if I load it into a table, but it would take 
quite a while to sort out.
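In the absence of a tool, the usual trick (a rough sketch with hypothetical queries; this is roughly what mysqldumpslow does) is to normalize literals so identical query shapes group together, then count the shapes:

```python
import re
from collections import Counter

def normalize(query: str) -> str:
    # Collapse string and numeric literals so "id = 42" and "id = 99"
    # count as the same query shape
    q = re.sub(r"'[^']*'", "'S'", query)
    q = re.sub(r"\b\d+\b", "N", q)
    return re.sub(r"\s+", " ", q).strip().lower()

# Stand-in for queries pulled out of a slow log
log_queries = [
    "SELECT * FROM rondon WHERE id = 42",
    "SELECT * FROM rondon WHERE id = 99",
    "SELECT name FROM users WHERE name = 'bob'",
]
counts = Counter(normalize(q) for q in log_queries)
top = counts.most_common(1)[0]   # (most common shape, its count)
```

The same normalize-then-count idea works if the log is loaded into a table instead: group by the normalized text.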

I posted a request on the MySQL bugtraq to move it to a table instead of 
that raw file, but they closed the request; guess they didn't like that idea.



Re: How do I import a .dmp file?

2004-07-22 Thread matt ryan

David
Did you look at MYSQL LOAD DATA INFILE ???
doc is available at
http://dev.mysql.com/doc/mysql/en/LOAD_DATA.html
regards,
LOAD DATA INFILE only works with a text file going into a table; if the 
data is in another format, like raw Oracle or EBCDIC, it won't work, and 
you'll need to convert it to ASCII first.
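A sketch of the EBCDIC side (my addition; code page 500 is an assumption, the right code page depends on the source system): Python's built-in codecs can translate EBCDIC bytes into normal strings, which can then be written out as delimited text for LOAD DATA INFILE:

```python
# Hypothetical record; cp500 is one common EBCDIC code page.
ebcdic_bytes = "ABC\t123".encode("cp500")   # stand-in for mainframe data
decoded = ebcdic_bytes.decode("cp500")      # now a plain text string
```

For real mainframe extracts, fixed-width records and packed-decimal fields need extra handling beyond a plain code-page decode.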

Matt


Re: How do I import a .dmp file?

2004-07-22 Thread matt ryan

OK, so if I can convert it into ascii, then it will be
a text file, which I can import using the instructions
at http://dev.mysql.com/doc/mysql/en/LOAD_DATA.html,
right?
Thanks.
 

Yep, just have the table structure match the ASCII file and load it in.
Matt


Re: Fw: Tuning MySQL for Large Database

2004-07-22 Thread matt ryan
There is a perl script that comes with MySQL called mysqldumpslow.  
You can just run it on your slow log and it will output summary 
statistics about the slow log.

I saw that in the docs, but I definitely don't want to install Perl on a 
production server. I never looked to see if I could do it offline; I 
only have MS boxes, no Linux here, and none of our techs know Linux well 
enough to move to it.

Matt


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
I load all the data into a table with no keys,
then I insert this data into a table with 225 million records. This 
large table has the primary key; this is what takes a LONG time.

Matt


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
Lachlan Mulcahy wrote:
MySQL Version: 4.0.18
Server OS: windows 2000, or 2003
Memory 2 gig
CPU(s) dual 2.6-3ghz  xeon 500-2mb cache (cpu load is low)
Disks (RAIDs and Independent disk speed/types) 8x72 gig 15,000 rpm scsi 
II u320 raid 5 dell perc setup




Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
Lopez David E-r9374c wrote:
Since you have a temp table created (no keys I assume), use the command
 mysqldump -v -e -n -t dbname tablename > filename.sql
 

This creates a file that inserts the records back into the same table;
it also does not do an INSERT IGNORE.
I need the records to go into the historical table, with an INSERT IGNORE.
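One workaround (my sketch; the table names are hypothetical) is to post-process the dump file, rewriting each INSERT into an INSERT IGNORE against the historical table before replaying it:

```python
import re

def rewrite(line: str) -> str:
    # Redirect the dump's INSERTs at the historical table, ignoring
    # duplicate-key rows; both table names are placeholders.
    return re.sub(r"^INSERT INTO `temp_table`",
                  "INSERT IGNORE INTO `historical_table`", line)

dump_line = "INSERT INTO `temp_table` VALUES (1,'a'),(2,'b');"
fixed = rewrite(dump_line)
```

Applied line by line over the dump file, this keeps mysqldump's multi-row extended inserts intact while changing only the statement header.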


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
Do you ever delete from this table?
The temp table is truncated before the EBCDIC file is loaded.
Have you removed the unnecessary duplicate key on the first column of your primary key?
I have not touched the DIC index yet; I need a backup server to change
indexes, as it would take the main server down for too long, and it won't be
that big an increase since it's only a 3-character index. I also do joins on
that field to other tables, so I was hesitant about removing that index.
Can you post the results from show variables for 
Nope, the list won't let me send an email that big.
I did put it on the web though, here's the results from show variables
http://www.geekopolis.com/Query_Result.txt


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-16 Thread matt ryan
Donny Simonton wrote:
Matt,
I've been reading this thread for a while and at this point, I would say
that you would need to provide the table structures and queries that you are
running.
For example, we have one table that has 8 billion rows in it and it close to
100 gigs and we can hammer it all day long without any problems.  It really
depends on how you are doing things.
But as far as you mentioning about mysql not using multiple indexes, it
does.  You just have to create an index on multiple fields at one time.
I've got tables with 10 fields in one index, now the trick with mysql is
that you must use all top 10 fields in your where clause for mysql to really
take advantage of the index.
But I would definitely send the list your table structure with your indexes
and some of your selects and inserts.  You can always change the names of
things if you don't want people to know the names of everything.
Just my 2 cents.
 

The original email has the table structure; query speed is not an issue (it 
is, but I will deal with that later).

The issue is insert speed. I get 150k-1M records daily; of these, only 
5-10% are new records. I load the EBCDIC file into a temp table, and 
then do insert ignore into historytable select * from temp table.

And I can't create multiple-field indexes; I would need 100 indexes on a 
table, and the index size already exceeds the table size. I tried to add an 
index on date, but gave up because it ran for 2 days and was not done yet.




Re: Mysql growing pains, 4 days to create index on one table!

2004-07-16 Thread matt ryan
Since you have a temp table created (no keys I assume), use the command
 mysqldump -v -e -n -t dbname tablename > filename.sql
This should create insert statements with many values in a single
insert. Then use the client program to insert them to you db.
 mysql -u matt -p dbname < filename.sql
This is very fast way to insert rows. Speeds up insertion by at
least 10x on my large tables (11 million rows). I noticed someone
with 1000x more rows w/o problems.
BTW, by forcing the table to have packed keys, the docs say it
will slow your insertion. Maybe not that much, i don't know.
David
 

Isn't the text file it creates going to insert the records back into the 
temp table when I load it back in?

Does this do insert ignore or insert replace?  I need to control that; 
on some tables I do insert ignore, on others I do insert replace.

Almost all of the speed issue is read related; the disk writes are 
nearly 0 and the reads are as fast as the drive can run, reading to see if 
the record violates the primary key, I assume.

About 3 gig seems to be the magic number: less than that is lightning 
fast, more than that is extremely slow.






Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan
Justin Swanhart wrote:
Indexes can generate vast amounts of random i/o. 
Because of rotational latency, random i/o can really
slow you down, especially if you are using IDE or SATA
disks because they can't do tagged queueing like SCSI
disks can.

If you have the budget for it, I would consider
getting some solid state disks.  Because they have
extremely low latency you will be able to get full i/o
bandwidth on your reads.  If you can't afford those,
consider adding more disks to your RAID array so that
you can spread the reads over more spindles, which
will help performance.
 

Using 8x72 gig 15,000 rpm U320 SCSI drives in RAID 5 now; that should be 
a fast read RAID config.

No more will fit in the server, and solid state disks are $70,000, out of 
our budget.

I optimize the tables every weekend.
Any other suggestions?
Would it help to defrag?  The only way I can do it is back up every 
file, wipe out the server, and then restore the files; there's not 
enough free space to do a proper defrag.

Matt


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan
Tim Brody wrote:
You may find that the 'dic' KEY isn't necessary, as it's the first part of
your PRIMARY KEY.
I've found better performance for multi-column keys by putting the columns
in order of least variance first, e.g. for a list of dates:
1979-04-23
1979-07-15
1980-02-04
1980-06-04
You want a key on (YEAR-MONTH-DAY)
If you can you could put the index/data on different disks - not sure how
you would do that in Windows (INDEX DIRECTORY option for CREATE TABLE?).
You should definitely put the binary log file on another disk, but again not
something I've used.
I've found MySQL to be a royal pain working with multi-GB tables (my biggest
is 12GB+13GB index). I've learnt that MySQL is a bit like a confused puppy -
it doesn't know why it wet the floor, but it expects you to know that pained
expression means you need to move it somewhere else ...
 

I need the DIC in the key to keep the record unique; I have thousands 
of records with everything identical except the DIC.

I was confused on the multi-key index issue; I thought it would seek 
faster if I put the most unique field up front, which I do on most 
tables, but I did not on this one.  I have one large RAID array now, 
so I can't split the data or put the binary log on another disk.

I found MySQL was great up to about 3 gig, then everything hit the 
brakes and got really, really slow.

I'm scared of joins; every time I do a join on indexed fields in 
MySQL, the performance is horrible, because when the where clause is not on 
a field that's in the join, performance suffers.

I wish MySQL could use multiple indexes like Oracle to narrow down the 
results; I've got some simple queries that take hours due to single 
index use, even though every query field is indexed.




Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan
Consider replicating to some slave servers and dividing reads among them.
I already replicate to slaves, and sites do read-only queries off 
these slaves.

99.9% of the tables are read-only anyway; the only tables we update or 
insert into are very, very small and fast.

These big tables are daily extracts from IBM DB2 sites, in EBCDIC 
format. We archive the data, and users then query our site, which is 
faster, unless they start doing multiple query options; then things get 
slow.

If you query only one field it's FAST, but if you query two fields it's 
slow, very slow; MySQL needs multiple-key-per-query support.



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan

You might be out of luck with MySQL ... sorry.
You may need to switch to a database that has a parallel query 
facility. Then - every query becomes a massive table scan but gets 
divided into multiple concurrent subqueries - and overall the job 
finishes in a reasonable amount of time. The epitomy of brute force. 
It's hard to rationalize initially but after a while you see it's the 
only way to go. Remember -  indexes are no longer required.

We have a billion row 100GB table the users search any and every way. 
Response time is less than a minute.

We are anxiously waiting to see this technology added to MySQL. Maybe 
one day we'll have some money to contribute to the effort. Parallel 
query is not trivial. That's why these databases are expensive.

I can send you more details privately if you are interested.

I've used it, with Oracle, but Oracle's index searches are better: hit 
the best one first, then 2nd best, then 3rd. But I really don't want to 
go to Oracle; it's too complicated for my techs.

vs. MySQL: hit the best one first, and use no other.
Query time is a non-issue at this point; it's load time. Load the daily 
file into a temp table, then insert ignore into the main table; on key 
violation the violating record is ignored.

Load time is the issue: the server loads files 15 hours a day, and that big 
primary key makes loading any table over 2-3 gig VERY slow.

I thought it was a bug; everything was great until you get up to about 
3 or 4 gig, then it gets SLOW.



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread matt ryan
Lopez David E-r9374c wrote:
matt
1) inserts using this format is much faster:
INSERT INTO table (col1, col2) VALUES (val1,val2), (val3,val4)
   is much faster than a single-row insert. My experience is
   2.5 hrs vs. 36 hrs.
2) The PACK_KEYS=1 may be hurting you. I've never used it.
3) There may be a cache somewhere that's to small. You'll
  have to do some digging in this area.
4) dup key ignore - what does that mean exactly?
5) what is your OS  rev, mysql rev.
Please post any suggestions that you find valuable so we 
can all learn..

david
 

If I understand it, pack_keys should help if you're disk-IO limited vs. CPU 
limited; CPU is usually near idle.

I increased the cache sizes, helped a little, but not much
delay_key_write=ALL
key_buffer_size=1000M
read_buffer_size=512M
record_buffer=512M
What would the syntax for that type of insert be?
I have a table with 30,000 records that I need to insert into the main 
table with millions of records; I thought that insert into table select 
* from 2nd table would be the fastest way.
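For reference, the extended multi-row VALUES shape David describes can be sketched self-containedly. This is a minimal illustration using Python's sqlite3 module as a stand-in for MySQL; the table and column names are made up and not from the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE history (dic TEXT, niin TEXT, qty INTEGER, "
    "PRIMARY KEY (dic, niin))"
)

# Single-row form: one statement (one parse, one round trip) per record.
conn.execute("INSERT INTO history VALUES ('A0A', '00000000001', 5)")

# Multi-row VALUES form: several records in a single statement -- the
# shape that is much faster than issuing individual INSERTs.
conn.execute(
    "INSERT INTO history (dic, niin, qty) "
    "VALUES ('A0A', '00000000002', 7), ('A0B', '00000000003', 2)"
)

print(conn.execute("SELECT COUNT(*) FROM history").fetchone()[0])  # -> 3
```

In MySQL the multi-row VALUES syntax is the same, and it can be combined with INSERT IGNORE to get the duplicate-skipping behavior discussed in this thread.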

Insert ignore will not insert a record if it violates the primary key; 
I do this to keep duplicate records out of the system.

Windows 2003 and 2000 servers, mysql-nt 4.0.16.  I tried the newer 
versions, but found bugs on all of them; I submitted them to the bug system.

I believe my bottleneck is reading the data to ensure the primary key is 
not violated; I see lots of read IO, but little write IO.

Matt


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread matt ryan
Egor Egorov wrote:
Are you running this under Microsoft Windows? 
 

Yes, windows 2k and 2003, mysql-nt 4.0.16


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread matt ryan
[EMAIL PROTECTED] wrote:
You may want more indexes but you might be getting killed because you already have too 
many.
To test - try loading into a table without indexes and see if it makes a difference.
At the very least - check to see if the primary index which starts with 'dic' can make 
your special 'dic' index superfluous.
If write speed is a bottleneck you might consider Raid-1 instead of Raid-5.
Reading lots of rows via index is a killer. Depending on your hardware it may be 
cheaper to table scan 50 rows than to read 1 via index. However, this requires 
partitioning of the data based on some column which appears in every query and acts as 
an initial filter. If you are lucky enough to be in that situation - consider a MERGE 
table.
 

These tables are merged; the total table size is huge. This 
particular table is 45,449,534 rows; however, all the merge tables 
combined are 258,840,305 records.

Perhaps I should reorder the primary key, putting the longest, most unique 
column up front and the least unique at the end; would that speed up 
the key check?  I can tell that almost everything is read IO, very 
little write IO.



RE: INSERT DISTINCT?

2004-07-07 Thread Matt Chatterley
This is certainly the first step.

Also, if you want to insert only those rows which are not already present,
you can use something akin to:

INSERT INTO table1
SELECT * FROM table2 t2
LEFT JOIN table1 t1 ON (unique row identifiers -- whatever these are for
your data)
WHERE t1.XYZ IS NULL


(obviously put in the appropriate column names etc for your data structure!)
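The pattern above can be demonstrated end to end. This is a sketch using Python's sqlite3 module as a stand-in for MySQL, with hypothetical table names matching the snippet:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE table2 (id INTEGER PRIMARY KEY, val TEXT);
    INSERT INTO table1 VALUES (1, 'a');
    INSERT INTO table2 VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

# The LEFT JOIN produces NULLs for table1's columns wherever table2 has
# no match, so "t1.id IS NULL" keeps exactly the rows not yet in table1.
conn.execute("""
    INSERT INTO table1
    SELECT t2.id, t2.val
    FROM table2 t2
    LEFT JOIN table1 t1 ON t1.id = t2.id
    WHERE t1.id IS NULL
""")

print(sorted(r[0] for r in conn.execute("SELECT id FROM table1")))
# -> [1, 2, 3]
```

Only rows 2 and 3 are inserted; row 1, already present, is filtered out by the IS NULL test.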


Cheers,

Matt

 -Original Message-
 From: Joshua J. Kugler [mailto:[EMAIL PROTECTED]
 Sent: 07 July 2004 22:22
 To: [EMAIL PROTECTED]
 Subject: Re: INSERT DISTINCT?
 
 Certainly, it's called making a unique index on the field(s) you want to
 keep
 unique.
 
 Hope that helps.
 
 j- k-
 
 On Wednesday 07 July 2004 12:48 pm, John Mistler said something like:
  Is there a way to do an INSERT on a table only if no row already exists
  with the same info for one or more of the columns as the row to be
  inserted? That is, without using a method outside SQL?
 
  Thanks,
 
  John
 
 --
 Joshua J. Kugler -- Fairbanks, Alaska -- ICQ#:13706295
 Every knee shall bow, and every tongue confess, in heaven, on earth, and
 under
 the earth, that Jesus Christ is LORD -- Count on it!
 







Re: Writing to an MySQL Database

2004-07-02 Thread Matt MacLeod
You're missing a closing  at the end of your sql on the last line 
which may be throwing up an error.

Cheers,
Matt
On 2 Jul 2004, at 15:05, Michael Mason wrote:
Write.txt



RE: Parent-Child Relationship Question

2004-07-02 Thread Matt Chatterley
Essentially, I think, you are asking about organizing hierarchical data.

This is something which I always find tricky to wrap my head around, but,
one approach I seen used successfully is to use (initially) two tables:

1. A table such as 'Places' which contains the actual data (or Locations,
any suitable name really)

2. A second table which contains the 'meta-data' - in this case, the
organization of the hierarchy.

So. 

Places might be: PlaceID, Type, Name, Description

Places_Hierarchy might be: PlaceHierarchyID, PlaceID, ParentID

Thus, for the example below..

Places:

PlaceID  Type     Name     Description
1        state    Arizona  .. stuff ..
2        country  USA      .. stuff ..
3        country  Japan    .. stuff ..

Places_Hierarchy:
PlaceHierarchyID  PlaceID  ParentID
1                 1        2
2                 2        NULL
3                 3        NULL

Then, you can join the two together and organize things that way. There are
a number of other things you can try - like moving the 'type' out into a
lookup table and storing the ID in the hierarchy (allowing you to retrieve
all places of a certain type, for instance).
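A minimal working sketch of these two tables and the join that resolves parent names, using Python's sqlite3 module as a stand-in (the column set is trimmed for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Places (PlaceID INTEGER PRIMARY KEY, Type TEXT, Name TEXT);
    CREATE TABLE Places_Hierarchy (PlaceHierarchyID INTEGER PRIMARY KEY,
                                   PlaceID INTEGER, ParentID INTEGER);
    INSERT INTO Places VALUES (1, 'state', 'Arizona'),
                              (2, 'country', 'USA'),
                              (3, 'country', 'Japan');
    INSERT INTO Places_Hierarchy VALUES (1, 1, 2), (2, 2, NULL), (3, 3, NULL);
""")

# Join the data to the hierarchy, then back to Places a second time to
# resolve the parent's name; top-level rows come back with a NULL parent.
rows = conn.execute("""
    SELECT p.Name, parent.Name
    FROM Places p
    JOIN Places_Hierarchy h ON h.PlaceID = p.PlaceID
    LEFT JOIN Places parent ON parent.PlaceID = h.ParentID
    ORDER BY p.PlaceID
""").fetchall()
print(rows)  # -> [('Arizona', 'USA'), ('USA', None), ('Japan', None)]
```

The self-join alias ("parent") is what lets one table serve as both child and parent in a single query.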

This is a situation in which views are (for me, anyway) sorely missed!


Cheers,

Matt

 -Original Message-
 From: David Blomstrom [mailto:[EMAIL PROTECTED]
 Sent: 02 July 2004 03:13
 To: [EMAIL PROTECTED]
 Subject: Re: Parent-Child Relationship Question
 
 And here's a follow up question...
 
 After looking at my database from a fresh perspective,
 I'm now thinking of combining tables area and
 family into a single table.
 
 If I do that, it would make life so much simpler if I
 had TWO name fields, like this:
 
 ID |  Name  | ParentID | Parent Name
 
 az  |Arizona|us| United States
 us  | United States |kna   | North America
 jpn | Japan |keu   | Eurasia
 
 I could then slap a $mycode = 'az' on a page and
 easily fill in its name and the name of its parent
 without fiddling with queries, joins, unions, etc.
 
 I know that duplicating names in two fields isn't the
 most elegant solution, but would create any major
 problems?
 
 Thanks.
 
 
 
 __
 Do you Yahoo!?
 Read only the mail you want - Yahoo! Mail SpamGuard.
 http://promotions.yahoo.com/new_mail
 







RE: Select help

2004-07-01 Thread Matt Eaton
Hey Rob,
You're looking for a group by to allow mysql to aggregate over the IP's:

SELECT ip, count(*) FROM iptable GROUP BY ip ORDER BY ip DESC limit 10;

-Matt


-Original Message-
From: rmck [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 01, 2004 1:03 PM
To: [EMAIL PROTECTED]
Subject: Select help

Hi,

I have a table with ip,port and I want to see the top ten IPs with the
most entries?
IPs can be in the db many times...

Not the first distinct 10... I'm stuck...

I have tried:
mysql> select DISTINCT ip from iptable limit 10;

+---+
| ip   |
+---+
| 0.0.0.0   |
| 10.0.1.42 |
| 10.0.1.8  |
| 10.1.1.1  |
| 10.10.10.1|
| 10.115.94.193 |
| 10.115.94.195 |
| 10.115.94.40  |
| 10.122.1.1|
| 10.20.7.184   |
+---+
10 rows in set (0.04 sec)
 
mysql>

But doesn't that just give the first 10 DISTINCT ip's?? 

rob









RE: Select help

2004-07-01 Thread Matt Eaton
Woops!  Forget I said that, you wanted to order by the most occurrences.
Sorry.

SELECT ip, count(*) FROM iptable GROUP BY ip ORDER BY 2 DESC limit 10;

Heh... I should learn to read one of these days...
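The corrected query can be exercised against a toy data set. This sketch uses Python's sqlite3 module as a stand-in for MySQL, with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE iptable (ip TEXT, port INTEGER)")
conn.executemany(
    "INSERT INTO iptable (ip, port) VALUES (?, ?)",
    [("10.0.1.8", 80)] * 3 + [("10.1.1.1", 22)] * 2 + [("0.0.0.0", 53)],
)

# GROUP BY collapses the duplicate IPs; ordering by the count descending
# puts the busiest addresses first ("ORDER BY 2" refers to the second
# select-list column, i.e. the COUNT).
top = conn.execute("""
    SELECT ip, COUNT(*) FROM iptable
    GROUP BY ip ORDER BY 2 DESC LIMIT 10
""").fetchall()
print(top)  # -> [('10.0.1.8', 3), ('10.1.1.1', 2), ('0.0.0.0', 1)]
```

A plain DISTINCT, by contrast, would only deduplicate the IPs without telling you how often each one occurs.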

-Matt







Re: MySQL and Macs

2004-06-30 Thread Matt MacLeod
Marc Liyanage has taken much of the effort out of PHP and MySQL 
installation for the Mac.

 http://www.entropy.ch/software/welcome.html

On 30 Jun 2004, at 04:41, Kieran Kelleher wrote:
Mac OS X is well supported by MySQL. MySQL is even preinstalled in Mac  
OS X Server, although we choose to ignore the OS X Server installation  
in favor of installing the binary so that development and servers all  
have the MySQL files in the same paths. Developers in our company run  
MySQL on our Powerbooks and we have a dedicated master XServe running  
MySQL with another XServe acting as a slave replicating to the master.

It is easy to install. Here are my installation notes for Panther (OS  
X 10.3)

http://homepage.mac.com/kelleherk/iblog/C711669388/E733468496/index.html

The www.mysql.com site has information on Mac OS X too, as well as the  
read me file in the download.

Once you are up and running, I recommend CocoaMySQL as a complementary  
GUI (complementary: you still need to do stuff on the command line,  
which becomes easy after a little while) if you are still learning the  
MySQL command line.

-Kieran
On Jun 29, 2004, at 9:14 PM, Jim Carwardine wrote:
I'm new to the list and new to MySQL.  I'm a Mac user and would like to set
up a DB on my Mac.  When I look at the MySQL web site, I can't seem to find
any info on what hardware can be used.  Can MySQL be run on a Mac?  If so,
what do I need to know about how to set it up?  Can anyone point me to a
setup procedure?   Jim
--

OYF is... Highly resourceful people working together.
http://www.OwnYourFuture-net.com
Own Your Future Consulting Services Limited,
1959 Upper Water Street, Suite 407, Halifax, Nova Scotia. B3J 3N2
Info Line: 902-823-2477, Phone: 902-823-2339. Fax: 902-823-2139







Mysql growing pains, 4 days to create index on one table!

2004-06-30 Thread matt ryan
Rebuilding index takes 3 1/2 days!!!  Growing pains with mysql..
I've got 2 dedicated servers, each with a slave; all run 32 gig 15k rpm
RAID 5 on U320 PERC RAID cards, Dell 2600/4600's with single-channel
backplanes (new ones will have dual channel).
All have 2 gig of RAM, but I've never seen MySQL use more than 600 MB
of RAM.
The servers handle huge loads, each day there are 30 1-2 gig files
loaded into large tables, total mysql data size is 96 gig, the large
tables are 2-6 gig.
Inserts are done on dupe key ignore, this takes hours on the large
files, it barely keeps up with input files.
At the bottom of this post I've got the mysql.ini config lines, any
suggestions are welcome, I'm already beyond the mysql huge sample
they used to include in the program.
Sample table that I load is as follows.
Each day I get 40% new records in the text file; the input file is
normally 20 MB, but once a week I get one that's 1-2 gig, and these take
all day to load.
I need more multiple-column indexes, as some queries return millions of
rows that must be scanned, but the index size already exceeds the
table size, and the combinations I need would result in an MYI that's
5x larger than the data itself.
Here's an example of the speed problem. The index was corrupt, so I
dropped all the indexes and recreated them rather than doing a myisam
repair. I think 3 days is a little excessive for a table that's only 3.428
gig with a 2.729 gig index.  I can't remove the primary key, as it keeps
duplicates out of the system. The input files are from old databases; we
use MySQL to store the data for the web frontend, mostly done in ASP. Most
queries take less than a second, but unfortunately we have big queries that
take way more than the IIS timeout setting all the time, and there's no way
around it; I can't add more indexes without making it even slower :(

I can't tell if it's MySQL that's the problem, or the hardware. Here's a 
screenshot of the disk IO: if I copy a file while MySQL is doing the 
index build, the IO shoots way up, which tells me MySQL is NOT maxing 
out the drives, and it's also not maxing out the memory.

Unless it's doing lots and lots of seeks on the drive, which is harder 
to test using perfmon. Are there any MySQL test setups that would help 
identify where the bottleneck is?

Screenshot of disk IO usage:
http://www.geekopolis.com/pics/diskio.jpg
I'm all out of ideas, other than switching to another DB, or splitting the 
table and indexes across drives: maybe a 2-channel setup, 4 drives per 
channel, each 4 a separate RAID 5 set, one holding data and one holding 
indexes. Can't do this with MySQL, though.

mysql> alter table hood_stat add primary key
(dic,niin,fr_ric,don,suf,dte_txn,sta) , add index `don` (`don`), add
index `niin` (`niin`), add index `stor` (`stor`), add index `dic`
(`dic`), add index `ctasc` (`ctasc`);
Query OK, 45449534 rows affected (3 days 19 hours 6 min 34.94 sec)
Records: 45449534  Duplicates: 0  Warnings: 0
CREATE TABLE `hood_stat` (
`dic` char(3) NOT NULL default '',
`fr_ric` char(3) NOT NULL default '',
`niin` char(11) NOT NULL default '',
`ui` char(2) NOT NULL default '',
`qty` char(5) NOT NULL default '',
`don` char(14) NOT NULL default '',
`suf` char(1) NOT NULL default '',
`dte_txn` char(5) NOT NULL default '',
`ship_to` char(3) NOT NULL default '',
`sta` char(2) NOT NULL default '',
`lst_sos` char(3) NOT NULL default '',
`esd` char(4) NOT NULL default '',
`stor` char(3) NOT NULL default '',
`d_t` char(4) NOT NULL default '',
`ctasc` char(10) NOT NULL default '',
PRIMARY KEY (`dic`,`niin`,`fr_ric`,`don`,`suf`,`dte_txn`,`sta` ),
KEY `don` (`don`),
KEY `niin` (`niin`),
KEY `stor` (`stor`),
KEY `dic` (`dic`),
KEY `ctasc` (`ctasc`)
) TYPE=MyISAM MAX_ROWS=10 PACK_KEYS=1
skip-locking
set-variable=delay_key_write=ALL
set-variable= key_buffer_size=1500M
set-variable=join_buffer=512M
set-variable= max_allowed_packet=256M
set-variable= table_cache=512
set-variable= sort_buffer=256M
set-variable=tmp_table_size=400M
set-variable= record_buffer=512M
set-variable= thread_cache=8
set-variable=myisam_sort_buffer_size=256M
myisam-recover=BACKUP,FORCE
set-variable=read_buffer_size=512M
set-variable=interactive_timeout=7200
set-variable=wait_timeout=7200
log-bin
server-id=1
replicate-do-db=finlog
set-variable=open-files-limit=500
set-variable=table-cache=400


C API -- huge result sets slowin me down

2004-06-28 Thread Matt Eaton
Hi all,

I was hoping this was the right place for a question about the C API.
I've been grabbing result sets from tables in the C API for a few years
now, but I'm starting to work with result sets that are big enough to
bog me down.  Of course, the result sets aren't insanely big, so I was
wondering why it was taking so long for me to suck them in to C,
especially when I can run the same query from the command line using the
binaries and they can cache it to a file on the hard disk pretty much
instantly.  So, basically, I was just hoping that I've been doing
something wrong, or at least that there was something I could do better,
to make my database communication as fast as the mysql command line
tools.  I've checked out their source and nothing obvious jumps out at
me.  Here's a non-functional sample of my code:

int main(int argc, char *argv[]) {
    int uid;
    int sid;
    char sqlBuff[4000];
    int err = 0;
    int i;
    // Setup the database communications space:
    MYSQL dbase;
    MYSQL_RES *result;
    MYSQL_ROW row;
    my_ulonglong numRows;

    float **genAttrib;

    //... snip ...

    // Connect to the database:
    if (mysql_init(&dbase) == NULL) err = 1;
    else {
        if (mysql_real_connect(&dbase, "localhost", "login", "pass", "test",
                               0, NULL, CLIENT_FOUND_ROWS) == NULL) {
            err = 1;
            fprintf(stderr, "Failed to connect to database: Error: %s\n",
                    mysql_error(&dbase));
        }
    }

    // If the connection couldn't be established:
    if (err) {
        printf("db connection failed!\n");
        exit(1);
    }

    //... snip ...

    // This query could have as many as a million rows returned, but the
    // query itself runs quite fast.  It seems to just be sucking it into
    // C that can take up to four seconds on our dual Xeon server.
    sprintf(sqlBuff, "SELECT A.* FROM `attribs` AS A, login AS L "
                     "WHERE A.guid=L.guid AND L.isActive=1 AND L.sid=%d "
                     "AND A.guid!=%d", sid, uid);
    if (mysql_real_query(&dbase, sqlBuff, strlen(sqlBuff))) {
        printf("Pool Attributes Select Failed\n");
        fprintf(stderr, "Error: %s\n", mysql_error(&dbase));
        exit(1);
    }

    result = mysql_store_result(&dbase);
    numRows = mysql_num_rows(result);
    for (i = 0; i < numRows; i++) {
        row = mysql_fetch_row(result);
        tempq = atoi(row[1]);
        tempP = atoi(row[0]);
        genAttrib[tempP][tempq] = atoi(row[2]);
    }

    return 0;
}

So, if someone sees something that I could change to speed things up, or
I should direct this question elsewhere... thanks for your help and
thanks for reading this far!

Thanks again,
Matt




RE: Optimising SQL Statements

2004-06-25 Thread Matt Chatterley
Hi Martin,

Speaking more from a general standpoint (I mostly work with MS SQL, but my
home projects are MySQL - these days there is precious little time for
those, though)..

Assuming you have appropriate indexes on tables y and y2:

1. Truncate WILL be faster than delete, as it is a non-logged operation.
Deleting is comparatively slow. If the goal is to remove 'old' records no
longer in the Y set from X, it may be better to just delete those rows no
longer in Y (see below), rather than the whole lot.

2. It may be better to do a left join to tabley (on y_id, and y_id IS NULL),
although I do not know quite how mysql handles these things - for MS, where
not exists / left join where null is quicker than not in.

3. If X is indexed, it may be faster to drop those indexes, bulk-insert
data, and recreate them. This is often better when dealing with large sets
of data, since it is SO much quicker to insert into an un-indexed table and
then create an index, compared with inserting into the table and updating
the index for each row.

Hope this helps!


Matt

 -Original Message-
 From: Martin Gainty [mailto:[EMAIL PROTECTED]
 Sent: 25 June 2004 17:43
 To: [EMAIL PROTECTED]
 Subject: Optimising SQL Statements
 
 Hello All:
 
 I have a very simple test procedure
 
 PROCEDURE TEST_PROCEDURE AS
 BEGIN
 
 DELETE FROM X;
 COMMIT;
 
 INSERT INTO X (column1)
 SELECT
 Y.y_id
 FROM
 Y_TABLE Y
 WHERE
 Y.y_id NOT IN (select Y.y_id FROM TABLEY2);
 COMMIT;
 
 END;
 
 this very simple procedure takes 5 min 30 sec to complete its very basic
 delete and insert operations.
 Any ideas on how I can optimise it?
 (I used truncate instead of delete, and that helped big time)
 
 Many thanks,
 -Martin
 
 Martin Gainty
 
 __
 Disclaimer and confidentiality note
 Everything in this e-mail and any attachments relating to the official
 business of Laconia Data Systems (LDS) is proprietary to the company. It
 is
 confidential, legally privileged and protected by law. LDS does not own
 and
 endorse any other content.
 (cell) 617-852-7822
 (e) [EMAIL PROTECTED]
 (http)www.laconiadatasystems.com
 
 
 
 
 
 From: Ron McKeever [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: Location of files
 Date: Fri, 25 Jun 2004 07:47:18 -0700
 
 Is there any benefit to having the .MYD files on one drive, and the .MYI
 on its own dedicated hard drive?
 
 
 rm
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: INDEX DESC

2004-06-23 Thread Matt W
Hi Michael,

- Original Message -
From: Michael Stassen
Sent: Wednesday, June 23, 2004 2:30 PM
Subject: Re: INDEX DESC

 Jeremy Zawodny wrote:
  Why is sorting required at all?  Indexes *are* sorted already.

 I expect he's referring to mysql's poor performance when doing ORDER BY
 indexed_column DESC relative to ORDER BY indexed_column ASC.

The performance is only poor when using an index for DESC *if the index is
PACKED*.  (There are different criteria that determine whether a given index
is packed.)  Otherwise, it should be pretty much exactly the same as ASC.


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: INDEX DESC

2004-06-23 Thread Matt W
Hi Gerald,

- Original Message -
From: gerald_clark
Sent: Wednesday, June 23, 2004 2:28 PM
Subject: Re: INDEX DESC


 I suspect he is referring to 3.23's inability to use an index for an ORDER
 BY xxx DESC

That's not always true.  3.23 WILL use the index for ORDER BY ... DESC in a
query like this:

SELECT * FROM table ORDER BY key DESC LIMIT 10

but it won't for this:

SELECT * FROM table WHERE key_part1=123
ORDER BY key_part2 DESC LIMIT 10

i.e. when the index is already used for the WHERE.  Of course, this was
fixed in 4.0.


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Full text search problem

2004-06-21 Thread Matt W
Hi Pieter,

That's because "may" is a stopword in MySQL's full-text indexing, by
default (like "can", "the", etc.).  You can define your own stopword file
with the ft_stopword_file variable.  And you can find the default, built-in
list of stopwords in the file myisam/ft_static.c of the source
distribution.
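
As a sketch (the stopword file path here is made up), the option goes in
the server config, and any existing FULLTEXT indexes then need a rebuild to
pick up the new list:

```sql
-- In my.cnf (server restart required; the path is hypothetical):
--   [mysqld]
--   ft_stopword_file = /etc/mysql/stopwords.txt

-- Verify what the server picked up:
SHOW VARIABLES LIKE 'ft_stopword_file';

-- Rebuild the FULLTEXT index so the new stopword list takes effect:
REPAIR TABLE dogslost QUICK;
```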

Hope that helps.

(Oh, also what Terry said in his reply!)


Matt


- Original Message -
From: Pieter Botha
Sent: Monday, June 21, 2004 3:54 AM
Subject: Full text search problem


 Hi

 I have a fulltext search on a dbase for lost pets.
 My problem is the following:

 I have a dog in the database called "May May" which doesn't show up in the
 search results. A dog called "Doggy Doggy" does show up, however. I guess
 the problem is that MySQL sees "May May" as being a date or something
 and doesn't do a text compare.

 Here is my query, from php.
 $query_results = sprintf("SELECT * FROM dogslost WHERE MATCH
 (`name`,`colour`,`gender`,`breed`,`location`,`description`) AGAINST
 ('%s' IN BOOLEAN MODE)", $crit_results);

 any ideas?

 Regards
 Pieter


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysqld not starting

2004-06-20 Thread Matt Brei
Greetings,

Last night I upgraded from 3.23 to 4.0.20 and when I try to start mysqld
it fails.  The only entries I have in my error log are:

040620 23:44:19  mysqld started
040620 23:44:19  mysqld ended

I know this isn't enough info to troubleshoot the problem.  How can I
set the log level to give me more detail so I can figure out why the
service keeps dying?  Any help would be greatly appreciated.

Thanks,
Matt

MySQL 4.0.20
Red Hat Enterprise Linux 2.9.5
Kernel 2.4.21-1.1931.2.399.ent



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Returning updated rows

2004-06-19 Thread Matt Chatterley
There are a number of approaches which you could take to this.

One option would be to 'pre-assign' rows to a server - so that each server
has a distinct set of rows to work through.

Another would be to use transactions to handle this, so that only one set of
updates actually occurs at a time (locking should prevent the processes from
getting the same data, since the objects/rows held in the transaction would
be locked out).
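
A rough sketch of the claim-a-row idea (table and column names are
invented): each server tags one free row with its own ID in a single
UPDATE, then selects back whatever it claimed:

```sql
-- Server 42 atomically claims one free row:
UPDATE work_queue
   SET inprocess = 'YES', claimed_by = 42
 WHERE inprocess = 'NO'
 LIMIT 1;

-- ...then reads back whatever it claimed (no row = queue is empty):
SELECT * FROM work_queue
 WHERE inprocess = 'YES' AND claimed_by = 42;
```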

Difficult to say without a better idea of what you are trying to achieve,
though!


Cheers,

Matt

 -Original Message-
 From: Thomas Schwanhaeuser [mailto:[EMAIL PROTECTED]
 Sent: 19 June 2004 23:40
 To: [EMAIL PROTECTED]
 Subject: Returning updated rows
 
 Is it possible that one can return the actual rows which were
 affected by an UPDATE statement?
 
 What I want to do: I have n rows in a table which represent some work
 that several servers have to do. For this, the table has a column
 called INPROCESS.
 
 In order that multiple servers can work on the transactions, I'd like
 each of them to request 1 row which is not currently being processed -
 and set INPROCESS to YES. I have to avoid two servers grabbing the
 same row...
 
 My idea was now something like UPDATE ... INPROCESS=YES WHERE
 INPROCESS=NO ... LIMIT 1 - but of course the application would also
 have to know which item it should process now.
 
 
 Thank you in advance for your help.
 
 
 Thomas
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Help with apostrophe and FTS

2004-06-11 Thread Matt W
Hi Andrea,

The ' isn't a stopword; it's simply treated as part of a word rather than
as a word-boundary character, which I think is what you want to change.
And there is no way I know of to change that in MySQL... unless you edit
the source, of course, and compile it yourself. :-)  Is that an option for
you?  If so, I think you just need to change 1 line in myisam/ftdefs.h:

#define misc_word_char(X)   ((X)=='\'')

change that to:

#define misc_word_char(X)   (0)

I HOPE that is correct! ;-)


Matt


- Original Message -
From: Andrea Gangini
Sent: Thursday, June 10, 2004 9:44 AM
Subject: Help with apostrophe and FTS


 Is there the possibility of making the apostrophe char ( ' ) a stopword
in
 mysql?
 Full text search queries in Italian or other European languages are
 greatly affected by that; for example, searching "amore" will not
 return "dell'amore" as a match.
 Any workaround suggested?

 Andrea Gangini [EMAIL PROTECTED] Mimesi Srl


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Column's DataType -- TEXT vs BLOB...

2004-06-10 Thread Matt W
Hi Scott,

No, TEXT and BLOB are the same except for the case-sensitivity differences.
Neither is like VARCHAR (except the with/without BINARY attribute part) in
that TEXT/BLOB columns don't lose trailing spaces when inserted, as VARCHAR
will -- just to clear that up.

That article is wrong, at least for MySQL.  In MySQL, unlike some other
DB systems, neither TEXT nor BLOB data is stored separate from the table.
If you want to have the non-TEXT/BLOB part of your table smaller and faster
(usually for full table scans), you have to manually create a separate
table for your TEXT/BLOB columns.
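
A minimal sketch of that manual split (table and column names are invented
for illustration):

```sql
-- Narrow part of the table, cheap to scan:
CREATE TABLE article (
  article_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title      VARCHAR(200) NOT NULL
) TYPE=MyISAM;

-- Bulky TEXT moved to a 1:1 side table, joined only when needed:
CREATE TABLE article_body (
  article_id INT UNSIGNED NOT NULL PRIMARY KEY,
  body       TEXT NOT NULL
) TYPE=MyISAM;

SELECT a.title, b.body
  FROM article a, article_body b
 WHERE b.article_id = a.article_id AND a.article_id = 1;
```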


Hope that helps.

Matt


- Original Message -
From: Scott Fletcher
Sent: Thursday, June 10, 2004 2:47 PM
Subject: Column's DataType -- TEXT vs BLOB...


I'm wrestling over deciding on which data type to go with, TEXT or BLOB.  I
have one table with one column of 400 characters, I was thinking that TEXT
may be the way to go for that one.

I also have another table that use 4 columns of 800 characters along with 5
columns that use 250 characters.  I'm thinking of using TEXT for 9 of those
columns.

The reason is that I read in the MySQL Manual that TEXT and BLOB
are pretty much the same in many ways; the only difference is that
BLOB uses VARCHAR BINARY while TEXT uses VARCHAR.  But an
article somewhere (not part of MySQL's Manual) says this...

--snip--
If it doesn't have to be searchable then a BLOB might be more efficient and
you shouldn't have to worry about size (Like size is important?). The
reason being that BLOB information is stored separate from the table data
and is related by a reference number in the table. This keeps the table
smaller and faster, as I understand it.
--snip--

So, I don't feel too sure what to decide on...  Care for some advice or
recommendation??

Thanks,
 Scott Fletcher


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Table types

2004-06-03 Thread Matt W
Hi Ronan,

Yes, it's fine to mix table types in databases and queries.


Matt


- Original Message -
From: Ronan Lucio
Sent: Thursday, June 03, 2004 2:44 PM
Subject: Table types


 Hi,

 Is it wise to have a database with hybrid table types?

 In other words: if I have a table that wouldn't have many
 INSERT/DELETE/UPDATE queries, may I create it
 as MyISAM type and even interact (make JOINs) with
 other InnoDB and MyISAM tables?

 Or is it better (faster) to create all tables with the same type
(InnoDB)?

 Thanks,
 Ronan


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Tuning MySQL 4.0.20 for large full table scans

2004-06-03 Thread Matt W
Hi Dan,

- Original Message -
From: Dan Nelson
Sent: Thursday, June 03, 2004 12:34 PM
Subject: Re: Tuning MySQL 4.0.20 for large full table scans

[snip]

  Not sure what can be done about making it not go straight to tmpdir
  with a BLOB column in the SELECT clause, though.  Probably nothing,
  in 4.0.

 Do you actually see a temp file being created?  With tmp_table_size set
 to 2gb, it shouldn't have to go to disk.  Some more interesting data
 would be the State column from show processlist during the query,
 every 10 seconds or so.

Currently, temporary tables with TEXT/BLOB columns are always created on
disk because HEAP tables don't support variable-length rows.  I think this
limitation will be lifted in 5.0 or 5.1.

For the original poster, maybe this is one of the times that a RAM disk
could be useful for MySQL's tmpdir.
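
For reference, tmpdir is a server startup option, so a RAM disk path (the
one below is just an example) would go in the config file rather than be
set at runtime:

```sql
-- In my.cnf (the RAM disk path is hypothetical):
--   [mysqld]
--   tmpdir = /mnt/ramdisk
-- Check where on-disk temporary tables currently go:
SHOW VARIABLES LIKE 'tmpdir';
```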


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Specifying an index length and the default value

2004-05-30 Thread Matt W
Hi David,

Great questions:

- Original Message -
From: David Griffiths
Sent: Friday, May 28, 2004 6:05 PM
Subject: Specifying an index length and the default value


 The length of indexes on varchar and char indexes can be specified at
 index creation.

 What is the default length of an index if no length is provided?

The default is to index the whole column length (of course that's not
possible with TEXT/BLOB columns).


 The High Performance MySQL book hints that the index-length used is
 specific for each entry in the indexed column (ie an index on a column
 where the average length of the data is 8 bytes would take up (8 x
 number-of-rows) bytes).

Well, maybe.  It depends... see below.


 If a column was a varchar(128), would the index use 128 bytes per entry,
 or would it use the number of bytes in each row of the indexed column.
 So if each row had exactly four characters, the index would use four
 bytes per row, but if a row was added with 8 characters, that one row
 would have an index entry that was 8 bytes in size and the rest would
 remain at 4?

I don't know that it's in the manual anywhere, but from experimenting and
stuff, I've found that, generally, indexes with a TOTAL length (if there
are multiple columns) of <= 8 are fixed-length.  That is, they'll always
use 8 bytes/row even if a string doesn't take up that much space.  Actually,
this "up to 8 bytes, fixed length" behavior might only occur with indexed
character columns (e.g. 100% numeric indexes may stay fixed-length
regardless of their size).  I'm not sure...

You can change the behavior, however, by setting PACK_KEYS in ALTER/CREATE
TABLE.  The default, in MySQL 4+, is DEFAULT, where MySQL decides whether
to use fixed-length keys (faster) or packed variable-length keys (space
saving) depending on the index.  Setting PACK_KEYS to 0 forces all
fixed-length keys (I *think* 0 in 3.23 is like DEFAULT in 4+).  Setting
PACK_KEYS to 1 packs all *non-unique* indexes, even if they're numeric or
<= 8 bytes.
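
For example (hypothetical table name), packing can be forced or disabled
per table; either form rebuilds the table:

```sql
-- Pack all non-unique indexes, trading lookup speed for index size:
ALTER TABLE big_table PACK_KEYS = 1;
-- Force fixed-length keys instead:
ALTER TABLE big_table PACK_KEYS = 0;
-- Let MySQL decide per index (the 4.x default):
ALTER TABLE big_table PACK_KEYS = DEFAULT;
```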

It's useful to use myisamchk to find out the properties of columns in an
index:

myisamchk -dv /path/to/db/table

In the second half of the output, you will see information about the
table's indexes.  Some things you may see in the Type column are:

packed - I think this is for character indexes that have prefix compression
(multiple index entries that start with the same characters are
compressed).  Any unused space at the end of the index (storing 10 chars in
a 32 character index) is also not stored (like you were talking about
above).

prefix - I think this one is for numeric indexes that have prefix
compression (in an INT index, values 0 - 255 use the same 3 bytes, so those
can be compressed).

stripped - This is for character indexes that have unused trailing space
stripped (again, like you were talking about above).

Now, having said that, there are still some things in the myisamchk output
that I can't figure out: like sometimes there will be "packed" and
"stripped" on the same column; sometimes not.  And other things I can't
remember now that don't seem consistent.  I just kinda figured it out on my
own, since I don't know that there are official descriptions anywhere.
But at least it gives you more of an idea of what's going on internally
than you can get from a SQL query. :-)


 Thanks for any input.
 David.

Hope that helps somewhat.

Matt






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB case sensitive collation

2004-05-18 Thread Matt Mastrangelo
I'm using version 4.1.1-alpha, running on RedHat Linux 9.
Victoria Reznichenko wrote:
Matt Mastrangelo [EMAIL PROTECTED] wrote:
 

How can an InnoDB table be created with case sensitive collation? The 
example below creates two identical tables, one MyISAM and the other 
InnoDB. The InnoDB fails when inserting primary keys that differ in case 
only. What am I doing wrong?
   

Which version do you use?
Worked fine for me on 4.1.2:
mysql> INSERT INTO `table_02` VALUES ('victoria'), ('Victoria');
Query OK, 2 rows affected (0.04 sec)
Records: 2  Duplicates: 0  Warnings: 0
mysql> SELECT * FROM `table_02`;
+--+
| tst_key  |
+--+
| Victoria |
| victoria |
+--+
2 rows in set (0.00 sec)
 

Thanks.
drop database test;
create database test default character set latin1 default collate 
latin1_general_cs;
CREATE TABLE `table_01` (`tst_key` varchar(10), PRIMARY KEY(`tst_key`)) 
Type=MyISAM DEFAULT CHARSET latin1 COLLATE=latin1_general_cs;
CREATE TABLE `table_02` (`tst_key` varchar(10), PRIMARY KEY(`tst_key`)) 
Type=InnoDB DEFAULT CHARSET latin1 COLLATE=latin1_general_cs;


   


 

--
Matt Mastrangelo
X2 Development Corporation
781-740-2679 



Problem with like wildcard syntax

2004-05-18 Thread Matt Mastrangelo
I'm running version 4.1.1-alpha. The 3 select statements below on the 
following test table produce inconsistent results:

create table test (test varchar(20)) charset latin1 collate 
latin1_general_cs;
insert into test values ('abcField1');
insert into test values ('abcField2');
insert into test values ('abcField3');
insert into test values ('xyzField1');
insert into test values ('xyzField2');
insert into test values ('xyzField3');

select * from test where test like '___Field%'; /* Works */
select * from test where test like '%Fie%'; /* Works */
select * from test where test like '%Field%'; /* Does NOT work */
Am I doing something wrong?
Thanks.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


InnoDB case sensitive collation

2004-05-17 Thread Matt Mastrangelo
How can an InnoDB table be created with case sensitive collation? The 
example below creates two identical tables, one MyISAM and the other 
InnoDB. The InnoDB fails when inserting primary keys that differ in case 
only. What am I doing wrong?

Thanks.
drop database test;
create database test default character set latin1 default collate 
latin1_general_cs;
CREATE TABLE `table_01` (`tst_key` varchar(10), PRIMARY KEY(`tst_key`)) 
Type=MyISAM DEFAULT CHARSET latin1 COLLATE=latin1_general_cs;
CREATE TABLE `table_02` (`tst_key` varchar(10), PRIMARY KEY(`tst_key`)) 
Type=InnoDB DEFAULT CHARSET latin1 COLLATE=latin1_general_cs;


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: WHERE clause problem

2004-05-03 Thread Matt Chatterley
Hmm.

Bit Odd. However, I suspect the problem is that your 'where' isn't explicit
enough:

Where ( month(date) = month(now()) ) or ( month(date) = month(now())-1 )

Bear in mind that if month(now()) = 1 you will be looking for records in
month 0!

A better way to do this might be:

WHERE month(date) BETWEEN month(now() - interval 1 month) AND month(now())
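
One caveat with comparing month numbers alone: in January the previous
month wraps around to December, so a month-number BETWEEN breaks across the
year boundary. A sketch that compares the date column itself (and so can
also use an index on date) avoids that:

```sql
-- Rows from the first day of last month onward, i.e. last month or this
-- month, without any month-number arithmetic:
SELECT *
  FROM testing
 WHERE date >= DATE_FORMAT(DATE_SUB(NOW(), INTERVAL 1 MONTH), '%Y-%m-01');
```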

Cheers,

Matt

 -Original Message-
 From: mayuran [mailto:[EMAIL PROTECTED]
 Sent: 03 May 2004 16:15
 To: [EMAIL PROTECTED]
 Subject: WHERE clause problem
 
 This is my table:
 mysql> desc testing;
 +---+--+--+-+-+---+
 | Field | Type | Null | Key | Default | Extra |
 +---+--+--+-+-+---+
 | date  | date | YES  | | NULL|   |
 +---+--+--+-+-+---+
 
 
 Here are the values:
 
 mysql> select * from testing;
 ++
 | date   |
 ++
 | 2004-04-10 |
 | 2004-04-15 |
 | 2004-01-01 |
 ++
 
 Here is my question:
 
 The following query returns incorrect rows and I don't understand why.
 
 mysql> SELECT * FROM testing WHERE MONTH(date) = (MONTH(NOW()) OR
 MONTH(NOW())-1);
 ++
 | date   |
 ++
 | 2004-01-01 |
 ++
 
 I wanted the query to return the rows whose months are from this month
 or last month.
 
 This query however, returns the correct rows:
 mysql> SELECT * FROM testing WHERE MONTH(date) = MONTH(now()) OR
 MONTH(date) = MONTH(NOW())-1;
 ++
 | date   |
 ++
 | 2004-04-10 |
 | 2004-04-15 |
 ++
 
 Why does the first one not work? It's shorter to type :)
 
 Thanks
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Delayed insert record visibility

2004-05-02 Thread Matt W
Hi Peter,

- Original Message -
From: Peter Thomas
Sent: Saturday, May 01, 2004 11:24 PM
Subject: Delayed insert record visibility


 I'm trying to understand the delayed insert process to see whether I can
 use it to reduce the load on mysql, and have the following question.

 Are 'delayed inserts' which are queued but not yet actually added to the
 database by the handler visible to selects on the table?

 I'm assuming they are not. Hence, if I am using a table effectively as a
 queue, where I insert records on one end and select .. limit 1 to pull
 them off the other end, I could end up being told by the select that there
 is nothing left in the queue, when in reality there could be a dozen
 records waiting in the delayed insert handler to be put into the table.

 Is my assumption correct?

Yes it is.


 Cheers
 Peter

Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Indexing

2004-05-02 Thread Matt W
Hi John,

- Original Message -
From: John Mistler
Sent: Sunday, May 02, 2004 12:50 AM
Subject: Indexing


 I know this is an elementary question, but I am getting two sets of
 instructions from different MySQL manuals about setting an index on a
prefix
 of a column of a table.  One says to use:

 KEY indexName (colName(length))

 and the other says to use

 INDEX indexName (colName(length))

 Are both all right?  Any light shed on indexing columns would be much
 appreciated.

INDEX is a synonym for KEY... or vice versa.  So yes, they're both the
same.  In fact, you'll see that if you use INDEX, and then use SHOW
CREATE TABLE, MySQL will have it specified as KEY. :-)
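
A quick illustration:

```sql
CREATE TABLE t (name CHAR(40), INDEX idx_name (name(10)));
-- SHOW CREATE TABLE t then reports the same index as:
--   KEY `idx_name` (`name`(10))
```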


 Thanks,

 John

Hope that helps.


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Another Trailing Spaces Issue

2004-05-02 Thread Matt W
Hi John,

What version do you use?  In 4.0.18, they fixed some bugs that were
introduced in 4.0.17 related to trailing spaces on indexed TEXT-family
columns: http://dev.mysql.com/doc/mysql/en/News-4.0.18.html

I see 3 "Bugs fixed" entries with "trailing spaces" in them.  If you're
not using 4.0.17, what you're seeing IS a bug and should be reported if
it hasn't already been.


Matt


- Original Message -
From: John Mistler
Sent: Friday, April 30, 2004 1:39 PM
Subject: Another Trailing Spaces Issue


 The TINYTEXT format solves the problem of storing the string with spaces
 at the end.  Now, I'm having trouble SELECTing a row WHERE the TINYTEXT
 column = theStringWithTheSpacesAtTheEnd;

 If the string is "theString" + space and another row has an entry with
 "theString" (+ no space), the query returns BOTH rows.

 Is there a way to get around returning the latter entry?

 Thanks,

 John


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: simplifying OR clauses

2004-05-02 Thread Matt Chatterley
As others have said, you can use 'IN'. You could also use UNION (although I
don't think I would, personally, for quite what you want!):

SELECT word FROM word_table WHERE id = 1
UNION
SELECT word FROM word_table WHERE id = 2

Etc. Assuming your version of MySQL supports the UNION operator!

Another option (although less elegant than 'IN') is to create a temporary
table with one column, 'word_id' or similar, and insert all of the IDs you
wish to search for in there. You can then INNER JOIN to that table:

SELECT word FROM word_table wt INNER JOIN id_table it ON it.word_id = wt.id

It all depends on how you're doing this, and exactly what you want. :)
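
For completeness, the IN form of the original query is simply:

```sql
-- With an index on id, MySQL can use it for the IN list
-- just as it would for the chain of ORs:
SELECT word FROM word_table WHERE id IN (1, 34, 78, 8787);
```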

Cheers,

Matt

 -Original Message-
 From: Matthias Eireiner [mailto:[EMAIL PROTECTED]
 Sent: 26 April 2004 23:00
 To: [EMAIL PROTECTED]
 Subject: simplifying OR clauses
 
 hi there,
 
 I have a basic question:
 how can I simplify multiple OR statements in a WHERE clause where I have
 only one column to which I refer?
 
 e.g.
 
 SELECT word FROM word_table WHERE id = 1 OR id = 34 OR id = 78 OR id =
 8787
 OR ...
 
 I thought I once read over something like this but I can't find it right
 now. Would be great if somebody could help me out!
 Thanks a lot in advance!
 
 regards
 
 Matthias
 
 
 _
 
 Matthias Eireiner
 
 email: [EMAIL PROTECTED]
 
 www.bvcapital.com
 _
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Standard of Column Names

2004-05-02 Thread Matt Chatterley
To me, this is entirely a matter of personal choice - and the important
thing is to pick a standard and stick to it. :)

I usually end up with a table called 'People' for argument's sake, which will
have an abstract PK (auto increment int) called PeopleID (I always use the
table name). I also capitalize each word (and all abbreviations), which is a
habit from MSSQL programming - MySQL is case sensitive, which is worth
remembering. I use underscores to indicate that a table is a 'glue' table -

e.g. If each row in People can correspond to multiple rows in the table
Jobs, and vice versa, I would create People_Jobs to describe the
relationship between the two.

There are a number of different methods that have been published, including
'Norwegian', I believe - and a bit of googling should turn up some info on
these. :)

Cheers,

Matt

 -Original Message-
 From: Ronan Lucio [mailto:[EMAIL PROTECTED]
 Sent: 27 April 2004 15:46
 To: [EMAIL PROTECTED]
 Subject: Standard of Column Names
 
 Hello,
 
 I'm doing the planning for an application that will use
 MySQL as its database.

 So, I'd like to know your opinions about the standard
 for column names.

 Supposing that I create a table named "car":
 is it better to have the column names (cod,
 name, description) or (car_cod, car_name, car_description)?
 
 Thanks,
 Ronan
 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: What is your mysql debugging strategy?

2004-05-02 Thread Matt Chatterley
I do something similar in my PHP applications - using an object wrapper to
the SQL connection. That way, when an error occurs, the object automatically
outputs the query, along with any error which was returned.


Cheers,

Matt

 -Original Message-
 From: Richard Bryson [mailto:[EMAIL PROTECTED]
 Sent: 27 April 2004 22:26
 To: [EMAIL PROTECTED]
 Subject: Re: What is your mysql debugging strategy?
 
 I only use mysql with php so all I need is
 
 php code:
 $result = mysql_query($sql) or die(mysql_error());
 
 This always tells me what I did wrong in the query. You could easily put
 together a very short script into which you just drop your query. This
 would output the problem to the page. Dead simple and quick.
 
 Rich
 
 Joshua J. Kugler wrote:
 
  On Tuesday 27 April 2004 04:26 am, zzapper said something like:
 
 Even though I solved the following problem myself, I'd like to know
 what debugging strategy people use to solve problems when they get the
  dreaded "Error in MySQL, look in the manual"
 
 
  Fire up MySQL CC and paste the SQL in there, and see what error it gives
 me.
  As in 'You have an error near' type messages.
 
  j- k-
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Storing a space

2004-04-30 Thread Matt W
Hi John,

I *think* VARCHAR is *supposed* to work that way, but doesn't in MySQL.
So you'll have to use TINYTEXT.  Its storage requirements are the same
as VARCHAR(255) and it behaves the same way, except for, I think, 3
things: 1) the trailing space thing, obviously; 2) it can't have a
DEFAULT value; and 3) you can't index the whole column -- but you can
INDEX (col(255)), which has the same effect. :-)
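
A small sketch of the swap (table and column names invented); the CONCAT
brackets just make the kept trailing space visible:

```sql
CREATE TABLE t1 (
  s TINYTEXT NOT NULL,
  INDEX idx_s (s(255))       -- same effect as indexing the whole column
) TYPE=MyISAM;

INSERT INTO t1 VALUES ('theString ');   -- trailing space is kept
SELECT CONCAT('[', s, ']') FROM t1;     -- brackets expose the space
```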

Hope that helps.


Matt


- Original Message -
From: John Mistler
Sent: Friday, April 30, 2004 3:54 AM
Subject: Storing a space


 Is there a way to force an invisible space character to be stored at the
 END of a string in a column (either VARCHAR or CHAR), so that when it is
 retrieved the space at the end is not cut off?

 "theString" + space

 or even,

 "theString" + space + space + space, etc.

 Currently, I can only get the string back as "theString"

 Thanks,

 John


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



date format problem

2004-04-30 Thread Matt Tucker
Hi,

I'm moving a Java servlet app over from Tomcat on Win2K with an MS SQL 7 DB to
Tomcat on Red Hat Linux with MySQL. Of course, there are about a hundred
queries that use dates, and of course, they're all in the format mm-dd-yyyy.
Is there a way to format the date column in my MySQL tables to accept a date
in this format, or do I really have to go through every SQL statement and
parse the date and rebuild it to be yyyy-mm-dd? Thanks so much. Deadline is
fast approaching.

Matt Tucker
thoughtbot


Re: fulltext index -- word needs not found

2004-04-28 Thread Matt W
Hi Joyce,

"needs" is a stopword; that's why it's not indexed or found.  You can
use your own ft_stopword_file to define the list without "needs".  The
default, built-in stopword list is defined in, I think, the
myisam/ft_static.c file of the source distribution, for reference.

Hope that helps.


Matt


- Original Message -
From: [EMAIL PROTECTED]
Sent: Wednesday, April 28, 2004 12:01 AM
Subject: fulltext index -- word needs not found


 Description:
 We have three different unrelated tables, each with one field that
 has a fulltext index. In each table, in the field with the fulltext
 index, there are records with the word "needs" in them, and in each
case no
 records are found when doing this type of query (using a fulltext
index):

   select * from testdb where match(highergeog) against('needs' in
boolean mode);

 However, records are found when doing substring searches:

   select * from testdb where highergeog like '%needs%';

snip

 Also, I know someone running 4.0.15 on Linux, and "needs" can't be
found
 in fulltext queries on his data also, even though the word exists in
the data.

 Perhaps this is all solved in 4.0.18?


 Fix:
 Don't use fulltext index--use substring search.
 But substring is slower than fulltext.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Compound Primary Key question

2004-04-24 Thread Matt Chatterley
As Jeremy says - it depends totally on what you want to do.

If you have tables where there is no logical, unique way to identify that
column (or the only way to do so is via a column you do not want to use for
this purpose), then assigning a separate ID column as a PK makes sense.

E.g: If you have a lookup table 'ItemDescription' which contains a list of
description fields for items, it would make sense to make the table (ItemID,
Description) with ItemID being an autoincrement primary key.

However, in some other cases, a compound key will make more sense - for
instance if you have a 'glue table' such as 'Item_Shop' which lists the
items that are available in each shop: (ItemID, ShopID), then clearly, you
cannot have a PK on either column alone (since there is a many to many
relationship), so a compound PK is the only way to actually put a PK on the
table (and uniquely identify a given row).

One rule of thumb is: If there are two or more columns within a given table
which together are the logical way to identify that row (and the way you
would always join to the table), then use those as a compound key, otherwise
assign a separate autoincrement column as a PK.


Cheers,

Matt

 -Original Message-
 From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]
 Sent: 23 April 2004 23:51
 To: Emmett Bishop
 Cc: [EMAIL PROTECTED]
 Subject: Re: Compound Primary Key question
 
 On Fri, Apr 23, 2004 at 03:40:43PM -0700, Emmett Bishop wrote:
  Quick question. In general, is it better to create
  compound primary keys or use an auto increment field
  to uniquely identify each record?
 
 Yes.
 
 It depends on your application and your data.
 
 Jeremy
 --
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/
 
 [book] High Performance MySQL -- http://highperformancemysql.com/
 







Re: [PHP] oblivious to the obvious

2004-04-23 Thread Matt Matijevich
[snip]
$sun_5a_n1 = 1;
$sun_5a_t1 = 2;

if($sun_5a_n1) {
$result = mysql_query("UPDATE sunday SET
a5_n_1='$sun_5a_n1',a5_t_1='$sun_5a_t1'");
while($row = mysql_fetch_array($result)) {
(line 17)echo "Sunday @ 5am slot modified to Name: $sun_5a_n1, Time:
$sun_5a_t1<br>";
break;
}
mysql_free_result($result);
(line 21)}
[/snip]

I don't think a SQL UPDATE will return a result resource, so you don't have
anything to fetch or free.
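The same behaviour holds for any DB-API driver, not just PHP's old mysql_* API. A minimal sqlite3 sketch (illustrative only): an UPDATE yields no result set to fetch, only an affected-row count.

```python
# Demonstration that an UPDATE produces no fetchable rows, only a row count.
# sqlite3 stands in for MySQL; table/column names come from the thread above.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sunday (a5_n_1 TEXT, a5_t_1 TEXT)")
cur.execute("INSERT INTO sunday VALUES ('old', 'old')")

cur.execute("UPDATE sunday SET a5_n_1 = '1', a5_t_1 = '2'")
rows_to_fetch = cur.fetchall()   # nothing to fetch from an UPDATE
affected = cur.rowcount          # but the affected-row count is available
```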




LAST_INSERT_ID() and Stored Procs

2004-04-22 Thread Matt Chatterley
Hi all.

Another hiccup along the happy road with MySQL 5.0!

The last bit of a stored procedure I have just put together does this:

-- insert cluster row
INSERT INTO clusters (Name) VALUES (sName);
SELECT LAST_INSERT_ID() INTO iNewClusterID;

-- insert map row
INSERT INTO map (X, Y) VALUES (iX,iY);
SELECT LAST_INSERT_ID() INTO iNewMapID;

-- insert map_clusters row
INSERT INTO map_clusters (MapID, ClusterID) VALUES (iNewMapID,
iNewClusterID);

The last table mentioned, map_clusters has an FK on either column - each
pointing to one of the other two tables. The procedure always fails on this
insert, citing that there has been an FK violation.

I've returned the values of iNewClusterID and iNewMapID out as parameters,
and they always seem to be 0.

However, I tried this:

Create procedure id_test (out id int)
Begin
Select last_insert_id() into id;
End

And this correctly returns the last insert_id for the current connection.

Most puzzling - I saw a closed bug from March on mysql.com which would have
explained this; however, if that bug were the cause, the short procedure
above would have failed as well!

Has anyone out there run into similar troubles?
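As an aside, the "per connection" behaviour that the short test procedure relies on is easy to demonstrate outside MySQL. A hedged sketch using sqlite3's last_insert_rowid() as a stand-in for LAST_INSERT_ID(), with two connections to one shared in-memory database:

```python
# Two connections to the same database each track their own last insert id.
# sqlite3's last_insert_rowid() is used here as an analogue of MySQL's
# LAST_INSERT_ID(); the table name is illustrative.
import sqlite3

db = "file:lastid_demo?mode=memory&cache=shared"
conn_a = sqlite3.connect(db, uri=True)
conn_b = sqlite3.connect(db, uri=True)

conn_a.execute(
    "CREATE TABLE clusters (id INTEGER PRIMARY KEY AUTOINCREMENT, Name TEXT)")
conn_a.execute("INSERT INTO clusters (Name) VALUES ('alpha')")  # gets id 1
conn_a.commit()
conn_b.execute("INSERT INTO clusters (Name) VALUES ('beta')")   # gets id 2
conn_b.commit()

# Each connection reports the id of ITS OWN last insert.
last_a = conn_a.execute("SELECT last_insert_rowid()").fetchone()[0]
last_b = conn_b.execute("SELECT last_insert_rowid()").fetchone()[0]
```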


Cheers,

Matt






RE: LAST_INSERT_ID() and Stored Procs

2004-04-22 Thread Matt Chatterley
Oh.

If only I'd tried ONE more thing before mailing that out!

If anyone does have the same problem, the vital (missing) piece of
information is that I was using MySQLCC. It seems to have problems with SPs
unless you open a new query window after changing the contents of a
procedure.


Thanks,

Matt

 -Original Message-
 From: Matt Chatterley [mailto:[EMAIL PROTECTED]
 Sent: 23 April 2004 02:08
 To: 'MySQL List'
 Subject: LAST_INSERT_ID() and Stored Procs
 
 Hi all.
 
 Another hiccup along the happy road with MySQL 5.0!
 
 The last bit of a stored procedure I have just put together does this:
 
 -- insert cluster row
 INSERT INTO clusters (Name) VALUES (sName);
 SELECT LAST_INSERT_ID() INTO iNewClusterID;
 
 -- insert map row
 INSERT INTO map (X, Y) VALUES (iX,iY);
 SELECT LAST_INSERT_ID() INTO iNewMapID;
 
 -- insert map_clusters row
 INSERT INTO map_clusters (MapID, ClusterID) VALUES (iNewMapID,
 iNewClusterID);
 
 The last table mentioned, map_clusters has an FK on either column - each
 pointing to one of the other two tables. The procedure always fails on
 this
 insert, citing that there has been an FK violation.
 
 I've returned the values of iNewClusterID and iNewMapID out as parameters,
 and they always seem to be 0.
 
 However, I tried this:
 
 Create procedure id_test (out id int)
 Begin
 Select last_insert_id() into id;
 End
 
 And this correctly returns the last insert_id for the current connection.
 
 Most puzzling - I saw a closed bug from March on mysql.com which would
 have
 explained this, however, then, the above short procedure would have failed
 as well!
 
 Has anyone out there run into similar troubles?
 
 
 Cheers,
 
 Matt
 
 
 







RE: query help

2004-04-21 Thread Matt Chatterley
I suspect you want 'IS NULL' rather than '= NULL'. :)

I always find it best to think of NULL as 'undefined value' rather than 'no
value' - which is why you need to check for it specially (using IS rather
than = or other operators).
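A runnable illustration of the difference, using sqlite3 (the three-valued NULL logic is the same as MySQL's) and the books/copies tables from the question:

```python
# '= NULL' never matches anything; 'IS NULL' is the correct test, which is
# exactly what makes the LEFT JOIN anti-join pattern work. sqlite3 is used
# as a stand-in for MySQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books  (bookid INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE copies (copyid INTEGER PRIMARY KEY, bookid INTEGER);
INSERT INTO books  VALUES (1, 'has a copy'), (2, 'no copies');
INSERT INTO copies VALUES (10, 1);
""")

eq_null = conn.execute("""SELECT b.bookid FROM books b
    LEFT JOIN copies c ON b.bookid = c.bookid
    WHERE c.copyid = NULL""").fetchall()      # always empty

is_null = conn.execute("""SELECT b.bookid FROM books b
    LEFT JOIN copies c ON b.bookid = c.bookid
    WHERE c.copyid IS NULL""").fetchall()     # the books with no copies
```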


Cheers,


Matt

 -Original Message-
 From: Yonah Russ [mailto:[EMAIL PROTECTED]
 Sent: 21 April 2004 14:47
 To: MySQL List
 Subject: Re: query help
 
 I got a response off the list suggesting writing a function to go over
 the query results- it's not hard but I'd rather do this in sql if
 possible.
 
 I came up with this:
 select books.bookid,books.title,copies.copyid from books left join
 copies on books.bookid=copies.bookid where copies.copyid=NULL;
 
 this didn't work even though without the where clause I got exactly what
 I wanted- the left join filled in the entries that didn't have copies
 with a null copyid.
 
 what did I do wrong?
 thanks
 yonah
 
 Yonah Russ wrote:
 
  Hi,
  I have two tables- books and copies
 
  every book has an id in the books table
  every copy of a book has the books id and a copy id in the copies
  table (1 row per copy)
 
  I want a list of all the books that don't have any copies meaning all
  the book id's in books that don't match any book id's in copies.
 
  how can I do this?
  thanks
  yonah
 
 







Re: Slow Query Question - Need help of Gurus.

2004-04-19 Thread Matt W
Ben,


- Original Message -
From: Ben Dinnerville
Sent: Monday, April 19, 2004 1:49 AM
Subject: RE: Slow Query Question - Need help of Gurus.



snip

Then try again:

   SELECT `Call Svc Tag ID`,
   Count(*) as counter,
  `Journal Create Date`
   FROM 31909_859552
   WHERE `Journal Create Date` between '2004-03-13' AND '2004-03-16'
   AND `Call Svc Tag ID` <> 'null'
   GROUP BY `Call Svc Tag ID`
   HAVING counter > 3
   ORDER BY counter;

 The count(*) will be causing some havoc here, as all columns in the
 underlying table(s) will have to be read in, negating any direct hits
to the
 index's and causing a lot more IO than is needed. Change it to a count
on
 one of the columns in the result set or simply a count(1) as
counter -
 will give you the same result without any IO.

COUNT(*) is not a problem.  It won't cause the data file to be read if
just the index can be used.  EXPLAIN will show the same plan for
COUNT(*) and COUNT(1).  :-)


Matt





RE: Slow Connection from PHP to MySQL 5.0

2004-04-17 Thread Matt Chatterley
Yep - both the Win2k box and Linux box now identify each other by IP (access
control and any host references in code) - but the problem persists.

And yep again, I agree that the presence of windows is the other variable!
However, I don't see why it would be so slow to connect! Perhaps it's a
peculiarity of the MySQL 5 windows build currently available - but then I
was hoping someone else would have seen the same issue!

I'm going to try building MySQL 5 on another linux box at some point (then I
can throw a backup of the database on there and try it), but would really
like to keep it installed on Windows at the moment - because it's more
convenient to debug (and restart)!


Cheers,


Matt

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:mysql-list-
 [EMAIL PROTECTED]
 Sent: 15 April 2004 20:29
 To: Matt Chatterley
 Subject: Re: Slow Connection from PHP to MySQL 5.0
 
 do you have (mysql) access control on the mysql5 box that's based on
 hostname, rather than ipnumber?  if so, how quickly does the
 inverse-map address of the linux box resolve?
 
 from a testing perspective, you have two variables with your setup.
 the first is mysql5 (vs. 4) but you also have windoz vs unix.  part
 of the issue may simply be that windoz is that much slower setting up
 the connection.
 
 
 -- Original Message --
  From: Matt Chatterley [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Date: Thursday, April 15, 2004 07:58:20 PM +0100
  Subject: Slow Connection from PHP to MySQL 5.0
 
  Hi Folks.
 
  I've seen similar questions asked both here, and via a google
  search - but I haven't found an answer which has helped yet - so
  heres hoping!!
 
  I run a Linux webserver which has PHP 4 installed, and am currently
  prototyping a design using MySQL 5.0-alpha on Windows 2K
  professional. The two servers are on different subnets of a LAN,
  but are able to talk to each other unrestricted (all TCP services I
  have tried work perfectly, e.g. SMTP, FTP).
 
  The problem is that connections from PHP to MySQL seem to take 4-5
  seconds (after which any queries within the connection go through
  in normal lengths of time - only the connection time itself is
  long). Although I have a fully functional internal DNS server, I
  thought name resolution might be an issue. I amended my PHP pages
  (they inherit a global 'data connection' object which is used as a
  wrapper) to connect to the Win2k box by IP address - and the
  problem persists.
 
  Has anyone else encountered this, and are there any ideas?
 
  Connections from the same Linux box with PHP to another Linux box
  on the same subnet as the Win2k box running MySQL 4 work perfectly
  - so is it possible this is a MySQL 5 issue? I am using the 'old
  connections' flag (but have tried both with it off and on, since I
  am not actually using a password for the connection at present).
 
  Any suggestions will be most gratefully received!
 
 
  Cheers,
 
 
  Matt
 
 
 
 
 -- End Original Message --
 
 







RE: InnoDB

2004-04-15 Thread Matt Lynch
Hi Jenny,

mysql> SHOW VARIABLES LIKE '%have%';
++---+
| Variable_name  | Value |
++---+
| have_bdb   | YES   |
| have_crypt | NO|
| have_innodb| YES   |
| have_isam  | YES   |
| have_raid  | NO|
| have_symlink   | YES   |
| have_openssl   | NO|
| have_query_cache   | YES   |
++---+
8 rows in set (0.03 sec)

You might find the following helpful
http://www.codeant.com/tutorials/mysql/MySQLTutorial.html

Matt

-Original Message-
From: Chen, Jenny [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 15, 2004 10:23 AM
To: '[EMAIL PROTECTED]'
Subject: InnoDB


MySQL Experts:

I am new for MySQL database.  
We have a 4.0.18 MySQL sit on Linux box.

I am reading on InnoDB. And having a question.

How do I know the table is configured by InnoDB instead of normal table
? Should I at least see some entry in the /etc/my.cnf to indicate that
InnoDB is configured?

Thanks in advance.
Jenny












Slow Connection from PHP to MySQL 5.0

2004-04-15 Thread Matt Chatterley
Hi Folks.

I've seen similar questions asked both here, and via a google search - but I
haven't found an answer which has helped yet - so here's hoping!

I run a Linux webserver which has PHP 4 installed, and am currently
prototyping a design using MySQL 5.0-alpha on Windows 2K professional. The
two servers are on different subnets of a LAN, but are able to talk to each
other unrestricted (all TCP services I have tried work perfectly, e.g. SMTP,
FTP).

The problem is that connections from PHP to MySQL seem to take 4-5 seconds
(after which any queries within the connection go through in normal lengths
of time - only the connection time itself is long). Although I have a fully
functional internal DNS server, I thought name resolution might be an issue.
I amended my PHP pages (they inherit a global 'data connection' object which
is used as a wrapper) to connect to the Win2k box by IP address - and the
problem persists.

Has anyone else encountered this, and are there any ideas?

Connections from the same Linux box with PHP to another Linux box on the
same subnet as the Win2k box running MySQL 4 work perfectly - so is it
possible this is a MySQL 5 issue? I am using the 'old connections' flag (but
have tried both with it off and on, since I am not actually using a password
for the connection at present).

Any suggestions will be most gratefully received!


Cheers,


Matt






Re: Altering MAX_DATA_LENGTH

2004-04-13 Thread Matt W
Hi Dan,

(Sending to the General list too, since this isn't a Windows-specific
thing.)

SHOW TABLE STATUS LIKE 'tbl_name'

will show you the current Avg_row_length.

But the specific values for MAX_ROWS and AVG_ROW_LENGTH when you CREATE
or ALTER the table don't matter (except for looking correct :-)) as
long as their product is greater than 4GB.  BTW, you can't have the
limit be 8GB -- when you go greater than 4GB, the Max_data_length will
be 1TB.


Hope that helps.


Matt


- Original Message -
From: Dan
Sent: Tuesday, April 13, 2004 3:58 PM
Subject: Altering MAX_DATA_LENGTH


 If I have a table that has two fields: Char(100), Blob

 How do I determine the avg_row_length value if I want to increase the
size limit to 8GB?

 Thanks
 Dan





RE: Do I specify a primary key to be primary, unique and index ?

2004-04-11 Thread Matt Chatterley
As I discovered recently, thanks to another user on this list, there is at
least one situation where you WILL need to also create a KEY index on a
PRIMARY KEY column -

If you have a composite primary key such as (col1, col2) and you wish to
place a foreign key on col2, you will ALSO have to add a KEY on col2 to be
able to do so.


Cheers,

Matt.

-Original Message-
From: Eldon Ziegler [mailto:[EMAIL PROTECTED] 
Sent: 11 April 2004 08:53
To: [EMAIL PROTECTED]
Subject: Re: Do I specify a primary key to be primary, unique and index ?

 From the MySQL documentation:
* A PRIMARY KEY is a unique KEY where all key columns must be defined
as NOT NULL.
KEY is a synonym for INDEX. So, specifying PRIMARY KEY implies UNIQUE and
INDEX. You don't have to specify them yourself.

At 01:11 am 4/11/2004, you wrote:
I learned that there are three types of indexes (PRIMARY, UNIQUE, and
INDEX).

Now assuming I create a performance-critical PRIMARY key, do I also have
to specify UNIQUE and INDEX for this column!? It should be obvious
that a primary key is unique anyway, and an index as well, shouldn't it!?
Please note, I am not after saving disk space here, performance is all I am
after, and such a three-fold indexing exercise just seems redundant to me in
the best case scenario, or harmful even, am I right there!?









RE: MSSQL to MYSQL

2004-04-11 Thread Matt Chatterley
Yep. There's no reason at all why this sort of thing won't work for MSSQL
too. Use SQL Enterprise Manager to generate a create script for all objects
in the database, and also tell it to script referential integrity (FKs,
etc).

Then add anything MySQL specific, such as Type=InnoDB (which you will need
for transactions/FKs, although I believe BDB works as well?).

The only possible problem you will run into is with any code that is
embedded into the database - stored procedures shouldn't be too bad, as the
syntax in MySQL is fairly similar, you'll just end up changing some function
names and tweaking (unless you have very complicated MS SPs). Remember that
there are no table variables in MySQL, and that the syntax to create a
temporary table is 'create temporary table xyz' not 'create #xyz'.
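The per-connection scope of temporary tables mentioned above can be demonstrated with sqlite3 as a stand-in (MySQL's CREATE TEMPORARY TABLE scoping is analogous): a temp table created on one connection simply does not exist for another.

```python
# Temporary tables are visible only to the connection that created them.
# Two connections share one in-memory database; the temp table stays private.
import sqlite3

db = "file:tmpdemo?mode=memory&cache=shared"
conn_a = sqlite3.connect(db, uri=True)
conn_b = sqlite3.connect(db, uri=True)

conn_a.execute("CREATE TEMPORARY TABLE xyz (n INTEGER)")
conn_a.execute("INSERT INTO xyz VALUES (42)")
visible_to_a = conn_a.execute("SELECT n FROM xyz").fetchall()

try:
    conn_b.execute("SELECT n FROM xyz")
    visible_to_b = True
except sqlite3.OperationalError:   # "no such table" on the other connection
    visible_to_b = False
```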

Views of course, are a different matter. In terms of the database structure
itself, without embedded code though, it should work perfectly...

Cheers,


Matt

-Original Message-
From: David Carlos Brunstein [mailto:[EMAIL PROTECTED] 
Sent: 11 April 2004 05:23
To: 'Rodrigo Galindez'
Cc: [EMAIL PROTECTED]
Subject: RE: MSSQL to MYSQL

Hi Rodrigo.

I'm facing a similar task, but from Informix to MySQL. What I do is:

1. Obtain a SQL script to create the logical database (an Informix tool
provides it). You can also use ERwin: reverse engineer and then save the
script.

2. Add the Type=INNODB clause to each CREATE statement.

3. Divide the script into two: one for the table creation (with the
primary keys) (CreateDB.sql) and another for the ALTER TABLE statements
that create the foreign keys (AlterDB.sql).

4. Create a script for loading data from TXT files. First you have to
save every table's data from SQL Server into TXT files, then load them
into MySQL tables (LoadDB.sql).

5. Run the AlterDB.sql script (step 3).

It works fine for me.

Regards,
David.
 
==
David Carlos Brunstein
System Analyst / Software Developer
Buenos Aires, Argentina
 
Mail to: David _ Brunstein @ Yahoo . Com . ar
IM: DavidBrunstein @ Hotmail . Com



-Original Message-
From: Rodrigo Galindez [mailto:[EMAIL PROTECTED] 
Sent: Saturday, April 10, 2004 3:38 AM
To: [EMAIL PROTECTED]
Subject: MSSQL to MYSQL


Hello list,
I have to move a database in MSSQL to MYSQL, with the table 
structures and all the respective data. I tested one product to do this,

SQLyog, and it works fine, except for some little problems with indexes 
and primary/secondary keys. I want to know if anyone has been dealing
with the same problem and can recommend some suggestions/tips/tricks. Do
you know another program/script/ways to do this migration ? I want to 
migrate everything from the original MSSQL database, like indexes, 
relationships, and so on. Can you guys recommend me some actions or tips

to take ?
Thanks in advance,

-- 
Rodrigo Galindez
Information Management Assistant
Center for Human Rights and Environment (CEDHA)
Gral Paz 186 10 A
5000 - Cordoba - Argentina
Tel/fax 54-351-4256278
[EMAIL PROTECTED]
www.cedha.org.ar












RE: sql join help

2004-04-11 Thread Matt Chatterley
Hmm.

SELECT o.*
FROM orders o
INNER JOIN orderitems oi ON o.orderid = oi.orderid
INNER JOIN products p ON p.productid = oi.productid
AND p.productparentid > 2
WHERE o.orderstatus =2

Not sure why you are checking for a NULL orderId in orderitems? That would
suggest you get back only items that have no associated order?

The above should do what you state below, though - I think!
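A small runnable check of the INNER JOIN approach, using sqlite3 with made-up sample rows (a DISTINCT is added here so that multi-item orders come back only once; the thread's tables are reduced to the columns the query touches):

```python
# Orders with status 2 that contain at least one product whose parent
# category id is greater than 2. Sample data is invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders     (orderid INTEGER PRIMARY KEY, orderstatus INTEGER);
CREATE TABLE orderitems (orderid INTEGER, productid INTEGER);
CREATE TABLE products   (productid INTEGER PRIMARY KEY, productparentid INTEGER);
INSERT INTO orders     VALUES (1, 2), (2, 2), (3, 1);
INSERT INTO orderitems VALUES (1, 100), (2, 200), (3, 100);
INSERT INTO products   VALUES (100, 5), (200, 1);
""")

matches = conn.execute("""
    SELECT DISTINCT o.orderid
    FROM orders o
    INNER JOIN orderitems oi ON o.orderid = oi.orderid
    INNER JOIN products p ON p.productid = oi.productid
                         AND p.productparentid > 2
    WHERE o.orderstatus = 2""").fetchall()
# Only order 1 qualifies: status 2 and a product in a parent category > 2.
```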


Cheers,

Matt

-Original Message-
From: Michael Collins [mailto:[EMAIL PROTECTED] 
Sent: 11 April 2004 05:14
To: [EMAIL PROTECTED]
Subject: sql join help

I suppose this would be easier with subselects but I am using MySQL 4:

I want all orders that are of orderStatus 2 and whose orderitems 
contain a product that is in a productparent category greater than 2. 
An orderitem can only have one product, and that product has a single 
certain product parent (defined in the products table). This is how 
the tables are related:

members - orders - orderitems - products - productparents

I have tried the following, but I know it is not correct:

SELECT count(*) FROM orders AS o
LEFT JOIN members AS m USING (memberId)
LEFT JOIN orderItems AS oi ON (o.orderId=oi.orderId)
LEFT JOIN products AS p ON (oi.productId=p.productId) AND 
(p.productParentId > 2)
WHERE (oi.orderId IS NULL) AND (o.orderStatus=2);

-- 
Michael
__
||| Michael Collins
||| Kuwago Inc  mailto:[EMAIL PROTECTED]
||| Seattle, WA, USAhttp://michaelcollins.net








5.0.0a stored procedure crash

2004-04-11 Thread Matt Chatterley
Hi folks.

I've just submitted the following as a 'bug' via the MySQL website, and was
wondering if anyone out there had experienced the same problem?

It only seems to cause a crash when a nested IF is put into a stored
procedure, so the work-around is obvious - don't nest 'em!


As posted to mysql.com:

Description: Using mysqld-opt on Windows 2000 Professional (5.0.0a-alpha),
and
the stored procedure shown below is created and called, the error: The
instruction at 0x referenced memory at 0x. The memory
could
not be read

Is displayed, and the server shuts down 'unexpectedly'.

I have tried different permutations of the procedure, and it is the addition
of
a 'nested' if statement which triggers the problem - if this is removed, the
code will execute with no problems and give the expected result.

All tables referenced are InnoDB, and are simple tables with two columns (an
auto increment primary key and a unique indexed value column of type
VARCHAR(6)).

How to repeat:
From the command line:

delimiter //
CREATE PROCEDURE name_test (OUT sFragment VARCHAR(6), OUT iRand INT)
BEGIN
DECLARE bContinue INT;
--DECLARE iRand INT;

SELECT 1 INTO bContinue;

WHILE bContinue = 1 DO

SELECT CAST((RAND() * 100)/33 AS UNSIGNED) INTO iRand;

IF iRand = 0 
THEN

SELECT Fragment INTO sFragment FROM namefragmentvowel ORDER BY RAND() LIMIT
1;

ELSE IF RAND = 1 THEN
SELECT Fragment INTO sFragment FROM namefragmentconsonant ORDER BY RAND()
LIMIT
1;

END IF;
END IF;

SELECT 0 INTO bContinue;

END WHILE;
END
//

Then from MySQLCC:

call name_test(@sTest, @iRand);
select @sTest AS frag, @iRand AS rng;


Cheers,


Matt






Re: Question regarding defaults

2004-04-10 Thread Matt W
Hi Boyd,

Can I ask why it really matters? :-)  I would assume the DEFAULT value
is stored at creation time; but the end result would be the same either
way.

BTW, I hate how MySQL's SHOW CREATE TABLE quotes DEFAULT INT-family
values. :-(  It shouldn't do that.


Matt


- Original Message -
From: Boyd E. Hemphill
Sent: Friday, April 09, 2004 9:49 PM
Subject: Question regarding defaults


 Hello:

 I have need to declare a column as type integer then default is at '0'
 (that is a string with a zero in it).

 An example may be:
 Create table foo (
 foo_id  int not null default '0'
 )

 My question centers on the notion of implicit type conversion.  Is the
 server converting the type at the time the ddl (not really too big a
 deal) or is it doing the conversion at run time (i.e. each time a row
is
 inserted in the DB).

 Thanks for your time and expertise!

 Best Regards,
 Boyd E. Hemphill





Re: Fulltext index is not being built with large database

2004-04-10 Thread Matt W
Hi sascha,

How's the space on your datadir partition (or wherever this table is)?
I believe MySQL creates the temp tables during ALTER in the DB
directory, not the tmpdir.

If the space there is OK, have you checked the error log for anything
related?


Matt


- Original Message -
From: sascha mantscheff
Sent: Friday, April 09, 2004 4:21 PM
Subject: Fulltext index is not being built with large database


 I'm trying to build a fulltext index on a table with about 4 million
 entries with 2 varchar and one text field.
 The indexing starts and runs for about 1/2 to 1 hour, then the process
 stops without any error message. And leaves me with no index. I
checked
 for the size in tmp and redirected it to a partition with 50GB space
 (about 15 times as much as the database tables).
 Mysql runs on gentoo-linux 2.4 with 800MB RAM with server version
4.0.17.

 Any clues, hints or tips?
 Thank you.


 sascha mantscheff





Re: backup

2004-04-10 Thread Matt W
Hi Steve,

You might want to look at FLUSH TABLES WITH READ LOCK.  That's a query
to run from mysql, but I'm sure you can get it to work in your shell
script (you need to maintain the MySQL connection while doing the
backup).  I don't know much about that, though.  I think you just run
UNLOCK TABLES when you're finished.


Matt


- Original Message -
From: Steve Sills
Sent: Tuesday, April 06, 2004 8:17 PM
Subject: backup


I want to use rsync to back up my DB server. How do I lock all the tables
for all the DBs to read-only so I can do my backup, then unlock them
again?  It needs to be done from the command line, not the mysql
program.  Anyone have any ideas?  I have looked and couldn't find the
answer I was looking for.  It's running from a shell script, from my
backup machine.  It's currently set up to shut down the server; however, I
don't want to have to do this.  Thanks in advance.

Steve Sills
Platnum Computers, President
http://www.platnum.com
[EMAIL PROTECTED]





RE: LIKE search with different orders

2004-04-10 Thread Matt Chatterley
Hmm.

You might be best off using the FIND_IN_SET function (check the manual at
mysql.com for more information).

In short, if you replace all spaces in your string to search with commas,
you should be able to do something like:

SELECT * FROM xyz WHERE FIND_IN_SET(test_column, "your string here") > 0

Not 100% sure, as I haven't tried quite this approach!

Another, more long winded way might be to explode your string out into a
temporary table and compose a query which will bring back all matching rows
(by multiple joins to the temporary table).

FIND_IN_SET looks like a better alternative!
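sqlite3 has no FIND_IN_SET, but for illustration a Python version with (what I believe are) MySQL's semantics, i.e. the 1-based position of the value in a comma-separated list, or 0 when absent, can be registered as a SQL function and used exactly as suggested above. The table and column names are hypothetical.

```python
# Registering a FIND_IN_SET work-alike as a SQL function with sqlite3,
# then using it the way the reply suggests: spaces in the search string
# become commas, and rows matching any word come back.
import sqlite3

def find_in_set(needle, haystack):
    items = haystack.split(",")
    return items.index(needle) + 1 if needle in items else 0

conn = sqlite3.connect(":memory:")
conn.create_function("FIND_IN_SET", 2, find_in_set)
conn.executescript("""
CREATE TABLE xyz (test_column TEXT);
INSERT INTO xyz VALUES ('john'), ('smith'), ('jones');
""")

search = "john,smith"   # "john smith" with spaces turned into commas
hits = conn.execute(
    "SELECT test_column FROM xyz WHERE FIND_IN_SET(test_column, ?) > 0",
    (search,)).fetchall()
```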


Cheers,

Matt

-Original Message-
From: Tarik ANSARI [mailto:[EMAIL PROTECTED] 
Sent: 10 April 2004 14:51
To: [EMAIL PROTECTED]
Subject: LIKE search with different orders

Hello again,

To follow my previous message, the CONCAT method does works, but now my
problem is to make a search where the order doesn't count : then to find
members whose firstname is john, lastname smith and vice-versa.

I would like to use an operator or a function for this, but I cannot
enumerate all possible combinations (in this case yes, because the query
only has 2 words, but with a query with 6 words it would make 6!
combinations then a very long query !).

Thank you









Replication slaves quitting with binlog error

2004-04-09 Thread Matt Sturtz
Hi all--

We have one master and 12 slaves replicating from it.  The server is 4.0.16
(haven't wanted to take it down to upgrade), the slaves are 4.0.17, all
running on RedHat AS.  Lately, every few hours one of the machines coughs
up this error, and quits replicating:

Could not parse relay log event entry. The possible reasons are: the
master's binary log is corrupted (you can check this by running
'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you
can check this by running 'mysqlbinlog' on the relay log), a network
problem, or a bug in the master's or slave's MySQL code. If you want to
check the master's binary log or slave's relay log, you will be able to
know their names by issuing 'SHOW SLAVE STATUS' on this slave.

It's complaining about a corrupted relay-log, but it's always only one
machine at a time.  When I run 'mysqlbinlog offending relay-log-file' it
gives me the following error:

ERROR: Error in Log_event::read_log_event(): 'read error', data_len: 396,
event_type: 2
ERROR: Could not read entry at offset 60096864 : Error in log format or
read error

The offending log in the current case is _not_ the last one, so the IO
thread is apparently still functional.

The only fix I know is to blow away the data on the slaves and start again
with a fresh snapshot (including master.info).  As far as I know there's
no disk problems (all servers less than a year old, and lots of space
available).

Any thoughts?

-Matt-




RE: Database design question

2004-04-07 Thread Matt Chatterley
I'm not 100% sure as to what you are trying to do, however, the relationship
you describe could, I believe, be modeled as:

Candles (candleid, description, price)
Waxes (waxid, name/description)
Candle_Waxes (candleid, waxid)

Thus one table holds the description and price of each candle, another table
holds the name of each wax, and a third table connects the two - as a candle
can have multiple waxes, the logical way to do this (to me, anyway) is via
this third table - glueing the other two together.

You'll need to be a bit clever when querying, as simply joining all three
together will bring back multiple rows for candles which contain more than
one wax - this could be eliminated by not bringing back the wax details (and
using DISTINCT), or in a number of other ways.

One other way might be to come up with a way to combine all of the wax names
into one field (tricky - can't think how to do this in mysql, off the top of
my head).
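One candidate for combining the wax names into a single field is GROUP_CONCAT; MySQL only gained it in 4.1, so treat this as a version-dependent suggestion rather than a guaranteed solution. A sketch with sqlite3's group_concat and the three-table layout proposed above (sample rows invented):

```python
# One row per candle, with all its wax names collapsed into one field,
# so the multi-row join problem described above disappears.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Candles (candleid INTEGER PRIMARY KEY, description TEXT, price REAL);
CREATE TABLE Waxes   (waxid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Candle_Waxes (candleid INTEGER, waxid INTEGER,
                           PRIMARY KEY (candleid, waxid));
INSERT INTO Candles VALUES (1, 'Pillar', 9.99), (2, 'Votive', 1.99);
INSERT INTO Waxes   VALUES (1, 'paraffin'), (2, 'soy');
INSERT INTO Candle_Waxes VALUES (1, 1), (1, 2), (2, 1);
""")

rows = conn.execute("""
    SELECT c.candleid, c.description, GROUP_CONCAT(w.name) AS waxes
    FROM Candles c
    JOIN Candle_Waxes cw ON cw.candleid = c.candleid
    JOIN Waxes w ON w.waxid = cw.waxid
    GROUP BY c.candleid, c.description
    ORDER BY c.candleid""").fetchall()
```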

What precisely are you trying to achieve, though - this might be completely
wrong for you!


Thanks,

Matt

-Original Message-
From: JOHN MEYER [mailto:[EMAIL PROTECTED] 
Sent: 07 April 2004 15:39
To: [EMAIL PROTECTED]
Subject: Database design question

Hi,
I'm writing a database for an online candle store.  Here's the situation.
This store sells all sorts of items including candles.  Now the gist is that
some of the candles can be made in different types of waxes and some only
have one wax.  My question is how do I resolve this when I write up the
order and write up the line items.  This almost seems like it is going to be
some sort of a three way join or something.






Re: Why can't I use an AS value in the WHERE clause.

2004-04-05 Thread Matt W
Hi,

This is what HAVING is for.  :-)
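
Applied to the query quoted below, that would look something like this - a
sketch only, since MySQL lets HAVING refer to select-list aliases even
without a GROUP BY (the < 600 comparison is an assumption; the operator was
lost from the quoted message):

```sql
-- Filter on the computed alias via HAVING instead of WHERE:
SELECT *, IF((UNIX_TIMESTAMP() - UNIX_TIMESTAMP(last_seen)) < 600, 1, 0) AS active
FROM wifi_table
HAVING active = 1;
```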


Matt


- Original Message -
From: Joe Rhett
Sent: Monday, April 05, 2004 8:59 PM
Subject: Re: Why can't I use an AS value in the WHERE clause.


 On Mon, Apr 05, 2004 at 08:03:33PM -0500, Paul DuBois wrote:
  At 17:29 -0700 4/5/04, Daevid Vincent wrote:
  I'm curious when will I be able to do something like this:
  
  SELECT *, IF(( (unix_timestamp()-unix_timestamp(last_seen)) < 600),1,0) as
  active FROM wifi_table WHERE active = 1;
 
  I think you'll never be able to do it.
 
  The stuff after the SELECT is calculated based on the rows selected
  by the WHERE.  The WHERE therefore cannot be based on the stuff
after
  the SELECT.

 So why not the reverse?  Allow aliasing in the WHERE clause, that we can
 use in the select clause.  Probably not ANSI, but MySQL isn't shy about that.





RE: can't call a stored proc from another stored proc?

2004-04-03 Thread Matt Chatterley
Morning :)

1. You sure can, e.g:

CREATE PROCEDURE `user_authenticate`(IN sUserName VARCHAR(25),
    IN sPassword CHAR(32), OUT sUserCookie CHAR(32))
BEGIN

  DECLARE iUserID INT;
  DECLARE iLogID INT;

  SELECT MD5(CONCAT(UserID, NOW())) INTO sUserCookie FROM users
   WHERE UserName = sUserName AND Password = sPassword;

  IF LENGTH(sUserCookie) = 32 THEN
    UPDATE users SET Cookie = sUserCookie, LoggedOnAt = NOW()
     WHERE UserName = sUserName AND Password = sPassword;
    SELECT UserID INTO iUserID FROM users WHERE Cookie = sUserCookie;
    CALL processlog_write(NULL, 'user_authenticate',
      CONCAT('User authenticated successfully (', sUserName, ').'), iUserID, iLogID);
  ELSE
    CALL processlog_write(NULL, 'user_authenticate',
      CONCAT('User authentication failed (', sUserName, ')'), 0, iLogID);
  END IF;

END

2. I believe this is planned for the future - I read something about it in
the documentation not long ago.

Sounds like an oddness either with 5.0.1-alpha, or with your build - I am
using the pre-compiled binary version of 5.0.0a-alpha on Win2k, and the
above procedure executes with no problems at all.

As a side note, does anyone know if it is now confirmed that views will be
in 5.1 rather than 5.0? :) I know, I know.. I keep harping on about views...


Regards,


Matt.


-Original Message-
From: Michael Pheasant [mailto:[EMAIL PROTECTED] 
Sent: 03 April 2004 11:57
To: [EMAIL PROTECTED]
Subject: can't call a stored proc from another stored proc?

Hi,

1) Can a stored procedure call another stored procedure?
  Ie, can you do 'call someproc()' from within a stored procedure?

2) Also, will a function ever be able to issue a SELECT query?

I am using mysql-5.0.1-alpha (built from source), on WinXP and Win2k.
The mysql daemon crashes without an error message when I try (1)

Cheers,

Mike 






RE: two tables with same field names into one table

2004-04-03 Thread Matt Chatterley
To select the contents of both into one table, you most likely want to use
the 'UNION' operator:

SELECT * FROM desktops
UNION
SELECT * FROM laptops

If you create the computers table beforehand (you can see how you would
create either of the others with SHOW CREATE TABLE tablename), then you can
just do one INSERT into the new table, using a select similar to the one above.


Thanks,

Matt


-Original Message-
From: Brad Tilley [mailto:[EMAIL PROTECTED] 
Sent: 03 April 2004 21:00
To: [EMAIL PROTECTED]
Subject: two tables with same field names into one table

Hello,

I am a mysql newbie. Recently, I've been given the task of joining two
tables within the same DB into one table. Currently, the tables are named
'desktops' and 'laptops'... ultimately, I would like one table named
'computers'. Both tables have the exact same fields... the fields even have
the same names. I tried this:

create table computers 
select * from desktops, laptops where
desktops.field_1 = laptops.field_1
...
...
...

But I got an error about duplicate field names. Any suggestions on how to
do this?

Thanks,
Brad





5.0 InnoDB problem - ADD CONSTRAINT

2004-04-01 Thread Matt Chatterley
Hi folks. I have a problem creating a foreign key constraint which I just
don't seem to be able to figure out.

There are three tables, for which the 'show create' output is given below:

CREATE TABLE `users` (
  `UserID` int(11) unsigned NOT NULL auto_increment,
  `ContactID` int(10) unsigned NOT NULL default '0',
  `UserName` varchar(25) NOT NULL default '',
  `Password` varchar(32) NOT NULL default '',
  `LoggedOnAt` datetime default '0000-00-00 00:00:00',
  `Cookie` varchar(100) NOT NULL default '',
  PRIMARY KEY  (`UserID`),
  UNIQUE KEY `UserName` (`UserName`),
  KEY `ContactID` (`ContactID`),
  KEY `Cookie` (`Cookie`),
  CONSTRAINT `0_34` FOREIGN KEY (`ContactID`) REFERENCES `contact`
(`ContactID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

CREATE TABLE `usergroups` (
  `UserGroupID` int(10) unsigned NOT NULL auto_increment,
  `Code` varchar(20) NOT NULL default '',
  `Description` varchar(100) NOT NULL default '',
  PRIMARY KEY  (`UserGroupID`),
  KEY `CodeLookup` (`Code`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

CREATE TABLE `user_usergroups` (
  `UserGroupID` int(11) unsigned NOT NULL default '0',
  `UserID` int(11) unsigned NOT NULL default '0',
  PRIMARY KEY  (`UserID`,`UserGroupID`),
  CONSTRAINT `0_75` FOREIGN KEY (`UserID`) REFERENCES `users` (`UserID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1


As you can see, there is one FK on user_usergroups, which references the
users table (UserID). However, when I do this, to create a similar
constraint on UserGroupID to the usergroups table:

ALTER TABLE user_usergroups ADD CONSTRAINT FOREIGN KEY
`usergroupid_ref_usergroups`  (UserGroupID) REFERENCES usergroups
(UserGroupID);

I get this error:

[Dorfl] ERROR 1005: Can't create table '.\space\#sql-340_b.frm' (errno: 150)

The online documentation indicates that this is because my FK constraint is
'badly formed' (I looked up innodb error codes 1005 and 150 in the manual).

I have tried recreating the usergroups table with usergroupid as an int(11)
(I am unsure as to why it is length 10, rather than 11, to be honest - I
created the tables via MySQLCC, and other similar columns are length 11),
but this makes no difference. Both columns are unsigned and NOT NULL, and
although the documentation states that both parent and child columns must be
indexed - they are, because they are both a part (or the whole) of the
primary keys.

It cannot be because user_usergroups.UserGroupID is part of a combined
primary key - because UserID is too! This leaves the only reason I can
envisage as: It is because UserGroupID is not the FIRST column referenced in
a combined Primary Key - meaning I would have to create a secondary index on
it. Is this the case, or have I missed something obvious?
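
For what it's worth, a sketch of that secondary-index workaround (the index
and constraint names below are made up):

```sql
-- Give the child column its own index, then retry the foreign key:
ALTER TABLE user_usergroups ADD INDEX idx_usergroupid (UserGroupID);

ALTER TABLE user_usergroups ADD CONSTRAINT fk_uug_usergroups
  FOREIGN KEY (UserGroupID) REFERENCES usergroups (UserGroupID);
```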


Before I forget, I am using (please forgive me), 5.0a-alpha on Windows 2K.


Thanks,


Matt






Re: Is this possible?

2004-03-31 Thread Matt W
Hi,

GROUP_CONCAT() is in 4.1. :-)
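
Against the tables quoted below, that would look something like this - a
sketch, with the '/' separator matching the output you asked for:

```sql
-- One row per item, attributes combined (MySQL 4.1+):
SELECT i.name,
       GROUP_CONCAT(a.attribute SEPARATOR '/') AS attribute
FROM test_items i
JOIN test_attributes a ON a.item_rec_num = i.rec_num
GROUP BY i.name;
```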


Matt


- Original Message - 
From: m.pheasant
Sent: Wednesday, March 31, 2004 5:26 PM
Subject: RE: Is this possible?


 You would need an aggregate concat() function. I think it's in 5.0
 m
 
 -Original Message-
 From: Chris Boget [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, April 01, 2004 7:08 AM
 To: MySQL
 Subject: Is this possible?
 
 I have 2 tables (looks best in fixed width font):
 
 test_items
 +---+--+
 | name  | rec_num  |
 +---+--+
 | Book  | 1|
 | Game  | 2|
 +---+--+
 
 test_attributes
 +---++
 | item_rec_num  | attribute  |
 +---++
 | 1 | Thick  |
 | 1 | Tall   |
 | 1 | Green  |
 | 2 | Narrow |
 | 2 | Yellow |
 +---++
 
 How can I query the above tables so that if I select name and
 attribute, the result set comes out looking like this:
 
 +---+---+
 | name  | attribute |
 +---+---+
 | Book  | Thick/Tall/Green  |
 | Game  | Narrow/Yellow |
 +---+---+
 
 Because every join query I've been able to think of always
 returns the result set as follows:
 
 +--+---+
 | name | attribute |
 +--+---+
 | Book | Thick |
 | Book | Tall  |
 | Book | Green |
 | Game | Narrow|
 | Game | Yellow|
 +--+---+
 
 So my question, is it even possible to do what I'd like to do?
 
 thnx,
 Chris





RE: Getting around 2 queries

2004-03-30 Thread Matt Chatterley
One option would be to 'union' the two queries (assuming the columns are the
same type and length), allowing you to run one query string:

Select serial from blacklist where serial = x
Union
Select serial from seriallist where serial = x

This would return 1 or 2 rows (or none), depending on whether rows are found
in one table or both. You wouldn't know which table though (but from your
message, I guess that is unimportant).
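
If it ever does matter which table matched, a constant column can tag each
row - a sketch:

```sql
-- 0, 1 or 2 rows; the src column says where each match came from:
SELECT 'blacklist' AS src, serial FROM blacklist WHERE serial = '23'
UNION ALL
SELECT 'seriallist', serial FROM seriallist WHERE serial = '23';
```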

I suppose there are a number of things you could do, really...


Regards,

Matt

-Original Message-
From: Scott Haneda [mailto:[EMAIL PROTECTED] 
Sent: 30 March 2004 07:39
To: MySql
Subject: Getting around 2 queries

I currently run this, which is 2 hits to the database.

Select serial from blacklist  where serial = '23'
Select serial from seriallist where serial = '23'

I only desire to know if either of the 2 has at least one row, I am only
testing for existence here.  Is there some way I can get around 2 queries
and do this as one?

-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com   Fax: 313.557.5052
[EMAIL PROTECTED]Novato, CA U.S.A.




