Re: Correct way to use innodb_file_per_table?

2005-03-03 Thread Heikki Tuuri
Mike,
- Original Message - 
From: "Mike Debnam" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Friday, March 04, 2005 6:15 AM
Subject: Correct way to use innodb_file_per_table?


I've added a second drive to my database machine and want to split my
database tables between the two drives to balance the load and improve
performance. I know I'll need to drop and restore the db to get MySQL
to create the tables in their own files. What's the correct way to use
innodb_file_per_table? If my datadir is /var/db/mysql , should I let
MySQL create all the tables in /var/db/mysql then move the files and
create symlinks in /var/db/mysql?
The best way is to symlink whole database directories under the 
datadir. Then an ALTER TABLE also keeps the new .ibd file on the drive you 
intended it to be on. If you symlink a single .ibd file, an ALTER will 
create the new .ibd file unsymlinked.

As an example, let us say you have three databases: 'database1', 
'database2', and 'test'. You may shut down mysqld, copy all the contents of 
/datadir/database2 to drive 2, and then symlink the directory 
/datadir/database2 to drive 2.
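The move-and-symlink layout above can be sketched in a few lines. The following is a minimal illustration only (Python, with temporary directories standing in for the datadir and the second drive; `relocate_database` is a hypothetical helper name, not anything MySQL ships): it shows just the filesystem operations. mysqld must of course be shut down before doing this for real.

```python
import os
import shutil
import tempfile

def relocate_database(datadir, new_drive, dbname):
    """Move a whole database directory to another drive and leave a
    symlink behind in the datadir, so ALTER TABLE keeps new .ibd files
    on the intended drive."""
    src = os.path.join(datadir, dbname)
    dst = os.path.join(new_drive, dbname)
    shutil.move(src, dst)   # move /datadir/dbname to the second drive
    os.symlink(dst, src)    # symlink /datadir/dbname -> /drive2/dbname

# Demonstration with temp directories standing in for the two drives.
datadir = tempfile.mkdtemp()
drive2 = tempfile.mkdtemp()
os.makedirs(os.path.join(datadir, "database2"))
open(os.path.join(datadir, "database2", "t1.ibd"), "w").close()

relocate_database(datadir, drive2, "database2")

print(os.path.islink(os.path.join(datadir, "database2")))           # True
print(os.path.exists(os.path.join(drive2, "database2", "t1.ibd")))  # True
```

After this, the table file is still reachable through the old path, but physically lives on the second drive.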

I see a note in the InnoDB docs that says you can't move the *.ibd
files around, what does that refer to?
You cannot move .ibd files between different MySQL instances 
(installations). You CAN move .MYI and .MYD files.

.ibd files cannot live without the 'system tablespace' in ibdata1. That is 
why you cannot move .ibd files between different MySQL instances.

System is FreeBSD 5.3, MySQL 4.10, current ibdata file is about 25 gb.
Thanks for any pointers.
Mike
Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables
http://www.innodb.com/order.php

--
Save the Pacific Northwest Tree Octopus! |
http://zapatopi.net/treeoctopus.html |
--
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Connect to MYSQL server from Wi-Fi enabled Windows CE device

2005-03-03 Thread Joshua J. Kugler
Please don't respond directly to me if you have further questions; you 
are more likely to get good help from the list.

It sounds like a typical database connection.  What exactly is your 
problem?  What errors are you getting?  All you have said so far is that 
you need help, and how you are going about it.  If you can get a 
TCP/IP connection on the mobile device, you should be good to go.

j- k-

On Tuesday 01 March 2005 22:44, you said something like:
> Thank you for your Reply,
>
> Yes I have a TCP/IP connection on the WinCE device.
>
> My VB application on the ipaq gets its data from a DB on the device.
> I use ADOCE to connect to this DB, this DB on the device synchronizes
> with a MYSQL DB, and currently I have a desktop application that
> handles the synchronization. This means I physically plug in to a
> desktop PC with a USB cable and then sync the 2 DBs. So what I want
> to do is move the sync app to the CE Device and connect to MYSQL
> directly from the mobile device.
>
> Hope this gives you a better idea of the problem.
>
> Regards
> Hough
>
> >>> "Joshua J. Kugler" <[EMAIL PROTECTED]> 03/02/05 9:07 AM >>>
>
> On Wednesday 23 February 2005 03:47, Hough Van Wyk said something
>
> like:
> > I am developing an embedded VB application running on an HP iPAQ
> > running Windows CE 2003. This app has to connect to a MySQL DB over
> > a wireless network. I have surfed the internet for hours with no
> > luck. Can anyone please help me with this problem.
>
> What exactly is your problem?  Do you have a TCP/IP connection on the
> WinCE device?  Will the MySQL ODBC drivers run on the device (or does
> your programming environment have its own drivers, such as the
> light-weight drivers for VB)?  What have you tried?  If it's a
> standard Ethernet connection, it is no different than if you were
> plugged into a wall.
>
> What exactly are you looking for in your search?
>
> j- k-

-- 
Joshua J. Kugler -- Fairbanks, Alaska -- ICQ#:13706295
Every knee shall bow, and every tongue confess, in heaven, on earth, and 
under the earth, that Jesus Christ is LORD -- Count on it!




Speed of Inserts into MyIsam tables from mysqlbinlog

2005-03-03 Thread Thomas Lekai
I was running a table in InnoDB, and the table had about 6 indexes, none
of which seemed to be affected when I ran mysqlbinlog against the tables
in order to apply bin-logs from production against a test system.  I
managed to process upwards of 2300 queries per second by giving about
1.8 GB of memory to the innodb_buffer_pool.

Now that I am dealing with MyISAM tables, I have shifted my memory towards
the key_buffer, and am considering using bulk_insert_buffer_size.  I am
not sure what a good value for bulk_insert_buffer_size is.  I have
also removed all indexes that are not needed from the table while it is
being loaded with data.

My question: is there anything else that I can do to increase the speed
of the inserts into this MyISAM table as the bin-logs are played back on
the system?  This is also used to help gauge performance, so this method
of inserting data is important to the process as a whole.

Regards,

Thomas.


Correct way to use innodb_file_per_table?

2005-03-03 Thread Mike Debnam
I've added a second drive to my database machine and want to split my
database tables between the two drives to balance the load and improve
performance. I know I'll need to drop and restore the db to get MySQL
to create the tables in their own files. What's the correct way to use
innodb_file_per_table? If my datadir is /var/db/mysql , should I let
MySQL create all the tables in /var/db/mysql then move the files and
create symlinks in /var/db/mysql?

I see a note in the InnoDB docs that says you can't move the *.ibd
files around, what does that refer to?

System is FreeBSD 5.3, MySQL 4.10; the current ibdata file is about 25 GB.

Thanks for any pointers.

Mike




Re: problem with fulltext search

2005-03-03 Thread Michael Stassen
"another" is a stop word.  The default list of stopwords is in 
myisam/ft_static.c in a source distribution.  See the manual for more 
.

Michael
Dawn O'Brien wrote:
I'm trying to use the following query ::
SELECT movie_id, title, genre, description,
       TRIM(TRAILING ',' FROM actors), director, disclaimer, rating, year,
       (MATCH (title, description) AGAINST ('another')) AS score
FROM movie_details
WHERE (MATCH (title, description) AGAINST ('another'))
  AND service = 'tmn' AND last_run > NOW()
ORDER BY score DESC, title ASC
LIMIT 200
The 'movie_details' table has 2 entries with the word 'another' in the 
title that have a last_run date greater than the current date, but the 
query is not returning any results.

I modified the query to ::
SELECT movie_id, title, genre, description,
       TRIM(TRAILING ',' FROM actors), director, disclaimer, rating, year,
       (MATCH (title, description) AGAINST ('another')) AS score
FROM movie_details
WHERE (title LIKE '%another%' OR description LIKE '%another%')
  AND service = 'tmn' AND last_run > NOW()
ORDER BY score DESC, title ASC
LIMIT 200
With this statement I'm getting the desired results, but it's obviously 
not efficient and takes approximately 35x longer to run than the first 
query.

Any thoughts as to why the first query isn't returning the proper results?
Thanks for the help in advance ...
:: dawn ::



Re: ERROR 1045: Access denied for user: 'root@localhost' (Using password: NO)

2005-03-03 Thread sam wun
Deniss Hennesy wrote:
Hi, 

I have had to change mysql root passwd  
My procedure is just below

shell> mysql -u root -p
mysql> UPDATE mysql.user SET Password=PASSWORD('newpwd')  WHERE User='root';
mysql> flush privileges;
 

You may need to execute the command "mysqladmin -u root password 
yourpassword" to set the password. I'm not sure; I'm not an expert 
DBA.

Sam
and when I checked this password from another console, I saw MySQL didn't
accept the new passwd; neither did it accept the old password, or no password at all.



Re: # Seconds between two datetime fields

2005-03-03 Thread Roger Baklund
Brian Erickson wrote:
We have a table that has two datetime fields (start_time and
end_time). We'd like to be able to calculate the number of seconds
between these two fields in a query. However, a simple "end_time -
start_time" does not yield a correct result.
SELECT start_time, end_time, end_time - start_time FROM
mailings_sendstats order by start_time desc;
[...]
So my question is, how can I fix this? I know there are several
date/time functions that I could probably use, but they were not
introduced until version 4.1 and I am stuck with version 3.23. 
There are lots of date/time functions available in version 3.23, for 
example unix_timestamp():

SELECT start_time, end_time,
  unix_timestamp(end_time) - unix_timestamp(start_time) AS seconds
FROM mailings_sendstats order by start_time desc;
--
Roger


Re: # Seconds between two datetime fields

2005-03-03 Thread Patrick
The return you are getting is correct for the format you are using. A
90-second difference is in fact 1 minute 30 seconds (130).
To get the time difference in seconds, convert the datetime or timestamp to a
Julian date or Unix time and then process.

  SELECT start_time, end_time, UNIX_TIMESTAMP(end_time) -
UNIX_TIMESTAMP(start_time) FROM  mailings_sendstats order by start_time
desc;

I hope this helps...

Pat...

CocoNet Corporation
SW Florida's 1st ISP

- Original Message - 
From: "Brian Erickson" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, March 03, 2005 7:23 PM
Subject: # Seconds between two datetime fields


> We have a table that has two datetime fields (start_time and
> end_time). We'd like to be able to calculate the number of seconds
> between these two fields in a query. However, a simple "end_time -
> start_time" does not yield a correct result.
>
> SELECT start_time, end_time, end_time - start_time FROM
> mailings_sendstats order by start_time desc;
> +---------------------+---------------------+-----------------------+
> | start_time          | end_time            | end_time - start_time |
> +---------------------+---------------------+-----------------------+
> | 2005-03-02 19:08:00 | 2005-03-02 19:08:53 |                    53 |
> | 2005-03-02 19:06:00 | 2005-03-02 19:07:31 |                   131 |
> | 2005-03-02 19:04:00 | 2005-03-02 19:04:55 |                    55 |
> | 2005-03-02 19:02:00 | 2005-03-02 19:02:53 |                    53 |
> | 2005-03-02 19:00:00 | 2005-03-02 19:01:10 |                   110 |
> | 2005-03-02 18:57:59 | 2005-03-02 18:59:39 |                   180 |
> | 2005-03-02 18:56:00 | 2005-03-02 18:57:10 |                   110 |
> | 2005-03-02 18:54:00 | 2005-03-02 18:55:20 |                   120 |
> | 2005-03-02 18:52:00 | 2005-03-02 18:53:22 |                   122 |
> | 2005-03-02 18:50:00 | 2005-03-02 18:51:26 |                   126 |
> | 2005-03-02 18:48:00 | 2005-03-02 18:48:58 |                    58 |
> | 2005-03-02 18:46:00 | 2005-03-02 18:47:28 |                   128 |
> etc
>
> As you can see, if the time difference is less than 1 minute, a
> correct result is returned. If the difference is 1 minute and 30
> seconds, '130' is returned instead of 90.
>
> I tried playing with the end_time - start_time conversion using
> different calculations. What I came up with is below.
>
> mysql> SELECT start_time, end_time,
> ->  end_time - start_time as cur,
> ->  FLOOR((end_time - start_time) / 100) as num,
> ->  (end_time - start_time) % 100 as mod,
> ->  (FLOOR((end_time - start_time) / 100) * 60) + (end_time -
> start_time) % 100 AS seconds
> -> FROM mailings_sendstats
> -> ORDER BY id desc;
>
> +---------------------+---------------------+-----+------+------+---------+
> | start_time          | end_time            | cur | num  | mod  | seconds |
> +---------------------+---------------------+-----+------+------+---------+
> | 2005-03-02 19:10:00 | 2005-03-02 19:11:07 | 107 |    1 |    7 |      67 |
> | 2005-03-02 19:08:00 | 2005-03-02 19:08:53 |  53 |    0 |   53 |      53 |
> | 2005-03-02 19:06:00 | 2005-03-02 19:07:31 | 131 |    1 |   31 |      91 |
> | 2005-03-02 19:04:00 | 2005-03-02 19:04:55 |  55 |    0 |   55 |      55 |
> | 2005-03-02 19:02:00 | 2005-03-02 19:02:53 |  53 |    0 |   53 |      53 |
> | 2005-03-02 19:00:00 | 2005-03-02 19:01:10 | 110 |    1 |   10 |      70 |
> | 2005-03-02 18:57:59 | 2005-03-02 18:59:39 | 180 |    1 |   80 |     140 |
> | 2005-03-02 18:56:00 | 2005-03-02 18:57:10 | 110 |    1 |   10 |      70 |
> | 2005-03-02 18:54:00 | 2005-03-02 18:55:20 | 120 |    1 |   20 |      80 |
> | 2005-03-02 18:52:00 | 2005-03-02 18:53:22 | 122 |    1 |   22 |      82 |
> | 2005-03-02 18:50:00 | 2005-03-02 18:51:26 | 126 |    1 |   26 |      86 |
> | 2005-03-02 18:48:00 | 2005-03-02 18:48:58 |  58 |    0 |   58 |      58 |
> | 2005-03-02 18:46:00 | 2005-03-02 18:47:28 | 128 |    1 |   28 |      88 |
> etc
>
> First, I calculated the number of minutes (num), then seconds (mod),
> then total seconds (seconds). If you scroll through the results,
> you'll see most of them are accurate. However, when the beginning time
> is just before a new minute, the entire calculation is thrown off.
> Examples:
>
> | 2005-03-02 18:57:59 | 2005-03-02 18:59:39 | 180 |    1 |   80 |     140 |
> | 2005-03-02 18:37:59 | 2005-03-02 18:38:53 |  94 |    0 |   94 |      94 |
>
> So my question is, how can I fix this? I know there are several
> date/time functions that I could probably use, but they were not
> introduced until version 4.1 and I am stuck with version 3.23. We also
> modified the table to use timestamp(14) fields instead of datetime
> fields, but the same result occurs.
>
> Ultimately:
>
> a) Can someone think of a way to modify the query above so that it
produces
> *correct* results every time (maybe I'm missing something simple after
> working on this for so long)
>
> b) We're using this table to track execution time of PHP Cron sc

# Seconds between two datetime fields

2005-03-03 Thread Brian Erickson
We have a table that has two datetime fields (start_time and
end_time). We'd like to be able to calculate the number of seconds
between these two fields in a query. However, a simple "end_time -
start_time" does not yield a correct result.

SELECT start_time, end_time, end_time - start_time FROM
mailings_sendstats order by start_time desc;
+---------------------+---------------------+-----------------------+
| start_time          | end_time            | end_time - start_time |
+---------------------+---------------------+-----------------------+
| 2005-03-02 19:08:00 | 2005-03-02 19:08:53 |                    53 |
| 2005-03-02 19:06:00 | 2005-03-02 19:07:31 |                   131 |
| 2005-03-02 19:04:00 | 2005-03-02 19:04:55 |                    55 |
| 2005-03-02 19:02:00 | 2005-03-02 19:02:53 |                    53 |
| 2005-03-02 19:00:00 | 2005-03-02 19:01:10 |                   110 |
| 2005-03-02 18:57:59 | 2005-03-02 18:59:39 |                   180 |
| 2005-03-02 18:56:00 | 2005-03-02 18:57:10 |                   110 |
| 2005-03-02 18:54:00 | 2005-03-02 18:55:20 |                   120 |
| 2005-03-02 18:52:00 | 2005-03-02 18:53:22 |                   122 |
| 2005-03-02 18:50:00 | 2005-03-02 18:51:26 |                   126 |
| 2005-03-02 18:48:00 | 2005-03-02 18:48:58 |                    58 |
| 2005-03-02 18:46:00 | 2005-03-02 18:47:28 |                   128 |
etc

As you can see, if the time difference is less than 1 minute, a
correct result is returned. If the difference is 1 minute and 30
seconds, '130' is returned instead of 90.
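The pitfall described above is easy to reproduce outside SQL. The following is an illustrative sketch (Python used purely for demonstration): subtracting the HHMMSS-style numeric form of a DATETIME, which is what "end_time - start_time" effectively does here, yields 130, while subtracting actual elapsed seconds (what UNIX_TIMESTAMP(end_time) - UNIX_TIMESTAMP(start_time) computes server-side) yields the true 90.

```python
from datetime import datetime

start = datetime(2005, 3, 2, 19, 6, 0)
end = datetime(2005, 3, 2, 19, 7, 30)   # 1 minute 30 seconds later

# Naive subtraction of the HHMMSS numeric forms: 190730 - 190600 = 130.
# Wrong, because minutes carry in base 60, not base 100.
naive = int(end.strftime("%H%M%S")) - int(start.strftime("%H%M%S"))
print(naive)  # 130

# Elapsed seconds, equivalent to UNIX_TIMESTAMP(end) - UNIX_TIMESTAMP(start):
correct = int((end - start).total_seconds())
print(correct)  # 90
```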

I tried playing with the end_time - start_time conversion using
different calculations. What I came up with is below.

mysql> SELECT start_time, end_time,
->  end_time - start_time as cur,
->  FLOOR((end_time - start_time) / 100) as num,
->  (end_time - start_time) % 100 as mod,
->  (FLOOR((end_time - start_time) / 100) * 60) + (end_time -
start_time) % 100 AS seconds
-> FROM mailings_sendstats
-> ORDER BY id desc;
+---------------------+---------------------+-----+------+------+---------+
| start_time          | end_time            | cur | num  | mod  | seconds |
+---------------------+---------------------+-----+------+------+---------+
| 2005-03-02 19:10:00 | 2005-03-02 19:11:07 | 107 |    1 |    7 |      67 |
| 2005-03-02 19:08:00 | 2005-03-02 19:08:53 |  53 |    0 |   53 |      53 |
| 2005-03-02 19:06:00 | 2005-03-02 19:07:31 | 131 |    1 |   31 |      91 |
| 2005-03-02 19:04:00 | 2005-03-02 19:04:55 |  55 |    0 |   55 |      55 |
| 2005-03-02 19:02:00 | 2005-03-02 19:02:53 |  53 |    0 |   53 |      53 |
| 2005-03-02 19:00:00 | 2005-03-02 19:01:10 | 110 |    1 |   10 |      70 |
| 2005-03-02 18:57:59 | 2005-03-02 18:59:39 | 180 |    1 |   80 |     140 |
| 2005-03-02 18:56:00 | 2005-03-02 18:57:10 | 110 |    1 |   10 |      70 |
| 2005-03-02 18:54:00 | 2005-03-02 18:55:20 | 120 |    1 |   20 |      80 |
| 2005-03-02 18:52:00 | 2005-03-02 18:53:22 | 122 |    1 |   22 |      82 |
| 2005-03-02 18:50:00 | 2005-03-02 18:51:26 | 126 |    1 |   26 |      86 |
| 2005-03-02 18:48:00 | 2005-03-02 18:48:58 |  58 |    0 |   58 |      58 |
| 2005-03-02 18:46:00 | 2005-03-02 18:47:28 | 128 |    1 |   28 |      88 |
etc

First, I calculated the number of minutes (num), then seconds (mod),
then total seconds (seconds). If you scroll through the results,
you'll see most of them are accurate. However, when the beginning time
is just before a new minute, the entire calculation is thrown off.
Examples:

| 2005-03-02 18:57:59 | 2005-03-02 18:59:39 | 180 |    1 |   80 |     140 |
| 2005-03-02 18:37:59 | 2005-03-02 18:38:53 |  94 |    0 |   94 |      94 |
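The failure mode of the FLOOR/modulo workaround can be checked the same way (an illustrative sketch in Python, using the 18:57:59 row above): when the start time's seconds exceed the end time's, the base-100 "mod" column overflows 59 and the reconstruction is off.

```python
from datetime import datetime

start = datetime(2005, 3, 2, 18, 57, 59)
end = datetime(2005, 3, 2, 18, 59, 39)

# Raw HHMMSS subtraction, as the server computes it: 185939 - 185759 = 180.
cur = int(end.strftime("%H%M%S")) - int(start.strftime("%H%M%S"))
num, mod = cur // 100, cur % 100          # "minutes" and "seconds" parts
workaround = num * 60 + mod               # the proposed reconstruction

true_seconds = int((end - start).total_seconds())

print(cur)           # 180
print(workaround)    # 140 -- mod is 80, which is not a valid seconds value
print(true_seconds)  # 100 -- the actual elapsed time (1 min 40 sec)
```

The "mod" part being 80 (greater than 59) is exactly the sign that the base-100 arithmetic has broken down, which is why converting through UNIX_TIMESTAMP() is the robust fix.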

So my question is, how can I fix this? I know there are several
date/time functions that I could probably use, but they were not
introduced until version 4.1 and I am stuck with version 3.23. We also
modified the table to use timestamp(14) fields instead of datetime
fields, but the same result occurs.

Ultimately:

a) Can someone think of a way to modify the query above so that it produces
*correct* results every time (maybe I'm missing something simple after
working on this for so long)

b) We're using this table to track execution time of PHP Cron scripts.
We may be approaching this entirely the wrong way. If someone has
other ideas, please let me know.

Thank you very much!




need to know if mysql can handle this data

2005-03-03 Thread Donald Frederick
hi, i'm trying to choose a db, and looking at mysql.
here's the problem.
we're looking at storing large blob-type objects, up
to 30,000 chars per field, and possibly a table of
1000 records by 300 fields per record.
can i put that all in one table?  i have mac os x, so
i'm not worried about filesize constraints.
my other question is: if i reverse the layout, i.e.
store the data vertically with one char per field (i know,
tons of wasted overhead) with 30,000 records, can
mysql realistically and efficiently deal with a switch
of how the data is stored (i.e. records being in
columns instead of rows), and can it efficiently deal
with 1000 tables?








Re: access two like tables with one query

2005-03-03 Thread SGreen
Rich Allen <[EMAIL PROTECTED]> wrote on 03/03/2005 04:28:07 PM:

> iH
> 
> i have a database that i only have READ access to (i am not the 
> creator). there are tables with stats data that are created each day; 
> ie "data3_1_2005" and 'data3_2_2005". each table has the same layout.
> 
> create table data3_1_2005 (
> port char(8),
> tmval int,
> val int
> )
> 
> how can i create a single query to get all the rows related to port 
> from both files without creating a merge table?
> 
> 
> thanks
> - hcir
> 

That's a UNION query...

SELECT * 
FROM data3_1_2005
UNION
SELECT *
FROM data3_2_2005;

That will dump both tables into one long list. If you need only certain 
rows, make that a condition for each select like this...

SELECT * 
FROM data3_1_2005
WHERE port = 'ABCDE'
UNION
SELECT *
FROM data3_2_2005
WHERE port = 'ABCDE';

Make sense?

Many read-only users could still have rights to create temporary tables. 
Have you tried this...

CREATE TEMPORARY TABLE tmpPorts
SELECT * 
FROM data3_1_2005
UNION
SELECT *
FROM data3_2_2005;

Then you could use tmpPorts just like a MERGE table. You can either "DROP 
TABLE tmpPorts;" to make it go away, or it will be dropped for you when 
your connection disconnects. Personally, I prefer the former so that 
I don't leave a mess behind when I leave a server.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine






access two like tables with one query

2005-03-03 Thread Rich Allen
iH
i have a database that i only have READ access to (i am not the 
creator). there are tables with stats data that are created each day; 
ie "data3_1_2005" and 'data3_2_2005". each table has the same layout.

create table data3_1_2005 (
port char(8),
tmval int,
val int
)
how can i create a single query to get all the rows related to port 
from both files without creating a merge table?

thanks
- hcir


Re: mysql 4.0.18 crashing on startup

2005-03-03 Thread Heikki Tuuri
Baba,
An undo log seems to be corrupt. He should restart mysqld with
innodb_force_recovery=4
set, dump all tables, and recreate the whole InnoDB installation.
Did the .err log contain any hint what could be behind the corruption?
Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables
http://www.innodb.com/order.php

- Original Message - 
From: "Baba Buehler" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Wednesday, March 02, 2005 6:32 PM
Subject: mysql 4.0.18 crashing on startup


I've got a customer with a mysql that is crashing on startup.  MySQL is
4.0.18 with InnoDB tables on a Linux 2.4.26 system.  MySQL binaries are
Linux x86 glibc static gcc from mysql.com
Can anyone divine what might have happened to cause this?
thanks,
baba

resolve_stack_dump gives:
0x8071f44 handle_segfault + 420
0x82a0e38 pthread_sighandler + 184
0x818cfeb trx_undo_rec_get_partial_row + 443
0x81302af row_purge_parse_undo_rec + 863
0x813033f row_purge + 127
0x81304d7 row_purge_step + 23
0x8115aa3 que_run_threads + 755
0x8189520 trx_purge + 2304
0x811b8dd srv_master_thread + 1389
0x829e5ec pthread_start_thread + 220
0x82c7dea thread_start + 4

the .err log shows this, ever repeating:
050222 10:49:57  mysqld started
050222 10:49:57  InnoDB: Started
/usr/sbin/mysqld: ready for connections.
Version: '4.0.18-standard'  socket: '/tmp/mysql4.sock'  port: 3306
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, something is
definitely wrong and this may fail.

key_buffer_size=1048576
read_buffer_size=4190208
max_used_connections=0
max_connections=35
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections
= 287603 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd=(nil)
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xbfebf5b8, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x8071f44
0x82a0e38
0x818cfc1
0x81302af
0x813033f
0x81304d7
0x8115aa3
0x8189520
0x811bb6f
0x829e5ec
0x82c7dea
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read http://www.mysql.com/doc/en/Using_stack_trace.html and
follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do
resolve it
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
Number of processes running now: 1
050222 10:49:58  mysqld restarted
050222 10:49:59  InnoDB: Started
/usr/sbin/mysqld: ready for connections.
Version: '4.0.18-standard'  socket: '/tmp/mysql4.sock'  port: 3306
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, something is
definitely wrong and this may fail.

key_buffer_size=1048576
read_buffer_size=4190208
max_used_connections=0
max_connections=35
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections
= 287603 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd=(nil)
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xbfebf5b8, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x8071f44
0x82a0e38
0x818cfc1
0x81302af
0x813033f
0x81304d7
0x8115aa3
0x8189520
0x811bb6f
0x829e5ec
0x82c7dea
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read http://www.mysql.com/doc/en/Using_stack_trace.html and
follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do
resolve it
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
Number of processes running now: 1
050222 10:50:00  mysqld restarted
050222 10:50:01  InnoDB: Started
/usr/sbin/mysqld: ready for connections.
Version: '4.0.18-stan

Re: "Load Data Infile ... Repair with KeyCache" --WAY SLOW!

2005-03-03 Thread Harrison Fisk
Hi,
On Mar 3, 2005, at 3:13 PM, mos wrote:
At 12:39 PM 3/3/2005, Harrison Fisk wrote:
Hi,
On Mar 3, 2005, at 11:32 AM, mos wrote:
At 10:07 PM 3/2/2005, you wrote:
Hello.
You may use ALTER TABLE .. DISABLE KEYS to speed up the loading 
process
on the MyISAM table.
That may work provided I can get the keys rebuilt later using 
FileSort and not KeyCache.

You see, the problem isn't loading the data into the table, which 
occurs reasonably fast, because running "Load Data Infile" on an 
empty table will disable the keys until all the data is loaded, so 
explicitly disabling them beforehand won't help.  After the data is 
loaded, Load Data Infile will then rebuild the keys and will choose 
to use "Repair with keycache", whereas "Repair with filesort" would 
be 100x-1000x faster. There doesn't seem to be any way to get it to 
use "Repair with filesort".

So I could use disable keys as you suggested and then rebuild 
them manually with myisamchk "repair with sort", provided it doesn't 
reload all the data into a temporary table (which Alter Table 
usually does). If it does create a temporary table, it will 
physically reload the 500 million rows a second time and I will 
need another 100 GB of free disk space.

So is there a solution to this paradox?
It chooses which method to use based on your myisam_sort_buffer_size, 
myisam_max_sort_file_size and myisam_max_extra_sort_file_size.  Have 
you increased the size of these?  Keep in mind these are SESSION 
variables, so they can be set on the connection right before you LOAD 
DATA INFILE.

You always have to tell MySQL how much extra memory/disk it can use, 
it can't assume that you want it to use it.  That is why it will use 
Repair by keycache, since it doesn't use any extra resources.  If you 
increase the amount of diskspace or memory, then it should use a 
Repair by sort instead.
That's what I thought. I have myisam_max_sort_file_size set to 15.7GB 
and myisam_max_extra_sort_file_size set to 2GB. I confirmed this with 
Show Variables. I thought this would be enough to sort the indexes. I 
have confirmed there is enough free space on the hard drive to handle 
this both in the database directory and the mysql temp directory.


Also keep in mind that Repair by sort doesn't work for UNIQUE or 
PRIMARY KEYs.
So the solution is to keep using Load Data Infile with even larger 
max_sort variables and more disk space? There are only 4 indexes on 
the table and the largest index is around 50 bytes. This is running on 
an AMD3500+ with 1gb RAM.

My problem is it takes about 6 hours to load in the rows using Load 
Data Infile to the empty table (which is fine) then by doing a "Show 
ProcessList" I'll know if it is using FileSort or KeyCache. That is a 
lot of time to waste if it ends up using KeyCache because then I know 
it could be a few days to index the table.
If you want to test it out to see how much is required (though it 
shouldn't matter, just set them as large as you possibly could (see 
below)), then take your table and load it once with indexes disabled.  
Then enable them.  It will immediately do one or the other.  If you are 
unhappy, then you can kill it, change variables and repeat.

So how do people force Load Data Infile to use FileSort and not 
KeyCache? In other words, how do they know ahead of time what to set 
myisam_max_extra_sort_file_size and myisam_max_sort_file_size to 
without guessing? To me it seems a lot like hit-and-miss guesswork. I 
would rather see a parameter "BySort" or "ByKeyCache" added to the Load 
Data Infile command that forces it to use either FileSort or KeyCache 
rather than having the developer cross his fingers and hope the system 
variables are set correctly.
You don't play around with the variables.  Set them to the largest 
possible amount that you would let MySQL use (ie. how much ram and how 
much disk space).  Then it will have the information to make the 
decision properly.  It only uses what it needs, up to your limit.  If 
it sees that your limits are too small, then it won't be able to do a 
By Sort, even if you want it to.  Why aren't you setting them as large 
as you can in the first go?

Regards,
Harrison
--
Harrison C. Fisk, Trainer and Consultant
MySQL AB, www.mysql.com
Get a jumpstart on MySQL Cluster -- 
http://www.mysql.com/consulting/packaged/cluster.html



Re: mysql crash - innodb not starting

2005-03-03 Thread Heikki Tuuri
Hi!
Error 11 means that you already have a mysqld process running on the same 
files.

Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables
http://www.innodb.com/order.php

- Original Message - 
From: ""mel list_php"" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Thursday, March 03, 2005 2:25 PM
Subject: mysql crash - innodb not starting


Hi,
I just had a crash of my MySQL 4.1.5 (the machine where it was running
just shut down).
Tried to restart it, and problem with innodb:
050303 11:58:46  [WARNING] Asked for 196608 thread stack, but got 126976
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Error in opening ./ibdata1
050303 11:58:46  InnoDB: Operating system error number 11 in a file
operation.
InnoDB: Error number 11 means 'Resource temporarily unavailable'.
InnoDB: See also section 13.2 at http://www.innodb.com/ibman.php
InnoDB: about operating system error numbers.
InnoDB: Could not open or create data files.
InnoDB: If you tried to add new data files, and it failed here,
InnoDB: you should now edit innodb_data_file_path in my.cnf back
InnoDB: to what it was, and remove the new ibdata files InnoDB created
InnoDB: in this failed attempt. InnoDB only wrote those files full of
InnoDB: zeros, but did not yet use them in any way. But be careful: do not
InnoDB: remove old data files which contain your precious data!
050303 11:58:46  [ERROR] Can't init databases
050303 11:58:46  [ERROR] Aborting
050303 11:58:46  [NOTE] ./bin/mysqld: Shutdown complete
I'm sure I already saw that kind of problem, tried to go through the list
but didn't find the thread again. In addition I'm not very familiar with
that kind of thing.
I don't use innodb at all, only myisam, so I was able to restart with
--skip-innodb.
About the error code "resource unavailable": I tried again later, no
difference. The file ibdata1 is there with adequate permissions, and I
don't think I have a my.cnf...

To sum up I have no idea what to do, and I'm not a system guru!
My first ideas:
- removing the ibdata1 file and recreating an empty one
- I saw a force-recovery option but have no idea how it works
Hope somebody here can help me
Thanks a lot
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: "Load Data Infile ... Repair with KeyCache" --WAY SLOW!

2005-03-03 Thread mos
At 12:39 PM 3/3/2005, Harrison Fisk wrote:
Hi,
On Mar 3, 2005, at 11:32 AM, mos wrote:
At 10:07 PM 3/2/2005, you wrote:
Hello.
You may use ALTER TABLE .. DISABLE KEYS to speed up the loading process
on the MyISAM table.
That may work provided I can get the keys rebuilt later using FileSort 
and not KeyCache.

You see, the problem isn't loading the data into the table, which occurs 
reasonably fast, because running "Load Data Infile" on an empty table 
will disable the keys until all the data is loaded, so explicitly 
disabling them beforehand won't help.  After the data is loaded, Load 
Data Infile will then rebuild the keys and will choose to use "Repair 
with keycache", whereas "Repair with filesort" would be 100x-1000x faster. 
There doesn't seem to be any way to get it to use "Repair with filesort".

So I could use disable keys as you had suggested and then rebuild them 
manually with myisamchk "repair with sort", provided it doesn't reload all 
the data into a temporary table (which Alter Table usually does). If it 
does create a temporary table, it will physically reload the 500 million 
rows a second time and I will need another 100 GB of free disk space.

So is there a solution to this paradox?
It chooses which method to use based on your myisam_sort_buffer_size, 
myisam_max_sort_file_size and myisam_max_extra_sort_file_size.  Have you 
increased the size of these?  Keep in mind these are SESSION variables, so 
they can be set on the connection right before you LOAD DATA INFILE.

You always have to tell MySQL how much extra memory/disk it can use; it 
can't assume that you want it to use it.  That is why it will use Repair 
by keycache, since that doesn't use any extra resources.  If you increase 
the amount of disk space or memory, then it should use a Repair by sort instead.
That's what I thought. I have myisam_max_sort_file_size set to 15.7GB and 
myisam_max_extra_sort_file_size set to 2GB. I confirmed this with Show 
Variables. I thought this would be enough to sort the indexes. I have 
confirmed there is enough free space on the hard drive to handle this both 
in the database directory and the mysql temp directory.


Also keep in mind that Repair by sort doesn't work for UNIQUE or PRIMARY KEYs.
So the solution is to keep using Load Data Infile with even larger max_sort 
variables and more disk space? There are only 4 indexes on the table and 
the largest index is around 50 bytes. This is running on an AMD3500+ with 
1gb RAM.

My problem is it takes about 6 hours to load in the rows using Load Data 
Infile to the empty table (which is fine) then by doing a "Show 
ProcessList" I'll know if it is using FileSort or KeyCache. That is a lot 
of time to waste if it ends up using KeyCache because then I know it could 
be a few days to index the table.

So how do people force Load Data Infile to use FileSort and not KeyCache? 
In other words, how do they know ahead of time what to set 
myisam_max_extra_sort_file_size and myisam_max_sort_file_size to 
without guessing? To me it seems a lot like hit-and-miss guesswork. I would 
rather see a parameter "BySort" or "ByKeyCache" added to the Load Data Infile 
command that forces it to use either FileSort or KeyCache, rather than having 
the developer cross his fingers and hope the system variables are set correctly.

Mike

Regards,
Harrison
--
Harrison C. Fisk, Trainer and Consultant
MySQL AB, www.mysql.com
Get a jumpstart on MySQL Cluster -- 
http://www.mysql.com/consulting/packaged/cluster.html

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: backup scripts

2005-03-03 Thread Tom Crimmins
Kelly,

You can find out what user mysqld is running as with the following:

#ps axu | grep mysqld

To change the permissions on the directory run the following as root:

#chown mysql.mysql /usr/local/mysql/bakups
#chmod 770 /usr/local/mysql/bakups

If it is a different user, substitute it in for mysql in the first 
command.

-- 
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

On Thursday, March 03, 2005 13:58, [EMAIL PROTECTED] wrote:

> Tom,
>   I seem to be having difficulty allowing the mysqld user wx privs on
> the /backup folder.   I read the following, but I don't see how to
> allow mysqld to w and x to that directory:
> http://dev.mysql.com/doc/mysql/en/mysqldump.html  
> 
> Kelly
> 
> On Thursday, March 03, 2005 13:12, [EMAIL PROTECTED]
> wrote: 
> 
> Hi Kelly,
> 
>> Hello,
>>   When I attempt to try and run the backup:
>> 
>> shell> mysqldump --tab=/path/to/some/dir --opt db_name
>> I get the following errors:
>> ./mysqldump: Got error: 1: Can't create/write to file
>> '/usr/local/mysql/bakups/config.txt' (Errcode: 13) when executing
>> 'SELECT INTO OUTFILE'
> 
> [EMAIL PROTECTED] tom]$ perror 13
> Error code  13:  Permission denied
> 
> You need to make sure that the user mysqld is running as has write and
> execute permission to /usr/local/mysql/bakups.
> 
> Read the following page. Specifically the part about the tab option.
> http://dev.mysql.com/doc/mysql/en/mysqldump.html
> 
>> Or:
>> 
>> shell> mysqlhotcopy db_name /path/to/some/dir
>> DBI->connect(;host=localhost;mysql_read_default_group=mysqlhotcopy)
>> failed: Client does not support authentication protocol requested by
>> server; consider upgrading MySQL client at ./mysqlhotcopy line 178
> 
> You need a newer version of DBD-mysql (you can get this from CPAN),
> or you can use the old password option.
> 
> http://dev.mysql.com/doc/mysql/en/old-client.html
> 
>> I followed the directions from:
>> http://dev.mysql.com/doc/mysql/en/backup.html
>> 
>> I also attempted to follow these directions, to no avail:
>> --
>> mysqlhotcopy, etc is great - but using it (and most other myql
>> automation scripts) requires placing a user/password on the command
>> line for all/some to see (ps axw)
>> There doesn't appear to be a way to place the user/pass into a file
>> somewhere and specify only that (secured) filename on the command
>> line. I get around this in the case of mysqlhotcopy by taking a
>> local copy of the script (perl) and hard-coding the auth info into
>> that copy thus: 
>> 
>> mysqlhotcopy - line 164ish:
>> 
>> my $dbh =
>> DBI->connect("dbi:mysql:$dsn;mysql_read_default_group=mysqlhotcopy",
>> 'backup_user', 'backup_password'},
>> 
>> and again at around line 745:
>> 
>> my $dbh =
>> 
>
DBI->connect("dbi:mysql:${db}${dsn};mysql_read_default_group=mysqlhotcopy",
>> "backup_user", "backup_password",
>> 
>> then, just to be sure,
>> 
>> chown root.nobody mysqlhotcopy
>> chmod 700 mysqlhotcopy
>> 
>> Any ideas would be greatly appreciated.  I would really like to add
>> this to a cronjob to have it run automatically. Thanks in advance!

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



problem with fulltext search

2005-03-03 Thread Dawn O'Brien
I'm trying to use the following query ::
SELECT movie_id, title, genre, description, TRIM(
TRAILING ','
FROM actors ) , director, disclaimer, rating, year, (
MATCH (
title, description
)
AGAINST (
'another'
)
) AS score
FROM movie_details
WHERE (
MATCH (
title, description
)
AGAINST (
'another'
)
) AND service = 'tmn' AND last_run > now( )
ORDER BY score DESC , title ASC
LIMIT 200
The 'movie_details' table has 2 entries with the word 'another' in the title 
that have a last_run date greater than the current date, but the query is 
not returning any results.
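One cause worth checking, documented in the MySQL manual though not raised in this thread: in natural-language mode, full-text search silently drops words that appear in 50% or more of the rows (and words shorter than ft_min_word_len). A boolean-mode query (available in MySQL 4.0.1+) does not apply the 50% threshold, so a sketch like the following can confirm whether 'another' is being filtered out as too common:

```sql
-- Hypothetical diagnostic: if this returns rows while the
-- natural-language query does not, the 50% threshold is the culprit.
SELECT movie_id, title
FROM movie_details
WHERE MATCH (title, description) AGAINST ('+another' IN BOOLEAN MODE)
  AND service = 'tmn' AND last_run > NOW();
```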

I modified the query to ::
SELECT movie_id, title, genre, description, TRIM(
TRAILING ','
FROM actors ) , director, disclaimer, rating, year, (
MATCH (
title, description
)
AGAINST (
'another'
)
) AS score
FROM movie_details
WHERE (
title
LIKE '%another%' OR description
LIKE '%another%'
) AND service = 'tmn' AND last_run > now( )
ORDER BY score DESC , title ASC
LIMIT 200
With this statement I'm getting the desired results, but it's obviously not 
efficient and takes approximately 35x longer to run than the first query.

Any thoughts as to why the first query isn't returning the proper results?
Thanks for the help in advance ...
:: dawn ::

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: DB Export multiple tables to CSV

2005-03-03 Thread H Ba
On 03/03/2005 2:18 PM <[EMAIL PROTECTED]> wrote:

>Are you really asking this list to help you with moving data from
>Excel to some unknown set of users? That's stretching it a bit, don't
>you think? 

No, I was actually asking if there is a way, or somebody had a helpful idea on 
how to export multiple tables which relate to each other and import them into 
Excel, so a "normal" (Excel) user can work with the data, like copying addresses 
or just looking at it, etc.

Have a great day,
Ho.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Simple explain

2005-03-03 Thread SGreen
"Jonathan Mangin" <[EMAIL PROTECTED]> wrote on 03/03/2005 02:38:54 
PM:

> Hello,
> 
> Does this mean a key is not being used?
> 
> mysql> explain select vl_ts from view_log where vl_uid='bb';
> +----------+------+---------------+------+---------+------+------+------------+
> | table    | type | possible_keys | key  | key_len | ref  | rows | Extra      |
> +----------+------+---------------+------+---------+------+------+------------+
> | view_log | ALL  | vluid         | NULL |    NULL | NULL |   60 | where used |
> +----------+------+---------------+------+---------+------+------+------------+
> 
> Thanks,
> -Jon
> 
> 
The short answer is "YES, there was no key used to answer the query". 

Please read: http://dev.mysql.com/doc/mysql/en/explain.html for more 
details on how to read the EXPLAIN output.

Please also read: 
http://dev.mysql.com/doc/mysql/en/where-optimizations.html for some of the 
reasons why your index was not used to answer your query.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Simple explain

2005-03-03 Thread Jonathan Mangin
Hello,
Does this mean a key is not being used?
mysql> explain select vl_ts from view_log where vl_uid='bb';
+----------+------+---------------+------+---------+------+------+------------+
| table    | type | possible_keys | key  | key_len | ref  | rows | Extra      |
+----------+------+---------------+------+---------+------+------+------------+
| view_log | ALL  | vluid         | NULL |    NULL | NULL |   60 | where used |
+----------+------+---------------+------+---------+------+------+------------+

Thanks,
-Jon
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: DB Export multiple tables to CSV

2005-03-03 Thread Dathan Pattishall
SELECT * INTO OUTFILE "/dir/they/can/get/to" FROM TABLE WHERE
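The one-liner above, fleshed out into a complete sketch. The table and path are placeholders; the FIELDS/LINES export options produce a quoted, comma-separated file that Excel can open directly (the file is written on the server host and must not already exist):

```sql
-- Illustrative only: 'some_table' and the path are hypothetical.
SELECT * INTO OUTFILE '/tmp/export.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
FROM some_table;
```

Run one such statement per table; the relationships between tables are not preserved in CSV, so related rows must be matched up by their key columns after import.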


DVP

Dathan Vance Pattishall http://www.friendster.com

 

> -Original Message-
> From: H Ba [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, March 03, 2005 11:15 AM
> To: mysql@lists.mysql.com
> Subject: DB Export multiple tables to CSV
> 
> I have multiple tables set up and need to export these to csv 
> in order to import them into Excel. Since all tables are 
> related to each other and I can only export one at a time, is 
> there a way to somehow get a "readable" file after importing 
> into Excel, which non-programmers can edit and use for 
> further processing?
> 
> Regards,
> Holger
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: backup scripts

2005-03-03 Thread Tom Crimmins

On Thursday, March 03, 2005 13:12, [EMAIL PROTECTED] wrote:

Hi Kelly,

> Hello,
>   When I attempt to try and run the backup:
> 
> shell> mysqldump --tab=/path/to/some/dir --opt db_name
> I get the following errors:
> ./mysqldump: Got error: 1: Can't create/write to file
> '/usr/local/mysql/bakups/config.txt' (Errcode: 13) when executing
> 'SELECT INTO OUTFILE'

[EMAIL PROTECTED] tom]$ perror 13
Error code  13:  Permission denied

You need to make sure that the user mysqld is running as has write and 
execute permission to /usr/local/mysql/bakups.

Read the following page. Specifically the part about the tab option.
http://dev.mysql.com/doc/mysql/en/mysqldump.html

> Or:
> 
> shell> mysqlhotcopy db_name /path/to/some/dir
> DBI->connect(;host=localhost;mysql_read_default_group=mysqlhotcopy)
> failed: Client does not support authentication protocol requested by
> server; consider upgrading MySQL client at ./mysqlhotcopy line 178

You need a newer version of DBD-mysql (you can get this from CPAN), 
or you can use the old password option.

http://dev.mysql.com/doc/mysql/en/old-client.html

> I followed the directions from:
> http://dev.mysql.com/doc/mysql/en/backup.html
> 
> I also attempted to follow these directions, to no avail:
> --
> mysqlhotcopy, etc is great - but using it (and most other myql
> automation scripts) requires placing a user/password on the command
> line for all/some to see (ps axw)
> There doesn't appear to be a way to place the user/pass into a file
> somewhere and specify only that (secured) filename on the command
> line. 
> I get around this in the case of mysqlhotcopy by taking a local copy
> of the script (perl) and hard-coding the auth info into that copy
> thus: 
> 
> mysqlhotcopy - line 164ish:
> 
> my $dbh =
> DBI->connect("dbi:mysql:$dsn;mysql_read_default_group=mysqlhotcopy",
> 'backup_user', 'backup_password'},
> 
> and again at around line 745:
> 
> my $dbh =
>
DBI->connect("dbi:mysql:${db}${dsn};mysql_read_default_group=mysqlhotcopy",
> "backup_user", "backup_password",
> 
> then, just to be sure,
> 
> chown root.nobody mysqlhotcopy
> chmod 700 mysqlhotcopy
> 
> Any ideas would be greatly appreciated.  I would really like to add
> this to a cronjob to have it run automatically. Thanks in advance!

-- 
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: DB Export multiple tables to CSV

2005-03-03 Thread SGreen
H Ba <[EMAIL PROTECTED]> wrote on 03/03/2005 02:14:57 PM:

> I have multiple tables set up and need to export these to csv in 
> order to import them into Excel. Since all tables are related to 
> each other and I can only export one at a time, is there a way to 
> somehow get a "readable" file after importing into Excel, which non-
> programmers can edit and use for further processing?
> 
> Regards,
> Holger
> 

Are you really asking this list to help you with moving data from Excel to 
some unknown set of users? That's stretching it a bit, don't you think? 

What's wrong with your non-programmers working with your data as an Excel 
worksheet(workbook)? What do they (your non-programmers) need to do with 
the data, what tools do they have to do whatever it is they need to do, 
and what data formats do their tools accept? Answer those questions and 
you may decide to not even use Excel as an intermediate program.

This isn't much of a MySQL question, is it?

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


DB Export multiple tables to CSV

2005-03-03 Thread H Ba
I have multiple tables set up and need to export these to csv in order to 
import them into Excel. Since all tables are related to each other and I can 
only export one at a time, is there a way to somehow get a "readable" file 
after importing into Excel, which non-programmers can edit and use for further 
processing?

Regards,
Holger

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



ERROR 1045: Access denied for user: 'root@localhost' (Using password: NO)

2005-03-03 Thread Deniss Hennesy
Hi, 

I have had to change mysql root passwd  
My procedure is just below


shell> mysql -u root -p
mysql> UPDATE mysql.user SET Password=PASSWORD('newpwd')  WHERE User='root';
mysql> flush privileges;


I checked this password from another console and saw that MySQL didn't
accept the new passwd; it didn't accept the old password, or no password, either.





Now neither the new passwd nor the old passwd, nor even an empty passwd, is accepted.


I changed my mind and tried this procedure:

shell> kill `cat /var/db/host_name.pid`

shell> mysqld_safe --skip-grant-tables &

shell> mysqladmin -u root flush-privileges password "newpwd"


mysqladmin:connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket
'/tmp/mysql.sock' (2)'

Check that mysql is running and that the socket:'/tmp/mysql.sock' exits!


3rd way is

shell>/usr/local/etc/rc.d/mysql-server.sh stop

then

shell>safe_mysqld --user=root --pid-file=/var/db/mysql/mysql.pid
--skip-grand-tables &
[1] 28712
shell> Starting mysqld daemon with databases from /var/db/mysql
050303 21:04:02 mysql ended


So I suppose it did not initialize the MySQL daemon, did it?


How can I recover the MySQL root password?
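For reference, the password-reset procedure in the MySQL 4.x manual amounts to the following. Note that the server option is spelled --skip-grant-tables; the transcript above uses --skip-grand-tables, which mysqld will not recognize:

```sql
-- Run only after restarting mysqld with --skip-grant-tables, which
-- disables privilege checks so you can connect without a password:
UPDATE mysql.user SET Password = PASSWORD('newpwd') WHERE User = 'root';
FLUSH PRIVILEGES;  -- reload the grant tables so the new password takes effect
```

Afterwards, restart mysqld normally (without --skip-grant-tables) and log in with the new password.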

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




backup scripts

2005-03-03 Thread Kelly . Brace
Hello,
  When I attempt to try and run the backup:

shell> mysqldump --tab=/path/to/some/dir --opt db_name
I get the following errors:
./mysqldump: Got error: 1: Can't create/write to file 
'/usr/local/mysql/bakups/config.txt' (Errcode: 13) when executing 'SELECT 
INTO OUTFILE'
Or: 

shell> mysqlhotcopy db_name /path/to/some/dir
DBI->connect(;host=localhost;mysql_read_default_group=mysqlhotcopy) 
failed: Client does not support authentication protocol requested by 
server; consider upgrading MySQL client at ./mysqlhotcopy line 178

I followed the directions from: 
http://dev.mysql.com/doc/mysql/en/backup.html

I also attempted to follow these directions, to no avail:
--
mysqlhotcopy, etc is great - but using it (and most other myql automation 
scripts) requires placing a user/password on the command line for all/some 
to see (ps axw)
There doesn't appear to be a way to place the user/pass into a file 
somewhere and specify only that (secured) filename on the command line.
I get around this in the case of mysqlhotcopy by taking a local copy of 
the script (perl) and hard-coding the auth info into that copy thus:

mysqlhotcopy - line 164ish:

my $dbh = 
DBI->connect("dbi:mysql:$dsn;mysql_read_default_group=mysqlhotcopy",
'backup_user', 'backup_password'},

and again at around line 745:

my $dbh = 
DBI->connect("dbi:mysql:${db}${dsn};mysql_read_default_group=mysqlhotcopy",
"backup_user", "backup_password",

then, just to be sure,

chown root.nobody mysqlhotcopy
chmod 700 mysqlhotcopy

Any ideas would be greatly appreciated.  I would really like to add this 
to a cronjob to have it run automatically. Thanks in advance!


Kelly S. Brace
Information Technology Exchange Center
Twin Rise 200
1300 Elmwood Avenue
Buffalo, NY 14222
http://www.itec.suny.edu

Main: 716-878-4832
Fax: 716-878-3485
Office: 716-878-3895
Cell: 716-432-4978
[EMAIL PROTECTED]


[ANN] New DBManager 3.0.3 Released Today

2005-03-03 Thread DBTools Software
We are pleased to announce the new DBManager Professional 3.0.3.
This is a major bug fix version with lots of improvements and a few new
features. If you are a previous user of the DBManager 3.0
versions you may consider updating your system with the new version.

Here is the list of changes:

Fixed Form max Characters for SQlite databases
Fixed Uppercase/Lowercase Comparisons with CDBHolder and catalog.ddb
Fixed Moving Table and Data from MySQL => PostgreSQL (date -00-00)
Fixed Export tables mistaking namespace with table name
Fixed Database Dump Wizard (Showing System Tables)
Fixed Dump database display two names of files when save to dump data text
(csv)
Checked Font Use in Datasheet View (Insert Font in Preferences)
Fixed Default Database when attaching external database in Server Manager
Added Default Extension to SQLite (*.db, *.db3, *.*)
Fixed SQL Delimiter for MySQL WebServices
Fixed WSG (not creating stylesheet)
Fixed WSG (Not using results alignment)
Fixed MySQL Webservices Number detection
Fixed WSG (Not creating update.php)
Fixed Pasting into Datasheet enables the Save Button
Fixed Importing MSAccess Indexes and Foreign Keys (GUID names)
Fixed Diagram Designer Print Preview (when the diagram is empty)
Fixed Exception when listing/adding Text Wizard Definitions (Delimiter NULL)
Fixed MySQL COUNT(*) Rows correctly when set in preferences
Enabled Firebird Embedded Server
Fixed Task Wizard (Database Target not using option DROP TABLE IF EXISTS)
Replaced HTML Code to Generate Table Listing (HTML DOM)
Fixed Column Quoting for Insert in MSAccess
MySQL XML Webservice now open URL directly instead of parsing (more error
proof)
Fixed Dropping a Table shows a warning in Diagram if table is referenced
Fixed Queries that do not return results are correctly handled (MSAccess)
Fixed Blank Print and Print Preview in Diagram Builder
Fixed Export to MSExcel (Escaping character ' [Single Quote])
Fixed Server Manager Test Connection (Error in Message Firebird/SQLite3)

New Features

Added Character Set and Collation Support for Databases and Tables
Added Comment for Columns (Mysql 4.1.0 or higher)
Added javascript support (Text Wizard)
Added Multiple Results Set
Added Syntax Coloring for Each Engine (SQL/DDL/Commands/Functions). Still
using the default SQL but it will be changed for 3.1.0 version
Added Preferences Option to turn off Application Exit Question
Added SET FOREIGN_KEY_CHECKS=0 for MySQL Dump
Added Option to Preferences=>Security (My Documents Folder)
Added Copy, Paste, Select All to Datasheet, Query Editor/Results, Memo
Editor

The new file is now available for download in
http://www.dbtools.com.br/EN/downloads.php.

What is DBManager Professional?

DBManager Professional is an application for Windows designed to manage data
and structure for: MySQL, PostgreSQL, Interbase, Firebird, SQLite, Xbase,
MSSQL Server, Sybase, MSAccess and Oracle. It is full of features, which
include:

- Structure Management for Databases, tables, indexes, columns, foreign
keys, functions, stored procedures, views, etc
- Import and Export Data: MSAccess, MSExcel, ODBC, Paradox,
Dbase/Clipper/FoxPro, Text Files, XML, HTML
- Server, Database and Table monitoring (*)
- Diagram Designer (*)
- Database Comparer Wizard to sync your database structure (*)
- Server Console
- Query Editor and Designer
- Task Builder to process automation (*)
- Web Script Generator Wizard to PHP, ASP(*)
- and much more

(*) Available only in the Enterprise Edition

For more information about DBManager Professional, check the features sheet at
http://www.dbtools.com.br/EN/dbmanagerpro.

Best Regards,

DBTools Software


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: "Load Data Infile ... Repair with KeyCache" --WAY SLOW!

2005-03-03 Thread Harrison Fisk
Hi,
On Mar 3, 2005, at 11:32 AM, mos wrote:
At 10:07 PM 3/2/2005, you wrote:
Hello.
You may use ALTER TABLE .. DISABLE KEYS to speed up the loading 
process
on the MyISAM table.
That may work provided I can get the keys rebuilt later using FileSort 
and not KeyCache.

You see, the problem isn't loading the data into the table, which 
occurs reasonably fast, because running "Load Data Infile" on an 
empty table will disable the keys until all the data is loaded, so 
explicitly disabling them beforehand won't help.  After the data is 
loaded, Load Data Infile will then rebuild the keys and will choose to 
use "Repair with keycache", whereas "Repair with filesort" would be 
100x-1000x faster. There doesn't seem to be any way to get it to use 
"Repair with filesort".

So I could use disable keys as you had suggested and then rebuild them 
manually with myisamchk "repair with sort", provided it doesn't reload 
all the data into a temporary table (which Alter Table usually does). 
If it does create a temporary table, it will physically reload the 500 
million rows a second time and I will need another 100 GB of free disk 
space.

So is there a solution to this paradox?
It chooses which method to use based on your myisam_sort_buffer_size, 
myisam_max_sort_file_size and myisam_max_extra_sort_file_size.  Have 
you increased the size of these?  Keep in mind these are SESSION 
variables, so they can be set on the connection right before you LOAD 
DATA INFILE.

You always have to tell MySQL how much extra memory/disk it can use; it 
can't assume that you want it to use it.  That is why it will use 
Repair by keycache, since that doesn't use any extra resources.  If you 
increase the amount of disk space or memory, then it should use a Repair 
by sort instead.

Also keep in mind that Repair by sort doesn't work for UNIQUE or 
PRIMARY KEYs.

Regards,
Harrison
--
Harrison C. Fisk, Trainer and Consultant
MySQL AB, www.mysql.com
Get a jumpstart on MySQL Cluster -- 
http://www.mysql.com/consulting/packaged/cluster.html

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


innodb update issues

2005-03-03 Thread James Nobis
I have run into a fairly bizarre issue.  Two tables were moved to innodb to take
advantage of row level locking but they were moved back within a day.  Both
tables are fixed so updates should not cause table fragmentation and a required
optimize.  Also, there are no deletes to the tables in question.  This leaves
only the possibility of an analyze being required to update the indexes. 
However, after approximately 10-20 minutes the tables seem to get badly out of
sync.  Queries that should take 0.0001 seconds start to take 12 seconds or more.
After an analyze nothing changes.  After an optimize the table is again fine for
a short duration.  Is there something obvious that I'm missing?

James Nobis
Web Developer
Academic Superstore
223 W. Anderson Ln. Suite A110, Austin, TX 78752
Voice: (512) 450-1199 x453 Fax: (512) 450-0263
http://www.academicsuperstore.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Read / Write statistics

2005-03-03 Thread E SA
Hi,

I am trying to obtain read/write statistics from our
database (4.0.21).  I started by using the
Bytes_received and Bytes_sent from "show status"; 
however, today one of them (Bytes_sent) reset itself 
to 0...

Which really messes up my calculations...

I need this data... Is there any other way to obtain
it?

I was doing:

$Total_bytes = $Bytes_received + $Bytes_sent;
$Percentage_received = ( $Bytes_received * 100 ) / $Total_bytes;

However, it won't work when the counter clears itself.

Is there a way to set that value to unsigned bigint?

Is there any other way?
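One common workaround is to work with deltas between successive samples and discard any interval where a counter went backwards (which indicates a restart or reset), rather than using the raw cumulative totals. A sketch with made-up sample values, using SQL user variables (in practice a monitoring script would fill them from two successive SHOW STATUS snapshots of Bytes_received / Bytes_sent):

```sql
-- Hypothetical sample values from two consecutive snapshots.
SET @recv_prev = 900, @recv_now = 1000, @sent_prev = 3500, @sent_now = 4000;
-- If either counter went backwards the server restarted: skip the interval.
SELECT IF(@recv_now < @recv_prev OR @sent_now < @sent_prev,
          NULL,
          (@recv_now - @recv_prev) * 100 /
          ((@recv_now - @recv_prev) + (@sent_now - @sent_prev))) AS pct_received;
```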

Beforehand, thank you for your help!





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysql_server_init crashes

2005-03-03 Thread Wolfgang Rohdewald
Hi,

this is 4.0.23 embedded.

If the datadir defined in my.cnf does not exist:

050303 17:48:04 bdb:  /home/vdr/mysql/log.01: No such file or directory
050303 17:48:04 bdb:  PANIC: No such file or directory
050303 17:48:04 Can't init databases

it should return an error condition instead of crashing.

How can I find out beforehand whether the directory exists?

-- 

mit freundlichen Grüssen

with my best greetings

Wolfgang Rohdewald

dipl. Informatik Ing. ETH Rohdewald Systemberatung
Karauschenstieg 4
D 21640 Horneburg
Tel.: 04163 826 819
Fax:  04163 826 828
Internet: http://www.rohdewald.de

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: confirm subscribe to mysql@lists.mysql.com

2005-03-03 Thread Wolfgang Rohdewald
On Thursday, 03 March 2005 17:57, [EMAIL PROTECTED] wrote:
> To confirm that you would like
> 
>   [EMAIL PROTECTED]
> 
> added to the mysql mailing list, please click on
> the following link:
> 
>   http://lists.mysql.com/s/mysql/422741fc45778425/wolfgang=rohdewald.de
> 
> This confirmation serves two purposes. First, it verifies that we are
> able to get mail through to you. Second, it protects you in case
> someone forges a subscription request in your name.
> 
> 
> --- Administrative commands for the mysql list ---
> 
> I can handle administrative requests automatically. Please
> do not send them to the list address! Instead, send
> your message to the correct command address:
> 
> For help and a description of available commands, send a message to:
><[EMAIL PROTECTED]>
> 
> To subscribe to the list, send a message to:
><[EMAIL PROTECTED]>
> 
> To remove your address from the list, just send a message to
> the address in the ``List-Unsubscribe'' header of any list
> message. If you haven't changed addresses since subscribing,
> you can also send a message to:
><[EMAIL PROTECTED]>
> 
> or for the digest to:
><[EMAIL PROTECTED]>
> 
> For addition or removal of addresses, I'll send a confirmation
> message to that address. When you receive it, simply reply to it
> to complete the transaction.
> 
> If you need to get in touch with the human owner of this list,
> please send a message to:
> 
> <[EMAIL PROTECTED]>
> 
> Please include a FORWARDED list message with ALL HEADERS intact
> to make it easier to help you.
> 
> --- Enclosed is a copy of the request I received.
> 
> Received: (qmail 28466 invoked by uid 48); 3 Mar 2005 16:57:31 -
> Date: 3 Mar 2005 16:57:31 -
> Message-ID: <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Unsubscribe request
> From: <[EMAIL PROTECTED]>
> 
> This message was generated because of a request from 84.143.219.127.
> 
> 





Re: "Load Data Infile ... Repair with KeyCache" --WAY SLOW!

2005-03-03 Thread mos
At 10:07 PM 3/2/2005, you wrote:
Hello.
You may use ALTER TABLE .. DISABLE KEYS to speed up the loading process
on the MyISAM table.
That may work provided I can get the keys rebuilt later using FileSort and 
not KeyCache.

You see, the problem isn't loading the data into the table, which is
reasonably fast, because running "Load Data Infile" on an empty table
disables the keys until all the data is loaded, so explicitly disabling
them beforehand won't help. After the data is loaded, Load Data Infile
rebuilds the keys and chooses "Repair with keycache", whereas "Repair
with filesort" would be 100x-1000x faster. There doesn't seem to be any
way to get it to use "Repair with filesort".

So I could use disable keys as you suggested and then rebuild the keys
manually with myisamchk's "repair with sort", provided it doesn't reload
all the data into a temporary table (which Alter Table usually does). If
it does create a temporary table, it will physically reload the 500
million rows a second time and I will need another 100g of free disk
space.

So is there a solution to this paradox?
I seem to be left with two options here:
a) Do I shoot myself in the left foot right away,
b) or Do I wait and shoot myself in the right foot? 
Is there a 3rd option?
TIA
Mike
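For what it's worth, MyISAM reportedly falls back to "Repair with keycache" when the sort-based rebuild would need a temporary file larger than myisam_max_sort_file_size allows (or more free space than the temp directory has). A my.cnf sketch along those lines -- the values are illustrative guesses, not recommendations:

```ini
# my.cnf sketch -- size the values to your own disk and RAM.
[mysqld]
# Allow the sort-based index rebuild to use a large temporary file;
# if the projected sort file exceeds this cap, MySQL falls back to
# the much slower "Repair with keycache".
myisam_max_sort_file_size = 200G
# A larger sort buffer speeds up "Repair by sorting".
myisam_sort_buffer_size   = 256M
```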

mos <[EMAIL PROTECTED]> wrote:
> I have a 50g CSV file that I am trying to import into an empty MyISAM
> table. It appears to go fine except after 10 hours it hasn't completed. A
> "Show Process List" shows Info="load data infile ..." and State="Repair
> with keycache". The table has a few hundred million rows of data. I assume
> it is using "Repair with keycache" to rebuild the indexes after the data
> has been loaded.
>
> From what I've read "Repair with keycache" is going to take a huge amount
> of time to complete. How do I disable "Repair with keycache" and use
> "Repair with sort" instead (which should be faster, right?)? I'm using
> MySQL 4.1.10 on Win XP and 1gb ram.
>
> TIA
>
> Mike
>
>
--
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   <___/   www.mysql.com



Re: import problem.

2005-03-03 Thread Scott Hamm
It works!

Here is what I used:

LOAD DATA INFILE 
'Batch.txt'
INTO TABLE
Batch
FIELDS TERMINATED BY 
'{'
LINES TERMINATED BY
'\r\n';

And the result:
mysql> select * from batch where orderid=240414989;
+---+---++--+
| QAID  | OrderID   | Errors | Comments |
+---+---++--+
| 87547 | 240414989 |  0 |  |
+---+---++--+
1 row in set (0.56 sec)
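Since the \n versus \r\n choice decided the outcome here, a small helper (hypothetical, not part of any MySQL tool) can check which terminator a data file actually uses before the LOAD DATA statement is written:

```python
def detect_line_terminator(path):
    """Return the LINES TERMINATED BY value ('\\r\\n' or '\\n') a data
    file appears to use, judged from its first chunk only."""
    with open(path, "rb") as f:
        chunk = f.read(8192)
    return "\\r\\n" if b"\r\n" in chunk else "\\n"
```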

Now I'd better get used to the utilities themselves rather than depend on
Gooey (GUI) stuff.

Thanks a bunch!

Scott


On Thu, 03 Mar 2005 09:46:10 -0600, Victor Pendleton
<[EMAIL PROTECTED]> wrote:
> Does SQLyog log any errors for you? Are your lines terminated by \n or
> \r\n? Have you tried performing this import with the LOAD DATA INFILE
> command? There you could at least see if warnings or errors were
> encountered.




Re: Switching to InnoDB turns out dissapointing

2005-03-03 Thread Philippe Poelvoorde
Brent Baisley wrote:
calculations from your query, like replacing DATE_SUB(NOW(), INTERVAL 12
MONTH) with a constant. Which means figuring out the right date
beforehand.

AFAIK, no column is involved in this particular function, so there's no
point in doing the computation beforehand; it's done once by MySQL.

--
Philippe Poelvoorde
COS Trading Ltd.



Re: ulimit effects on mysql user?

2005-03-03 Thread Joerg Bruehe
Hi Luke, all,


Am Do, den 03.03.2005 schrieb Crouch, Luke H. um 15:44:
> if I have changed my /etc/security/limits.conf file to include these lines:
>  
> mysqlsoft  nofile4096
> mysqlhard nofile63536
>  
> and then done:
>  
> [EMAIL PROTECTED] root]# su mysql
> [EMAIL PROTECTED] root]$ ulimit -n 8192
> [EMAIL PROTECTED] root]$ ulimit -n
> 8192
> 
> does this new 8192 open file limit apply to the already-running mysqld 
> process, [[...]]

the limit value is an attribute of the currently running shell whose
initial value is taken from the machine's administrative settings at
login time. Any change applies to the shell's current value.
This is inherited by all child processes when these are forked. 
A later change does not affect child processes which are already
running.

In this respect, compare it to an environment variable.
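The inheritance rule can be demonstrated directly in a shell; a minimal sketch:

```shell
# A shell's resource limits are inherited by children at fork time;
# changing them afterwards does not affect already-running processes.
ulimit -S -n 512               # lower the soft open-files limit for this shell
in_child=$(sh -c 'ulimit -n')  # a child forked *after* the change inherits it
echo "$in_child"
```

A process started before the `ulimit` call, such as a running mysqld, keeps whatever limit it inherited when it was launched.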

>  
> ? or will I need to restart the mysql server and/or machine to have the new 
> open files limit take effect for the mysql server? am I correct that the open 
> files limit placed on the mysql user will be the limit faced by the mysqld 
> process, since the process is started with the --user=mysql option?

See above.

BTW: Please break your long lines. 


Regards,
Joerg

-- 
Joerg Bruehe, Senior Production Engineer
MySQL AB, www.mysql.com

Are you MySQL certified?  www.mysql.com/certification





Re: import problem.

2005-03-03 Thread Victor Pendleton
Does SQLyog log any errors for you? Are your lines terminated by \n or 
\r\n? Have you tried performing this import with the LOAD DATA INFILE 
command? There you could at least see if warnings or errors were 
encountered.

Scott Hamm wrote:
I got my csv delimited using "{" as follows:
87547{240414986{0{
87547{240414987{0{
87547{240414988{0{
87547{240414989{0{
87547{240414990{1{GALLEGOS---MISKEYED MONTH IN BIRTHDATE
I tried to import using SQLyog as following:
Lines Terminated By: \n
Fields set to Variable length, Fields Terminated by: {
Then issued MySQL command to select orderid=20414989
mysql> select * from batch where orderid=240414989;
+---+---++--+
| QAID  | OrderID   | Errors | Comments |
+---+---++--+
   | 240414989 |  0 |
+---+---++--+
1 row in set (0.51 sec)
Why is QAID number not there where "not null" is specified as following:
create table batch (
QAID int not null,
OrderID int not null,
Errors tinyint not null default 0,
Comments varchar(255)
);
I'm confused and any help would be appreciated.
 




import problem.

2005-03-03 Thread Scott Hamm
I got my csv delimited using "{" as follows:

87547{240414986{0{
87547{240414987{0{
87547{240414988{0{
87547{240414989{0{
87547{240414990{1{GALLEGOS---MISKEYED MONTH IN BIRTHDATE

I tried to import using SQLyog as following:
Lines Terminated By: \n
Fields set to Variable length, Fields Terminated by: {

Then issued MySQL command to select orderid=20414989

mysql> select * from batch where orderid=240414989;
+---+---++--+
| QAID  | OrderID   | Errors | Comments |
+---+---++--+
| 240414989 |  0 |
+---+---++--+
1 row in set (0.51 sec)

Why is QAID number not there where "not null" is specified as following:

create table batch (
QAID int not null,
OrderID int not null,
Errors tinyint not null default 0,
Comments varchar(255)
);

I'm confused and any help would be appreciated.

-- 
Power to people, Linux is here.




replication with automatic master failover problems

2005-03-03 Thread Olivier Kaloudoff
Hello,
we're working on a two node setup to achieve high
availability using 4.1.8;
we first setup both servers with --bin-log, and only
the slave server (db1) with --log-slave-updates (the binary logfile
name is "master" on both servers).
replication from db0 to db1 runs smoothly.
next, we stop both servers and db1 becomes master, with
db0 slave. Replication still ok.
second stop: db0 becomes master like at the beginning,
db1 slave.
Now the replication is broken, and we get impossible
positions like this:
050301 23:08:35 [Note] Slave SQL thread initialized, 
starting replication in log 'master.09' at position 8198468,
relay log './db0-relay-bin.04' position: 169153

(binary log file master.09 is shorter than this size on db0,
and master.10 already exists)
050301 23:08:35 [ERROR] Slave I/O thread: error connecting to master 
'[EMAIL PROTECTED]:3306': Error: 'Lost connection to MySQL server during query'
errno: 2013  retry-time: 60 retries: 86400

050301 23:08:39 [ERROR] Slave I/O thread killed while connecting to master
050301 23:08:39 [ERROR] Slave I/O thread exiting, read up to log 
'master.09', position 8198468

050301 23:08:39 [ERROR] Error reading relay log event: slave SQL thread 
was killed


Do you know if we forgot an important option, or maybe something
is wrong in our setup ?
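One possibility, offered as a hedged sketch rather than a confirmed fix: after a role swap, the new slave's relay-log and master.info state can still point at coordinates from its previous role. Explicitly re-pointing it avoids resuming from stale positions (the log file name and position below are placeholders, to be read from SHOW MASTER STATUS on the new master):

```sql
-- Sketch only: run on the server that is becoming the slave.
STOP SLAVE;
RESET SLAVE;                      -- discard old relay logs and coordinates
CHANGE MASTER TO
    MASTER_HOST='db0',
    MASTER_LOG_FILE='master.000010',  -- placeholder
    MASTER_LOG_POS=4;                 -- placeholder
START SLAVE;
```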
Regards,
Olivier Kaloudoff
CKR Solutions Open Source
Mandelieu, France
http://www.ckr-solutions.com


Re: Reading oracle table

2005-03-03 Thread Victor Pendleton
Do you know how the file was created/exported from Oracle? Are the files 
contents viewable?

Marcos Sanches wrote:
 Hello all,
I am a new user; in fact I've never used any software like MySQL,
Oracle or SQL Server. But I am very familiar with analysing data; I am a
statistician. I've just received an Oracle table and need just to read
it and export it to a new format, like delimited text. The file name has
no extension; it is like this:
File_nameBKSql
Is there an easy way to read it in MySQL? How do I proceed?
Thanks very much,
Marcos

 




Reading oracle table

2005-03-03 Thread Marcos Sanches

  Hello all,

 I am a new user; in fact I've never used any software like MySQL,
Oracle or SQL Server. But I am very familiar with analysing data; I am a
statistician. I've just received an Oracle table and need just to read
it and export it to a new format, like delimited text. The file name has
no extension; it is like this:

File_nameBKSql

Is there an easy way to read it in MySQL? How do I proceed?

Thanks very much,

Marcos






ulimit effects on mysql user?

2005-03-03 Thread Crouch, Luke H.
if I have changed my /etc/security/limits.conf file to include these lines:
 
mysqlsoft  nofile4096
mysqlhard nofile63536
 
and then done:
 
[EMAIL PROTECTED] root]# su mysql
[EMAIL PROTECTED] root]$ ulimit -n 8192
[EMAIL PROTECTED] root]$ ulimit -n
8192

does this new 8192 open file limit apply to the already-running mysqld process, 
which was started like so:
 
[EMAIL PROTECTED] root]# /usr/local/mysql/bin/mysqld_safe --user=mysql &
 
? or will I need to restart the mysql server and/or machine to have the new 
open files limit take effect for the mysql server? am I correct that the open 
files limit placed on the mysql user will be the limit faced by the mysqld 
process, since the process is started with the --user=mysql option?
 
thanks,
-L


mysql crash - innodb not starting

2005-03-03 Thread mel list_php
Hi,
I just had a crash of my MySQL 4.1.5 (the machine where it was running
just shut down).

Tried to restart it, and problem with innodb:
050303 11:58:46  [WARNING] Asked for 196608 thread stack, but got 126976
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Error in opening ./ibdata1
050303 11:58:46  InnoDB: Operating system error number 11 in a file 
operation.
InnoDB: Error number 11 means 'Resource temporarily unavailable'.
InnoDB: See also section 13.2 at http://www.innodb.com/ibman.php
InnoDB: about operating system error numbers.
InnoDB: Could not open or create data files.
InnoDB: If you tried to add new data files, and it failed here,
InnoDB: you should now edit innodb_data_file_path in my.cnf back
InnoDB: to what it was, and remove the new ibdata files InnoDB created
InnoDB: in this failed attempt. InnoDB only wrote those files full of
InnoDB: zeros, but did not yet use them in any way. But be careful: do not
InnoDB: remove old data files which contain your precious data!
050303 11:58:46  [ERROR] Can't init databases
050303 11:58:46  [ERROR] Aborting

050303 11:58:46  [NOTE] ./bin/mysqld: Shutdown complete
I'm sure I already saw that kind of problem, tried to go through the list 
but didn't find the thread again. In addition I'm not very familiar with 
that kind of thing.
I don't use innodb at all, only myisam, so I was able to restart with 
--skip-innodb.

About the error code "resource temporarily unavailable": I tried again
later, no difference. The file ibdata1 is there with adequate
permissions, and I don't think I have a my.cnf...

To sum up I have no idea what to do, and I'm not a system guru!
My first ideas:
- removing the ibdata1 file and recreating an empty one
- I saw a force-recovery option but have no idea how it works
Hope somebody here can help me
Thanks a lot
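As a hint for the diagnosis: error 11 is EAGAIN, which typically means some other process (often a leftover mysqld) still holds the advisory lock on ibdata1, so checking for stray mysqld processes is worth doing before touching the data files. A Python sketch of the same kind of exclusive, non-blocking lock InnoDB takes at startup (an illustration, not InnoDB's actual code):

```python
import errno
import fcntl
import os

def can_lock(path):
    """Try an exclusive, non-blocking flock on `path`.  Returns False
    when another descriptor already holds the lock -- the situation
    behind InnoDB's 'error: 11' (EAGAIN) at startup."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        fcntl.flock(fd, fcntl.LOCK_UN)
        return True
    except OSError as e:
        if e.errno in (errno.EAGAIN, errno.EACCES):
            return False
        raise
    finally:
        os.close(fd)
```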
_
It's fast, it's easy and it's free. Get MSN Messenger today! 
http://www.msn.co.uk/messenger



INNODB MONITOR OUTPUT

2005-03-03 Thread Deluxe Web
Hi

Do you see any problems?


=
050303 12:51:11 INNODB MONITOR OUTPUT
=
Per second averages calculated from the last 20 seconds
--
SEMAPHORES
--
OS WAIT ARRAY INFO: reservation count 14266, signal count 14261
Mutex spin waits 17396, rounds 53230, OS waits 105
RW-shared spins 27594, OS waits 13793; RW-excl spins 329, OS waits 324

TRANSACTIONS

Trx id counter 0 34255768
Purge done for trx's n:o < 0 34255748 undo n:o < 0 0
Total number of lock structs in row lock hash table 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 13811, OS thread id 2335132592
MySQL thread id 200209, query id 1355807 localhost root
show innodb status
---TRANSACTION 0 33778640, not started, process no 13811, OS thread id
2331339696
MySQL thread id 3, query id 6197 Has read all relay log; waiting for
the I/O slave

FILE I/O

I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
2319 OS file reads, 199772 OS file writes, 131082 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.60 writes/s, 0.45 fsyncs/s
-
INSERT BUFFER AND ADAPTIVE HASH INDEX
-
Ibuf for space 0: size 1, free list len 0, seg size 2,
0 inserts, 0 merged recs, 0 merges
Hash table size 4425293, used cells 30780, node heap has 32 buffer(s)
9.10 hash searches/s, 2.80 non-hash searches/s
---
LOG
---
Log sequence number 0 1160879582
Log flushed up to   0 1160879582
Last checkpoint at  0 1160878854
0 pending log writes, 0 pending chkp writes
102003 log i/o's done, 0.35 log i/o's/second
--
BUFFER POOL AND MEMORY
--
Total memory allocated 1179690220; in additional pool allocated 5184256
Buffer pool size   65536
Free buffers   62776
Database pages 2728
Modified db pages  7
Pending reads 0 
Pending writes: LRU 0, flush list 0, single page 0
Pages read 2724, created 4, written 98018
0.00 reads/s, 0.00 creates/s, 0.25 writes/s
Buffer pool hit rate 1000 / 1000
--
ROW OPERATIONS
--
0 queries inside InnoDB, 0 queries in queue
Main thread process no. 13811, id 2322512816, state: sleeping
Number of rows inserted 1129, updated 59141, deleted 4, read 35314661
0.00 inserts/s, 0.30 updates/s, 0.00 deletes/s, 26.75 reads/s

END OF INNODB MONITOR OUTPUT





MySQL hangs

2005-03-03 Thread Marcin Lewandowski
Hi,
I've got webserver with php and mysql-4.0.22
There I've got a large phpBB2 installation. Sometimes the server's system
load grows rapidly and MySQL locks up. Normally there are 7-10 mysql
processes; in this strange situation there are 30-40.

I don't know what can create such a big system load within one minute.
Maybe this could help: http://saepia.net/temp/mysql.txt
It's 'show processlist' output from when MySQL is locked.
MySQL load is about 50 queries per second. The tables are mostly MyISAM
and are stored on an ATA-100 drive with DMA enabled.

Or maybe it's DoS?
--
m.



Please help - MySQL4.1.10 don't run properly after data restored from 5.0.1

2005-03-03 Thread sam wun
Hi,
I have removed all old data and restore the backup data to the directory 
/usr/local/mysql/data

I also created the root password with mysqladmin and flushed privileges...
Executing the following Perl+DBI script seems fine; it retrieves all data
from the customer table in the DB which I just restored:
#!/usr/bin/perl

use DBI;
$database = "DB";
$username = "me";
$pw = "me0901";
$dbh = DBI->connect("DBI:mysql:$database",$username,$pw);
die "Unable to connect: $DBI::errstr\n" unless (defined $dbh);
$sth = $dbh->prepare(q{SELECT * from customer}) or die "Unable to prep 
our query:".$dbh->errstr."\n";
$rc = $sth->execute or die "Unable to execute our query:".$dbh->errstr."\n";
while ($aref = $sth->fetchrow_arrayref){
 for ($i=0; $i < $sth->{NUM_OF_FIELDS};$i++){
print $sth->{NAME}->[$i].": "   .  $aref->[$i] . "\n";
 }
}
$sth->finish;
$dbh->disconnect;

But when I execute a complex query, it hangs there forever. I didn't have
a chance to see the result because I needed to go home for dinner. The
same query run on a FreeBSD system under MySQL 5.0.1 takes just 2 seconds.
The MySQL 5.0.1 on FreeBSD is the source of the backup. I restored this
backup to MySQL 4.1.10 on Redhat 9.0.

I have checked the DB's indexes on the most complex table; it shows the
indexes are in place.
mysql> show index from transaction;
+-------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table       | Non_unique | Key_name    | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| transaction |          0 | PRIMARY     |            1 | transcode   | A         |      161629 |     NULL | NULL   |      | BTREE      |         |
| transaction |          1 | custcode    |            1 | custcode    | A         |         715 |     NULL | NULL   |      | BTREE      |         |
| transaction |          1 | prodcode    |            1 | prodcode    | A         |        3367 |     NULL | NULL   |      | BTREE      |         |
| transaction |          1 | date        |            1 | date        | A         |        1197 |     NULL | NULL   |      | BTREE      |         |
| transaction |          1 | netsales    |            1 | netsales    | A         |       23089 |     NULL | NULL   |      | BTREE      |         |
| transaction |          1 | salesvolume |            1 | salesvolume | A         |         206 |     NULL | NULL   |      | BTREE      |         |
| transaction |          1 | custcode_2  |            1 | custcode    | A         |         788 |     NULL | NULL   |      | BTREE      |         |
| transaction |          1 | custcode_2  |            2 | date        | A         |       53876 |     NULL | NULL   |      | BTREE      |         |
| transaction |          1 | prodcode_2  |            1 | prodcode    | A         |         923 |     NULL | NULL   |      | BTREE      |         |
| transaction |          1 | prodcode_2  |            2 | date        | A         |      161629 |     NULL | NULL   |      | BTREE      |         |
+-------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
10 rows in set (0.47 sec)

One strange thing I found is that when the query was running, I used the
top command to check the load of mysqld, but it is 0%. This is usually
9x% when the same Perl script runs against MySQL 5.0.1 on FreeBSD. The
result of running the top command on Redhat against mysqld is shown
below:
19:26:21  up 14 days, 20:38,  1 user,  load average: 0.00, 0.00, 0.00
1 processes: 1 sleeping, 0 running, 0 zombie, 0 stopped
CPU states:   0.0% user   0.1% system   0.0% nice   0.0% iowait  99.8% idle
Mem:   513852k av,  506580k used,    7272k free,       0k shrd,  124196k buff
                    370448k actv,       0k in_d,   10116k in_c
Swap: 2132888k av,   15556k used, 2117332k free                  270220k cached

 PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
2893 mysql 15   0 50560  44M  3008 S 0.0  8.8   0:00   0 mysqld
But I confirmed that all mysqld instances are running in the system as 
shown below:

# netstat -a | grep mysql
tcp0  0 *:mysql *:* LISTEN
unix  2  [ ACC ] STREAM LISTENING 127441 /tmp/mysql.sock
[EMAIL PROTECTED] root]# !ps
ps -auxww | grep mysql
root  2873  0.0  0.1  4360 1016 ?S14:16   0:00 /bin/sh 
./S98mysqld
mysql 2893  0.0  8.8 302680 45340 ?  S14:16   0:00 [mysqld]
mysql 2894  0.0  8.8 302680 45340 ?  S14:16   0:00 [mysqld]
mysql 2895  0.0  8.8 302680 45340 ?  S14:16   0:00 [mysqld]


The question I would like to ask is how to make sure mysqld has started up
properly? Bec

Re: uninstall MySQL

2005-03-03 Thread Gleb Paharenko
Hello.

It depends on the type of distribution you're using.

[EMAIL PROTECTED] wrote:
> Anyone knew how to uninstall old version of MySQL? Thanks
>
> __
> Switch to Netscape Internet Service.
> As low as $9.95 a month -- Sign up today at http://isp.netscape.com/register
>
> Netscape. Just the Net You Need.
>
> New! Netscape Toolbar for Internet Explorer
> Search from anywhere on the Web and block those annoying pop-ups.
> Download now at http://channels.netscape.com/ns/search/install.jsp










Re: Problem with PHP + MySQL + InnoDB

2005-03-03 Thread Gleb Paharenko
Hello.

> How can I force a kill of the process which holds the lock on the
> table? PHP makes many persistent connections to MySQL and I don't know
> which one is locking the table :(

You may get information about processes on your MySQL server using SHOW
PROCESSLIST and kill the offending process with KILL. See:

  http://dev.mysql.com/doc/mysql/en/kill.html
  http://dev.mysql.com/doc/mysql/en/show-processlist.html

Ady Wicaksono <[EMAIL PROTECTED]> wrote:
> I have a PHP application that does:
>
> $sql = "SET AUTOCOMMIT=0";
> $db->execQuery($sql);
>
> $sql = "DELETE FROM TABLE X WHERE...";
> if($db->execQuery($sql)){
>     print "ERROR ";
>     exit(0);
> }
>
> $sql = "INSERT INTO TABLE ...";
> if($db->execQuery($sql)){
>     print "ERROR ";
>     exit(0);
> }
>
> I have a persistent connection to MySQL.
>
> The DELETE succeeds, but the INSERT fails and I simply exit.
> I know that I should do ROLLBACK OR COMMIT OR SET AUTOCOMMIT=1 before
> exiting. However, if I don't do it, another session will wait for this
> PHP session to finish its transaction and a lock wait timeout will arise.
>
> How can I force a kill of the process which holds the lock on the
> table? PHP makes many persistent connections to MySQL and I don't know
> which one is locking the table :(










Re: Decimal Puzzle

2005-03-03 Thread Gleb Paharenko
Hello.

> I designated a field Decimal 3,1. When I put a PHP
> query on my page, it displayed 450 as 450, but it
> rounded the fractions off, turning 24.8 into 25.

I think the problem is in the number_format() function. From the PHP manual:
"If only one parameter is given, number will be formatted without decimals."





David Blomstrom <[EMAIL PROTECTED]> wrote:
> I apologize if this is a double post, but it appears
> that my first message didn't transmit for some reason.
>
> Anyway, I'm having a weird problem with decimals.
> Imagine a field with just two numerals - 450 and 24.8.
> You want to display them as 450 and 24.8, not 450.05
> and 24.8 or 450 and 3.
>
> I designated a field Decimal 3,1. When I put a PHP
> query on my page, it displayed 450 as 450, but it
> rounded the fractions off, turning 24.8 into 25.
>
> So I went back to my spreadsheet and changed every
> cell to TEXT before exporting it into a new CSV file.
> I imported it into MySQL but got the same results.
>
> Next, I changed the field to Char 3, emptied the table
> and re-imported the CSV file. To my amazement, I get
> the same results.
>
> Yet when I browsed my table in phpMyAdmin, I saw the
> data displayed as 450 and 24.8, just like I wanted. So
> I can't figure out why it doesn't display like that.
>
> Actually, I'd like to change it just a bit. This table
> focuses on geological time. Rather than translate
> everything into a common unit (millions, billions,
> etc.), I decided to use both millions and billions and
> distinguish them by font color.
>
> For example, a yellow 2.5 would represent 2.5 BILLION,
> while a white 340 would represent 340 MILLION. The
> only remaining problem is the end of the Ice Age,
> 10,000 years ago. That would equal .01 million years.
>
> Still, I ought to be able to handle this with a Char
> or Varchar, shouldn't I?
>
> This is the script I'm using to display it, where
> YearBegan and YearEnded equal the beginning and ending
> dates of various geological periods:
>
> echo ' style="font-size: 85%; line-height: 130%;">
>   ';
>  //
>  while ($row = mysql_fetch_array ($res)) {
>
> // and add this in your while loop:
> $YearBegan_arr[] = $row['YearBegan'];
> echo " id=\"". $row['IDTime'] ."\"> href=\"javascript:;\"onclick=\"MM_openBrWindow('/time/bits/".$row['IDTime'].".php','TimeHelp','menubar=yes,scrollbars=yes,resizable=yes,width=300,height=250')\">
> ". $row['NameTime'] ." ". $row['TimeType']
> ."
>  " .
> number_format($row['YearBegan']) . "-" .
> number_format($row['YearEnded']) . "
>  \n";
>   $nameTypes[]=$row['NameTime'];
>  }
>  }
>  ?>
>
> Thanks for any insights.
>
> __
> Celebrate Yahoo!'s 10th Birthday!
> Yahoo! Netrospective: 100 Moments of the Web
> http://birthday.yahoo.com/netrospective/










Re: "Load Data Infile ... Repair with KeyCache" --WAY SLOW!

2005-03-03 Thread Gleb Paharenko
Hello.

You may use ALTER TABLE .. DISABLE KEYS to speed up the loading process
on the MyISAM table.

mos <[EMAIL PROTECTED]> wrote:
> I have a 50g CSV file that I am trying to import into an empty MyISAM
> table. It appears to go fine except after 10 hours it hasn't completed. A
> "Show Process List" shows Info="load data infile ..." and State="Repair
> with keycache". The table has a few hundred million rows of data. I assume
> it is using "Repair with keycache" to rebuild the indexes after the data
> has been loaded.
>
> From what I've read "Repair with keycache" is going to take a huge amount
> of time to complete. How do I disable "Repair with keycache" and use
> "Repair with sort" instead (which should be faster, right?)? I'm using
> MySQL 4.1.10 on Win XP and 1gb ram.
>
> TIA
>
> Mike










Strange internal loop causing multiple updates on one record!?

2005-03-03 Thread Jigal van Hemert
Tested on MySQL version 4.0.23, 4.1.8 and 4.1.10
OS: RedHat Fedora Core 2 / RedHat 8
Table type: InnoDB or MyISAM

Table structure:

CREATE TABLE `param_str` (
  `id` int(11) NOT NULL default '0',
  `name` varchar(32) NOT NULL default '',
  `value` varchar(255) default NULL,
  `match` varchar(255) default NULL,
  `match2` varchar(255) default NULL,
  `weight` tinyint(4) default NULL,
  KEY `id-name-value-match-match2`
(`id`,`name`,`value`(20),`match`(20),`match2`(20))
) TYPE=InnoDB;

`id` links to a table with the accounts whose string parameters are stored
in this table.

The object of the query is to update a field that contains a hit counter and
the last 9 (that is maximum 9) unique ID's that caused the hits. Proposed
format was: ;,,,...

I tried to fix an bug when an existing id starts with the new id (e.g. 11305
is in the list and a new hit is from id 113).

UPDATE `param_str`
SET `value` =
 CONCAT_WS(
  ';',
  (
   SUBSTRING_INDEX(
COALESCE(`value`,''),
';',
1
   ) + 1
  ),
  CONCAT_WS(
   ',',
   SUBSTRING_INDEX(
REPLACE(
 SUBSTRING(
  COALESCE(`value`,'') FROM
  LOCATE(
   ';',
   COALESCE(`value`,'')
  ) + 1
 ),
 CONCAT(
  ',' ,
  '113',
  ','
 ),
 ','
),
',' ,
-(9 -1)
   ),
   '113'
  )
 ) WHERE
 `id`=6  AND
 `name`='prf_hits'
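To make the intent of that UPDATE easier to follow, here is a Python sketch of the transformation it is trying to perform (the function name is hypothetical; it removes an exact-matching old id rather than a prefix, which also sidesteps the 11305/113 bug mentioned above):

```python
def record_hit(value, new_id, keep=9):
    """Bump the leading hit counter and append new_id, keeping only the
    last `keep` unique ids.  Value format: '<counter>;,<id>,<id>,...'"""
    count_s, _, ids_s = value.partition(";")
    # drop empties and any previous occurrence of exactly this id
    ids = [i for i in ids_s.split(",") if i and i != new_id]
    ids = (ids + [new_id])[-keep:]
    return "%d;%s" % (int(count_s or 0) + 1, "," + ",".join(ids))
```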

I started out with the value "1;,11305" for the first hit by ID 11305 (which
is inserted by an INSERT query).
The query above resulted in 'affected rows: 9' and the value of the single
(!) row was "10;,11305,113,113"; the double 113 was caused by the bug I
mentioned before, but I'm now focussing on the counter part of the value.
I repeated the query and now it returned 'affected rows: 90' and the value
in the row was "100;,11305,113,113".

Various tests with different hit counter values show that the update is
repeated until the next power of 10 is reached:
1 -> affected: 9 -> 10
12 -> affected: 88 -> 100
413 -> affected: 578 -> 1000

Since the WHERE clause clearly identifies a single row in our table, I'm
beginning to think of a bug...

Any ideas or suggestions?

Regards, Jigal.





MySQL & CRON

2005-03-03 Thread shaun thornburgh
Hi,
I have a web-based application that allows users to upload XML files and
insert them into the database; however, some of these files are large and
are causing the webserver to time out.

Is it possible to create a cron job that inserts the XML files, say, every
hour? Then the users of the site would only have to upload the files
rather than wait for the insert.

Any advice / examples here would be great as I am not a UNIX expert!
Thanks
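One hedged approach (paths, the script name, and the crontab line are all hypothetical): have the upload page only drop files into a spool directory, and let an hourly cron job feed them to mysql:

```shell
# Sketch of an hourly import runner.
# Crontab entry:  0 * * * * /usr/local/bin/import_xml.sh
process_queue() {
    cmd=$1; src=$2; done_dir=$3
    for f in "$src"/*; do
        [ -f "$f" ] || continue
        # Feed each queued file to the command on stdin, moving it aside
        # only when the command succeeds, so failures stay in the queue.
        $cmd < "$f" && mv "$f" "$done_dir/"
    done
}
```

In the real script `$cmd` would be something like `mysql dbname` (after a step that converts the uploaded XML to SQL), so the web page's only job is writing the file.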
