mysql query not dead after 'kill'

2003-12-17 Thread Johannes Ullrich

I have run into this a few times now, and am wondering whether there
is a solution:

I have two queries:

(1) select query on a MERGE table
(2) 'alter table' on the same MERGE table.

As expected, the select query blocks the 'alter table'
query. The alter table query is intended to swap one of the
tables in the MERGE set.

Now if the 'select' query isn't critical, I would like to
kill it if it takes too long. However, the result is that the
'kill flag' stays set on the query for a long time without the
query actually dying (it stays in 'Sending data' status).

Is there any way to avoid this issue, e.g. a way to kill the
select faster?
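
(For reference: KILL only sets a flag that the thread checks at
certain points during execution, which is why the flagged select
lingers. The pattern, with a made-up thread id:)

mysql> SHOW PROCESSLIST;   -- find the Id and Time of the long select
mysql> KILL 12345;         -- sets the kill flag; the thread only exits
                           -- at its next kill-flag check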

Usually I end up with the load running out of control, and
in a few cases mysql has crashed as a result.

Also: the status of the 'alter table' is 'rename result table',
not 'locked' or 'waiting for table'.




-- 
--
Johannes Ullrich [EMAIL PROTECTED]
pgp key: http://johannes.homepc.org/PGPKEYS
--
   We regret to inform you that we do not enable any of the 
security functions within the routers that we install.
 [EMAIL PROTECTED]
--






mysql 'range' query optimization

2003-10-30 Thread Johannes Ullrich





This problem has been bugging me for a while now:

I have a table which has three columns:
'table':
start: long integer
end: long integer
property: varchar(200)

Now I am trying to retrieve the 'property' for a 'number':

select property from table where number between start and end

Well, easy enough. Problem: it takes forever, and according to
'explain', indexes are sometimes used and sometimes not
(for identical queries).


The table has about 700,000 rows. I have indexes on start, end and
(start,end). start and end are unique.
'end' is the primary key.

There are supposed to be no overlaps, but there are
numbers that are not covered by any range.

'explain' sometimes shows the query using a key, sometimes it doesn't
:-( ... all of this is kind of 'random'. The table is rather static
(100 updates / day).

This problem has haunted me since the 3.x days. Currently I am running 4.0.15.

Different ways I have tried to ask the query:

select property from table where end > number order by end asc limit 1

select property from table where number > start and number < end
  (this one sometimes works much faster if a 'limit 1' is added,
   but only sometimes. :-/ )
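
(A variant that typically gives a stable plan for non-overlapping
ranges; a sketch, assuming 'end' really is the unique primary key:
anchor the scan on one indexed bound and verify the other bound
separately.)

select property, start
from table
where end >= number      -- index range scan on the primary key
order by end asc
limit 1;
-- then check in the application that start <= number, since the one
-- candidate row may belong to a later, non-covering range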

















Re: PERL DBI DBD mysql / REMOTE ACCESS

2003-10-30 Thread Johannes Ullrich

 I added Host 192.168.1.10 user root with password in mysql user table.

Did you flush privileges?
Did you hash the password using the 'password' function?
Did you type the password correctly?
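
(The usual way to set this up on 4.0 is with GRANT, which hashes the
password and takes effect immediately; a sketch using the host from
your example, with the privilege level adjusted to whatever you need:)

GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.1.10'
    IDENTIFIED BY 'yourpassword';
FLUSH PRIVILEGES;   -- only needed if you edited the user table directly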



 
 Ideas? What else?
 
 Thanks.






Re: Fwd: how to handle a 10GB index-file?

2003-03-05 Thread Johannes Ullrich

Two solutions to your problem:

Table full errors: check the 'AVG_ROW_LENGTH' and 'MAX_ROWS' options
for create and alter table. You can change these on the fly using 'alter
table', but it will take quite some time for a table your size (a few
hours to a day, depending on the machine).

The exact values for these parameters are not all that important; just
make them large enough. Usually you run into this limit around 4 GByte,
but maybe you already set these parameters and need to increase them.
Check 'show table status' to verify the current values.
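
(A sketch; the table name and values are only illustrative:)

ALTER TABLE logs MAX_ROWS=200000000 AVG_ROW_LENGTH=80;
SHOW TABLE STATUS LIKE 'logs';   -- Max_data_length shows the new limit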

To reduce the size of your index file: try a 'repair table'. The
'alter table' will take care of it too. You can also drop and recreate
the index.

Side note: look at merge tables... they look like a pain at first, but
once you get used to them they are quite handy for tables your size.



 The data file .MYD of a certain table has a size of 7.3 Gig and the
 corresponding index file .MYI has a size of 10 Gig. When deleting records from
 this table - will the index file automatically be stripped down? If not - is
 there a way to trim the index file to use less space?!?

 BTW: how is the maximum size of an index file determined? Yesterday, one of
  our databases refused to insert more data into the table, because there was
  no space left in the index file (17GB) - but the filesystem had plenty of
  space left (Solaris 8).

 Is there any documentation regarding this issue?
 
 thanks,
 
 Andy
 
 
 


-- 

[EMAIL PROTECTED] Collaborative Intrusion Detection
 join http://www.dshield.org




too many open files, error 24, but max-files is large

2003-03-05 Thread Johannes Ullrich

  On mysql 4.0.10-gamma (RPM install, RedHat Advanced Server), I
am running into 'too many open files' issues (error 24).

  I am using a rather large merge table (30 distinct tables merged),
which is likely the culprit. The error shows up once I have about a
dozen connections.

  I did increase the number of file handles substantially (16384)
and it looks like this limit is not reached:

cat /proc/sys/fs/file-nr
5328    4758    16384

  This is a 2.4 kernel, so inodes are allocated automatically and
there is no inode-max setting.

  'open_files_limit' is set to 0.
  'table_cache' is set to 1024
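
(One guess at the cause, not verified against 4.0.10: each table-cache
entry for the merge table opens all 30 underlying MyISAM tables, so a
dozen connections can need thousands of descriptors per process even
though the system-wide file-nr count is nowhere near its limit. The
per-process limit can be raised via mysqld_safe; a my.cnf sketch with
an illustrative value:)

[mysqld_safe]
open-files-limit=8192

[mysqld]
set-variable=open_files_limit=8192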

   Cmd line used to run mysqld:

/bin/sh /usr/bin/mysqld_safe --log-slow-queries --datadir=/var/lib/mysql
--pid-file=/var/lib/mysql/x440.pid
...
/usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql
--pid-file=/var/lib/mysql/x440.pid --skip-locking --log-slow-queries

   in other words: I checked ulimit...

   What am I missing?

hardware: dual Xeon system, 8 GByte RAM






replication 'Error updating slave list' in mysql 4.0.10

2003-02-13 Thread Johannes Ullrich

  I am having problems setting up replication between two 4.0.10
servers. 

What I did so far:

- generated a dump of the current state of the master using
  'mysqldump' (it's a mix of mostly innodb tables and some MyISAM
  tables)
- dropped all databases from the slave
- imported the dump into the slave using
  mysql < dump
- updated the slave parameters using 'CHANGE MASTER' (roughly as in
  the sketch below)
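
(Placeholder password; host, port, and coordinates as in the log below:)

CHANGE MASTER TO
    MASTER_HOST='master',         -- port 3399 is the ssh tunnel
    MASTER_PORT=3399,
    MASTER_USER='repl',
    MASTER_PASSWORD='...',
    MASTER_LOG_FILE='slave-bin.005',
    MASTER_LOG_POS=86967189;
START SLAVE;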

When I start the slave, it immediately stops and I get the
following in the slave error log:


030213  4:36:30  Slave I/O thread: connected to master 'repl@master:3399',  
replication started in log 'slave-bin.005' at position 86967189
030213  4:36:30  Error updating slave list: Query error
030213  4:36:30  Slave I/O thread exiting, read up to log 'slave-bin.005', position 86967189

Note that the master is running on port 3399 (ssh tunnel).
I can connect to the master as 'repl' from the slave.
Looking at the source shows this error in conjunction with
'show slave hosts'. This command returns an empty result on both
master and slave. Not sure what it is supposed to return.

I did try a 4.0.5 master first with the same result. Now I upgraded
the master to 4.0.10 (slave ran 4.0.10 all along).









Re: replication 'Error updating slave list' in mysql 4.0.10

2003-02-13 Thread Johannes Ullrich

 Check the user 'repl' has REPLICATION SLAVE privilege.

Ah, that fixed it. Actually, the real reason was that I had not
yet upgraded the mysql privilege tables, so the new privilege did
not take effect.

mysql_fix_privilege_tables, followed by the 'GRANT' command
and 'flush privileges', fixed it (see the sketch below).
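
(In full; the host part of the account is a placeholder:)

$ mysql_fix_privilege_tables
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
mysql> FLUSH PRIVILEGES;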

  Thanks!






speedup 'alter table' for large tables

2003-02-07 Thread Johannes Ullrich

I just had to alter a large (25 Gig, 100 million rows) table to
increase the max_rows parameter. The 'alter table' query has now been
running for 60+ hours; the last 30+ hours it has spent in 'repair with
keycache' mode. Is there any way to speed up this operation?
I realize it is probably too late now, but next time around it
would be handy to know a faster way.
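
(From what I understand, not verified on this exact version: MyISAM
falls back to the slow 'repair with keycache' path when the index sort
would not fit within myisam_max_sort_file_size or the free space in
tmpdir; raising these before the ALTER usually keeps it in the much
faster 'repair by sorting' mode. A my.cnf sketch with illustrative
values:)

set-variable=myisam_sort_buffer_size=256M
set-variable=myisam_max_sort_file_size=60000M
tmpdir=/path/with/lots/of/space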






Re: encrypted password

2003-02-06 Thread Johannes Ullrich

The easiest way to do this is to use mysql's own 'password' function.

To add a new user:

insert into table (username,passwd) values ('jianping',password('jian1830'));

To validate the password:

select count(*) from table where username='jianping' and
passwd=password('whatwasentered');

or similar...



On Tue, 4 Feb 2003 14:58:28 -0500 (EST)
Jianping Zhu [EMAIL PROTECTED] wrote:

 
 I have mysql in a redhat machine. I need to use mysql do user
 authentication to a website.
 I have a table like following.
 
 +--+--+
 | username | passwd   |
 +--+--+
 | jianping | jian1830 |
 | chichi   | jian1830 |
 +--+--+
 
 I want the passwd field not to be plain text but encrypted. how can i do
 that?
 
 Thanks.
 
 
 
 
 







Re: tab-delimited text file import

2003-02-02 Thread Johannes Ullrich

 Does anyone have any hints on how to or where to look to find out how 
 to import a number of tab-delimited text files with some header info 
 that reside on a ftp server into a MySQL database using PHP?

The 'key' command is 'load data infile'. It is very flexible in
handling various delimited file formats; see the sketch below.
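
(File name, table name, and header-line count are placeholders:)

LOAD DATA LOCAL INFILE '/tmp/feed.txt'
INTO TABLE feed
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;   -- skip the header info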
 
 How about doing this on a daily basis, automagically?

Well, a little perl script as a cron job? A php script will do too.








Re: Storage issue

2003-01-30 Thread Johannes Ullrich

 I'm wondering how well MySQL compresses data. I'm about to design a
 database which will hold mainly _a lot_ of FLOAT values, and since I do
 not really know how well MySQL compresses data or how I could calculate
 this, I'd really appreciate a little guidance.

See chapter 6.2.6 of the mysql manual.

 1 value/minute are stored = 1440 values/day.
 365 days / year.
 
 We have 100 different tables with 25 columns each.
 This makes 100*25*365*1440 = 1 314 000 000 values per year.

float: 4 bytes, so you need about 5 gigs? But you will need space
for indexes and such as well. So just get an 18 Gig drive for a
start.
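
(The arithmetic: 1,314,000,000 values x 4 bytes = 5,256,000,000 bytes,
i.e. roughly 4.9 GByte per year for the raw data alone, before
indexes.)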


 







Re: Re[4]: mysql 4.0.8- crash on TCP connection

2003-01-09 Thread Johannes Ullrich
On Thu, 9 Jan 2003 22:56:04 +0200
Gelu Gogancea [EMAIL PROTECTED] wrote:

  All this is very interesting, BUT i have two binary builds (4.0.7 &
 4.0.8),
Ha,Ha ... you are gentle.

  (load avg 10-60 Query/sec), and 4.0.8 crash (in some
  hardware/software) after 2 seconds work :(

Did you see any relationship with 'replication'? I just downloaded
4.0.8. All it did was act as a slave. It crashed after a large
'load table' from the master and now refuses to start. It looks like
it crashes as it reads the relay-log-info file, or just after it does
so.

At least that's my latest theory after doing more backtrace resolving,
stracing and experimenting.

I did submit one or two bug reports about this.







Re: R: Mysql Replication

2002-11-21 Thread Johannes Ullrich

 Then, the master writes, updates, deletes, etc.;
 the slaves answer SELECTs.

 If this is impossible, what is the point of having slaves???

It is possible. Follow the manual for a good start. A couple of
caveats:

- your application has to be able to send the selects to the different
  (read-only) databases on its own. mysql will not help you with the
  actual query funneling.

- be aware that the slave may not be up to date. All replication is
  done serially: an update that takes a long time will block
  all other updates from the master.

- if you use 'LOAD DATA INFILE', make sure you do not delete the
  data file until all slaves have it imported.








mysql 3.23.53a-Max crashes on large memory machine

2002-11-21 Thread Johannes Ullrich

I am having 'issues' with MySQL running on RedHat Advanced Server
on an 8 GByte machine (dual P4 Xeon).

After large imports ('load data infile', file size about 1 GByte) into
a large table (20-30 GByte, >100 million rows), the database crashes.

I did try several key_buffer_size settings. The example below had the
key_buffer_size set to 6 Gig (but as you see, it shows up as -50 MByte ?).
A key buffer size of a bit less than 4 Gig didn't help. I did try a
couple of other versions, including 4.0.4, and all of them crashed.

The machine is running only one mysql instance at a time. The main
issue I am trying to solve is how to optimize lookups against this one
big table. Its index alone is about 10-15 gig...
 
From the log file:

-

mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, something is
definitely wrong and this may fail.

key_buffer_size=-50335744
record_buffer=209711104
sort_buffer=209715192
max_used_connections=2
max_connections=100
threads_connected=2
It is possible that mysqld could use up to 
key_buffer_size + (record_buffer + sort_buffer)*max_connections = 3161707 K
bytes of memory
Hope that's ok, if not, decrease some variables in the equation

Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Stack range sanity check OK, backtrace follows:
0x806eeb4
0x82d9b38
0x830592f
0x80a194f
0x807598a
0x80742e7
Stack trace seems successful - bottom reached
Please read http://www.mysql.com/doc/U/s/Using_stack_trace.html and follow
instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do 
resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at (nil) is invalid pointer
thd->thread_id=9






Re: performance tuning

2002-11-21 Thread Johannes Ullrich

 MySQL 3.23.51
 Linux Kernel 2.4.16
 we do have 1 GB of RAM
 
 the main problem seems to be a table with about 8.597.146 records.

Similar situation here (100 million rows).

Things I found that help:

- be selective about which columns you index. Try to limit yourself
  to one index.
- increase the key buffer size (watch memory / swap usage as you
  do this).
- maybe you need more RAM (but see my other post about a question
  I had about large-RAM machines and mysql crashing ;-) )


And, more difficult: look at your application design and try
to come up with 'cache tables' that are generated periodically by a
cron job and are used for most queries (see the sketch below).
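
(A sketch of the rebuild-and-swap pattern; table and column names are
made up:)

CREATE TABLE daily_summary_new
    SELECT day, COUNT(*) AS hits FROM big_table GROUP BY day;
RENAME TABLE daily_summary TO daily_summary_old,
             daily_summary_new TO daily_summary;   -- atomic swap
DROP TABLE daily_summary_old;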









mysql crashes after adding memory.

2002-08-08 Thread Johannes Ullrich


  Yesterday, I increased the memory in my mysql server from 
2 GByte to 4 GByte. 

  Here is the log file as it died:

Number of processes running now: 1
mysqld process hanging, pid 1015 - killed
020808 09:40:12  mysqld restarted
020808  9:40:12  Can't start server: Bind on TCP/IP port: Address already in use
020808  9:40:12  Do you already have another mysqld server running on port: 3306 ?
020808  9:40:12  Aborting

020808  9:40:12  /usr/local/mysql/bin/mysqld: Shutdown Complete

020808 09:40:12  mysqld ended


   The database was under heavy load at the time. The machine is
a 'vanilla' dual 1 GHz P-III running RedHat Linux and a 2.4.17 kernel.
Server version: 3.23.49a-max-log.

   After doing the memory upgrade, I also adjusted some of the
parameters; in particular, I increased the key buffer to 2 Gig.

   The database has a very large (100,000,000 row, 30 GByte)
MyISAM table, and a few smaller myisam and innodb tables. It ran
OK for at least half a year (since the last machine reboot). So far,
it does not look like any data was corrupted.

   Usually, only a small number of clients (10 or so) are connected.

Here is my my.cnf file:

set-variable=max_connections=200
set-variable=max_allowed_packet=10M
set-variable=key_buffer=2000M
set-variable=join_buffer_size=200M
set-variable=sort_buffer=150M
set-variable=record_buffer=50M
set-variable=record_rnd_buffer=50M
set-variable=table_cache=200
set-variable=thread_concurrency=4
set-variable=long_query_time=5

innodb_data_home_dir=/usr/local/mysql/innodb
innodb_data_file_path=data1/ibdata1:1M;data2/ibdata1:1M
set-variable=innodb_buffer_pool_size=50M
set-variable=innodb_additional_mem_pool_size=10M
innodb_log_group_home_dir=/opt/innodb_logs
innodb_log_arch_dir=/opt/innodb_logs
innodb_log_archive=1
set-variable = innodb_log_files_in_group=3
set-variable = innodb_log_file_size=3M
set-variable = innodb_log_buffer_size=100
innodb_flush_log_at_trx_commit=1
set-variable = innodb_file_io_threads=4
set-variable = innodb_lock_wait_timeout=50
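
(Rough worst-case arithmetic for these settings: the per-connection
buffers alone are join 200M + sort 150M + record 50M + record_rnd 50M
= 450M each, so key_buffer 2000M + 10 clients x 450M = 6500M, which is
more than the 4 GByte of RAM and far beyond what a 32-bit mysqld on a
P-III can address.)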






Re: database create fails with error 28

2002-08-08 Thread Johannes Ullrich


 What the hell is error 28? Where can I find description?
 mysqlcheck --all-databases # says everything is ok

mysql does come with a little utility, perror, which can be
used to translate error numbers into readable messages:

$ perror 28
Error code  28:  No space left on device

Did you check if you have enough disk space? 
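
(E.g., assuming the default data directory:)

$ df -h /var/lib/mysql
$ df -i /var/lib/mysql   # inodes can run out even when space is free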
