Re: Spontaneous corrupted tables on MyISAM with 5.0.x release

2008-01-31 Thread Markus Fischer

Hi Gerald, Michael,

Gerald L. Clark wrote:

Never put MySQL data files on an NFS-exported share.


Michael Dykman wrote:
 This is a bit of a generalization, but file locks are known to be
 pretty flaky and unreliable under NFS. Any kind of serious load
 invites the race conditions which file locks normally sort out. I have had
 similarly very bad luck using MySQL across NFS.


Thanks for the input. I've talked with our hoster; they told me that, 
technically, it doesn't matter whether the mount points are directly mounted via 
NFS or whether they're VMware discs: the product operates solely on NFS, 
there are no real physical drives involved ...


thanks,
- Markus

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Spontaneous corrupted tables on MyISAM with 5.0.x release

2008-01-30 Thread Markus Fischer

Hi,

our scenario (was):
server1: 5.0.32-Debian_7etch1-log
server2: 5.0.32-Debian_7etch1-log

Hardware-wise (attention, VMware, see below) they're equal: a ~1GHz CPU and at 
minimum 2GB RAM.


Suddenly, about 4 to 6 weeks ago, server1 started getting serious problems with 
spontaneous corrupted tables, so in the end our hoster upgraded server1 
to 5.0.51-2-log, which is what we're currently running.


Unfortunately, things haven't changed a bit. server2 is running different 
databases/applications, some tables are replicated from server1 to server2, 
some from server2 to server1. However, server2, as far as I can remember, 
never had those spontaneous table problems and still hasn't (yet).


Both servers are running on VMware (I think ESX is the product our hoster is 
using) and the MySQL data files are on an NFS-exported share. That 
share/fileserver is reported to be some kind of über-beast killer machine. 
The servers running in VMware don't contain virtual VMware hard discs but have 
NFS-mounted root, data, etc. partitions.


We're getting spontaneous corrupted tables very often, daily even, on the server 
with the upgraded version, and this gets us into real trouble.


The only pattern so far:
1) it only affects MyISAM tables
2) once converted to InnoDB, no troubles (so far)

Unfortunately there are tables we weren't able to convert because they contain 
a fulltext index. Interestingly, once we converted all other tables in a 
database, the remaining MyISAM tables didn't crash anymore (so far).


Besides this, there's no distinct pattern, because we get crashes
* on high and low traffic tables
* on intensive and non-intensive write tables
* on big (range between 100 and 150MB) and small tables

It often occurred that a simple REPAIR TABLE statement didn't help. 
Sometimes EXTENDED was required, sometimes an offline repair with 
myisamchk had to be done.
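The escalation just described can be sketched as follows (table and path names are placeholders, not from the actual setup):

```sql
CHECK TABLE some_table;            -- detect the corruption
REPAIR TABLE some_table;           -- quick repair of the data file
REPAIR TABLE some_table EXTENDED;  -- slower, row-by-row rebuild
-- Last resort, offline (server stopped, or table flushed and locked):
--   shell> myisamchk --recover /var/lib/mysql/dbname/some_table.MYI
```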


The tables didn't crash because the whole MySQL server went down; this 
happened while the server was running.


We've been running the applications using these databases for years. There were 
two major changes during the last year:

* our hoster moved the MySQL server from a physical machine to VMware
* we upgraded (or rather, let our hoster upgrade) from some MySQL 4 version to 
the versions mentioned above.


We don't use any fancy stuff, just more or less simple SELECT, DELETE, UPDATE and 
INSERT. No subselects, no triggers, no stored procedures, no key constraints, 
etc., no locking, no REPLACE.


Our hoster refers to the following MySQL bugs:
http://bugs.mysql.com/bug.php?id=28154
http://bugs.mysql.com/bug.php?id=33596

However, especially for 33596, I don't see any relation, because the issue 
described there never applied to us.


For 28154: unfortunately I don't remember seeing this 127 error; if it 
ever occurred, then only a long time ago. Recent errors are just corrupted 
tables once we start seeing problems in our web application. Our thread cache 
size is 128. Mentioned in #28154 is http://bugs.mysql.com/bug.php?id=29838 ; 
I think that's the reason why our hoster upgraded to 5.0.51.



For what it's worth, I just can't believe that MyISAM is completely at 
fault. If it had such problems, I couldn't believe it would be in a stable 
product. I'm keen not just to fix the problems but also to find out what 
the cause really is.


I would be glad for any help on this matter and I'm happy to provide any 
information you want.


thanks,
- Markus




Re: Performance problem with ~0.5 GB table

2008-01-02 Thread Markus Fischer

Hello,

I've learned a bit about the environment this server is running in. It's
VMware with root NFS and storage NFS mount points for MySQL. I've been
told the throughput over NFS for my server is from 20 to 30 MB/s.

The server has 3GB RAM. I'm not sure about its CPU performance, but the
information I have is that when connections start hanging and the effect
basically multiplies, the CPU load goes up.

Does this sound like a possible bottleneck?

thanks,
- Markus

Markus Fischer wrote:
 Hi,
 
 I'm using phorum [1] and made some custom queries against their
 database. My query looks like this:
 
 SELECT
   message_id,  subject,  datestamp,  forum_id,  thread
 FROM
   phorum_messages
 WHERE
   forum_id IN (1, 2, 3, 4) AND parent_id = 0 AND
   thread != 0 AND status = 2 AND closed = 0
 ORDER BY datestamp DESC LIMIT 3
 
 The table phorum_messages is about 500MB in size. The problem is that
 such a query often starts to hang in the "Sorting result" phase. This
 can take up to minutes, and during this time the problems really start: more
 and more such queries come in, each of them hanging for the same
 reason, and after a few minutes the maximum number of connections is
 reached (currently 170) and everything is dead. Only killing the queries
 manually helps.
 
 My guess is that the filesort is problematic and so I tried to avoid it
 with the following things.
 
 When I use explain on the query I get back the following:
 
            id: 1
   select_type: SIMPLE
         table: phorum_messages
          type: range
 possible_keys: thread_message, thread_forum, status_forum,
                list_page_float, list_page_flat, dup_check,
                last_post_time, forum_max_message, post_count
           key: post_count
       key_len: 9
           ref: NULL
          rows: 1311
         Extra: Using where; Using filesort
 
 When I remove the ORDER BY clause, the query is *not* using
 filesort. However, as you can guess, the ORDER BY is necessary. The goal of
 the query is to get the newest postings in the selected forums.
 
 The MySQL documentation [2] says that in certain cases it should be
 possible to create appropriate keys that even an ORDER BY can take
 advantage of, but I was unable to come up with such a key.
 
 Is there a recommendation on how to go about this?
 
 thanks,
 - Markus
 
 [1] http://www.phorum.org/
 [2] http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html
 




Performance problem with ~0.5 GB table

2007-12-30 Thread Markus Fischer
Hi,

I'm using phorum [1] and made some custom queries against their
database. My query looks like this:

SELECT
  message_id,  subject,  datestamp,  forum_id,  thread
FROM
  phorum_messages
WHERE
  forum_id IN (1, 2, 3, 4) AND parent_id = 0 AND
  thread != 0 AND status = 2 AND closed = 0
ORDER BY datestamp DESC LIMIT 3

The table phorum_messages is about 500MB in size. The problem is that
such a query often starts to hang in the "Sorting result" phase. This
can take up to minutes, and during this time the problems really start: more
and more such queries come in, each of them hanging for the same
reason, and after a few minutes the maximum number of connections is
reached (currently 170) and everything is dead. Only killing the queries
manually helps.

My guess is that the filesort is problematic and so I tried to avoid it
with the following things.

When I use explain on the query I get back the following:

           id: 1
  select_type: SIMPLE
        table: phorum_messages
         type: range
possible_keys: thread_message, thread_forum, status_forum,
               list_page_float, list_page_flat, dup_check,
               last_post_time, forum_max_message, post_count
          key: post_count
      key_len: 9
          ref: NULL
         rows: 1311
        Extra: Using where; Using filesort

When I remove the ORDER BY clause, the query is *not* using
filesort. However, as you can guess, the ORDER BY is necessary. The goal of the
query is to get the newest postings in the selected forums.

The MySQL documentation [2] says that in certain cases it should be
possible to create appropriate keys that even an ORDER BY can take
advantage of, but I was unable to come up with such a key.

Is there a recommendation on how to go about this?
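For reference, the kind of index the optimization page talks about would combine the WHERE columns with the ORDER BY column (the index name is made up; whether it actually removes the filesort here is doubtful, because the IN() range on forum_id generally prevents MySQL 5.0 from reading such an index in datestamp order):

```sql
-- Hypothetical composite index: equality/range columns first,
-- the ORDER BY column last.
ALTER TABLE phorum_messages
  ADD INDEX list_newest (forum_id, parent_id, status, closed, datestamp);
-- A known workaround for the IN() limitation is one ordered, LIMITed
-- query per forum_id, merged with UNION ALL and sorted over the small
-- combined result.
```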

thanks,
- Markus

[1] http://www.phorum.org/
[2] http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html




Problems with migration from 4.0.24 to 5.0.32 with unique varchar

2007-03-06 Thread Markus Fischer

Hi,

I'm having trouble with the following:

In the MySQL 4 database I had the following values in a unique varchar
column and there was no problem:

Gross
Groß

(the latter has the German sharp s character)

Now whenever I import this into the MySQL 5 database I get a duplicate
key error; I tried the collations utf8_bin and latin1_bin already
(started out with latin1_german1 and latin1_german2).

How can I solve this?
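For illustration, the behaviour can be pinned down to the collation (table and column names are made up; the MySQL manual documents that latin1_german2_ci, the DIN-2 "phone book" order, treats 'ß' as equal to 'ss', while a _bin collation compares byte-for-byte):

```sql
CREATE TABLE t_din2 (
  name VARCHAR(32) CHARACTER SET latin1 COLLATE latin1_german2_ci UNIQUE
);
INSERT INTO t_din2 VALUES ('Gross');
INSERT INTO t_din2 VALUES ('Groß');  -- duplicate-key error: ß = ss here

CREATE TABLE t_bin (
  name VARCHAR(32) CHARACTER SET latin1 COLLATE latin1_bin UNIQUE
);
INSERT INTO t_bin VALUES ('Gross');
INSERT INTO t_bin VALUES ('Groß');   -- accepted: the bytes differ
```

If the error shows up even with a _bin collation, it is worth verifying with SHOW CREATE TABLE that the imported column really ended up carrying that collation.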

thanks,
- Markus




Logging from mysql only from certain applications

2006-09-19 Thread Markus Fischer

Hi,

we've enabled /var/log/mysql.log on our server (5.0.24a on Debian
testing). However, I only want to log commands from certain applications,
or rather, certain users.

When I look at some log output, it looks like this:

060919  9:53:38
15945 Connect [EMAIL PROTECTED] on
15945 Query   SET AUTOCOMMIT=0
15945 Init DB somedb
15945 Query   SELECT * FROM [...]
15945 Query   SELECT * FROM [..]
15945 Query   UPDATE [...]
15945 Query   COMMIT

The way I read this is that once I have a Connect command, the ID
before it uniquely identifies all further SQL commands belonging to this
connection - i.e. to this MySQL user.

Now what I need is something similar to "tail -f", but only for a selected
connected user. There's so much going on on the server that it's impossible
to look over all the data without a tool.

Before I re-invent the wheel I'd like to know if there are tools/helpers
like this available ... ?

thanks,
- Markus




Re: Logging from mysql only from certain applications

2006-09-19 Thread Markus Fischer

Markus Fischer wrote:
 Before I re-invent the wheel I'd like to know if there are tools/helpers
 like this available ... ?

Being impatient and in need of this I've written my own version. Maybe
it is valuable to others.

http://markus.fischer.name/lab/php/mysqltail.php.txt

Basic usage:

php mysqltail.php /var/log/mysql.log userToTrace

Stop the application as usual with CTRL-C.

- Markus




Re: Matching of german umlauts with LIKE

2006-03-23 Thread Markus Fischer

Hi,

I realized this also. Martin Schwarz suggested to me off-list to use
REGEXP for that purpose. That means I'm rewriting the user input
"übersee" behind the scenes to "^(ü|ue)bersee.*", which works quite well.

The only gotcha is that the user may enter special regexp characters
as well, so I had to make a list of them and escape them prior to modifying
the string.
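Assuming the person/name example from the earlier thread, the rewritten query would reach the server roughly as follows (before building the pattern, every regexp metacharacter in the user input has to be backslash-escaped, as described):

```sql
-- '^' anchors at the start of the name; '(ü|ue)' covers both spellings.
SELECT * FROM person WHERE name REGEXP '^(ü|ue)bersee';
```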

thanks,
- Markus

sheeri kritzer wrote:
 MySQL doesn't have anything like that.  You can use the wildcard
 characters instead of the umlauts if you want, such as
 
 SELECT * from person where name like '%bersee'
 which would get
 "übersee" and "uebersee"
 but also a whole lot more.
 
 But doing something like
 SELECT * from person where name like '_bersee' or name like '__bersee'
 might work -- the underscore means 1 of any character, so here the
 only noise you'd get are other folks whose names are
 "_" or "__" plus "bersee".
 So there's still a margin for error.
 
 Unfortunately, there's no special case for "hey, when you're looking
 at LIKE, I want to define that x=y" -- particularly when x and y have
 differing numbers of characters.
 
 -Sheeri
 On 3/22/06, Markus Fischer [EMAIL PROTECTED] wrote:
 Hi,
 
 what is the best way to match German umlauts like 'ä' and also their
 alternative spelling 'ae'?
 
 For example, I'm searching for "übersee" and I also want to find the word
 "uebersee" in the database. The words are actually names of persons.
 
 One possibility is to dynamically expand the SQL statement if such
 special characters are found. So the search term "übersee" will be
 expanded to SELECT * FROM person WHERE name LIKE 'übersee%' OR name
 LIKE 'uebersee%', but this is getting dirty and very, very long if
 multiple umlauts are used to cover all cases ...
 
 So the other idea is to store the name twice in the database for every
 person, where the second version of the name is a normalized form in which
 all special characters are replaced with their alternative spelling. E.g.
 I store "übersee" in the field name and "uebersee" in name2, and when
 matching I match against name2. If the field contained more
 special characters it would still work without much more effort, e.g. if name
 is "überseemöbel" then name2 would be "ueberseemoebel", and when the term
 "überseemö" is entered it's also normalized to "ueberseemoe" and the
 LIKE statement will still match. Basically this is some kind of
 primitive stemming, like Lucene does it.
 
 Is there maybe some built-in support from MySQL for such special cases?
 
 thanks for any pointers,
 - Markus








Matching of german umlauts with LIKE

2006-03-22 Thread Markus Fischer

Hi,

what is the best way to match German umlauts like 'ä' and also their
alternative spelling 'ae'?

For example, I'm searching for "übersee" and I also want to find the word
"uebersee" in the database. The words are actually names of persons.

One possibility is to dynamically expand the SQL statement if such
special characters are found. So the search term "übersee" will be
expanded to SELECT * FROM person WHERE name LIKE 'übersee%' OR name
LIKE 'uebersee%', but this is getting dirty and very, very long if
multiple umlauts are used to cover all cases ...

So the other idea is to store the name twice in the database for every
person, where the second version of the name is a normalized form in which
all special characters are replaced with their alternative spelling. E.g.
I store "übersee" in the field name and "uebersee" in name2, and when
matching I match against name2. If the field contained more
special characters it would still work without much more effort, e.g. if name
is "überseemöbel" then name2 would be "ueberseemoebel", and when the term
"überseemö" is entered it's also normalized to "ueberseemoe" and the
LIKE statement will still match. Basically this is some kind of
primitive stemming, like Lucene does it.
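As a sketch, the normalized shadow column could even be populated in SQL itself (the column name name2 follows the example; the replacement list is illustrative, not exhaustive):

```sql
-- Build the normalized form used only for matching.
UPDATE person
   SET name2 = REPLACE(REPLACE(REPLACE(REPLACE(
               LOWER(name), 'ä', 'ae'), 'ö', 'oe'), 'ü', 'ue'), 'ß', 'ss');

-- The application normalizes the search term the same way, then:
SELECT * FROM person WHERE name2 LIKE 'ueberseemoe%';
```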

Is there maybe some built-in support from MySQL for such special cases?

thanks for any pointers,
- Markus




Index and multiple fields

2006-03-17 Thread Markus Fischer

Hi,

is there, performance-wise, any difference whether I create one index
covering multiple fields or give each field its own index? I'm running 4.0.16.

thx,
- Markus





Re: Index and multiple fields

2006-03-17 Thread Markus Fischer

Hi,

thanks, somehow I wasn't able to find those pages.

Basically, this means if I have a table like this

id1
id2
id3
id4
id5

and I have two different select statements:

select * from ... where id1 = .. and id2 = ..

and the other being

select * from ... where id3 = .. and id4 = ..

I would create two indexes, one for id1/id2 and the other for id3/id4,
right?
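Expressed as DDL (table and index names made up):

```sql
-- One composite index per query shape; inside each index the column
-- order matches the equality tests of the corresponding WHERE clause.
CREATE INDEX idx_id1_id2 ON some_table (id1, id2);
CREATE INDEX idx_id3_id4 ON some_table (id3, id4);
```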

again, thanks

- Markus

Косов Евгений wrote:
 Hi, Mark!
 
 Of course, it depends on the queries you are running.
 
 I believe you can find all the answers here:
 http://dev.mysql.com/doc/refman/5.0/en/indexes.html
 http://dev.mysql.com/doc/refman/5.0/en/multiple-column-indexes.html
 http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
 
 
 Markus Fischer wrote:
 Hi,
 
 is there, performance wise, any difference whether I create one index
 for multiple fields or each field with its own index? I'm running 4.0.16.
 
 thx,
 - Markus
 






Re: Unable to drop table, error 1051

2004-09-29 Thread Markus Fischer
Egor Egorov wrote:
Bug filed: http://bugs.mysql.com/bug.php?id=5784
Thank you!
Nice, thanks to you too ;-)
regards,
- Markus


Re: Unable to drop table, error 1051

2004-09-24 Thread Markus Fischer
Hi,
Egor Egorov wrote:
Can you create a test case? I.e. a .sql file which is supposed to drop the 
table well but instead fails? 

This will help us determine if it's a bug and fix if it is. 
Thanks, this hit a pretty interesting nail for me: I can dump the table, but I can't 
load the dump into the database again. I get:
ERROR 1005 at line 28: Can't create table './test1/aktionen_produkte.frm' (errno: 150)
The definition of the table is:
CREATE TABLE `aktionen_produkte` (
 `ap_id` int(10) unsigned NOT NULL auto_increment,
 `ap_pr_id_produkt` int(10) unsigned NOT NULL default '0',
 `ap_ak_id_aktion` int(10) unsigned NOT NULL default '0',
 PRIMARY KEY  (`ap_id`),
 KEY `aktionen_produkte_FKIndex1` (`ap_ak_id_aktion`),
 KEY `aktionen_produkte_FKIndex2` (`ap_pr_id_produkt`),
 CONSTRAINT `aktionen_produkte_ibfk_1` FOREIGN KEY (`ap_ak_id_aktion`) REFERENCES 
`aktionen` (`ak_id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
 CONSTRAINT `aktionen_produkte_ibfk_2` FOREIGN KEY (`ap_pr_id_produkt`) REFERENCES 
`produkte` (`pr_id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) TYPE=InnoDB;
I guess I can see where the problem is: the order of creation of the tables is 
important, because it tries to create constraints to tables which do not exist (yet) 
because they are further down in the dump file.
Manually reordering doesn't seem very sexy to me. I read the mysqldump manpage but it 
doesn't mention an option to have mysqldump respect the ordering. Any ideas?
thanks,
- Markus


Re: Unable to drop table, error 1051

2004-09-24 Thread Markus Fischer
Hi,
Egor Egorov wrote:
Can you create a test case? I.e. a .sql file which is supposed to drop the 
table well but instead fails? 
Never mind my last post, I found the workaround to disable foreign_key_checks 
during import; interesting.
Here is a small example:
set foreign_key_checks=0;
CREATE TABLE `produkt_kategorie` (
 `pk_id` int(10) unsigned NOT NULL auto_increment,
 `pk_pt_id_typ` int(10) unsigned NOT NULL default '0',
 `pk_kategorie` varchar(255) default NULL,
 `pk_sortid` int(10) unsigned default NULL,
 PRIMARY KEY  (`pk_id`),
 KEY `produk_kategorie_FKIndex1` (`pk_pt_id_typ`),
 CONSTRAINT `produkt_kategorie_ibfk_1` FOREIGN KEY (`pk_pt_id_typ`) REFERENCES 
`produktkategorie_typ` (`pt_id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) TYPE=InnoDB;
CREATE TABLE `produktkategorie_typ` (
 `pt_id` int(10) unsigned NOT NULL auto_increment,
 `pt_name` varchar(255) default NULL,
 `pt_sortid` int(10) unsigned NOT NULL default '0',
 PRIMARY KEY  (`pt_id`)
) TYPE=InnoDB;
set foreign_key_checks=1;
When I now try to drop the table produktkategorie_typ I get the error:
#1051 - Unknown table 'produktkategorie_typ'

I think it's because of the constraint, but then the error message is misleading.
So basically I would need to disable foreign_key_checks every time I drop a table 
which has constraints?
I see the bigger picture now, I guess; I just didn't while trying to achieve this 
with phpMyAdmin.
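A sketch of both ways out (the constraint name is the one from the CREATE TABLE statement shown in the earlier mail):

```sql
-- Either disable the checks around the drop, as during the import:
SET foreign_key_checks = 0;
DROP TABLE produktkategorie_typ;
SET foreign_key_checks = 1;

-- ... or drop the referencing constraint first, then the table
-- (ALTER TABLE ... DROP FOREIGN KEY exists as of MySQL 4.0.13):
ALTER TABLE produkt_kategorie DROP FOREIGN KEY produkt_kategorie_ibfk_1;
DROP TABLE produktkategorie_typ;
```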
thanks,
- Markus


Unable to drop table, error 1051

2004-09-22 Thread Markus Fischer
Hi,
I have a problem in that I can't drop a certain table; I always get back the error 
"unknown table".
Version: 4.0.21 (Debian Testing)
Table-Type: InnoDB
mysql> show tables;
[...]
| produkt_kategorie |
mysql> drop table produkt_kategorie;
ERROR 1051: Unknown table 'produkt_kategorie'
The ownership/permissions seem right to me too:
-rw-rw  1 mysql mysql 8670 Sep 21 11:23 produkt_kategorie.frm
mysql> describe produkt_kategorie;
+--------------+------------------+------+-----+---------+----------------+
| Field        | Type             | Null | Key | Default | Extra          |
+--------------+------------------+------+-----+---------+----------------+
| pk_id        | int(10) unsigned |      | PRI | NULL    | auto_increment |
| pk_pt_id_typ | int(10) unsigned |      | MUL | 0       |                |
| pk_kategorie | varchar(255)     | YES  |     | NULL    |                |
| pk_sortid    | int(10) unsigned | YES  |     | NULL    |                |
+--------------+------------------+------+-----+---------+----------------+
pk_pt_id_typ is from a 1:n relation to the table produktkategorie_typ.
I do the operation as user 'root' which has all access-rights.
When I view the table in phpmyadmin (using 2.5.7pl1) I also see this additional 
information:
InnoDB free: 44032 kB; (`pk_pt_id_typ`) REFER `produktkategorie_typ`(`pt_id`) ON 
DELETE NO ACTION ON UPDATE NO ACTION
I've used DbDesigner4 to design the table and then used the synchronisation feature 
to create the tables in the database.
I'm using a 1:n relation from another table, produktkategorie_typ, to this table. I've 
created the reference definitions but haven't yet assigned any actions (thus NO ACTION 
is defined, as seen above).
I believe it has to do with the relation/reference definitions from InnoDB, so to me 
the message seems misleading.
The only way for me to drop the table is to completely drop the database and recreate it.
thanks for any hints
- Markus


Re: Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-18 Thread Markus Fischer
Hi,

On Tue, Dec 16, 2003 at 10:23:05PM +1100, Chris Nolan wrote : 
 How heavy is your usage of TEMPORARY TABLES? I don't use them much
 myself, but I'm sure that the others on the list will have something
 to say in that regard.

Here are the relevant numbers:

  Created_tmp_disk_tables | 21911
  Created_tmp_tables      | 329778

This is from an uptime of about 40 hours, which puts Created_tmp_disk_tables
at around 6% of all temporary tables during this time.
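The same figure can be checked directly from the counters (values as quoted above):

```sql
SHOW STATUS LIKE 'Created_tmp%';

-- Share of implicit temporary tables that went to disk:
SELECT 21911 / 329778;   -- roughly 0.066, i.e. about 6-7%
```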

 To get a better look at MySQL's usage of memory, you could try looking 
 at the output of
 SHOW STATUS .

I don't want to look silly, but actually having the show status
output in front of me doesn't tell me much about the memory usage;
at least I can't find it ;)

There are things like Qcache_lowmem_prunes and Qcache_free_memory, but
these are the only ones related to memory. Can you advise me where
exactly to look?

- Markus

Ps: I'm using version 4.0.14




Re: Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-18 Thread Markus Fischer
On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote : 
 In the last episode (Dec 16), Markus Fischer said:
  I'm investigating a performance problem with a MySQL server setup. The
  server is running Linux with 1GB RAM. I'd like to tune the
  configuration of the server to use as much RAM as possible without
  swapping to the disc because of the big slowdown.
  
  [mysqld]
  set-variable= key_buffer=16M
 
 Way too low; this says only cache 16MB of index data.  Try 256MB.
 
  set-variable= max_allowed_packet=200M
 
 Probably way too high, but doesn't hurt since it won't actually
 allocate that much memory unless you have a 200MB field someplace.

Thanks for pointing this out. I've lowered max_allowed_packet
and increased the key_buffer.
 
  I'm also not sure whether the database is swapping temporary tables
  to the disc or not; is there a way to verify if this is happening?
 
 show status like 'created_tmp%';
 
 Raising sort_buffer_size and join_buffer_size may also help if your
 queries pull a lot of records.

From what I read in the manual, sort_buffer_size is only used by
the isamchk tools, isn't it? I've adapted join_buffer_size though;
thanks.

- Markus




Re: Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-18 Thread Markus Fischer
On Thu, Dec 18, 2003 at 10:37:46AM -0600, Dan Nelson wrote : 
 In the last episode (Dec 18), Markus Fischer said:
  On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote : 
   Raising sort_buffer_size and join_buffer_size may also help if your
   queries pull a lot of records.
  
  From what I read from the manual, sort_buffer_size is only used
  for the isamchk tools, isn't it? I've adapted join_buffer_size
  though; thanks.
 
 It's used for any sorting:
 
* `sort_buffer_size' Each thread that needs to do a sort allocates a
  buffer of this size. Increase this value for faster `ORDER BY' or
  `GROUP BY' operations.  *Note Temporary files::.

Of course, sorry, I got confused then.

thanks,

- Markus




Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-16 Thread Markus Fischer
Hello,

I'm investigating a performance problem with a MySQL server setup.
The server is running Linux with 1GB RAM. I'd like to tune the
configuration of the server to use as much RAM as possible without
swapping to the disc because of the big slowdown.

The current configuration is as follows (I've just pasted the
relevant pieces):

[mysqld]
skip-innodb
skip-locking
set-variable= key_buffer=16M
set-variable= max_allowed_packet=200M
set-variable= thread_stack=128K
bind-address= 127.0.0.1
port= 3306
skip-networking
set-variable = query_cache_type=1
set-variable = query_cache_size=64M
set-variable = tmp_table_size=50M


The main purpose of the database is a heavily modified forum
application based on vBulletin.

One of the biggest tables is the 'post' table, which consumes about
617M currently; this table is often used in queries for index pages,
which are the most frequently visited pages; indexes for the relevant
columns in the WHERE clause are all created. The next tables in
size are around 22M, and most are much smaller.


I'm also having problems determining accurately how much memory
MySQL uses when running on the system; the information provided by
ps doesn't seem reliable due to the many processes/threads. Getting
the currently used memory, and whether MySQL needed to swap, would be
very valuable in chasing down this issue.

I'm also not sure whether the database is swapping temporary tables
to the disc or not; is there a way to verify if this is happening?

Other beneficial things would be to know how much memory certain
queries need (mostly the queries on index pages).


Besides more physical memory, are there other ways to better tune the
server for its 1GB of RAM? What performance analysis tools are
recommended for use with MySQL?

thanks for any advice,

- Markus




Threads or Prozesses on Linux, and how to control them?

2002-06-23 Thread Markus Fischer

[Please CC: directly to me, I'm not subscribed]

Hi,

maybe a pointer to the documentation is sufficient, but does
mysqld use Linux threads or processes?

And is there a way to control them? After I start up mysqld I
see 3 processes with ps. After some time (e.g. 12 hours) I
spotted about 24 (!) of them.

I've enabled SQL logging and see that requests come in only
every few seconds, often less frequently. I never see the number
of processes shrinking, and I don't remember setting a limit
anywhere (compared to Apache, where I can control this).

On http://www.mysql.com/doc/M/y/MySQL_threads.html I found

Every connection has its own thread.

A few sentences above on this page there was talk about
TCP/IP, but I'm only using sockets the whole time, so
does this apply there too?

How long is the lifetime of a connection if no more
requests are coming? mysqld is accessed with PHP and
I'm using a normal mysql_connect(), so connections aren't
persistent and should go away after some time?


thx for any hints,

- Markus

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Threads or Processes on Linux, and how to control them?

2002-06-23 Thread Markus Fischer

On Sun, Jun 23, 2002 at 08:02:45PM +0100, Paul Roberts wrote : 
 php will close the connection when the script finishes unless
 you are using mysql_pconnect which opens a persistent
 connection.

Yup, every resource besides persistent ones is cleaned up
upon script termination.

 From: Joseph Bueno [EMAIL PROTECTED]
 To: Markus Fischer [EMAIL PROTECTED]
 mysqld uses threads (you already found the right documentation).
 
  How long is the lifetime of a connection if no more requests
  are coming? mysqld is accessed with PHP, and I'm using a normal
  mysql_connect(), so connections aren't persistent and should go
  away after some time?
  
  
 
 The thread is created when a client opens a new connection and destroyed 
 when the client closes the connection (unless you explicitely ask mysqld 
 to keep some of them, see 'thread_cache_size' server parameter).

Ok, I do not use thread_cache_size, and according to the MySQL
documentation the default thread_cache_size is 0. Since PHP
then closes the connection, how is it possible that there are
about 22-24 threads running? This site is by no means a
busy site. If I check the SQL log, it takes up to 5 to 10
seconds before a new query occurs, which means that I seldom
get two connection attempts in the same second ...

It doesn't seem to exceed the limit of 24, though.
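The thread lifecycle Joseph describes can be sketched as a toy model: a closing connection's thread is kept only if the cache has room (thread_cache_size), otherwise it exits, and the next connection must spawn a new one. The counter name mirrors the real Threads_created status variable, but everything here is simulated for illustration:

```python
# Toy model of mysqld's thread cache, as described above. With
# thread_cache_size = 0 (the default), every connection spawns a
# fresh thread; with a cache, sequential connections reuse one thread.

class ThreadCache:
    def __init__(self, thread_cache_size):
        self.size = thread_cache_size
        self.cached = 0
        self.threads_created = 0   # mirrors SHOW STATUS 'Threads_created'

    def connect(self):
        if self.cached:
            self.cached -= 1       # reuse a cached thread
        else:
            self.threads_created += 1  # spawn a new one

    def disconnect(self):
        if self.cached < self.size:
            self.cached += 1       # keep the thread around for reuse
        # else: the thread simply exits

server = ThreadCache(thread_cache_size=0)
for _ in range(10):                # ten sequential connect/disconnect cycles
    server.connect()
    server.disconnect()
print(server.threads_created)      # with cache size 0, each cycle made a thread
```

Note this models thread creation over time, not the steady count `ps` shows; old LinuxThreads-era `ps` listed each thread as its own process, which is why an idle server can still show a couple of dozen entries.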

just curious, and thanks for the information,

- Markus





Re: C# and MySQL without ODBC

2002-01-14 Thread Markus Fischer

On Mon, Jan 14, 2002 at 03:03:11PM +0200, Sinisa Milivojevic wrote : 
 Markus Fischer writes:
  Does someone know if there's a native C# implementation
  available OR a wrapper around libmysqlclient for C#?

 As far as I know, no not yet.

Ok, thx.

 But we are preparing our strategy on how to tackle the entire issue.

Hmm .. could you please elaborate a bit on what you mean
by this? thank you

- Markus





C# and MySQL without ODBC

2002-01-13 Thread Markus Fischer

[Please CC me, I'm not subscribed]

Does someone know if there's a native C# implementation
available OR a wrapper around libmysqlclient for C#?

thx,
Markus





Re: Once again ... row to column conversion

2001-02-01 Thread Markus Fischer

On Wed, Jan 31, 2001 at 08:50:44PM +0100, leo.putz wrote : 
 Sorry for having to post the same problem a second time -
 but by now I'm really at a loss.
 Is there really no one out there to give me a hint?
 
 I need the output of a query result (all query information is in one
 row) in one column (!). So, e.g., if you have all temperature lows in
 one row, how can I retrieve the data in form of one column?

How about CONCATenating them all?

SELECT CONCAT(id, ' ', temp1, ' ', temp2) AS allinone FROM Temp WHERE php meets zimt
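CONCAT gives one string rather than one real column; an alternative sketch that actually turns each value into its own row is UNION ALL. Shown here with SQLite purely to make the demo self-contained; the same SQL works in MySQL. The table and column names (Temp, temp1, temp2) follow the example above:

```python
# Sketch: turn one row's values into one column of rows via UNION ALL.
# SQLite is used only so the example runs standalone; the SQL itself
# is portable to MySQL.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Temp (id INTEGER, temp1 REAL, temp2 REAL)")
con.execute("INSERT INTO Temp VALUES (1, -3.5, -7.2)")

rows = con.execute(
    "SELECT temp1 AS val FROM Temp "
    "UNION ALL "
    "SELECT temp2 FROM Temp"
).fetchall()
print([r[0] for r in rows])  # each former column is now its own row
```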

m.

-- 
Markus Fischer,  http://josefine.ben.tuwien.ac.at/~mfischer/
EMail: [EMAIL PROTECTED]
PGP Public  Key: http://josefine.ben.tuwien.ac.at/~mfischer/C2272BD0.asc
PGP Fingerprint: D3B0 DD4F E12B F911 3CE1  C2B5 D674 B445 C227 2BD0
