How to configure libmysql to use a particular source IP?

2007-12-16 Thread Christian Jaeger

Hello

We have the following setup: host A is running several web 
applications from different customers, each application/customer 
(virtual host) having its own incoming IP. Host B is 
running a MySQL instance, which is used by the applications on A. 
Note that while every customer has a different incoming IP, for outgoing 
connections every application uses the same source IP, the first 
one of A.


Now one of the applications on A is randomly wreaking havoc, causing the 
MySQL server to block the host ("Host 'A' is blocked because of many connection 
errors; unblock with 'mysqladmin flush-hosts'"). Since all customers' 
applications use the same source IP, all of them are blocked at once.


So my question is: how can we tell libmysql to use a particular source 
IP, so that we can configure each customer to use a different one (e.g. 
its incoming IP)?


There are both PHP and Perl users, so it should preferably be a solution 
that works for both.
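
For what it's worth, newer client libraries (MySQL 5.6 and MariaDB
Connector/C, if I'm not mistaken) added a MYSQL_OPT_BIND option to
mysql_options() for exactly this; whether the PHP and Perl drivers in use
expose it is another question. A minimal C sketch against the libmysql
API, with placeholder host, credentials and source IP:

/* Sketch: bind the client side of a MySQL connection to a chosen source IP.
 * Assumes a libmysqlclient new enough to know MYSQL_OPT_BIND; the host,
 * credentials and IP below are placeholders. */
#include <mysql.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL) {
        fprintf(stderr, "mysql_init failed\n");
        return EXIT_FAILURE;
    }

    /* Source address for the outgoing TCP connection (placeholder). */
    const char *source_ip = "192.0.2.10";
    if (mysql_options(conn, MYSQL_OPT_BIND, source_ip) != 0) {
        fprintf(stderr, "MYSQL_OPT_BIND not supported by this client library\n");
        mysql_close(conn);
        return EXIT_FAILURE;
    }

    if (mysql_real_connect(conn, "host-B.example", "appuser", "secret",
                           "appdb", 3306, NULL, 0) == NULL) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        mysql_close(conn);
        return EXIT_FAILURE;
    }

    printf("connected, bound to source address %s\n", source_ip);
    mysql_close(conn);
    return EXIT_SUCCESS;
}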


Christian.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: best practice MySQl backup onto tape

2007-12-16 Thread Christian Jaeger

Jenny Chen wrote:

Hi, Keith,

In my case, it is just single MySQL server(no replication).
Thanks for your info, I'll check with mk-parallel-dump tools, since
the speed of backup is important considering the database is locked for the
duration of the backup.
  


Well, if you're saying that you're only using innodb, then you don't need 
to lock the server. I'm using the attached script for backups. We 
configure our slaves to use the given $slave_socket (to prevent normal 
programs from accidentally writing to the db); that's the reason for the 
optional --access-slave flag.


Christian.
#!/usr/bin/perl -w

# Fre Sep 30 12:46:38 MEST 2005
(my $email='christian%jaeger,mine,nu')=~ tr/%,/@./;

use strict;

my $slave_socket= "--socket=/var/run/mysqld/mysqld_safe.sock";

$0=~ /(.*?)([^\/]+)\z/s or die "?"; 
my ($mydir, $myname)=($1,$2); 
sub usage {
print STDERR map{"$_\n"} @_ if @_;
print "$myname [${myname}_options] -- additional_mysqldump_options

  call mysqldump with options to safely backup innodb databases.
  warning: does not handle myisam tables safely!

  options: only one:
   --access-slave   access socket '$slave_socket'

  (Christian T+J $email)
";
exit @_ ? 1 : 0;
}

my @args;
my $DEBUG=0;
my ($opt_access_slave);

for (my $i=0; $i<=$#ARGV; $i++) {
local $_=$ARGV[$i];
if (/^--?h(elp)?$/) {
usage
} elsif ($_ eq '--') {
push @args, @ARGV[$i+1..$#ARGV];
last;
} elsif (/^--?d(ebug)?$/) {
$DEBUG=1;
} elsif (/^--access-slave$/) {
$opt_access_slave=1;
# } elsif (/^--?X(?:XXX(?:=(.*))?)?$/) {
# if (defined $1) {
# $XXX=$1
# } else {
# $XXX=$ARGV[++$i] or usage "missing argument for '$_' option";
# }
} elsif (/^-./) {
usage("Unknown option '$_'\n");
} else {
push @args, $_
}
}
usage unless @args;

if ($opt_access_slave) {
unshift @args, $slave_socket;
}

sub xexec {
if ($DEBUG) {
print join(" ¦ ", @_), "\n";
} else {
exec @_ or exit 127;
}
}

xexec
  qw(mysqldump -O single-transaction=TRUE --skip-lock-tables --add-drop-table
     --all --extended-insert --quick --skip-add-locks), @args;



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: fulltext search

2003-03-27 Thread Christian Jaeger
At 22:26 Uhr -0600 25.03.2003, mos wrote:
How many people out there are willing to pay $$$ to see it done?? 
Please reply to this thread to see if there is a general interest 
and how much it is worth to you.
IIRC, last time I looked, fulltext was not very good for e.g. the 
German language. If there were some hooks (on the C level, like 
UDFs) for adjusting the tokenizer(?), I could probably improve that 
quite easily myself. (Alternatively, documentation of the relevant 
code parts would help as well, so I don't have to spend much time trying to 
understand it).

So if it is either going to be useful for German or provides 
hooks/documentation for adaptation, I'll pay as well (either myself 
or through a customer).

Christian.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Innodb transactions and drop table

2003-03-27 Thread Christian Jaeger
Hello

It looks like 'drop table' implicitly does a 'commit', at least when 
issued from the mysql command-line utility with mysql 3.23.51. This 
happens even if it is a temporary heap table, as typically used to 
emulate subselects.
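
For anyone who wants to reproduce this, here is a minimal sketch against the
C API (connection parameters are placeholders, and it uses the old TYPE=
syntax of that era): start a transaction, drop a temporary table in the
middle, roll back, and check whether the insert survived. If it did, the
drop committed the transaction. Results may differ between server versions.

#include <mysql.h>
#include <stdio.h>
#include <stdlib.h>

static void run(MYSQL *conn, const char *sql)
{
    if (mysql_query(conn, sql) != 0) {
        fprintf(stderr, "'%s' failed: %s\n", sql, mysql_error(conn));
        exit(1);
    }
}

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL || !mysql_real_connect(conn, "localhost", "test", "test",
                                            "test", 0, NULL, 0)) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    run(conn, "create table t (a int) type=innodb");
    run(conn, "set autocommit=0");
    run(conn, "insert into t values (1)");
    run(conn, "create temporary table tmp (b int) type=heap");
    run(conn, "drop table tmp");          /* the statement under suspicion */
    run(conn, "rollback");

    run(conn, "select count(*) from t");
    MYSQL_RES *res = mysql_store_result(conn);
    MYSQL_ROW row = mysql_fetch_row(res);
    /* "1" here means the drop table committed the pending insert. */
    printf("rows surviving the rollback: %s\n", row ? row[0] : "?");

    mysql_free_result(res);
    run(conn, "drop table t");
    mysql_close(conn);
    return 0;
}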

I think this should be documented. (Or better yet, it should not commit, 
at least for temporary tables?)

Christian.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: The diferences about /dev/hda1 and access via raw device

2003-03-03 Thread Christian Jaeger
At 18:35 Uhr -0200 23.01.2003, Dyego Souza do Carmo wrote:
/dev/hda1:20Gnewraw
/dev/raw1:20Gnewraw
The speed is more fast in the last case or no ?
In my tests, which I did half a year ago with mysql 3.X, it depended on
the usage pattern.
The advantage of using hda1 (buffered raw partition) is that linux
(you're talking about linux, right?) dynamically handles buffering. So
if you are frequently accessing more memory than you have configured
in innodb_buffer_pool, in the hda1 case linux will deliver data from
linux' cache (if linux has enough RAM to buffer it) and thus
will be faster than the raw1 case, where data will be fetched from
disk again.
If, on the other hand, you configure innodb_buffer_pool as big as
perhaps 3/4 of all available RAM, linux will not be able to deliver
from RAM either, since its own buffers are too small, and the raw1
case starts to be faster because of lower overhead.
So on a dedicated mysql machine where more or less nothing other than innodb is
running, the raw1 case will (probably) gain you some advantage. The
difference is not very big, though. That is, if you are careful in
configuring your setup you should gain something, but if you are not,
you will lose much more.
I thought I had sent my results to the list, but I just realized that
I never sent that mail. I think I didn't because it's tough to
correctly write down what you have seen without drawing wrong
conclusions, and also because just after writing it I did some more
tests, and the test scripts were (and still are) not ready to be
published either. Maybe it's still worth something, see below.
Christian.


Date: Mon, 22 Jul 2002 23:59:59 +0100
To: Heikki Tuuri [EMAIL PROTECTED], [EMAIL PROTECTED]
From: Christian Jaeger [EMAIL PROTECTED]
Subject: Re: Innodb and unbuffered raw io on linux?
  I'll test it more thoroughly in the next days.

mysql 3.23.51 with your patches worked without problems in my tests.

But performance seemed generally (a bit) worse than with OS-buffered
IO (= using /dev/sdaX).
- Batches of 1000 inserts of ~2kb data plus a commit took about 8%
more time with rawio (/dev/raw/raw1) than buffered (/dev/sdaX).
Though I must say I used another partition on the same disk for
this measurement (it seems that the 'raw' tool, at least the one
from Debian, cannot unmap a raw device once mapped onto a block
device, so I had to use two different partitions of equal size,
one mapped to /dev/raw/raw1, the other not mapped), so the disk
itself could have made this difference.
- The biggest difference was (as expected) when doing full table
scans on tables bigger than what seemed to fit into the innodb
buffer pool. The first scanning query took the same amount of time
in both cases (~10 seconds), whereas with OS-buffered IO the
subsequent scans took only about 6 seconds, while unbuffered IO still
needed 10 seconds for each run. Increasing innodb_buffer_pool_size from
90MB to 290MB changed this so that the subsequent queries profit from
cached data with rawio as well (the test machine has 512MB RAM,
runs linux 2.4.17, one seagate SCSI disk, 1GHz PentiumIII):
   Query time for the first runs after a fresh mysqld start [s] *)
90MB buffer pool:
  /dev/raw/raw1   10.60  11.09  11.01
  /dev/sda10      10.67   6.82   6.80
290MB buffer pool:
  /dev/raw/raw1    9.53   4.18   4.17   3.96 **)
  /dev/sda10       9.48   3.98   3.96
*) note that kernel 2.4 drops its own buffered pages when the last
application closes the buffered device, so it's enough to restart
mysqld; it's not necessary to reboot the machine before subsequent
test runs (there's no cache left over).
**) this last number is from a different run

So if I conclude correctly this means: (no big news)

1. if there are no applications besides mysql running on the
machine, it's important to make the buffer cache just as big as
possible, regardless of whether rawio is used or not. In this
scenario there's no compelling reason for rawio either.
2. if there *are* other applications besides mysql, then there may
be two possible strategies:
a) set the innodb_buffer_pool_size to exactly the size of the
working set, and use rawio. Thus innodb keeps everything in its own
memory, and the OS doesn't spend cache memory on useless duplicates,
which makes more memory available to the other apps.
b) set the innodb_buffer_pool_size smaller, but use normal
buffered /dev/sd* devices (or files in the filesystem), thus letting
the kernel buffer what doesn't fit into the innodb buffer. Some data
will be buffered twice, though.
I haven't made tests with other applications competing with mysql for ram.

I also haven't tested using raw partitions or unbuffered IO for the
logs. Would this work, or make sense? Are the logs cached in innodb
as well? I've noticed some memory pressure (?, at least emacs has
been swapped out) when using a 290MB buffer pool and 2*125M logs, so
with big logs one should maybe compensate

Re: lsof +L1 - mysql.err.1 unlinked ...

2003-01-07 Thread Christian Jaeger
At 15:11 Uhr +0300 07.01.2003, Do-Risika RAFIEFERANTSIARONJY wrote:

Hi everybody,

I have always these mysql.err.1 files not unlinked in all my 
mysqlservers, and only a mysqld restart resolve these (nor reload 
neither mysqladmin refresh doesn't unlink them).

Does anybody in the list encounter this problem ?

I'm seeing the same with lsof (I'm on Debian, too). But when I stop 
mysql, it writes the "mysqld ended" message to mysql.err, not 
mysql.err.1. If I'm correctly assuming that this message is written 
by the mysqld process, then it looks like it reopens the file as 
soon as it has something to write to it (instead of immediately after 
getting the flush-logs command).

Am I obliged to add a mysql restart in my logrotate config (the 
debian default contain only flush-logs which doesn't unlink the 
mysql.err.1 log) ? or is there another wa

Looks like it's no problem at all.

Christian.

-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: lsof +L1 - mysql.err.1 unlinked ...

2003-01-07 Thread Christian Jaeger
Sorry, I've restarted mysql again and am seeing that it normally 
writes not only "mysqld ended" but a whole bunch of other lines to the 
logs. So yes, it is a problem.

Christian.

At 15:11 Uhr +0300 07.01.2003, Do-Risika RAFIEFERANTSIARONJY wrote:

Hi everybody,

I have always these mysql.err.1 files not unlinked in all my 
mysqlservers, and only a mysqld restart resolve these (nor reload 
neither mysqladmin refresh doesn't unlink them).

Does anybody in the list encounter this problem ?

I'm seeing the same with lsof (I'm on Debian, too). But when I stop 
mysql, it writes the mysqld ended message to mysql.err, not 
mysql.err.1. If I'm correctly assuming that this message is written 
by the mysqld process, then it looks like it does reopen the file as 
soon as it has something to write to it (instead of immediately 
after getting the flush-logs command).

Am I obliged to add a mysql restart in my logrotate config (the 
debian default contain only flush-logs which doesn't unlink the 
mysql.err.1 log) ? or is there another wa

Looks like it's no problem at all.

Christian.


-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: MySQl db as filesystem.

2002-10-12 Thread Christian Jaeger

At 12:58 Uhr +0200 10.10.2002, Alex Polite wrote:
Is there any way I could display a MySQL database as a filesystem
under Linux?

We have created a virtual filesystem using MySQL as storage, using 
the LD_PRELOAD version of AVFS. The filesystem logic is written in 
Perl (embedded perl interpreter). It works for shell utils, proftpd, 
samba, and (not well tested yet) netatalk, and I'm sure WebDAV, NFS 
and other services could be served from it as well. It's even quite 
fast; the limiting factor is generally the client (Windows Explorer / 
Mac Finder / FTP client / whatever). It will still take some time 
until I get around to packaging the sources, though. See 
http://www.ethlife.ethz.ch/newcms

Christian.
-- 
Christian Jaeger  Programmer  System Engineer
ETHLife CMS Project - www.ethlife.ethz.ch/newcms - www.ethlife.ethz.ch

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Innodb and unbuffered raw io on linux?

2002-07-18 Thread Christian Jaeger

Hello Heikki and all,

I've already asked about this a year ago, but didn't finish my 
investigations then.

What's the status with innodb and *unbuffered raw* io on linux?

The manual describes the use of the newraw and raw options, and I 
know these work on disk devices (like /dev/sda8), but this isn't raw 
io; it's still cached by the kernel and so takes up RAM in addition to 
innodb's own cache (as well as a bit of CPU to copy the data 
between kernel and user space). If you want to do direct IO, you need 
to use the 'raw' tool to set up a 'raw character device' mapped to the 
disk block device:

cd /dev
mkdir raw
umask 077
mknod rawctl u 162 0
umask 007
mknod raw/raw1 u 162 1
mknod raw/raw2 u 162 2
chgrp mysql raw/raw1
 # ^- I'm not sure whether the access rights of the mapped device
 # take precedence over those of the original block device, though
raw raw/raw1 sda8

I've tried Mysql with this config:
#innodb_data_file_path=/dev/sda8:1906Mraw  - did work, but buffered
innodb_data_file_path=/dev/raw/raw1:1906Mraw

020719 00:59:24  mysqld started
InnoDB: Operating system error number 22 in a file operation.
InnoDB: See http://www.innodb.com/ibman.html for installation help.
InnoDB: Look from section 13.2 at http://www.innodb.com/ibman.html
InnoDB: what the error number means or use the perror program of MySQL.
InnoDB: Cannot continue operation.
020719 00:59:25  mysqld ended

perror 22
Error code  22:  Invalid argument

This error code is typical for buffers that are not aligned to 
sector-sized memory boundaries, which is necessary for unbuffered io 
to work on linux.
I've written an experimental program that shows this and put it here:
http://pflanze.mine.nu/~chris/mysql/o_direct.c
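
For illustration only (this is not the linked program), a minimal sketch of
the alignment requirement: open a device with O_DIRECT and read once through
a deliberately misaligned buffer and once through a sector-aligned one. On
kernels of that era the misaligned read fails with EINVAL (errno 22). The
device path is a placeholder; the required alignment can vary with kernel
and device.

#define _GNU_SOURCE   /* for O_DIRECT */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/sda8";   /* placeholder */
    int fd = open(dev, O_RDONLY | O_DIRECT);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    char unaligned[4096];
    ssize_t n = read(fd, unaligned + 1, 512);       /* deliberately misaligned */
    printf("misaligned read: %zd (%s)\n", n, n == -1 ? strerror(errno) : "ok");

    void *aligned;
    if (posix_memalign(&aligned, 512, 512) != 0) {  /* sector-aligned buffer */
        perror("posix_memalign");
        close(fd);
        return 1;
    }
    n = read(fd, aligned, 512);
    printf("aligned read:    %zd (%s)\n", n, n == -1 ? strerror(errno) : "ok");

    free(aligned);
    close(fd);
    return 0;
}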

So I guess Innodb is not ready for unbuffered io. I'm also guessing 
that it's probably not that easy to achieve good performance with 
unbuffered io, since you would probably have to do readahead and so 
on yourself.

I'm also unsure about the current status of rawio in linux (2.4). 
Reading on http://oss.sgi.com/projects/rawio/ (under the FAQ), they 
say that they have a better implementation than the one from 
Stephen Tweedie/Redhat. But the code in kernel 2.4 seems to be only 
the one from Stephen Tweedie.
This is what the source code of the 'dd' tool (as found in 
Debian/testing) shows, btw:
 /* ...
  The page alignment is necessary on any linux system that supports
  either the SGI raw I/O patch or Stephen Tweedies raw I/O patch.
  It is necessary when accessing raw (i.e. character special) disk
  devices on Unixware or other SVR4-derived system.  */


Hope this helps a bit.
What do you think about it?
I could put a bit of time aside for testing (or maybe more, but who 
would pay me?...:)

Cheers,
Christian.
-- 
Christian Jaeger  Programmer  System Engineer  +41 1 430 45 26
ETHLife CMS Project - www.ethlife.ethz.ch/newcms - www.ethlife.ethz.ch

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Fulltext indexing libraries (perl/C/C++)

2001-09-13 Thread Christian Jaeger

Hello

[ It seems the post didn't make it through the first time ]

While programming a journal in perl/axkit I realized that the problems 
of both creating useful indexes for searching content efficiently and 
parsing user input to create the right sql queries from it are so 
common that there *must* be some good library already. :-) So I 
headed over to CPAN, but didn't really find what I was looking for.

It should create indexes that are efficiently searchable in mysql, 
i.e. only select ... where ... like 'abcd%' queries, not '%abc%'. 
It should allow searching for word parts (i.e. find 'fulltext' when 
entering 'text') and allow for multiple form fields (i.e. one field for 
title words, one for author names, etc.) at once. Preferably it would 
allow for some sort of query rules (AND/NOT/OR or something), 
do some relevance sorting, and allow hooking some numbers (link or 
access counts etc.) into the relevance sorting.
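
For what it's worth, one crude way to get the "find 'fulltext' when entering
'text'" behaviour with only prefix LIKEs is to index every suffix of every
word. A hedged C sketch (connection parameters, table and column names are
made up for illustration, error handling is minimal):

#include <mysql.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void run(MYSQL *conn, const char *sql)
{
    if (mysql_query(conn, sql) != 0) {
        fprintf(stderr, "'%s' failed: %s\n", sql, mysql_error(conn));
        exit(1);
    }
}

static void index_word(MYSQL *conn, long doc_id, const char *word)
{
    size_t len = strlen(word);
    if (len == 0 || len >= 64)
        return;                             /* keep the sketch simple */

    char esc[2 * 64 + 1], sql[256];
    for (size_t i = 0; i < len; i++) {
        /* store the suffix starting at i: "fulltext", "ulltext", ..., "t" */
        mysql_real_escape_string(conn, esc, word + i, (unsigned long)(len - i));
        snprintf(sql, sizeof sql,
                 "insert into word_index (fragment, doc_id) values ('%s', %ld)",
                 esc, doc_id);
        run(conn, sql);
    }
}

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL || !mysql_real_connect(conn, "localhost", "test", "test",
                                            "test", 0, NULL, 0)) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    run(conn, "create table if not exists word_index "
              "(fragment varchar(64) not null, doc_id int not null, "
              "index (fragment)) type=innodb");
    index_word(conn, 1, "fulltext");
    index_word(conn, 2, "text");

    /* finds both documents, and the prefix LIKE can use the fragment index */
    run(conn, "select doc_id from word_index where fragment like 'text%'");
    MYSQL_RES *res = mysql_store_result(conn);
    MYSQL_ROW row;
    while ((row = mysql_fetch_row(res)) != NULL)
        printf("match in doc %s\n", row[0]);

    mysql_free_result(res);
    mysql_close(conn);
    return 0;
}

This of course blows up the index size and doesn't solve the sub-word
splitting problem below; it only shows why prefix-only LIKE queries are
enough once such fragments exist.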

I think there are three tough parts that are needed:
1. creation of sophisticated index structures (inverted indexes)
2. somehow recognizing sub-word boundaries to split words on. Maybe use 
some form of thesaurus? Or syllables? (I suspect it should be the 
same rules as for splitting words at line boundaries)
3. a user input parser / query creator

Why not:

- use mysql's fulltext indexes? Because I think that currently they 
are too limited (e.g. see the user comments about them at 
www.mysql.com/doc/) (this should be better in mysql-4, I read, but we need 
it in a few weeks already...). And they are also not supported in 
Innodb, which we want to use.

- use indexing robots? Because we work with XML documents, and would 
like both to keep the index up to date immediately and to split 
the XML contents into several parts (i.e. there's a title, byline, 
etc., which should be searchable or weighted differently). We want a 
*library*, not a finished product.

There's Lucene (www.lucene.com) in Java that I think does exactly 
what I want. Anyone willing to help me port that to perl or to 
C(++) with perl bindings (-; ? (It should be ready in a few weeks, and 
it's about 500k of source code :-().

(Something in C/C++ that would be loaded as a UDF or so would be nice 
too, but as I understand it (from the recent discussion about stored 
procedures) that's not possible, since these UDFs would have to issue 
other queries (i.e. to insert each word fragment into an index 
table).)

As Daniel Gardner has pointed out to me, one could maybe use 
Search::InvertedIndex as a basis and complement it with Lingua::Stem 
(English only) or Text::German (German) (both seem to be quite 
imperfect though) or with some word list processing. (I don't 
understand Search::InvertedIndex well enough yet.) I think it would still 
be a lot of work.


Has someone finished something like this? More info about mysql4?

Thx
Christian.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




innodb on linux raw partitions

2001-08-20 Thread Christian Jaeger

Hello

I read in 
http://www.mysql.com/doc/I/n/InnoDB_Disk_i_o.html
that Innodb supports raw partitions on some unixes. We are looking forward to using innodb on 
debian (intel): are linux raw partitions supported?

Is it wise to try to use linux2.4 raw partitions with innodb on a production system?

I haven't found linux documentation about raw partitions. Does anyone have a pointer?

One question I have: is it possible to mirror (software RAID 1) raw partitions on 
linux? (If not then I guess they are pretty useless without a hardware raid).

Some thoughts about possible loss of memory due to using buffered (non-raw) 
partitions: We will use mysql-innodb on the same machine as apache. Apache will serve 
static files (about 200 MB or so, growing; most hits go to a small subset of this, 
however) which are generated from innodb content. Most items are requested from the static 
files much more often (i.e. 1000's of times more) than from innodb. Some (personalized 
stuff) are requested as often from innodb as from the cache. I guess because of these 
usage patterns linux will use most of the disk buffer for apache's files, not for the innodb 
partition, so there won't be much loss of memory? If I make an infrequent mysql query 
that has to read most of the table's content (i.e. 200MB out of the 800MB used table 
space), will linux throw away all the disk buffers used by apache in favour of the innodb 
partition, or will it recognize that apache will request its buffers again soon? 
(Ideally, innodb partition reads would be buffered only for a short period (since innodb 
has its own buffer anyway) and thus take only a little memory.)
(I've been planning to use 512MB ram on this machine.)

christian.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: innodb on linux raw partitions

2001-08-20 Thread Christian Jaeger

On Mon, 20 Aug 2001 22:24:26 +0300
Heikki Tuuri [EMAIL PROTECTED] wrote:
 My impression comes from a MySQL AB computer
 with Red Hat 6.2 (kernel something like 2.2.19).

From what I read, RedHat often patches their kernels with unofficial stuff (and the 
famous unofficial/buggy gcc thing (in 7.0) probably says enough, too).

BTW yes, I had almost forgotten about the discussion from last month. (Well, I hadn't 
seen Peter Duffy's success.)

So I'll try it out in a few days.

Christian.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Subject: Mini-HOWTO: MySQL over SSH tunnel

2001-08-19 Thread Christian Jaeger

At 0:17 Uhr +0200 20.8.2001, David Tonhofer wrote:
4) Make sure user 'fred' has an account on machine 'foo'. Unfortunately
this must be an account with a shell :-(

You could probably write a pseudo-shell and put that into /etc/passwd (e.g.
#!/usr/bin/perl
while(<STDIN>){exit if /^exit$/m; print "This is a pseudo shell. You can only 'exit'.\n"}
).

How reliably do ssh tunnels work over long periods of time nowadays? 
Two years ago there were discussions about dropped/hanging(?) 
connections when tunneling large amounts of data (not mysql 
specific).

What do you do when the connection is lost (e.g. the server is restarted)?
Probably some sort of 'pinging' of the connection is needed, i.e. 
opening a connection to mysql over the tunnel on the client side, 
doing a 'select 1' periodically, and if there's no response, killing the ssh 
client and restarting it.
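
A minimal sketch of that pinger in C (port, user and password are
placeholders; 3307 is assumed to be the local end of the ssh tunnel). It
exits non-zero as soon as the connection stops answering, so a small
wrapper script can then kill and restart the ssh client and the pinger:

#include <mysql.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL)
        return 2;

    /* Note: auto-reconnect should stay off, otherwise mysql_ping() may
       silently reconnect instead of reporting the dead tunnel. */
    if (mysql_real_connect(conn, "127.0.0.1", "pinguser", "secret",
                           NULL, 3307, NULL, 0) == NULL) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }

    for (;;) {
        if (mysql_ping(conn) != 0) {            /* stands in for 'select 1' */
            fprintf(stderr, "tunnel dead: %s\n", mysql_error(conn));
            mysql_close(conn);
            return 1;                           /* wrapper restarts ssh */
        }
        sleep(30);
    }
}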

Has someone used replication over an ssh tunnel?

Christian.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Local Sockets vs TCP

2001-08-16 Thread Christian Jaeger

At 0:09 Uhr -0400 15.8.2001, Hans Zaunere wrote:
Well I uncovered an interesting thing.  If I use the regular old
bin/mysql client to connect to the database (both on the same machine)
and I run:  ./mysql -uroot -p -h localhost   
Then run netstat -an , it shows that another /tmp/mysql.sock instance
is connected, while the original /tmp/mysql.sock is ofcourse still
listening.

I've seen this too; the mysql command line client connects to the
unix socket even if given 'localhost' as the host. Don't know about
DBD::mysql.

And I was thinking, since a socket is a socket (just a file descriptor)
whether local sockets or INET socks are getting used up, ssh still
wouldn't work.

Your problem was probably not running out of file descriptors, but
running out of port numbers. Since every inet port (there are 2^16 of them)
takes some time after close until it can be used again, there may be
no more free ports even if only a few are actually in use concurrently.
This is why I suggested persistent connections.

christian.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Local Sockets vs TCP

2001-08-14 Thread Christian Jaeger

At 16:41 Uhr -0400 14.8.2001, Hans Zaunere wrote:
sort of thing.  Would it be better to just switch to TCP/IP sockets?

my $dbh = connect(DBI:mysql:database:localhost, user, password);

Well it seems like you ARE using INET TCP/IP sockets, since there is 
'localhost' there (which goes over the loopback device). Not sure, 
however, whether this syntax for specifying the host is correct; it's not 
mentioned this way in man DBD::mysql.

And if you were running out of INET sockets so that you couldn't connect 
with ssh anymore, it MUST have used INET sockets, mustn't it?...

So try stripping this :localhost off.

You could also use some persistent connection mechanism so that the 
script doesn't reconnect on each request.

chj

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Using InnoDb table type - Urgent help required

2001-07-21 Thread Christian Jaeger

At 16:27 Uhr -0700 20.7.2001, Rashmi Mittal wrote:
innodb_data_file_path=ibdata1:200M
innodb_data_home_dir=f:/mysql/innodb/ibdata
innodb_log_group_home_dir=f:/mysql/innodb/iblogs

The directories specified in the above variables exist.
However when I try to run the service I get the error Could not start the
MySQL service on Local Computer... Error 1067: The process terminated
unexpectedly. No error is logged in the mysql.err file in the data
directory.

It seems you are missing an innodb_log_arch_dir directive. I remember 
innodb complaining when this does not have the same value as 
innodb_log_group_home_dir. Anyway, you could probably try starting 
the daemon directly from the dos prompt, so you see the error messages - 
at least that's what I do on linux. (Currently I start it through a 
perl script with the following options (strip the unix_file_flush 
stuff of course):

exec qw{
/usr/local/mysql/libexec/mysqld
--basedir=/usr/local/mysql
--datadir=/usr/local/mysql/var
--pid-file=/usr/local/mysql/var/pflanze.pid
--skip-locking
--innodb_data_home_dir=/usr/local/mysql/
--innodb_data_file_path=ibdata/ibdata1:1000M
--set-variable=innodb_mirrored_log_groups=1
--innodb_log_group_home_dir=/usr/local/mysql/iblog/
--set-variable=innodb_log_files_in_group=3
--set-variable=innodb_log_file_size=50M
--set-variable=innodb_log_buffer_size=8M
--innodb_flush_log_at_trx_commit=1
--innodb_log_arch_dir=/usr/local/mysql/iblog/
--innodb_log_archive=0
--set-variable=innodb_buffer_pool_size=90M
--set-variable=innodb_additional_mem_pool_size=5M
--set-variable=innodb_file_io_threads=4
--set-variable=innodb_lock_wait_timeout=50
--innodb_unix_file_flush_method=O_DSYNC
};

)

christian.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: innodb and raw disks

2001-07-19 Thread Christian Jaeger

At 15:08 Uhr +0100 19.7.2001, Peter Duffy wrote:
I guess that it's thinking the tablespace is already initialised.

Yes, probably: it's the same if you have an ibdata file but no log 
files and start mysql-innodb. I noticed this while setting up 
innodb with filesystem-based tablespaces when I had given wrong paths to 
the logfiles - mysql quit, I restarted it with correct paths 
to the log directories, and it quit again because of 'corrupt data 
files'. Removing the data file helped.

(PS. it seems that the innodb_log_group_home_dir and 
innodb_log_arch_dir options require absolute paths (or relative to 
cwd perhaps). It took me some time to realize this :o].)

Christian.


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: innodb and raw disks

2001-07-17 Thread Christian Jaeger


Just for the record: I've written a small program too, and it reports 
correct sizes for normal disk devices under both linux 2.2 and 2.4, 
as long as the partition sizes don't exceed the 2GB limit (I don't 
have large file support, since I use debian potato with just the 
necessary packages upgraded for running 2.4). I have NOT tried with 
*unbuffered raw* partitions, however (linux 2.4 supports partition 
access both with and without buffering); I don't know yet how to 
set these up.

lseek returns 0 if there's no partition on the disk for the device 
being opened.

#include <iostream>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
using namespace std;

int main(int argn, char * args[]) {
  for (int i=1; i<argn; i++) {
    int fd= open (args[i], O_RDONLY);
    if (fd!=-1) {
      off_t eof = lseek (fd, 0, SEEK_END);
      if (eof != -1)
        cout << "Eof of `" << args[i] << "' is at: " << eof << "\n";
      else
        cerr << "Could not lseek `" << args[i] << "': " << strerror(errno) << "\n";
      if (close (fd)==-1)
        cerr << "??\n";
    } else {
      cerr << "Could not open `" << args[i] << "': " << strerror(errno) << "\n";
    }
  }
}

./lseektest /dev/hda*
Could not lseek `/dev/hda': Value too large for defined data type
Could not lseek `/dev/hda1': Value too large for defined data type
Eof of `/dev/hda10' is at: 0
Eof of `/dev/hda11' is at: 0
Eof of `/dev/hda12' is at: 0
Eof of `/dev/hda13' is at: 0
Eof of `/dev/hda14' is at: 0
Eof of `/dev/hda15' is at: 0
Eof of `/dev/hda16' is at: 0
Eof of `/dev/hda17' is at: 0
Eof of `/dev/hda18' is at: 0
Eof of `/dev/hda19' is at: 0
Eof of `/dev/hda2' is at: 1024
Eof of `/dev/hda20' is at: 0
Eof of `/dev/hda3' is at: 197406720
Could not lseek `/dev/hda4': Value too large for defined data type
Could not lseek `/dev/hda5': Value too large for defined data type
Could not lseek `/dev/hda6': Value too large for defined data type
Eof of `/dev/hda7' is at: 682665984
Could not lseek `/dev/hda8': Value too large for defined data type
Eof of `/dev/hda9' is at: 0
root@pflanze raw\ partitions# fdisk -l /dev/hda

Disk /dev/hda: 255 heads, 63 sectors, 2110 cylinders
Units = cylinders of 16065 * 512 bytes

Device BootStart   EndBlocks   Id  System
/dev/hda1   * 1   383   3076416b  Win95 FAT32
/dev/hda2   384  1659  10249470f  Win95 Ext'd (LBA)
/dev/hda3  1660  1683192780   82  Linux swap
/dev/hda4  1684  2110   3429877+  83  Linux
/dev/hda5   384   766   3076416b  Win95 FAT32
/dev/hda6   767  1149   3076416b  Win95 FAT32
/dev/hda7  1150  123266b  Win95 FAT32
/dev/hda8  1233  1659   3429846b  Win95 FAT32
root@pflanze raw\ partitions#

Cheers,
Christian.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Having success with 20 million rows

2001-05-09 Thread Christian Jaeger

Was the table inaccessible during these 4 days? Or is mysql still 
able to write to the table while it's being altered?

Christian.

At 12:08 Uhr -0700 9.5.2001, David J. Potter wrote:
occur very fast.  We use both Windows and Linux.  Adding a column is the
only action that is slow (it took 4 days once to add a column)

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Mysql speed :)

2001-04-07 Thread Christian Jaeger

At 13:35 Uhr -0400 6.4.2001, Vivek Khera wrote:

  ... LIMIT '3',2

and resulting in a parse error.  The fix was to call
$sth->execute($start+0,$howmany+0) and then DBI did the right thing.

A similar problem may occur if you use a text variable in a == 
comparison before passing it to execute. If you run perl with 
taint checking, DBD::mysql will interpret the text variable as a number 
and not quote it, leading to either a parsing error or a security 
hole in your application. I wrote about this on 2000/08/23 to 
the dbi-users and msql-mysql-modules lists. Tim Bunce suspected it to 
be a driver bug. I don't know whether it has been solved.

Christian.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Benchmarking innobase tables

2001-03-18 Thread Christian Jaeger

At 20:43 Uhr -0600 17.3.2001, Dan Nelson wrote:
In the last episode (Mar 17), Christian Jaeger said:
  innobase table:
   autocommit=0, rollback after each insert:   59 insert+rollback/sec.
  autocommit=0, one rollback at the end:  2926 inserts/sec.
  autocommit=0, one commit at the end:2763 inserts/sec.
   autocommit=1:   34 inserts/sec.

  In the last case I can hear the head from the hard disk vibrating, it
  seems that innobase synches each commit through to the disk oxide.
  I'm sure innobase isn't the fastest database in the world if this is
  true for everyone. Why could this be the case for me?

If you are going to be committing on every record, you'll want your
tablespace and logfile directories on separate disks to avoid
thrashing.  If you only have one disk and don't care if you lose the
last few transactions if your system crashes, try setting
innobase_flush_log_at_trx_commit=0 in my.cnf.

Wow, thanks. With innobase_flush_log_at_trx_commit=0, the benchmark now shows:

autocommit=0, rollback after each insert:   1587 inserts+rollbacks/sec
autocommit=1:   2764 inserts/sec.

That's even faster than myisam (2487 inserts/sec today)!!!

ChristianJ

--
   Dan Nelson
   [EMAIL PROTECTED]


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Benchmarking innobase tables

2001-03-17 Thread Christian Jaeger

Hello

I've compiled mysql-3.23.35 with innobase support - it runs much 
better than BDB for me - and ran a simple benchmark with the 
following script:

use DBI;
my $DB= DBI->connect("dbi:mysql:innobase","chris",shift) or die;
$DB->{RaiseError}=1;
$DB->do("drop table if exists speedtest");
$DB->do("create table speedtest (a int not null primary key, b int 
not null) type=innobase");
$DB->do("set autocommit=0"); # or =1
my $ins=$DB->prepare("insert into speedtest values(?,?)");
foreach (0..1000) {
eval {
$ins->execute(int(rand(1000)),int(rand(10)) );
};
if ($@) {warn $@} else {$done++}
}
# $DB->do("commit"); # uncommented for some test
print "have inserted $done entries\n";

On a lightly loaded powermac G3 running linuxppc I get the following results:

myisam table:   2000 inserts/sec.

innobase table:
autocommit=0, rollback after each insert:   59 insert+rollback/sec.
autocommit=0, one rollback at the end:  2926 inserts/sec.
autocommit=0, one commit at the end:2763 inserts/sec.
autocommit=1:   34 inserts/sec.

In the last case I can hear the hard disk's head vibrating; it 
seems that innobase syncs each commit through to the disk oxide. 
I'm sure innobase wouldn't be the fastest database in the world if this 
were true for everyone. Why could this be the case for me?

Some system info:
LinuxPPC June 1999, Kernel 2.2.17-0.6.1,
glibc-2.1.3-0j
gcc-2.95.3-2f
Innobase data is written to an IDE harddisk.

Cheers
Christian Jaeger

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: BDB tables on Linux

2001-03-05 Thread Christian Jaeger

As far as I can tell, you have to rebuild from source.

BDB works very badly for me (linuxppc). Extremely slow, and errors.

Christian.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




How fast are BDB tables really??

2001-02-24 Thread Christian Jaeger

Hello list

I've just installed mysql-3.23.33 with db-3.2.9a (linux on a ppc) and
still have dog-slow bdb tables with data errors.

How fast are bdb tables really, for those for whom they work? Doing 3
queries/sec is not a real option for me.

Note that I had to change the makefile in db-3.2.9a/build_unix or it
would not have installed: from

install: all    # modified by MySQL configure
install_include install_lib install_utilities install_docs

   to

install: install_include install_lib install_utilities install_docs

Christian.


Tests:
-
mysql> create table bdbtest4 (a int not null primary key, b text) type=bdb;
Query OK, 0 rows affected (4.03 sec)

mysql> insert into bdbtest4 values(0,"Miau");
Query OK, 1 row affected (1.61 sec)

mysql> insert into bdbtest4 values(1,"Cat");
Query OK, 1 row affected (0.33 sec)

mysql> select * from bdbtest4;
+---+--+
| a | b|
+---+--+
| 0 | Miau |
| 1 | Cat  |
+---+--+
2 rows in set (0.33 sec)

mysql> select * from bdbtest4 where a=1;
+---+--+
| a | b|
+---+--+
| 1 | |
+---+--+
1 row in set (0.14 sec)

mysql> select * from bdbtest4 where a=0;
+---+--+
| a | b|
+---+--+
| 0 | |
+---+--+
1 row in set (0.14 sec)

mysql> status
--
mysql  Ver 11.12 Distrib 3.23.33, for unknown-linux-gnu (powerpc)

Connection id:  1
Current database:   test
Current user:   chris@localhost
Current pager:  stdout
Using outfile:  ''
Server version: 3.23.33
Protocol version:   10
Connection: Localhost via UNIX socket
Client characterset:german1
Server characterset:german1
UNIX socket:/tmp/mysql.sock
Uptime: 8 min 20 sec

Threads: 1  Questions: 33  Slow queries: 0  Opens: 26  Flush tables:
1  Open tables: 18 Queries per second avg: 0.066
--

mysql>


System:
--
linuxppc (powermac 9600/300mhz), kernel 2.2.17 , glibc 2.1.3, gcc-2.95.3

Configure:
-
./configure --with-charset=german1 --with-extra-charsets=latin1
--with-berkeley-db --localstatedir=/home/mysql
--
Web Office
Christian Jaeger
Corporate Communications, ETH Zentrum
CH-8092 Zurich

office: HG J43 e-mail:   [EMAIL PROTECTED]
phone: +41 (0)1 63 2 5780  [EMAIL PROTECTED]
home:  +41 (0)1 273 65 46 fax:  +41 (0)1 63 2 3525

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




BDB not working well

2001-02-12 Thread Christian Jaeger

Hello

I've just compiled mysql-3.23.32 (tar.gz) with the bdb tgz (from
mysql.com) on my linuxppc machine (powermac 9600/300mhz, kernel
2.2.17, glibc 2.1.3, gcc-2.95.3).

configure --with-charset=german1 --with-extra-charsets=latin1
--with-berkeley-db

I find the following problems:
(1. I didn't find out how to correctly set up mysql.server to run
mysql as 'mysql' user instead of root, where has the mysql_user
setting gone?)
2. long delay on startup of mysql server (there must be some dns
lookup or the like)
3. BDB tables are extremely slow in the first simple tests (0.2 to
2.5 seconds per query). MyIsam tables are fast as usual (0.01 secs)
4. BDB table contents show up corrupted when selected with a 'where' clause.

At least, BEGIN/COMMIT/ROLLBACK work.

Here's what happens in my test:

mysql> create table bdbtest (a int not null primary key, b text) type=bdb;
Query OK, 0 rows affected (4.04 sec)

mysql> insert into bdbtest values(0,"Miau");
Query OK, 1 row affected (1.58 sec)

mysql> insert into bdbtest values(1,"Cat");
Query OK, 1 row affected (0.37 sec)

mysql> select * from bdbtest;
+---+--+
| a | b|
+---+--+
| 0 | Miau |
| 1 | Cat  |
+---+--+
2 rows in set (0.32 sec)

mysql> select * from bdbtest where a=1;
+---+--+
| a | b|
+---+--+
| 1 | |
+---+--+
1 row in set (0.18 sec)

mysql> select * from bdbtest where a=0;
+---+--+
| a | b|
+---+--+
| 0 | |
+---+--+
1 row in set (0.18 sec)

mysql>


Thanks for any insights
Christian Jaeger

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php