several masters for one slave server for backup purposes?

2006-06-20 Thread Eric Persson

Hi,

I've been thinking about setting up a slave server for a few of my 
MySQL servers for extra backup safety. Some of the databases are quite 
large, about 50 GB, so a mysqldump is very clumsy to handle. And near-live 
backups would be nice to have in a panic situation.


What I don't know is whether it's possible to have several masters share a 
combined slave, or really, whether one slave can act as a slave/backup server 
for a bunch of masters?
I can see the performance issues with all the updates, but apart from 
that, are there any other limitations?
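(For reference, a minimal sketch of how a single master/slave pair is usually wired up; hostnames and server IDs are placeholder values, and each slave can follow only one master:)

```ini
# master my.cnf (illustrative values)
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave my.cnf
[mysqld]
server-id = 2
# then on the slave:
#   CHANGE MASTER TO MASTER_HOST='master.example.com', ...
```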


/eric

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Change in LEFT JOIN ON syntax in 5.x?

2006-02-20 Thread Eric Persson

Hi,

I have a query which works fine for me in my 4.1 environment, but when 
moved to the 5.0.18 environment, it fails with the result below:


mysql> SELECT r.uid, u.username, u.image_type, count(id) AS antal, 
s.timestamp FROM recruits_uid r, users u, users u2 LEFT JOIN sessions s 
ON s.uid=u.uid WHERE r.uid=u.uid AND r.rec_uid=u2.uid AND u2.deleted=0 
AND datetime > '0000-00-00 00:00:00' GROUP BY r.uid ORDER BY antal DESC 
LIMIT 100;

ERROR 1054 (42S22): Unknown column 'u.uid' in 'on clause'
mysql>

It seems like the table alias u is not recognized for some reason. Does 
anyone have any hints about this?
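(A likely cause, assuming this is the join-precedence change introduced in 5.0.12: the comma operator now binds less tightly than JOIN, so in `t1, t2 LEFT JOIN t3 ON ...` only the tables to the right of the last comma are visible in the ON clause. Parenthesizing the comma-joined tables restores the old behaviour; the reconstructed condition on `datetime` below is a guess at the garbled original:)

```sql
-- Wrap the comma-joined tables so the alias u is visible in the ON clause
SELECT r.uid, u.username, u.image_type, COUNT(id) AS antal, s.timestamp
FROM (recruits_uid r, users u, users u2)
LEFT JOIN sessions s ON s.uid = u.uid
WHERE r.uid = u.uid
  AND r.rec_uid = u2.uid
  AND u2.deleted = 0
  AND datetime > '0000-00-00 00:00:00'
GROUP BY r.uid
ORDER BY antal DESC
LIMIT 100;
```

Rewriting the comma joins as explicit INNER JOINs works as well.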


Thanks in advance,
Eric




delete with an offset?

2005-10-21 Thread Eric Persson

Hi,

Is it possible to delete from a table with some sort of offset? I did 
the following select query, which selects a message from a message table.


select id, subject from messages where to_uid=1 and read_timestamp>0 
order by timestamp desc limit 3,1;


The reason is I only want there to be 3 messages per to_uid, so I thought 
I would be able to delete from the third message onward with 
something like:


delete from messages where to_uid=1 and read_timestamp>0 order by 
timestamp desc limit 3,1000;


What I basically want is to delete all messages beyond the first 100 for 
each user; any clues on how to solve this?
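(MySQL's DELETE ... LIMIT does not accept an offset, so one workaround is two statements: first fetch the cutoff timestamp with an offset SELECT, then delete everything older. A sketch; the literal timestamp in step 2 is a placeholder for the value returned by step 1:)

```sql
-- Step 1: timestamp of the oldest message to keep (here, the 3rd newest)
SELECT timestamp FROM messages
WHERE to_uid = 1 AND read_timestamp > 0
ORDER BY timestamp DESC LIMIT 2, 1;

-- Step 2: delete everything older than that value (substituted in)
DELETE FROM messages
WHERE to_uid = 1 AND read_timestamp > 0 AND timestamp < 1130000000;
```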


Best regards,
Eric




Re: delete with an offset?

2005-10-21 Thread Eric Persson

Dobromir Velev wrote:
delete from messages where to_uid=1 and read_timestamp>0 and timestamp<(select 
timestamp from messages where to_uid=1 and read_timestamp>0 order by 
timestamp desc limit 100,1)


HTH


Thanks for your answer, I guess I'll have to make do with two queries, but it 
looked like such an obvious idea that someone might have thought about it 
before. ;)


thanks,
Eric




ibdata1 grows beyond borders..

2005-04-26 Thread Eric Persson
Hi,
I've been running an InnoDB database for a while, and it works nicely; 
however, I've noticed that it has grown beyond the specifications in 
the my.cnf file. I did define autoextend, so it's not very strange; 
however, I was wondering how far it can grow, and whether it's a bad idea to 
just let it grow?

the my.cnf looks like:
[mysqld]
datadir=/usr/local/mysql/data/
socket=/tmp/mysql.sock
set-variable = max_connections=200
set-variable = thread_cache_size=20
innodb_data_home_dir =
innodb_data_file_path = /usr/local/mysql/data/ibdata1:2000M:autoextend
set-variable = innodb_buffer_pool_size=200M
set-variable = innodb_additional_mem_pool_size=20M
innodb_log_group_home_dir = /usr/local/mysql/data/innodb/
innodb_log_arch_dir = /usr/local/mysql/data/innodb/
set-variable = innodb_log_files_in_group=2
set-variable = innodb_log_file_size=50M
set-variable = innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=0
set-variable = innodb_lock_wait_timeout=50
set-variable = long_query_time=15
#log-long-format
log-slow-queries = /var/log/mysqld.slow.log
[mysql.server]
user=mysql
basedir=/usr/local/mysql
[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/usr/local/mysql/data/db.bcmanga.se.pid
The ibdata1 file is now around 7 GB and it keeps growing.
How big can it be? Is the limit InnoDB-dependent or OS-dependent? I'm currently 
running Fedora Core release 3 (Heidelberg) on a dual Xeon 3 GHz with 3 GB of 
RAM.

Can I just define some more ibdata files and it will rearrange the data 
by itself, or do I need to export and then import it? Or what's the best 
practice here?
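(For what it's worth: the shared tablespace never shrinks, and adding more ibdata files only gives new writes somewhere to go; existing data is not redistributed. The documented way to reclaim space is dump, remove the ibdata and log files, and reimport. A my.cnf sketch, assuming MySQL 4.1 or later, that keeps future growth in separate per-table .ibd files:)

```ini
[mysqld]
# each new table gets its own .ibd file, which can be reclaimed
# individually by dropping or rebuilding the table
innodb_file_per_table
innodb_data_file_path = ibdata1:2000M:autoextend
```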

Thanks in advance,
Eric


Re: ibdata1 grows beyond borders..

2005-04-26 Thread Eric Persson
Gleb Paharenko wrote:
Hello.
Searching in the archives says you could get worse performance, because
of extending during transactions:
Yes, I've read a few of those, but what about the fix for this: will the data 
rearrange itself automatically when I specify more files?

//Eric


faster integer compare?

2004-10-23 Thread Eric Persson
Hi,
I'm using MySQL with PHP and have recently made some effort optimizing 
all queries on a project, and I noticed something strange and wanted to 
check whether this is correct.

Let's say I have a table looking like this:
CREATE TABLE `profile_visitors` (
  `profile_owner` int(10) unsigned NOT NULL default '0',
  `profile_visitor` int(10) unsigned NOT NULL default '0',
  `timestamp` int(10) unsigned NOT NULL default '0',
  KEY `profile_owner` (`profile_owner`)
) TYPE=MyISAM;
Is there, or rather should there be, any speed difference between the two 
queries below?
SELECT * FROM profile_visitors WHERE profile_owner=3432;
and
SELECT * FROM profile_visitors WHERE profile_owner='3432';

Would be nice to know the expected behaviour.. :) Currently running 3.23.56
Best regards,
Eric


Re: Optimize query, avoid 'using temporary'

2003-04-02 Thread Eric Persson
Stefan Hinz wrote:
The query I used looks like this:
SELECT files.file_id, filename FROM access, files WHERE 
directory_id=0 AND lang_id=1 AND ( files.file_id=access.conn_id AND 
access.group_id IN (1) AND access.class_id=4 AND class_tbl='file' ) 
group by file_id order by filename;


Just a short note on your query: If you compare integers in a text
context (like lang_id='1'), MySQL cannot use indexes that it could
possibly otherwise use. So rewrite those parts (e.g. lang_id=1).
Another piece of advice would be to try a LEFT JOIN of access and files.

And, of course, you could add some indexes, on the filename and the
file columns.
I rewrote the column='integer' parts but can't really see any 
speed improvement there.

I also tried to define a lot of different indexes, but none of them 
seemed to get rid of the 'Using temporary' in the EXPLAIN output.

The left join idea sounds reasonable; I tried that a few days ago when I 
started debugging this, but I didn't get any better result. It might 
be worth another look though, I'll get back about that.

Would sample data and an expected result list be helpful for any of you 
who tried to help me? If I get a nice solution to this thanks to you, 
I would be happy to send a bottle of whiskey or something. :)

Thanks,
   Eric


Optimize query, avoid 'using temporary'

2003-04-01 Thread Eric Persson
Hi,

I have a query which I wrote several months ago; it recently caught my
attention since it seems to be very slow when run a lot of times, which
causes a very loaded server.
It's a web-based filesystem which stores access/file information in
MySQL; the actual files are stored on disk. The problem is when I want
to get all files in a directory which the current user has access to.
Below are the table structures used for this (descriptions below them):
CREATE TABLE access (
  class_id int(10) unsigned NOT NULL default '0',
  group_id int(10) unsigned NOT NULL default '0',
  class_tbl char(10) NOT NULL default '',
  conn_id int(10) unsigned NOT NULL default '0'
) TYPE=MyISAM;
# The table above is used for more than the access info for the files;
# it's also used for directories etc. That's why conn_id==file_id in
# this case. And class_id=4 and class_tbl='file'.
CREATE TABLE files (
  file_id int(10) unsigned NOT NULL auto_increment,
  lang_id int(10) unsigned NOT NULL default '0',
  directory_id int(10) unsigned NOT NULL default '0',
  filename varchar(255) NOT NULL default '',
  PRIMARY KEY  (file_id)
) TYPE=MyISAM;
#
# Actual file information, lang_id=1 and directory_id=0 in this case
#
The query I used looks like this:
SELECT files.file_id, filename FROM access, files WHERE
directory_id=0 AND lang_id=1 AND ( files.file_id=access.conn_id AND
access.group_id IN (1) AND access.class_id=4 AND class_tbl='file' )
group by file_id order by filename;
Since access can have several rows per file_id (associated by
file_id=conn_id) I have to use group by to avoid getting multiple lines
for the same file.
The part access.group_id IN (1) lists the groups which the user has access
to; there could be more of them too. I used one for simplicity here.
An explain of the query gives me:
mysql> explain SELECT files.file_id, filename FROM access, files WHERE
directory_id=0 AND lang_id=1 AND ( files.file_id=access.conn_id AND
access.group_id IN (1) AND access.class_id=4 AND class_tbl='file' )
group by file_id order by filename\G
*** 1. row ***
table: access
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 8958
Extra: where used; Using temporary; Using filesort
*** 2. row ***
table: files
 type: eq_ref
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 4
  ref: access.conn_id
 rows: 1
Extra: where used
2 rows in set (0.00 sec)
If I exclude the group by and order by parts I get only 'where used',
which is good, but that gives me the wrong result.
Is it possible to rewrite the query to get better performance out of
this? Or do I have to change the table structure?
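(Since EXPLAIN shows a full scan, type: ALL, on access, one thing to try is a composite index covering the lookup columns. A sketch; the index name and column order are guesses based on the WHERE clause, not a tested recommendation:)

```sql
-- lets the equality predicates on access be resolved from the index
ALTER TABLE access
  ADD INDEX idx_access_lookup (class_tbl, class_id, group_id, conn_id);
```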
Thanks in advance, best regards,
Eric







Re: Maximize Performance by Improving Search Program Logic - Help?

2002-10-10 Thread Eric Persson

David Jourard wrote:
 It consists of 2 tables: word and content.
 
 content consists of 2 fields:
 
rec_id int unsigned not null,
description varchar(200) not null
 
 
 word consists of 2 fields:
  word varchar(50) not null,
  rec_id int unsigned not null
 
 word is created from content.  Each actual word from the field
 content.description is created as a record in the table word with its 
 rec_id.
I have done a similar thing, but the programming is done in PHP, though 
that shouldn't make any difference. To speed up full-word searches I have 
added an int(11) column where I store a crc32 (www.php.net/crc32) of the 
word. There is probably an equivalent of the PHP crc32 function 
available in Perl, but I don't know.
 From my experience of this I would say that MySQL searches faster in an 
int column than in a varchar, and with an index added it's really 
fast. I have kept the varchar column for some LIKE searches too; it 
should be possible to remove it, but it doesn't make much difference I guess.
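(A sketch of the lookup this gives you; the word_crc column is the hypothetical int(11) column described above, and newer MySQL versions, 4.1+, also have a built-in CRC32() function so the hash need not be computed client-side:)

```sql
-- word_crc is an indexed INT holding crc32 of the word;
-- the integer comparison narrows the rows, the string
-- comparison guards against CRC collisions
SELECT rec_id FROM word
WHERE word_crc = CRC32('example')
  AND word = 'example';
```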


 I then loop over the rec_ids stored in the array @codes.
 
   my $SqlCmd = 'select description from content where rec_id=?';
   my $sth = $dbh->prepare( $SqlCmd );
   $sth->{'mysql_use_result'} = 1; # doesn't store entire result
 
   while ( $continue ) {
 
  if ($njk > $#codes) { last; }
 
  $cd = $codes[$njk++];
  $sth->execute($cd);
  @row = $sth->fetchrow_array;
 
  # Collect results for display etc ...
 
}
$sth->finish;
This must take a while, even if it's fast, since you have to execute a 
query for each rec_id you found. I have solved this by doing one query to 
get all of it.
For example, transform the rec_ids you get into something like this:
select description from content where rec_id in (1,45,3,4,19);
This will enable you to make just one query. Currently I'm not aware of 
any limitation on the number of values in the IN (...), but there 
might be one.

To create a several-word search, you could do step 2 for each word and 
then go through the results again in an array or something to see which 
contents have all of the desired words.
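(The intersection can also be pushed into SQL with a self-join on the word table; a sketch assuming a two-word search:)

```sql
-- rec_ids whose description contains both words
SELECT w1.rec_id
FROM word w1, word w2
WHERE w1.word = 'apple'
  AND w2.word = 'pie'
  AND w1.rec_id = w2.rec_id;
```

One extra join per additional search word; for many words the per-word queries intersected in the application may be simpler.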

Well, I hope you got at least a bit wiser from this.. :)

Best regards,
Eric


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Scale MySQL servers

2002-02-12 Thread Eric Persson

Hi !

What is the usual approach when a MySQL server needs to scale? Of
course one server can be extended and upgraded in a number of ways. But is
it possible to have a cluster of servers that work together and share the
load, so servers can be taken in and out of the cluster without much
work?

I've been looking at http://linuxvirtualserver.org/ as a solution but I
can't get an idea of how to move the data around so that the same data is
available on all servers, and especially so that no keys and indexes get
messed up.

Can this be done at all? Is it supported? If not, can it be done anyway?
Will this be better in 4.0?

Thankful for every idea and thoughts concerning this.

Best regards,
Eric





myisamchk: error: 22

2001-09-27 Thread Eric Persson

Hi !

I got a power failure on my database machine and now I can't start it,
since I get error 22 when I start it.

How do I fix this? And what does error 22 mean?

I looked at http://www.mysql.com/doc/R/e/Repair.html but nothing there
seems to fix it.

//eric





Re: KEY and index

2001-06-23 Thread Eric Persson

Daniel Åkerud wrote:
 In MySQL having a table that looks like this:
 
 id    int primary key
 name  char(200)
 
 and making an index on name will get you 0.1 * original_retrieval_time
 when searching, but 1.5 * original_insertion_time when inserting. It roughly
 means that (on my system, that is) you should do at least one select
 statement for every 4000 insert statements for the index to be profitable.
 So keep in mind that an index won't always suit you, but it mostly will.

Where did you get these numbers from? Are they some general factors or do
they depend on the table design?

//Eric





Re: Updating non-existant records...

2001-05-29 Thread Eric Persson

Randolpho St John wrote:
 
 Hello all. I'm new to the list, and I just have a quick question.
 Hopefully somebody can help me out.
 
 Anyway, here's the basics of what I want to do:
 
 I want to update a record or, if that record does not exist, create a
 new one.
Try the REPLACE ... query. Look in the manual. 
It's quite handy to use REPLACE. =)
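(A minimal sketch; table and column names are illustrative. REPLACE deletes any existing row with the same PRIMARY KEY or UNIQUE key value and inserts the new row:)

```sql
REPLACE INTO counters (id, hits) VALUES (42, 1);
-- note: since REPLACE is a delete-plus-insert, any columns not
-- listed are reset to their defaults rather than preserved
```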

//Eric
