Re: logrotate script doesn't create a new log file

2012-02-23 Thread Honza Horak

On 02/22/2012 07:58 PM, Johan De Meersman wrote:

With this line commented out, we have to rely on logrotate.conf having
something similar defined, and we see an error when using mysql-rotate

Then something else is wrong. Does your MySQL daemon happen to run as a user 
who normally cannot create files in the directory where the log files are?


This seems to be the reason. MySQL runs under the mysql user and the log 
file is located under /var/log in Fedora, so the daemon doesn't have 
enough privileges. It's clear now that we'd need to un-comment the line in 
such a configuration.


Thanks for the tip.

Honza

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: Removing Data Duplicacy

2012-02-23 Thread Johan De Meersman
- Original Message -

 From: Adarsh Sharma adarsh.sha...@orkash.com

 Thanks Johan, but I mentioned before that adding an auto increment
 column doesn't solve the issue & it causes errors in the multi-
 threaded application.

If it causes errors, you have other problems than this. 

 Multiple clients call this insert procedure simultaneously, so the
 transactions fail if two or more clients read the same ID
 value.

Maybe I'm missing something, but both auto_increment and last_insert_id() are 
thread-safe, so the normal procedure for handling this is an auto_increment on 
table2.c_id, then 

* insert into table2 
* select last_insert_id() into page_id 
* insert into table1 using page_id 

This will guarantee both unique numbering *and* corresponding IDs without the 
need for locks or transactions. Transactions may be a good idea for other 
reasons, but that's another discussion. 

If you *really* will not add an auto_increment to table2, then create pk_table 
with an auto_increment and use that to grab the page_id. Forget this whole 
locking business; you do not need it with auto_increment and last_insert_id. 
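For clarity, the three-step recipe above might look like this in SQL. This is a sketch only: the real column lists and types aren't in the thread, so the DDL here (the `payload` and `details` columns, the InnoDB engine) is assumed.

```sql
-- Assumed DDL: table2 owns the shared ID via AUTO_INCREMENT.
CREATE TABLE table2 (
    c_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    payload VARCHAR(255)
) ENGINE=InnoDB;

CREATE TABLE table1 (
    c_id    INT UNSIGNED NOT NULL,
    details VARCHAR(255)
) ENGINE=InnoDB;

-- Each connection sees its own LAST_INSERT_ID(), so concurrent
-- clients running these three statements never read each other's value.
INSERT INTO table2 (payload) VALUES ('some page');
SET @page_id = LAST_INSERT_ID();
INSERT INTO table1 (c_id, details) VALUES (@page_id, 'matching row');
```

The per-connection scope of LAST_INSERT_ID() is exactly what makes this safe without explicit locking.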

 I need to insert rows simultaneously with the same C_ID into two
 tables (table 1 & table 2).

There's no such thing as simultaneous. 

 Please let me know if the solution below is genuine or not. I updated
 my procedure & created a table that contains only the ID.

 UPDATE pk_table set c_id=LAST_INSERT_ID(c_id+1);

Nope, won't work, for the very simple reason that you're not using an 
auto_increment, so there simply *is* no last_insert_id(). Go read the 
documentation before you keep arguing, please. 

You've had roughly the same answer from several people who've been doing this 
stuff for ages. If you keep ignoring that advice, I'm simply going to ignore 
this whole thread until you come up with a damn good technical reason why an 
auto_increment isn't an option. We're trying to help you find a solution, but I 
do have better things to do than hit my head against a wall repeatedly. 

-- 
Bier met grenadyn 
Is als mosterd by den wyn 
Sy die't drinkt, is eene kwezel 
Hy die't drinkt, is ras een ezel 


problem after update with a second node (no cluster problem)

2012-02-23 Thread fatcharly
Hi,

I'm running a MySQL cluster with drbd/pacemaker/heartbeat on two CentOS 5.7 
machines. The old version was mysql-server-5.0.77-4.el5_6.6; the new one is 
mysql-server-5.0.95-1.el5_7.1. 
I tried to update the system with as little downtime as possible, so the first 
update via yum was on the passive node (2 drbd devices as slave, no mysqld 
started); that went fine, and everything still looked fine after a reboot. Then 
I tried to switch the database over to that passive (updated) node, but mysqld 
was unable to start on it. I also did the update (and reboot) on the active 
node, and there mysqld has no problem.
This is part of /var/log/mysqld.log:

120223 15:46:44  InnoDB: Page checksum 1575996416, prior-to-4.0.14-form 
checksum 1371122432
InnoDB: stored checksum 0, prior-to-4.0.14-form stored checksum 0
InnoDB: Page lsn 0 0, low 4 bytes of lsn at page end 0
InnoDB: Page number (if stored to page already) 0,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
120223 15:46:44  InnoDB: Error: trying to access a stray pointer 0x80002e217ff8
InnoDB: buf pool start is at 0x2e208000, end at 0x2aaac6208000
InnoDB: Probable reason is database corruption or memory
InnoDB: corruption. If this happens in an InnoDB database recovery, see
InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
InnoDB: how to force recovery.
120223 15:46:44  InnoDB: Assertion failure in thread 47112793548384 in file 
./../include/buf0buf.ic line 268
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
InnoDB: about forcing recovery.
120223 15:46:44 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=0
read_buffer_size=2097152
max_used_connections=0
max_connections=4000
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 
16384000 Kbytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=(nil)
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
frame pointer is NULL, did you compile with
-fomit-frame-pointer? Aborting backtrace!
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
120223 15:46:44  mysqld ended
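
Not an answer to the root cause, but if the goal is just to get the data out: the forcing-recovery page cited in the log amounts to starting mysqld with `innodb_force_recovery` set. A hedged sketch of the relevant my.cnf fragment (level 1 is the mildest; the right level depends on the damage):

```ini
# /etc/my.cnf -- temporary setting: dump your data, then remove it again.
[mysqld]
# Start at 1 and raise only if mysqld still refuses to start.
# Levels 4 and above can permanently lose data.
innodb_force_recovery = 1
```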



Any suggestions are welcome


kind regards

fatcharly










USING WHERE; USING TEMPORARY; USING filesort

2012-02-23 Thread Daevid Vincent
Anyone have any thoughts on how I might optimize this query? It takes about
2 seconds. I know that seems quick, but we get nearly 30,000 hits per second
and so if we can shave this down, it helps. Also we do use memcache, but
even with that, we still see this in the slow-log sometimes. I have indexes
on everything used in this query and even a compound one as you see in the
EXPLAIN.

I'm not going to lose sleep over it, but I thought if there were something
simple, or a way to refactor it, I'd give it a shot. I thought changing the
query to use JOIN ... ON syntax would help, but it didn't really do anything?!
Also, this is all being used from PHP, so I'm fine with pulling things out
into two or three queries if you think that will make it faster.

-- old query:

SELECT sg.`scene_id`, COUNT(*) AS num FROM `scenes_list` AS s, `dvds` AS d,
`scenes_genres` AS sg, `scenes_downloads_new` AS sd WHERE sg.`genre_id` IN
('1','8','10','19','38','58','65') AND d.`dvd_id`=s.`dvd_id` AND
sg.`scene_id`=s.`scene_id` AND d.`status`='ok' AND
d.`date_release`!='0000-00-00' AND sd.`scene_id`=s.`scene_id` GROUP BY
sg.`scene_id` ORDER BY num DESC, sd.`count_dl_monthly` DESC LIMIT 30;

-- refactored: is this the correct order? Smallest table and most filters
first, right, to narrow the dataset as quickly as possible?

-- EXPLAIN
SELECT 
s.`scene_id`,
COUNT(*) AS num 
FROM`dvds` AS d
JOIN `scenes_list` AS s ON  d.`dvd_id` = s.`dvd_id`
AND d.`date_release` != '0000-00-00' 
AND d.`status` = 'ok' 
JOIN `scenes_genres` AS sg ON sg.`scene_id` = s.`scene_id`
AND sg.`genre_id` IN ('1', '8', '10', '19', '38', '58',
'65') 
JOIN `scenes_downloads_new` AS sd ON sd.`scene_id` = s.`scene_id`
GROUP BY sg.`scene_id` 
ORDER BY num DESC, sd.`count_dl_monthly` DESC 
LIMIT 30
;

SELECT COUNT(*) FROM dvds; -- 12181
SELECT COUNT(*) FROM scenes_downloads_new; -- 66054
SELECT COUNT(*) FROM scenes_list; -- 67197
SELECT COUNT(*) FROM scenes_genres; -- 344272

-- why do I still hit the filesort and temp table? how can I get rid of
that?

id  select_type  table  type    possible_keys                                            key             key_len  ref          rows  Extra
--  -----------  -----  ------  -------------------------------------------------------  --------------  -------  -----------  ----  --------------------------------------------
 1  SIMPLE       d      ref     PRIMARY,date_release,STATUS,status_release,dvd_release   status_release  1        const        2436  Using where; Using temporary; Using filesort
 1  SIMPLE       s      ref     PRIMARY,dvd_id_2,dvd_id                                  dvd_id          4        .d.dvd_id    6     Using where
 1  SIMPLE       sd     eq_ref  PRIMARY                                                  PRIMARY         3        .s.scene_id  1     Using where
 1  SIMPLE       sg     ref     PRIMARY,scene_id,genre_id                                scene_id        4        .s.scene_id  5     Using where



If I take off the ORDER BY (just to see what happens), the EXPLAIN output is
basically identical (wrong results, of course), but it does add "Using index"
to the scene_id row (weird).

If I take off the GROUP BY (just to see what happens), the EXPLAIN output is
again basically identical (wrong results, of course), but it does remove
"Using filesort".

Taking them both off is optimal, I suspect; and again, barring the fact that
the results are wrong, the query then takes 1 second. Should I read that to
mean it is what it is, and that's the best I can expect from this multi-join
query? Is there any benefit to splitting it up, and if so, how should I
split it?
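
As for splitting: one possible shape, sketched under the assumption that the genre filter is the main selector and that the tiebreak on `count_dl_monthly` can move into PHP, would be to run the grouping query alone and resolve the ordering in application code:

```sql
-- Step 1: only the grouping query; no ORDER BY, so no filesort here.
SELECT sg.`scene_id`, COUNT(*) AS num
FROM `scenes_genres` AS sg
JOIN `scenes_list`   AS s ON s.`scene_id` = sg.`scene_id`
JOIN `dvds`          AS d ON d.`dvd_id`   = s.`dvd_id`
WHERE sg.`genre_id` IN ('1','8','10','19','38','58','65')
  AND d.`status` = 'ok'
  AND d.`date_release` != '0000-00-00'
GROUP BY sg.`scene_id`;

-- Step 2: fetch the tiebreaker only for the candidate scenes
-- (ids bound from step 1 in PHP), then sort by
-- (num DESC, count_dl_monthly DESC) and cut to 30 in PHP.
SELECT `scene_id`, `count_dl_monthly`
FROM `scenes_downloads_new`
WHERE `scene_id` IN (/* candidate ids from step 1 */);
```

Whether this wins depends on how many scenes survive the grouping; if it's tens of thousands, the single query is likely still faster.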

d

