Re: Session ID Generation

2013-06-21 Thread Denis Jedig

Am 21.06.2013 12:48, schrieb Steven Siebert:


You stated these IDs are sequential...do you know if there is any way to
modify this to utilize a random generation?  Sequential session IDs are
an avenue to session hijacking.


Knowing a session ID does not open up an attack vector by itself. A 
session is tied to a socket, which in turn is a TCP/IP 
network connection. As long as TCP/IP connection hijacking is 
considered infeasible, so is hijacking the corresponding session. If 
connection hijacking is a concern in your environment, consider 
using SSL/TLS as an additional measure against a number of attacks 
- including eavesdropping and data manipulation.


http://www.yassl.com/files/yassl_securing_mysql.pdf

Denis

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: Session ID Generation

2013-06-21 Thread Denis Jedig

Steven,

Am 21.06.2013 13:35, schrieb Steven Siebert:


If the TCP connection is lost... is the session effectively over and
cannot be re-established on another socket?


Yes.


In a mysql client sense, I
would need to re-establish a connection and set my session variables again
rather than just reconnect using the session ID from the dropped
connection?


Yes. There is no way for a client to specify a desired session 
ID. The session ID is only used once - the server notifies the 
client of the ID used in the initial handshake upon connection 
establishment, even before authentication is attempted. Take a 
look at the docs for protocol details:


http://dev.mysql.com/doc/internals/en/connection-phase.html#plain-handshake
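For the record, the connection (thread) id can be read straight out of that initial handshake packet. Below is a minimal sketch - not the official client library - that parses the id from a protocol-version-10 handshake payload; the demo payload is fabricated for illustration:

```python
import struct

def parse_handshake_thread_id(payload: bytes) -> int:
    """Extract the connection (thread) id from a MySQL initial
    handshake packet payload (protocol version 10)."""
    if payload[0] != 0x0A:
        raise ValueError("not a v10 handshake packet")
    # the server version is a NUL-terminated string that follows
    # the single protocol-version byte
    end = payload.index(0, 1)
    # the 4-byte little-endian connection id comes right after the NUL
    (thread_id,) = struct.unpack_from("<I", payload, end + 1)
    return thread_id

# fabricated example: v10, server version "5.6.10", connection id 203
demo = b"\x0a5.6.10\x00" + struct.pack("<I", 203) + b"\x00" * 20
print(parse_handshake_thread_id(demo))  # → 203
```

Note that the id is handed out before authentication, so a passive observer on an unencrypted link sees it too - which is exactly why it carries no security weight.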


I apologize about these basic mysql-mechanics questions - I need to satisfy
our auditors, so I need to understand =)


The auditors should know their trade and not simply press 
requirements they've read about in an IT manager magazine.


Denis




Re: How can I drop a table that is named “logs/#sql-ib203” and appeared after MySQL crash?

2013-06-20 Thread Denis Jedig

Franck,

Am 19.06.2013 21:00, schrieb Franck Dernoncourt:


A table `logs/#sql-ib203` appeared after a MySQL crash


the `#sql-ib` tables (suffixed with a table ID) are created 
temporarily during an ALTER TABLE operation for recovery purposes. 
Apparently, these temporary tables may persist under certain 
circumstances even after recovery has completed.


If you have already tried enclosing the table name in backticks (DROP 
TABLE `#sql-ib203`) and using the DROP TEMPORARY TABLE syntax 
without success, then copying the table along with all its data and 
dropping the original table afterwards - or running `mysqldump 
database tablename > dump.sql` followed by `mysql database < dump.sql` 
for a backup/restore operation - will at least work around the 
problem of being unable to run ALTER TABLE commands against the 
affected main table.
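A sketch of the options above, in the order I would try them (`events` stands in for the affected main table - a hypothetical name):

```sql
-- attempts to remove the orphan directly:
DROP TABLE `#sql-ib203`;
DROP TEMPORARY TABLE `#sql-ib203`;

-- fallback: copy the data out, drop the original, rename back
CREATE TABLE events_copy LIKE events;
INSERT INTO events_copy SELECT * FROM events;
DROP TABLE events;
RENAME TABLE events_copy TO events;
```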


Regards,

Denis




Re: SLAVE aware of binary log file switch?

2013-06-17 Thread Denis Jedig

Am 17.06.2013 13:11, schrieb Mihamina Rakotomandimby:


Say the binary log file (on the master) has reached its maximum
size, so that it has to switch to the next binary log file: does it
inform the SLAVE of that switch so that the SLAVE updates its
information about the MASTER status?


The master does not inform the slave via an immediate 
communication channel, but the slave knows how to keep up because 
the end of the binary log file contains continuation information 
- i.e. the name of the next log file to fetch.
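You can watch this handover from the SQL prompt on both sides; the slave's coordinates simply advance to the next file name once it reads the continuation (rotate) record:

```sql
SHOW MASTER STATUS;   -- on the master: current binary log file and position
SHOW BINARY LOGS;     -- on the master: all log files still available
SHOW SLAVE STATUS\G   -- on the slave: Master_Log_File advances past the switch
```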


Denis




Re: Slow Response -- What Does This Sound Like to You?

2013-05-09 Thread Denis Jedig



Am 09.05.2013 22:58, schrieb Robinson, Eric:


Q: What conditions could cause a single query to lock up a database for a while 
for all users?


From 
http://docs.oracle.com/cd/E17952_01/refman-5.5-en/table-locking.html:


A SELECT statement that takes a long time to run prevents other 
sessions from updating the table in the meantime, making the 
other sessions appear slow or unresponsive. While a session is 
waiting to get exclusive access to the table for updates, other 
sessions that issue SELECT statements will queue up behind it, 
reducing concurrency even for read-only sessions.


You might try using low_priority_updates to mitigate this.
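For reference, low_priority_updates can be set globally, per session, or per statement (table and column names here are placeholders):

```sql
SET GLOBAL low_priority_updates = 1;    -- affects all new connections
SET SESSION low_priority_updates = 1;   -- current connection only

-- or mark individual statements:
UPDATE LOW_PRIORITY some_table SET status = 'done' WHERE id = 42;
INSERT LOW_PRIORITY INTO some_table (id) VALUES (43);
```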

Regards,
--
Denis Jedig
syneticon networks gmbh




Re: Performance of delete using in

2013-04-25 Thread Denis Jedig

Larry,

Am 25.04.2013 02:19, schrieb Larry Martell:


delete from cdsem_event_message_idx where event_id in ()

The in clause has around 1,500 items in it.


Consider creating a temporary table, filling it with your IN 
values and joining it to cdsem_event_message_idx ON event_id for 
deletion.
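A sketch of that approach, with placeholder values standing in for the real 1,500-item list:

```sql
-- load the ids into a keyed temporary table, then delete via a
-- join instead of a huge IN () list
CREATE TEMPORARY TABLE del_ids (event_id INT NOT NULL PRIMARY KEY);
INSERT INTO del_ids VALUES (101), (102), (103);  -- ... your 1,500 values

DELETE idx
FROM cdsem_event_message_idx AS idx
JOIN del_ids USING (event_id);

DROP TEMPORARY TABLE del_ids;
```

The PRIMARY KEY on the temporary table gives the optimizer an index to join on, which is where the speedup over a long IN () list comes from.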


Kind regards,

Denis Jedig




Re: Design help

2013-04-21 Thread Denis Jedig

Neil,

Am 21.04.2013 08:47, schrieb Neil Tompkins:


Using joins I can obtain which country each city belongs to.  However,
should I consider putting a foreign key in the CITIES table referencing the
countries_id?  Or is it sufficient to access using a join?


It depends. Duplicating country data into the cities table 
would break normalization and would require you to maintain the 
redundant copy yourself (e.g. through the use of ON UPDATE triggers).


It might be beneficial to do so if you have a high number of 
queries for cities filtering for countries - having a direct 
reference obviously would spare you a JOIN execution and at least 
two index lookups.


In your current example, however, the data set will typically be 
small enough (on the order of 1,000 - 10,000 cities) that query 
performance certainly would not be enough of an issue to 
justify the denormalization[1].
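To illustrate the trade-off with a hypothetical schema: the normalized lookup needs a JOIN, whereas the denormalized variant carries a redundant country column in cities that must be kept in sync:

```sql
-- normalized: one JOIN, at least two index lookups
SELECT ci.name, co.name AS country
FROM cities ci
JOIN countries co ON co.countries_id = ci.countries_id;

-- denormalized: country name duplicated into the cities table
SELECT name, country_name
FROM cities
WHERE country_name = 'France';
```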


[1] http://en.wikipedia.org/wiki/Denormalization
--
Denis Jedig




Re: Troubleshoot excessive memory usage in InnoDB

2013-04-20 Thread Denis Jedig

19.04.2013 23:39, Ilya Kazakevich:


Try using tuning-primer.sh: this script reads your variables and prints
the memory size you need for them.


I tried that. The results are inconspicuous:

MEMORY USAGE
Max Memory Ever Allocated : 5.27 G
Configured Max Per-thread Buffers : 1.92 G
Configured Max Global Buffers : 5.15 G
Configured Max Memory Limit : 7.07 G
Physical Memory : 22.98 G
Max memory limit seem to be within acceptable norms

Admittedly, the logic behind the tuning-primer script is rather 
simple, and I understand that predicting MySQL's memory usage is 
much harder: 
http://www.mysqlperformanceblog.com/2012/03/21/troubleshooting-mysql-memory-usage/


20.04.2013 00:26, Rick James:

 What's the STATUS value of Threads_running?  If it really is
 ~60-100 connection threads, then there could be any of a few
 temp allocations for the queries.  Some allocations are
 per-subquery.

Usually around 2-4. I also checked whether killing / resetting 
existing (idle) connections would significantly reduce memory 
usage once mysqld has reached ~20 GB - it does not, so this is 
either not related to connection state, or the memory is leaking 
from there in a way that is unaffected by closing the 
connection.


 Is the system I/O bound?  Or CPU bound?  Or neither?

Neither - the system has plenty of headroom for both. The data 
working set easily fits into RAM, and the amount of UPDATEs is 
negligible (resulting in < 100 write requests per second for the 
I/O subsystem). The 1-minute load average is 2-3 under normal 
(non-swapping) conditions with 6 CPU cores available.


 I recommend you optimize the queries.

I cannot do much about that. I am the infrastructure guy who is 
fixing the obviously broken DBMS. What I still cannot figure 
out is whether the behavior is due to a misconfiguration or a 
regression / bug to file. And the MySQL counters are not exactly 
helping - it is completely opaque to me where the memory is going.


--
Denis Jedig




Troubleshoot excessive memory usage in InnoDB

2013-04-19 Thread Denis Jedig

Hi all.

On a specific MySQL 5.6.10 installation using InnoDB tables, I 
am observing unusual memory consumption patterns. Memory 
usage grows constantly - even beyond the physical memory 
limits. The entire on-disk storage is 41 GB (uncompressed), yet 
memory usage happily grows to values larger than 50 GB.


The databases mainly experience read load with complex queries 
and subSELECTs, with ~60-100 connection threads running.


Although the docs state that there should be no memory leaks, 
this case certainly looks like one at first glance.


http://dev.mysql.com/doc/refman/5.6/en/memory-use.html suggests 
that temporary in-memory tables account for many such allocations, 
so I suspected unfreed temporary tables to be the culprit. But the 
memory usage growth rate did not change significantly even after 
lowering tmp_table_size to 2M (from 64M). Also, I have been 
unable to find a way to determine the size of in-memory temporary 
tables at any given time.
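The counters I could find only show how many temporary tables were created, not how much memory they currently hold (and note the in-memory limit is the smaller of tmp_table_size and max_heap_table_size):

```sql
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';
SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';
```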


Some of the STATUS counters:

| Com_select| 424614  |
| Com_update| 3444|
| Created_tmp_disk_tables   | 1716|
| Created_tmp_files | 43  |
| Created_tmp_tables| 4002|
| Uptime| 5112|

The total number of tables over all databases is 1370. my.cnf 
contains the following memory-related values:


max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 8
max_connections         = 1000
innodb_buffer_pool_size = 5000M
innodb_log_file_size    = 256M
innodb_flush_method     = O_DIRECT
query_cache_limit       = 1M
query_cache_size        = 256M
join_buffer_size        = 256k
tmp_table_size          = 2M
max_heap_table_size     = 64M
read_buffer_size        = 1M
ft_min_word_len         = 3
open_files_limit        = 1
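As a sanity check, the rough worst-case formula that tools like tuning-primer apply to such values can be reproduced in a few lines. This sketch only sums the buffers listed above (the script itself folds in more variables such as sort_buffer_size, hence its different totals):

```python
MB, KB = 1 << 20, 1 << 10

# global buffers from the my.cnf above
global_buffers = 5000 * MB + 256 * MB  # innodb_buffer_pool_size + query_cache_size

# per-connection allocations from the my.cnf above:
# read_buffer_size + join_buffer_size + thread_stack + tmp_table_size
per_thread = 1 * MB + 256 * KB + 192 * KB + 2 * MB
max_connections = 1000

# theoretical worst case: every allowed connection maxes out its buffers
worst_case = global_buffers + per_thread * max_connections
print(round(worst_case / (1 << 30), 2), "GiB")  # → 8.49 GiB
```

Even this pessimistic bound stays far below the 20+ GB the process actually reaches, which is what makes the observed growth look like a leak rather than legitimate buffer usage.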

A replication slave of this very host is running 5.6.10 with 
MyISAM tables and the mysqld process does not exceed 1 GB in 
memory utilization even after several hours of operation under 
similar load.


I have posted a question to 
http://dba.stackexchange.com/questions/40413 which I will 
update with further information as I get it.


Any hints on how to hunt down the resource hog are greatly appreciated,
--
Denis Jedig
