locked non-existent row

2011-08-30 Thread Peter Brawley
While a transaction in one connection is updating a non-existent InnoDB 
row with a given key value, an attempt to insert that value from another 
connection is blocked. Does anyone know where this behaviour is documented?


-- connection 1
drop table if exists t;
create table t( lockid char(3), lockinfo char(8),
  primary key(lockid,lockinfo) ) engine=innodb;  -- the behaviour below is InnoDB-specific

insert into t values('abc','def');
begin work;
update t set lockinfo='bar' where lockid='foo';

-- connection 2:
insert into t values('aaa','bbb');
Query OK, 1 row affected (0.00 sec)
insert into t values('foo','bar'); -- waits for connection 1 transaction
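
This looks like InnoDB gap / insert-intention locking on the range where
'foo' would be inserted. A minimal way to see what connection 2 is waiting
on, assuming MySQL 5.5+ where the InnoDB INFORMATION_SCHEMA tables are
available (older servers can read the TRANSACTIONS section of SHOW ENGINE
INNODB STATUS instead):

-- connection 3: inspect the lock wait while connection 2 is blocked
SELECT r.trx_id    AS waiting_trx,
       r.trx_query AS waiting_query,
       b.trx_id    AS blocking_trx,
       b.trx_query AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;

-- or, on any version:
SHOW ENGINE INNODB STATUS\G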

PB






Re: utility of an index

2011-08-30 Thread Dave Dyer
At 12:43 AM 8/30/2011, mysql-plain-digest-h...@lists.mysql.com wrote:
>Not useful to add an index for that. I also wonder why the value is null 
>(meaning: unknown, not certain) for almost all records. 

It depends on whether you want the forest or the trees.  A frequently executed
query asks for just the robots.  An index is an efficient way to select
6 rows out of 20,000.  As for why the rest are null: null is just a random
"everything else" value, left over from before the "is_robot" column was
added.

Is there a better way to mark a small number of "special" records in a
big data set? 
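
For what it's worth, a plain secondary index on the flag column is the usual
way to do this; a minimal sketch, where the table name "weblog" is made up
for illustration and only the "is_robot" column comes from this thread:

ALTER TABLE weblog ADD INDEX ndx_is_robot (is_robot);

-- the handful of flagged rows can then be picked out via the index;
-- test the flag value rather than IS NULL, since the NULL rows are
-- nearly the whole table
SELECT * FROM weblog WHERE is_robot = 1;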





Re: Query Optimization

2011-08-30 Thread Michael Dykman
There are a few things going on, but mainly it is the ORs that are killing
you.

As your query requires OR to examine two distinct columns, both of equal
relevance to the query, MySQL is left with no choice but to do a full table
scan on what might be (at a guess) a very large table.  No amount of indexing
will fix this for the query as presented.

You would be better off writing it as two distinct queries, each concerned
with conditions on a single column (open_dt and close_dt), and then UNIONing
the results. In this form, the indexes have a chance of being engaged.
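
A sketch of that rewrite, reusing the query and literals quoted below. It is
untested and assumes usable indexes on open_dt and close_dt exist or are
added:

(SELECT
    sc.open_dt, sc.close_dt, sc.protocol,
    INET_NTOA( sc.src_address ) AS src_address, sc.src_port,
    INET_NTOA( sc.dst_address ) AS dst_address, sc.dst_port,
    sc.sent, sc.rcvd,
    spm.desc AS src_port_desc, dpm.desc AS dst_port_desc
 FROM firewall_connections AS sc
    LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
    LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE sc.open_dt >= '2011-08-30 00:00:00'
   AND sc.open_dt <= '2011-08-30 12:36:53')
UNION
(SELECT
    sc.open_dt, sc.close_dt, sc.protocol,
    INET_NTOA( sc.src_address ) AS src_address, sc.src_port,
    INET_NTOA( sc.dst_address ) AS dst_address, sc.dst_port,
    sc.sent, sc.rcvd,
    spm.desc AS src_port_desc, dpm.desc AS dst_port_desc
 FROM firewall_connections AS sc
    LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
    LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE sc.close_dt >= '2011-08-30 00:00:00'
   AND sc.close_dt <= '2011-08-30 12:36:53')
ORDER BY rcvd DESC
LIMIT 0, 10;

UNION (rather than UNION ALL) is deliberate here, so a row that matches on
both columns is not returned twice.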

Once the indexes are engaged, you probably want to convert your DATE/DATETIME
strings into actual DATEs or DATETIMEs, thus:

 ...
   (open_dt >= DATE('2011-08-30 00:00:00'))
 ...

In its current state, the DATE fields are being converted to strings
implicitly for every row tested, which further frustrates index usage, as the
index is against the quasi-numeric DATE, not the string representation that
your current implementation appears to expect.  This query would also
suddenly begin to fail entirely if the server's default date format were
modified by an admin or a future release of MySQL.  The explicit casting I
have suggested will protect you against that.

 - michael dykman
On Tue, Aug 30, 2011 at 12:45 PM, Brandon Phelps  wrote:

> Hello,
>
> I have the following query I'd like to optimize a bit:
>
> SELECT
>sc.open_dt,
>sc.close_dt,
>sc.protocol,
>INET_NTOA( sc.src_address ) AS src_address,
>sc.src_port,
>INET_NTOA( sc.dst_address ) AS dst_address,
>sc.dst_port,
>sc.sent,
>sc.rcvd,
>spm.desc AS src_port_desc,
>dpm.desc AS dst_port_desc
> FROM firewall_connections AS sc
>LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
>LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
> WHERE
>(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
> 00:00:00')
>AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30
> 12:36:53')
> ORDER BY rcvd DESC
> LIMIT 0 , 10
>
> Currently I have an index on the rcvd column which seems to be working
> based on the output of EXPLAIN:
>
> id  select_type  table  type    possible_keys  key       key_len  ref                 rows  Extra
> 1   SIMPLE       sc     index   open_dt        ndx_rcvd  4        NULL                10    Using where
> 1   SIMPLE       spm    eq_ref  PRIMARY        PRIMARY   2        syslog.sc.src_port  1
> 1   SIMPLE       dpm    eq_ref  PRIMARY        PRIMARY   2        syslog.sc.dst_port  1
>
> However the query is still fairly slow for some reason, any ideas how I
> could speed it up with additional indexes, etc?
>
> The values I am using in the WHERE clause are variable and change each
> time.
>
> The table has around 23 million records right now but is expected to
> continue to grow up to a potential 150 million.
>
> Here is the table schema:
> CREATE TABLE IF NOT EXISTS `firewall_connections` (
>  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
>  `open_dt` datetime NOT NULL,
>  `close_dt` datetime NOT NULL,
>  `protocol` smallint(6) NOT NULL,
>  `src_address` int(10) unsigned NOT NULL,
>  `src_port` smallint(5) unsigned NOT NULL,
>  `dst_address` int(10) unsigned NOT NULL,
>  `dst_port` smallint(5) unsigned NOT NULL,
>  `sent` int(10) unsigned NOT NULL,
>  `rcvd` int(10) unsigned NOT NULL,
>  PRIMARY KEY (`id`),
>  KEY `ndx_rcvd` (`rcvd`),
>  KEY `ndx_sent` (`sent`)
> ) ENGINE=MyISAM  DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
>
> Thanks in advance!
>
> --
> Brandon
>
>
>


-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Query Optimization

2011-08-30 Thread Brandon Phelps

Hello,

I have the following query I'd like to optimize a bit:

SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA( sc.src_address ) AS src_address,
sc.src_port,
INET_NTOA( sc.dst_address ) AS dst_address,
sc.dst_port,
sc.sent,
sc.rcvd,
spm.desc AS src_port_desc,
dpm.desc AS dst_port_desc
FROM firewall_connections AS sc
LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
WHERE
(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30 00:00:00')
AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30 
12:36:53')
ORDER BY rcvd DESC
LIMIT 0 , 10

Currently I have an index on the rcvd column which seems to be working 
based on the output of EXPLAIN:


id  select_type  table  type    possible_keys  key       key_len  ref                 rows  Extra
1   SIMPLE       sc     index   open_dt        ndx_rcvd  4        NULL                10    Using where
1   SIMPLE       spm    eq_ref  PRIMARY        PRIMARY   2        syslog.sc.src_port  1
1   SIMPLE       dpm    eq_ref  PRIMARY        PRIMARY   2        syslog.sc.dst_port  1

However the query is still fairly slow for some reason. Any ideas how I 
could speed it up with additional indexes, etc.?


The values I am using in the WHERE clause are variable and change each time.

The table has around 23 million records right now but is expected to 
continue to grow up to a potential 150 million.


Here is the table schema:
CREATE TABLE IF NOT EXISTS `firewall_connections` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `open_dt` datetime NOT NULL,
  `close_dt` datetime NOT NULL,
  `protocol` smallint(6) NOT NULL,
  `src_address` int(10) unsigned NOT NULL,
  `src_port` smallint(5) unsigned NOT NULL,
  `dst_address` int(10) unsigned NOT NULL,
  `dst_port` smallint(5) unsigned NOT NULL,
  `sent` int(10) unsigned NOT NULL,
  `rcvd` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  KEY `ndx_rcvd` (`rcvd`),
  KEY `ndx_sent` (`sent`)
) ENGINE=MyISAM  DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

Thanks in advance!

--
Brandon




Exceptions not caught in UDF written in C++ in MySQL 5.5

2011-08-30 Thread H. Steuer

Hello guys,

after upgrading from MySQL 5.1 to 5.5 we have encountered a problem: C++ 
exceptions thrown in a UDF are not caught at all. Even though there is a 
"catch (...)" at the end of our try block, which should prevent exceptions 
from propagating outside of the UDF, the server crashes with:


terminate called after throwing an instance of 'DATE::DateException'
110830  8:27:34 - mysqld got signal 6 ;

Even an attempt to catch DATE::DateException explicitly was no success. The 
very same code was verified with 5.1 and worked out of the box.



Any ideas?

Cheers,
Heri




