Re: Conditional in update

2014-02-12 Thread Larry Martell
On Wed, Feb 12, 2014 at 5:35 PM,   wrote:
> 2014/02/11 18:14 -0500, Larry Martell 
> set LIMIT = sign(LIMIT) * 100 * floor(0.01 + (sign(LIMIT) * LIMIT
> * ratio/100))
> 
> The function TRUNCATE can be useful here:
> set LIMIT = TRUNCATE(LIMIT * ratio + 0.01 * sign(LIMIT), -2)
> , if it works as advertised. In any case,
> ABS(LIMIT) = sign(LIMIT) * LIMIT.
>
> As for limiting the value, see this (clipping can be useful to you):
>
>
> 11.2.6. Out-of-Range and Overflow Handling
>
> When MySQL stores a value in a numeric column that is outside the permissible 
> range of the column data type, the result depends on the SQL mode in effect 
> at the time:
>
> * If strict SQL mode is enabled, MySQL rejects the out-of-range value 
> with an error, and the insert fails, in accordance with the SQL standard.
> * If no restrictive modes are enabled, MySQL clips the value to the 
> appropriate endpoint of the range and stores the resulting value instead. 
> When an out-of-range value is assigned to an integer column, MySQL stores the 
> value representing the corresponding endpoint of the column data type range. 
> If you store 256 into a TINYINT or TINYINT UNSIGNED column, MySQL stores
> 127 or 255, respectively.
> When a floating-point or fixed-point column is assigned a value that 
> exceeds the range implied by the specified (or default) precision and scale, 
> MySQL stores the value representing the corresponding endpoint of that range.
>
> Column-assignment conversions that occur due to clipping when MySQL is not
> operating in strict mode are reported as warnings for ALTER TABLE,
> LOAD DATA INFILE, UPDATE, and multiple-row INSERT statements. In strict mode,
> these statements fail, and some or all of the values will not be inserted or
> changed, depending on whether the table is a transactional table and other
> factors. For details, see Section 5.1.7, “Server SQL Modes”.


Thanks for the reply. I was able to do this with a CASE statement, but
then the requirements changed and I needed to know when I had constrained
the limit so I could log it to a file. So I ended up doing the update as it
was originally written, then running a SELECT afterwards to find any rows
that exceeded the limit, updating those to the max or min, and logging them
to a file.
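
For illustration, a rough sketch of that sequence (table name, column names,
ratio, and the @min/@max placeholders are all hypothetical; the real formula
and bounds come from the application):

-- Scale as in the original UPDATE (names and ratio are placeholders).
UPDATE settings
SET lim = SIGN(lim) * 100 * FLOOR(0.01 + (SIGN(lim) * lim * ratio / 100));

-- Find the rows that now fall outside the allowed range, so they can be logged.
SELECT id, lim FROM settings WHERE lim > @max_lim OR lim < @min_lim;

-- Clamp the out-of-range rows to the endpoints.
UPDATE settings SET lim = @max_lim WHERE lim > @max_lim;
UPDATE settings SET lim = @min_lim WHERE lim < @min_lim;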




Re: Narrowing a SELECT statement by multiple hits

2014-02-12 Thread Larry Martell
On Wed, Feb 12, 2014 at 7:35 PM, Jennifer  wrote:
> Hello,
>
> I have the following SQL statement that I'd like to add to.  It's 
> used to create a report that's emailed to me showing hits to our site that 
> didn't provide a referrer.  However, I only want to report on multiple hits 
> from the same IP address - not just a single hit by someone.
>
> How can I add a condition to only show hits by someone who's hit the 
> site 2 or more times with the same IP?  I tried GROUP BY but that didn't 
> return all the hits - only one per IP.
>
> SELECT `ip`,`page`,`url`,`time_stamp`
> FROM `ip_addresses`
> WHERE (`time_stamp` BETWEEN date_add( CURDATE(), INTERVAL -1 DAY ) AND 
> CURDATE() - INTERVAL 1 SECOND)
> AND TRIM(`referrer`) LIKE ''
> ORDER BY INET_ATON(`ip`), `time_stamp`
>
> I hope I'm explaining this correctly.

Try adding a having clause, e.g.:

SELECT `ip`,`page`,`url`,`time_stamp`
FROM `ip_addresses`
WHERE (`time_stamp` BETWEEN date_add( CURDATE(), INTERVAL -1 DAY ) AND
CURDATE() - INTERVAL 1 SECOND)
AND TRIM(`referrer`) LIKE ''
HAVING COUNT(ip) >= 2
ORDER BY INET_ATON(`ip`), `time_stamp`
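
If the HAVING on the ungrouped query doesn't give you every individual hit,
another option (an untested sketch, using the same table and columns as your
query) is to find the repeat IPs in a derived table and join back so that all
of their rows are kept:

SELECT a.`ip`, a.`page`, a.`url`, a.`time_stamp`
FROM `ip_addresses` a
JOIN (
    SELECT `ip`
    FROM `ip_addresses`
    WHERE `time_stamp` BETWEEN DATE_ADD(CURDATE(), INTERVAL -1 DAY)
                           AND CURDATE() - INTERVAL 1 SECOND
      AND TRIM(`referrer`) LIKE ''
    GROUP BY `ip`
    HAVING COUNT(*) >= 2
) b ON b.`ip` = a.`ip`
WHERE (a.`time_stamp` BETWEEN DATE_ADD(CURDATE(), INTERVAL -1 DAY)
                          AND CURDATE() - INTERVAL 1 SECOND)
  AND TRIM(a.`referrer`) LIKE ''
ORDER BY INET_ATON(a.`ip`), a.`time_stamp`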




Narrowing a SELECT statement by multiple hits

2014-02-12 Thread Jennifer
Hello,

I have the following SQL statement that I'd like to add to.  It's used 
to create a report that's emailed to me showing hits to our site that didn't 
provide a referrer.  However, I only want to report on multiple hits from the 
same IP address - not just a single hit by someone.  

How can I add a condition to only show hits by someone who's hit the 
site 2 or more times with the same IP?  I tried GROUP BY but that didn't return 
all the hits - only one per IP.

SELECT `ip`,`page`,`url`,`time_stamp`
FROM `ip_addresses` 
WHERE (`time_stamp` BETWEEN date_add( CURDATE(), INTERVAL -1 DAY ) AND 
CURDATE() - INTERVAL 1 SECOND)
AND TRIM(`referrer`) LIKE '' 
ORDER BY INET_ATON(`ip`), `time_stamp`

I hope I'm explaining this correctly.

Thank you,
Jenni

Superior Shelving Systems
The (Storage|Office|Home|Warehouse) Shelving Specialists
Since 1984

Computer Workstations:
http://www.superiorshelving.com/mfg/nexel/pages/lan-workstations-nexel.php





Re: Conditional in update

2014-02-12 Thread hsv
 2014/02/11 18:14 -0500, Larry Martell 
set LIMIT = sign(LIMIT) * 100 * floor(0.01 + (sign(LIMIT) * LIMIT
* ratio/100))

The function TRUNCATE can be useful here:
set LIMIT = TRUNCATE(LIMIT * ratio + 0.01 * sign(LIMIT), -2)
, if it works as advertised. In any case,
ABS(LIMIT) = sign(LIMIT) * LIMIT.
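
For example (illustrative numbers only, assuming LIMIT is 12345 and ratio is
0.97):

SELECT TRUNCATE(12345 * 0.97 + 0.01 * SIGN(12345), -2);
-- 12345 * 0.97 + 0.01 = 11974.66, and TRUNCATE(..., -2) zeroes the last two
-- digits before the decimal point, giving 11900.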

As for limiting the value, see this (clipping can be useful to you):


11.2.6. Out-of-Range and Overflow Handling

When MySQL stores a value in a numeric column that is outside the permissible 
range of the column data type, the result depends on the SQL mode in effect at 
the time: 

* If strict SQL mode is enabled, MySQL rejects the out-of-range value with 
an error, and the insert fails, in accordance with the SQL standard. 
* If no restrictive modes are enabled, MySQL clips the value to the 
appropriate endpoint of the range and stores the resulting value instead. When 
an out-of-range value is assigned to an integer column, MySQL stores the value 
representing the corresponding endpoint of the column data type range. If you 
store 256 into a TINYINT or TINYINT UNSIGNED column, MySQL stores 127 or 255,
respectively.
When a floating-point or fixed-point column is assigned a value that 
exceeds the range implied by the specified (or default) precision and scale, 
MySQL stores the value representing the corresponding endpoint of that range. 

Column-assignment conversions that occur due to clipping when MySQL is not
operating in strict mode are reported as warnings for ALTER TABLE,
LOAD DATA INFILE, UPDATE, and multiple-row INSERT statements. In strict mode,
these statements fail, and some or all of the values will not be inserted or
changed, depending on whether the table is a transactional table and other
factors. For details, see Section 5.1.7, “Server SQL Modes”.
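
A quick way to see the clipping-versus-strict behaviour described above (a
throwaway-table sketch; the exact warning and error text depends on the
server version):

CREATE TABLE t_clip (c TINYINT);
SET SESSION sql_mode = '';               -- non-strict: out-of-range values are clipped
INSERT INTO t_clip VALUES (256);         -- stored as 127, with a warning
SET SESSION sql_mode = 'STRICT_ALL_TABLES';
INSERT INTO t_clip VALUES (256);         -- rejected with an out-of-range error
DROP TABLE t_clip;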





Re: DBA: please review my.cnf [for Java Hibernate application] and give suggestions

2014-02-12 Thread Manuel Arostegui
2014-02-12 12:32 GMT+01:00 Lukas Lehner :

> Hi Antonio
>
> all tables use InnoDB. The size is 27 GB (not yet in prod). I guess in prod
> it will be almost 80GB.
>

Depending on how your application is going to use MySQL resources, you will
need to tweak some things (and not only in MySQL).
Whether it is going to be CPU bound, IO bound, etc., there are different
scenarios.

Anyway, some general things to take a look at:

- Use file-per-table (innodb_file_per_table) if possible. This won't give you
extra performance, but it helps if you run into disk space issues and for
future table migrations.
- Make sure you have trx_commit (innodb_flush_log_at_trx_commit) and
sync_binlog disabled (make sure you understand what this means and what
problems you could have during an unexpected crash); see the my.cnf sketch
after this list.
- If you're expecting a lot of temporary tables (filesorts), make sure tmpdir
is on a fast disk.
- Use NUMA-aware memory handling.
- Make sure you test different disk schedulers (depending on whether you have
RAID and which kind) and see how they perform.
- You might want to take a look at SMP IRQ affinity and see how it could
impact your system.
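
A minimal my.cnf fragment illustrating the first two points (values and the
tmpdir path are only placeholders to adapt; relaxing durability like this
trades crash safety for write speed):

[mysqld]
# One .ibd file per InnoDB table; eases reclaiming space and moving tables
innodb_file_per_table = 1
# Relaxed durability: flush the InnoDB redo log about once per second instead
# of at every commit, and let the OS decide when to sync the binary log.
# An unexpected crash can lose roughly the last second of transactions.
innodb_flush_log_at_trx_commit = 2
sync_binlog = 0
# Put temporary tables / filesorts on a fast disk (example path)
tmpdir = /fast-disk/mysql-tmp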

Manuel.


Re: DBA: please review my.cnf [for Java Hibernate application] and give suggestions

2014-02-12 Thread Antonio Fernández Pérez
Hi Lukas,

In that case, as Adarsh said, you can configure up to 70% of your RAM for
innodb_buffer_pool_size.
In your case, with 3GB of RAM, I suggest configuring up to 2GB for MySQL:
a minimal amount for MyISAM (maybe 32MB) and the rest for InnoDB. Your problem
will be loading data: your application may load data slowly because there is
more data than RAM.

By executing the following queries, you can see the optimal InnoDB buffer
size for your data.

SELECT CONCAT(ROUND(KBS/POWER(1024,
IF(PowerOf1024<0,0,IF(PowerOf1024>3,0,PowerOf1024)))+0.4),
SUBSTR(' KMG',IF(PowerOf1024<0,0,
IF(PowerOf1024>3,0,PowerOf1024))+1,1)) recommended_innodb_buffer_pool_size
FROM (SELECT SUM(data_length+index_length) KBS FROM
information_schema.tables
WHERE engine='InnoDB') A,
(SELECT 3 PowerOf1024) B;

SELECT CONCAT(CEILING(RIBPS/POWER(1024,pw)),SUBSTR(' KMGT',pw+1,1))
Recommended_InnoDB_Buffer_Pool_Size FROM
(
SELECT RIBPS,FLOOR(LOG(RIBPS)/LOG(1024)) pw
FROM
(
SELECT SUM(data_length+index_length)*1.1*growth RIBPS
FROM information_schema.tables AAA,
(SELECT 1 growth) BBB
WHERE ENGINE='InnoDB'
) AA
) A;

Good luck!

Regards,

Antonio.


Re: DBA: please review my.cnf [for Java Hibernate application] and give suggestions

2014-02-12 Thread Lukas Lehner
Hi

it's also a Tomcat application server, not a dedicated MySQL instance.


On Wed, Feb 12, 2014 at 11:28 AM, Adarsh Sharma wrote:

> Is it a standalone DB server, or is the application also hosted on top of it?
>
> You can give 50-70% of RAM to memory parameters like
> innodb_buffer_pool_size (InnoDB) and key_cache (MyISAM) for MySQL tables.
>
> The link below will give you a brief idea:
> http://mysql.rjweb.org/doc.php/memory
>
> Thanks
>


Re: DBA: please review my.cnf [for Java Hibernate application] and give suggestions

2014-02-12 Thread Lukas Lehner
Hi Antonio

all tables use InnoDB. The size is 27 GB (not yet in prod). I guess in prod
it will be almost 80GB.

thanks


On Wed, Feb 12, 2014 at 10:28 AM, Antonio Fernández Pérez <
antoniofernan...@fabergames.com> wrote:

> Hi Lukas,
>
> What is your default engine? In MySQL there are a lot of parameters that
> configure the engine's behaviour. Depending on the engine, I would suggest
> adding certain parameters or others.
>
> Also, it's important to know the size of your data. Your configuration is
> minimal, and the defaults are not optimal.
>
> Regards,
>
> Antonio.
>
>


Re: DBA: please review my.cnf [for Java Hibernate application] and give suggestions

2014-02-12 Thread Adarsh Sharma
Is it a standalone DB server, or is the application also hosted on top of it?

You can give 50-70% of RAM to memory parameters like
innodb_buffer_pool_size (InnoDB) and key_cache (MyISAM) for MySQL tables
(see the sizing sketch below).

The link below will give you a brief idea:
http://mysql.rjweb.org/doc.php/memory
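
As a rough illustration only (placeholder number, assuming the 32GB box shown
elsewhere in the thread were dedicated to MySQL; with Tomcat on the same host
this would need to be lower):

[mysqld]
# Roughly 60% of 32GB of RAM for the InnoDB buffer pool
innodb_buffer_pool_size = 20G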

Thanks


RE: LIKE sql optimization

2014-02-12 Thread Zhigang Zhang
Done.

Thank you very much!

Zhigang

From: Jesper Wisborg Krogh [mailto:my...@wisborg.dk] 
Sent: Wednesday, February 12, 2014 5:30 PM
To: Morgan Tocker; Zhigang Zhang
Cc: mysql@lists.mysql.com
Subject: Re: LIKE sql optimization

 

On 12/02/2014 13:16, Morgan Tocker wrote:

Hi Zhigang,
 
On Feb 11, 2014, at 8:48 PM, Zhigang Zhang  
 wrote:
 

I want to know the reason; in my opinion, scanning the smaller index data has
better performance than scanning the whole table data.

 
I think I understand the question - you are asking why MySQL will not index
scan, find matching records, and then look them up rather than table scan?
 
I believe the answer is that there is no way of knowing if 1 row matches, or
all rows match.  In the worst case (where all rows match), it is much more
expensive to traverse between index and data rows for-each-record.
 
So a table scan is a “safe choice" / has less variance.


In addition to what Morgan writes, with an index scan you will end up
doing a lot of random I/O: even if the index scan itself is one sequential
scan (which is not guaranteed) then for each match, it will be necessary to
look up the actual row. On the other hand a table scan will generally be
more of a sequential read as you already have all the data available for
each match. Random I/O is more expensive than sequential I/O - particularly
on spinning disks - so in general the optimizer will try to reduce the
amount of random I/O.

In some cases though, you may see the index scan be preferred. Assume you
have a query like:

SELECT val FROM table WHERE condition LIKE '%abcd';

and you have an index (condition, val) or (val, condition) then the whole
query can be satisfied from the index (it's called a covering index). In
that case the index scan is usually preferred over the table scan.


For the purpose of using an index to do index lookups to find the matching
rows rather than doing either a table or index scan for WHERE clauses like
"LIKE '%abcd'" you can do a couple of things:

*   Duplicate the column used in the WHERE clause, but reverse the
string. That way the above WHERE clause becomes: WHERE condition_revers LIKE
'dcba%'
This can use an index as it is a left prefix.
*   If you always look for around the same number of characters at the
end in your WHERE clause, you can create a column with just those last
characters, e.g. so the WHERE clause becomes: WHERE condition_suffix =
'abcd'
Do however be careful that you ensure you have enough selectivity that way.
If for example 90% of the rows end in 'abcd', an index will not do you much
good (unless you are looking for the last 10% of the rows).


Best regards,
Jesper Krogh
MySQL Support



Re: LIKE sql optimization

2014-02-12 Thread Jesper Wisborg Krogh
On 12/02/2014 13:16, Morgan Tocker wrote:
> Hi Zhigang,
>
> On Feb 11, 2014, at 8:48 PM, Zhigang Zhang  wrote:
>
>> I want to know the reason; in my opinion, scanning the smaller index data has
>> better performance than scanning the whole table data.
> I think I understand the question - you are asking why MySQL will not index 
> scan, find matching records, and then look them up rather than table scan?
>
> I believe the answer is that there is no way of knowing if 1 row matches, or 
> all rows match.  In the worst case (where all rows match), it is much more 
> expensive to traverse between index and data rows for-each-record.
>
> So a table scan is a “safe choice" / has less variance.

In addition to what Morgan writes, with an index scan you will end
up doing a lot of random I/O: even if the index scan itself is one
sequential scan (which is not guaranteed) then for each match, it will
be necessary to look up the actual row. On the other hand a table scan
will generally be more of a sequential read as you already have all the
data available for each match. Random I/O is more expensive than
sequential I/O - particularly on spinning disks - so in general the
optimizer will try to reduce the amount of random I/O.

In some cases though, you may see the index scan be preferred. Assume
you have a query like:

SELECT val FROM table WHERE condition LIKE '%abcd';

and you have an index (condition, val) or (val, condition) then the
whole query can be satisfied from the index (it's called a covering
index). In that case the index scan is usually preferred over the table
scan.
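
For instance (hypothetical table and column names), if there is an index
covering both columns, EXPLAIN should show it being used even for a
leading-wildcard pattern:

ALTER TABLE hits ADD INDEX idx_cond_val (`condition`, val);
EXPLAIN SELECT val FROM hits WHERE `condition` LIKE '%abcd';
-- The Extra column typically reports "Using where; Using index", meaning the
-- query is answered from the index without touching the table rows.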


For the purpose of using an index to do index lookups to find the
matching rows rather than doing either a table or index scan for WHERE
clauses like "LIKE '%abcd'" you can do a couple of things:

  * Duplicate the column used in the WHERE clause, but reverse the
string. That way the above WHERE clause becomes: WHERE
condition_revers LIKE 'dcba%'
This can use an index as it is a left prefix (see the sketch after this list).
  * If you always look for around the same number of characters at the
end in your WHERE clause, you can create a column with just those
last characters, e.g. so the WHERE clause becomes: WHERE
condition_suffix = 'abcd'
Do however be careful that you ensure you have enough selectivity
that way. If for example 90% of the rows end in 'abcd', an index
will not do you much good (unless you are looking for the last 10%
of the rows).
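
A sketch of the first suggestion (hypothetical table and column names; on
these MySQL versions the reversed copy has to be kept up to date by the
application or a trigger):

ALTER TABLE hits ADD COLUMN condition_revers VARCHAR(255);
UPDATE hits SET condition_revers = REVERSE(`condition`);
CREATE INDEX idx_condition_revers ON hits (condition_revers);

-- The suffix search can now use the index as a left prefix
-- ('dcba%' is simply REVERSE('%abcd') written out as a literal):
SELECT val FROM hits WHERE condition_revers LIKE 'dcba%';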


Best regards,
Jesper Krogh
MySQL Support



Re: DBA: please review my.cnf [for Java Hibernate application] and give suggestions

2014-02-12 Thread Antonio Fernández Pérez
Hi Lukas,

What is your default engine? In MySQL there are a lot of parameters that
configure the engine's behaviour. Depending on the engine, I would suggest
adding certain parameters or others.

Also, it's important to know the size of your data. Your configuration is
minimal, and the defaults are not optimal.

Regards,

Antonio.


DBA: please review my.cnf [for Java Hibernate application] and give suggestions

2014-02-12 Thread Lukas Lehner
Hi

We will use a Java application which uses Hibernate for DB calls. The
vendor didn't make recommendations on how to configure MySQL. The application
is not yet in production.
MySQL is new to me; I previously used Oracle DB. The vendor provided a
guide on how to configure Oracle.

This is our my.cnf:

$ cat /etc/my.cnf
[mysqld]
datadir=/opt/pprd/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
max_allowed_packet=10M
query_cache_size = 8388608
table_open_cache=256
tmp_table_size=67108864
log_bin = /opt/pprd/log/mysql-bin.log
log_bin_index   = /opt/pprd/log/mysql-bin.index
expire_logs_days= 5
max_binlog_size = 100M
binlog_format   = row
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

$ rpm -qa mysql
mysql-5.1.66-1.el6_3.x86_64

$ free -m
             total       used       free     shared    buffers     cached
Mem:         32241      31610        630          0        240      27209
-/+ buffers/cache:       4161      28080
Swap:         2047         74       1973


$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.1 (Santiago)

Did we miss an important option? What config options do you use?

The DDL is provided by the vendor and we are not allowed to change it (for
example indexes).
Personally, I'd like MySQL to behave like an out-of-the-box Oracle 11g DB.