Re: best way to copy a innodb table

2013-07-02 Thread Arjun
Well, the easy way to chunk the inserts is by use of limit. Here is what I 
used for one of my projects:

Insert ignore into t1 (f1, f2, f3)
Select f1, f2, f3 from t2 limit 100, 100

Inserts 1M records at a time starting from 1M th record in t2 and you can keep 
incrementing this offset as you progress. This will help in monitoring the 
table inserts and at the same time move chunks of records from source table.
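
For reference, a minimal sketch of that pattern in one statement (the 1,000,000
values are an assumption taken from the "1M records" description above, and the
ORDER BY is an addition to make the chunks repeatable):

-- Hypothetical chunked copy via LIMIT/OFFSET; adjust the offset each pass.
INSERT IGNORE INTO t1 (f1, f2, f3)
SELECT f1, f2, f3
FROM t2
ORDER BY f1                      -- without an explicit order, chunks may overlap
LIMIT 1000000 OFFSET 1000000;    -- next pass: OFFSET 2000000, and so on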

Enjoy! 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Get Affected Rows after Stored Procedure COMMIT

2013-07-02 Thread Neil Tompkins
Hi,

I have a number of INSERT and UPDATE statements in a MySQL Stored
Procedure, that works in the form of START TRANSACTION followed by COMMIT.
 Also I am handling any EXCEPTION.

However, after calling COMMIT, how can I get the number of rows that were
affected, either INSERTED or UPDATED?

Thanks,
Neil


Full text search and & sign as a part of the keyword

2013-07-02 Thread lamp



Hi to all,

I have this full text search query:

SELECT name, org_id, address_id
FROM organization
WHERE org_active='Y' AND MATCH(name) AGAINST('ABC*' IN BOOLEAN MODE)

and I'm not getting any results. And there IS an org ABC, Inc.
My assumption is the ampersand sign as a part of the keyword.
Any idea?



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql

Re: best way to copy a innodb table

2013-07-02 Thread Michael Dykman
Another technique to avoid impact to the source database is to create your
target as MyISAM, pump your records into that (no ACID overhead) and at the
end :

ALTER TABLE mytable ENGINE=InnoDB;

The ALTER can take a while, but it will impose no strain on the source server
at all.
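
A minimal sketch of that approach (table name is a placeholder; it assumes the
copy runs on the target server):

-- Hypothetical staging copy: load without transactional overhead, convert at the end.
CREATE TABLE mytable_copy LIKE mytable;
ALTER TABLE mytable_copy ENGINE=MyISAM;          -- no ACID overhead while loading
INSERT INTO mytable_copy SELECT * FROM mytable;  -- or pump rows in chunks, as above
ALTER TABLE mytable_copy ENGINE=InnoDB;          -- convert once the load is finished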


On Tue, Jul 2, 2013 at 3:48 AM, Arjun na...@yahoo.com wrote:

 Well, the easy way to chunk the inserts is by use of limit. Here is what I
 used for one of my projects:

 Insert ignore into t1 (f1, f2, f3)
 Select f1, f2, f3 from t2 limit 100, 100

 Inserts 1M records at a time starting from 1M th record in t2 and you can
 keep
 incrementing this offset as you progress. This will help in monitoring the
 table inserts and at the same time move chunks of records from source
 table.

 Enjoy!


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Mysql resource limits.

2013-07-02 Thread Rafał Radecki
Hi All.

I would like to limit resources available to a given user in mysql. I know
that there is https://dev.mysql.com/doc/refman/5.5/en/user-resources.html,
I also know that cgroups can be used at operating system level.
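
For reference, the per-account limits on that manual page look like this (the
account name and numbers below are only placeholders):

-- Hypothetical per-account resource limits (MySQL 5.5 GRANT syntax).
GRANT USAGE ON *.* TO 'app_user'@'localhost'
    WITH MAX_QUERIES_PER_HOUR 5000
         MAX_UPDATES_PER_HOUR 1000
         MAX_CONNECTIONS_PER_HOUR 200
         MAX_USER_CONNECTIONS 10;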

What are your experiences in limiting resources in MySQL? I've used Percona
statistics and the information provided by them. Are there any better
solutions?

Best regards,
Rafal Radecki.


Re: Full text search and & sign as a part of the keyword

2013-07-02 Thread shawn green

Hello,

(my response is not top-posted)
On 7/2/2013 12:50 PM, l...@afan.net wrote:




 Another correction: Searching for Com, the test org is NOT gonna be listed but all others will.
 Searching for Com no results at all.

 Actually, looks like I'm wrong.
 For testing purpose I made an org CompMe.
 When search for Comp it's gonna be shown on the list.
 When search for Comp it's also gonna be shown. But Construction Company as well.
 Then I changed the name of the test org to ComMe.
 Searching for Com, the test org is gonna be listed.
 Though, Com no results at all.
 ?!?

 Hi to all,
 I have this full text search query:
 SELECT name, org_id, address_id
 FROM organization
 WHERE org_active='Y' AND MATCH(name) AGAINST('ABC*' IN BOOLEAN MODE)
 and I'm not getting any results. And there IS an org ABC, Inc.
 My assumption is the ampersand sign as a part of the keyword.
 Any idea?

Read this:
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_ft_boolean_syntax

Then search on Comp Me.

Let us know your results.
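
For example (the '+Comp +Me' form below is only an assumed reading of that
suggestion, requiring both words; adjust to the actual terms):

-- Hypothetical check of the boolean operators, plus an explicit boolean-mode search.
SHOW GLOBAL VARIABLES LIKE 'ft_boolean_syntax';

SELECT name, org_id, address_id
FROM organization
WHERE org_active='Y'
  AND MATCH(name) AGAINST('+Comp +Me' IN BOOLEAN MODE);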

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Full text search and & sign as a part of the keyword

2013-07-02 Thread lamp



Another correction: Searching for Com, the test org is NOT gonna be listed but all others will.
Searching for Com no results at all.

 Actually, looks like I'm wrong.
 For testing purpose I made an org CompMe.
 When search for Comp it's gonna be shown on the list.
 When search for Comp it's also gonna be shown. But Construction Company as well.
 Then I changed the name of the test org to ComMe.
 Searching for Com, the test org is gonna be listed.
 Though, Com no results at all.
 ?!?

 Hi to all,
 I have this full text search query:
 SELECT name, org_id, address_id
 FROM organization
 WHERE org_active='Y' AND MATCH(name) AGAINST('ABC*' IN BOOLEAN MODE)
 and I'm not getting any results. And there IS an org ABC, Inc.
 My assumption is the ampersand sign as a part of the keyword.
 Any idea?

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql

Re: best way to copy a innodb table

2013-07-02 Thread Perrin Harkins
On Thu, Dec 1, 2011 at 5:54 PM, Miguel Angel Nieto miguel.ni...@percona.com
 wrote:

 You should check pt-archiver.


+1.  It works very well for this type of job.

- Perrin


RE: Mysql resource limits.

2013-07-02 Thread Rick James
cgroups won't work for individual MySQL users, only for mysqld as a whole.

Monitor the slowlog and help the naughty users fix their naughty queries.
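
A minimal sketch of turning that monitoring on at runtime (the threshold is
arbitrary; both variables can be set dynamically in 5.1 and later):

-- Hypothetical slow-log settings; tune long_query_time to taste.
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;   -- log statements slower than 1 second
SHOW GLOBAL VARIABLES LIKE 'slow_query_log_file';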

 -Original Message-
 From: Rafał Radecki [mailto:radecki.ra...@gmail.com]
 Sent: Tuesday, July 02, 2013 3:07 AM
 To: mysql@lists.mysql.com
 Subject: Mysql resource limits.
 
 Hi All.
 
 I would like to limit resources available to a given user in mysql. I know
 that there is https://dev.mysql.com/doc/refman/5.5/en/user-resources.html,
 I also know that cgroups can be used at operating system level.
 
 What are your experiences in limiting resources in MySQL? I've used
 Percona statistics and the information provided by them. Are there any
 better solutions?
 
 Best regards,
 Rafal Radecki.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Full text search and & sign as a part of the keyword

2013-07-02 Thread Rick James
FULLTEXT (at least the MyISAM version) has 3 gotchas:
ft_min_word_len=4, stopwords, and the 50% rule
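
A quick way to check the first two of those on a given server; the REPAIR is
only needed after changing ft_min_word_len, to rebuild the FULLTEXT index:

-- Hypothetical checks for the MyISAM full-text gotchas listed above.
SHOW GLOBAL VARIABLES LIKE 'ft_min_word_len';   -- words shorter than this are not indexed
SHOW GLOBAL VARIABLES LIKE 'ft_stopword_file';  -- stopword list in use
-- After changing ft_min_word_len in my.cnf and restarting:
-- REPAIR TABLE organization QUICK;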

 -Original Message-
 From: shawn green [mailto:shawn.l.gr...@oracle.com]
 Sent: Tuesday, July 02, 2013 10:21 AM
 To: mysql@lists.mysql.com
 Subject: Re: Full text search and & sign as a part of the keyword
 
 Hello,
 
 (my response is not top-posted)
 On 7/2/2013 12:50 PM, l...@afan.net wrote:
 
 
 
  Another correction: Searching for Com, the test org is NOT gonna be
  listed but all others will.
 
  Searching for Com no results at all.
 
 
 
 
 
 
 
  Actually, looks like I'm wrong.
 
  For testing purpose I made an org
 
  CompMe
 
  When search for Comp it's gonna
 
  be shown on the list.
 
 
 
  When search for Comp it's also gonna be shown.
  But
 
  Construction Company as well.
 
  Then I changed the name of
 
  the test org to ComMe.
 
 
 
  Searching for Com, the test org is gonna be listed.
 
 
 
  Though, Com no results at
 
  all.
 
  ?!?
 
 
 
  Hi to all,
  I have this full text search query
  SELECT name, org_id,
  address_id
  FROM organization
  WHERE org_active='Y' AND MATCH(name) AGAINST('ABC*' IN
  BOOLEAN
  MODE)
  and I'm not getting any results. And there IS a org ABC, Inc.
  My assumption is the ampersand sign as a part of the keyword.
  Any idea?
 
 
 Read this:
 http://dev.mysql.com/doc/refman/5.5/en/server-system-
 variables.html#sysvar_ft_boolean_syntax
 
 Then search on Comp Me.
 
 Let us know your results.
 
 --
 Shawn Green
 MySQL Principal Technical Support Engineer Oracle USA, Inc. - Hardware and
 Software, Engineered to Work Together.
 Office: Blountville, TN
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Get Affected Rows after Stored Procedure COMMIT

2013-07-02 Thread hsv
 2013/07/02 12:29 +0100, Neil Tompkins 
I have a number of INSERT and UPDATE statements in a MySQL Stored
Procedure, that works in the form of START TRANSACTION followed by COMMIT.
 Also I am handling any EXCEPTION.

However, after calling COMMIT, how can I get the number of rows that were
affected, either INSERTED or UPDATED?

Can you use the function ROW_COUNT() to any effect? If so, you probably have to
add up its results in your own code.
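
A minimal sketch of that idea inside a procedure (the procedure, table, and
column names are only placeholders):

-- Hypothetical procedure that adds up ROW_COUNT() after each data change.
DELIMITER //
CREATE PROCEDURE demo_affected()
BEGIN
  DECLARE total_rows INT DEFAULT 0;

  START TRANSACTION;
  INSERT INTO t1 (f1) VALUES (1);
  SET total_rows = total_rows + ROW_COUNT();   -- rows inserted by the previous statement
  UPDATE t1 SET f1 = 2 WHERE f1 = 1;
  SET total_rows = total_rows + ROW_COUNT();   -- rows updated by the previous statement
  COMMIT;

  SELECT total_rows AS rows_affected;          -- report the total back to the caller
END //
DELIMITER ;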


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Full text search and & sign as a part of the keyword

2013-07-02 Thread lamp



Actually, looks like I'm wrong.
For testing purpose I made an org CompMe.
When search for Comp it's gonna be shown on the list.

When search for Comp it's also gonna be shown. But Construction Company as well.
Then I changed the name of the test org to ComMe.

Searching for Com, the test org is gonna be listed.

Though, Com no results at all.
?!?

 Hi to all,
 I have this full text search query:
 SELECT name, org_id, address_id
 FROM organization
 WHERE org_active='Y' AND MATCH(name) AGAINST('ABC*' IN BOOLEAN MODE)
 and I'm not getting any results. And there IS an org ABC, Inc.
 My assumption is the ampersand sign as a part of the keyword.
 Any idea?

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql


Re: database performance worries

2013-07-02 Thread Reindl Harald


On 02.07.2013 23:28, Andy Wallace wrote:
 mysql> update agent set number_of_emails = 5 where acnt = 'AR287416';
 Query OK, 1 row affected (36.35 sec)
 Rows matched: 1  Changed: 1  Warnings: 0
 
 36 seconds to update one table? The primary key is `acnt`. If I run the same 
 (basic)
 command again a few seconds later, I get:
 
 mysql> update agent set number_of_emails = 15 where acnt = 'AR287416';
 Query OK, 1 row affected (0.00 sec)
 Rows matched: 1  Changed: 1  Warnings: 0
 
 Why would we be getting such huge variations? We're running Solaris 10 on 
 i386, with
 4 processors and 16GB of memory, MySQL 5.1.46-log. We are working out a plan 
 to upgrade
 to MySQL 5.6, but I certainly don't want to depend on that upgrade to solve 
 all performance
 problems

how large is your database?
how large is innodb_buffer?
how large is the table?

for InnoDB, the innodb_buffer should be as large as the whole databases,
not only the one, all of them; dumb but fact



signature.asc
Description: OpenPGP digital signature


RE: best way to copy a innodb table

2013-07-02 Thread Rick James
The particular example given here is unsafe and slow.

* Without an ORDER BY, you are not guaranteed that the chunks will be distinct.

* If there are any INSERTs/DELETEs between chunk copies, you will get 
dups/missing rows for two reasons:  the inserted/deleted rows, and the OFFSET 
is not quite right.

* OFFSET requires walking over the skipped rows.  As you get farther into the 
table, this takes longer.  That is, you have an ORDER(N**2) operation, not what 
could be ORDER(N).

* If replication is involved, 1M rows is a lot -- there will be noticeable 
delays where other replication activity is stalled.

If you have an AUTO_INCREMENT PRIMARY KEY, then using WHERE id > 100 AND id 
<= 200 is a better approach -- Order(N), and chunks guaranteed to be 
distinct.  Still, it is not immune from INSERTs/DELETEs.  Replication is fixed 
by decreasing chunk size (and by avoiding OFFSET).
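
A sketch of that keyed-chunk pattern, matching the bounds above (table and
column names are placeholders; a controlling script advances the range each pass):

-- Hypothetical chunked copy keyed on the AUTO_INCREMENT primary key.
INSERT IGNORE INTO t1 (id, f1, f2, f3)
SELECT id, f1, f2, f3
FROM t2
WHERE id > 100 AND id <= 200;   -- next pass: id > 200 AND id <= 300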

 -Original Message-
 From: Arjun [mailto:na...@yahoo.com]
 Sent: Tuesday, July 02, 2013 12:48 AM
 To: mysql@lists.mysql.com
 Subject: Re: best way to copy a innodb table
 
 Well, the easy way to chunk the inserts is by use of limit. Here is what I
 used for one of my projects:
 
 Insert ignore into t1 (f1, f2, f3)
 Select f1, f2, f3 from t2 limit 100, 100
 
 Inserts 1M records at a time starting from 1M th record in t2 and you can
 keep incrementing this offset as you progress. This will help in
 monitoring the table inserts and at the same time move chunks of records
 from source table.
 
 Enjoy!
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: database performance worries

2013-07-02 Thread Andy Wallace

Thanks for the response:

 how large is your database?
about 33GB, and growing

 how large is innodb_buffer?
from my.cnf:
   # You can set .._buffer_pool_size up to 50 - 80 %
   # of RAM but beware of setting memory usage too high
   innodb_buffer_pool_size=2048M
   innodb_additional_mem_pool_size = 20M

 how large is the table?
about 60MB, 95K rows


 for InnoDB, the innodb_buffer should be as large as the whole databases,
 not only the one, all of them; dumb but fact

Yeah, to do that we'll have to throw a lot more memory in the machine.

Thanks for the info.



On 7/2/13 3:50 PM, Reindl Harald wrote:



On 02.07.2013 23:28, Andy Wallace wrote:

 mysql> update agent set number_of_emails = 5 where acnt = 'AR287416';
 Query OK, 1 row affected (36.35 sec)
 Rows matched: 1  Changed: 1  Warnings: 0

36 seconds to update one table? The primary key is `acnt`. If I run the same 
(basic)
command again a few seconds later, I get:

 mysql> update agent set number_of_emails = 15 where acnt = 'AR287416';
 Query OK, 1 row affected (0.00 sec)
 Rows matched: 1  Changed: 1  Warnings: 0

Why would we be getting such huge variations? We're running Solaris 10 on i386, 
with
4 processors and 16GB of memory, MySQL 5.1.46-log. We are working out a plan to 
upgrade
to MySQL 5.6, but I certainly don't want to depend on that upgrade to solve all 
performance
problems


how large is your database?
how large is innodb_buffer?
how large is the table?

for InnoDB, the innodb_buffer should be as large as the whole databases,
not only the one, all of them; dumb but fact



--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
Sometimes it pays to stay in bed on Monday, rather than spending the rest of the 
week debugging Monday's code.
- Christopher Thompson

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: database performance worries

2013-07-02 Thread Singer Wang
O
On 2013-07-02 5:31 PM, Andy Wallace awall...@ihouseweb.com wrote:

 We are on a quest to improve the overall performance of our database. It's
 generally
 working pretty well, but we periodically get big slowdowns for no apparent
 reason. A
 prime example today - in the command line interface to the DB, I tried to
 update one
 record, and got:

 mysql> update agent set number_of_emails = 5 where acnt = 'AR287416';
 Query OK, 1 row affected (36.35 sec)
 Rows matched: 1  Changed: 1  Warnings: 0

 36 seconds to update one table? The primary key is `acnt`. If I run the
 same (basic)
 command again a few seconds later, I get:

 mysql> update agent set number_of_emails = 15 where acnt = 'AR287416';
 Query OK, 1 row affected (0.00 sec)
 Rows matched: 1  Changed: 1  Warnings: 0

 Why would we be getting such huge variations? We're running Solaris 10 on
 i386, with
 4 processors and 16GB of memory, MySQL 5.1.46-log. We are working out a
 plan to upgrade
 to MySQL 5.6, but I certainly don't want to depend on that upgrade to
 solve all performance
 problems.

 CREATE TABLE `agent`
 (
   `acnt` varchar(20) NOT NULL,
   `passwd` varchar(20) NOT NULL,
   `package` char(2) DEFAULT NULL,
   `data_template` varchar(20) DEFAULT 'NULL',
   `default_search_type` enum('1','2','3') NOT NULL DEFAULT '1',
   `status` enum('A','T','P','C','D','X','S') NOT NULL,
   `flags` varchar(10) DEFAULT NULL COMMENT 'pipe-separated flags',
   `aliases` varchar(4000) NOT NULL DEFAULT '',
   `offices` varchar(4000) NOT NULL DEFAULT '',
   `license_no` varchar(40) NOT NULL DEFAULT '',
   `agent_code` varchar(20) DEFAULT NULL,
   `office_code` varchar(20) DEFAULT NULL,
   `parent_acnt` varchar(20) DEFAULT NULL,
   `number_of_agentlinks` int(11) DEFAULT NULL,
   `number_of_emails` int(11) DEFAULT NULL,
   `fname` varchar(30) DEFAULT NULL,
   `lname` varchar(30) DEFAULT NULL,

 whole bunch of other fields

   PRIMARY KEY (`acnt`),
   KEY `parent_acnt` (`parent_acnt`),
   KEY `status` (`status`),
   KEY `email` (`email`)
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8


 --
 Andy Wallace
 iHOUSEweb, Inc.
 awall...@ihouseweb.com
 (866) 645-7700 ext 219
 --
 Sometimes it pays to stay in bed on Monday, rather than spending the rest
 of the week debugging Monday's code.
 - Christopher Thompson

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: database performance worries

2013-07-02 Thread Reindl Harald


On 03.07.2013 01:25, Andy Wallace wrote:
 Thanks for the response:
 
 how large is your database?
 about 33GB, and growing
 
 how large is innodb_buffer?
 from my.cnf:
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size=2048M
innodb_additional_mem_pool_size = 20M
 
 how large is the table?
 about 60MB, 95K rows
 
  for InnoDB, the innodb_buffer should be as large as the whole databases,
  not only the one, all of them; dumb but fact
 
 Yeah, to do that we'll have to throw a lot more memory in the machine.
 Thanks for the info

https://raw.github.com/major/MySQLTuner-perl/master/mysqltuner.pl

[!!] InnoDB data size / buffer pool: 7.8G/5.5G
Variables to adjust: innodb_buffer_pool_size (>= 7G)

2 GB of innodb_buffer_pool is a joke for a dataset of 33 GB.

That leads to constant I/O under mixed load, and the chances
are high that there are times when nothing needed for the
current operation is in the buffer_pool; under concurrent load
mysqld ends up repeatedly swapping data in and out of the pool.

At least all repeatedly accessed tables should fit
permanently in the buffer.
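
One way to see the numbers behind that recommendation (a read-only check; on
5.1/5.5 the pool can only be resized via my.cnf plus a restart):

-- Hypothetical sizing check: total InnoDB data + index size versus the configured pool.
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS innodb_data_gb
FROM information_schema.TABLES
WHERE engine = 'InnoDB';

SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';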



signature.asc
Description: OpenPGP digital signature


Re: database performance worries

2013-07-02 Thread spameden

  We are on a quest to improve the overall performance of our database.
 It's
  generally
  working pretty well, but we periodically get big slowdowns for no
 apparent
  reason. A
  prime example today - in the command line interface to the DB, I tried to
  update one
  record, and got:
 
  mysql> update agent set number_of_emails = 5 where acnt = 'AR287416';
  Query OK, 1 row affected (36.35 sec)
  Rows matched: 1  Changed: 1  Warnings: 0
 
  36 seconds to update one table? The primary key is `acnt`. If I run the
  same (basic)


Check whether some DELETE is running against that table.

If there is a DELETE involving the whole table, it might be locking up the database.

Look into mysql-slow.log.

Try to optimize your application queries with EXPLAIN.
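
A couple of quick checks along those lines (read-only; the table name is taken
from the thread):

-- Hypothetical checks for a long-running or blocking statement.
SHOW FULL PROCESSLIST;                         -- look for a long-running DELETE on `agent`
SHOW ENGINE INNODB STATUS\G                    -- the TRANSACTIONS section shows lock waits
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';  -- confirms where mysql-slow.log is written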


 [!!] InnoDB data size / buffer pool: 7.8G/5.5G
 Variables to adjust: innodb_buffer_pool_size (>= 7G)

 2 GB of innodb_buffer_pool is a joke for a dataset of 33 GB.

 That leads to constant I/O under mixed load, and the chances
 are high that there are times when nothing needed for the
 current operation is in the buffer_pool; under concurrent load
 mysqld ends up repeatedly swapping data in and out of the pool.

 At least all repeatedly accessed tables should fit
 permanently in the buffer.

It depends on the load and how much data you're accessing.

If you have 33GB in total but are only using a few of the same tables, with a
total size of less than 2GB, at any one time, it would work just fine.

For example, I have 136GB of data but my buffer is only about 10GB, and
most of the queries work just fine (I'm using it for mostly read-only
things).

But of course you need to check your system usage: if mysqld swaps, that's a bad
thing, and most likely you need to either upgrade your hardware or reconsider
your data architecture (i.e. use LIMIT for queries, add more
indexes, split large tables into smaller ones which you really update, or
store large data in MongoDB, etc.).



  command again a few seconds later, I get:
 
  mysql> update agent set number_of_emails = 15 where acnt =
 'AR287416';
  Query OK, 1 row affected (0.00 sec)
  Rows matched: 1  Changed: 1  Warnings: 0
 
  Why would we be getting such huge variations? We're running Solaris 10 on
  i386, with
  4 processors and 16GB of memory, MySQL 5.1.46-log. We are working out a
  plan to upgrade
  to MySQL 5.6, but I certainly don't want to depend on that upgrade to
  solve all performance
  problems.
 
  CREATE TABLE `agent`
  (
`acnt` varchar(20) NOT NULL,
`passwd` varchar(20) NOT NULL,
`package` char(2) DEFAULT NULL,
`data_template` varchar(20) DEFAULT 'NULL',
`default_search_type` enum('1','2','3') NOT NULL DEFAULT '1',
     `status` enum('A','T','P','C','D','X','S') NOT NULL,
`flags` varchar(10) DEFAULT NULL COMMENT 'pipe-separated flags',
`aliases` varchar(4000) NOT NULL DEFAULT '',
`offices` varchar(4000) NOT NULL DEFAULT '',
`license_no` varchar(40) NOT NULL DEFAULT '',
`agent_code` varchar(20) DEFAULT NULL,
`office_code` varchar(20) DEFAULT NULL,
`parent_acnt` varchar(20) DEFAULT NULL,
`number_of_agentlinks` int(11) DEFAULT NULL,
`number_of_emails` int(11) DEFAULT NULL,
`fname` varchar(30) DEFAULT NULL,
`lname` varchar(30) DEFAULT NULL,
 
  whole bunch of other fields
 
PRIMARY KEY (`acnt`),
KEY `parent_acnt` (`parent_acnt`),
KEY `status` (`status`),
KEY `email` (`email`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8
 
 
  --
  Andy Wallace
  iHOUSEweb, Inc.
  awall...@ihouseweb.com
  (866) 645-7700 ext 219
  --
  Sometimes it pays to stay in bed on Monday, rather than spending the
 rest
  of the week debugging Monday's code.
  - Christopher Thompson
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql