innodb buffer pool allocation question

2011-02-22 Thread Kyong Kim
Does the InnoDB buffer pool cache indexes and data in subsets or in their
entirety? I've heard people mention that buffer pool allocation depends on
the size of your tables and indexes.
Kyong

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Changing database tables to different storage engine.

2010-11-22 Thread Kyong Kim
Another thing to keep in mind is to make sure all your foreign keys
are re-created if you have any. We had a similar accident on our
prod box a few years back; converting MyISAM to InnoDB won't
necessarily re-create the foreign keys.
Kyong
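As a sketch of the manual re-creation (table, column, and constraint names here are hypothetical):

```sql
-- The engine change itself does not bring back foreign keys:
ALTER TABLE child_table ENGINE=InnoDB;

-- Re-create each foreign key by hand:
ALTER TABLE child_table
  ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent_table (id);

-- Check what actually exists afterwards:
SELECT constraint_name, table_name, referenced_table_name
FROM information_schema.referential_constraints
WHERE constraint_schema = DATABASE();
```

(information_schema.referential_constraints is available from MySQL 5.1; on 5.0, inspecting SHOW CREATE TABLE output is the fallback.)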

On Mon, Nov 22, 2010 at 6:39 AM, Johan De Meersman vegiv...@tuxera.be wrote:
 Another option, if your data hasn't changed in the meantime (I know, rare
 scenario), could be to set up a secondary instance from the same binaries,
 changing only the datafile location and the port in the config,
 re-importing, shutting both instances down, and switching out the datafiles.

 You'll get some performance impact from the import's disk activity, but the
 switch should take almost no time - if the datafiles are on the same
 physical disk, of course.


 On Mon, Nov 22, 2010 at 3:24 PM, Tyler Poland tpol...@engineyard.com wrote:

 Machiel,

 Each table will be write-locked while it is being altered, so this will most
 likely impact the application.  In addition to the write lock, the
 conversion rewrites each table completely in the new format, so it will
 have a high impact on I/O write activity and overall I/O throughput.
 If your application is mostly reads, is well cached
 in memory, and the tables are small this should be pretty fast and
 relatively pain free.  If you aren't sure about the impact and conversion
 time you may want to restore a backup of the database to another location
 and run through the conversion while monitoring performance numbers.

 Tyler



 On 11/22/10 5:55 AM, Machiel Richards wrote:

 Thank you John

       I have in the meantime found this to be the case (** someone
 changed config files without my knowledge, it seems, as this was set up
 properly and working **)

         Anyhow, in order for InnoDB to be active again I need to
 restart the database; however, afterwards I assume the tables will still
 be MyISAM.

         In this event I will need to manually alter each table, and I am
 concerned about the impact of this on the system performance.

 Regards
 Machiel


 -----Original Message-----
 From: John Daisley daisleyj...@googlemail.com
 To: Machiel Richards machiel.richa...@gmail.com
 Cc: mysql mailing list mysql@lists.mysql.com
 Subject: Re: Changing database tables to different storage engine.
 Date: Mon, 22 Nov 2010 10:51:23 +

 I have frequently seen InnoDB 'silently' disabled if the
 innodb_log_file_size is different from the file size on disk (quite
 common when moving systems about). You won't be able to use InnoDB until
 you resolve this, either by deleting the log files and restarting
 mysqld so they get recreated, or by changing innodb_log_file_size to
 match the size of the files on disk.

 If the Innodb engine is not available then MySQL will use the default
 (usually MyISAM) storage engine even if Innodb was specified. You can
 stop this behaviour by setting sql-mode=NO_ENGINE_SUBSTITUTION
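A hedged sketch of both fixes described above (paths and sizes are illustrative; take a backup before touching the log files, and shut down cleanly so no redo data is lost):

```shell
# Fix 1: remove the old log files and let mysqld recreate them
# at the size configured in my.cnf.
mysqladmin shutdown
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
mysqld_safe &

# Fix 2 (alternative): make the config match the files on disk.
ls -l /var/lib/mysql/ib_logfile*   # note the size, e.g. 5242880 bytes
# ...then set innodb_log_file_size=5M in my.cnf and restart.

# And to fail loudly instead of silently falling back to MyISAM,
# add sql-mode=NO_ENGINE_SUBSTITUTION under [mysqld] in my.cnf.
```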

 Regards

 John


 On 22 November 2010 10:12, Machiel Richards machiel.richa...@gmail.com
 wrote:

         Hi All

            Sorry but things have now changed, and I found the following.


                The tables were in fact restored as InnoDB; however,
         someone seems to have gone and changed something causing InnoDB
         to be disabled, which caused the tables to default back to MyISAM.

                Should this not rather have just resulted in an error,
         allowing us to fix the problem in the first place, instead of
         changing the storage engines?

                Anyone have some thoughts on the best solution to fix
         this? I will look into InnoDB not working soon.

         Machiel




         -----Original Message-----
         From: Machiel Richards machi...@rdc.co.za
         To: mysql mailing list mysql@lists.mysql.com
         Subject: Changing database tables to different storage engine.
         Date: Mon, 22 Nov 2010 11:59:03 +0200


         Good day all

                    Hope all is well.

                     I have something to ask as someone might have done
         this as
         well and may have a good solution on how to fix this.

                    During a database migration this weekend to move a
         MySQL
         database from windows to linux, we created a backup and restore
         of the
         database.

                    However, for my part I made a mistake by overlooking
         the fact that the Windows database was configured to use InnoDB
         as the default storage engine.

                    On the new server, the default was set to MyISAM.

                       This resulted in all the tables being restored to
         the new system as MyISAM instead of InnoDB.

                        In order to fix this, I know you can use alter
         table to
         change the storage engine, however I need to know the following:

                   

innodb_autoinc_lock_mode and replication mode

2010-09-17 Thread Kyong Kim
I couldn't find much information on innodb_autoinc_lock_mode and
its implications for mixed-mode replication. Does the same caution about
innodb_autoinc_lock_mode=2 and statement-based replication apply to
mixed-mode replication?

Kyong




Fast Index Creation and fill factor

2010-08-30 Thread Kyong Kim
I've been going through the 5.1 manual and exploring the new features.

To add a secondary index to an existing table, InnoDB scans the table,
and sorts the rows using memory buffers and temporary files in order
by the value(s) of the secondary index key column(s). The B-tree is
then built in key-value order, which is more efficient than inserting
rows into an index in random order with respect to the key values.
Because the B-tree nodes are split when they fill, building the index
in this way results in a higher fill-factor for the index, making it
more efficient for subsequent access.

I've been running some tests on SELECT using indexes built via plugin
and builtin and the results are similar (though I think the test cases
are flawed due to low cardinality of the column being indexed and
queried).
My question is...wouldn't the eventual fill factors be similar whether
the pages are split during unordered B-tree build or ordered build?
I'm having a tough time visualizing this scenario. Any insight into
this will be much appreciated.

Kyong




Re: MMM Mysql

2010-07-23 Thread Kyong Kim
Are there any known issues or challenges implementing MMM?
We're currently focused on MMM but just kinda wanted to keep our eyes open.
Kyong

On Thu, Jul 22, 2010 at 11:19 PM, Rob Wultsch wult...@gmail.com wrote:
 On Thu, Jul 22, 2010 at 8:42 PM, Kyong Kim kykim...@gmail.com wrote:

 Has anyone used this in production?
 We're looking at this as part of our sharding/scale strategy and
 wanted some insight into real world experience.
 Are there alternatives out there?
 Kyong


 Lots of people are using MMM.

 Alternatives include Linux-HA (aka heartbeat) often combined with DRBD and
 MySQL cluster.

 For the general case MMM is probably the best option.


 --
 Rob Wultsch
 wult...@gmail.com





MMM Mysql

2010-07-22 Thread Kyong Kim
Has anyone used this in production?
We're looking at this as part of our sharding/scale strategy and
wanted some insight into real world experience.
Are there alternatives out there?
Kyong




Re: Two Primary Keys

2010-06-29 Thread Kyong Kim
This isn't true for InnoDB.
I think the only requirement is that you need a unique index
on the auto-increment column.
We created a composite primary key + auto_increment to take advantage
of clustering by primary key while satisfying the unique constraint for
the primary key. It worked out well for us except for the sheer size
of the indexes.
Kyong

2010/6/29 João Cândido de Souza Neto j...@consultorweb.cnt.br:
 As far as I know, if you have an auto_increment primary key, you can't have
 any other field in the primary key.


 João Cândido.

 Victor Subervi victorsube...@gmail.com escreveu na mensagem
 news:aanlktikzksmbx5hue0x_q3hx_68gicndghpkjdrna...@mail.gmail.com...
 Hi;
 I have the following:

 create table pics (
   picture_id int auto_increment primary key,
   product_sku int not null primary key,
   picture_num int not null,
   picture_desc varchar(100),
   picture_data longblob
 );

 which doesn't work. I need auto_increment and a primary key declared on two
 fields. How do I do that?
 TIA,
 Victor




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=kykim...@gmail.com






Re: Questions regarding Query cache usage

2010-06-09 Thread Kyong Kim
On Tue, Jun 8, 2010 at 10:57 PM, Machiel Richards machi...@rdc.co.za wrote:
 Good morning all



        I would like to try and find out how you can see what is using the
 query cache.



                My reason for asking is the following:



                On one of our client databases, the query cache is set to
 128Mb and the usage always varied between 5% and 53% and basically never
 went above that.



                However, this morning I noticed that the query cache usage
 is at 99.98% which is very odd for the database.

How are you determining the cache usage? I don't think 99.98%
utilization is a bad thing. It would be preferable to wasting memory
on a cache that is under-utilized.

                Does anybody have an idea on how to determine why this usage
 is suddenly this high and if we should look at increasing the query cache
 size or not?

Has a new workload been introduced to the server? The cache
utilization may be indicative of a lot of small repeated queries being
introduced. You can monitor the Qcache_lowmem_prunes and
Qcache_free_blocks to determine if you can benefit from increased
query cache size.
http://www.mysqlperformanceblog.com/2006/07/27/mysql-query-cache/
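The check itself is a one-liner (sketch):

```sql
SHOW GLOBAL STATUS LIKE 'Qcache%';
-- Qcache_lowmem_prunes: queries evicted because the cache was full;
--   a steadily climbing value suggests the cache is too small.
-- Qcache_free_blocks: many small free blocks indicate fragmentation;
--   FLUSH QUERY CACHE defragments the cache without emptying it.
```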

                I also have a second question relating to a previous post I
 sent through but never really received a definitive answer.



                The client database is setup with a master slave
 replication, the master Innodb buffer pool usage is at 4Gb at present (no
 more system memory available to increase this)

                We are starting to receive errors on the slave server
 however relating to the innodb buffer pool size being used up and there is
 no place to add more locks.



                This was found to be related to the slave server's innodb
 buffer pool size that is currently still set to 8mb. I would like to know
 whether it will be worth changing the value on the slave server to match
 that of the master server or will this cause more problems?

If the memory is available, why not use it? It seems like the default
buffer pool size out of the box was just never changed.

Kyong




Re: Questions regarding Query cache usage

2010-06-09 Thread Kyong Kim
Absolutely. You don't want to obscure the cause by just throwing more
hardware at things.
That approach just buys you time until a bigger pile hits the fan if
the underlying issue remains unresolved.
At the same time, though, 8 MB production innodb buffer pool
allocation should be fairly high on the list of things to scrutinize.
Kyong

On Wed, Jun 9, 2010 at 12:12 PM, Johan De Meersman vegiv...@tuxera.be wrote:
 On Wed, Jun 9, 2010 at 8:04 PM, Kyong Kim kykim...@gmail.com wrote:
 If the memory is available, why not use it? It seems like the default
 buffer pool size out of the box was just never changed.

 Agreed, of course, but if something happens on a system that is out of
 the ordinary, it's very good practice to hunt the cause down before it
 makes more undesirable things happen.


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel





innodb_support_xa setting performance impact

2010-06-05 Thread Kyong Kim
I can see how having innodb_support_xa set to 1 can hurt write
performance due to the additional flushes to disk.
Can it impact read performance as well?
Kyong




Re: InnoDB - 16GB Data

2010-04-13 Thread Kyong Kim
Also, if you have read heavy workload, you might want to try using and
tuning your query cache.
Start off with something like 32M and incrementally tune it.
You can monitor some query cache related server variables.
Kyong
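A minimal sketch of that incremental approach (the sizes are just a starting point):

```sql
-- Start modest and grow only if the counters justify it.
SET GLOBAL query_cache_size = 32 * 1024 * 1024;  -- 32M

-- Re-check periodically; a low hits-to-inserts ratio or climbing
-- lowmem_prunes means the cache is not earning its memory.
SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Qcache_inserts';
SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';
```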

On Sat, Apr 10, 2010 at 4:28 PM, Rob Wultsch wult...@gmail.com wrote:
 On Sat, Apr 10, 2010 at 12:10 AM, Junior Ortis jror...@gmail.com wrote:
 Hi Guys i have a dedicated mysql-server and neeed tips and sugestion
 to optimize its to a better performance.

 1-) Here i have results from mysqltunner

  MySQLTuner 1.0.1 - Major Hayden ma...@mhtx.net
    Bug reports, feature requests, and downloads at http://mysqltuner.com/
    Run with '--help' for additional options and output filtering
 Please enter your MySQL administrative login: toscaoSo
 Please enter your MySQL administrative password:

  General Statistics 
 --
 [--] Skipped version check for MySQLTuner script
 [OK] Currently running supported MySQL version 5.4.3-beta-community
 [OK] Operating on 64-bit architecture

  Storage Engine Statistics 
 ---
 [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
 [--] Data in MyISAM tables: 458M (Tables: 349)
 [--] Data in InnoDB tables: 15G (Tables: 73)
 [!!] Total fragmented tables: 47

  Performance Metrics 
 -
 [--] Up for: 29d 12h 8m 1s (334M q [131.330 qps], 153K conn, TX: 194B, RX: 
 77B)
 [--] Reads / Writes: 31% / 69%
 [--] Total buffers: 15.5G global + 16.2M per thread (50 max threads)
 [OK] Maximum possible memory usage: 16.3G (69% of installed RAM)
 [OK] Slow queries: 0% (386/334M)
 [OK] Highest usage of available connections: 46% (23/50)
 [OK] Key buffer size / total MyISAM indexes: 300.0M/87.3M
 [OK] Key buffer hit rate: 100.0% (78M cached / 22K reads)
 [!!] Query cache is disabled
 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 4M sorts)
 [OK] Temporary tables created on disk: 0% (548 on disk / 1M total)
 [OK] Thread cache hit rate: 99% (23 created / 153K connections)
 [OK] Table cache hit rate: 44% (467 open / 1K opened)
 [OK] Open file limit used: 1% (684/65K)
 [OK] Table locks acquired immediately: 99% (320M immediate / 320M locks)
 [!!] InnoDB data size / buffer pool: 15.5G/15.0G

  Recommendations 
 -
 General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    Enable the slow query log to troubleshoot bad queries
 Variables to adjust:
    query_cache_size (= 8M)
    innodb_buffer_pool_size (= 15G)



 2-) And here is my dedicate server i have (24GB ):


  1  [
         0.0%]     Tasks: 71 total, 2 running
  2  [|||
         7.8%]     Load average: 0.11 0.18 0.19
  3  [|
         0.7%]     Uptime: 62 days, 19:24:09
  4  [|
         0.7%]
  Mem[|16878/24165MB]
  Swp[|
     0/5122MB]


 3-) And my.cnf

 vim .my.cnf
 [client]
 #password       = [your_password]
 port            = 3306
 socket          = /tmp/mysql.sock

 # *** Application-specific options follow here ***

 #
 # The MySQL server
 #
 [mysqld]
 #large-pages

 # generic configuration options
 port            = 3306
 socket          = /tmp/mysql.sock
 skip-locking
 skip-external-locking
 datadir = /disk3/Datareal/oficial/mysql
 net_buffer_length       = 1024K
 join_buffer_size        = 4M
 sort_buffer_size        = 4M
 read_buffer_size        = 4M
 read_rnd_buffer_size    = 4M
 table_cache             = 1000
 max_allowed_packet      = 160M

 max_connections=50
 max_user_connections=200

 key_buffer              = 300M
 key_buffer_size         = 300M
 #thread_cache           = 400
 thread_stack            = 192K
 thread_cache_size       = 96
 thread_concurrency      = 8
 #thread_stack           = 128K

 default-character-set   = utf8
 innodb_flush_method=O_DSYNC
 innodb_buffer_pool_size= 15G
 innodb_additional_mem_pool_size=128M
 innodb_log_file_size= 256M
 innodb_log_buffer_size=72M
 innodb_flush_log_at_trx_commit=0
 innodb_thread_concurrency=8
 innodb_file_per_table=1
 innodb_log_files_in_group=2
 innodb_table_locks=0
 innodb_lock_wait_timeout = 50

 ~/.my.cnf 72L, 1570C



 Thanks guys for any tips/suggestion !


 First, most performance comes from optimized table structures, index,
 and queries. Server tuning will not get you you all that much
 additions performance, if you have a semi sane configuration.  What is
 your current bottleneck or performance problem?

 Anyways... here are some reactions:

 innodb_flush_log_at_trx_commit=0 ... THIS MEANS YOU CAN LOSE COMMITTED
 TRANSACTIONS. Read up on this.

 innodb_flush_method=O_DSYNC
 Any particular reason you aren't using O_DIRECT ? Read up on this.

 Why do you not have skip-name-resolve on? Read up on this.

 innodb_thread_concurrency... As you are running 5.4 you can probably
 set this to 0. Assuming you have 4 cores or 

Re: Recommended swap partition size

2010-04-13 Thread Kyong Kim
Yeah. One of the telltale signs of something amiss is excessive swap activity.
You're not going to be happy with the performance when the swap space
is actually in use heavily.
Kyong

On Tue, Apr 13, 2010 at 8:15 PM, Dan Nelson dnel...@allantgroup.com wrote:
 In the last episode (Apr 13), Joe Hammerman said:
 My organization has a dedicated MySQL server. The system has 32Gb of
 memory, and is running CentOS 5.3.  The default engine will be InnoDB.
 Does anyone know how much space should be dedicated to swap?

 I say zero swap, or if for some reason you NEED swap (for crashdumps maybe,
 but I didn't think Linux supported that), no more than 2GB.  With that much
 RAM, you don't ever want to be in the state where the OS decides to page out
 8GB of memory (for example) to swap.  We have a few Oracle servers with
 between 32 and 48 GB of memory and they all live just fine without swap.
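On Linux, a related knob worth setting on a dedicated database host (a sketch; the sysctl is standard, but the right value is workload-dependent):

```shell
# Discourage the kernel from paging out the buffer pool even
# when some swap is configured:
sysctl -w vm.swappiness=0
echo 'vm.swappiness = 0' >> /etc/sysctl.conf   # persist across reboots
```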

 --
        Dan Nelson
        dnel...@allantgroup.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=kykim...@gmail.com






Re: MyISAM better than innodb for large files?

2010-04-08 Thread Kyong Kim
We've seen good results throwing more RAM at the buffer pool.
It is true that InnoDB data never gets accessed directly on disk.
The only downside I know of with a larger buffer pool is slower restarts.

The load speed depends on the order of the inserts.
Random inserts or updates to the primary key will result in very
poor performance.
I once ran a test doing completely random inserts to InnoDB with a very
small buffer pool on my VM dev machine, and it took days to load a
million rows before finally failing. Keep in mind that there may have
been other factors at work as well (we had a rather unusual indexing
strategy which worked for our use case). If you can pre-sort your load
file by primary key order, your load speed should be much better.
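A sketch of the pre-sorted load described above (the file name and columns are hypothetical):

```sql
-- Pre-sort the input by primary key outside MySQL, e.g.:
--   sort -t, -k1,1n -k2,2n unsorted.csv > /tmp/sorted.csv
-- Sequential key order appends to the rightmost B-tree page
-- instead of dirtying random pages all over the buffer pool.
LOAD DATA INFILE '/tmp/sorted.csv'
INTO TABLE some_table
FIELDS TERMINATED BY ','
(parent_id, seq_id, payload);
```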

In terms of loading data, I doubt you will see better performance with
InnoDB than MyISAM. Our selection was heavily biased towards data
access. I have heard that InnoDB insert buffer scales much more
linearly than MyISAM but I don't know the details. We clustered our
data using a longer composite primary key and saw fairly good data
access performance.

I would caution against InnoDB if you foresee heavy random inserts.
Kyong


On Thu, Apr 8, 2010 at 8:21 AM, mos mo...@fastmail.fm wrote:
 At 09:10 PM 4/7/2010, you wrote:

  It also depends on your data access pattern.
  If you can take advantage of clustering by primary key for your
  selects, then InnoDB could do it for you.
 My suggestion would be to write some queries based on projected
 workload, build 2 tables with lots and lots of data, and do some
 isolated testing. For work, I do a lot of query profiling using
 maatkit. Be sure to clear out as much of the caching as possible
 including the OS cache.

 In a related topic, does anyone know how well InnoDb is going to perform if
 you have a 250 million row table (100gb) and only 8gb of RAM? It was my
 understanding that InnoDb needed to fit as much of the table into memory as
 it could for it to be fast. Also, how long is it going to take to load 250
 million rows (using Load Data InFile) compared to a MyISAM table? I've
 always found InnoDb to be incredibly slow at loading large amounts of data
 and nothing I could think of would speed things up.  I too would like to
 switch to InnoDb but until I can solve these problem I'm sticking with
 MyISAM for large tables.

 Mike


 On Mon, Apr 5, 2010 at 7:25 AM, Jan Steinman j...@bytesmiths.com wrote:
  From: Gavin Towey gto...@ffn.com
 
  InnoDB should be your default for all tables, unless you have specific
  requirements that need myisam.  One specific example of an appropriate
  task
  for myisam is where you need very high insert throughput, and you're
  not
  doing any updates/deletes concurrently.
 
  A couple other things: InnoDB does relations better, MyISAM does search
  of
  text fields.
 
 
  
  If we can control fuel we can control the masses; if we can control food
  we
  can control individuals. -- Henry Kissinger
   Jan Steinman, EcoReality Co-op 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:    http://lists.mysql.com/mysql?unsub=kykim...@gmail.com
 
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=mo...@fastmail.fm


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=kykim...@gmail.com






Re: MyISAM better than innodb for large files?

2010-04-07 Thread Kyong Kim
It also depends on your data access pattern.
If you can take advantage of clustering by primary key for your
selects, then InnoDB could do it for you.
My suggestion would be to write some queries based on projected
workload, build 2 tables with lots and lots of data, and do some
isolated testing. For work, I do a lot of query profiling using
maatkit. Be sure to clear out as much of the caching as possible
including the OS cache.

On Mon, Apr 5, 2010 at 7:25 AM, Jan Steinman j...@bytesmiths.com wrote:
 From: Gavin Towey gto...@ffn.com

 InnoDB should be your default for all tables, unless you have specific
 requirements that need myisam.  One specific example of an appropriate task
 for myisam is where you need very high insert throughput, and you're not
 doing any updates/deletes concurrently.

 A couple other things: InnoDB does relations better, MyISAM does search of
 text fields.


 
 If we can control fuel we can control the masses; if we can control food we
 can control individuals. -- Henry Kissinger
  Jan Steinman, EcoReality Co-op 


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=kykim...@gmail.com






Re: MyISAM better than innodb for large files?

2010-04-05 Thread Kyong Kim
It also depends on your data access pattern.
If you can take advantage of clustering by primary key for your
selects, then InnoDB could do it for you.
My suggestion would be to write some queries based on projected
workload, build 2 tables with lots and lots of data, and do some
isolated testing. For work, I do a lot of query profiling using
maatkit. Be sure to clear out as much of the caching as possible
including the OS cache.
BTW, I've never had much luck storing large docs in MySQL. If you can
compromise on data integrity, consider filesystem storage.
Kyong

On Fri, Apr 2, 2010 at 5:50 PM, Mitchell Maltenfort mmal...@gmail.com wrote:
 You want the crash safety and data integrity that comes with InnoDB.  Even
 more so as your dataset grows.  Its performance is far better than MyISAM
 tables for most OLTP users, and as your number of concurrent readers and
 writers grows, the improvement in performance from using InnoDB over
 MyISAM becomes more pronounced.

 His scenario is perhaps updated once a year, though, so crash recovery and
 multiple writer performance is not important.

 And the concurrent reader and writer number is set at one, unless I
 undergo mitosis or something.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=kykim...@gmail.com






Re: SELECT and INSERT if no row is returned

2010-03-24 Thread Kyong Kim
I needed to give greater detail.

parent_id isn't unique. The table has a composite primary key (parent_id,
seq_id).
Here's a better schema def

CREATE TABLE sometable (
parent_id INT(10) NOT NULL,
seq_id INT(10) AUTO_INCREMENT,
child_id INT(10) NULL,
PRIMARY KEY(parent_id, seq_id),
UNIQUE KEY(child_id)
) ENGINE=INNODB;

The requirement is that there can be only one parent_id associated with a
given child_id, or only one parent_id not associated with any
child_id (NULL child_id). I need to avoid a race condition where two
connections both SELECT, get an empty result, and insert rows with the same
parent_id not associated with a child_id. It's that .1% of the cases we
want to avoid.

Kyong
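One hedged way to close that window is to serialize the check-and-insert per parent_id with a locking read (a sketch using the schema above; parent_id 42 is just an example value):

```sql
START TRANSACTION;
-- Under InnoDB's default REPEATABLE READ, a locking read takes
-- next-key (gap) locks even when no row matches, so a second
-- connection running the same read blocks here instead of also
-- seeing an empty result and inserting a duplicate.
SELECT seq_id, child_id FROM sometable
WHERE parent_id = 42 FOR UPDATE;

-- Application logic: only if no conflicting row came back...
INSERT INTO sometable (parent_id, child_id) VALUES (42, NULL);
COMMIT;
```

Gap locks under concurrent inserts can deadlock, so the application should be prepared to retry; an advisory lock via GET_LOCK() on a per-parent key is a simpler, if coarser, alternative.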

On Wed, Mar 24, 2010 at 6:26 AM, Rodrigo Ferreira
 rodrigof_si...@yahoo.com wrote:

  First, if you want no duplicate parent_id, make it a unique key (as JW
  said). Look at INSERT ... ON DUPLICATE KEY UPDATE; I think this will solve
  the problem with one statement.

 Rodrigo Ferreira

 --- On *Wed, 3/24/10, Johnny Withers joh...@pixelated.net* wrote:


 From: Johnny Withers joh...@pixelated.net
 Subject: Re: SELECT and INSERT if no row is returned
 To: Kyong Kim kykim...@gmail.com
 Cc: mysql mysql@lists.mysql.com
 Date: Wednesday, March 24, 2010, 9:32 AM

 Make parent_id a unique key. Doing a select first and inserting if no
 result will work 99.9% of the time; however, consider 2 rows being
 inserted at the same time.

 JW

 On Tuesday, March 23, 2010, Kyong Kim kykim...@gmail.com wrote:
  I need to INSERT a row into a table only if it does not exist, to
  ensure that there won't be duplicate rows.
  Due to the usage case and the design of the primary key
  (non-unique id + auto increment), I don't think INSERT IGNORE is an
  option.
 
  What would be simplest and cheapest way to make sure that given the
  following schema
 
  create table some_table
  ( parent_id int,  -- non-unique
    seq_id int auto_increment ) ENGINE=INNODB
 
  that no row with the same parent_id can be inserted?
 
  Kyong
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/mysql?unsub=joh...@pixelated.net
 
 

 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=rodrigof_si...@yahoo.com





SELECT and INSERT if no row is returned

2010-03-23 Thread Kyong Kim
I need to INSERT a row into a table only if it does not exist, to
ensure that there won't be duplicate rows.
Due to the usage case and the design of the primary key
(non-unique id + auto increment), I don't think INSERT IGNORE is an
option.

What would be simplest and cheapest way to make sure that given the
following schema

create table some_table
( parent_id int,  -- non-unique
  seq_id int auto_increment ) ENGINE=INNODB

that no row with the same parent_id can be inserted?

Kyong




Re: logging of BAD queries

2010-02-09 Thread Kyong Kim
I'm not positive whether the general log captures all invalid queries,
but it does capture at least some.
I was asked the same question a few months back and checked to make
sure that manually issued invalid queries are logged (IIRC).
Could it be that the queries are never even making it to the database?
Kyong

On Tue, Feb 9, 2010 at 2:05 PM, andy knasinski a...@nrgsoft.com wrote:
 Unfortunately, I'm using a commercial application and trying to debug as to
 why some data does and does not get updated properly.

 On Feb 9, 2010, at 2:57 PM, mos wrote:


 I do something like that in my compiled application. All SQL queries are
 sent to a single procedures and executed there. I trap any errors and log
 the SQL in a table along with the error message. This is useful to determine
 if someone is trying to break into the database (sql injection). Having a
 central procedure to execute all queries is paramount in controlling and
 capturing errors. I can also unplug and plug in a different database engine
 quite easily rather than hunting down all direct calls to the database. I
 also don't have to worry about trapping errors throughout the application.
 It's all done at one central point.

 I've been doing it this way for 5 years and would never start a large
 application without it.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=kykim...@gmail.com






Re: optimization

2010-01-26 Thread Kyong Kim
On Tue, Jan 26, 2010 at 11:23 AM, Keith Murphy bmur...@paragon-cs.com wrote:
 You absolutely *should not* convert the mysql database to InnoDB.

 Read the above sentence again :)

 All others, unless you had a specific reason not to do so, yes, I would
 convert them.

 keith

 On Tue, Jan 26, 2010 at 2:18 PM, John G. Heim jh...@math.wisc.edu wrote:

 Just to be clear, you're suggesting I convert all of the spamassassin,
 drupal, and mediawiki tables to innodb too? Or just my own database? What
 about the mysql database itself? I wouldn't convert those tables, would I?

 - Original Message - From: Keith Murphy bmur...@paragon-cs.com
 To: mysql@lists.mysql.com
 Sent: Tuesday, January 26, 2010 11:06 AM
 Subject: Re: optimization



 ♫
 I would recommend the same to you about reading High Perf. MySQL as Baron,
 et al wrote a great book about performance on MySQL. That being said, it
 has
 been my experience that in 99% of client cases they don't really need to
 run
 two different types of tables. If I were you, I would use InnoDB
 exclusively
 unless there is legitimate reason to do otherwise. In an environment that
 is
 running 25% writes and a decent query rate you are bound to have contention
 issues with MyISAM. While there are always going to be edge cases for
 MyISAM, your default should be innodb and your config should reflect this.

 Changing your tables to InnoDB is a simple ALTER TABLE which you can script
 if there are a number of tables to convert.  Allocate as much of your
 available RAM as possible to the innodb_buffer_pool (typically 50 - 80% of
 total RAM) and I bet you would see a dramatic difference. That is
 simplifying things somewhat, but should give an idea.
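Scripting the conversion can be as simple as generating the ALTER statements from a table list. The table names below are placeholders; in practice you would pull the list from information_schema.TABLES WHERE ENGINE = 'MyISAM':

```python
# Sketch: emit one ALTER TABLE ... ENGINE=InnoDB statement per table.
tables = ["drupal_node", "mediawiki_page", "dept_inventory"]  # hypothetical

statements = ["ALTER TABLE `%s` ENGINE=InnoDB;" % t for t in tables]
for stmt in statements:
    print(stmt)
```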


 keith


 On Tue, Jan 26, 2010 at 11:53 AM, mos mo...@fastmail.fm wrote:

  Get yourself a copy of the book High Performance MySQL 2nd Edition
 Publisher: O'Reilly Media; 2 edition (June 18, 2008) Language: English
 ISBN-10: 0596101716 ISBN-13: 978-0596101718

 Here is a brief preview of the first edition:

 http://books.google.ca/books?id=iaCCQ13_zMIC&printsec=frontcover&dq=high+performance+mysql&cd=1#v=onepage&q=&f=false

 Mike

 At 10:19 AM 1/26/2010, John G. Heim wrote:

  From: Jaime Crespo Rincón jcre...@warp.es

 Sent: Monday, January 25, 2010 5:30 PM


 2010/1/25 John G. Heim jh...@math.wisc.edu:

  I have a server with 16Gb of RAM and a dual-core 2Ghz processor. It is
 running the latest mysql-server from debian lenny (5.0.1). I have databases
 for drupal, moodle, spamassassin, horde3, and a small database for
 departmental stuff.

 The problem is that inserts/updates are sometimes very slow, on the order
 of a minute. I am hoping somebody can spot something wrong in my config.
 Here's the optimization settings section (for your convenience). The whole
 my.cnf is reproduced below that:


 Are your databases using MyISAM or InnoDB?

 Both. Maybe that's the problem? I started creating database tables for my
 own web apps with the default mysql configuration. I believe the default
 database engine is MyISAM. But then I wanted to use foreign keys and I saw
 that it required me to use InnoDB. So I converted some tables to InnoDB but
 not all. Maybe it was a mistake not to convert all of them.

 After that, I installed drupal, moodle, and mediawiki. I haven't looked at
 what kind of tables those packages create. They may not specify it and the
 default is still whatever it is when you install mysql, MyISAM I think.

  * If MyISAM, you could be suffering contention problems on writes


  because of full table locks. No easy solution but engine change or
 database sharding. Also key_buffer (and the other buffers) could be
 too small for 16GB of RAM. Are you really using more than 10% of it?
 You could also disable other engines if unused.
 * If InnoDB, you have not set innodb_buffer_pool_size nor log size.
 You could increase the pool to 50% of ram available.

 Those are very general suggestions. It depends a lot on your hardware
 (slow storage?), other apps installed on the same machine or the load
 of the server, among others.


 Well, it could be that the disks aren't real fast. The server is also
 running a print server (lprng). I don't think that's very CPU intensive but
 it might be slowing down writes.








 --
 Chief Training Officer
 Paragon Consulting Services
 850-637-3877









Re: Removing 1st character of string for all entries in field

2009-11-05 Thread Kyong Kim
I think you can use update replace.
UPDATE table SET column=REPLACE(column,'$','');

Kyong
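A quick sanity check of that one-liner, using SQLite (which shares the REPLACE() string function with MySQL) and the table/column names from the thread:

```python
import sqlite3

# Strip a leading '$' from every PartPrice value in one UPDATE,
# using REPLACE(column, from_str, to_str).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE imported_data (PartNumber TEXT, PartPrice TEXT)")
con.executemany("INSERT INTO imported_data VALUES (?, ?)",
                [("1", "$100"), ("49152", "$59.95")])
con.execute("UPDATE imported_data SET PartPrice = REPLACE(PartPrice, '$', '')")
print(con.execute(
    "SELECT PartPrice FROM imported_data ORDER BY PartNumber").fetchall())
# -> [('100',), ('59.95',)]
```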

On Thu, Nov 5, 2009 at 1:35 PM, Tim Legg kc0...@yahoo.com wrote:
 Hello,

 I am importing CSV data from a proprietary database into a table in MySQL.  
 Due to a flaw in the proprietary software's export tool, currency values 
 (floats) are always exported with a '$' prefixed to them.  This causes a 
 problem where the matching float field in the MySQL table is set to zero 
 (or not set at all) after import.

 As a solution to getting a complete import, I modified the data type of the 
 field in MySQL from float to varchar(8), so now the data is present in the 
 table.

 I am faced with the problem of removing the '$' from the string.

 I can filter out the '$' by doing a string manipulation,

 SELECT MID((SELECT `imported_data`.`PartPrice` FROM `imported_data` WHERE 
 `imported_data`.`PartNumber`='1') FROM 2);

 I can change the value of a record from $100 to 100 by hand.

 UPDATE `imported_data` SET `imported_data`.`PartPrice`='100' WHERE 
 `imported_data`.`PartNumber`='49152';

 And thus tried,

 UPDATE `imported_data` SET `imported_data`.`PartPrice`=(SELECT MID((SELECT 
 `imported_data`.`PartPrice` FROM `imported_data` WHERE 
 `imported_data`.`PartNumber`='49152') FROM 2);) WHERE 
 `imported_data`.`PartNumber`='49152';

 It was a nice try, but found out from MySQL that "You can't specify target 
 table 'imported_data' for update in FROM clause" and discovered that it 
 really looks like I cannot write data to a table while a nested query is 
 reading the same location.

 I could create a new field and insert into that instead of updating.

 But, regardless of the approach, I would still have to execute this statement 
 some 8,000 times.  Once for each part number I have.  Putting something like 
 this in a for-loop almost feels like I am avoiding a feature of convenience 
 that I am not aware of.

 There really must be an easier way.  Can anybody help me with a more elegant 
 solution?  (BTW, I have been explicitly forbidden from doing a search and 
 replace on '$' with the CSV file that got exported)


 Thank you for assisting me and your support of a fine database software 
 package!


 Timothy Legg










Re: Removing 1st character of string for all entries in field

2009-11-05 Thread Kyong Kim
Yeah. Sometimes the manual is lacking in practical examples.
Always good to try the list instead of just getting bogged down in the manual.
Kyong

On Thu, Nov 5, 2009 at 2:28 PM, Tim Legg kc0...@yahoo.com wrote:
 Holy Crapoly!

 SOLVED in 17 minutes  That is a new record for me.

 UPDATE `imported_data` SET `PartPrice`=REPLACE(`PartPrice`,'$','');

 ...accomplished my task.

 I read the REPLACE page in section 12.2.7 of the online Reference Manual and 
 didn't see the utility of it.  The REPLACE page there is so unclear, I can't 
 even backwards-comprehend the code using the documentation.  Where on earth 
 did you learn to code like this?  A one-liner at that, even on an 80-column 
 terminal.

 Thank you very much!

 Tim Legg
 --- On Thu, 11/5/09, Kyong Kim kykim...@gmail.com wrote:

 From: Kyong Kim kykim...@gmail.com
 Subject: Re: Removing 1st character of string for all entries in field
 To: Tim Legg kc0...@yahoo.com
 Cc: mysql@lists.mysql.com
 Date: Thursday, November 5, 2009, 3:52 PM
 I think you can use update replace.
 UPDATE table SET column=REPLACE(column,'$','');

 Kyong

 On Thu, Nov 5, 2009 at 1:35 PM, Tim Legg kc0...@yahoo.com
 wrote:
  Hello,
 
  I am importing CSV data from a proprietary database
 into a table in MySQL.  Due to a flaw in the proprietary
 software's export tool, currency values (floats) are always
 exported with a '$' prefixed to them.  This causes a
 problem where the matching float field in the MySQL table
 being set to zero (or not set at all) after import.
 
  As a solution to getting a complete import, I modified
 the data type of the field in MySQL from float to
 varchar(8), so now the data is present in the table.
 
  I am faced with the problem of removing the '$' from
 the string.
 
  I can filter out the '$' by doing a string
 manipulation,
 
  SELECT MID((SELECT `imported_data`.`PartPrice` FROM
 `imported_data` WHERE `imported_data`.`PartNumber`='1') FROM
 2);
 
  I can change the value of a record from $100 to 100 by
 hand.
 
  UPDATE `imported_data` SET
 `imported_data`.`PartPrice`='100' WHERE
 `imported_data`.`PartNumber`='49152';
 
  And thus tried,
 
  UPDATE `imported_data` SET
 `imported_data`.`PartPrice`=(SELECT MID((SELECT
 `imported_data`.`PartPrice` FROM `imported_data` WHERE
 `imported_data`.`PartNumber`='49152') FROM 2);) WHERE
 `imported_data`.`PartNumber`='49152';
 
  It was a nice try, but found out from MySQL that You
 can't specify target table 'imported_data' for update in
 FROM clause and discovered that it really looks like that I
 cannot write data to a table while a nested query is reading
 the same location.
 
  I could create a new field and insert into that
 instead of updating.
 
  But, regardless of the approach, I would still have to
 execute this statement some 8,000 times.  Once for each
 part number I have.  Putting something like this in a
 for-loop almost feels like I am avoiding a feature of
 convenience that I am not aware of.
 
  There really must be an easier way.  Can anybody help
 me with a more elegant solution?  (BTW, I have been
 explicitly forbidden from doing a search and replace on '$'
 with the CSV file that got exported)
 
 
  Thank you for assisting me and your support of a fine
 database software package!
 
 
  Timothy Legg
 
 
 
 
 
 









Re: error code 139 innodb

2009-10-15 Thread Kyong Kim
Raj,
Yup. It's that bug.
I got the row size to below 8K and the insertion takes place fine.
Thanks for pointing me in the right direction.
Kyong
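For anyone hitting the same bug: a rough, back-of-the-envelope check of worst-case row size against the ~8000-byte limit (half a 16K page) can be scripted. The byte counts below assume latin1 (1 byte per character) plus a 1-2 byte length prefix per VARCHAR; the column list is hypothetical:

```python
# Estimate the worst-case in-row size of a set of VARCHAR columns.
# This is a simplification of the 5.0-era InnoDB row-size rules, not
# an exact reproduction of them.
def worst_case_row_bytes(varchar_lengths):
    total = 0
    for n in varchar_lengths:
        total += n + (1 if n < 256 else 2)  # data + length prefix
    return total

cols = [255] * 40  # e.g. forty VARCHAR(255) columns
size = worst_case_row_bytes(cols)
print(size, "bytes;", "over" if size > 8000 else "under", "the ~8000-byte limit")
```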

On Wed, Oct 14, 2009 at 10:31 AM, Raj Shekhar rajl...@rajshekhar.net wrote:
 Kyong Kim kykimdba at gmail.com writes:

 For sure all of our columns combined do not exceed 64K. We're using
 latin1 character set.
 I don't think we would be running into the 8K limit on row length
 since the culprit seems to be data being inserted into VARCHAR(255)
 column.

 Can you show us your create table?

 This might be a known bug http://bugs.mysql.com/bug.php?id=25945









error code 139 innodb

2009-10-12 Thread Kyong Kim
We have an InnoDB table on MySQL 5.0.
We recently encountered this error during a multi-row insert (200 rows).
We identified the data causing it: a series of long strings
exceeding the VARCHAR(255) columns into which they're being inserted.
I've been looking at the InnoDB restriction page in the manual and
nothing seems to make sense.

For sure all of our columns combined do not exceed 64K. We're using the
latin1 character set.
I don't think we would be running into the 8K limit on row length
since the culprit seems to be data being inserted into VARCHAR(255)
column.
I'm assuming MySQL is silently truncating the string as it's being
inserted into the VARCHAR column.
Our TEXT columns are empty.

Anyone have any idea what might be causing it?
Kyong




Re: 100% CPU load problem

2009-09-21 Thread Kyong Kim
Is the status information correct?
 mysql  Ver 14.12 Distrib 5.0.27, for Win32 (ia32)
Are you using some sort of vm?
Kyong

On Mon, Sep 21, 2009 at 12:23 PM, Lawrence Robertson
lawrobert...@yahoo.com wrote:
 Hi.

 We have some MySql servers in a circular replication, and one of the servers 
 has been having performance issues for some weeks. I tried mysqlcheck and 
 nothing is wrong. I'm logging the slow queries and there's nothing wrong.

 The complete database in this server is like 1GB when it's GZipped. And 
 almost 10GB in the /var/lib/mysql directories, without bin-logs or ibdata 
 files. It's a CentOS 4.4 Linux and a MySql 5.0.27 server.


 Here is my STATUS, SHOW STATUS, SHOW VARIABLES and TOP output. Hope it's 
 useful.

 mysql status
 --
 mysql  Ver 14.12 Distrib 5.0.27, for Win32 (ia32)

 Connection id:          30
 Current database:
 Current user:           r...@192.168.0.142
 SSL:                    Not in use
 Using delimiter:        ;
 Server version:         5.0.26-standard-log
 Protocol version:       10
 Connection:             192.168.0.19 via TCP/IP
 Server characterset:    latin1
 Db     characterset:    latin1
 Client characterset:    latin1
 Conn.  characterset:    latin1
 TCP port:               6419
 Uptime:                 17 min 46 sec

 Threads: 17  Questions: 314  Slow queries: 0  Opens: 37  Flush tables: 1  
 Open t
 ables: 31  Queries per second avg: 0.295
 --


 mysql show status;
 +---+---+
 | Variable_name                     | Value     |
 +---+---+
 | Aborted_clients                   | 2         |
 | Aborted_connects                  | 0         |
 | Binlog_cache_disk_use             | 0         |
 | Binlog_cache_use                  | 0         |
 | Bytes_received                    | 782       |
 | Bytes_sent                        | 14389     |
 | Com_admin_commands                | 0         |
 | Com_alter_db                      | 0         |
 | Com_alter_table                   | 0         |
 | Com_analyze                       | 0         |
 | Com_backup_table                  | 0         |
 | Com_begin                         | 0         |
 | Com_change_db                     | 0         |
 | Com_change_master                 | 0         |
 | Com_check                         | 0         |
 | Com_checksum                      | 0         |
 | Com_commit                        | 0         |
 | Com_create_db                     | 0         |
 | Com_create_function               | 0         |
 | Com_create_index                  | 0         |
 | Com_create_table                  | 0         |
 | Com_dealloc_sql                   | 0         |
 | Com_delete                        | 0         |
 | Com_delete_multi                  | 0         |
 | Com_do                            | 0         |
 | Com_drop_db                       | 0         |
 | Com_drop_function                 | 0         |
 | Com_drop_index                    | 0         |
 | Com_drop_table                    | 0         |
 | Com_drop_user                     | 0         |
 | Com_execute_sql                   | 0         |
 | Com_flush                         | 0         |
 | Com_grant                         | 0         |
 | Com_ha_close                      | 0         |
 | Com_ha_open                       | 0         |
 | Com_ha_read                       | 0         |
 | Com_help                          | 0         |
 | Com_insert                        | 0         |
 | Com_insert_select                 | 0         |
 | Com_kill                          | 0         |
 | Com_load                          | 0         |
 | Com_load_master_data              | 0         |
 | Com_load_master_table             | 0         |
 | Com_lock_tables                   | 0         |
 | Com_optimize                      | 0         |
 | Com_preload_keys                  | 0         |
 | Com_prepare_sql                   | 0         |
 | Com_purge                         | 0         |
 | Com_purge_before_date             | 0         |
 | Com_rename_table                  | 0         |
 | Com_repair                        | 0         |
 | Com_replace                       | 0         |
 | Com_replace_select                | 0         |
 | Com_reset                         | 0         |
 | Com_restore_table                 | 0         |
 | Com_revoke                        | 0         |
 | Com_revoke_all                    | 0         |
 | Com_rollback                      | 0         |
 | Com_savepoint                     | 0         |
 | Com_select                        | 8         |
 | Com_set_option                    | 0         |
 | Com_show_binlog_events            | 0         |
 | Com_show_binlogs                  | 0         |
 | Com_show_charsets                 | 0         |
 | Com_show_collations               | 0         |
 | Com_show_column_types             | 0         |
 | 

Re: What should it be in MySql? In C, it's an array of integers.

2009-09-16 Thread Kyong Kim
Be careful about burying list-type data in a column.
I've seen poor performance parsing lists and XML-type data
buried in columns.
A lot depends on your application and how and what you need to query
from those lists.

I've seen a case where submitted documents were stored in a column
as XML docs, and the application had to fetch all the submissions
within the group and parse them in order to figure out who the
submitters were. This was being done at the gateway page of that tool.
It was a performance nightmare.
An alternative might be to store the integers vertically in a table
with a column describing the position of the value within the list.
Kinda off the cuff but something to think about.

Kyong
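A minimal sketch of that vertical layout, with SQLite standing in for MySQL and made-up table/column names:

```python
import sqlite3

# One row per integer, with a `pos` column preserving its position in
# the user's list, instead of a packed BLOB that has to be parsed.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_ints (uid INTEGER, pos INTEGER, val INTEGER,"
            " PRIMARY KEY (uid, pos))")
con.executemany("INSERT INTO user_ints VALUES (?, ?, ?)",
                [(1, i, v) for i, v in enumerate([42, 7, 99])])

# Reassemble one user's list in order -- no parsing needed, and the
# values are individually queryable.
vals = [v for (v,) in con.execute(
    "SELECT val FROM user_ints WHERE uid = 1 ORDER BY pos")]
print(vals)  # -> [42, 7, 99]
```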

On Wed, Sep 16, 2009 at 5:29 PM, John Meyer john.l.me...@gmail.com wrote:
 Pete Wilson wrote:

 Hi folks --

 What would be the right approach in MySql 5.0?

 My table, USERS, has columns NAME and IP. Associated with each user is
 also a collection of from 0 to 50 INTs. What's a reasonable way to put these
 50 INTs in the table without using 50 separate columns, INT01...INT50? Is
 BLOB an OK approach?

 I have to manipulate these INTs in my CGI code.

 Thanks!

 -- Pete



 Break them out into a separate table linked via the primary key.







Re: database design

2009-09-11 Thread Kyong Kim
A) You would probably want to populate the Article.Article_Type column
with Article_Type.ID. You probably wouldn't need the Article_Type table if
you're going to store the Article_Type value directly.

I would also consider the use of natural primary key vs surrogate
primary key. We've seen good results with primary key lookups on large
tables (especially when creating grouped subsets of data).

If you imagine your data set growing fairly large, you should take a
stab at projecting your workload to determine whether you would want
to optimize access speed vs insert.

For example, if you will be searching the article table by uid, you
might want to cluster the data by uid so all related articles will be
stored next to each other.

Kyong

On Fri, Sep 11, 2009 at 5:44 AM, Arthur Fuller fuller.art...@gmail.com wrote:
 I agree with Claudio. You have your design correct. The only other thing you
 need is the uid qualifier. Presumably you are using PHP or some other front
 end to present your data. Your front end would request the user's name and
 password, saving the uid in a variable and then issuing the select with a
 WHERE clause that passes the uid in:
 select * from articles A left join article_types AT on A.article_type =
 AT.article_types_id WHERE A.uid = insert your variable here

 hth,
 Arthur
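A runnable version of that join, with SQLite standing in for MySQL and a placeholder uid bound as a parameter:

```python
import sqlite3

# Schema follows the tables in the thread (users, articles,
# article_types); the data values are made up.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (uid INTEGER PRIMARY KEY, username TEXT, password TEXT);
CREATE TABLE article_types (article_types_id INTEGER PRIMARY KEY,
                            article_type TEXT);
CREATE TABLE articles (article_id INTEGER PRIMARY KEY, uid INTEGER,
                       article_type INTEGER, article_subject TEXT,
                       article_body TEXT);
INSERT INTO users VALUES (1, 'aj', 'secret');
INSERT INTO article_types VALUES (10, 'news');
INSERT INTO articles VALUES (100, 1, 10, 'hello', 'body');
""")

# All articles for one user, with the type name joined in.
rows = con.execute(
    "SELECT a.article_subject, t.article_type FROM articles a "
    "LEFT JOIN article_types t ON a.article_type = t.article_types_id "
    "WHERE a.uid = ?", (1,)).fetchall()
print(rows)  # -> [('hello', 'news')]
```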

 On Fri, Sep 11, 2009 at 8:22 AM, Claudio Nanni claudio.na...@gmail.comwrote:

 A.J., It sounds good to me!
 You can be a little confused but you did it well,
 It seems you have all you need there.

 A) Yes
 B)  select * from articles A left join article_types AT on A.article_type =
 AT.article_types_id

 Claudio




 2009/9/11 AndrewJames andrewhu...@gmail.com

  This is a bit of a long shot, but i really need some help and or directed
  to the best reading resources.
 
  as i began building my database (as i went along), i now realise i have to
  stop coding and sit back and design the database properly before i can go on.
 
  However i am still unable to wrap my head around what data to put into what
  tables, and which columns i need to link to make the relationships.  so far,
  here is what i have.
 
  TABLES:
 
  users
  -uid(pk)
  -username
  -password
 
  articles
  -article_id(pk)
  -uid(fk)
  -article_type(fk)
  -article_subject
  -article_body
 
  article_types
  -article_types_id(pk)
  -article_type
 
  So i want the user to be able to login and add articles.
 
  I then want to be able to view all the articles the user has submitted.
 
  So in my understanding i need to link the users.uid(pk) to the
  articles.uid(fk) (so i know which user the article belongs to, please
  correct and update me if i am wrong)
 
  I am stuck at this point.
 
  A) Have i created the right tables and columns for each table, AND
  B) How do i link the articles.article_type to articles_type.type? (IF in
  fact that is even the correct linkage)??
 
 
 


 --
 Claudio






Re: Does InnoDB ever not cluster data by primary key?

2009-07-31 Thread Kyong Kim
Michael,
We're counting on batch inserts of constant 2 leftmost columns of the
primary key.
We would be selecting within constant values for the leftmost columns as well.

For example, our primary key is

country_id, city_id, auto_inc, ...

We'll always be looking for data from within the same country and city.
Kyong
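A toy model of the clustering behavior, with a sorted Python list standing in for the clustered index leaf order (values made up):

```python
import bisect

# Leaf rows kept in primary-key order, as InnoDB's clustered index
# does. With a PK of (country_id, city_id, auto_inc), rows for one
# (country, city) pair end up adjacent, and a rising auto_inc appends
# to the end of that group instead of landing at random positions.
leaf = []  # sorted list of PK tuples
for auto_inc in range(3):
    for country, city in [(1, 10), (1, 20), (2, 10)]:
        bisect.insort(leaf, (country, city, auto_inc))

print(leaf)
# Rows group by (country_id, city_id):
# [(1, 10, 0), (1, 10, 1), (1, 10, 2), (1, 20, 0), ...]
```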


On Thu, Jul 30, 2009 at 9:44 PM, Michael Dykmanmdyk...@gmail.com wrote:
 With your auto-increment in the right-most position, it seems to me
 that it would tend to make your inserts non-sequential (assuming the
 fields to the left are not sequential) causing inserts to occur all
 over the tree.  With the auto-increment as the first field in the key,
 the inserts would be going to the same place in the tree allowing it
 to build out nicely.

 I have definitely found that sequential inserts perform much better
 than random ones.

  - md

 On Fri, Jul 31, 2009 at 12:25 AM, Kyong Kimkykim...@gmail.com wrote:
 Michael,
 Yeah. We're trying to maximize the benefits of clustering and had to
 sacrifice on the length of the primary key.
 And we got fairly good results from query profiling using maatkit.
One thing that shocked me was the overhead of random inserts and
primary key updates.
 It's definitely a tradeoff.
 We're reasonably certain that we'll see a lot of ordered bulk inserts.

 It ran counter to the results that we were seeing so I had to verify
 that InnoDB always clusters by primary key regardless of the position
 of the auto increment column in the primary key.
 Kyong

 On Thu, Jul 30, 2009 at 7:08 PM, Michael Dykmanmdyk...@gmail.com wrote:
 InnoDb storage format is (always) a b-tree based on the primary key,
 so the simple answer is: no, InnoDB never clusters by anything other
 than a primary key.  The size of that key can have significant impact
 on performance though, so be careful with the multi-column primary
 key.  Assuming your primary key remains constant over the lifetime of
 the record, I don't think it matters much where you put the
 auto-increment key.

  - michael


 On Thu, Jul 30, 2009 at 10:00 PM, Kyong Kimkykim...@gmail.com wrote:
 We have a multi-column primary key with an auto-increment column as
 the 3rd column in the primary key in InnoDB.
 Is there a requirement to have the auto-increment column as the
 leftmost column in the primary key in order for InnoDB to cluster by
 the multi-column primary key?
 I don't believe this to be the case but there has been some discussion
 on this topic.
 I haven't been able to find any definitive answers.
 Judging by the query profiling results and explain output,  we are
 seeing the benefits of clustering by primary key.
 If you have any insight on this matter, it would be much appreciated.
 Kyong






 --
  - michael dykman
  - mdyk...@gmail.com

 Don’t worry about people stealing your ideas. If they’re any good,
 you’ll have to ram them down their throats!

   Howard Aiken










Does InnoDB ever not cluster data by primary key?

2009-07-30 Thread Kyong Kim
We have a multi-column primary key with an auto-increment column as
the 3rd column in the primary key in InnoDB.
Is there a requirement to have the auto-increment column as the
leftmost column in the primary key in order for InnoDB to cluster by
the multi-column primary key?
I don't believe this to be the case but there has been some discussion
on this topic.
I haven't been able to find any definitive answers.
Judging by the query profiling results and explain output,  we are
seeing the benefits of clustering by primary key.
If you have any insight on this matter, it would be much appreciated.
Kyong




Re: Does InnoDB ever not cluster data by primary key?

2009-07-30 Thread Kyong Kim
Michael,
Yeah. We're trying to maximize the benefits of clustering and had to
sacrifice on the length of the primary key.
And we got fairly good results from query profiling using maatkit.
One thing that shocked me was the overhead of random inserts and
primary key updates.
It's definitely a tradeoff.
We're reasonably certain that we'll see a lot of ordered bulk inserts.

It ran counter to the results that we were seeing so I had to verify
that InnoDB always clusters by primary key regardless of the position
of the auto increment column in the primary key.
Kyong

On Thu, Jul 30, 2009 at 7:08 PM, Michael Dykmanmdyk...@gmail.com wrote:
 InnoDb storage format is (always) a b-tree based on the primary key,
 so the simple answer is: no, InnoDB never clusters by anything other
 than a primary key.  The size of that key can have significant impact
 on performance though, so be careful with the multi-column primary
 key.  Assuming your primary key remains constant over the lifetime of
 the record, I don't think it matters much where you put the
 auto-increment key.

  - michael


 On Thu, Jul 30, 2009 at 10:00 PM, Kyong Kimkykim...@gmail.com wrote:
 We have a multi-column primary key with an auto-increment column as
 the 3rd column in the primary key in InnoDB.
 Is there a requirement to have the auto-increment column as the
 leftmost column in the primary key in order for InnoDB to cluster by
 the multi-column primary key?
 I don't believe this to be the case but there has been some discussion
 on this topic.
 I haven't been able to find any definitive answers.
 Judging by the query profiling results and explain output,  we are
 seeing the benefits of clustering by primary key.
 If you have any insight on this matter, it would be much appreciated.
 Kyong











composite vs single column secondary index in innodb

2009-06-24 Thread Kyong Kim
We have a composite primary key consisting of
column a, column b, column c. We don't have a lot of variation on
column a and it makes sense for us to cluster by a.
Our queries are
SELECT column c FROM table WHERE column a=something and column e=something.
By creating a composite secondary index (column a and column e) instead
of a single-column index on column e, we're thinking we can reduce the
amount of seeking on the index on column e.

My question is -
is the column a value already available via bookmark lookups with a
single-column index on e, or would we be better off creating a composite
index on a and e if we always search by a and e? We're willing to accept
some overhead on inserts in exchange for faster selects.

Kyong
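For what it's worth, here is a toy comparison of the two index shapes. In InnoDB a secondary index entry carries the full primary key, so the value of column a is available without a bookmark lookup either way; the difference is entry ordering:

```python
# Toy model: an index on e alone orders entries by e, so a query on
# a AND e must scan every entry sharing that e value and filter on a.
# An index on (a, e) lands directly on the matching range. All values
# are made up.
rows = [(a, b, c, e) for a in (1, 2) for b in range(3) for c in range(2)
        for e in (7, 8)]

idx_e = sorted((e, (a, b, c)) for (a, b, c, e) in rows)      # index on e
idx_ae = sorted((a, e, (b, c)) for (a, b, c, e) in rows)     # index on (a, e)

# WHERE a=1 AND e=7:
scanned = [t for t in idx_e if t[0] == 7]          # all e=7 entries touched
hits_e = [t for t in scanned if t[1][0] == 1]      # filtered using PK's a
hits_ae = [t for t in idx_ae if t[:2] == (1, 7)]   # direct range on (1, 7)
print(len(scanned), len(hits_e), len(hits_ae))  # -> 12 6 6
```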




Effect of NULL on index performance specific to InnoDB

2009-06-01 Thread Kyong Kim
It's often said that NOT NULL column is preferable in terms of index
performance.
I was wondering exactly why and how this is so specifically to InnoDB.
It would be great if someone can shed light on this matter in some detail.
Kyong




Re: Creating a New DB / User

2009-05-13 Thread Kyong Kim

You would be fine.
Grant all does not include with grant option.
Ability to grant has to be given explicitly.
Kyong

At 08:57 AM 5/13/2009, Carlos Williams wrote:

On Wed, May 13, 2009 at 11:15 AM, Michael Dykman mdyk...@gmail.com wrote:
 One thing though: your user does not need the 'WITH GRANT OPTION' bit
 as that gives them permission to gratn permissions to other users,
 which I don't think is what you want.

How do I change the bit for 'grant all' via the CLI? What command would
I run so the user keeps full rights to the 'dublin' database but is not
able to grant other users permissions?




Kyong Kim
Instructional Multimedia/Web Programmer
Foothill College
12345 El Monte Rd
3601
Los Altos Hills, CA 94022
650-949-7091





Re: splitting large tables vertically

2009-05-10 Thread Kyong Kim
Simon,
Thanks for the feedback.
I don't have all the details of the schema and workload. Just an
interesting idea that was presented to me.
I think the idea is to split a lengthy secondary key lookup into 2 primary
key lookups, and to reduce the cost of clustering the secondary key with
primary key data by using a shorter INT type surrogate key. A downside is
the possible need for foreign keys and the added complexity of inserts and
multi-column updates.

Have you found primary key lookups to be at least twice as fast as
secondary key lookups with VARCHAR type primary key in InnoDB? The whole
idea is based on the assumption that it is.
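A minimal sketch of the two-lookup pattern being discussed, using SQLite in place of InnoDB purely for illustration; the tables `t1`/`t2`, the `name` column, and the payload values are all hypothetical:

```python
import sqlite3

# Hypothetical schema: a short INT surrogate key (id) is the primary
# (clustering) key; the long VARCHAR lives in a narrow lookup table
# behind a UNIQUE secondary index.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE t2 (id INTEGER PRIMARY KEY, payload TEXT);
INSERT INTO t1 VALUES (1, 'alice');
INSERT INTO t2 VALUES (1, 'alice-data');
""")

# Lookup 1: secondary key (name) resolves to the surrogate primary key.
(pk,) = con.execute("SELECT id FROM t1 WHERE name = ?", ("alice",)).fetchone()
# Lookup 2: primary-key lookup fetches the column data.
(payload,) = con.execute("SELECT payload FROM t2 WHERE id = ?", (pk,)).fetchone()
print(pk, payload)  # 1 alice-data
```

Whether two short primary-key probes actually beat one VARCHAR secondary-key lookup is exactly the assumption in question, so this is only a shape of the idea, not evidence for it.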

MyISAM conversion is also an option. Have you found the table
maintenance to be a significant overhead? I've experienced MyISAM table
corruptions in production and I'm more inclined to go with InnoDB for its
reliability. This is a fairly important table.

Any insight would be much appreciated.
Kyong

 kimky...@fhda.edu (Kyong Kim) writes:

 I was wondering about a scale out problem.
 Let's say you have a large table with 3 cols and 500+ million rows.

 Would there be much benefit in splitting the columns into different
 tables
 based on INT type primary keys across the tables?

 To answer your question properly requires more information:

 1. Expected table structure. Can you show the current SHOW CREATE TABLE xxx\G
 output?
 2. Expected use cases to extract data?
 3. Do you expect to delete data frequently, or are you only inserting
 data,
or is there a mix of inserts and deletes? If so provide more info.

 I've come across situations where a large table like this caused lots
 of problems. There were lots of concurrent delete batches (cleaning
 up) and at the same time lot of inserts. At the same time there were
 large groups of selects to collect certain sets of data for
 presentation. Perhaps you are doing something similar? If you do
 something similar you may find that it's extremely important to get
 the keys right especially the primary keys so that data retrieval (for
 SELECTs or DELETEs) is as fast as possible (using clustered indexes
 [PRIMARY KEY in innodb]). If not or if the queries overlap you may
 find performance degradation a big issue as Innobase manages the locks
 to ensure that the concurrent statements don't interfere.

 You can also use MERGE tables sitting on top of MyISAM tables split per
 year (or per whatever partitions your data). That way, to find data
 for 2009 you look in table xxx_2009, so this can be a big
 win. MyISAM has the inconvenience that if the server ever crashes
 recovery of these tables can be very time-consuming.  InnoDB has a
 larger footprint for the same data.

 So it's hard without more information on the structure and the use
 cases to answer your question. In fact if you have the time, try out
 and benchmark different approaches and see which is best for your
 requirements. Just remember that as the data grows the initial
 measurements may not be consistent with behaviour you see later. Also
 if you are looking at a large amount of data like this appropriate
 server tuning can influence performance significantly.

 Hope this helps.

 Simon




Inst. Web Programmer
CMDBA 5.0




Re: splitting large tables vertically

2009-05-10 Thread Kyong Kim
 That's why you really need to be more precise in the data structures
 you are planning on using. This can change the results significantly.

 So no, I don't have any specific answers to your questions as you don't
 provide any specific information in what you ask.

Yeah. Let me see if I can follow up with more concrete information
sometime in the future. I find performance tuning to be workload-dependent and
it is difficult to project without having all the details.

 Well, disk (and memory) usage can also be important; since InnoDB
 storage seems less efficient, this may actually degrade
 performance.  Until you are more concrete it's hard to say what will
 work best for you.

At this point I'm fairly convinced that this idea of vertically partitioning
a table into column tables will degrade performance unless the workload is
tailor-made for this.
The cost of joins and index lookup/column data seems a bit too high for
almost any scenario.
Thanks for the prompt response. I'll follow up with you if I have more
concrete details.

Thanks
Kyong











splitting large tables vertically

2009-05-09 Thread Kyong Kim
I was wondering about a scale-out problem.
Let's say you have a large table with 3 cols and 500+ million rows.

Would there be much benefit in splitting the columns into different tables
based on INT type primary keys across the tables? The split tables will be
hosted on the same physical instance but can be spread over multiple disks.
We're also open to splitting the query and reconstituting the data at the
application layer such as

select col1, col2 from t1 where col2='name';
select col2 from t2 where col1=?;  -- ? bound to col1 from the first query
select col2 from t3 where col1=?;

as opposed to

select t1.col2, t2.col2, t3.col2 from t1 inner join t2 on t1.col1=t2.col1
inner join t3 on t1.col1=t3.col1;

My concern with this approach is the overhead of joins over such a large
number of rows. I was doing some research into the cost of joins and as of
5.0, joins were still nested-loop scans.
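As a toy illustration of the nested-loop concern: for each row of the outer table, the server probes every joined table, so the work scales with the outer row count times the per-row probe cost. The dict-as-index below is only a stand-in for an index probe, not MySQL internals, and reuses the hypothetical t1/t2/t3 names from above:

```python
# Toy nested-loop join: outer scan over t1, "index probes" into t2 and t3.
t1 = [(1, "a"), (2, "b")]   # rows of (col1, col2)
t2 = {1: "x", 2: "y"}       # col1 -> col2, playing the role of an index
t3 = {1: "p", 2: "q"}

result = []
for col1, t1_col2 in t1:            # one pass over the outer table
    if col1 in t2 and col1 in t3:   # one probe per joined table, per row
        result.append((t1_col2, t2[col1], t3[col1]))

print(result)  # [('a', 'x', 'p'), ('b', 'y', 'q')]
```

With 500+ million outer rows, even cheap per-row probes add up, which is the worry with reconstituting the vertically split table on every read.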

I was wondering if there are others with practical experience in this
matter and what they've found. Any feedback will be much appreciated.

Kyong
Inst. Web Programmer
CMDBA 5.0




Re: Help with mysql query, multiple list

2009-05-08 Thread Kyong Kim

Abhi,

I might not be understanding the problem but could you use the max 
and min timestamp values and use something like


SELECT * FROM TABLE WHERE start BETWEEN min AND max AND end BETWEEN min AND max

or

SELECT * FROM TABLE WHERE START IN (1,2,3,4,5) AND END IN(1,2,3,4,5)

I might be completely off-base here though as I don't think I fully 
comprehend what you're trying to do.


Kyong

At 09:36 AM 5/8/2009, Abhishek Pratap wrote:

aah okie I think I was trying to get too clever. Guess that won't work ...

Thanks,
-Abhi

On Fri, May 8, 2009 at 12:34 PM, Barney Boisvert bboisv...@gmail.comwrote:

 You'll have to iterate over your two lists of timestamps and build a
 set of ORed conditional pairs:

  sql = "select ... from ... where 1 = 0"
  for (i = 0; i < timestamps.length; i++) {
   sql += " or (start <= " + timestamps[i] + " and end >= " + timestamps[i] + ")"
  }

 You'll want to use bind parameters in real life, of course.
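With bind parameters, the same loop might look like this in Python; the `events` table and column names are placeholders:

```python
# Build one ORed "(start <= ? AND end >= ?)" pair per timestamp, keeping
# the values as bind parameters instead of splicing them into the SQL text.
timestamps = [5, 12, 30]

clauses = " OR ".join(["(start <= ? AND end >= ?)"] * len(timestamps))
sql = "SELECT * FROM events WHERE " + clauses
# Each timestamp is bound twice: once for start, once for end.
params = [v for ts in timestamps for v in (ts, ts)]

print(sql)
print(params)  # [5, 5, 12, 12, 30, 30]
```

The `sql`/`params` pair can then be handed to whatever DB API the application uses.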

 cheers,
 barneyb

 On Fri, May 8, 2009 at 9:26 AM, Abhishek Pratap abhishek@gmail.com
 wrote:
  Hi All
 
   I am kind of stuck with this query, can't expand my thinking. Maybe this
  is a
   limitation.  Here it is
 
  I have a database with many cols two of which are start and end position
 for
  an event.
 
  Now I have a list of event time stamps, I want to find all the info once
 the
   current event time stamp is >= start time of event and <= end time of
  event.
 
  something  like this
 
   select * from table_name where start <= (LIST of time stamps) AND end
  >= (
   list of time stamps).
 
  Clearly above query accepts only one item in the list. Is there a way to
 do
  this for multiple items in the list ??? I can't think of anything at this
  moment.
 
 
  Thanks,
  -Abhi
 



 --
 Barney Boisvert
 bboisv...@gmail.com
 http://www.barneyb.com/









Re: mysql 4.1 server optimization

2009-05-05 Thread Kyong Kim








Re: mysql 4.1 server optimization

2009-05-05 Thread Kyong Kim
socket  /var/lib/mysql/mysql.sock
sort_buffer_size4194296
sql_mode
sql_notes   ON
sql_warningsON
storage_engine  MyISAM
sync_binlog 1
sync_frmON
sync_replication0
sync_replication_slave_id   0
sync_replication_timeout0
system_time_zoneCDT
table_cache 407
table_type  MyISAM
thread_cache_size   16
thread_stack196608
time_format %H:%i:%s
time_zone   SYSTEM
tmp_table_size  536870912
tmpdir
transaction_alloc_block_size8192
transaction_prealloc_size   4096
tx_isolationREPEATABLE-READ
version 4.1.22-standard-log
version_comment MySQL Community Edition - Standard (GPL)
version_compile_machine i686
version_compile_os  pc-linux-gnu
wait_timeout28800


Show status


Aborted_clients 234
Aborted_connects0
Binlog_cache_disk_use   0
Binlog_cache_use0
Bytes_received  225606412
Bytes_sent  3005443782
Com_admin_commands  66
Com_alter_db0
Com_alter_table 0
Com_analyze 0
Com_backup_table0
Com_begin   1
Com_change_db   183
Com_change_master   0
Com_check   0
Com_checksum0
Com_commit  0
Com_create_db   0
Com_create_function 0
Com_create_index0
Com_create_table0
Com_dealloc_sql 0
Com_delete  1280
Com_delete_multi3
Com_do  0
Com_drop_db 0
Com_drop_function   0
Com_drop_index  0
Com_drop_table  0
Com_drop_user   0
Com_execute_sql 0
Com_flush   0
Com_grant   0
Com_ha_close0
Com_ha_open 0
Com_ha_read 0
Com_help0
Com_insert  32819
Com_insert_select   43
Com_kill0
Com_load0
Com_load_master_data0
Com_load_master_table   0
Com_lock_tables 0
Com_optimize0
Com_preload_keys0
Com_prepare_sql 0
Com_purge   0
Com_purge_before_date   0
Com_rename_table0
Com_repair  0
Com_replace 0
Com_replace_select  0
Com_reset   0
Com_restore_table   0
Com_revoke  0
Com_revoke_all  0
Com_rollback0
Com_savepoint   0
Com_select  250474
Com_set_option  607
Com_show_binlog_events  0
Com_show_binlogs0
Com_show_charsets   0
Com_show_collations 158
Com_show_column_types   0
Com_show_create_db  0
Com_show_create_table   308
Com_show_databases  4
Com_show_errors 0
Com_show_fields 145
Com_show_grants 0
Com_show_innodb_status  0
Com_show_keys   45
Com_show_logs   0
Com_show_master_status  0
Com_show_ndb_status 0
Com_show_new_master 0
Com_show_open_tables0
Com_show_privileges 0
Com_show_processlist320
Com_show_slave_hosts1
Com_show_slave_status   0
Com_show_status 1
Com_show_storage_engines0
Com_show_tables 108
Com_show_variables  173
Com_show_warnings   0
Com_slave_start 0
Com_slave_stop  0
Com_stmt_close  0
Com_stmt_execute0
Com_stmt_prepare0
Com_stmt_reset  0
Com_stmt_send_long_data 0
Com_truncate0
Com_unlock_tables   1
Com_update  78123
Com_update_multi0
Connections 170
Created_tmp_disk_tables 5454
Created_tmp_files   1010
Created_tmp_tables  3
Delayed_errors  0
Delayed_insert_threads  0
Delayed_writes  0
Flush_commands  1
Handler_commit  0
Handler_delete  949
Handler_discover0
Handler_read_first  1599
Handler_read_key355904561
Handler_read_next   1142413734
Handler_read_prev   363834
Handler_read_rnd7371541
Handler_read_rnd_next   587538060
Handler_rollback161
Handler_update  20384
Handler_write   50259202
Key_blocks_not_flushed  47435
Key_blocks_unused   220362
Key_blocks_used 708736
Key_read_requests   3885623178
Key_reads   1106287
Key_write_requests  50896389
Key_writes  643557
Max_used_connections39
Not_flushed_delayed_rows0
Open_files  196
Open_streams0
Open_tables 232
Opened_tables   238
Qcache_free_blocks  204
Qcache_free_memory  28672400
Qcache_hits 857901
Qcache_inserts  249443
Qcache_lowmem_prunes63919
Qcache_not_cached   1030
Qcache_queries_in_cache 878
Qcache_total_blocks 2280
Questions   1222721
Rpl_status  NULL
Select_full_join155
Select_full_range_join  0
Select_range14658
Select_range_check  0
Select_scan 3788
Slave_open_temp_tables  0
Slave_retried_transactions  0
Slave_running   OFF
Slow_launch_threads 0
Slow_queries4143
Sort_merge_passes   503
Sort_range  41071
Sort_rows   7353497
Sort_scan   5967
Table_locks_immediate   458067
Table_locks_waited  850
Threads_cached  14
Threads_connected   17
Threads_created 70
Threads_running 2
Uptime  444983





