Re: 50 things to know before migrating from Oracle to MySQL

2010-01-29 Thread fsb
On 1/28/10 5:21 AM, "changuno"  wrote:

> Hi folks,
> 
> Read a blog which states 50 things to know before migrating from Oracle to
> MySQL. Any comments on this?

as a relatively unsophisticated dbms user (just dynamic web site back end),
i thought it was very interesting to see the kinds of things oracle users do
that i'd never have imagined.

more than 61 joins in a query?! man, those guys are hardcore.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Best way to synchronize two database schemas

2010-01-23 Thread fsb
On 1/21/10 12:03 PM, "Price, Randall"  wrote:

> I have a two databases, one in a production environment (let's call it
> db_prod) and the other in a testing environments (Let's call it db_test).
> 
> What is the best way to synchronize the database schemas?  db_test has had a
> few indexes and constraints added to several tables and I need to generate a
> MySQL script to apply these changes to db_prod.  So basically I want to dump
> the schemas of the two database, compare, and generate the necessary script to
> apply to db_prod.

i'd use "export structure" in phpmyadmin on both dbs, then run diff on the
two export files, and write a script by hand based on the diff.
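the same idea works from the command line; a sketch assuming mysqldump access to both servers (host and user names here are placeholders):

```shell
# dump schemas only (no row data) from each server; connection details are hypothetical
mysqldump --no-data --skip-comments -h prod-host -u user -p db_prod > prod_schema.sql
mysqldump --no-data --skip-comments -h test-host -u user -p db_test > test_schema.sql

# lines only in the second file are the indexes/constraints added on db_test
diff prod_schema.sql test_schema.sql
```

then translate the diff hunks into ALTER TABLE statements by hand.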






Re: SCALING INSERT

2010-01-23 Thread fsb
the non-linearity in the insert rate usually means you have indexes on some
columns: index maintenance gets more expensive as the table grows.

depending on your situation, mysql can be more efficient if you drop those
indexes, do bulk inserts, and then add the indexes again.
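a sketch of that drop/bulk-load/re-add pattern (table, index, and file names are illustrative; for MyISAM tables ALTER TABLE ... DISABLE KEYS is an alternative to dropping):

```sql
-- drop the secondary index before the bulk load
ALTER TABLE t DROP INDEX idx_col;

-- bulk load: LOAD DATA INFILE or multi-row INSERTs are much faster
-- than row-at-a-time inserts
LOAD DATA INFILE '/tmp/rows.csv' INTO TABLE t;

-- rebuild the index once, after all rows are in
ALTER TABLE t ADD INDEX idx_col (col);
```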


On 1/23/10 5:02 AM, "Krishna Chandra Prajapati" 
wrote:

> Hi shawn,
> 
> As the data grows to 20 millions the insert rate will become very slow. In
> such case i am getting 2000 insert/seconds only.
> 
> Therefore my objective is not achieved.
> 
> I cannot slow up the insert rate of 10,000/second. I am getting data
> (inserted by users at this rate)
> 
> Is there any other way to do so. (distributed servers)
> 
> Thanks,
> Krishna
> 
> On Thu, Jan 21, 2010 at 7:39 PM, Shawn Green  wrote:
> 
>> Krishna Chandra Prajapati wrote:
>> 
>>> Hi list,
>>> 
>>> I want to insert 1 records/sec into table.  There can be n number of
>>> tables with unique data in each. What are the possible ways to do ?
>>> 
>>> Thanks,
>>> Krishna
>>> 
>> The manual is your friend. It doesn't hurt to consult it.
>> 
>> INSERT ...
>> http://dev.mysql.com/doc/refman/5.1/en/insert.html
>> 
>> LOAD DATA INFILE ...
>> http://dev.mysql.com/doc/refman/5.1/en/load-data.html
>> 
>> And we even have both general and specific suggestions on improving INSERT
>> performance in our Optimization chapter:
>> http://dev.mysql.com/doc/refman/5.1/en/optimization.html
>> 
>> Warmest regards,
>> --
>> Shawn Green, MySQL Senior Support Engineer
>> Sun Microsystems, Inc.
>> Office: Blountville, TN
>> 
>> 
>> 






Re: REGEXP and unicode weirdness

2010-01-21 Thread fsb
On 1/21/10 10:27 AM, "John Campbell"  wrote:

> I want to find rows that contain a word that matches a term, accent
> insensitive:  I am using utf8-general collation everywhere.
> 
> attempt 1:
> SELECT * FROM t WHERE txt LIKE '%que%'
> Matches que qué, but also matches 'queue'
> 
> attempt 1.5:
> SELECT * FROM t WHERE txt LIKE '% que %' OR txt LIKE 'que %' OR txt LIKE '% que';
> Almost, but misses "que!"  or 'que...'
> 
> attempt2:
> SELECT * FROM t WHERE txt REGEXP '[[:<:]]que[[:>:]]'
> Matches que, not queue, but doesn't match qué.
> 
> attempt3
> SELECT * FROM t WHERE txt REGEXP
> '[[:<:]]q[uùúûüũūŭůűųǔǖǘǚǜ][eèéêëēĕėęě][[:>:]]'
> Matches que, queue, qué.  (I have no idea why this matches queue, but
> the Regex behavior is bizarre with unicode.)
> 
> Does anyone know why the final regex acts weird?

"Warning

"The REGEXP and RLIKE operators work in byte-wise fashion, so they are not
multi-byte safe and may produce unexpected results with multi-byte character
sets. In addition, these operators compare characters by their byte values
and accented characters may not compare as equal even if a given collation
treats them as equal." -- MySQL manual, section 11.4.2


> It there a good solution?

doesn't look like it.

Sphinxsearch might work nicely for you (it does for me), though it may not be
an option in your setup. i generated a Sphinxsearch charset_table config that
mimics the utf8_general_ci collation.
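for what it's worth, the contrast shows up directly in a collation-aware comparison versus the byte-wise REGEXP (a sketch; results assume a utf8_general_ci connection):

```sql
-- equality uses the collation, so accents compare equal
SELECT 'qué' = 'que';        -- expect 1 under utf8_general_ci

-- REGEXP works byte-wise on the utf8 bytes, so the accented
-- character need not match the way the collation suggests
SELECT 'qué' REGEXP 'que';
```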






Re: Quick help with Insert

2010-01-21 Thread fsb
that's exactly how i do it.


On 1/19/10 3:57 PM, "Intell! Soft"  wrote:

> Thanks! - I found a Insert Into on your tip:
> 
> Insert Into Lieferanten (Lieferant)
> select distinct
>   a.lieferant
> from artikelstamm a
>   left join lieferanten b on
> a.lieferant = b.lieferant
> where
>   b.lieferant is null
> 
> 
> 
> 
> -Original Message-
> From: Carlos Proal [mailto:carlos.pr...@gmail.com]
> Posted: Tuesday, January 19, 2010 20:15
> Posted in: gmane.comp.db.mysql.general
> Conversation: Quick help with Insert
> Subject: Re: Quick help with Insert
> 
> 
> Hi !!
> 
> You need a left join and then an insert.
> Please read: http://dev.mysql.com/doc/refman/5.1/en/join.html or google
> for tutorials on left join
> 
> And tell me if you have further questions
> 
> Carlos
> 
> 
> On 1/19/2010 1:00 PM, Intell! Soft wrote:
>> Hey
>> Not really quick ;>  - But nobody knows an answer?
>> 
>> THX
>> 
>> 
>> -Original Message-
>> From: Intell! Soft [mailto:intellis...@fachoptiker.net]
>> Posted: Thursday, January 14, 2010 17:40
>> Posted in: gmane.comp.db.mysql.general
>> Conversation: Quick help with Insert
>> Subject: Quick help with Insert
>> 
>> Hey
>> 
>> I would need quick help with an Insert statement
>> So, I have two tables Table A&  Table B
>> So, in Table A I have a field called customerID
>> The same field I do have in Table B
>> So, I want to find out, which customerID from Table A is NOT in Table B
>> and fill the customerID's which are not present in Table B.
>> 
>> Understood? - Hope so
>> 
>> THX !
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 
> 






Re: Regarding storing non-english chars in mysql

2010-01-19 Thread fsb
On 1/19/10 2:19 AM, "Ningappa  Koneri"  wrote:

> 3. It worked even after removing the  tag from the head tag as well <-- please
> throw some light on why it got displayed at the browser?

this is not a mysql question. but see:

http://www.w3.org/TR/1999/REC-html401-19991224/charset.html#h-5.2.2

the likelihood is that either your web server is sending a correct
Content-Type header or your web browser is choosing the right encoding, either
by guesswork (eg http://www.mozilla.org/projects/intl/chardet.html) or by
config/default.






Re: Regarding storing non-english chars in mysql

2010-01-18 Thread fsb
you can specify the character encoding (called CHARSET in mysql) and
collation on a per column, per table or per database level. e.g.

CREATE DATABASE foo CHARACTER SET utf8 COLLATE utf8_general_ci;

or

CREATE TABLE foo ( ... ) CHARACTER SET utf8 COLLATE utf8_general_ci;

or

CREATE TABLE foo (
bar varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL);

mysql's default character encoding is latin1.

for mysql's collations, see: http://www.collation-charts.org/

i wouldn't use utf8 indiscriminately, only for columns that need it --
indexes need 3 bytes per character on utf8 columns. (utf16 would be better
in that regard!)

don't forget to set session system variables, e.g. using SET NAMES or with
your api.
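the session-variable step might look like this (a sketch; the equivalent api call depends on your connector):

```sql
-- sets character_set_client, character_set_connection and
-- character_set_results for this session in one statement
SET NAMES utf8 COLLATE utf8_general_ci;
```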


On 1/18/10 2:44 PM, "Gavin Towey"  wrote:

> http://dev.mysql.com/doc/refman/5.0/en/charset.html
> 
> 
> -Original Message-
> From: Ningappa Koneri [mailto:ningappa.kon...@comviva.com]
> Sent: Sunday, January 17, 2010 11:32 PM
> To: mysql@lists.mysql.com
> Subject: Regarding storing non-english chars in mysql
> 
> Dear all,
> 
> I have a problem in migrating a GUI app(displays multi lingual data like
> Arabic) which currently uses oracle as backend db to mysql (5.1). Currenly in
> oracle I am using NVARCHAR datatype to store the Arabic chars. How do I
> incorporate the same functionality in mysql ? I have tried a sample
> servlet/jsp to insert the arabic chars into mysql by creating a table having
> two columns of NATIONAL VARCHAR type, but it's not displaying Arabic chars;
> instead only question marks are there.
> 
> One more thing in sample example is that before inserting I am converting to
> UTF-8.
> 
> Regards,
> Ningappa Koneri
> mLifestyle | www.comviva.com
> 
> 
> 






Re: Performance of MySQL IN() given a list of constants.

2010-01-15 Thread fsb
the example you gave would work with a range constraint:

  WHERE `bar_id` > 0 AND `bar_id` < 63

but i guess this is not a general solution.

i've done exactly this kind of select using an IN constraint very often.
i've not had any trouble with lists of a few hundred so long as i have the
necessary index. 

however, when the list gets really long i've divided it into multiple
selects in the app. i've nothing concrete to go on, but a very long list of
constants makes me nervous: can you break something if the query length
exceeds the server's max_allowed_packet?
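the splitting is easy to do in the application layer; a minimal python illustration (the `foo`/`bar_id` names follow your example, and the DB-API connection is assumed, not shown):

```python
def chunked(ids, size):
    """Yield successive slices of ids, each no longer than size."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

def fetch_foos(conn, bar_ids, chunk_size=100):
    """Run one SELECT ... IN (...) per chunk and combine the rows."""
    rows = []
    for chunk in chunked(bar_ids, chunk_size):
        placeholders = ", ".join(["%s"] * len(chunk))
        sql = "SELECT * FROM foo WHERE bar_id IN (%s)" % placeholders
        cur = conn.cursor()
        cur.execute(sql, chunk)   # parameters escaped by the driver
        rows.extend(cur.fetchall())
    return rows
```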

one thought i had was to create another table `baz` just to store the list
of `bar_id` values. the app would: 1) truncate `baz`, 2) write the list
`bar_ids` into `baz`, 3) select from `foo` joining `baz`. perhaps not a high
performance solution but you could make it scale all the way up.
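the scratch-table idea, sketched in SQL (column names follow your example; the `baz` table itself is hypothetical):

```sql
-- one-time setup
CREATE TABLE baz (bar_id INT UNSIGNED NOT NULL, PRIMARY KEY (bar_id));

-- per request:
-- 1) empty the scratch table
TRUNCATE baz;
-- 2) load the arbitrary id list (the app generates this VALUES list)
INSERT INTO baz (bar_id) VALUES (1), (2), (3);
-- 3) select via a join instead of a long IN() list
SELECT f.*
FROM foo f
JOIN baz b ON b.bar_id = f.bar_id;
```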


On 1/15/10 12:15 PM, "kabel"  wrote:

> I'm facing a refactor of some slow code that engages the DB pretty heavily.
> We're looking for a way to load arbitrary sets of rows from a table using an
> indexed column of decently high cardinality, and I'm not sure if IN() is a
> good way to do it or not.
> 
> Basically, we're looking at this:
> 
> CREATE TABLE `foo` (
>   `foo_id` INT UNSIGNED NOT NULL AUTO_INCREMENT
> , `bar_id` INT UNSIGNED NOT NULL
> , `unimportant_1` VARCHAR(8)
> , `unimportant_2` VARCHAR(8)
> , `unimportant_3` VARCHAR(8)
> , PRIMARY KEY( `foo_id` )
> , KEY `idx_bar_id` ( `bar_id` )
> );
> 
> I need to load an arbitrary list of foos by their bar ID (1:N), and I'm not
> sure the best way to do it.  Joining against the `bar` table is,
> unfortunately, not really helpful since the set loaded is totally arbitrary.
> 
> As far as I can tell, the constant list would be ~100  elements at most, but
> take that with a grain of salt.
> 
> Right now, I'm considering:
> 
> SELECT *
> FROM `foo`
> WHERE `bar_id` IN( 1,2,3,4,5,6,7, ... ,61,62 );
> 
> I've done some testing and the performance doesn't seem to be too bad, but I'm
> worried about scalability.
> 
> The query execution isn't too bad.
> 
> |  1 | SIMPLE  | foo | range | idx_bar_id   | idx_bar_id | 5   | NULL
> |7 | Using where |
> 
> Does anyone have experience with the performance of IN(), given a list of
> constants, on an indexed column?  Any pointers to more information on this
> would be greatly appreciated.
> 
> Thanks in advance!
> 
> kabel






Re: Better that `NOT IN`

2010-01-15 Thread fsb
On 1/15/10 12:01 AM, "Junior Ortis"  wrote:

> Hi guys i have a problem, 3 big tables: item_instance about 15KK rows,
> character_inventory 15KK rows, guild_bank_item 2KK rows.
> 
> And i need i clean on item_instance how this query:
> 
> DELETE FROM `item_instance` WHERE guid NOT IN(SELECT item FROM
> `character_inventory`) AND guid NOT IN(SELECT item_guid FROM
> `guild_bank_item`) AND
> guid NOT IN(SELECT item_guid FROM `mail_items`) and guid NOT IN(SELECT
> itemguid FROM `auctionhouse`);
> 
> Well atm is running about 13 hours, State = Sending Data.
> 
> I will be a better option ?

would something like this work?

DELETE i FROM item_instance i
LEFT JOIN character_inventory c ON c.item = i.guid
LEFT JOIN guild_bank_item g ON g.item_guid = i.guid
LEFT JOIN mail_items m ON m.item_guid = i.guid
LEFT JOIN auctionhouse a ON a.itemguid = i.guid
WHERE c.item IS NULL
  AND g.item_guid IS NULL
  AND m.item_guid IS NULL
  AND a.itemguid IS NULL;






Re: upgrading mysql

2010-01-13 Thread fsb
On 1/13/10 2:28 PM, "Lawrence Sorrillo"  wrote:

> The issue is that in theory this should work given the facts announced
> by MySQL regarding binary logging and replication.
> I can certainly do it the way you propose, but to my mind I should also
> be able to do it using the fact that both machines are fully synced and
> hence at
> that point I should be able to to local respective dumps and restores
> and still be in sync.

i can't point at anything in your recipe and say that it doesn't work. it
might work. i'd be nervous that something in steps 5 and 6 might involve a
change on the master that needs to be replicated. since you're using a dump
and not a binary copy of the myisam files, i suppose this ought to be safe.
but i would be nervous all the same.

on the other hand, i do know that the recipe i gave works because i've used
it often. it also has the virtue of no need for "recording binlog file names
and position etc etc". plus it's the procedure recommended by the mysql folk
themselves, which is worth something to me.

the other thing i've done is:

initial status: A is the master and B is the slave. service is operating off
the master.

1 stop B, upgrade it, restart it, let it catch up.

2 stop service and then stop A

3 change B's conf file to make it the master. restart it

4 resume service using B

5 upgrade A and bring it online as a slave

this has the virtue of very short service outage. with some rehearsal, it
isn't beyond my skills.
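step 3 of that swap, sketched as the statements i'd expect to run on B (hedged: the exact sequence depends on your versions and configs):

```sql
-- on B, once it has caught up and A is stopped:
STOP SLAVE;
RESET SLAVE;   -- forget the old master coordinates
-- make sure log-bin is enabled in B's my.cnf and restart B;
-- later, point the upgraded A at B with CHANGE MASTER TO ...
-- and START SLAVE to bring A back as the new slave.
```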


> Anyone knows anything special about position 106? It seems to be the
> very initial position in MySQL 5.1 servers?

the manual says:

"If the master has been running previously without binary logging enabled,
the log name and position values displayed by SHOW MASTER STATUS or
mysqldump --master-data will be empty. In that case, the values that you
need to use later when specifying the slave's log file and position are the
empty string ('') and 4."

perhaps you have an init-file that advances it to position 106?



> mysql> show master status;
> +---+--+--+--+
> | File  | Position | Binlog_Do_DB | Binlog_Ignore_DB |
> +---+--+--+--+
> | X-bin.01 |  106 |  |  |
> +---+--+--+--+
> 1 row in set (0.00 sec)
> 
> 
> 
> r...@:/usr/local/mysql/data ] /usr/local/mysql/bin/mysqlbinlog
> mssdb2-bin.01
> /*!40019 SET @@session.max_insert_delayed_threads=0*/;
> /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
> DELIMITER /*!*/;
> # at 4
> #100113 13:50:40 server id 5  end_log_pos 106   Start: binlog v 4,
> server v 5.1.42-log created 100113 13:50:40 at startup
> # Warning: this binlog is either in use or was not closed properly.
> ROLLBACK/*!*/;
> BINLOG '
> ABZOSw8FZgAAAGoBAAQANS4xLjQyLWxvZwAA
> Fk5LEzgNAAgAEgAEBAQEEgAAUwAEGggICAgC
> '/*!*/;
> DELIMITER ;
> # End of log file
> ROLLBACK /* added by mysqlbinlog */;
> /*!50003 SET completion_ty...@old_completion_type*/;
> r...@:/usr/local/mysql/data ]
> 
> ~Lawrence
> 
> 
> 
> 
> Tom Worster wrote:
>> Frankly, I didn't entirely understand what you were proposing. I got lost
>> around step 6.
>> 
>> Is the issue total time for the procedure or service downtime?
>> 
>> 
>> On 1/12/10 12:58 PM, "Lawrence Sorrillo"  wrote:
>> 
>>   
>>> This is two upgrades done in sequence(the reload takes about three hours
>>> per machine) . I can do what I am proposing in parallel.
>>> 
>>> Do you see it as problematic?
>>> 
>>> ~Lawrence
>>> 
>>> 
>>> Tom Worster wrote:
>>> 
>>>> How about:
>>>>
>>>> 1 shut down the slave, upgrade it, restart it, let it catch up.
>>>>
>>>> 2 shut down the master, upgrade it, restart it, let the slave catch up.
>>>>
>>>> ?
>>>>
>>>> On 1/12/10 12:34 PM, "Lawrence Sorrillo"  wrote:
>>>>
>>>>> Hi:
>>>>>
>>>>> I want to upgrade a master and slave server from mysql 4.1 to mysql 5.1.
>>>>>
>>>>> I want to do something like follows:
>>>>>
>>>>> 1. Stop all write access to the master server.
>>>>> 2. Ensure that replication on the slave is caught up to the last change
>>>>> on the master.
>>>>> 3. stop binary logging on the master.
>>>>> 4. stop replication on the slave.
>>>>> 5. dump the master, stop old 4.1 server, start new 5.1 server and reload
>>>>> master dump file under 5.1 server (binary logging is turned off)
>>>>> 6. dump the slave, stop old 4.1 server, start new 5.1 server and reload
>>>>> slave dump file under 5.1 server.
>>>>> 7. After loading is complete, test then start binary logging on master
>>>>> while still preventing updates.
>>>>> 8. After loading slave, test then start slave (get configs in place and
>>>>> restart server).
>>>>>
>>>>> I am thinking that in this scenario I dont have to bother with recording
>>>>> binlog file na