Re: console input

2012-06-15 Thread Ananda Kumar
Did you try using the IGNORE keyword with the LOAD DATA INFILE command?
This will skip duplicate rows instead of aborting, and proceed with the rest of the file.

On Fri, Jun 15, 2012 at 11:05 AM, Keith Keller 
kkel...@wombat.san-francisco.ca.us wrote:

 On 2012-06-14, Gary Aitken my...@dreamchaser.org wrote:
 
  So...  I wanted to read the data line at a time and use a plain INSERT
 statement.  That way I could check for duplicate keys and discover where
 the duplicate records are.  However, I can't find a way to read input from
 the console or a file.  What am I missing?  I know I could write a java or
 C++ program to do this, but it seems like overkill for what should be a
 trivial task.

 I don't know of any stock mysql program that does such a thing.  You
 could write a Perl or Python program for this task; this program would
 probably be fairly short.
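 Such a program can indeed be fairly short. A minimal Python sketch (names
 and file layout are illustrative; it assumes a tab-delimited file whose
 first column is the primary key):

```python
from collections import defaultdict

def find_duplicate_keys(path, key_col=0, sep="\t"):
    """Return {key: [line numbers]} for every key appearing more than once."""
    seen = defaultdict(list)
    with open(path) as fh:
        for lineno, line in enumerate(fh, start=1):
            key = line.rstrip("\n").split(sep)[key_col]
            seen[key].append(lineno)
    # Keep only the keys that collide, with every line they appear on.
    return {k: nums for k, nums in seen.items() if len(nums) > 1}
```

 Running it over the input file before LOAD DATA pinpoints exactly which
 lines collide.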

 --keith


 --
 kkel...@wombat.san-francisco.ca.us



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql




Re: console input

2012-06-15 Thread Johan De Meersman
- Original Message -
 From: Gary Aitken my...@dreamchaser.org
 
 surprising as the source did not enforce uniqueness.  My problem is
 the load data simply dies without indicating which line of the input
 file was in error; the error message refers to line 3, which is not
 even the SQL statement for the LOAD DATA INTO statement:

Would it not refer to line 3 of the datafile? Not sure, just guessing.

 So...  I wanted to read the data line at a time and use a plain
 INSERT statement.  That way I could check for duplicate keys and
 discover where the duplicate records are.  However, I can't find a
 way to read input from the console or a file.  What am I missing?  I
 know I could write a java or C++ program to do this, but it seems
 like overkill for what should be a trivial task.

Yeah, that would be overkill :-p

You could easily use sed and awk to transform the input file into a list of SQL 
statements. Another solution is to disable all keys on your target table - or 
create a duplicate table without any keys - and after the import run a select (unique 
key fields) group by (unique key fields) having count(*) > 1.
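The sed/awk transform could equally be sketched in Python (illustrative only:
it assumes tab-delimited input and quotes every field as a string literal):

```python
def tab_to_inserts(path, table, columns, sep="\t"):
    """Turn each tab-delimited data line into a single-row INSERT statement."""
    stmts = []
    with open(path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split(sep)
            # Escape backslashes first, then single quotes, as MySQL expects.
            quoted = ", ".join(
                "'" + f.replace("\\", "\\\\").replace("'", "\\'") + "'"
                for f in fields)
            stmts.append("INSERT INTO %s (%s) VALUES (%s);"
                         % (table, ", ".join(columns), quoted))
    return stmts
```

Feeding the generated statements to the mysql client one at a time makes each
duplicate-key error point at an identifiable row.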


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




Re: Foreign key and uppercase / lowercase values

2012-06-15 Thread GF
I think the following might give complete information (I removed some
columns not involved in the problem)

Server version: 5.1.49-3 (Debian)

SET collation_connection = utf8_unicode_ci;
Query OK, 0 rows affected (0.00 sec)



show variables like '%colla%';
+----------------------+-----------------+
| Variable_name        | Value           |
+----------------------+-----------------+
| collation_connection | utf8_unicode_ci |
| collation_database   | utf8_unicode_ci |
| collation_server     | utf8_unicode_ci |
+----------------------+-----------------+
3 rows in set (0.00 sec)

show variables like '%char%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8                       |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)


select USER_ID from TBL_USER where USER_ID = 'GIUSEPPE';
+----------+
| USER_ID  |
+----------+
| GIUSEPPE |
+----------+
1 row in set (0.00 sec)

select USER_ID from TBL_USER where USER_ID = 'giuseppe';
+----------+
| USER_ID  |
+----------+
| GIUSEPPE |
+----------+
1 row in set (0.00 sec)


update TBL_USER set USER_ID = LOWER(USER_ID) where USER_ID = 'GIUSEPPE';
ERROR 1451 (23000): Cannot delete or update a parent row: a foreign
key constraint fails (`myschema`.`TBL_COMMENT`, CONSTRAINT
`FK4F6E52581590B46E` FOREIGN KEY (`USER_ID`) REFERENCES `TBL_USER`
(`USER_ID`))



| TBL_USER | CREATE TABLE `TBL_USER` (
  `USER_ID` varchar(50) COLLATE utf8_unicode_ci NOT NULL,

   cut 
  PRIMARY KEY (`USER_ID`),
   cut 
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci |



| TBL_COMMENT | CREATE TABLE `TBL_COMMENT` (
   cut 
  `ID` bigint(20) NOT NULL AUTO_INCREMENT,
   cut 
  `USER_ID` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
   cut 
  PRIMARY KEY (`ID`),
   cut 
  KEY `FK4F6E52581590B46E` (`USER_ID`),
   cut 
  CONSTRAINT `FK4F6E52581590B46E` FOREIGN KEY (`USER_ID`) REFERENCES
`TBL_USER` (`USER_ID`),
   cut 
) ENGINE=InnoDB AUTO_INCREMENT=7876 DEFAULT CHARSET=utf8
COLLATE=utf8_unicode_ci |




Which Database when lot of insert / update queries to execute

2012-06-15 Thread abhishek jain
hi,
I am biased towards mysql, and hence I am asking this on the mysql forum first.
I am designing a solution which will need me to import from CSV; I am using
my JAVA code to parse. The CSV file has 500K rows, and I need to do it thrice
an hour, for 10 hours a day.
The queries will mainly be updates, but selects and inserts also at times.
The database size is estimated to be about 5GB.
I need to know: is this a classic case for a NOSQL database, or is mysql a
good option?

Also, if I need to do 'group by' on a column of a large table, what should
I keep in mind? Is it advisable?

Please advise,

-- 
Thanks and kind Regards,
Abhishek jain


Re: Which Database when lot of insert / update queries to execute

2012-06-15 Thread Tsubasa Tanaka
Hello,

 I am designing a solution which will need me to import from CSV, i am using
 my JAVA code to parse. CSV file has 500K rows, and i need to do it thrice
 an hour, for 10 hours  a day.
try to use `LOAD DATA INFILE' to import from CSV file.

http://dev.mysql.com/doc/refman/5.5/en/load-data.html


 Also , if i need to do 'group by', on a column on a large table what should
 i keep in mind, is it advisable,

Create an index on the columns used by 'group by' and the columns used by
aggregate functions.
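As a stand-in illustration (using SQLite's stdlib bindings rather than MySQL;
table and column names are invented, but the principle carries over), an index
covering both the GROUP BY column and the aggregated column can let the
grouping be resolved from the index alone:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("a", 10.0), ("a", 5.0), ("b", 7.5)])
# The composite index covers the GROUP BY column and the aggregated column.
con.execute("CREATE INDEX idx_customer_amount ON orders (customer, amount)")
rows = con.execute("SELECT customer, SUM(amount) FROM orders "
                   "GROUP BY customer ORDER BY customer").fetchall()
# rows == [('a', 15.0), ('b', 7.5)]
```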


regards,

2012/6/15 abhishek jain abhishek.netj...@gmail.com:
 hi,
 I am biased on mysql, and hence i am asking this on mysql forum first.
 I am designing a solution which will need me to import from CSV, i am using
 my JAVA code to parse. CSV file has 500K rows, and i need to do it thrice
 an hour, for 10 hours  a day.
 The Queries will mainly be update but select and insert also at times,
 The database size will be estimated to be about 5GB.
 I need to know is this a classic case for a NOSQL database or mysql is a
 good option.

 Also , if i need to do 'group by', on a column on a large table what should
 i keep in mind, is it advisable,

 Please advice,

 --
 Thanks and kind Regards,
 Abhishek jain




Re: console input

2012-06-15 Thread Shawn Green

On 6/14/2012 5:57 PM, Gary Aitken wrote:

Hi all,

I've looked high and low for what I hope is a trivial answer.

I was trying to load a table using LOAD DATA INFILE.  Unfortunately, it craps 
out because there are some duplicate primary keys.  Not surprising as the 
source did not enforce uniqueness.  My problem is the load data simply dies 
without indicating which line of the input file was in error; the error message 
refers to line 3, which is not even the SQL statement for the LOAD DATA INTO 
statement:

I can get the table loaded by specifying REPLACE INTO TABLE, but that still 
leaves me with not knowing where the duplicate records are.

So...  I wanted to read the data line at a time and use a plain INSERT 
statement.  That way I could check for duplicate keys and discover where the 
duplicate records are.  However, I can't find a way to read input from the 
console or a file.  What am I missing?  I know I could write a java or C++ 
program to do this, but it seems like overkill for what should be a trivial 
task.

Thanks for any pointers,

Gary



The trivial thing I do to solve this problem is to create a copy of the 
destination table without any PRIMARY KEY or UNIQUE constraints on it. 
This gives you an empty space to which you can bulk import your raw 
data. I am not sure if there is an official term for this but I call it 
a 'landing table'. This is the first step of the import process.


Once you can get your data off of disk and onto the landing table (it's 
where the raw import lands inside the database) you can check it for 
duplicates very easily.


1) create a normal index for the PRIMARY KEY column
2) create another table that has a list of duplicates, like this

CREATE TABLE dup_list ENGINE=MYISAM SELECT pkcol, count(*) freq FROM 
landing GROUP BY pkcol HAVING freq > 1;


notes:
* use a MyISAM table for this preparation work even if the destination 
table is using the InnoDB storage engine; you really do not need to 
protect this data with a transaction yet.

* MyISAM indexes are also very fast for count(*) queries.

You have clearly identified all duplicate rows in the incoming data. You 
can also compare those rows with your existing rows to see if any of 
them duplicate each other (hint: INNER JOIN is your friend) or if any 
exist in one table but not the other (hint: LEFT JOIN).
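Shawn's landing-table workflow can be sketched end to end with SQLite's stdlib
bindings (table and column names are invented for illustration; a real run
would use the MySQL tables instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Landing table: same shape as the destination but no PRIMARY KEY or UNIQUE
# constraint, so the raw bulk import always succeeds.
con.execute("CREATE TABLE landing (pkcol INTEGER, val TEXT)")
con.executemany("INSERT INTO landing VALUES (?, ?)",
                [(1, "a"), (2, "b"), (2, "b2"), (3, "c")])
con.execute("CREATE TABLE dest (pkcol INTEGER PRIMARY KEY, val TEXT)")
con.execute("INSERT INTO dest VALUES (3, 'old-c')")

# Step 1: duplicates inside the incoming data itself.
dups = con.execute("SELECT pkcol, COUNT(*) freq FROM landing "
                   "GROUP BY pkcol HAVING freq > 1").fetchall()
# Step 2: incoming rows whose key already exists in the destination
# (the INNER JOIN hint from above).
clashes = con.execute("SELECT l.pkcol FROM landing l "
                      "JOIN dest d ON d.pkcol = l.pkcol").fetchall()
# dups == [(2, 2)], clashes == [(3,)]
```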


From here you should be able to cleanse the incoming data (remove 
duplicates, adjust any weird fields) and merge it with your existing 
data to maintain the relational and logical integrity of your tables.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN






MySQL Data Recovery on Linux from files recovered to lost+found

2012-06-15 Thread Traiano Welcome
Hi List

 I have (had) a mysql database running on a linux server which crashed
and suffered e2fsck file system corruption. I applied the e2fsck
filesystem checker, which recovered what appears to be most  of the
files comprising the data, storing them in the  lost+found
directory. This looks something like:

[root@vm-rec lost+found]# ls -l | head

-rw-r--r--  1 root  root   163 Jul 16  2009 #1442435
-rw-------  1 root  root  1753 Mar 27  2011 #1442436
-rw-------  1 root  root   481 Jul 12  2011 #1442437
-rw-r--r--  1 root  root    47 Jul 12  2011 #1442438
-rw-r-----  1 root  root   646 Aug 21  2010 #1442439
-rw-r--r--  1 root  root   486 Nov 12  2010 #1442441

So, the names of these files are lost and we are left only with their
contents, which seem intact. I can distinguish all the files that
would have been part of a MySQL DB; e.g., if I use the linux file
utility on each file, I get:

---
.
.
.

#2474833: MySQL table definition file Version 10

#2474834: MySQL MISAM compressed data file Version 1
#2474836: MySQL table definition file Version 10
#2474839: MySQL MISAM compressed data file Version 1
#2474841: MySQL table definition file Version 10
#2474842: MySQL MISAM compressed data file Version 1
.
.
.
---

... etc ...

Also, if I extract the ascii content of one of the MySQL table
definition files using the linux strings utility, I seem to be able to
distinguish the MySQL schema of a table in the database, e.g.:


---
[root@vm-rec quarrantine]# strings \#2475839
PRIMARY
FK3AB9A8B2CDB30B3D
InnoDB
)
timeslot_id
attendee_user_id
comments
signup_site_id
calendar_event_id
calendar_id
list_index
timeslot_id
attendee_user_id
comments
signup_site_id
calendar_event_id
calendar_id
list_index
---


Using these strings as keywords, searching through a copy of the
original schema which I have reveals that the file seems to contain
exactly the schema of a known table:


---
CREATE TABLE `signup_ts_attendees` (
  `timeslot_id` bigint(20) NOT NULL,
  `attendee_user_id` varchar(255) NOT NULL,
  `comments` text,
  `signup_site_id` varchar(255) NOT NULL,
  `calendar_event_id` varchar(255) DEFAULT NULL,
  `calendar_id` varchar(255) DEFAULT NULL,
  `list_index` int(11) NOT NULL,
  PRIMARY KEY (`timeslot_id`,`list_index`),
  KEY `FKBAB08100CDB30B3D` (`timeslot_id`),
  CONSTRAINT `FKBAB08100CDB30B3D` FOREIGN KEY (`timeslot_id`)
REFERENCES `signup_ts` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;
---

So it would seem that there might be some way to map these files back
to a format with which I can reconstruct the original database ...

My question is: Does any one have any recommendations of a process I
could use to do this effectively ?
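One hedged sketch of a first step - sorting the anonymous files back into
.frm/.MYD/.MYI candidates from their file(1) descriptions - might look like
this. The mapping below is an assumption inferred from the output shown
above, not an official rule, and `guess_extension` is a hypothetical helper:

```python
def guess_extension(file_description):
    """Map a file(1) description of a recovered file to a likely MySQL
    file extension (assumption: descriptions as in the listing above)."""
    desc = file_description.lower()
    if "table definition" in desc:
        return ".frm"   # table structure
    if "compressed data" in desc:
        return ".MYD"   # MyISAM data rows
    if "index file" in desc:
        return ".MYI"   # MyISAM indexes
    return None         # not recognizably a MySQL file
```

After renaming the recovered files with plausible names and extensions under
a fresh datadir, tools such as myisamchk can then be tried on the MyISAM
pieces; InnoDB tables would need a different recovery path.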


Many Thanks in advance
Traiano Welcome

---




Re: console input

2012-06-15 Thread Gary Aitken
Thanks, Shawn; I knew there was a better way to go about that.

Gary




Re: console input

2012-06-15 Thread Keith Keller
On 2012-06-14, Gary Aitken my...@dreamchaser.org wrote:

 So...  I wanted to read the data line at a time and use a plain INSERT 
 statement.  That way I could check for duplicate keys and discover where the 
 duplicate records are.  However, I can't find a way to read input from the 
 console or a file.  What am I missing?  I know I could write a java or C++ 
 program to do this, but it seems like overkill for what should be a trivial 
 task.

Actually, what might make more sense is to use un*x command line tools
to help.  Use cut to extract only the columns of your primary key, use
sort (not with -u!) to sort the rows, and use uniq -d to print out only
duplicate lines.  e.g.

cut -f1 mytable.tab | sort | uniq -d

It won't tell you which line number(s) are problematic, however; it just
tells you which values for your PK are duplicated.  If you need the line
number use one of the other suggestions.

--keith

-- 
kkel...@wombat.san-francisco.ca.us






RE: Foreign key and uppercase / lowercase values

2012-06-15 Thread Rick James
You are very close to a standalone test case.  Please create such.  Then post 
it on bugs.mysql.com .

 -Original Message-
 From: GF [mailto:gan...@gmail.com]
 Sent: Friday, June 15, 2012 12:45 AM
 To: Rick James
 Cc: Shawn Green; mysql@lists.mysql.com
 Subject: Re: Foreign key and uppercase / lowercase values
 
 I think the following might give complete information (I removed some
 columns not involved in the problem)
 
 Server version: 5.1.49-3 (Debian)
 
 SET collation_connection = utf8_unicode_ci; Query OK, 0 rows affected
 (0.00 sec)
 
 
 
 show variables like '%colla%';
 +----------------------+-----------------+
 | Variable_name        | Value           |
 +----------------------+-----------------+
 | collation_connection | utf8_unicode_ci |
 | collation_database   | utf8_unicode_ci |
 | collation_server     | utf8_unicode_ci |
 +----------------------+-----------------+
 3 rows in set (0.00 sec)
 
 show variables like '%char%';
 +--------------------------+----------------------------+
 | Variable_name            | Value                      |
 +--------------------------+----------------------------+
 | character_set_client     | utf8                       |
 | character_set_connection | utf8                       |
 | character_set_database   | utf8                       |
 | character_set_filesystem | binary                     |
 | character_set_results    | utf8                       |
 | character_set_server     | utf8                       |
 | character_set_system     | utf8                       |
 | character_sets_dir       | /usr/share/mysql/charsets/ |
 +--------------------------+----------------------------+
 8 rows in set (0.00 sec)
 
 
 select USER_ID from TBL_USER where USER_ID = 'GIUSEPPE';
 +----------+
 | USER_ID  |
 +----------+
 | GIUSEPPE |
 +----------+
 1 row in set (0.00 sec)
 
  select USER_ID from TBL_USER where USER_ID = 'giuseppe';
 +----------+
 | USER_ID  |
 +----------+
 | GIUSEPPE |
 +----------+
 1 row in set (0.00 sec)
 
 
 update TBL_USER set USER_ID = LOWER(USER_ID) where USER_ID =
 'GIUSEPPE'; ERROR 1451 (23000): Cannot delete or update a parent row: a
 foreign key constraint fails (`myschema`.`TBL_COMMENT`, CONSTRAINT
 `FK4F6E52581590B46E` FOREIGN KEY (`USER_ID`) REFERENCES `TBL_USER`
 (`USER_ID`))
 
 
 
 | TBL_USER | CREATE TABLE `TBL_USER` (
   `USER_ID` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
 
cut 
   PRIMARY KEY (`USER_ID`),
cut 
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci |
 
 
 
 | TBL_COMMENT | CREATE TABLE `TBL_COMMENT` (
cut 
   `ID` bigint(20) NOT NULL AUTO_INCREMENT,
cut 
   `USER_ID` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
cut 
   PRIMARY KEY (`ID`),
cut 
   KEY `FK4F6E52581590B46E` (`USER_ID`),
cut 
   CONSTRAINT `FK4F6E52581590B46E` FOREIGN KEY (`USER_ID`) REFERENCES
 `TBL_USER` (`USER_ID`),
cut 
 ) ENGINE=InnoDB AUTO_INCREMENT=7876 DEFAULT CHARSET=utf8
 COLLATE=utf8_unicode_ci |




mysqldump not escaping single quotes in field data

2012-06-15 Thread James W. McNeely
My backups from a mysqldump process are useless, because the dump files are not 
escaping single quotes in the data in the fields. 

So, O'Brien kills it - instead of spitting out 
'O\'Brien' 
it spits out 
'O'Brien'

I don't see anywhere in the documentation about mysqldump where you can tweak 
this kind of thing. 

We're using MySQL 5.0.70 enterprise on Ubuntu 8.04.1.

I tried to enter this into the Oracle support center but all of the 
navigational and SR tabs are gone. Maybe our accounting dept. forgot to pay the 
bill or something.
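For comparison, the escaping a correct dump applies to string values can be
sketched in Python (this illustrates the expected output, not mysqldump's
actual code; when generating SQL yourself, parameterized queries avoid the
problem entirely):

```python
def mysql_quote(value):
    """Render a string as a single-quoted MySQL literal, escaping
    backslashes first and then single quotes."""
    return "'" + value.replace("\\", "\\\\").replace("'", "\\'") + "'"

# mysql_quote("O'Brien") == "'O\\'Brien'"  (i.e. the literal 'O\'Brien')
```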

Thanks,

Jim McNeely



Re: mysqldump not escaping single quotes in field data

2012-06-15 Thread Ananda Kumar
I have MySQL 5.5. I am able to use mysqldump to export data containing
quotes, and the dump has the escape character, as seen below:

LOCK TABLES `ananda` WRITE;
/*!40000 ALTER TABLE `ananda` DISABLE KEYS */;
INSERT INTO `ananda` VALUES
(1,'ananda'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(5,'O\'Brien');
/*!40000 ALTER TABLE `ananda` ENABLE KEYS */;
UNLOCK TABLES;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;


Import it back to database

Database changed
mysql> drop table ananda;
Query OK, 0 rows affected (0.00 sec)

mysql --database test < test.dmp

mysql> select * from ananda;
+------+---------+
| id   | name    |
+------+---------+
|    1 | ananda  |
|    2 | aditi   |
|    3 | thims   |
|    2 | aditi   |
|    3 | thims   |
|    2 | aditi   |
|    3 | thims   |
|    2 | aditi   |
|    3 | thims   |
|    2 | aditi   |
|    3 | thims   |
|    5 | O'Brien |
+------+---------+

Maybe you want to upgrade your database.

On Fri, Jun 15, 2012 at 10:48 PM, James W. McNeely
j...@newcenturydata.comwrote:

 My backups from a mysqldump process are useless, because the dump files
 are not escaping single quotes in the data in the fields.

 So, O'Brien kills it - instead of spitting out
 'O\'Brien'
 it spits out
 'O'Brien'

 I don't see anywhere in the documentation about mysqldump where you can
 tweak this kind of thing.

 We're using MySQL 5.0.70 enterprise on Ubuntu 8.04.1.

 I tried to enter this into the Oracle support center but all of the
 navigational and SR tabs are gone. Maybe our accounting dept. forgot to pay
 the bill or something.

 Thanks,

 Jim McNeely




Re: Foreign key and uppercase / lowercase values

2012-06-15 Thread Shawn Green

On 6/15/2012 1:00 PM, Rick James wrote:

You are very close to a standalone test case.  Please create such.  Then post 
it on bugs.mysql.com .


-Original Message-
From: GF [mailto:gan...@gmail.com]
Sent: Friday, June 15, 2012 12:45 AM
To: Rick James
Cc: Shawn Green; mysql@lists.mysql.com
Subject: Re: Foreign key and uppercase / lowercase values

I think the following might give complete information (I removed some
columns not involved in the problem)

Server version: 5.1.49-3 (Debian)

SET collation_connection = utf8_unicode_ci; Query OK, 0 rows affected
...


Before he submits a test case, he should also review
http://bugs.mysql.com/bug.php?id=27877
http://dev.mysql.com/doc/refman/5.5/en/news-5-5-21.html

This has been a well-discussed problem both inside and outside the MySQL 
development processes.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN






RE: Which Database when lot of insert / update queries to execute

2012-06-15 Thread Rick James
Let's see
SHOW CREATE TABLE ...
SELECT ...

It sounds doable with MySQL; might be too big for NOSQL.

 -Original Message-
 From: abhishek jain [mailto:abhishek.netj...@gmail.com]
 Sent: Friday, June 15, 2012 1:57 AM
 To: mysql@lists.mysql.com
 Subject: Which Database when lot of insert / update queries to execute
 
 hi,
 I am biased on mysql, and hence i am asking this on mysql forum first.
 I am designing a solution which will need me to import from CSV, i am
 using my JAVA code to parse. CSV file has 500K rows, and i need to do
 it thrice an hour, for 10 hours  a day.
 The Queries will mainly be update but select and insert also at times,
 The database size will be estimated to be about 5GB.
 I need to know is this a classic case for a NOSQL database or mysql is
 a good option.
 
 Also , if i need to do 'group by', on a column on a large table what
 should i keep in mind, is it advisable,
 
 Please advice,
 
 --
 Thanks and kind Regards,
 Abhishek jain




RE: Foreign key and uppercase / lowercase values

2012-06-15 Thread Rick James
Those refer _only_ to German 'ß' LATIN SMALL LETTER SHARP S.  The example GF 
gave did not involve that character.

To my knowledge, that is the only case where MySQL changed a collation after 
releasing it.

 -Original Message-
 From: Shawn Green [mailto:shawn.l.gr...@oracle.com]
 Sent: Friday, June 15, 2012 12:06 PM
 To: Rick James
 Cc: GF; mysql@lists.mysql.com
 Subject: Re: Foreign key and uppercase / lowercase values
 
 On 6/15/2012 1:00 PM, Rick James wrote:
  You are very close to a standalone test case.  Please create such.
 Then post it on bugs.mysql.com .
 
  -Original Message-
  From: GF [mailto:gan...@gmail.com]
  Sent: Friday, June 15, 2012 12:45 AM
  To: Rick James
  Cc: Shawn Green; mysql@lists.mysql.com
  Subject: Re: Foreign key and uppercase / lowercase values
 
  I think the following might give complete information (I removed
 some
  columns not involved in the problem)
 
  Server version: 5.1.49-3 (Debian)
 
  SET collation_connection = utf8_unicode_ci; Query OK, 0 rows
 affected
  ...
 
 Before he submits a test case, he should also review
 http://bugs.mysql.com/bug.php?id=27877
 http://dev.mysql.com/doc/refman/5.5/en/news-5-5-21.html
 
 This has been a well-discussed problem both inside and outside the
 MySQL development processes.
 
 Regards,
 --
 Shawn Green
 MySQL Principal Technical Support Engineer Oracle USA, Inc. - Hardware
 and Software, Engineered to Work Together.
 Office: Blountville, TN
 
 
 





Re: Foreign key and uppercase / lowercase values

2012-06-15 Thread Shawn Green

On 6/15/2012 3:19 PM, Rick James wrote:

Those refer _only_ to German 'ß' LATIN SMALL LETTER SHARP S.  The example GF 
gave did not involve that character.

To my knowledge, that is the only case where MySQL changed a collation after 
releasing it.



Yes, it has been the only occurrence. However, the Eszett (sharp S) is 
just one example of the alternative-spelling letters that were affected 
by the collation change. Thorn, the AE ligature, and many others fall into 
that same category.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN






RE: mysqldump not escaping single quotes in field data

2012-06-15 Thread Rick James
Are you using an abnormal CHARACTER SET or COLLATION?
SHOW CREATE TABLE
Show us the args to mysqldump.

 -Original Message-
 From: James W. McNeely [mailto:j...@newcenturydata.com]
 Sent: Friday, June 15, 2012 10:19 AM
 To: mysql@lists.mysql.com
 Subject: mysqldump not escaping single quotes in field data
 
 My backups from a mysqldump process are useless, because the dump files
 are not escaping single quotes in the data in the fields.
 
 So, O'Brien kills it - instead of spitting out 'O\'Brien'
 it spits out
 'O'Brien'
 
 I don't see anywhere in the documentation about mysqldump where you can
 tweak this kind of thing.
 
 We're using MySQL 5.0.70 enterprise on Ubuntu 8.04.1.
 
 I tried to enter this into the Oracle support center but all of the
 navigational and SR tabs are gone. Maybe our accounting dept. forgot to
 pay the bill or something.
 
 Thanks,
 
 Jim McNeely





Re: Foreign key and uppercase / lowercase values

2012-06-15 Thread Walter Tross
At 16.40 15/06/2012 -0400, Shawn Green wrote:
On 6/15/2012 3:19 PM, Rick James wrote:
Those refer _only_ to German 'ß' LATIN SMALL LETTER SHARP S.  The example GF 
gave did not involve that character.

To my knowledge, that is the only case where MySQL changed a collation after 
releasing it.

Yes, it has been the only occurrence. However, the Eszett (sharp S) is just one 
example of the alternative-spelling letters that were affected by the 
collation change. Thorn, the AE ligature, and many others fall into that same 
category.

ß = Eszett (which in German is the spelling of SZ, although it originated as a 
ligature of long s and s (U+017F + s); SZ comes from its sharp pronunciation).
The absence of an uppercase equivalent, and ligature behavior more evident 
than for other ligatures (although this has changed with the 1996 reform of 
German orthography), have caused more than a headache to people dealing with 
charsets and collations.
Speaking of collations, I found this website useful (especially when I had to 
compare collations of different RDBMSs):
http://www.collation-charts.org/
Sorry for being OT, but every now and then it's worthwhile to share also some 
OT knowledge.
Walter Tross



Regards,
-- 
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN








RE: console input - finding duplicate entries

2012-06-15 Thread Daevid Vincent
 -Original Message-
 From: Gary Aitken [mailto:my...@dreamchaser.org]
 Sent: Thursday, June 14, 2012 2:58 PM
 
 I can get the table loaded by specifying REPLACE INTO TABLE, but that still
 leaves me with not knowing where the duplicate records are.

To find duplicate entries

select dupe_column, count(*) as n from mytable group by dupe_column having n > 1;

or

select n, m, count(*) as c from foo group by n, m having c > 1;

Here's one solution that will find the older duplicate(s):

SELECT prod.prod_num,prod.prod_title,prod.updated FROM prod
JOIN (
 SELECT prod_title,max(updated) maxdate
 FROM prod GROUP BY prod_title 
 ) 
 AS proda
 ON prod.prod_title = proda.prod_title 
AND 
prod.updated != proda.maxdate

A simple and fast way is via an exclusion join

delete t1
from tbl t1
left join tbl t2 on t1.value=t2.value and t1.id > t2.id
where t2.id is not null;

Deleting duplicate rows via temporary table

DROP TABLE IF EXISTS `dupes`;
CREATE TEMPORARY TABLE dupes 
  SELECT LogID FROM buglog GROUP BY BID, TS HAVING count(*) > 1 ORDER BY BID;
ALTER TABLE `dupes` ADD INDEX `LogID` ( `LogID` ); #// This index is critical.
DELETE FROM buglog WHERE LogID IN (SELECT LogID FROM dupes);
DROP TABLE IF EXISTS `dupes`;
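The exclusion-join idea can be tried out with SQLite's stdlib bindings
(SQLite lacks MySQL's multi-table DELETE syntax, so the same self-join is
expressed through a subquery; table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY, value TEXT)")
con.executemany("INSERT INTO tbl VALUES (?, ?)",
                [(1, "x"), (2, "x"), (3, "y")])
# Delete every row for which an older row (smaller id) with the same value
# exists, keeping the oldest copy of each value.
con.execute("""
    DELETE FROM tbl WHERE id IN (
        SELECT t1.id FROM tbl t1
        JOIN tbl t2 ON t1.value = t2.value AND t1.id > t2.id)""")
rows = con.execute("SELECT id, value FROM tbl ORDER BY id").fetchall()
# rows == [(1, 'x'), (3, 'y')]
```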





Re: Which Database when lot of insert / update queries to execute

2012-06-15 Thread hsv
 2012/06/15 18:14 +0900, Tsubasa Tanaka 
try to use `LOAD DATA INFILE' to import from CSV file.

http://dev.mysql.com/doc/refman/5.5/en/load-data.html

Try is the operative word: MySQL's character format is _like_ CSV, but not 
the same. The treatment of NULL is doubtless the biggest stumbling block.
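A minimal sketch of such a pre-processing step in Python, assuming empty CSV
fields should become NULL and that fields contain no embedded tabs or
newlines (real CSV may, so this is illustrative, not robust):

```python
import csv, io

def csv_to_load_data(src, dst):
    r"""Rewrite CSV as tab-separated text suitable for LOAD DATA INFILE,
    turning empty fields into \N, MySQL's NULL marker."""
    for row in csv.reader(src):
        out = [field if field != "" else r"\N" for field in row]
        dst.write("\t".join(out) + "\n")

src = io.StringIO("1,alice,\n2,,bob\n")
dst = io.StringIO()
csv_to_load_data(src, dst)
# dst.getvalue() == '1\talice\t\\N\n2\t\\N\tbob\n'
```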

