Re: Minor collation question

2012-06-13 Thread Johan De Meersman
- Original Message -
 From: Shawn Green shawn.l.gr...@oracle.com

 The other tactical move has been to jump 5.1 and upgrade directly
 from 5.0 to 5.5 where that problem is resolved.  Sure, it's a bit more
 work (full dump/restore is highly recommended) but it avoids the collation
 bug which exists in all 5.1.x releases.

Ah! Everything I found in the bug tracker pointed towards an official attitude 
of "No, that was a bug in 5.0 that got fixed in 5.1. Works as designed, will 
not fix."

Jumping straight to 5.5 is a viable path, then, so I'll see with the Unix boys 
whether they're willing to go that route. Thank you for that information.

 Yes, collations are used for equality and inequality comparisons,
 too, not just sorting. That's why having alternate spellings, like the
 words strasse and straße, will collide within a PK in 5.1 where they
 will not for 5.0 or 5.5 (with the appropriate collation).

Yes, although here it's colloquially known as the Scheiße bug :-p
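
For anyone who wants to check how a particular server treats that pair, a quick 
test along these lines (assuming a utf8 connection character set) shows whether 
the two spellings compare equal under a given collation:

SELECT 'strasse' = 'straße' COLLATE utf8_unicode_ci AS same_value;
-- 1 means the two spellings collide (as they do on the affected 5.1 releases),
-- 0 means they are treated as distinct values.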



--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: i need advice on redundancy of mysql server.

2012-06-13 Thread Johan De Meersman
- Original Message -
 From: Joey L mjh2...@gmail.com
 
 It sounds like you are all consultants.

Hehe. I'm not :-p

A lot are, though, because the combined technical knowledge on this list draws 
in consultants looking for stuff, and having experienced consultants on the 
list in turn heightens the combined technical knowledge again.


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Aborted clients

2012-06-13 Thread Johan De Meersman

- Original Message -
 From: Claudio Nanni claudio.na...@gmail.com
 
 @Johan, you say I'm having trouble with clients aborting, but for
 some reason they don't get logged.

Ah, it *did* start logging, now, and they come from multiple applications, too.

120612 12:19:09 [Warning] Aborted connection 13019149 to db: 'music' user: 
'music' host: 'viaprod1' (Got an error reading communication packets)
120612 13:13:52 [Warning] Aborted connection 13020111 to db: 'epg' user: 'epg' 
host: 'viaprod1' (Got timeout reading communication packets)
120612 14:21:10 [Warning] Aborted connection 13021624 to db: 'music' user: 
'music' host: 'viaprod1' (Got an error reading communication packets)

Am I wrong in thinking this looks more like a hardware/network problem?


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Aborted clients

2012-06-13 Thread Claudio Nanni
2012/6/13 Johan De Meersman vegiv...@tuxera.be


 - Original Message -
  From: Claudio Nanni claudio.na...@gmail.com
 
  @Johan, you say I'm having trouble with clients aborting, but for
  some reason they don't get logged.

 Ah, it *did* start logging, now, and they come from multiple applications,
 too.

 120612 12:19:09 [Warning] Aborted connection 13019149 to db: 'music' user:
 'music' host: 'viaprod1' (Got an error reading communication packets)
 120612 13:13:52 [Warning] Aborted connection 13020111 to db: 'epg' user:
 'epg' host: 'viaprod1' (Got timeout reading communication packets)
 120612 14:21:10 [Warning] Aborted connection 13021624 to db: 'music' user:
 'music' host: 'viaprod1' (Got an error reading communication packets)

 Am I wrong in thinking this looks more like a hardware/network problem?



Not at all.
Just to completely close the 'code' option: are you sure the codebase is
completely different, since the connections still come from the same host?
That way we can totally rule out bad coding habits.

The network can certainly be a problem, usually when there are firewalls in
between; also, when I had similar problems, a network change had taken place,
like changing switches or some configuration.

Can you count the hops between MySQL and the app server?
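
It can also be worth a quick look at whether the aborts line up with the 
server's own limits, something like:

SHOW GLOBAL VARIABLES LIKE '%timeout%';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
SHOW GLOBAL STATUS LIKE 'Aborted%';
-- Steadily climbing Aborted_clients together with 'Got timeout reading
-- communication packets' can simply mean clients idling past wait_timeout
-- rather than a real network fault.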

Dank Je Wel ;)

Claudio






 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




-- 
Claudio


Re: Aborted clients

2012-06-13 Thread Ananda Kumar
Is the iptables service running on the DB server? If yes, try stopping it and
check again.

On Wed, Jun 13, 2012 at 5:04 PM, Claudio Nanni claudio.na...@gmail.com wrote:

 2012/6/13 Johan De Meersman vegiv...@tuxera.be

 
  - Original Message -
   From: Claudio Nanni claudio.na...@gmail.com
  
   @Johan, you say I'm having trouble with clients aborting, but for
   some reason they don't get logged.
 
  Ah, it *did* start logging, now, and they come from multiple
 applications,
  too.
 
  120612 12:19:09 [Warning] Aborted connection 13019149 to db: 'music'
 user:
  'music' host: 'viaprod1' (Got an error reading communication packets)
  120612 13:13:52 [Warning] Aborted connection 13020111 to db: 'epg' user:
  'epg' host: 'viaprod1' (Got timeout reading communication packets)
  120612 14:21:10 [Warning] Aborted connection 13021624 to db: 'music'
 user:
  'music' host: 'viaprod1' (Got an error reading communication packets)
 
  Am I wrong in thinking this looks more like a hardware/network problem?
 
 
 
 Not at all.
 Just to close completely the code 'option', are you sure the codebase is
 completely different? since they still come from the same host.
 In this way so we can totally exclude code 'bad' habit.

 Then network can be a problem for sure, usually when there are firewalls in
 between,
 also when I had similar problems a network change took place, like changing
 switches or some configuration.

 Can you count the hops between MySQL and the app server?

 Dank Je Wel ;)

 Claudio






  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 



 --
 Claudio



Re: Aborted clients

2012-06-13 Thread Johan De Meersman
- Original Message -

 From: Claudio Nanni claudio.na...@gmail.com

Sigh. Because the application gets unstable when the connection falters, the 
Unix boys have a kill-and-restart script in place - so any number of the 
messages in the log may be due to that. Don't you love these complex 
environments :-) 

/me is off to correlate MySQL and killscript logs. Thank Darwin's beard for NTP 
synchronisation. 

-- 

Bier met grenadyn 
Is als mosterd by den wyn 
Sy die't drinkt, is eene kwezel 
Hy die't drinkt, is ras een ezel 


NoSQL help

2012-06-13 Thread Manivannan S.
Hi all,

I am using MySQL 5.1, and I am inserting about 5GB of data into my database 
every two days. I am trying to generate a report by processing the data that 
accumulates in my database. Our clients are planning to keep the records for 
60 days, by which point there will be more than 150GB of data. To generate a 
report I have to use all of this accumulated 150GB of data. I have done all 
kinds of optimizations in my procedure and I have tuned my MySQL server 
parameters as well. But with MySQL, generating reports over this amount of 
data within a short time is not possible.

I have read about the concept of NoSQL and I am planning to apply it to my 
database.

Does anyone have any experience with NoSQL, especially MongoDB, and with how 
to use it?

Thanks in advance.

Regards,
Manivannan S



Re: NoSQL help

2012-06-13 Thread Johan De Meersman

- Original Message -
 From: Manivannan S. manivanna...@spanservices.com
 
 Hi all,
 
 [lots of data]
 [slow reports]
 [wooo NoSQL magic]

Not that I want to discourage you, but my standard first question is "why do 
you think NoSQL (let alone any specific product) is the right solution?" :-)

Don't get me wrong, it might be; but from what little I now know about your 
environment, it sounds like applying some data warehousing techniques might 
suffice - and being the cynical dinosaur that I am, I have a healthy reluctance 
about welding new technology onto a stable environment.

To speed up reporting (and note that these techniques are often applied even 
when implementing NoSQL solutions, too) it is usually a good first step to set 
up a process of data summarization.

Basically, you pre-calculate averages, medians, groupings, whatever you need 
for your reports; and your job also saves the last record IDs it's processed; 
then on the next run, you only read the new records and update your summary 
tables to incorporate the new data.

Suppose I have a table like this:

ID | Val
---+-----
 1 |    1
 2 |    7
 3 |    5
 4 |   13

I want to report the average on a daily basis, and calculating that over those 
rows is unbearably slow because I'm running the process on a wristwatch from 
1860 :-)

So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then gets a 
record saying this:

Avg | elementCount | lastSeen
----+--------------+---------
6.5 |            4 |        4

Now, over the course of the day, the elements 4, 17 and 2 get added with 
sequential row numbers. Instead of calculating (1+7+5+13+4+17+2)/7, which would 
be slow, I can substitute Avg*elementCount for the already summarized data. 
Thus, I calculate (6.5*4 + 4+17+2)/7 = 7, which is a lot faster, and my summary 
table now looks like this:

Avg | elementCount | lastSeen
----+--------------+---------
  7 |            7 |        7

This is of course a stupid example, but it saves you a lot of time if you 
already have the summary of several thousand elements and only need to update 
it for a handful. Similar tricks are possible for a lot of typical reporting 
stuff - you don't need to re-calculate data for past months over and over 
again, for instance - and that's what makes your reports run fast.
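
A minimal sketch of that incremental update in MySQL could look like the 
following; the table and column names (measurements, daily_summary and so on) 
are made up purely for illustration:

-- Hypothetical raw data and one-row summary table.
CREATE TABLE measurements (
  id  INT AUTO_INCREMENT PRIMARY KEY,
  val INT NOT NULL
);

CREATE TABLE daily_summary (
  avg_val       DOUBLE NOT NULL,
  element_count INT    NOT NULL,
  last_seen     INT    NOT NULL
);
INSERT INTO daily_summary VALUES (0, 0, 0);  -- seed before the first run

-- Remember where the summary currently stops.
SELECT last_seen INTO @last FROM daily_summary;

-- Aggregate only the rows added since then.
SELECT COALESCE(SUM(val), 0), COUNT(*), COALESCE(MAX(id), @last)
  INTO @new_sum, @new_cnt, @new_last
  FROM measurements
 WHERE id > @last;

-- Fold the new rows into the stored average (the Avg*elementCount trick).
UPDATE daily_summary
   SET avg_val       = (avg_val * element_count + @new_sum)
                       / (element_count + @new_cnt),
       element_count = element_count + @new_cnt,
       last_seen     = @new_last
 WHERE @new_cnt > 0;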


Just my 2 cents :-)
/johan

-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: NoSQL help

2012-06-13 Thread mos

At 07:27 AM 6/13/2012, Manivannan S. wrote:

Hi all,

I am using MySQL 5.1, in this I am inserting 5GB of 
data for two days into my database. I am trying to generate a 
report by processing these data which are available in my database. 
Our clients are planning to keep the records for 60 days then that 
will cross 150GB of data. To generate a report I have to use all 
this accumulated of 150 GB data. I have done all kind of 
optimizations in my procedure and  I have tuned up my MySQL server 
parameters also. But using MySQL getting the reports for this 
amount of data, within the short time is not possible.


I have seen the concept of NoSQL and I am planning 
to implement this NoSQL concept into my database.


Does anyone have any idea in NoSQL especially 
MongoDB technology and how to use this ?


Thanks in advance.

Regards,
Manivannan S



You didn't say what the problem was when you tried to produce a 
report on this data.


1) Did the sorting take too long?
2) Did traversing the data take too long?
3) Were the reports tables locked by other processes?
4) Was it using too many resources (memory, CPU, or disk)?
5) Did joining tables take too long?

You may want to look into Handler. I've used it often when I needed 
to traverse very large MyISAM tables. Handler requires no physical 
sorting of the table because it traverses the table using an index. 
It also ignores any locks on the table (which you may or may not 
like), but the Handler will start traversing the table immediately. 
It will solve problems 1, 3, and 4, but not #2, because traversing a large 
table with an index will be slower than if the table was already 
sorted. One nice thing about the Handler is it uses virtually no 
additional memory regardless of table size and very little disk 
activity because there is no sorting. You can run it any time and it 
won't degrade other MySQL processes. I don't think Handler will join 
tables together; I have used it only to traverse a single table.
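
For reference, the basic HANDLER flow looks roughly like this; big_log and 
idx_created are made-up names, and the client keeps issuing the NEXT step 
until it returns no more rows:

HANDLER big_log OPEN;
-- Walk the table in index order, one batch at a time, with no sorting pass.
HANDLER big_log READ idx_created FIRST LIMIT 1000;
HANDLER big_log READ idx_created NEXT  LIMIT 1000;   -- repeat until empty
HANDLER big_log CLOSE;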


One other tip. When loading the data into the table, Load Data Infile 
will be much faster on an empty table so recreate your tables from 
scratch before loading the data. Also build all of the indexes after 
the data has been loaded using one Alter Table command, and if 
possible, reduce the number of unique indexes in the table.
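
A sketch of that loading pattern, with a made-up table name and file path:

DROP TABLE IF EXISTS report_raw;
CREATE TABLE report_raw (
  id  BIGINT   NOT NULL,
  ts  DATETIME NOT NULL,
  val INT      NOT NULL
) ENGINE=MyISAM;

-- Bulk-load into the freshly created, index-free table...
LOAD DATA INFILE '/tmp/report_raw.csv'
  INTO TABLE report_raw
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- ...then build all the indexes in a single pass.
ALTER TABLE report_raw
  ADD PRIMARY KEY (id),
  ADD INDEX idx_ts (ts);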


http://dev.mysql.com/doc/refman/5.5/en/handler.html

Mike 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: NoSQL help

2012-06-13 Thread Rick James
I'll second Johan's comments.

Count the disk hits!

One minor change:  Don't store averages in the summary table; instead store the 
SUM().  That lets you get the mathematically correct AVERAGE over any time 
range via
SUM(sum_foo) / SUM(count_foo)
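
In other words, something like this, where hourly_summary, sum_foo and 
count_foo are illustrative names only:

CREATE TABLE hourly_summary (
  hour_start DATETIME NOT NULL PRIMARY KEY,
  sum_foo    BIGINT   NOT NULL,
  count_foo  INT      NOT NULL
);

-- Mathematically correct average over any range, rebuilt from the sums:
SELECT SUM(sum_foo) / SUM(count_foo) AS avg_foo
  FROM hourly_summary
 WHERE hour_start >= '2012-06-01'
   AND hour_start <  '2012-06-08';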

Switching between MySQL and Mongo requires rewriting _all_ of the relevant code.

<opinion>NoSQL will be no better than MySQL for 150GB.</opinion>  Count the 
disk hits!

I recently built a system that topped out at 350GB (90 days' data).  It 
involved hourly ingestion of a few GB of data and a variety of reports.  The 
prototype showed that most reports would take about an hour to run.  Not good.  
The final product, with summary tables, lets the reports be run on-demand and 
online and each takes only a few seconds.  By careful use of MEMORY tables, 
LOAD DATA, etc, the ingestion takes 5 minutes (each hour) for the raw data and 
2 minutes (total) for the 7 summary tables.  PARTITIONing was vital for the 
design.  Once an hour a new partition is populated; once a day, 24 hourly 
partitions are rolled into a new daily partition and the 90-day old partition 
is DROPped.
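
A simplified, daily-granularity sketch of that rotation (the real system used 
hourly partitions; fact_raw and the partition names here are invented for 
illustration):

CREATE TABLE fact_raw (
  ts  DATETIME NOT NULL,
  val INT      NOT NULL
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(ts)) (
  PARTITION p20120613 VALUES LESS THAN (TO_DAYS('2012-06-14')),
  PARTITION p20120614 VALUES LESS THAN (TO_DAYS('2012-06-15')),
  PARTITION p_future  VALUES LESS THAN MAXVALUE
);

-- Periodic maintenance: open tomorrow's partition, then drop the one
-- that just fell out of the retention window.
ALTER TABLE fact_raw
  REORGANIZE PARTITION p_future INTO (
    PARTITION p20120615 VALUES LESS THAN (TO_DAYS('2012-06-16')),
    PARTITION p_future  VALUES LESS THAN MAXVALUE
  );
ALTER TABLE fact_raw DROP PARTITION p20120613;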


 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Wednesday, June 13, 2012 6:20 AM
 To: Manivannan S.
 Cc: mysql@lists.mysql.com
 Subject: Re: NoSQL help
 
 
 - Original Message -
  From: Manivannan S. manivanna...@spanservices.com
 
  Hi all,
 
  [lots of data]
  [slow reports]
  [wooo NoSQL magic]
 
 Not that I want to discourage you, but my standard first question is
 why do you think NoSQL (let alone any specific product) is the right
 solution? :-)
 
 Don't get me wrong, it might be; but from what little I now know about
 your environment, it sounds like applying some data warehousing
 techniques might suffice - and being the cynical dinosaur that I am, I
 have a healthy reluctance about welding new technology onto a stable
 environment.
 
 To speed up reporting (and note that these techniques are often applied
 even when implementing NoSQL solutions, too) it is usually a good first
 step to set up a process of data summarization.
 
 Basically, you pre-calculate averages, medians, groupings, whatever you
 need for your reports; and your job also saves the last record IDs it's
 processed; then on the next run, you only read the new records and
 update your summary tables to incorporate the new data.
 
 Suppose I have a table like this:
 
 ID | Val
 ---+-----
  1 |    1
  2 |    7
  3 |    5
  4 |   13
 
 I want to report the average on a daily basis, and calculating that
 over those rows is unbearably slow because I'm running the process on a
 wristwatch from 1860 :-)
 
 So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then
 gets a record saying this:
 
 Avg | elementCount | lastSeen
 ----+--------------+---------
 6.5 |            4 |        4
 
 Now, over the course of the day, the elements 4, 17 and 2 get added
 with sequential row numbers. Instead of calculating
 (1+7+5+13+4+17+2)/7, which would be slow; I can substitute the already
 summarized data by Avg*elementCount. Thus, I calculate (6.5*4 +
 4+17+2)/7 = 7, which is a lot faster, and my summary table now looks
 like this:
 
 Avg | elementCount | lastSeen
 ----+--------------+---------
   7 |            7 |        7
 
 This is of course a stupid example, but it saves you a lot of time if
 you already have the summary of several thousand elements and only need
 to update it for a handful. Similar tricks are possible for a lot of
 typical reporting stuff - you don't need to re-calculate data for past
 months over and over again, for instance - and that's what makes your
 reports run fast.
 
 
 Just my 2 cents :-)
 /johan
 
 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



Re: NoSQL help

2012-06-13 Thread Ananda Kumar
Did you try with MyISAM tables?
They are supposed to be good for reporting requirements.

On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.com wrote:

 I'll second Johan's comments.

 Count the disk hits!

 One minor change:  Don't store averages in the summary table; instead
 store the SUM().  That lets you get the mathematically correct AVERAGE over
 any time range via
 SUM(sum_foo) / SUM(count_foo)

 Switching between MySQL and Mongo requires rewriting _all_ of the relevant
 code.

 opinion NoSQL will be no better than MySQL for 150GB. /opinion  Count
 the disk hits!

 I recently built a system that topped out at 350GB (90 days' data).  It
 involved hourly ingestion of a few GB of data and a variety of reports.
  The prototype showed that most reports would take about an hour to run.
  Not good.  The final product, with summary tables, lets the reports be run
 on-demand and online and each takes only a few seconds.  By careful use of
 MEMORY tables, LOAD DATA, etc, the ingestion takes 5 minutes (each hour)
 for the raw data and 2 minutes (total) for the 7 summary tables.
  PARTITIONing was vital for the design.  Once an hour a new partition is
 populated; once a day, 24 hourly partitions are rolled into a new daily
 partition and the 90-day old partition is DROPped.


  -Original Message-
  From: Johan De Meersman [mailto:vegiv...@tuxera.be]
  Sent: Wednesday, June 13, 2012 6:20 AM
  To: Manivannan S.
  Cc: mysql@lists.mysql.com
  Subject: Re: NoSQL help
 
 
  - Original Message -
   From: Manivannan S. manivanna...@spanservices.com
  
   Hi all,
  
   [lots of data]
   [slow reports]
   [wooo NoSQL magic]
 
  Not that I want to discourage you, but my standard first question is
  why do you think NoSQL (let alone any specific product) is the right
  solution? :-)
 
  Don't get me wrong, it might be; but from what little I now know about
  your environment, it sounds like applying some data warehousing
  techniques might suffice - and being the cynical dinosaur that I am, I
  have a healthy reluctance about welding new technology onto a stable
  environment.
 
  To speed up reporting (and note that these techniques are often applied
  even when implementing NoSQL solutions, too) it is usually a good first
  step to set up a process of data summarization.
 
  Basically, you pre-calculate averages, medians, groupings, whatever you
  need for your reports; and your job also saves the last record IDs it's
  processed; then on the next run, you only read the new records and
  update your summary tables to incorporate the new data.
 
  Suppose I have a table like this:
 
  ID | Val
  ---+-----
   1 |    1
   2 |    7
   3 |    5
   4 |   13
 
  I want to report the average on a daily basis, and calculating that
  over those rows is unbearably slow because I'm running the process on a
  wristwatch from 1860 :-)
 
  So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then
  gets a record saying this:
 
  Avg | elementCount | lastSeen
  ----+--------------+---------
  6.5 |            4 |        4
 
  Now, over the course of the day, the elements 4, 17 and 2 get added
  with sequential row numbers. Instead of calculating
  (1+7+5+13+4+17+2)/7, which would be slow; I can substitute the already
  summarized data by Avg*elementCount. Thus, I calculate (6.5*4 +
  4+17+2)/7 = 7, which is a lot faster, and my summary table now looks
  like this:
 
  Avg | elementCount | lastSeen
  ----+--------------+---------
    7 |            7 |        7
 
  This is of course a stupid example, but it saves you a lot of time if
  you already have the summary of several thousand elements and only need
  to update it for a handful. Similar tricks are possible for a lot of
  typical reporting stuff - you don't need to re-calculate data for past
  months over and over again, for instance - and that's what makes your
  reports run fast.
 
 
  Just my 2 cents :-)
  /johan
 
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql




is it quiet out there?

2012-06-13 Thread hsv



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: i need advice on redundancy of mysql server.

2012-06-13 Thread Prabhat Kumar
Does Master-Master replication really provide a load-balancing feature?
Since each node needs to replicate to the other node, and MySQL replication is
still single-threaded, there is only a single replication thread (sql_thread)
for DML queries.

e.g.

There are two nodes with master-master replication: Master 1 and Master 2.

  app1 --read/write--> Master 1 --single thread--> Master 2 <--read/write-- app1

It's just high availability, not load balancing.

Thanks,
On Wed, Jun 13, 2012 at 2:33 AM, Johan De Meersman vegiv...@tuxera.be wrote:

 - Original Message -
  From: Joey L mjh2...@gmail.com
 
  It sounds like you are all consultants.

 Hehe. I'm not :-p

 A lot are, though, because the combined technical knowledge on this list
 draws in consultants looking for stuff, and having experienced consultants
 on the list in turn heightens the combined technical knowledge again.


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
Best Regards,

Prabhat Kumar
MySQL DBA

My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat


RE: i need advice on redundancy of mysql server.

2012-06-13 Thread Rick James
I prefer:
* Master-Master (dual master) but write to only one of them.  (Writing to 
both can lead to duplicate keys, etc., unless you are very careful in your 
code.)
* Have the two Masters geographically separate.  (Think tornados, floods, 
earthquakes, etc)
* Have Slave(s) hanging off each master -- (1) for read scaling, and (2) to 
avoid a major outage when one Master goes down and you need to take the other 
one down to clone it.

Another thing to consider:
Backing up via a LVM snapshot requires only a minute or so of downtime, 
regardless of dataset size.
Percona's XtraBackup is also very good.

I also agree that MyISAM is not best.  But, caution, InnoDB's disk footprint is 
2x-3x bigger than MyISAM's.

You can Load Balance reads (among slaves and, optionally, masters); you cannot 
do the same for writes.

Any number of Apache servers can talk to MySQL.  But watch out -- MaxClients 
should not be so large that it swamps max_connections.

Load balancing:
DNS is the simple way to load balance Apache.
There are low-impact software solutions.
There are hardware solutions.  (This is what I am used to at work; it is severe 
overkill for most users.)

Bottom line:  There is no best or perfect solution.  First decide what 
'keeps you up at night'.

 -Original Message-
 From: Joey L [mailto:mjh2...@gmail.com]
 Sent: Monday, June 11, 2012 7:26 AM
 To: mysql@lists.mysql.com
 Subject: i need advice on redundancy of mysql server.
 
 I am running a site with about 50gig myisam databases which are the
 backend to different websites.
 I can not afford any downtime and the data is realtime.
 
 What is the best method for this setup? master-master or master-slave?
 
 What are the best utilities to create and maintain this setup?  as far
 as load balancing between the two physical servers that i am running.
 I am currently working with percona utilities - is there something
 better ?
 what would you use to load balance mysql ? what would you use to load
 balance apache.
 
 
 thanks
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Minor collation question

2012-06-13 Thread Rick James
A warning about ß -- There was a change in the collation at 5.1.24.
Search http://mysql.rjweb.org/doc.php/charcoll for 'German'; there is a brief 
discussion near the end.

 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Wednesday, June 13, 2012 2:26 AM
 To: Shawn Green
 Cc: mysql@lists.mysql.com
 Subject: Re: Minor collation question
 
 - Original Message -
  From: Shawn Green shawn.l.gr...@oracle.com
 
  The other tactical move has been to jump 5.1 and upgrade directly
 from
  5.0 to 5.5 where that problem is resolved.  Sure, it's a bit more
 work
  (full dump/restore is highly recommended) but it avoids the collation
  bug which exists in all 5.1.x releases.
 
 Ah! Everything I found in the bug tracker pointed towards an official
 attitude of No, that was a bug in 5.0 that got fixed in 5.1. Works as
 designed, will not fix.
 
 Jumping straight to 5.5 is a viable path, then, so I'll see with the
 Unix boys wether they're willing to go that route. Thank you for that
 information.
 
  Yes, collations are used for equality and inequality comparisons,
 too,
  not just sorting. That's why having alternate spellings, like the
  words strasse and straße, will collide within a PK in 5.1 where
  they will not for 5.0 or 5.5 (with the appropriate collation).
 
 Yes, although here it's colloquially known as the Scheiße bug :-p
 
 
 
 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



RE: Foreign key and uppercase / lowercase values

2012-06-13 Thread Rick James
To discuss this further, please provide SHOW CREATE TABLE for the table in 
question and the table(s) tied to it via FOREIGN KEYs.

 -Original Message-
 From: GF [mailto:gan...@gmail.com]
 Sent: Tuesday, June 12, 2012 6:20 AM
 To: Shawn Green
 Cc: mysql@lists.mysql.com
 Subject: Re: Foreign key and uppercase / lowercase values
 
 Good morning.
 The application is Java.
 The database version is : Server version: 5.1.49-3 (Debian)
 
 This is an example of the problem:
 __
 mysql> SET collation_connection = utf8_unicode_ci;
 Query OK, 0 rows affected (0.00 sec)
 
 mysql> show variables like '%colla%';
 +----------------------+-----------------+
 | Variable_name        | Value           |
 +----------------------+-----------------+
 | collation_connection | utf8_unicode_ci |
 | collation_database   | utf8_unicode_ci |
 | collation_server     | utf8_unicode_ci |
 +----------------------+-----------------+
 3 rows in set (0.00 sec)
 
 mysql> update MY_TABLE set USER_ID = LOWER(USER_ID) where USER_ID = 'XXYYZZ';
 ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key
 constraint fails etc. etc.
 __
 
 Since the collation used is _ci (which I suppose means case-insensitive),
 I don't understand why it gives an error when trying to change a value to
 lowercase.
 
 I did try on a test environment to use the trick (SET
 foreign_key_checks=0;) but I don't understand why I should disable the
 foreign key checks when I am NOT violating them.
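 
 For reference, the full sequence from the earlier suggestion would look
 roughly like this; SOME_CHILD_TABLE stands in for whichever table actually
 carries the foreign key, and everything has to run in one session, since
 foreign_key_checks is a session variable:
 
 SET foreign_key_checks = 0;
 
 -- Update the parent and every referencing table, so the values still
 -- match each other once the checks come back on.
 UPDATE MY_TABLE         SET USER_ID = LOWER(USER_ID) WHERE USER_ID = 'XXYYZZ';
 UPDATE SOME_CHILD_TABLE SET USER_ID = LOWER(USER_ID) WHERE USER_ID = 'XXYYZZ';
 
 SET foreign_key_checks = 1;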
 
 The application was able to write the USER_ID in lowercase in some other
 tables, and I think that was expected behaviour because the collation is
 case-insensitive! Why can I now not change some values from uppercase to
 lowercase?
 
 There are no weird characters in the USER_ID column, just letters from A
 to Z.
 
 Thank you.
 
 On Wed, May 16, 2012 at 5:35 PM, Shawn Green shawn.l.gr...@oracle.com
 wrote:
  Hello Ananda,
 
 
  On 5/16/2012 6:42 AM, Ananda Kumar wrote:
 
  Why are you not using any WHERE condition in the update statement?
 
 
  WHERE clauses are not required. Performing a command without one will
  affect every row in the table.
 
 On Wed, May 16, 2012 at 1:24 PM, GF gan...@gmail.com wrote:
 
  Good morning,
  I have an application where the user ids were stored in lowercase.
  After a batch import, some users in the user table ended up with an
  uppercase id, while due to some application logic, in other tables that
  have a foreign key to the user table, their user ids are stored in
  lowercase.
  ...
 
  Have you any idea how to solve this situation without
  stopping/recreating the DB? (it's a production environment) Thanks
 
 
  Have you tried ?
 
   SET foreign_key_checks=0;
  http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_foreign_key_checks
 
  If that does not work, you would need to first un-create your Foreign
  Key relationships, update your key values (the USER_ID fields), then
  re-create your Foreign Key relationships.
 
  Regards,
  --
  Shawn Green
  MySQL Principal Technical Support Engineer Oracle USA, Inc. -
 Hardware
  and Software, Engineered to Work Together.
  Office: Blountville, TN
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql To unsubscribe:
  http://lists.mysql.com/mysql
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql