Re: Decode Json in MySQL query

2014-03-21 Thread Sukhjinder K. Narula
Many Thanks for the kind replies.

I have decoded it in my code, but I was wondering in case I missed any
solution for decoding it via a query.
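
For reference, later MySQL releases handle this natively: 5.7 added JSON
functions, and 8.0 added JSON_TABLE, which expands a JSON array into one
row per element. A minimal sketch against MySQL 8.0, assuming the JSON
shown below is stored in a column doc of a table people (both names
illustrative):

  SELECT jt.name, jt.age
  FROM people,
       JSON_TABLE(people.doc, '$[*]'
         COLUMNS (
           name VARCHAR(50) PATH '$.name',
           age  INT         PATH '$.age'
         )
       ) AS jt;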


On Thu, Mar 20, 2014 at 3:05 PM, Michael Dykman mdyk...@gmail.com wrote:

 Short answer, no.  There is nothing in MySQL to facilitate this. In
 general, storing structured data as a blob (JSON, CSV, XML-fragment,
 etc.) is an anti-pattern in a relational environment.  There are
 NoSQL solutions that provide the facility: Mongo comes to mind; there
 are some others, I am sure.



 On Thu, Mar 20, 2014 at 2:59 PM, Karr Abgarian a...@apple.com wrote:
  Hi, you probably want to perform this conversion on your client.   There
 are JSON parser libraries available for Java, PHP and the like.   Cheers,
 Karr
 
  On Mar 20, 2014, at 11:35 AM, Sukhjinder K. Narula narula...@gmail.com
 wrote:
 
  Hello,
  I would like to know if there is a way to decode the JSON string stored
  in one of the fields as text, without using triggers or stored procedures.
  What I want is, within the query, to get one row per element of the
  JSON string.
  For example, the JSON string is as follows:
 
  [
    {
      "name": "Abc",
      "age": 20
    },
    {
      "name": "Xyz",
      "age": 18
    }
  ]
 
  and after the query, I want the result as:
  Name   Age
  Abc    20
  Xyz    18
 
 
  Would this be possible? I would greatly appreciate any help regarding
  this matter.
 
  Many Thanks,
  Sukhjinder
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 



 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Decode Json in MySQL query

2014-03-21 Thread Andrew Moore
This may also be of interest:

http://www.slideshare.net/blueskarlsson/using-json-with-mariadb-and-mysql




On Fri, Mar 21, 2014 at 12:27 PM, Sukhjinder K. Narula
narula...@gmail.com wrote:

 Many Thanks for the kind replies.

 I have decoded it in my code, but I was wondering in case I missed any
 solution for decoding it via a query.




Locking a Database (not tables)

2014-03-21 Thread David Lerer
Frequently, we import a production dump that contains only 1 or 2 databases
into one of our QA instances that contains many more databases (i.e.,
"database" meaning a schema or a catalogue).
At the beginning of the import script, we first drop all objects in the QA
database so that it will be a perfect match (object-wise) to production.

Is there an easy way to lock the whole database for the duration of the import 
- so that no developers can update the database?
Obviously, I can revoke permissions, but I was wondering whether there is a 
better approach.

Thanks, David.

David Lerer | Director, Database Administration | Interactive | 605 Third 
Avenue, 12th Floor, New York, NY 10158
Direct: (646) 487-6522 | Fax: (646) 487-1569 | dle...@univision.net | 
www.univision.net

The information contained in this e-mail and any attached
documents may be privileged, confidential and protected from
disclosure. If you are not the intended recipient you may not
read, copy, distribute or use this information. If you have
received this communication in error, please notify the sender
immediately by replying to this message and then delete it
from your system.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql





RE: Locking a Database (not tables)

2014-03-21 Thread David Lerer
Thanks Wayne. This is a great idea to prevent user activity on the server. I'll
use it in the future.
But I'm looking for a way to prevent user activity on a single database (i.e.,
"database" being a schema or a catalogue).
David.




David Lerer  |  Director, Database Administration  |  Interactive  |  605 Third 
Avenue, 12th Floor, New York, NY 10158
Direct: (646) 487-6522  |  Fax: (646) 487-1569  |
dle...@univision.net  |  http://www.univision.net



From: Wayne Leutwyler [mailto:wleut...@columbus.rr.com]
Sent: Friday, March 21, 2014 2:12 PM
To: David Lerer
Subject: Re: Locking a Database (not tables)

You could set max_connections = 0, then kill off any remaining connections. Do
your data load and then set your max_connections back to what it was prior:

show variables like 'max_connections'; (note this number)
set global max_connections = 0;

This will leave 1 connection open for a superuser. I don't know what ID you use
for that; a lot of people use root.
Now import your data.
Once the import is done, set global max_connections back to what it was.
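
In statement form (a sketch; the restored value is illustrative, and note
that MySQL clamps max_connections to a minimum of 1 and always reserves one
extra connection for a SUPER user):

  SHOW VARIABLES LIKE 'max_connections';  -- note the current value
  SET GLOBAL max_connections = 1;         -- shut out new ordinary sessions
  -- kill remaining client connections, run the import, then restore:
  SET GLOBAL max_connections = 151;       -- the value noted above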



Walter Wayne Leutwyler, RHCT
Sr. MySQL Database Administrator
Mobile: 614 519 5672
Office: 614 889 4956
E-mail: wayne.leutwy...@gmail.com
E-mail: wleut...@columbus.rr.com
Website: http://penguin-workshop.dyndns.org

Courage is being scared to death, but saddling up anyway. --John Wayne

The information contained in this e-mail and any attached
documents may be privileged, confidential and protected from
disclosure. If you are not the intended recipient you may not
read, copy, distribute or use this information. If you have
received this communication in error, please notify the sender
immediately by replying to this message and then delete it
from your system.


Re: Locking a Database (not tables)

2014-03-21 Thread shawn l.green

Hi David.

On 3/21/2014 1:42 PM, David Lerer wrote:

[...] Is there an easy way to lock the whole database for the duration of
the import - so that no developers can update the database?



If you start with a DROP DATABASE xxx, that will pretty much ensure
that nobody gets back into it.


Then re-create your tables in a new DB (yyy)

As a last set of steps, do

  CREATE DATABASE xxx
  RENAME TABLE yyy.table1 TO xxx.table1, yyy.table2 TO xxx.table2, ...
  (repeat for all your tables)

  DROP DATABASE yyy


Because this is essentially a metadata flip, the RENAME will be quite 
speedy.
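
With many tables, the RENAME statement can be generated rather than typed
out; a sketch, assuming the staging database is yyy and the target is xxx
(mind group_concat_max_len if there are very many tables):

  SELECT CONCAT('RENAME TABLE ',
                GROUP_CONCAT(CONCAT('yyy.', table_name,
                                    ' TO xxx.', table_name)
                             SEPARATOR ', '),
                ';')
  FROM information_schema.tables
  WHERE table_schema = 'yyy';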


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Database migration from default configuration to innodb_file_per_table

2014-03-21 Thread Christophe
Hi list,

I'd like your advice (one more time ;) ) about this case:

The context is:

A huge InnoDB database that has been filling for several years; the ibdata
file now takes over 9 GiB on the filesystem, and as far as I have seen it
cannot be shrunk except by dropping all databases and recreating them.

We have to separate the data into two databases while the server is
running, after setting innodb_file_per_table in the MySQL configuration
and restarting the service.

The first database will contain at most the latest 6 months of data; the
second, considered an archive, will contain the data older than 6 months.

Separating the current data is not such a problem: several mysqldump runs
with the --where switch handle that case.

After this, shell scripts using INSERT INTO archive ... SELECT * FROM
realtime ... WHERE ... seem to be a reliable way to move the older rows;
a sketch follows.
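
Something along these lines, per table (table and column names are
illustrative; archive and realtime are the two databases):

  INSERT INTO archive.events
  SELECT * FROM realtime.events
  WHERE created_at < DATE_SUB(CURDATE(), INTERVAL 6 MONTH);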

*But*, as part of this one-time scheduled migration task (previewed and
accepted by the customer, to run by night / not tonight ... :) ), *I'd
like to remove the ibdata1 file*, as it takes huge disk space.

The migration task also includes converting the old tables (previously
stored in the shared InnoDB tablespace) by ALTERing them back to InnoDB,
so that each one is recreated in its own file under the
innodb_file_per_table parameter.
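
For each table, something like this (table name illustrative); with
innodb_file_per_table enabled, the null ALTER rebuilds the table into its
own .ibd file:

  ALTER TABLE realtime.events ENGINE=InnoDB;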

Problem:

While testing this in the lab, I found that removing the ibdata1 file
leaves MySQL unable to reference any table in the databases.

USE database works ... but DESCRIBE table fails with: table
database.table does not exist.

Is there any way to handle this case?

Regards,
Christophe.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Database migration from default configuration to innodb_file_per_table

2014-03-21 Thread shawn l.green

Hello Christophe,

On 3/21/2014 4:47 PM, Christophe wrote:

[...]

While testing this in the lab, I found that removing the ibdata1 file
leaves MySQL unable to reference any table in the databases.

USE database works ... but DESCRIBE table fails with: table
database.table does not exist.

Is there any way to handle this case?



The system is operating exactly as designed. The ibdata* file(s) contain 
more than just your data and indexes. This is the common tablespace and 
it contains all the metadata necessary to identify where *all* your 
InnoDB tables actually are (where they are in a tablespace and which 
tablespace they are in) and several other things about them. In the 
terms of the InnoDB developers, this is the "data dictionary". This
means that once you blow it away, MySQL has no details about where
any of your InnoDB tables are, exactly as the message says.


The table names are visible in a SHOW TABLES command because that is 
essentially performing a directory listing of any .frm files in that 
database's folder. Without both parts (the definition in the .frm file 
and the metadata in the common tablespace) your tables are broken. If 
you have the .frm file, you can find out which columns you have defined, 
what data types they are, if the table is partitioned or not and what 
your indexes and other constraints look like. The .frm file cannot tell 
the InnoDB engine which tablespace a table is in or what offset the root 
page of the table is within the tablespace. That information was stored 
in the ibdata file that you erased during your test run.


The proper way to change the size of your common tablespace is 
documented here in the user manual:

http://dev.mysql.com/doc/refman/5.6/en/innodb-data-log-reconfiguration.html

Search for the section header "Decreasing the Size of the InnoDB Tablespace".
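
In outline, the documented procedure is (a sketch; options and file names
vary per installation):

  1. Dump everything:
       mysqldump --all-databases --routines --events > full_dump.sql
  2. Stop mysqld, then remove the ibdata* and ib_logfile* files, plus any
     .ibd and .frm files belonging to InnoDB tables.
  3. Enable innodb_file_per_table in my.cnf (and size the new ibdata file
     as desired).
  4. Restart mysqld and reload the dump:  mysql < full_dump.sql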

Best regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql