member:
ident  given  surname
1      fred   jones
2      john   howard
3      henry  wales
4      jenny  brown

status:
ident  year
1      2017
2      2017
3      2017
4      2017
1      2018
3      2018

I want my query to return the name and ident from the member table for all
members that do not have an entry in status with year=2018.
I have been working on the following query to achieve this, but it only
returns data when there are no `year` entries
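The usual shape for "members with no 2018 status row" is an anti-join, e.g. NOT EXISTS. A runnable sketch of that shape, using SQLite in place of MySQL (NOT EXISTS behaves the same way in both) and the sample data above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE member (ident INTEGER, given TEXT, surname TEXT);
CREATE TABLE status (ident INTEGER, year INTEGER);
INSERT INTO member VALUES (1,'fred','jones'),(2,'john','howard'),
                          (3,'henry','wales'),(4,'jenny','brown');
INSERT INTO status VALUES (1,2017),(2,2017),(3,2017),(4,2017),
                          (1,2018),(3,2018);
""")

# Anti-join: keep only members with no status row for 2018.
rows = conn.execute("""
    SELECT m.ident, m.given, m.surname
    FROM member m
    WHERE NOT EXISTS (SELECT 1 FROM status s
                      WHERE s.ident = m.ident AND s.year = 2018)
    ORDER BY m.ident
""").fetchall()
print(rows)  # [(2, 'john', 'howard'), (4, 'jenny', 'brown')]
```

The equivalent join form is LEFT JOIN status s ON s.ident = m.ident AND s.year = 2018 ... WHERE s.ident IS NULL; either phrasing expresses "no matching row exists".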
2017/09/19 17:19 ... Don Wieland:
Of these found rows, I want to omit those rows where there are rows found after
the END TimeStamp, based on the below WHERE clause:
WHERE 1 AND apt.appt_status_id IN (16) AND apt.user_id IN (3) AND apt.time_start
> '1504238399'
We are trying to find Former
I have a working query:
/* start */
SELECT
u.user_id,
u.first_name AS u_first_name,
u.last_name AS u_last_name,
c.client_id AS c_client_id,
c.first_name AS c_first_name,
c.middle_name AS c_middle_name,
c.last_name AS c_last_name,
c.address AS c_address,
c.city AS c_city,
c.state
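The "omit rows that have later rows past the END timestamp" condition can also be written as an anti-join. A sketch against a cut-down, hypothetical apt table (only the columns from the WHERE clause above; the sample timestamps are invented around the '1504238399' cutoff):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE apt (appt_id INTEGER, user_id INTEGER, time_start INTEGER);
INSERT INTO apt VALUES (1, 3, 1504000000),   -- before END
                       (2, 3, 1504238400),   -- after END
                       (3, 7, 1503000000);   -- other user, before END
""")

END_TS = 1504238399  # the END timestamp from the WHERE clause above

# Keep a row only if no later row exists for the same user:
# user 3 has an appointment after END_TS, so both its rows drop out.
rows = conn.execute("""
    SELECT a.appt_id, a.user_id
    FROM apt a
    WHERE a.time_start <= ?
      AND NOT EXISTS (SELECT 1 FROM apt b
                      WHERE b.user_id = a.user_id
                        AND b.time_start > ?)
    ORDER BY a.appt_id
""", (END_TS, END_TS)).fetchall()
print(rows)  # [(3, 7)]
```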
From: "Sebastien FLAESCH" <s...@4js.com>
To: "MySql" <mysql@lists.mysql.com>
Sent: Tuesday, 10 January, 2017 14:55:42
Subject: kill query and prepared statements
Hi all,
I have reported this problem before, but I raise it again, since I still get
this problem with 5.7.17
See att
Seb,
You should log a bug at http://bugs.mysql.com - this is not a developer list.
/Johan
- Original Message -
> From: "Sebastien FLAESCH" <s...@4js.com>
> To: "MySql" <mysql@lists.mysql.com>
> Sent: Tuesday, 10 January, 2017 14:55:42
> Subje
, since I still get
this problem with 5.7.17
See attached code:
I want to interrupt a long running statement with CTRL-C by starting a new
connect to make a KILL QUERY.
I am using the same technique as the mysql client code.
The difference here is that my code is using PREPARED STATEMENTS with
mysql_stmt_prepare() etc.
Problem: After interrupting the first query with CTRL-C, the call to
mysql_stmt_close() hangs...
Maybe I am missing some
On 07/03/2016 06:55 PM, Sebastien FLAESCH wrote:
Hi all,
I use the following technique to cancel a long running query:
In the SIGINT signal handler, I restart a connection and I perform a
KILL QUERY mysql-process-id-of-running-query
This was working fine with MySQL 5.6.
But with 5.7 (5.7.1
hangs in recvfrom():
recvfrom(3,
But with 5.7 (5.7.11), we now get a different result:
A) The query
table will
make MySQL take more and more memory. After scanning all of the tables,
MySQL has started using more than 1GB of swap.
2) We had a migration recently to add a column to half of the tables we
have. The query is like 'ALTER ONLINE TABLE table_name ADD COLUMN IF NOT
EXISTS (`col` smallint(3
On 3/26/2016 4:36 PM, shawn l.green wrote:
On 3/25/2016 6:39 AM, JAHANZAIB SYED wrote:
I have Freeradius 2.x with MySQL 5.5 in Ubuntu.
I want to query user quota for current date. I am using following code
SELECT (SUM(acctinputoctets)+SUM(acctoutputoctets)) AS Total FROM
radacct where
2016/03/25 06:39 ... JAHANZAIB SYED:
I want to query user quota for current date. I am using following code
SELECT SUM(acctinputoctets)+SUM(acctoutputoctets) AS Total FROM radacct where
(acctstarttime between DATE_FORMAT(NOW(),'%Y-%m-%d') AND NOW() AND
acctstoptime between DATE_FORMAT(NOW
I have Freeradius 2.x with MySQL 5.5 in Ubuntu.
I want to query user quota for current date. I am using following code
SELECT (SUM(acctinputoctets)+SUM(acctoutputoctets)) AS Total FROM radacct where
(acctstarttime between DATE_FORMAT(NOW(),'%Y-%m-%d') AND NOW() AND
acctstoptime between
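The BETWEEN in that query spans from today's midnight (the DATE_FORMAT(NOW(),'%Y-%m-%d') value) to the current moment. A sketch of the same range logic in SQLite, with the day and "now" pinned so the result is deterministic, and the table trimmed to the three columns used:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE radacct (acctstarttime TEXT, acctinputoctets INTEGER,
                      acctoutputoctets INTEGER);
-- one session "today", one the day before (dates fixed for the example)
INSERT INTO radacct VALUES ('2016-03-25 01:00:00', 100, 200),
                           ('2016-03-24 23:00:00', 999, 999);
""")

# Stand-ins for DATE_FORMAT(NOW(),'%Y-%m-%d') and NOW(); ISO-8601 text
# timestamps compare correctly as strings.
day_start = "2016-03-25"
now = "2016-03-25 12:00:00"

total, = conn.execute("""
    SELECT SUM(acctinputoctets) + SUM(acctoutputoctets) AS Total
    FROM radacct
    WHERE acctstarttime BETWEEN ? AND ?
""", (day_start, now)).fetchone()
print(total)  # 300
```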
Hi All,
Perhaps a bit of a trivial question, but in terms of query statistics (i.e.
how many SELECT / INSERT / DELETE / etc. queries have been run against the
server)...
When you take an INSERT ... ON DUPLICATE KEY UPDATE ...
Under the two conditions (i.e. either INSERT, or UPDATE if the record
:51, Larry Martell wrote:
I need to count the number of rows in a table that are grouped by a
list of columns, but I also need to exclude rows that have more than
some count when grouped by a different set of columns. Conceptually,
this is not hard, but I am having trouble doing this efficiently.
My first counting query would be this:
SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id
But from this count I need to subtract the count of rows that have
more than 50 rows
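One way to write "count rows, excluding rows whose group on a different column set exceeds N" is a NOT IN (or anti-join) against a HAVING subquery. A toy sketch; the two stand-in columns and a threshold of 3 replace the real column lists and the 50-row cutoff:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cst_rollup (target TEXT, tool TEXT);
-- tool 'a' contributes 4 rows (over the threshold), tool 'b' only 2
INSERT INTO cst_rollup VALUES
    ('t1','a'),('t1','a'),('t2','a'),('t2','a'),
    ('t1','b'),('t2','b');
""")

THRESHOLD = 3  # the post uses 50

# Per-target counts, skipping rows whose tool-group is over THRESHOLD.
rows = conn.execute("""
    SELECT target, COUNT(*)
    FROM cst_rollup
    WHERE tool NOT IN (SELECT tool FROM cst_rollup
                       GROUP BY tool
                       HAVING COUNT(*) > ?)
    GROUP BY target
    ORDER BY target
""", (THRESHOLD,)).fetchall()
print(rows)  # [('t1', 1), ('t2', 1)]
```

Whether this is efficient on a large table depends on indexing the exclusion columns; the subquery is computed once, not per row.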
Hi gang,
I have a query:
SELECT
p.pk_ProductID,
p.Description,
i.Quantity
FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND
i.fk_InvoiceID IN (1,2,3)
WHERE p.pk_ProductID IN (1,2,3);
It produces a list like the following:
1,Banana,3
2,Orange,1
2
On 10/22/2015 11:48 AM, Don Wieland wrote:
On Oct 20, 2015, at 1:24 PM, shawn l.green wrote:
Which release of MySQL are you using?
Version 5.5.45-cll
How many rows do you get if you remove the GROUP_CONCAT operator? We don't need
to see the results.
I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
of i.fk...? It is the actual value you are selecting as well as being on
the primary table in the query.
On Thu, Oct 22, 2015, 5:18 PM Don Wieland <d...@pointmade.net> wrote:
> Hi gang,
>
> I have a qu
> On Oct 22, 2015, at 2:41 PM, Michael Dykman <mdyk...@gmail.com> wrote:
>
> I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
> of i.fk...? It is the actual value you are selecting as well as being on
> the primary table in the query.
Yeah I
> On Oct 20, 2015, at 1:24 PM, shawn l.green wrote:
>
> Which release of MySQL are you using?
Version 5.5.45-cll
> How many rows do you get if you remove the GROUP_CONCAT operator? We don't
> need to see the results. (sometimes it is a good idea to look at the raw,
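The suggestion above (group by p.pk_ProductID, the value actually being selected) can be sketched end to end; SUM(i.Quantity) is an assumption about the intended aggregate, and the sample data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoice_invoicelines_Product (
    pk_ProductID INTEGER PRIMARY KEY, Description TEXT);
CREATE TABLE invoice_InvoiceLines (
    fk_InvoiceID INTEGER, fk_ProductID INTEGER, Quantity INTEGER);
INSERT INTO invoice_invoicelines_Product VALUES
    (1,'Banana'),(2,'Orange'),(3,'Apple');
INSERT INTO invoice_InvoiceLines VALUES
    (1,1,3),(1,2,1),(2,2,2),(3,3,5);
""")

# Group by the selected product key and total the quantities, so each
# product appears once across the chosen invoices.
rows = conn.execute("""
    SELECT p.pk_ProductID, p.Description, SUM(i.Quantity)
    FROM invoice_invoicelines_Product p
    JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID
                               AND i.fk_InvoiceID IN (1,2,3)
    WHERE p.pk_ProductID IN (1,2,3)
    GROUP BY p.pk_ProductID
    ORDER BY p.pk_ProductID
""").fetchall()
print(rows)  # [(1, 'Banana', 3), (2, 'Orange', 3), (3, 'Apple', 5)]
```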
- Original Message -
> From: "Shawn Green" <shawn.l.gr...@oracle.com>
> Subject: Re: Query optimizer-miss with unqualified expressions, bug or
> feature?
>
> On a more serious note, indexes with limited cardinality are less useful
> than those with
Hi all,
Trying to get a query working:
SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS
alloc
FROM hiv_transactions ht
LEFT JOIN tk_orphans o ON ht.orphan_id = o.
I have noticed that an unqualified boolean expression cannot be
optimized by MySQL to use an index in 5.6.24.
For example:
CREATE TABLE t (
i INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
a BOOLEAN NOT NULL,
KEY a (a)
) ENGINE=InnoDB;
This will hit key 'a':
SELECT * FROM t WHERE a = TRUE;
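The qualified case can be checked mechanically from the plan output. SQLite's planner is not MySQL's, so this does not reproduce the unqualified-boolean miss under discussion; it only shows how to confirm that an equality predicate on the indexed column is served by the index (on MySQL you would read EXPLAIN instead of EXPLAIN QUERY PLAN):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (i INTEGER PRIMARY KEY AUTOINCREMENT, a INT NOT NULL);
CREATE INDEX a ON t(a);
""")

# The fourth column of each EXPLAIN QUERY PLAN row is the access-path
# detail; for an indexed equality it reports a SEARCH using the index.
plan = " ".join(row[3] for row in conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 1"))
print(plan)
```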
Hi Roy,
Thanks for the clear explanation.
I guess (hypothetically) the optimizer could see if it has a key, and
then use two starts: one on 'a > 0' and one on 'a < 0', taking a union
of the result? Which might make a significant result to something?
Ben.
On 2015-10-19 14:19, Roy Lyseng
A BOOLEAN rather than a TINYINT would give a better estimate of the
filtering effect, and thus of the estimated number of rows as the outcome of a
query.
*Actually, fuzzy logic has lots of practical application in real world
situations. They are just not using the MySQL BOOLEAN data type to store the
value
FRUIT_ID |
+----+----+----------+
| 2  | 3  | 2        |
| 3  | 1  | 4        |
| 4  | 1  | 2        |
| 5  | 2  | 1        |
+----+----+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?
--
Mogens
+66 8701 33224

I think you are goin
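With the column headers cut off in the archive, assume the table above is a purchase table of (id, person_id, fruit_id), plus hypothetical person and fruit lookup tables (names invented). "Fruits that Joey has not purchased" is then an anti-join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (person_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fruit  (fruit_id  INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE purchase (id INTEGER, person_id INTEGER, fruit_id INTEGER);
INSERT INTO person VALUES (1,'Joey'),(2,'Mia'),(3,'Sam');
INSERT INTO fruit  VALUES (1,'apple'),(2,'pear'),(3,'plum'),(4,'fig');
-- the rows from the table in the post
INSERT INTO purchase VALUES (2,3,2),(3,1,4),(4,1,2),(5,2,1);
""")

# Fruits with no purchase row for Joey: an anti-join via NOT EXISTS.
rows = conn.execute("""
    SELECT f.name
    FROM fruit f
    WHERE NOT EXISTS (SELECT 1 FROM purchase p
                      JOIN person pe ON pe.person_id = p.person_id
                      WHERE pe.name = 'Joey'
                        AND p.fruit_id = f.fruit_id)
    ORDER BY f.fruit_id
""").fetchall()
print([r[0] for r in rows])  # ['apple', 'plum']
```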
Further more, use logstash to collect the audit events and you can filter
out anything that wasn't an error and move it to a query error log.
On Wed, Jun 24, 2015 at 5:32 PM, Singer Wang w...@singerwang.com wrote:
Yep, as shown below:
root@audit-db.ec2:(none) select fark from fark from fark
From: Singer X.J. Wang w...@singerwang.com
Subject: Re: server-side logging of query errors?
You could log all queries using the audit plugin, 15% hit..
Fair point, though: maybe one of the different audit plugins has the capability
to specifically log faulty requests. Have a look through
--
From: Singer X.J. Wang w...@singerwang.com
Subject: Re: server-side logging of query errors?
You could log all queries using the audit plugin, 15% hit
to write a JSON parser that extracts what you want
based on the log (eg. STATUS, COMMAND, NAME).
On Wed, Jun 24, 2015 at 11:05 AM, Johan De Meersman
vegiv...@tuxera.be wrote:
Suppose I run a query which has a syntax error:
mysql> blah;
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right
syntax to use near 'blah' at line 1
How can I get mysql server to log this error?
According
out there has support for such logging, I'm not really
familiar with any of them.
- Original Message -
From: Tomasz Chmielewski man...@wpkg.org
To: MySql mysql@lists.mysql.com
Sent: Tuesday, 23 June, 2015 09:35:46
Subject: server-side logging of query errors?
Suppose I run a query
risk; a malicious
(or just stupid, see Hanlon's razor) user could spam your server with
malformed requests until the logging disk runs full, at which point
the daemon would suspend operations until space is freed.
I don't think it's a valid argument - the same is true right now for
general query log. Any stupid/malicious user can produce loads of
queries and fill the disk if one has general query log enabled.
In short, anyone enabling any logging should consider what limitations
is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as opposed to an =.
Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
IDX_cc_agents_tier_status_log_7 2
date_log A 23999 (null) BTREE (null) (null)
And the query is:
set @enddate:=now();
set @startdate:='2014-11-01';
set @que_id:=-1;
select s.theHour as theHour,avg(s.nrAgents) as nrAgents from
(select date(a.theDateHour) as theDate,extract(hour
On 15.11.2014 01:06, Peter Brawley wrote:
Let's see the results of Explain Extended this query, result of Show
Create Table cc_member_queue_end_log.
cc_member_queue_end_log is not of interest, it is used just as a series
of numbers. It may be any table with ids.
I've changed a bit
Let's see the results of Explain Extended this query, result of Show
Create Table cc_member_queue_end_log.
PB
-
On 2014-11-13 1:34 PM, Mimiko wrote:
Hello. I have this table:
show create table cc_agents_tier_status_log:
CREATE TABLE cc_agents_tier_status_log (
id int(10) unsigned
(null) BTREE (null) (null)
cc_agents_tier_status_log 1
IDX_cc_agents_tier_status_log_7 1 id A 23999 (null) BTREE (null) (null)
cc_agents_tier_status_log 1 IDX_cc_agents_tier_status_log_7 2
date_log A 23999 (null) BTREE (null) (null)
techniques do *you* use for avoiding this anti-pattern? Am I limited to
using a separate programming language (PHP, in this case) with a separate
COUNT(*) query for each possible column, then CASEing the generation of the
column SQL? Seems awfully ugly!
Thanks in advance for any insight offered
) to implement such logic.
PB
-
(And the following
2014/10/08 11:38 -0700, Jan Steinman
However, this pattern will often result in numerous empty columns -- empties
that would not be there had the table not been pivoted.
2014/10/08 16:42 -0500, Peter Brawley
MySQL stored procedures are less incomplete, and can do it, but they're
awkward.
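For completeness, the pivot itself can be done in one pass with conditional aggregation rather than a separate COUNT(*) query per column; the columns still have to be enumerated by hand, which is exactly the awkwardness being discussed. A toy sketch with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sale (region TEXT, yr INTEGER, amount INTEGER);
INSERT INTO sale VALUES ('north',2013,10),('north',2014,20),
                        ('south',2013,5);
""")

# One output row per region, one hand-enumerated column per year:
# SUM(CASE ...) routes each row's amount to the matching column.
rows = conn.execute("""
    SELECT region,
           SUM(CASE WHEN yr = 2013 THEN amount ELSE 0 END) AS y2013,
           SUM(CASE WHEN yr = 2014 THEN amount ELSE 0 END) AS y2014
    FROM sale
    GROUP BY region
    ORDER BY region
""").fetchall()
print(rows)  # [('north', 10, 20), ('south', 5, 0)]
```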
to setup parallel replication, by incorporating the following changes ::
a)
To begin with, partitioned some tables into dedicated databases.
b)
Set up the slave-parallel-workers parameter.
The above seems to work functionally fine, but we have one doubt/query
about the scalability of this solution.
First, I will jot down the flow as far as I understand (please correct
if wrong) ::
Even in parallel-replication scenario, the master writes all the
binlog (combined
is the table
upon which the OPTIMIZE command was run).
Also note that the outputs are after the OPTIMIZE command had been run
on the respective instance-tables ::
1)
Instance 1, which showed massive improvement in INSERT query
completion times after OPTIMIZE command was run on table XX::
db1show
__
Date: Sun, 7 Sep 2014 23:06:09 +0530
Subject: Re: Query on some MySQL-internals
From: ajaygargn...@gmail.com
To: mgai...@hotmail.com
CC: mysql@lists.mysql.com
Hi Martin.
Thanks for the reply.
As I had mentioned, we are running both
Hi all.
We are facing a very strange scenario.
We have two mysql-instances running on the same machine, and they had
been running functionally fine since about 6 years or so (catering to
millions of records per day).
However, for the last few days, we have been experiencing some elongated
slowness on
Date: Sat, 6 Sep 2014 14:26:22 +0530
Subject: Query on some MySQL-internals
From: ajaygargn...@gmail.com
To: mysql@lists.mysql.com
When I used MySQL as the keystone backend in OpenStack, I found that the
'token' table holds 29 million records (using MyISAM as the engine; the size of
token.MYD is 100G), with 4 new tokens saved per second. That results in
slow queries for a token. Since new tokens are inserted frequently, how
Hi there, I'm struggling to find the total time taken by a database query
on the disk. As I understand it, when a database query starts executing, it
spends some time inside the database engine and some time seeking the result
on disk (if it is not in a cache/buffer).
Can anybody from the group please
/2011/05/23/monitoring-mysql-io-latency-with-performance_schema/
keith
On Mon, Jul 14, 2014 at 5:59 AM, Reindl Harald h.rei...@thelounge.net
wrote:
Hi Satendra,
On Jul 14, 2014, at 3:48 AM, Satendra stdra...@gmail.com wrote:
Hi Satendra,
On 7/14/2014 5:48 AM, Satendra wrote:
a test case like that.
The client knows about statement bounds from query delimiter. By default the
delimiter is semicolon. You can change it to something else with 'delimiter'
command:
delimiter |;
select 1; select 2;|
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB | Skype: sergefp
Hi, all,
In the C API, we can call mysql_query("select 1; select 2");
which just sends the command once to the server, and the server
returns two result sets. So I want to know if there is a command in the
mysqltest framework to do the job?
I want to write a test case like that.
Thank you for your
Many thanks for the kind replies.
I have decoded it in my code, but was just wondering in case I missed any
solution to decode via query.
On Thu, Mar 20, 2014 at 3:05 PM, Michael Dykman mdyk...@gmail.com wrote:
Short answer, no. There is nothing in MySQL to facilitate this. In
general, storing structured data as a blob (JSON, CSV, XML-fragment,
etc..) is an anti-pattern in a relational environment
Hello,
I would like to know if there is a way to decode the json string stored in
one of the fields as text without using triggers or stored procedures.
What I want to do is, within the query, I would like to get one row per
element within the json string.
For example: the json string is as follows:
[
  {
    "name": "Abc",
    "age": 20
  },
  {
    "name"
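Since MySQL 5.x has no built-in JSON functions (the thread's conclusion), the decoding usually happens client-side. A sketch of "one row per array element" in Python; the second array element is invented to round out the truncated example above:

```python
import json

# The column value as stored: a JSON array of objects (from the post,
# with the quoting restored; the second object is invented).
cell = '[{"name": "Abc", "age": 20}, {"name": "Def", "age": 31}]'

# One "row" per array element, mimicking the desired query output.
rows = [(obj["name"], obj["age"]) for obj in json.loads(cell)]
print(rows)  # [('Abc', 20), ('Def', 31)]
```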
Hi,
http://blog.ulf-wendel.de/2013/mysql-5-7-sql-functions-for-json-udf/
This is not the exact solution for your query, but it might help you better if
you add the libraries.
thanks,
-- Kishore Kumar Vaishnav
On Thu, Mar 20, 2014 at 11:35 AM, Sukhjinder K. Narula
narula...@gmail.com wrote
Hi gang,
I am looking for someone I can pay for a few hours to work with me on coming
up with a few needed QUERIES for a large mysql database. The queries will span
across tables, so great knowledge of JOINs will most likely be necessary. We
will work using SKYPE and GoToMeeting.
Please
Hi All,
I have a situation here about InnoDB locking.
In a transaction, we select values from the XYZ transaction table and then
update them like below:
SESSION 1:
START TRANSACTION;
SELECT ID INTO vID FROM XYZ WHERE FLAG = 1 ORDER BY TIME LIMIT 1 FOR UPDATE;
UPDATE XYZ SET FLAG=0 WHERE ID = vID;
Hi Anupam,
We
keep on getting deadlocks due to index locking; there is an index on
FLAG. We can allow phantom reads in session 1; we tried with READ
COMMITTED but still the same. I think the issue is with next-key locking.
Did you try setting binlog-format=ROW as well?
I have a brief explanation of
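One workaround often suggested for this claim-a-row pattern is to do the pick and the flag-flip in a single UPDATE, so there is no window between the SELECT and the UPDATE. Sketched here in SQLite, which has no FOR UPDATE; this shows only the single-statement shape, not InnoDB's locking behaviour:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE XYZ (ID INTEGER PRIMARY KEY, FLAG INTEGER, TIME INTEGER);
INSERT INTO XYZ VALUES (1,1,100),(2,1,50),(3,0,10);
""")

# Claim the oldest flagged row in one statement: the subquery picks it,
# and the UPDATE flips FLAG within the same statement.
conn.execute("""
    UPDATE XYZ SET FLAG = 0
    WHERE ID = (SELECT ID FROM XYZ WHERE FLAG = 1
                ORDER BY TIME LIMIT 1)
""")

# Row 3 was already unflagged, so the newly claimed row is ID 2
# (the oldest of the flagged rows).
claimed = conn.execute(
    "SELECT ID FROM XYZ WHERE FLAG = 0 AND ID <> 3").fetchall()
print(claimed)  # [(2,)]
```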
2014/1/7 h...@tbbs.net
2014/01/06 17:07 +0100, Reindl Harald
what about looking in the server's logfiles
most likely max_allowed_packet laughably low
Is this then, too, likely when the server and the client are the same machine?
I left this out, that it only then happens when the client has been idle, and
right
Now that I installed 5.6.14 on our Vista machine, when using mysql I often
see that error-message, which under 5.5.8 I never saw. What is going on?
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql