member:
ident  given  surname
1      fred   jones
2      john   howard
3      henry  wales
4      jenny  brown

status:
ident  year
1      2017
2      2017
3      2017
4      2017
1      2018
3      2018

I want my query to return the name and ident from the member table for all
members that have no entry in status with year=2018.
I have been working on the following query to achieve this, but it only
returns data when there are no `year` entries at all for a member.
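A standard shape for this requirement is an anti-join; a minimal sketch
against the member and status tables above (NOT EXISTS is one of several
equivalent forms):

SELECT m.ident, m.given, m.surname
FROM member m
WHERE NOT EXISTS (SELECT 1
                  FROM status s
                  WHERE s.ident = m.ident
                    AND s.year = 2018);

-- Equivalent LEFT JOIN form:
SELECT m.ident, m.given, m.surname
FROM member m
LEFT JOIN status s ON s.ident = m.ident AND s.year = 2018
WHERE s.ident IS NULL;

With the sample data this should return idents 2 (john howard) and 4
(jenny brown).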
2017/09/19 17:19 ... Don Wieland:
Of these found rows, I want to omit those rows where there are rows found after
the END TimeStamp, based on the below WHERE clause:
WHERE 1 AND apt.appt_status_id IN (16) AND apt.user_id IN (3) AND apt.time_start
> '1504238399'
We are trying to find Former Clients.
I have a working query:
/* start */
SELECT
u.user_id,
u.first_name AS u_first_name,
u.last_name AS u_last_name,
c.client_id AS c_client_id,
c.first_name AS c_first_name,
c.middle_name AS c_middle_name,
c.last_name AS c_last_name,
c.address AS c_address,
c.city AS c_city,
c.state AS c_state ...
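The "omit rows that have later rows" requirement is again an anti-join; a
minimal sketch, with hypothetical table names (the real ones are truncated
above) and @end standing for the END TimeStamp:

SELECT c.client_id, c.first_name, c.last_name
FROM clients c
JOIN appointments apt ON apt.client_id = c.client_id
WHERE apt.appt_status_id IN (16)
  AND apt.user_id IN (3)
  AND apt.time_start > '1504238399'
  AND NOT EXISTS (SELECT 1
                  FROM appointments a2
                  WHERE a2.client_id = c.client_id
                    AND a2.time_start > @end);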
stien FLAESCH"
To: "MySql"
Sent: Tuesday, 10 January, 2017 14:55:42
Subject: kill query and prepared statements
Hi all,
I have reported this problem before, but I raise it again, since I still get
this problem with 5.7.17
See attached code:
I want to interrupt a long running stat
Seb,
You should log a bug at http://bugs.mysql.com - this is not a developer list.
/Johan
- Original Message -
> From: "Sebastien FLAESCH"
> To: "MySql"
> Sent: Tuesday, 10 January, 2017 14:55:42
> Subject: kill query and prepared statements
>
gain, since I still get
this problem with 5.7.17
See attached code:
I want to interrupt a long running statement with CTRL-C by starting a new
connect to make a KILL QUERY.
I am using the same technique as the mysql client code.
The difference here is that my code is using PREPARED STATEMENTS
nect to make a KILL QUERY.
I am using the same technique as the mysql client code.
The difference here is that my code is using PREPARED STATEMENTS with
mysql_stmt_prepare() etc.
Problem: After interrupting the first query with CTRL-C, the call to
mysql_stmt_close() hangs...
Maybe I am missing
Hi all,
I have reported this problem before, but I raise it again, since I still get
this problem with 5.7.17
See attached code:
I want to interrupt a long running statement with CTRL-C by starting a new
connect to make a KILL QUERY.
I am using the same technique as the mysql client code
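For reference, the kill itself is plain SQL issued from the second
connection; a minimal sketch (the id 1234 is hypothetical — in the C API it
would come from mysql_thread_id() on the connection running the statement):

-- On the connection that will run the long statement:
SELECT CONNECTION_ID();   -- e.g. 1234

-- From the new connection opened by the SIGINT handler:
KILL QUERY 1234;          -- aborts the statement but keeps the session alive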
On 07/03/2016 06:55 PM, Sebastien FLAESCH wrote:
Hi all,
I use the following technique to cancel a long running query:
In the SIGINT signal handler, I restart a connection and I perform a
KILL QUERY mysql-process-id-of-running-query
This was working fine with MySQL 5.6.
But with 5.7 (5.7.11), we now get a different result:
A) The query ... hangs in recvfrom():
recvfrom(3,
1) ... on each table will
make MySQL take more and more memory. After scanning all of the tables,
mysql has started using more than 1GB of swap.
2) We had a migration recently to add a column to half of the tables we
have. The query is like 'ALTER ONLINE TABLE table_name ADD COLUMN IF NOT
EXISTS (`
On 3/26/2016 4:36 PM, shawn l.green wrote:
On 3/25/2016 6:39 AM, JAHANZAIB SYED wrote:
I have Freeradius 2.x with MySQL 5.5 in Ubuntu.
I want to query user quota for the current date. I am using the following code:
SELECT (SUM(acctinputoctets)+SUM(acctoutputoctets)) AS Total FROM radacct
WHERE (acctstarttime BETWEEN DATE_FORMAT(NOW(),'%Y-%m-%d') AND NOW() AND
acctstoptime BETWEEN DATE_
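A minimal sketch of one way to write the date window, assuming the intent is
"sessions that started today" (note the parentheses in the snippet above do
not balance as quoted):

SELECT SUM(acctinputoctets) + SUM(acctoutputoctets) AS Total
FROM radacct
WHERE acctstarttime >= CURDATE()                    -- midnight today
  AND acctstarttime < CURDATE() + INTERVAL 1 DAY;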
Hi All,
Perhaps a bit of a trivial question, but in terms of query statistics (i.e.
how many SELECT / INSERT / DELETE / etc. queries have been run against the
server)...
When you take an INSERT ... ON DUPLICATE KEY UPDATE ...
Under the two conditions (i.e. either INSERT, or UPDATE if the record
already exists), which counter is incremented?
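For reference, the counters in question are the Com_xxx status variables; a
minimal sketch (table t and its columns are hypothetical; whether the UPDATE
branch is still counted under Com_insert is worth verifying on your version):

SHOW GLOBAL STATUS LIKE 'Com_insert';
SHOW GLOBAL STATUS LIKE 'Com_update';

INSERT INTO t (id, hits) VALUES (1, 1)
  ON DUPLICATE KEY UPDATE hits = hits + 1;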
I need to count the number of rows in a table that are grouped by a
list of columns, but I also need to exclude rows that have more than
some count when grouped by a different set of columns. Conceptually,
this is not hard, but I am having trouble doing this efficiently.
My first counting query would be this:
SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id
But from this count I need to subtract the count of rows that have
more than 50 rows with a
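A minimal sketch of one way to combine the two groupings, assuming the
"different set of columns" is a hypothetical (colA, colB): pre-aggregate the
oversized groups and anti-join against them before the outer GROUP BY:

SELECT COUNT(*)
FROM cst_rollup r
LEFT JOIN (SELECT colA, colB
           FROM cst_rollup
           GROUP BY colA, colB
           HAVING COUNT(*) > 50) big
       ON big.colA = r.colA AND big.colB = r.colB
WHERE big.colA IS NULL          -- keep only rows outside the oversized groups
GROUP BY r.target_name_id, r.ep, r.roiname, r.recipe_process,
         r.recipe_product, r.recipe_layer, r.f_tag_bottom,
         r.measname, r.recipe_id;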
On Thu, Oct 22, 2015, 5:18 PM Don Wieland wrote:
Hi gang,
I have a query:
SELECT
p.pk_ProductID,
p.Description,
i.Quantity
FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND
i.fk_InvoiceID IN (1,2,3)
WHERE p.pk_ProductID IN (1,2,3);
It produces a list like the following:
1,Banana
On Oct 22, 2015, at 2:41 PM, Michael Dykman wrote:
> I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
> of i.fk...? It is the actual value you are selecting as well as being on
> the primary table in the query.
Yeah I tried that - actually
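The grouping under discussion would look something like this minimal sketch
(that one row per product with summed quantities is the goal is an
assumption, since the expected output above is truncated):

SELECT p.pk_ProductID,
       p.Description,
       SUM(i.Quantity) AS total_qty
FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID
                           AND i.fk_InvoiceID IN (1,2,3)
WHERE p.pk_ProductID IN (1,2,3)
GROUP BY p.pk_ProductID, p.Description;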
On 10/20/2015 1:54 PM, Don Wieland wrote:
Hi all,
Trying to get a query working:
SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS
alloc
FROM hiv_transactions ht
LEFT JOIN tk_orphans o ON ht.orphan_id = o.
> On Oct 20, 2015, at 1:24 PM, shawn l.green wrote:
> Which release of MySQL are you using?
Version 5.5.45-cll
> How many rows do you get if you remove the GROUP_CONCAT operator? We don't
> need to see the results. (sometimes it is a good idea to look at the raw,
> unprocessed results)
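A note on the symptom being probed: with an aggregate like GROUP_CONCAT and
no GROUP BY, MySQL collapses the result to a single row. A minimal sketch of
the likely intended shape (the o.orphan_id join column and the hti table name
are assumptions — the original is cut off mid-join):

SELECT ht.*,
       CONCAT(o.first_name, ' ', o.last_name) AS orphan,
       GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code
                    SEPARATOR ', ') AS alloc
FROM hiv_transactions ht
LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_transaction_items hti ON hti.transaction_id = ht.transaction_id
GROUP BY ht.transaction_id;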
- Original Message -
> From: "Shawn Green"
> Subject: Re: Query optimizer-miss with unqualified expressions, bug or
> feature?
>
> On a more serious note, indexes with limited cardinality are less useful
> than those with excellent cardinality. Cardinality determines the filtering
> effect of a predicate. Using a true BOOLEAN rather than a TINYINT would
> give a better estimate of the filtering effect, and thus of the estimated
> number of rows as the outcome of a query.
*Actually, fuzzy logic has lots of practical application in real world
situations. They are just not using t
On 10/19/2015 3:48 PM, Roy Lyseng wrote:
Hi Ben,
On 19.10.15 16.07, Ben Clewett wrote:
Hi Roy,
Thanks for the clear explanation.
I guess (hypothetically) the optimizer could see if it has a key, and
then use two starts: one on 'a > 0' and one on 'a < 0', taking a union
of the result? Which might make a significant difference to something?
Ben.
On 2015-10-19 14:19, Roy Lyseng wrote:
Hi Ben,
On 19.10.15 15.10, Ben Clewett wrote:
I have noticed that an unqualified boolean expression cannot be
optimized by MySQL to use an index in 5.6.24.
For example:
CREATE TABLE t (
i INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
a BOOLEAN NOT NULL,
KEY a (a)
) ENGINE=InnoDB;
This will hit key 'a':
SELECT * FROM t WHERE a = TRUE;
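For contrast with the indexed form above, the unqualified form and the union
rewrite Ben proposes would look like this minimal sketch (behaviour as
described in the thread for 5.6.24):

-- Does NOT use key 'a' (unqualified boolean predicate):
SELECT * FROM t WHERE a;

-- The hypothetical rewrite as two range scans, unioned (the ranges are
-- disjoint, so UNION ALL is safe):
SELECT * FROM t WHERE a > 0
UNION ALL
SELECT * FROM t WHERE a < 0;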
| 3 | Oranges |
| 4 | Kiwis   |
+---+---------+
4 rows in set (0.00 sec)

mysql> select * from purchases;
+----+---------+----------+
| ID | CUST_ID | FRUIT_ID |
+----+---------+----------+
|  2 |       3 |        2 |
|  3 |       1 |        4 |
|  4 |       1 |        2 |
|  5 |       2 |        1 |
+----+---------+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?
--
Mogens
+66 8701 33224
I think you are going t
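This is the same anti-join pattern as the member/status question earlier; a
minimal sketch, assuming a customers table that maps Joey's name to his
CUST_ID (only the purchases output survives above, so the fruit and customer
table and column names are assumptions):

SELECT f.*
FROM fruits f
WHERE NOT EXISTS (SELECT 1
                  FROM purchases p
                  JOIN customers c ON c.ID = p.CUST_ID
                  WHERE p.FRUIT_ID = f.ID
                    AND c.NAME = 'Joey');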
Furthermore, use logstash to collect the audit events and you can filter
out anything that wasn't an error and move it to a query error log.
On Wed, Jun 24, 2015 at 5:32 PM, Singer Wang wrote:
> Yep, as shown below:
>
> root@audit-db.ec2:(none)> select fark from fark from fa
> ERROR 1064 (42000): You have an error in your SQL syntax; ... near 'fark
> from fa...' at line 1
> root@audit-db.ec2:(none)>
>
> Jun 24 16:29:52 audit-db percona-audit:
> {"audit_record":{"name":"Query","record":"217_2015-06-24T16:29:52","timestamp":"2015-06-24T16:29:52
> UTC","command_class":"error"
> From: "Singer X.J. Wang"
> Subject: Re: server-side logging of query errors?
> You could log all queries using the audit plugin, 15% hit..
Fair point, though: maybe one of the different audit plugins has the capability
to specifically log faulty requests. Have a look through what fields they can
log (eg. STATUS, COMMAND, NAME).
> ... a security risk; a malicious (or just stupid, see Hanlon's razor) user
> could spam your server with malformed requests until the logging disk runs
> full, at which point the daemon would suspend operations until space is
> freed.
I don't think it's a valid argument - the same is true right now for
"general query log". Any stupid/malicious user can produce loads of
queries and fill the disk if one has "general query log" enabled.
Maybe one of the proxies out there has support for such logging, but I'm not
really familiar with any of them.
- Original Message -
> From: "Tomasz Chmielewski"
> To: "MySql"
> Sent: Tuesday, 23 June, 2015 09:35:46
> Subject: server-side logging of query errors?
Suppose I run a query which has a syntax error:
mysql> blah;
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right
syntax to use near 'blah' at line 1
How can I get mysql server to log this error?
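One hedged option on 5.6+ servers (with performance_schema enabled) is to
read failed statements back from the statement history tables instead of a
log file; a minimal sketch:

SELECT sql_text, mysql_errno, message_text
FROM performance_schema.events_statements_history
WHERE mysql_errno <> 0;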
The table is largish (37,247,884 rows with 282 columns). Caching is off for
my testing, so it's not related to that. To short-circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as opposed to an =.
Here are the queries and their explains. The significant difference is
that the faster query has "Using
intersect(data_cst_bbccbce0,data_cst_fba12377)" in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster.
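A minimal sketch of the comparison being described (data_cst and its columns
are stand-ins — the real names are truncated): the single-value IN and the
equality should be semantically identical, and EXPLAIN shows whether the
optimizer picks a single index or an index_merge intersection:

EXPLAIN SELECT * FROM data_cst WHERE col_a IN (42) AND col_b = 'x';
EXPLAIN SELECT * FROM data_cst WHERE col_a = 42  AND col_b = 'x';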
On 15.11.2014 01:06, Peter Brawley wrote:
> Let's see the results of Explain Extended this query, & result of Show
> Create Table cc_member_queue_end_log.
> PB
cc_member_queue_end_log is not of interest, it is used just as a series
of numbers. It may be any table with ids.
I've cha
-
On 2014-11-13 1:34 PM, Mimiko wrote:
Hello. I have this table:
> show create table cc_agents_tier_status_log:
CREATE TABLE "cc_agents_tier_status_log
Table                      Non_unique  Key_name                         Seq_in_index  Column_name              Collation  Cardinality  Index_type
cc_agents_tier_status_log  1           FK_cc_agents_tier_status_log_3   1             cc_queue_id              A          14           BTREE
cc_agents_tier_status_log  1           FK_cc_agents_tier_status_log_1   1             cc_agent_tier_status_id  A          2            BTREE
cc_agents_tier_status_log  1           IDX_cc_agents_tier_status_log_7  1             id                       A          23999        BTREE
2014/10/08 11:38 -0700, Jan Steinman:
However, this pattern will often result in numerous empty columns -- empties
that would not be there had the table not been pivoted.
What techniques do *you* use for avoiding this anti-pattern? Am I limited to
using a separate programming language (PHP, in this case) with a separate
COUNT(*) query for each possible column, then CASEing the generation of the
column SQL? Seems awfully ugly!
Thanks in advance for any ideas.
2014/10/08 16:42 -0500, Peter Brawley:
MySQL stored procedures are less incomplete, and can do it, but it's awkward.
I use the app language (eg PHP) to implement such logic.
PB
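The usual single-query alternative to one COUNT(*) per column is conditional
aggregation; a minimal sketch with a hypothetical observations(location,
species) table (MySQL sums boolean expressions as 0/1):

SELECT location,
       SUM(species = 'heron') AS heron,
       SUM(species = 'egret') AS egret,
       SUM(species = 'crane') AS crane
FROM observations
GROUP BY location;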
> Date: Sun, 7 Sep 2014 23:06:09 +0530
> Subject: Re: Query on some MySQL-internals
> From: ajaygargn...@gmail.com
> To: mgai...@hotmail.com
> CC: mysql@lists.mysql.com
>
> Hi Martin.
>
> Thanks for the reply.
ts (here "XX" is the table
upon which the OPTIMIZE command was run).
Also note that the outputs are after the OPTIMIZE command had been run
on the respective instance-tables ::
1)
Instance 1, which showed massive improvement in INSERT query
completion times after OPTIMIZE command
> Date: Sat, 6 Sep 2014 14:26:22 +0530
> Subject: Query on some MySQL-internals
> From: ajaygargn...@gmail.com
> To: mysql@lists.mysql.com
Hi all.
We are facing a very strange scenario.
We have two mysql-instances running on the same machine, and they had
been running functionally fine for about 6 years or so (catering to
millions of records per day).
However, since the last few days, we have been experiencing some elongated
slowness on
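For reference, a minimal sketch of the command under discussion (XX stands
for the table name, as it does in the thread):

OPTIMIZE TABLE XX;
SHOW TABLE STATUS LIKE 'XX';   -- compare Data_free and Data_length after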
>> However db2-statement-2 can be picked up by
>> any other sql worker thread.
>>
>> This is a good feature added in mysql, however it still needs to go
>> through a lot of testing. Please share your observations and findings
>> in case they differ.
On ... 2014 at 8:27 AM, Ajay Garg wrote:
>>> Hi all.
>>>
>>> We have replication set-up, where we cater to HUUGEE amounts of data.
>>> Since quite some time, we have been facing issues wherein the slave
>>> lags behind master quite a lot.
>>>
>>> So, yesterday we were able to set up parallel replication, by
>>> incorporating the following changes ::
>>>
>>> a)
>>> To begin with, partitioned some tables into dedicated databases.
>>>
>>> b)
>>> Set up the "slave-parallel-workers" parameter.
>>>
>>> The above seems to work functionally fine, but we have one doubt/query
>>> about the scalability of this solution.
>>> First, I will jot down the flow as far as I understand (please correct
>>> me if I am wrong).
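A minimal sketch of the parameter being set (the value 4 is illustrative; on
5.6, parallel apply is per-database, which is why step (a) moves tables into
dedicated databases):

-- my.cnf on the slave:
--   slave-parallel-workers = 4

-- or at runtime:
STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_workers = 4;
START SLAVE SQL_THREAD;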
When I used MySQL as the Keystone backend in OpenStack, I found that the
'token' table held 29 million records (using MyISAM as the engine; the size of
token.MYD is 100G), with 4 new tokens saved per second. That results in
slow queries for a token, since inserting new tokens
Hi there, I'm struggling to find the total time taken by a database query
on the disk. As I understand it, when a database query starts execution it
takes some time inside the database engine & some time to seek the result
from disk (if that is not in cache/buffer).
Can anybody from the group help?
Hi Satendra,
Have a look at monitoring MySQL I/O latency with performance_schema:
http://www.markleith.co.uk/2011/05/23/monitoring-mysql-io-latency-with-performance_schema/
keith
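In the spirit of that link, a minimal sketch against performance_schema
(5.6+; timer columns are in picoseconds):

SELECT file_name,
       count_read,
       sum_timer_read / 1e12 AS read_seconds
FROM performance_schema.file_summary_by_instance
ORDER BY sum_timer_read DESC
LIMIT 10;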
Hi, all,
In the C API, we can call mysql_query("select 1; select 2"), which sends the
command once to the server, and the server returns two result sets. So I want
to know if there is a command in the mysqltest framework to do the job?
I want to write a test case like that.
Thank you for your reply.
The client knows about statement bounds from the query delimiter. By default
the delimiter is semicolon. You can change it to something else with the
'delimiter' command:
delimiter |;
select 1; select 2;|
BR
Sergei
--
Sergei Petrunia
Hello,
I would like to know if there is a way to decode the json string stored in
one of the fields as text without using triggers or stored procedures.
What I want to do is, within the query, get one row per element within the
json string.
For example: the json string is as follows:
[
{
Hi,
http://blog.ulf-wendel.de/2013/mysql-5-7-sql-functions-for-json-udf/
This is not the exact solution for your query, but it might help you if you
add the libraries.
*thanks,*
*-- *Kishore Kumar Vaishnav
On Thu, Mar 20, 2014 at 3:05 PM, Michael Dykman wrote:
> Short answer, no. There is nothing in MySQL to facilitate this. In
> general, storing structured data as a blob (JSON, CSV, XML-fragment,
> etc..) is an anti-pattern in a relational database.
Many thanks for the kind replies.
I have decoded it in my code but was just wondering in case I missed any
solution to decode via query.
Hi gang,
I am looking for someone that I can pay for a few hours to work with me on
coming up with a few needed QUERIES for a large mysql database. The queries
will span across tables, so great knowledge of JOINs will most likely be
necessary. We will work using SKYPE and GoToMeeting.
Please contact
Hi Anupam,
> We keep getting deadlocks due to index locking; there is an index on
> FLAG. We can allow phantom reads in session 1. We tried READ COMMITTED
> but still the same; I think the issue is with next-key locking.
Did you try setting binlog-format=ROW as well?
I have a brief explanation of this
Hi All,
I have a situation here about InnoDB locking.
In a transaction, we select from the XYZ transaction table and then update
it, like below:
SESSION 1:
START TRANSACTION;
SELECT ID INTO vID FROM XYZ WHERE FLAG = 1 ORDER BY TIME LIMIT 1 FOR UPDATE;
UPDATE XYZ SET FLAG=0 WHERE ID = vID;
COMMIT
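A minimal sketch of the two knobs mentioned in the reply (that the workload
tolerates READ COMMITTED is an assumption; row-based binlogging is what makes
that isolation level safe to use with InnoDB):

-- my.cnf:
--   binlog-format = ROW

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT ID INTO @vID FROM XYZ WHERE FLAG = 1 ORDER BY TIME LIMIT 1 FOR UPDATE;
UPDATE XYZ SET FLAG = 0 WHERE ID = @vID;
COMMIT;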
On 06.01.2014 15:36, h...@tbbs.net wrote:
> Now that I installed 5.6.14 on our Vista machine, when using "mysql" I often
> see that error-message, which under 5.5.8 I never saw. What is going on?
What about looking in the server's logfiles?
Most likely "max_allowed_packet" is laughably low.
2014/01/06 17:07 +0100, Reindl Harald:
> what about looking in the server's logfiles
> most likely "max_allowed_packet" laughably low
Is this then, too, likely when the server and the client are the same machine?
I left this out: it only happens when the client has been idle a while.
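For reference, a minimal sketch of checking and raising the suspected
variable (64M is an arbitrary illustrative value):

SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 67108864;  -- 64M, applies to new connections

-- or persistently in my.cnf:
--   [mysqld]
--   max_allowed_packet = 64M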