select recipe_id, max(maxdatetime) from data_csmeta
where recipe_id = 19166
group by recipe_id;
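The query above can be checked quickly outside MySQL. A minimal sketch using SQLite via Python's sqlite3 as a stand-in (table and column names come from the quoted query; the rows are invented): filtering with WHERE prunes rows before the GROUP BY, instead of grouping everything and discarding groups with HAVING.

```python
import sqlite3

# In-memory stand-in for the data_csmeta table from the quoted query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data_csmeta (recipe_id INTEGER, maxdatetime TEXT)")
con.executemany(
    "INSERT INTO data_csmeta VALUES (?, ?)",
    [(19166, "2013-09-20 10:00:00"),
     (19166, "2013-09-23 16:15:00"),
     (20000, "2013-09-01 00:00:00")],
)

# WHERE filters before grouping, so only the group of interest is built.
row = con.execute(
    "SELECT recipe_id, MAX(maxdatetime) FROM data_csmeta "
    "WHERE recipe_id = 19166 GROUP BY recipe_id"
).fetchone()
print(row)  # -> (19166, '2013-09-23 16:15:00')
```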
On Mon, Sep 23, 2013 at 4:15 PM, shawn green shawn.l.gr...@oracle.com wrote:
Hi Larry,
On 9/23/2013 3:58 PM, Larry Martell wrote:
On Mon, Sep 23, 2013 at 1:51 PM, Sukhjinder K. Narula
Why don't you try snapshot backups, where the lock is held for a shorter duration? Or
can't you take mysql dumps at night, when there is less db activity?
On Thursday, August 29, 2013, Ed L. mysql@bluepolka.net wrote:
Mysql newbie here, looking for some help configuring 5.0.45 master-slave
if you have LVM then the lock is held only for the duration of taking the
snapshot, which would be a few minutes if there is very little activity on the db.
On Wed, Aug 28, 2013 at 3:08 PM, Ed L. mysql@bluepolka.net wrote:
On 8/28/13 2:00 PM, Ananda Kumar wrote:
Why don't u try snapshot backups
can you please share the code of the trigger, and any kind of error you're getting
On Wed, May 29, 2013 at 6:49 PM, Neil Tompkins neil.tompk...@googlemail.com
wrote:
Hi,
I've a trigger that writes some data to a temporary table; and at the end
of the trigger writes all the temporary table data
,NewValue,
LoggedOn)
VALUES (UUID(),1,'UPDATE','HotelRateAvailability', 1,'RoomsToSell',1,2,
NOW());
On Wed, May 29, 2013 at 2:49 PM, Ananda Kumar anan...@gmail.com wrote:
can you please share the code of the trigger, and any kind of error you're
getting
On Wed, May 29, 2013 at 6:49 PM, Neil
;
On Wed, May 29, 2013 at 2:57 PM, Ananda Kumar anan...@gmail.com wrote:
did you check if data is getting inserted into tempHotelRateAvailability?
On Wed, May 29, 2013 at 7:21 PM, Neil Tompkins
neil.tompk...@googlemail.com wrote:
This is my Trigger which doesn't seem to work; but doesn't cause
Does your query use proper indexes?
Does your query scan a small number of blocks/rows?
Can you share the EXPLAIN plan of the sql?
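The advice above is about MySQL's EXPLAIN; as an analogous, runnable illustration, SQLite's EXPLAIN QUERY PLAN (via Python's sqlite3, used here as a stand-in) shows whether an index is used. The books/ksd names are borrowed from the EXPLAIN thread later in this digest; the index name is invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, ksd TEXT)")
con.execute("CREATE INDEX idx_books_ksd ON books (ksd)")

# Indexed lookup: the plan should name the index.
plan_indexed = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM books WHERE ksd = '4204661375'"
).fetchall()

# Wrapping the column in a function defeats the index, forcing a full scan --
# the same trap as WHERE LOWER(ksd)=... in the thread below.
plan_scan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM books WHERE LOWER(ksd) = '4204661375'"
).fetchall()

print(plan_indexed)  # detail mentions idx_books_ksd
print(plan_scan)     # detail mentions SCAN
```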
On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich
ilya.kazakev...@jetbrains.com wrote:
Hello,
I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when it reads
Hello Guys,
I am trying to setup a mysql-cluster with two data nodes and one management
node.
The sequence of steps I followed is:
Ran 'ndb_mgmd' on the management node
Ran 'ndbd --initial' on both data nodes
Ran 'mysqld' on both data nodes
Then the status of the cluster on
When I used MSSQL, I used the mail agent, so I am expecting something similar in MySQL.
On Mon, Apr 8, 2013 at 4:02 PM, Johan De Meersman vegiv...@tuxera.be wrote:
- Original Message -
From: Bharani Kumar bharanikumariyer...@gmail.com
How to enable mail agent service in MYSQL. and what
not all the rows, only the distinct q_id,
On Wed, Mar 13, 2013 at 8:28 PM, Johan De Meersman vegiv...@tuxera.be wrote:
--
From: Ananda Kumar anan...@gmail.com
Subject: Re: Retrieve most recent of multiple rows
select qid,max(atimestamp) from kkk where qid
:
--
From: Ananda Kumar anan...@gmail.com
Subject: Re: Retrieve most recent of multiple rows
select qid,max(atimestamp) from kkk where qid in (select distinct qid
from
kkk) group by qid;
What use is that where statement? It just says to use all the rows in the
table
)
--
---
11 13-MAR-13 02.04.04.00 PM
10 13-MAR-13 02.03.36.00 PM
12 13-MAR-13 02.03.48.00 PM
On Wed, Mar 13, 2013 at 7:28 PM, Ananda Kumar anan...@gmail.com wrote:
can you please share the sql that you executed to fetch
select * from tab where anwer_timestamp in (select max(anwer_timestamp)
from tab where q_id in (select distinct q_id from tab) group by q_id);
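A caveat on the quoted query: matching on `timestamp IN (SELECT MAX(...) ... GROUP BY q_id)` can return extra rows when one group's non-max timestamp happens to equal another group's max. Pairing each q_id with its own max and joining back avoids that. A sketch using SQLite via Python's sqlite3 as a stand-in (column names adapted from the thread; data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (answer_id INTEGER, q_id INTEGER, atimestamp TEXT)")
con.executemany("INSERT INTO tab VALUES (?, ?, ?)", [
    (1, 10, "2013-03-13 14:03:36"),
    (2, 11, "2013-03-13 14:04:04"),
    (3, 12, "2013-03-13 14:03:48"),
    (4, 10, "2013-03-13 14:01:00"),   # older answer for q_id 10
])

# Pair each q_id with its own max timestamp, then join back for the full row.
rows = con.execute("""
    SELECT t.*
    FROM tab t
    JOIN (SELECT q_id, MAX(atimestamp) AS mx FROM tab GROUP BY q_id) m
      ON t.q_id = m.q_id AND t.atimestamp = m.mx
    ORDER BY t.q_id
""").fetchall()
print(rows)  # one row per q_id, each the most recent answer
```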
On Wed, Mar 13, 2013 at 6:48 PM, Norah Jones nh.jone...@gmail.com wrote:
I have a table which looks like this:
answer_id q_id answer qscore_id
can you please share the sql that you executed to fetch the above data
On Wed, Mar 13, 2013 at 7:19 PM, Johan De Meersman vegiv...@tuxera.be wrote:
- Original Message -
From: Norah Jones nh.jone...@gmail.com
Subject: Retrieve most recent of multiple rows
4 10
`.* TO 'myuserid'@'%'
|
+---+
2 rows in set (0.00 sec)
mysql>
Does the % mean I can do the operations from other hosts too, using ssh?
thank you.
--
Best Regards,
Prabhat Kumar
MySQL DBA
My Blog
Thanks Kind Regards,
TRIMURTHY
you can use a checksum to make sure there is no corruption in the file
On Wed, Nov 7, 2012 at 6:39 PM, Claudio Nanni claudio.na...@gmail.com wrote:
Gary,
It is always good practice to test the whole backup/restore solution.
So nothing is better than testing a restore; actually it should be
why don't you create a softlink
On Tue, Oct 30, 2012 at 11:05 PM, Tim Johnson t...@akwebsoft.com wrote:
* Reindl Harald h.rei...@thelounge.net [121030 08:49]:
The drupal mysql datafiles are located at
/Applications/drupal-7.15-0/mysql/data
as opposed to /opt/local/var/db/mysql5 for
the socket. like on any other unix machine.
how did i connect mysql to what exactly?
On 10/18/12 6:42 AM, Ananda Kumar wrote:
how did you connect to mysql on your laptop
On Thu, Oct 18, 2012 at 1:19 AM, kalin ka...@el.net
mailto:ka...@el.net wrote:
thanks amanda... the local worked
it
before.
this is all on os x - 10.8.2...
On 10/17/12 1:25 PM, Ananda Kumar wrote:
also try using load data local infile 'file path' and see if it works
On Wed, Oct 17, 2012 at 10:52 PM, Ananda Kumar anan...@gmail.com
mailto:anan...@gmail.com wrote:
does both directory have permission 777
does both directory have permission 777
On Wed, Oct 17, 2012 at 9:27 PM, Rick James rja...@yahoo-inc.com wrote:
SELinux ?
-Original Message-
From: Lixun Peng [mailto:pengli...@gmail.com]
Sent: Tuesday, October 16, 2012 9:03 PM
To: kalin
Cc: Michael Dykman;
also try using load data local infile 'file path' and see if it works
On Wed, Oct 17, 2012 at 10:52 PM, Ananda Kumar anan...@gmail.com wrote:
does both directory have permission 777
On Wed, Oct 17, 2012 at 9:27 PM, Rick James rja...@yahoo-inc.com wrote:
SELinux ?
-Original Message
try this command and see if you can get more info about the error
show innodb status\G
On Mon, Sep 10, 2012 at 2:25 PM, Machiel Richards - Gmail
machiel.richa...@gmail.com wrote:
Hi All
I am hoping someone can point me in the right direction.
We have a mysql 5.0 database which is
overwrite the info, or there is nothing logged.
We even tried running the create statement and immediately running
Show innodb status, but nothing for that statement.
Regards
On 09/10/2012 11:05 AM, Ananda Kumar wrote:
try this command and see if you can get more info about the error
start with 500MB and try
On Mon, Sep 10, 2012 at 3:31 PM, Machiel Richards - Gmail
machiel.richa...@gmail.com wrote:
Hi, the sort_buffer_size was set to 8MB, as well as 32MB for the session
(it is currently 1MB), and we retried with the same result.
On 09/10/2012 11:55 AM, Ananda Kumar wrote:
can
how many rows will this temp table hold, and what would be its size?
On Mon, Sep 10, 2012 at 5:03 PM, Machiel Richards - Gmail
machiel.richa...@gmail.com wrote:
Hi,
We confirmed that the /tmp directory permissions are set to rwxrwxrwt
and it is owned by root, the same as all our other servers.
a temp table with only one field in order
to insert one row for testing, but we are currently not able to create any
temporary tables whatsoever as even the simplest form of table still gives
the same error.
Regards
On 09/10/2012 02:33 PM, Ananda Kumar wrote:
this temp table will hold how
the firewall settings and that is only rules for
connections.
On 09/10/2012 02:40 PM, Ananda Kumar wrote:
did you check if there are any firewall settings forbidding you from creating
files? Check if SELinux is disabled
On Mon, Sep 10, 2012 at 6:08 PM, Machiel Richards - Gmail
machiel.richa
if the server is offline, what kind of operations happen on it?
On Thu, Aug 2, 2012 at 11:31 AM, Pothanaboyina Trimurthy
skd.trimur...@gmail.com wrote:
Hi everyone,
I have 4 mysql servers; out of those, one server will
always be online and the remaining will be offline and
you can set this in the application server.
You can also set this parameter in my.cnf:
wait_timeout=120 (in seconds).
But the above parameter applies only to inactive sessions
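The wait_timeout advice above, sketched concretely (the value 120 is just the example from the thread; note the caveat that it only affects idle connections):

```
[mysqld]
# Drop connections that have been idle this many seconds.
wait_timeout = 120
# Interactive clients (e.g. the mysql CLI) use this value instead.
interactive_timeout = 120
```

At runtime the same can be done with SET GLOBAL wait_timeout = 120; (which affects sessions opened afterwards), or SET SESSION wait_timeout = 120; for the current session only.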
On Mon, Jul 23, 2012 at 6:18 PM, walter harms wha...@bfs.de wrote:
Hi list,
is there a switch where i can restrict the
you can check the slow query log; this will give you all the sql's which
are taking a long time to execute
On Mon, Jul 23, 2012 at 7:38 PM, walter harms wha...@bfs.de wrote:
Am 23.07.2012 15:47, schrieb Ananda Kumar:
you can set this in the application server.
You can also set this parameter
why don't you set up a staging env, which is very similar to your
production, and tune all long-running sql
On Mon, Jul 23, 2012 at 8:02 PM, walter harms wha...@bfs.de wrote:
Am 23.07.2012 16:10, schrieb Ananda Kumar:
you can check the slow query log, this will give you all the sql's which
so, it's more about inactive connections, right?
What do you mean by NEVER LOGOUT?
On Mon, Jul 23, 2012 at 8:17 PM, walter harms wha...@bfs.de wrote:
Am 23.07.2012 16:37, schrieb Ananda Kumar:
why dont u setup a staging env, which is very much similar to your
production and tune all long
, schrieb Ananda Kumar:
so, it's more about inactive connections, right?
What do you mean by NEVER LOGOUT?
The programs watch certain states in the database;
they connect automatically at db startup, and disconnecting
is an error case.
re,
wh
On Mon, Jul 23, 2012 at 8:17 PM, walter harms wha...@bfs.de
SQL> select * from orddd;
ORDERID PRODID
-- --
2 5
1 3
1 2
2 7
1 5
SQL> select prodid,count(*) from orddd group by PRODID having count(*) > 1;

    PRODID   COUNT(*)
---------- ----------
         5          2
the column used in the order by clause should be the first column in the
select statement to make the index work
On Wed, Jul 11, 2012 at 3:16 PM, Reindl Harald h.rei...@thelounge.net wrote:
Am 11.07.2012 11:43, schrieb Ewen Fortune:
Hi,
On Wed, Jul 11, 2012 at 10:31 AM, Reindl Harald
can you show the explain plan for your query
On Tue, Jul 10, 2012 at 2:59 PM, Darek Maciera darekmaci...@gmail.com wrote:
Hello,
I have table:
mysql> DESCRIBE books;
|id |int(255) | NO | PRI |
NULL | auto_increment |
| idu
Kumar anan...@gmail.com:
can you show the explain plan for your query
Thanks, for reply!
Sure:
mysql> EXPLAIN SELECT * FROM books WHERE LOWER(ksd)=LOWER('4204661375');
++-+-+--+---+--+-+--++-+
| id
looks like the value that you gave for myisam_max_sort_size is not enough
for the index creation, and hence it is doing a REPAIR WITH KEYCACHE.
Use the query below to get the minimum value required for myisam_max_sort_size
to avoid repair with keycache:
select
a.index_name as index_name,
mysqldump --databases test --tables ananda > test.dmp
mysql> show create table ananda\G
*** 1. row ***
Table: ananda
Create Table: CREATE TABLE `ananda` (
`id` int(11) DEFAULT NULL,
`name` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT
Did you try using the IGNORE keyword with the LOAD DATA INFILE command?
This will stop duplicate rows from being inserted and let the load proceed.
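The IGNORE behaviour described above can be illustrated outside MySQL. A sketch using SQLite via Python's sqlite3, whose INSERT OR IGNORE plays the role of MySQL's LOAD DATA ... IGNORE for duplicate keys (table and data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")

# Incoming batch contains a duplicate key (id=1 appears twice).
batch = [(1, "first"), (2, "second"), (1, "dup")]

# INSERT OR IGNORE skips duplicate-key rows and keeps loading,
# analogous to LOAD DATA ... IGNORE in MySQL.
con.executemany("INSERT OR IGNORE INTO t VALUES (?, ?)", batch)

rows = con.execute("SELECT id, name FROM t ORDER BY id").fetchall()
print(rows)  # -> [(1, 'first'), (2, 'second')]
```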
On Fri, Jun 15, 2012 at 11:05 AM, Keith Keller
kkel...@wombat.san-francisco.ca.us wrote:
On 2012-06-14, Gary Aitken my...@dreamchaser.org wrote:
I have mysql 5.5.
I am able to use mysqldump to export data with quotes and the dump had
escape character as seen below
LOCK TABLES `ananda` WRITE;
/*!40000 ALTER TABLE `ananda` DISABLE KEYS */;
INSERT INTO `ananda` VALUES
I am having 8 innodb tables and at the same time
I am joining 4 tables to get the report.
I am maintaining 60days records because the user will try to generate the
report out of 60 days in terms of second, minute, hourly, weekly and
Monthly report also.
From: Ananda Kumar [mailto:anan
\,index_message_id
idx_unique_key_ib_xml
153
reports.pl.Message_Id
1
Using where
Sorry for the previous mail... this is my execution plan for 1.5 million
records...
From: Ananda Kumar [mailto:anan...@gmail.com]
Sent: Thursday, June 14, 2012 3
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
is the iptables service running on the db server? If yes, try stopping it and
check
On Wed, Jun 13, 2012 at 5:04 PM, Claudio Nanni claudio.na...@gmail.com wrote:
2012/6/13 Johan De Meersman vegiv...@tuxera.be
- Original Message -
From: Claudio Nanni claudio.na...@gmail.com
Did you try with MyISAM tables?
They are supposed to be good for reporting requirements
On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.com wrote:
I'll second Johan's comments.
Count the disk hits!
One minor change: Don't store averages in the summary table; instead
store the
is there anything you can see in /var/log/messages
On Tue, Jun 12, 2012 at 5:08 PM, Claudio Nanni claudio.na...@gmail.com wrote:
Johan,
Print out warnings such as Aborted connection... to the error log.
The docs are not telling if they comprise Aborted clients as well.
I find the MySQL error
or you can check application logs to see why the client lost connectivity
from the app
On Tue, Jun 12, 2012 at 5:12 PM, Ananda Kumar anan...@gmail.com wrote:
is there anything you can see in /var/log/messages
On Tue, Jun 12, 2012 at 5:08 PM, Claudio Nanni claudio.na...@gmail.com wrote
When you say redundancy,
do you just want replication like master-slave, which will be active-passive,
or
master-master, which will be active-active?
Master-slave will work just as DR; when your current master fails you can
fail over to the slave, with NO load balancing.
Master-master allows load balancing.
On
is the central database server just ONE server, to which all your 50 data
center apps connect?
On Thu, May 24, 2012 at 2:47 PM, Anupam Karmarkar
sb_akarmar...@yahoo.com wrote:
Hi All,
I need architectural help for our requirement,
We have nearly 50 data centre through out different cities
Hi,
However much tuning you do in my.cnf will not help much if you do not
tune your sql's.
Your first priority should be tuning sql's, which will give you good
performance even with modest memory allocations and other settings
regards
anandkl
On Wed, May 23, 2012 at 3:45 PM, Andrew Moore
it gets decreased ?
thanks & regards,
Kishore Kumar Vaishnav
On Tue, May 22, 2012 at 1:40 PM, Claudio Nanni claudio.na...@gmail.com
wrote:
Kishore,
No, as already explained, it is not possible, Innodb datafiles *never*
shrink.
Cheers
Claudio
On May
On Tue, May 22, 2012 at 2:58 PM, Kishore Vaishnav
kish...@railsfactory.org wrote:
Right now it is one tablespace datafile. But does it matter if I have one file
per table?
On Tue, May 22, 2012 at 2:56 PM, Ananda Kumar anan
with file per table,
doing the optimization will reduce the datafile size? If yes,
then why is this not possible on the datafile (one single file) too?
On Tue, May 22, 2012 at 3:07 PM, Reindl Harald h.rei
yes, there are some new features you can use to improve performance.
If you are using mysql 5.5 and above, with file per table, you can enable the
Barracuda file format, which in turn provides data compression
and the dynamic row format, which will reduce IO.
For more benefits, read the docs.
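The Barracuda features mentioned above can be sketched concretely for MySQL 5.5 (table name and block size are illustrative; file-per-table is required, as the next reply notes):

```sql
-- Requires, in my.cnf: innodb_file_per_table = 1
--                      innodb_file_format    = Barracuda
CREATE TABLE t (
  id      INT PRIMARY KEY,
  payload TEXT
) ENGINE=InnoDB
  ROW_FORMAT=COMPRESSED    -- Barracuda compressed format
  KEY_BLOCK_SIZE=8;        -- compressed page size in KB
-- ROW_FORMAT=DYNAMIC is the other Barracuda format (no compression,
-- fully off-page storage for long columns).
```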
On Tue, May 22,
yes, Barracuda is limited to FILE_PER_TABLE.
Yes, it is true there is a CPU cost, but very little.
To gain some you have to lose some.
On Tue, May 22, 2012 at 5:07 PM, Johan De Meersman vegiv...@tuxera.be wrote:
--
*From: *Ananda Kumar anan...@gmail.com
yes, there some
Is your system READ-intensive or WRITE-intensive?
If you have enabled compression for WRITE-intensive data, then the CPU cost will
be higher.
On Tue, May 22, 2012 at 5:41 PM, Johan De Meersman vegiv...@tuxera.be wrote:
- Original Message -
From: Reindl Harald h.rei...@thelounge.net
or it could be that your buffer size is too small, as mysql is spending a lot
of CPU time compressing and uncompressing
On Tue, May 22, 2012 at 5:45 PM, Ananda Kumar anan...@gmail.com wrote:
Is your system READ-intensive or WRITE-intensive?
If you have enabled compression for WRITE-intensive data
why are you not using any WHERE condition in the update statement?
On Wed, May 16, 2012 at 1:24 PM, GF gan...@gmail.com wrote:
Good morning,
I have an application where the user ids were stored lowercase.
Some batch import, in the user table some users stored a uppercase
id, and for some
is accountid a number or a varchar column?
On Sat, May 12, 2012 at 7:38 PM, Andrés Tello mr.crip...@gmail.com wrote:
While doing a batch process...
show full processlist shows:
| 544 | prod | 90.0.0.51:51262 | tmz2012 | Query |6 |
end | update `account` set
sequential process with huge slow
inserts, to small parallel tasks with bursts of inserts...
On Mon, May 14, 2012 at 8:18 AM, Ananda Kumar anan...@gmail.com wrote:
is accountid a number or a varchar column?
On Sat, May 12, 2012 at 7:38 PM, Andrés Tello mr.crip...@gmail.com wrote:
While doing
I used to have these issues in mysql version 5.0.41.
On Mon, May 14, 2012 at 8:13 PM, Johan De Meersman vegiv...@tuxera.be wrote:
- Original Message -
From: Ananda Kumar anan...@gmail.com
If numeric, then why are you using quotes? With quotes, mysql will
ignore the index and do
which version of mysql are you using?
Is this a secondary index?
On Mon, May 7, 2012 at 12:07 PM, Zhangzhigang zzgang_2...@yahoo.com.cn wrote:
hi all:
I have a question:
Creating indexes after inserting massive numbers of data rows is faster than
creating them before inserting the rows.
Please tell me why.
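The effect asked about above (bulk-load first, index afterwards, so the engine does one bulk sort instead of maintaining the B-tree on every insert) can be observed outside MySQL too. A sketch using SQLite via Python's sqlite3 as a stand-in; table and index names are invented, and timings will vary by machine:

```python
import sqlite3
import time

def load(n, index_first):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER, val TEXT)")
    if index_first:
        con.execute("CREATE INDEX idx_val ON t (val)")
    t0 = time.perf_counter()
    con.executemany("INSERT INTO t VALUES (?, ?)",
                    ((i, f"v{i % 1000}") for i in range(n)))
    if not index_first:
        # One bulk sort to build the index, instead of maintaining
        # the B-tree on every single insert above.
        con.execute("CREATE INDEX idx_val ON t (val)")
    elapsed = time.perf_counter() - t0
    count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    return elapsed, count

t_before, n1 = load(100_000, index_first=True)
t_after, n2 = load(100_000, index_first=False)
print(f"index first: {t_before:.2f}s, index after: {t_after:.2f}s")
# Typically index-after wins; either way the final data is identical.
```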
Thanks for the response.
I didn't set any open_files limit in my.cnf.
For testing I set open_files_limit to 300, but MySQL is still crashing after
128.
~Vishesh
On Wed, May 2, 2012 at 4:28 PM, Reindl Harald h.rei...@thelounge.net wrote:
Am 02.05.2012 12:52, schrieb vishesh kumar:
Hi Members
-
Thanks
~Vishesh
On Wed, May 2, 2012 at 4:54 PM, vishesh kumar linuxtovish...@gmail.com wrote:
Thanks for the response.
I didn't set any open_files limit in my.cnf.
For testing i
you have to restart the Server.
Am 02.05.2012 um 13:58 schrieb vishesh kumar linuxtovish...@gmail.com:
I am getting following in error log
Do you just want to replace the current value in the client column with NEW?
You can write a stored proc with a cursor, loop through the cursor,
and update each table.
regards
anandkl
On Mon, Apr 30, 2012 at 2:47 PM, Pothanaboyina Trimurthy
skd.trimur...@gmail.com wrote:
Hi all,
i have one
did you check the permissions of the file /var/run/mysqld/mysqld.sock?
On Wed, Apr 11, 2012 at 9:48 AM, Larry Martell larry.mart...@gmail.com wrote:
On Wed, Apr 11, 2012 at 2:51 AM, Ganesh Kumar bugcy...@gmail.com wrote:
Hi Guys,
I am using Debian Squeeze and it's working well; I am trying to install
SQL_NOTES=@OLD_SQL_NOTES */;
-- Dump completed on 2011-04-18 4:14:26
would be appreciated.
Technology Enterprise
Iowa Department of Administrative Services
Telephone: 515.281.6139 Fax: 515.281.6137
Email: kay.rozeb...@iowa.gov
Why don't you create a new table where id 2474,
and rename the original table to _old and the new table to the actual table name?
Or
you can write a stored proc to loop through the rows and delete, which will
be faster.
Doing just a simple DELETE statement on huge data will take
ages.
Create PROCEDURE qrtz_purge() BEGIN
declare l_id bigint(20);
declare NO_DATA INT DEFAULT 0;
DECLARE LST_CUR CURSOR FOR select id from table_name where id < 123;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;
OPEN LST_CUR;
SET NO_DATA = 0;
FETCH LST_CUR INTO l_id;
WHILE NO_DATA = 0 DO
DELETE FROM table_name WHERE id = l_id;
FETCH LST_CUR INTO l_id;
END WHILE;
CLOSE LST_CUR;
END
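The purge idea above (delete in small chunks with commits in between, rather than one giant DELETE) can be sketched in Python with sqlite3 as a stand-in; the table name and cutoff mirror the quoted proc, the data is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_name (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO table_name VALUES (?)",
                [(i,) for i in range(1, 1001)])
con.commit()

# Purge ids below a cutoff in batches, committing between batches so the
# transaction (and its locks/undo) never grows huge.
CUTOFF, BATCH = 500, 100
while True:
    cur = con.execute(
        "DELETE FROM table_name WHERE id IN "
        "(SELECT id FROM table_name WHERE id < ? LIMIT ?)",
        (CUTOFF, BATCH),
    )
    con.commit()
    if cur.rowcount == 0:
        break

remaining = con.execute("SELECT COUNT(*) FROM table_name").fetchone()[0]
print(remaining)  # -> 501  (ids 500..1000 survive)
```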
--
Dan Nelson
dnel...@allantgroup.com
to monitor the UPDATE/INSERT performance, check whether there's any
performance bottleneck, for example:
slow INSERT/UPDATE
more I/O when executing INSERTs
Regards
Thanks
J.W
Hi,
Why don't you use a stored proc to update rows, where you commit every 1k or
10k rows?
This will be much faster than your individual update statements.
regards
anandkl
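The commit-every-batch idea above can be sketched in Python with sqlite3 as a stand-in; the lowercase-uid update mirrors the batch-import thread earlier in this digest, and the table, batch size, and data are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, uid TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(i, f"USER{i}") for i in range(10_000)])
con.commit()

# Update in slices of 1000 rows, committing after each slice, instead of
# one huge transaction over all 10k rows.
BATCH = 1000
for start in range(0, 10_000, BATCH):
    con.execute(
        "UPDATE users SET uid = LOWER(uid) WHERE id >= ? AND id < ?",
        (start, start + BATCH),
    )
    con.commit()  # keeps each transaction (and its undo/redo) small

lowered = con.execute(
    "SELECT COUNT(*) FROM users WHERE uid = LOWER(uid)").fetchone()[0]
print(lowered)  # -> 10000
```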
On Thu, Sep 22, 2011 at 8:24 PM, Hank hes...@gmail.com wrote:
That is what I'm doing. I'm doing a correlated update on 200 million
no commit.
On Thu, Sep 22, 2011 at 1:48 PM, Ananda Kumar anan...@gmail.com wrote:
Hi,
Why don't you use a stored proc to update rows, where you commit every 1k
or 10k rows?
This will be much faster than your individual update statements.
regards
anandkl
On Thu, Sep 22, 2011 at 8:24 PM, Hank hes
Maybe if you can let the audience know a snippet of your sql, someone can help you
On Thu, Sep 22, 2011 at 11:43 PM, Hank hes...@gmail.com wrote:
Sorry, but you do not understand my original issue or question.
-Hank
On Thu, Sep 22, 2011 at 2:10 PM, Ananda Kumar anan...@gmail.com wrote
Your outer query, select cpe_mac,max(r3_dt) from rad_r3cap, is doing a full
table scan; you might want to check on this and use a WHERE condition on an
indexed column
On Fri, Sep 23, 2011 at 12:14 AM, supr_star suprstar1...@yahoo.com wrote:
I have a table with 24 million rows, I need to
key, but seq is included as a
covering index). There is no index on dest.seq -- that index is built once
the update is complete. This query takes about 3.5 hours when I don't use
LOCK TABLES, and over 4 hours when I do use LOCK TABLES.
-Hank
On Thu, Sep 22, 2011 at 2:18 PM, Ananda Kumar
or you can use a for loop: take only the databases to be exported, use that
variable in --databases, and do a mysqldump of each database.
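The per-database loop suggested above, as a shell sketch. The database list is illustrative; in practice it would come from `mysql -N -e "SHOW DATABASES"`. The dump command is echoed rather than executed, since this is only a sketch:

```shell
DBS="sales inventory hr"
for db in $DBS; do
  # Real invocation would drop the echo and run mysqldump directly.
  echo "mysqldump --databases $db > $db.dmp"
done
```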
On Thu, Sep 15, 2011 at 6:27 PM, Carsten Pedersen cars...@bitbybit.dk wrote:
On 15-09-2011 10:31, Chris Tate-Davies wrote:
Adarsh,
1)
When restoring a
Dr. Doctor,
What kind of 10 entries? Is it insert, update, delete, etc.?
regards
anandkl
On Wed, Sep 14, 2011 at 6:30 PM, The Doctor doc...@doctor.nl2k.ab.ca wrote:
Question:
How can you optimise MySQL for 10 entries?
Just running osCommerce and it is slow to pull up a whole catalogue.
On Wed, Sep 14, 2011 at 9:24 AM, Ananda Kumar anan...@gmail.com wrote:
Dr. Doctor,
What kind of 10 entries? Is it insert, update, delete, etc.?
regards
anandkl
On Wed, Sep 14, 2011 at 6:30 PM, The Doctor doc...@doctor.nl2k.ab.ca
wrote:
Question:
How can you optimise MySQL
Can you let us know what the output is of
select * from user_info where user_id=16078845;
On Thu, Sep 8, 2011 at 1:02 PM, umapathi b umapath...@gmail.com wrote:
I wanted to change the login_date of one user . The original data of that
user is like this ..
select * from user_info where
know our self written applications and
having test-environments, if you can do this can nobody say
--
Thanks
Suresh Kuna
MySQL DBA
Is this a production setup?
If not, take a complete dump of all the databases.
Drop the xYZ database and see if you can see all the objects under XYZ.
Since the xYZ database was created, it's obvious that names are case
sensitive, and it should not show objects from XYZ when you are under xYZ.
Can you please