When I used MySQL as Keystone's backend in OpenStack, I found that the
'token' table held 29 million records (using MyISAM as the engine; the size of
token.MYD is 100 GB), with 4 new tokens saved per second. That results in
slow token queries. Since new tokens are inserted frequently, how cou
l 16, 2013 2:06 AM
>> To: Ilya Kazakevich
>> Cc: MySQL
>> Subject: Re: Mesaure query speed and InnoDB pool
>>
>> Does your query use proper indexes?
>> Does your query scan fewer blocks/rows? Can you share the EXPLAIN
>> plan of the SQL?
>>
>>
ze is probably the biggest
memory consumer, so it is the easiest way to shrink mysqld's footprint.
> -Original Message-
> From: Ilya Kazakevich [mailto:ilya.kazakev...@jetbrains.com]
> Sent: Wednesday, April 17, 2013 8:05 AM
> To: Rick James
> Cc: 'MySQL'
> Su
to about 70% of available
RAM.
-Original Message-
From: Ananda Kumar [mailto:anan...@gmail.com]
Sent: Tuesday, April 16, 2013 2:06 AM
To: Ilya Kazakevich
Cc: MySQL
Subject: Re: Mesaure query speed and InnoDB pool
Sent: Tuesday, April 16, 2013 8:38 AM
> To: mysql@lists.mysql.com
> Subject: Re: Mesaure query speed and InnoDB pool
>
> Hi Rick,
> I thought you have to dedicate 70-80% of available RAM, not the total RAM.
> Saying if I have 2 gig of RAM on my exclusively innodb box, and I
&
several ideas:
* Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual time
* Set the pool as small as possible to reduce its effect on query speed
* Set the pool larger than my DB, run the query to load all data into the pool,
and measure speed then
How do you measure your queries' speed?
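The counter-based idea above can be sketched directly against the server (a minimal illustration; on older servers the Innodb_% counters are global rather than per-session, so run it on an otherwise idle instance):

```sql
-- Snapshot the counter, run the query under test, then diff the counter.
-- Innodb_rows_read counts rows InnoDB touched, not wall-clock time,
-- so it is stable whether or not the data is already cached.
SHOW STATUS LIKE 'Innodb_rows_read';
SELECT ... ;  -- the query being measured (placeholder)
SHOW STATUS LIKE 'Innodb_rows_read';
-- The difference between the two readings is the work the query did.
```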
ouping.State = advisor_counts.State
AND primary_grouping.Sub = advisor_counts.Sub
AND primary_grouping.ChapterType = advisor_counts.ChapterType;
- Original Message - From: "Jay Pipes" <[EMAIL PROTECTED]>
To: "Jesse" <[EMAIL PROTECTED]>
Cc: "mysql"
timing. However, if I can get your more efficient query working, I
would like to. Any ideas why it's not working?
Thanks,
Jesse
- Original Message -
From: "Jay Pipes" <[EMAIL PROTECTED]>
To: "Jesse" <[EMAIL PROTECTED]>
Cc: "mysql"
Sent
Jesse wrote:
I worked with the query for a while, trying equi-joins instead of JOINs,
and various other things. I found that the queries I was using to
represent the TotMem & TotAdv columns were what was slowing things down.
I finally ended up using a sub-query to solve the problem. I ga
pterType) AS sq ORDER BY State, Sub, ChapterType
Anyway, thanks for your help.
Jesse
- Original Message -
From: "Dan Buettner" <[EMAIL PROTECTED]>
Cc: "Jesse" <[EMAIL PROTECTED]>; "mysql"
Sent: Monday, June 26, 2006 8:18 PM
Subject: Re: Query S
, 'PRIMARY,IX_Schools1',
'IX_Schools1', '18', 'bpa.S.State,bpa.S.Sub', 65, 'Using where'
2, 'DEPENDENT SUBQUERY', 'C1', 'ref',
'PRIMARY,IX_Chapters_1,IX_Chapters_2', 'IX_Chapters_1', '
Jesse, can you post table structures ( SHOW CREATE TABLE tablename )
and the output you get from EXPLAIN followed by the query below?
Also what version of MySQL you're on, and high level details of the
hardware (RAM, disks, processors, OS).
That will all be helpful in trying to help you out he
From: "Price, Randall" <[EMAIL PROTECTED]>
To: "Jesse" <[EMAIL PROTECTED]>; "MySQL List"
Sent: Monday, June 26, 2006 4:47 PM
Subject: RE: Query Speed
Hi Jesse,
I am not 100% sure cause I have only been using MySQL for ~6 months but
I do read this mailing li
-
From: Jesse [mailto:[EMAIL PROTECTED]
Sent: Monday, June 26, 2006 4:28 PM
To: MySQL List
Subject: Query Speed
I have a query which I can execute in Microsoft SQL, and it's instantaneous.
However, in MySQL, I've only been able to get it down to 48 seconds:
SELECT S.State, ST.StateName, S.Sub, C.ChapterType, (SELECT Count(*) FROM
(Members M JOIN Chapters C1 ON C1.ID=M.ChapterID) JOIN Schools S1 on
S1.ID
That's the whole question.
Do foreign keys (FKs) affect query speed?
'Course the answer could lead to sub-questions , e.g.,
"If so, how best to optimize a query for them?"
And I guess a corollary question would be whether implementing FKs slows down
MySQL processing in g
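For InnoDB the short answer can be sketched concretely: the constraint itself costs on writes, not on reads, and it forces an index that reads can actually use (a minimal sketch; the tables here are hypothetical):

```sql
-- InnoDB requires an index on the referencing column, so declaring the
-- FK below implicitly gives orders.customer_id an index that
-- SELECT ... JOIN can use.
CREATE TABLE customers (
  id INT NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE orders (
  id INT NOT NULL PRIMARY KEY,
  customer_id INT NOT NULL,
  FOREIGN KEY (customer_id) REFERENCES customers (id)
) ENGINE=InnoDB;

-- SELECT speed is unaffected by the constraint itself; each
-- INSERT/UPDATE/DELETE pays a small extra lookup to check the parent.
```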
Any suggestions?
On 2/3/06, سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote:
Dear all,
Thanks for your replies.
The main table for me is traffic_log. I use the combination of the recipient_id
and mobile_retry fields to uniquely identify each row in traffic_log, and I use
the same combination on status_log as my foreign key to traffic_log.
Each message is saved as a row in traffic
Sorry, but you gave us a "best guess" situation. Your tables do not have
any PRIMARY KEYs defined on them so I had to guess at what made each row
in each table unique from all other rows in that table based only on your
sample query.
What value or combination of values will allow me to uniquel
Another question: if I run such CREATE TEMPORARY statements in my
query, can MySQL really do it fast?
Because this query may be run periodically!
On 2/2/06, سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote:
Thanks for your suggestion,
I forgot to tell you that each message in traffic_log may have at least 2 statuses
in status_log, and I use the two columns "recipients_id" and "mobile_retry"
to uniquely find each message's statuses.
Maybe I have to change my table structure. I don't know.
It's really important f
سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote on 02/01/2006
11:07:49 AM:
Hadi,
>But it's very slow.
>Do you have any suggestions to speed it up?
Your query calls no aggregate functions, so what do you mean to achieve
by GROUP BY ... HAVING? For example this bit of logic extracted from
your query ...
SELECT * FROM table
GROUP BY pkcol
HAVING pkcol=MAX(pkcol)
is logica
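HAVING is meant to filter on aggregate results after grouping; a correct use looks like this (a sketch with hypothetical table and column names):

```sql
-- Count rows per group, then keep only the groups with duplicates.
-- The HAVING clause filters whole groups by an aggregate value,
-- which a WHERE clause cannot do.
SELECT grp, COUNT(*) AS n
FROM tbl
GROUP BY grp
HAVING COUNT(*) > 1;
```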
Dear All,
I need your suggestions please.
I have two large tables with these schemas:
Table: traffic_log
Create Table: CREATE TABLE `traffic_log` (
`recipient_id` int(11) NOT NULL default '0',
`retry` smallint(4) NOT NULL default '0',
`mobile_retry` tinyint(1) NOT NULL default '0',
`orig` v
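Given the two-column key described in this thread, the usual way to pull each message's latest status is a join against a per-group aggregate (a sketch; the status_log columns beyond the key are assumptions):

```sql
-- Assumes status_log has an auto-increment id and that the largest id
-- per (recipient_id, mobile_retry) pair is the most recent status.
SELECT s.*
FROM status_log AS s
JOIN (SELECT recipient_id, mobile_retry, MAX(id) AS max_id
      FROM status_log
      GROUP BY recipient_id, mobile_retry) AS latest
  ON  latest.recipient_id = s.recipient_id
  AND latest.mobile_retry = s.mobile_retry
  AND latest.max_id       = s.id;
-- A compound index on (recipient_id, mobile_retry) keeps both the
-- GROUP BY and the join fast.
```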
t; <[EMAIL PROTECTED]>
To: "C.R.Vegelin" <[EMAIL PROTECTED]>
Cc:
Sent: Tuesday, October 25, 2005 6:05 PM
Subject: Re: how to increase query speed ?
Sorry I missed the explain part.
You are doing a full table scan on the Updates table. There really is
no way around speeding u
t Baisley"
<[EMAIL PROTECTED]>
To: "C.R.Vegelin" <[EMAIL PROTECTED]>
Cc:
Sent: Tuesday, October 25, 2005 4:15 PM
Subject: Re: how to increase query speed ?
Adding compound (hash, years) index (or even better unique index if it
fits in your business logic) in both tables should speed up things.
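The suggested compound index would be created like this (table and column names are from the thread; use the UNIQUE form only if the pair really is unique in your data):

```sql
-- One compound index lets lookups filtering on both Hash and Year
-- resolve through a single index instead of merging two.
ALTER TABLE Updates ADD INDEX idx_hash_year (`Hash`, `Year`);
ALTER TABLE Data    ADD INDEX idx_hash_year (`Hash`, `Year`);

-- If (Hash, Year) uniquely identifies a row, a UNIQUE index also
-- enforces that and can allow tighter optimizer plans:
-- ALTER TABLE Updates ADD UNIQUE INDEX uq_hash_year (`Hash`, `Year`);
```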
--
Alexey Polyakov
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
To: "C.R.Vegelin" <[EMAIL PROTECTED]>
Cc:
Sent: Tuesday, October 25, 2005 4:15 PM
Subject: Re: how to increase query speed ?
How about posting the EXPLAIN for your query? Just put EXPLAIN before
it; MySQL will then tell you how it will go about executing the
query, like which indexes it's using. I assume you have both columns
indexed?
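For example (a sketch using the Updates/Data tables from this thread; the join condition is assumed):

```sql
-- Prefix the query with EXPLAIN to see the chosen plan: table order,
-- the index used, and roughly how many rows each step examines.
EXPLAIN
SELECT d.*
FROM Data AS d
JOIN Updates AS u ON u.Hash = d.Hash AND u.Year = d.Year;
-- Read the 'key' column (index actually used) and 'rows' (estimated
-- rows examined); key = NULL on a large table means a full scan.
```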
On Oct 25, 2005, at 4:46 AM, C.R. Vegelin wrote:
Hi List,
I have a performan
Hi List,
I have a performance problem I can't get solved.
I have 2 tables, called Updates (1 mln rows) and Data (5 mln rows).
Table Updates has 2 (non-unique) keys, defined as:
> Hash bigint(20) unsigned default NULL
> Year tinyint(4) NOT NULL default '0'
Table Data has the same 2 (non-unique) keys
Craig Gardner wrote:
Thank you very much. That's what fixed my problem.
Robert J Taylor wrote:
Can you restrict to Not Null instead of != ""? (I.e, can you scrub
the data not to have empty strings?).
The explain shows 3 extra where calculations per row...that's painful.
Great! Glad that sol
I have two queries that are very similar. One of the queries takes a few
minutes (3:43:07 last run) to complete, while the other takes less than
a second to complete.
I know these are two different queries and shouldn't take the same
amount of time, but I based the fast query on the slower one.
Are indexes defined on all of your tables?
Saqib Ali
-
http://validate.sf.net < (X)HTML / DocBook Validator and Transformer
On Tue, 2 Mar 2004, Chris Fowler wrote:
> I have a query that is admittedly inefficient in that it is doing
> multiple OR clauses and joining multiple tables.
Chris Fowler wrote:
I have a query that is admittedly inefficient in that it
is doing multiple OR clauses and joining multiple tables. However, the
query runs at an acceptable speed if I am in a terminal session and run
the query directly in the terminal. On the other hand, when PHP
performs th
I have a query that is admittedly inefficient in that it is doing
multiple OR clauses and joining multiple tables. However, the query
runs at an acceptable speed if I am in a terminal session and run the
query directly in the terminal. On the other hand, when PHP performs
the same query for use
Hello,
one more idea, use something like this [ i hope it's correct :) ]
SELECT
content.row_id AS row_id,
content.app_id AS app_id,
CASE s1.field_id
WHEN 69 THEN "niche"
WHEN 70 THEN "type"
WHEN 71 THEN "title"
WHEN 72 THEN "descr
Thanks everyone for your input, I'll try the ramdisk idea, I read about
someone else who tried that and had some success. Beyond, that I'm gonna
take the long route and redesign the database to be a bit more
conventional.
Thanks!
Matt
On Thu, 2003-10-23 at 20:28, Peter Buri wrote:
> Hello,
>
> a
Hello,
As I see it, you use one table to store all the data, but the cohesive data are
split into 15 (!) different rows.
I think to get the best performance you should redesign your table.
Use at least first normal form (1NF); if the app_id is unique, it can be the
primary key (which will speed up the quer
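A sketch of that redesign, reusing the field names from the CASE snippet earlier in the thread (column types are assumptions): instead of one (app_id, field_id, value) row per attribute, give each attribute its own column.

```sql
-- Before (EAV-style): up to 15 rows per application, one per field_id.
-- After (1NF): one row per application, one column per attribute.
CREATE TABLE content (
  app_id INT NOT NULL PRIMARY KEY,  -- unique per application
  niche  VARCHAR(255),
  type   VARCHAR(255),
  title  VARCHAR(255),
  descr  TEXT
);
-- Lookups become single-row primary-key reads instead of multi-way
-- self-joins or CASE pivots over the attribute rows.
```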
At 02:05 PM 10/23/2003, you wrote:
Hey All-
I am trying to improve the speed of a website and was wondering if
anyone had any ways I can rewrite these queries so that they actually
run with some decent speed.
It's a really nasty query and I'm not sure where to start; I'd like to
not have to redo t
ilto:[EMAIL PROTECTED]
-->Sent: Thursday, October 23, 2003 12:05 PM
-->To: [EMAIL PROTECTED]
-->Subject: Improving Query speed - any suggestions?
-->
Hey All-
I am trying to improve the speed of a website and was wondering if
anyone had any ways I can rewrite these queries so that they actually
run with some decent speed.
It's a really nasty query and I'm not sure where to start; I'd like to
not have to redo the tables and I already put some i
What configuration are you using? What's the size of your buffers?
What's your system?
Maybe increasing the sort buffer and key buffer will help.
;)
Alexis
Quoting Brad Teale <[EMAIL PROTECTED]>:
> Hello,
>
> The problem:
> I have the following query with is taking upwards of 2 minute
Hello,
The problem:
I have the following query which is taking upwards of 2 minutes to complete,
and we need it faster, preferably less than 30 seconds (don't laugh):
select modelhr, avg(f.temp-b.temp), avg(abs(f.temp-b.temp)),
stddev(f.temp-b.temp), stddev(abs(f.temp-b.temp)), count(f.temp-b.temp) fro
It sounds like you are referring to full-text indexing. Whenever you
have to put a wildcard at the start of a word, you should probably
consider using full-text indexing. It's easy to implement and the
manual pages are fairly informative.
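A minimal sketch of the idea (MyISAM-era FULLTEXT; the table and column names are hypothetical):

```sql
-- A FULLTEXT index replaces leading-wildcard LIKE scans with an
-- index lookup over the word list.
CREATE FULLTEXT INDEX ft_body ON articles (body);

-- Instead of:  SELECT id FROM articles WHERE body LIKE '%mysql%';
SELECT id, MATCH(body) AGAINST('mysql') AS score
FROM articles
WHERE MATCH(body) AGAINST('mysql');
-- Caveats: by default, words shorter than ft_min_word_len (4) and
-- words present in more than half the rows are ignored.
```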
On Thursday, July 10, 2003, at 02:41 PM, Wendell Din
I've been immersing myself in reading and trying to understand what I
can about how keys and indexes work and the flow of a query. Here's an
example of one that's not real efficient and I'm not sure if it can be
made any more efficient or done some other way to go faster though. I'm
guessing a "lik
On Wed, 02 Apr 2003 16:53:44 +0200, Grégoire Dubois
<[EMAIL PROTECTED]> wrote:
>Creating the tree doesn't give me problem. Where I ask me some
>questions, is the speed to get the whole tree from the database in a
>recursive way.
I've made a PHP script doing genealogical descendancy charts that
an follow up his article by looking in Usenet groups
for the term "Nested Set Hierarchy" or "Nested Set Model".
Good luck with it.
Kevin
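The nested set model referred to above stores each node with left/right bounds so a whole subtree comes back in one non-recursive query; a sketch adapted to the directory table from this thread (the lft/rgt columns are the addition):

```sql
-- Nested sets: every descendant's (lft, rgt) interval lies strictly
-- inside its ancestor's interval.
CREATE TABLE Directory (
  ID   INT NOT NULL PRIMARY KEY,
  Name VARCHAR(100) NOT NULL,
  lft  INT NOT NULL,
  rgt  INT NOT NULL
);

-- The entire subtree under one directory, node itself included:
SELECT child.Name
FROM Directory AS parent
JOIN Directory AS child
  ON child.lft BETWEEN parent.lft AND parent.rgt
WHERE parent.ID = 42;   -- hypothetical subtree root
-- Trade-off: reads are a single query, but inserts and moves must
-- renumber lft/rgt for part of the tree.
```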
-Original Message-
From: Grégoire Dubois [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 02, 2003 6:54 AM
To: [EMAIL PROTECTED]
Subject
, 2003 6:54 AM
To: [EMAIL PROTECTED]
Subject: Conception - Tree - Recursivity -Address book - Query speed
Hello all,
I am putting multiple "address book" (trees) into a MySQL database.
These "address book" are made of "directories" and "persons".
It gives something like this for the tables:
Directory
--
ID
Name
ID_father (the reference to the father directory)
Pers
platform: windows 2000 pro, mysql default table type myIsam, non-binary
distribution (install version).
I am still getting very slow query results when I join multiple tables
together. I have been trying to figure this out for days and am at a loss. I
added an index to my cross reference table
Hello
I have a question on how MySQL JOIN affects query (search) performance:
Our database consists of about 250,000 datasets (rows).
Right now, all 150 columns are in one big table.
However, we do have certain columns that are empty for most rows (for
example information of the s
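One common answer to this question is vertical partitioning: move the mostly-empty columns into a side table so the hot table stays narrow. A sketch with invented names:

```sql
-- Narrow hot table scanned by most queries:
CREATE TABLE items (
  id   INT NOT NULL PRIMARY KEY,
  name VARCHAR(100) NOT NULL
  -- ... the columns almost every row actually uses
);

-- Sparse columns live apart; a row exists only when there is data:
CREATE TABLE item_details (
  item_id INT NOT NULL PRIMARY KEY,  -- same id, 1:0..1 relationship
  extra1  VARCHAR(255),
  extra2  VARCHAR(255)
);

-- Queries that need the rare columns join them back in:
SELECT i.name, d.extra1
FROM items AS i
LEFT JOIN item_details AS d ON d.item_id = i.id;
```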
Sounds about right.
How have you got your my.cnf set?
Have you got RAID on your HDD?
Simon
-Original Message-
From: Florian Wilken [mailto:[EMAIL PROTECTED]]
Sent: 23 May 2002 10:17
To: [EMAIL PROTECTED]
Subject: MySQL Query Speed
Hello
our database consists of one table with approx
Hello
our database consists of one table with approx. 150 columns.
We have a query going over 11 columns (most tinyint, int and some varchar)
Out of 217237 rows the query found 56 matches.
Without Indexing, the query took 2,55 seconds.
With Indexing, the query took 0,04 seconds.
The database and
: Anvar Hussain K.M. [SMTP:[EMAIL PROTECTED]]
Sent: Thursday, February 14, 2002 11:38 AM
To: [EMAIL PROTECTED]
Subject:Re: Query Speed
Hi,
Your mail does not tell about the table structure or the index
available.
But try this
Hi,
I got stuck on a query about a million rows in one table... it slows up my
server...
How can I improve my server's handling of a request like this?
I manually loop the monthly data query... I got a database that fills with
data daily... so all I need to get the maximum of every month is to quer
I just added a fulltext index to a table, and MATCH queries on the table
are timing out. Can anyone offer any insight on this?
The table has 2000 rows of data in it, and phpMyAdmin reports it as
having a total size of 244,228 bytes.
Table Structure:
CREATE TABLE Customers (
ID mediumint(9) NOT
Hi, Philip,
Here is the query as you suggested:
SELECT ox.ensembl_id, x.dbprimary_id, x.display_id, db.db_name
FROM objectXref ox, Xref x, externalDB db
WHERE ox.ensembl_id IN ('7263', '7318', '8991', '17508')
AND x.xrefid = ox.xrefid
AND db.externalDBId = x.externalDBId;
>
> Maybe "WHERE ... I
Hi,
Upon closer examination of the query, it seems that both
ensembl_object_type and xref_index do not have ensembl_id in the 1st
position of the index, so the hints would not work anyway. I would still
appreciate it if someone pointed out the syntax error I got.
Looking at the new schemaCore t
key buffer is about 8M
key_buffer_size | 8388600
I just tried bumping my settings up to these that I found in the
manual...
safe_mysqld -O key_buffer=64M -O table_cache=256 \
-O sort_buffer=4M -O record_buffer=1M &
It shaved a second off... 2.29, and later call took
only .88 sec
On Mon, Nov 19, 2001 at 03:29:26PM -0500, Anthony R. J. Ball wrote:
>
> 3.23.41 on Solaris
>
> I have an indexed table of cusips (9 character identifiers)
> which I am comparing against a lookup tables of over
> 1,000,000 securities, also keyed by cusip, both fields are
> char(9) fields.
How
3.23.41 on Solaris
I have an indexed table of cusips (9 character identifiers)
which I am comparing against a lookup tables of over
1,000,000 securities, also keyed by cusip, both fields are
char(9) fields.
My query is taking over 3 seconds, which may be the best I
can do, but I'm hoping I
At 10:26 AM + 11/1/01, Leon Noble wrote:
Leon Noble writes:
> select dayofmonth(date) as mydate, count(num) as mycount from table_name
> where date='TO_DAYS(2001-08-01) - TO_DAYS(2001-08-31)' and action=1 group by
> dayofmonth(date);
This query makes no sense at all. I don't think the date will
ever be equal to that string constant, u
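The intent (per-day counts for one month) is normally written with a plain range predicate; a sketch using the names from the quoted query:

```sql
-- A bare range on the indexed date column lets MySQL use the index;
-- wrapping `date` in a function in the WHERE clause would prevent that.
SELECT DAYOFMONTH(date) AS mydate, COUNT(num) AS mycount
FROM table_name
WHERE date >= '2001-08-01' AND date < '2001-09-01'
  AND action = 1
GROUP BY DAYOFMONTH(date);
```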
:26 p.m.
To: [EMAIL PROTECTED]
Subject: Query speed
Hi All,
Tried the following three statements and they are either too slow or do not
give me what I want. Basically what I want is to search for records for a
whole month and display totals for that month for each individual day. The
date field is indexed.
Tried..
select count(num) as mycoun
Roger Karnouk
-Original Message-
From: Braxton Robbason [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 02, 2001 11:55 AM
To: Roger Karnouk; [EMAIL PROTECTED]
Subject: RE: Query speed
seems to me that the first query uses your primary key index. Since you have
specified qualifications on crci
, May 02, 2001 9:57 AM
To: [EMAIL PROTECTED]
Subject: Query speed
I am trying to run two queries which seem to me should execute at about the
same speed.
My table is setup as follows:
day - number of days since 1970
crcid - a number between 0 and 24
tag - a number used to identify record type
total - the value stored (the rest of the record is just to identi
If you are repeatedly querying tables on non-key fields, you can improve query speed
by implementing indexes on those fields...
For instance, if you had a personnel table with the following fields: id, lastname,
firstname, etc., where id was an auto-increment primary key, you could index the
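Continuing that example (the personnel table is the thread's own hypothetical):

```sql
-- A secondary index so lookups by last name stop scanning the table.
CREATE INDEX idx_personnel_lastname ON personnel (lastname);

-- This lookup can now use the index:
SELECT id, firstname, lastname
FROM personnel
WHERE lastname = 'Smith';
-- Note: a leading-wildcard pattern such as LIKE '%mith' cannot use it.
```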
Hi,
I have some huge databases generated from the text file of a dictionary.
I would like to know: if I make a query based on a field which was not mentioned as a
key at the creation of the table, is the speed of the query affected by the size of the
records and by the number of fields a rec