Hi Johan,
Thanks for your reply. In theory, fragmented tables don't offer the best
performance with the InnoDB engine. Is that correct or not?
I don't know whether it is a problem or not; it's an open question for me. I'm
not sure whether it is atypical behaviour.
Thanks in advance.
Regards,
Antonio.
*please* don't use reply-all on mailing-lists;
the list by definition distributes your message
On 30.06.2014 13:14, Antonio Fernández Pérez wrote:
Thanks for your reply. In theory, fragmented tables don't offer the best
performance with the InnoDB engine.
Is that correct or not?
practically
Hi Andre,
Thanks for your reply. I have checked the link and my configuration.
innodb_file_per_table is enabled, and in the data directory a set of files
appears for each table.
Any ideas?
Thanks in advance.
Regards,
Antonio.
On 27.06.2014 09:48, Antonio Fernández Pérez wrote:
Thanks for your reply. I have checked the link and my configuration.
innodb_file_per_table is enabled, and in the data directory a set of files
appears for each table.
Any ideas?
ideas for what?
* which files don't get shrunk (ls -lha)
*
Hi Reindl,
Thanks for your attention.
Following up on the previous mail, I have checked my MySQL configuration and
innodb_file_per_table is enabled, so I think that this parameter doesn't
directly affect fragmented tables in InnoDB (in this case).
I would like to know, if possible, why after
Hello Antonio,
On 6/27/2014 9:31 AM, Antonio Fernández Pérez wrote:
Hi Reindl,
Thanks for your attention.
Following up on the previous mail, I have checked my MySQL configuration and
innodb_file_per_table is enabled, so I think that this parameter doesn't
directly affect fragmented tables in
- Original Message -
From: Antonio Fernández Pérez antoniofernan...@fabergames.com
Subject: Re: Optimizing InnoDB tables
I would like to know, if possible, why, after executing an ANALYZE TABLE
command on a fragmented table, it appears fragmented again.
Simple question
Hi again,
I have enabled innodb_file_per_table (its value is ON).
It's not clear to me what I should do ...
Thanks in advance.
Regards,
Antonio.
- Original Message -
From: Antonio Fernández Pérez antoniofernan...@fabergames.com
Subject: Re: Optimizing InnoDB tables
I have enabled innodb_file_per_table (its value is ON).
It's not clear to me what I should do ...
Then all new tables will be created in their own tablespace now
Have a look at this:
https://rtcamp.com/tutorials/mysql/enable-innodb-file-per-table/
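For reference, the my.cnf change behind that tutorial is tiny; a minimal
sketch (new and rebuilt tables get their own .ibd file after a restart):

[mysqld]
# one tablespace file per InnoDB table instead of one shared ibdata file
innodb_file_per_table = 1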
--
Andre Matos
andrema...@mineirinho.org
On Jun 25, 2014, at 2:22 AM, Antonio Fernández Pérez
antoniofernan...@fabergames.com wrote:
Hi again,
I have enabled innodb_file_per_table (its value is ON).
I
Hi Antonio, how are you?
What MySQL version are you running? Have you tried ALTER TABLE x
ENGINE=InnoDB?
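For anyone following the thread, a minimal sketch of that rebuild (database
and table names are hypothetical; the statement copies and rewrites the
table, defragmenting it, and locks it while running):

ALTER TABLE mydb.mytable ENGINE=InnoDB;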
-- WB, MySQL Oracle ACE
On 24/06/2014, at 08:03, Antonio Fernández Pérez
antoniofernan...@fabergroup.es wrote:
Hi list,
I was trying to optimize the InnoDB tables. I have
Hi Wagner,
I'm running
Percona Server (MySQL) 5.5.30, 64-bit. No, I haven't tried executing
ALTER TABLE (does ANALYZE do that for InnoDB tables, or not?).
Thanks in advance.
Regards,
Antonio.
Hello Antonio,
On 6/24/2014 7:03 AM, Antonio Fernández Pérez wrote:
Hi list,
I was trying to optimize the InnoDB tables. I have executed the following
query to detect which tables are fragmented:
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
AND DATA_FREE > 0;
On 24.06.2014 21:07, shawn l.green wrote:
It makes a huge difference if the tables you are trying to optimize have
their own tablespace files or if they live
inside the common tablespace.
http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
which
Hello Reindl,
On 6/24/2014 3:29 PM, Reindl Harald wrote:
On 24.06.2014 21:07, shawn l.green wrote:
It makes a huge difference if the tables you are trying to optimize have their
own tablespace files or if they live
inside the common tablespace.
* Dan Nelson dnel...@allantgroup.com [110618 16:33]:
In the last episode (Jun 18), Tim Johnson said:
Is there an optimal 'alignment' for column widths for
varchar types?
I.E., divisible by 8 or 10 or by powers of 2?
No. Varchar fields are stored using only as many bytes as are in
In the last episode (Jun 18), Tim Johnson said:
Is there an optimal 'alignment' for column widths for
varchar types?
I.E., divisible by 8 or 10 or by powers of 2?
No. Varchar fields are stored using only as many bytes as are in that
particular entry. The size in the column definition is
Thanks for the suggestions everybody.
I added columns to store the day, month and year of the created_at value,
and then added an index on (newsletter_id, created_month, created_day),
and the slow queries went from around 20 seconds to 0.5 seconds! I
also removed the redundant
Hi Simon,
once you apply functions to a field, an index on that field is pretty
much useless. For this particular query, I would be tempted to create
additional fields to store the values of MONTH(articles.created_at)
and DAY(articles.created_at). Create an index on (month_created,
day_created)
I concur. In addition to suggested index I would add a new column in articles
table called body_length, which is going to be updated every time the body
column is updated. Add that column to the composite index mentioned below.
This should speed up the query a lot.
Cheers,
Mihail
On Jan 18,
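A rough sketch of Mihail's idea, reusing the column names from Simon's
follow-up (keeping body_length in sync on every update of body is left to
application code or a trigger):

ALTER TABLE articles ADD COLUMN body_length INT UNSIGNED;
UPDATE articles SET body_length = CHAR_LENGTH(body);
CREATE INDEX idx_nl_month_day_len
ON articles (newsletter_id, created_month, created_day, body_length);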
On 1/18/11 10:22 AM, Simon Wilkinson wrote:
SELECT articles.* FROM articles INNER JOIN newsletters ON
articles.newsletter_id = newsletters.id INNER JOIN users ON users.id =
newsletters.user_id WHERE users.id =12 AND MONTH(articles.created_at) = '12'
AND DAY(articles.created_at) = '5' ORDER BY
Mikesz,
The best way to approach this is to discover how it has been
implemented by others. There is no point re-inventing the wheel. I'm sure
you can get some ideas from this web site:
http://www.databaseanswers.org/data_models/ and the tutorial at
Sent: 06 October 2009 04:57
To: Rob Wultsch
Cc: mysql@lists.mysql.com
Subject: Re: Optimizing my.cnf
As you can see in my my.cnf, I skip InnoDB and federated, so I just use
MyISAM in this case. TIA.
Willy
On Mon, 2009-10-05 at 20:47 -0700, Rob Wultsch wrote:
On Mon, Oct 5, 2009 at 6:12 PM
On Mon, Oct 5, 2009 at 6:12 PM, sangprabv sangpr...@gmail.com wrote:
I have a Dell PE2950III with 16GB of RAM and one quad-core processor @2.00GHz.
Installed with MySQL 5.0.75 on 64-bit Ubuntu Jaunty. I have these
parameters in my.cnf:
blah blah blah...
This heavily depends on workload. Are you
As you can see in my my.cnf, I skip InnoDB and federated, so I just use
MyISAM in this case. TIA.
Willy
On Mon, 2009-10-05 at 20:47 -0700, Rob Wultsch wrote:
On Mon, Oct 5, 2009 at 6:12 PM, sangprabv sangpr...@gmail.com wrote:
I have a Dell PE2950III with 16GB of RAM and one quad-core
On Tue, Apr 21, 2009 at 2:52 PM, Milan Andric mand...@gmail.com wrote:
Hello,
I have a rather burly Drupal-based site that seems to be causing some
problems; today we had a major outage. There are many slow queries
and also MySQL-related iowait that causes server processes to hang, at
least
If you are running MySQL 5, try moving the WHERE condition into the
JOIN condition, which is really where you want the filter since it's
part of the join.
SELECT me.id, me.quotation_id, quotation.qt, part.id, part.cit
FROM cwGroup me
JOIN quotation ON (
quotation.id = me.quotation_id AND
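The quoted query is cut off in the archive; as a generic sketch of the
pattern being suggested (the filter value and the second join key are
hypothetical), the shape is:

SELECT me.id, me.quotation_id, quotation.qt, part.id, part.cit
FROM cwGroup me
JOIN quotation ON (quotation.id = me.quotation_id
                   AND quotation.qt = 'some value')  -- filter moved into ON
JOIN part ON (part.id = me.part_id);                 -- hypothetical join key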
Norman Elton wrote:
I've got a table that tracks expiration dates. Currently, if the
record hasn't expired, the column is stored as NULL. In order to find
records that haven't expired, I search for WHERE expire_at > NOW() OR
expire_at IS NULL. This seems dirty, and I suspect it makes my
indexes
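One common alternative (a sketch, not necessarily what the thread settled
on; the table name is hypothetical) is to store a far-future sentinel date
instead of NULL, so a single range predicate can use the index:

UPDATE expirations SET expire_at = '9999-12-31 23:59:59'
WHERE expire_at IS NULL;
SELECT * FROM expirations WHERE expire_at > NOW();  -- one indexed range scan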
Hi,
I am getting copies of your email... I don't know why. Can you fix this
problem?
Kind regards,
Matthew
- Original Message -
From: Mark Goodge [EMAIL PROTECTED]
To: mysql mysql@lists.mysql.com
Sent: Monday, November 17, 2008 10:14 AM
Subject: Re: Optimizing nullable expiration
On Fri, Jul 25, 2008 at 12:27 AM, Michael Stearne [EMAIL PROTECTED] wrote:
I have a query:
SELECT Country, COUNT( Country ) AS Cnt
FROM properties WHERE (
Country != 'USA' AND
Country != 'US' AND
Country != 'Unit' AND
Country != 'United States'
AND Country !=
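The rest of the excluded values are cut off above; assuming the query ends
with a GROUP BY, the same filter reads more cleanly as NOT IN:

SELECT Country, COUNT(Country) AS Cnt
FROM properties
WHERE Country NOT IN ('USA', 'US', 'Unit', 'United States')
GROUP BY Country;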
On Fri, Jul 25, 2008 at 12:35 PM, Arthur Fuller [EMAIL PROTECTED] wrote:
ORDER BY implies a sort of the result set. I don't think there is any way
around that.
I guess so. What I am doing is to just run the query once per day and
store the results in memcache.
Michael
Arthur
On Fri, Jul
On Tue, 10 Jun 2008, Martin wrote:
Lucio
So the net effect is to produce a Cartesian join where ALL results from
Query1 (are combined with) ALL results from Query2.
In order to prevent a Cartesian join, can you use a more narrowly defined
predicate such as what is defined at
Charles Lambach schrieb:
Hi.
My hosting provider recommended that I optimize my 200,000-record table in
order to save resources.
I always run this query:
SELECT * FROM books WHERE isbn='foo' LIMIT 1
The primary key of this table was 'id', and 'isbn' was an INDEX field.
I've modified this:
Thank you all for your suggestions.
So it's very important to make primary key fields as small as possible,
right? I'm going to change 'isbn' from VARCHAR(100) to VARCHAR(25) and, if
possible (I might change my code), from VARCHAR to BIGINT.
By the way, what are optimal values for key_len
On Tue, Apr 29, 2008 at 5:09 AM, Charles Lambach
[EMAIL PROTECTED] wrote:
Thank you all for your suggestions.
So it's very important to make primary key fields as small as possible,
right? I'm going to change 'isbn' from VARCHAR(100) to VARCHAR(25) and, if
possible (I might change my
Hi Rob.
Thank you very much for your answer.
CREATE TABLE `books` (
`id` int(11) unsigned NOT NULL auto_increment,
`title` varchar(200) NOT NULL,
`author_name` varchar(100) NOT NULL,
`category_name` varchar(100) NOT NULL,
`description` varchar(200) NOT NULL,
`isbn` varchar(100) NOT
I am going to assume that you are asking this question because
performance has not improved from this change. Is this correct?
I don't think that your surrogate key (id) is useful, but that is
probably minor. I think that your hosting company's suggestion is
probably a good idea, but will also
On Mon, Apr 28, 2008 at 6:49 AM, Rob Wultsch [EMAIL PROTECTED] wrote:
I am going to assume that you are asking this question because
performance has not improved from this change. Is this correct?
I don't think that your surrogate key (id) is useful, but that is
probably minor. I think that
On Mon, April 28, 2008 09:44, Rob Wultsch wrote:
On Mon, Apr 28, 2008 at 6:49 AM, Rob Wultsch [EMAIL PROTECTED] wrote:
I am going to assume that you are asking this question because
performance has not improved from this change. Is this correct?
I don't think that your surrogate key (id) is
On Sun, Apr 27, 2008 at 3:59 AM, Charles Lambach
[EMAIL PROTECTED] wrote:
I always run this query:
SELECT * FROM books WHERE isbn='foo' LIMIT 1
The primary key of this table was 'id', and 'isbn' was an INDEX field.
This sentence could have been better written. If you have a primary
key
Chris, this should already be pretty fast as it is using a primary key in
its entirety, and as long as the index size remains manageable MySQL will be
able to keep it in memory for fast access.
That said, doing away with the aggregate function might speed things up just
slightly.
You don't care
Hello Chris,
Chris [EMAIL PROTECTED] writes:
Surendra Singhi wrote:
I am using MySQL 5.0 and I have 2 tables with a few hundred million
records. To optimize things, I am using MyISAM tables, using the
smallest possible data type, and have set indexes.
Now, the problem which I am facing
Also, when a query takes too long I kill it by shutting down the
server; is this safe, or is there a better alternative (MyISAM tables)?
Use SHOW PROCESSLIST to see the query that is taking more time, and use
KILL <process id>; to terminate the query.
--
Praj
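In concrete terms (the id is hypothetical; take it from the Id column of
the SHOW PROCESSLIST output):

SHOW PROCESSLIST;   -- lists running threads with their Id, Time and query
KILL 12345;         -- terminates only that thread, not the whole server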
Surendra Singhi wrote:
Hello
I have an update operation where I am able to update 10 million records in
approximately 2.5 minutes.
But when I tried to do the same update on, say, 40-50 million records, MySQL
takes forever to finish. It's the same table and the same update operation; I
am just changing the range of ids in the WHERE clause
Surendra Singhi wrote:
Hi,
I am using MySQL 5.0 and I have 2 tables with a few hundred million
records.
To optimize things, I am using MyISAM tables, using the smallest possible
data type, and have set indexes.
Now, the problem which I am facing is that the MySQL process is wasting a lot of
Hi Jesse,
Have you tried the following:
1) ordering by only part.d and seeing how long the query takes
2) putting an index on (part.d, cwGroup.stripped_cw) and seeing how
long the query takes.
1 will help pinpoint the problem, and 2 might actually help.
-Sheeri
SELECT part.d,
Well, normally a DISTINCT has to do a type of sort and is slower than
non-DISTINCT queries. Each field of the result set is considered in the
DISTINCT logic. Can you modify the query so that it does not require the
DISTINCT? Can you post the query?
R.
-Original Message-
From: Stephen P.
Would you need the DISTINCT if you change the query like so?
SELECT Site.Site_ID, Site, Status, Type
FROM Site
JOIN Project ON Site.Site_ID = Project.Site_ID
ORDER BY Site;
You may also want to just try your initial query without the distinct to
see if that is
On 2006-05-01 1:14 PM, Robert DiFalco [EMAIL PROTECTED] wrote:
Would you need the DISTINCT if you change the query like so?
SELECT Site.Site_ID, Site, Status, Type
FROM Site
JOIN Project ON Site.Site_ID = Project.Site_ID
ORDER BY Site;
You may also want to just try your initial query
, 2006 1:27 PM
To: mysql@lists.mysql.com
Subject: Re: Optimizing DISTINCT searches
On 2006-05-01 1:14 PM, Robert DiFalco [EMAIL PROTECTED] wrote:
Would you need the DISTINCT if you change the query like so?
SELECT Site.Site_ID, Site, Status, Type FROM Site JOIN Project ON
Site.Site_ID
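Another way to avoid the DISTINCT when the join is only a filter (a sketch,
not from the thread, and it assumes Status and Type are columns of Site, not
of Project) is an EXISTS subquery, which cannot produce duplicate Site rows
in the first place:

SELECT Site.Site_ID, Site, Status, Type
FROM Site
WHERE EXISTS (SELECT 1 FROM Project WHERE Project.Site_ID = Site.Site_ID)
ORDER BY Site;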
Hi all!
(Sorry for the late reply.)
Puiu Hrenciuc wrote (re-ordered):
Barry [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Puiu Hrenciuc wrote:
Hi,
I have a table ( MyISAM, MySQL 5.0.20, FreeBSD ) containing network
traffic data :
record_time datetime - time when recording
First of all, thanks everyone for your answers.
Second, I'll explain what I managed to do about this issue;
maybe someone else will need it in the future.
So, first I started by changing the way I store the `ip`
field, from varchar(15) to int unsigned, and populated this field with
the 4
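For reference, a sketch of that conversion using MySQL's built-in helpers
(the table name is hypothetical):

ALTER TABLE traffic ADD COLUMN ip_num INT UNSIGNED;
UPDATE traffic SET ip_num = INET_ATON(ip);      -- '10.0.0.1' -> 167772161
SELECT INET_NTOA(ip_num) FROM traffic LIMIT 5;  -- back to dotted form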
2006/4/13, Puiu Hrenciuc [EMAIL PROTECTED]:
Hi,
I have a table ( MyISAM, MySQL 5.0.20, FreeBSD ) containing network traffic
data :
record_time datetime - time when recording was added
ip char(15) - ip that generated the traffic
type tinyint(3) - traffic type ( 1 - local, 2 - internet )
Puiu Hrenciuc wrote:
Hi,
I have a table ( MyISAM, MySQL 5.0.20, FreeBSD ) containing network traffic
data :
record_time datetime - time when recording was added
ip char(15) - ip that generated the traffic
type tinyint(3) - traffic type ( 1 - local, 2 - internet )
inbound int(10) - in bytes
Hmmm, I have omitted that:
I also have an index on (ip,type), in that order
Barry [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Puiu Hrenciuc wrote:
Hi,
I have a table ( MyISAM, MySQL 5.0.20, FreeBSD ) containing network
traffic data :
record_time datetime - time when
Hello.
Taking into account the nature of your queries, I can point you to
these links:
http://dev.mysql.com/doc/refman/5.0/en/query-speed.html
http://dev.mysql.com/doc/refman/5.0/en/limit-optimization.html
If some of your fields have only two values ('Yes', 'No') you can
switch to
Hello.
You will get much more help from the list if you provide the current
table structure, a short description of the information which is stored
in those fields, and the query you want to optimize (most probably it
takes much more time than the others).
If point 1 is true then is there a way to
Sorry guys,
here is a table description, query and result:
SHOW CREATE TABLE Properties;
Properties |CREATE TABLE `Properties` (
`Property_ID` int(11) NOT NULL auto_increment,
`Insertion_Date` date default NULL,
`Status` varchar(20) default NULL,
`Uploader_ID` int(11) default NULL,
Hello.
mysqldump has a --compress option, which can help in case
the network is the main bottleneck.
I'm using the typical --opt with mysqldump, over the internet.
The problem I'm having is that the dump takes 30 mins or so, and during that
time, every table in the db is locked (so as not to
I tried -C (--compress), but it still took a while, since the resulting .sql
file is 218 MB.
I am thinking I will have to write a function which logs into an ssh server
inside the local area network that the db server is on, and hopefully the
bottleneck of the internet will disappear, only
Very easy, man.
Use this script...
http://www.silisoftware.com/scripts/?scriptname=backupDB
It's powerful.
Please feel free to reply to this email if you have additional questions or
concerns
01-7499093
Manager
Easyhorpak.com
If your server is Linux-based, try running mysqldump locally on the
server via ssh. Then you can zip it up and transfer it over.
=C=
|
| Cal Evans
| http://www.calevans.com
|
Anthony Ettinger wrote:
I'm using the typical --opt with mysqldump, over the internet.
The problem I'm having is the
Kishore Jalleda [EMAIL PROTECTED] wrote on 10/13/2005 02:25:52 PM:
Hi All,
I have a query which takes approximately 0.5 seconds to execute; it is as
follows
SELECT gamename, MAX(score) AS score, COUNT(valid=1) AS played FROM
gamesessions AS gamesessions
WHERE valid=1 AND sessiontype IN
That was an excellent reply. I always see you helping so many people; keep
up the great work.
Sincerely,
Kishore Jalleda
On 10/13/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Kishore Jalleda [EMAIL PROTECTED] wrote on 10/13/2005 02:25:52 PM:
Hi All,
I have a query which takes
Devananda [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Dan Baker wrote:
Eric Bergen [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
When you add that index, are more than 30% of the rows in the table
DateTimeNext > 1126215680?
There are currently 28.53% of the rows
When you add that index, are more than 30% of the rows in the table
DateTimeNext > 1126215680?
Dan Baker wrote:
I have lots of tables that are similar in nature:
id int(11) PRI NULL auto_increment
Name varchar(30)
DateTimeNext int(11)
The DateTimeNext field represents when this records needs
Eric Bergen [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
When you add that index, are more than 30% of the rows in the table
DateTimeNext > 1126215680?
There are currently 28.53% of the rows that have DateTimeNext > 1126215680.
Does this mean something of interest? If so, what?
Thanks
Dan Baker wrote:
Eric Bergen [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
When you add that index, are more than 30% of the rows in the table
DateTimeNext > 1126215680?
There are currently 28.53% of the rows that have DateTimeNext > 1126215680.
Does this mean something of interest?
Michael Monashev [EMAIL PROTECTED] wrote on 07/26/2005 04:41:28 PM:
Hello,
How do I optimize a != statement?
SELECT * FROM table1 WHERE char_255_column != '';
How can I quickly select all rows where a CHAR(255) NOT NULL column is not
empty?
Sincerely,
Michael,
You could try
WHERE
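The reply is cut off here; one plausible completion (a sketch, not the
author's confirmed suggestion) is to turn the non-indexable != into a range
predicate that an index on char_255_column can serve:

SELECT * FROM table1 WHERE char_255_column > '';
-- every non-empty string sorts after '', so for a NOT NULL column this
-- matches the same rows, but as an index range scan rather than a full scan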
David,
David Griffiths wrote:
The manual is a little unclear on per-table-tablespaces in 4.1/5.0
(http://dev.mysql.com/doc/mysql/en/multiple-tablespaces.html)
Using per-table-tablespaces ignores the innodb_data_file_path (yes, it
uses it for the ibdata files, but not for the
Frank, thanks for the reply.
I'd thought of that, but I was worried about using HotBackup (we use
this for all of our backups on our production machines) - the hot backup
manual at http://www.innodb.com/manual.php doesn't mention if it can
follow a symlink to the data file.
Thanks for the
mos wrote:
At 07:01 PM 3/10/2005, Chris W. Parker wrote:
Homam S.A. mailto:[EMAIL PROTECTED]
on Wednesday, March 09, 2005 5:09 PM said:
If your tables are mostly read-only, you could
pre-generate page numbers on a periodic basis and
select only specific ranges WHERE row_number BETWEEN
At 09:18 AM 3/11/2005, sam wun wrote:
mos wrote:
At 07:01 PM 3/10/2005, Chris W. Parker wrote:
Homam S.A. mailto:[EMAIL PROTECTED]
on Wednesday, March 09, 2005 5:09 PM said:
If your tables are mostly read-only, you could
pre-generate page numbers on a periodic basis and
select only specific
--- mos [EMAIL PROTECTED] wrote:
Correct, but the speed difference will more than
make up for it. If you
have a busy website, it is very important not to
overtax the database
server with frills, like the number of rows found.
If you want to have the
record counts, then you will have to
Homam S.A. mailto:[EMAIL PROTECTED]
on Wednesday, March 09, 2005 5:09 PM said:
If your tables are mostly read-only, you could
pre-generate page numbers on a periodic basis and
select only specific ranges WHERE row_number BETWEEN
page_start AND page_finish.
How about just getting the
At 07:01 PM 3/10/2005, Chris W. Parker wrote:
Homam S.A. mailto:[EMAIL PROTECTED]
on Wednesday, March 09, 2005 5:09 PM said:
If your tables are mostly read-only, you could
pre-generate page numbers on a periodic basis and
select only specific ranges WHERE row_number BETWEEN
page_start AND
Does the app display all 1000 rows at once? Does your app require all
fields? Only retrieve what you need for the page. If the app displays all
1000 rows, it may remain slow depending on how you get them (order, group,
function) and indexing.. Also, the link and disk may matter depending on
the
difficult to suggest optimizations. Mysql does have a
great section in their manual re: optimizing select queries.
Scott.
On Wed, 9 Mar 2005 16:10:19 -0500, Peter J Milanese [EMAIL PROTECTED] wrote:
Does the app display all 1000 rows at once? Does your app require all
fields? Only retrieve what you
Carlos,
Apart from using the proper indexes, you should only retrieve the
number of rows that are going to be displayed on the page at one time. If
you are displaying 10 rows on the page, why retrieve 1000 rows? The user
would have to page through it 100x and you have to ask yourself,
If your tables are mostly read-only, you could
pre-generate page numbers on a periodic basis and
select only specific ranges WHERE row_number BETWEEN
page_start AND page_finish.
Or you could just send the top 1000 IDs of the table
to the client, and have the client figure out which
IDs belong to
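A sketch of the pre-generated page-number idea (table and column names are
hypothetical; the user-variable trick numbers the rows in id order):

ALTER TABLE items ADD COLUMN row_number INT UNSIGNED;
SET @r := 0;
UPDATE items SET row_number = (@r := @r + 1) ORDER BY id;
CREATE INDEX idx_row_number ON items (row_number);
-- page 5 at 20 rows per page:
SELECT * FROM items WHERE row_number BETWEEN 81 AND 100;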
Unfortunately this doesn't work well if you want to
tell your users how many pages were found in the
query. Sure, you could use SQL_CALC_FOUND_ROWS with
FOUND_ROWS(), but this will defeat the purpose of the
LIMIT clause -- speed -- because MySQL will have to
figure out all the results of the
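For clarity, this is the pattern being discussed (the table name is
hypothetical):

SELECT SQL_CALC_FOUND_ROWS id, title
FROM articles
ORDER BY id
LIMIT 20;
SELECT FOUND_ROWS();  -- total matching rows, ignoring the LIMIT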
At 07:19 PM 3/9/2005, you wrote:
Unfortunately this doesn't work well if you want to
tell your users how many pages were found in the
query.
Correct, but the speed difference will more than make up for it. If you
have a busy website, it is very important not to overtax the database
server with
I have a question about:
If you want to regain some of the space used by the INNODB file you
will have to convert all INNODB tables to MYISAM (or dump them to
a SQL file), recreate the INNODB file (s) and then recreate the
original INNODB tables.
So, just to be clear, is this the right
Look at
Created_tmp_disk_tables 14768
Created_tmp_tables 269520
Created_tmp_files 3
Increase tmp_table_size to 64M:
it's used to stop some internal MySQL operations from going to disk.
Handler_read_rnd_next 58229817
Your tables are not
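A minimal my.cnf sketch for that change (the effective in-memory
temporary-table limit is the smaller of tmp_table_size and
max_heap_table_size, so raise both):

[mysqld]
tmp_table_size      = 64M
max_heap_table_size = 64M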
Shaun wrote:
Hello,
I was wondering if a more knowledgeable person could help me out with my
configuration and let me know how I could further optimize MySQL. Here's the
hardware on my dedicated server:
Processor #1 Vendor: GenuineIntel
Processor #1 Name: Intel(R) Pentium(R) 4 CPU 2.40GHz
Thanks for your help Dathan, I will make the required changes. I just have
one other problem.
I'm not sure which queries don't use an index. I've attempted to turn on the
slow query log, but nothing ever shows up in the file. Here's what I have
in the my.cnf
[mysqld]
skip-locking
set-variable
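For comparison, a minimal slow-log setup from that era looks like this
(path hypothetical; note the log file must be writable by the mysql user,
which matches the permissions fix reported further down the thread):

[mysqld]
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 2
log-queries-not-using-indexes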
Shaun wrote:
Thanks for your help Dathan, I will make the required changes. I just have
one other problem.
I'm not sure which queries don't use an index. I've attempted to turn on the
slow query log, but nothing ever shows up in the file. Here's what I have
in the my.cnf
[mysqld]
skip-locking
Thanks, I changed the permissions, restarted, and everything is working
great now. Thanks a lot.
Shaun wrote:
Thanks for your help Dathan, I will make the required changes. I just
have
one other problem.
I'm not sure which queries don't use an index. I've attempted to turn on
the
Razor Fish mailto:[EMAIL PROTECTED]
on Thursday, October 21, 2004 3:19 PM said:
I need consulting help for optimizing a database with
1.2 million records to handle 3-4 million hits a day.
This is going to be a 'duh' question on my part, but I just had to ask
anyway.
My largest table
easily run into billions of records.
J.R.
-Original Message-
From: Chris W. Parker [mailto:[EMAIL PROTECTED]
Sent: Monday, October 25, 2004 3:41 PM
To: [EMAIL PROTECTED]
Subject: OT: table size WAS RE: optimizing database
Razor Fish mailto:[EMAIL PROTECTED]
on Thursday, October 21
Hey,
I'm interested. I currently run a few websites with tables of this
size and larger.
Thanks.
On Thu, 21 Oct 2004 15:19:16 -0700 (PDT), Razor Fish [EMAIL PROTECTED] wrote:
I need consulting help for optimizing a database with
1.2 million records to handle 3-4 million hits a day.
if
Hi,
According to the manual -
http://dev.mysql.com/doc/mysql/en/OPTIMIZE_TABLE.html
http://dev.mysql.com/doc/mysql/en/InnoDB_File_Defragmenting.html
running a null ALTER statement (ALTER TABLE tbl_name TYPE=InnoDB;) will
rebuild the table, thus optimizing the way the table is written to the
I think you are stuck. I can't think of any way to create an index on just
a piece of a column's data. I believe you need to create separate columns
of data for each portion of the date you want to search and index those.
That could mean up to 6 additional columns of data for your table,
chastang wrote:
select * from my_table where hour(dt)= 0
or
select * from my_table where month(dt) = 6
What index should I create to optimize selects on these sorts of queries?
An index isn't going to help you there unless you create separate
columns for hour and month. The columns will be
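As a sketch of that suggestion (the new column names are hypothetical):

ALTER TABLE my_table
  ADD COLUMN dt_hour  TINYINT UNSIGNED,
  ADD COLUMN dt_month TINYINT UNSIGNED;
UPDATE my_table SET dt_hour = HOUR(dt), dt_month = MONTH(dt);
CREATE INDEX idx_dt_hour  ON my_table (dt_hour);
CREATE INDEX idx_dt_month ON my_table (dt_month);
-- the original queries then become index-friendly:
SELECT * FROM my_table WHERE dt_hour = 0;
SELECT * FROM my_table WHERE dt_month = 6;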
At 03:49 PM 6/24/2004, you wrote:
Hello.
My question concerns a large data table that has a DATETIME column called
dt. Specifically, I am often having to do selects based on date and time
functions that look like this:
select * from my_table where hour(dt)= 0
or
select * from my_table where
I found it interesting. Thanks Eric.
(reconstructed from archives; I accidentally deleted the copy in my mailbox)
Daniel Clark writes:
I don't see how LIMIT would make a difference. LAST_INSERT_ID() only
returns one record.
But it's worth trying in a big loop to get timing numbers.
Well, I
LAST_INSERT_ID is connection-specific, not table-specific. When you say,
SELECT LAST_INSERT_ID() FROM foo;
you are asking for the LAST_INSERT_ID to be repeated once for each row of
table foo. This is analogous to
SELECT NOW() FROM foo;
or
SELECT 2 FROM foo;
Adding a LIMIT would work, but
I don't see how LIMIT would make a difference. LAST_INSERT_ID() only returns one
record.
But it's worth trying in a big loop to get timing numbers.
I have a rather simple straightforward question that I was not able to find
in any of the documentation.
I am in the process of writing an
(reconstructed from archives; I accidentally deleted the copy in my mailbox)
Daniel Clark writes:
I don't see how LIMIT would make a difference. LAST_INSERT_ID() only
returns one record.
But it's worth trying in a big loop to get timing numbers.
Well, I decided to test this all out and see what
data points. I don't want duplicate entries, mostly due to sections of
the log accidentally being uploaded twice. I am currently doing a
Ok, so it is EXACTLY the same data that might be inserted twice?
- Make a UNIQUE index for the relevant column(s) that uniquely identify a
record.
- Use
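The advice is cut off after "Use"; presumably it continued with INSERT
IGNORE (or REPLACE). A sketch with hypothetical table and column names:

CREATE UNIQUE INDEX uniq_entry
ON log_data (logged_at, source_host, line_hash);
INSERT IGNORE INTO log_data (logged_at, source_host, line_hash, line)
VALUES ('2005-01-01 00:00:00', 'host1', MD5('raw log line'), 'raw log line');
-- duplicate rows are silently skipped instead of raising an error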