On Thu, May 15, 2014 at 11:01 AM, Johan De Meersman wrote:
> - Original Message -
>> From: "Larry Martell"
>> Subject: Re: Performance boost by splitting up large table?
>>
>> This table is queried based on requests from the users. There are 10
>> different lookup columns they can specify, and they can provide any or
On Thu, May 15, 2014 at 4:14 AM, Johan De Meersman wrote:
>
> You've already had some good advice, but there's something much simpler
> that will also give you a significant boost: a covering index.
>
> Simply put, the engine is smart enough to not bother with row lookups if
> everything you asked for is already in the index it was using. You'll need to
> kee
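A minimal sketch of what such a covering index looks like; the table and column names are assumptions, since the schema is not quoted in this snippet:

-- If the frequent query is
--   SELECT col_a, col_b FROM big_table WHERE lookup1 = ? AND lookup2 = ?;
-- an index containing the filter columns followed by the selected columns
-- lets MySQL answer from the index alone (EXPLAIN shows "Using index"):
ALTER TABLE big_table ADD INDEX idx_cover (lookup1, lookup2, col_a, col_b);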
Hi Larry,
On May 14, 2014, at 5:05 AM, Larry Martell wrote:
> We have a table with 254 columns in it. 80% of the time, a very small
> subset of these columns are queried. The other columns are rarely, if
> ever, queried. (But they could be at any time, so we do need to
> maintain them.). Would I
Hi,
You could split the table into two and avoid code changes by creating a
view which matches what the code is looking for.
I think loading a few fields vs. 254 into memory will make a difference, but if
your select statement only has the specific fields you want and not the whole
row (and also given t
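A hypothetical sketch of that split-plus-view idea, since the real table and column names were never posted:

-- Hypothetical names throughout; only the idea is from the thread.
-- 1. Move the rarely used columns into a side table keyed on the same id.
CREATE TABLE data_cold (
  id        BIGINT UNSIGNED NOT NULL PRIMARY KEY,
  rare_col1 VARCHAR(255),
  rare_col2 VARCHAR(255)
  -- ... the other rarely queried columns ...
);
INSERT INTO data_cold (id, rare_col1, rare_col2)
  SELECT id, rare_col1, rare_col2 FROM data;
ALTER TABLE data DROP COLUMN rare_col1, DROP COLUMN rare_col2;
-- 2. A view that re-joins the two halves presents the old 254-column shape,
--    so application code that expects it does not have to change.
CREATE VIEW data_full AS
SELECT d.*, c.rare_col1, c.rare_col2
FROM data AS d LEFT JOIN data_cold AS c USING (id);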
We have a table with 254 columns in it. 80% of the time, a very small
subset of these columns are queried. The other columns are rarely, if
ever, queried. (But they could be at any time, so we do need to
maintain them.). Would I expect to get a marked performance boost if I
split my table up into 2
thx all, the source data is in a text file.
- Original Message -
From: "h...@tbbs.net"
To: mysql list
Cc:
Sent: Saturday, April 6, 2013 8:02 PM
Subject: Re: update a row only if any column has changed, in a very large table
>>>> 2013/04/06 13:56 -0700, Rajee
2013/04/06 13:56 -0700, Rajeev Prasad
I have a table with around 2,000,000 records (15 columns). I have to sync this
from an outside source once every day. Not all records are changed/removed/
newly added every day. So what is the best way to update only those which have
changed/added/or
uch looks like a nail.
-Original Message-
From: Rajeev Prasad [mailto:rp.ne...@yahoo.com]
Sent: Saturday, April 06, 2013 3:57 PM
To: mysql list
Subject: update a row only if any column has changed, in a very large table
Hello,
>
>I have a table with around 2,000,000 records (15 columns). I have to sync this
>from an outside source once every day. Not all records are changed/removed/
>newly added every day. So what is the best way to update only those which have
>changed/added/or deleted?
>
>I can use update_or_crea
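The question is cut off at update_or_crea; one common pattern for this kind of daily full-file sync (only a sketch here, with assumed table name, column names and file path) is to load the file into a staging table and let MySQL work out the differences:

-- Assumes `main` has PRIMARY KEY (id) and the dump contains the full data set.
CREATE TABLE staging LIKE main;
LOAD DATA LOCAL INFILE 'daily_dump.txt' INTO TABLE staging;

-- New rows are inserted; existing rows are updated. Rows whose values are
-- unchanged are left untouched (affected-rows is 0 for them).
INSERT INTO main (id, col1, col2)
  SELECT id, col1, col2 FROM staging
ON DUPLICATE KEY UPDATE
  col1 = VALUES(col1), col2 = VALUES(col2);   -- repeat for the other columns

-- Rows that disappeared from the source file:
DELETE m FROM main AS m
LEFT JOIN staging AS s ON s.id = m.id
WHERE s.id IS NULL;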
> First, check the queries and add indexes if required. Check whether queries using
> this table can be fine-tuned. Check whether the table is getting locked.
>
> If the size of the table is a problem and the table type is InnoDB, check the
> innodb_file_per_table option. Also have a look at partitioning.
>
>
> On Fri, Jan 14, 2011 at 4:18 AM, Sairam Krishnamurthy
> wrote:
>>
>> All,
>>
Partitioning will help you.
Krishna
On Fri, Jan 14, 2011 at 4:18 AM, Sairam Krishnamurthy
wrote:
> All,
>
> I have a very large table. It has about 1 billion rows. Initially
> everything was fine. But now the table is a bit slow. Loading takes a lot of
> time. I usually load in chunk
All,
I have a very large table. It has about 1 billion rows. Initially
everything was fine. But now the table is a bit slow. Loading takes a lot
of time. I usually load in chunks of 22 million rows.
Is the size of the table in any way related to the performance? I am not
sure about this.
Will
I have to delete old records from a very large table (1.6 billion rows)
in a stored procedure.
CREATE TABLE mytable(
id BIGINT(20) UNSIGNED NOT NULL DEFAULT 0,
unix_time INT(11) NOT NULL DEFAULT 0,
value DOUBLE (20, 4) NOT NULL DEFAULT 0.,
UNIQUE INDEX history_1 USING BTREE (id
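The schema is cut off above; for a purge like this, the usual shape of the stored procedure is a loop of small DELETE batches, so no single statement has to lock or log a large slice of the 1.6 billion rows at once. A sketch only; the cutoff parameter, batch size and the assumption of an index covering unix_time are not from the original post:

DELIMITER //
CREATE PROCEDURE purge_old_rows(IN cutoff INT)
BEGIN
  REPEAT
    -- assumes an index that lets this range be found cheaply (e.g. on unix_time)
    DELETE FROM mytable WHERE unix_time < cutoff LIMIT 10000;
  UNTIL ROW_COUNT() = 0 END REPEAT;
END//
DELIMITER ;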
>From: ?? [mailto:tx...@hotmail.com]
>Sent: Wednesday, June 02, 2010 6:14 AM
>To: mysql@lists.mysql.com
>Subject: RE: large table issue
>
>
>Hi guys,
>
>here's the explain of a query on this table
>
>EXPLAIN SELECT COUNT(game_log_id) AS sum2 FROM game_log AS g, playe
Hi,
MySQL Partitioning will help you a lot.
Try it.
Regards,
Krishna
2010/6/2 曹凯
>
> Hi all,
>
> do you guys know how to deal with the large tables?
>
> here's my problem:
>
> I have two web servers( running Nginx ) , two DB servers( running MySQL
> 5.1.35 ) and a server for load balancing.
>
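Since partitioning is the suggestion here (and in several other replies in these threads), a minimal sketch of RANGE partitioning for a log-style table like the game_log discussed in this thread; all column names, types and boundaries are assumptions:

-- Hypothetical sketch: monthly partitions let old months be removed with
-- ALTER TABLE ... DROP PARTITION and keep most scans to recent partitions.
CREATE TABLE game_log_p (
  game_log_id BIGINT UNSIGNED NOT NULL,
  player_id   INT UNSIGNED NOT NULL,
  event_id    INT UNSIGNED NOT NULL,
  logged_at   DATETIME NOT NULL,
  PRIMARY KEY (game_log_id, logged_at)   -- the partition column must be part of every unique key
)
PARTITION BY RANGE (TO_DAYS(logged_at)) (
  PARTITION p201005 VALUES LESS THAN (TO_DAYS('2010-06-01')),
  PARTITION p201006 VALUES LESS THAN (TO_DAYS('2010-07-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);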
Hi guys,
here's the EXPLAIN of a query on this table:
EXPLAIN SELECT COUNT(game_log_id) AS sum2 FROM game_log AS g, player AS p WHERE
g.player_id = p.player_id AND g.game_id=p.game_id=27 AND p.type=1 AND
g.event_id = 32
-> ;
[truncated EXPLAIN output]
Show me your table's indexes and the SQL of your slow queries.
On 2010-6-2, at 5:08 PM, 曹凯 wrote:
>
> Hi all,
>
> do you guys know how to deal with the large tables?
>
> here's my problem:
>
> I have two web servers( running Nginx ) , two DB servers( running MySQL
> 5.1.35 ) and a server for load balancing.
>
> What I'm maintaining is a
Hi,
Can you please send us the query along with the EXPLAIN output?
Also, have you thought of partitioning the data?
regards
anandkl
2010/6/2 曹凯
>
> Hi all,
>
> do you guys know how to deal with the large tables?
>
> here's my problem:
>
> I have two web servers( running Nginx ) , two DB servers( run
Hi all,
do you guys know how to deal with large tables?
here's my problem:
I have two web servers (running Nginx), two DB servers (running MySQL 5.1.35)
and a server for load balancing.
What I'm maintaining is a game data tracking system. There's a game_log table
which will record al
Nathan Harmston wrote:
> Hi everyone,
>
> I am currently working on an application where I have a very large
> table called intervals ( atm its 80 000 000 records and growing ), and
> a smaller table ( token ) which join with it.
>
> interval is just an id, start, en
Hi everyone,
I am currently working on an application where I have a very large
table called intervals (at the moment it's 80,000,000 records and growing), and
a smaller table (token) which joins with it.
interval is just an id, start, end, word
token is id, interval_id, type, processed_by
There is a
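A sketch of the two tables as described, mainly to show the index on the join column that a join against an 80-million-row table depends on; the column types and index name are assumptions:

CREATE TABLE intervals (
  id      INT UNSIGNED NOT NULL PRIMARY KEY,
  `start` INT UNSIGNED NOT NULL,
  `end`   INT UNSIGNED NOT NULL,
  word    VARCHAR(255) NOT NULL
);
CREATE TABLE token (
  id           INT UNSIGNED NOT NULL PRIMARY KEY,
  interval_id  INT UNSIGNED NOT NULL,
  type         VARCHAR(32),
  processed_by VARCHAR(64),
  KEY idx_interval (interval_id)   -- without this, joining to intervals scans token
);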
On Tue, Aug 18, 2009 at 5:08 AM, Simon Kimber wrote:
> I have indexes on siteid, datestamp and msgtype.
>
> Queries such as the following are constantly appearing in the slow
> queries log:
>
> SELECT * FROM enquiries WHERE siteid = 59255 AND msgtype = 0 ORDER BY
> datestamp DESC LIMIT 5;
Read the
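The reply is cut off above; for the record, separate single-column indexes cannot serve both the WHERE and the ORDER BY of that query, and the usual fix is one composite index. A sketch, with only the index name assumed:

ALTER TABLE enquiries ADD INDEX idx_site_type_date (siteid, msgtype, datestamp);
-- MySQL can then filter on siteid and msgtype and read rows already ordered
-- by datestamp, so ORDER BY datestamp DESC LIMIT 5 stops after five rows.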
Simon Kimber wrote:
> Hi Everyone,
>
> I have a very simple query that often takes several seconds to run and I
> would be hugely grateful for any advice on how I might speed this up.
>
> The table contains around 500k rows and the structure is as follows:
>
> [table structure truncated]
Hi Everyone,
I have a very simple query that often takes several seconds to run and I
would be hugely grateful for any advice on how I might speed this up.
The table contains around 500k rows and the structure is as follows:
[table structure truncated]
Actually, that buffer is only for sorting queries that use an ORDER BY.
Only MyISAM tables can build an index via sorting, and the
configuration parameter for that is myisam_sort_buffer_size.
If the table is InnoDB, let it complete, no matter how long it takes.
If you kill it, it will have to
Hi, before creating the index,
set this at your mysql prompt:
set sort_buffer_size=1;
This will let the ordering of the index happen in memory, and index creation
will be faster.
regards
anandkl
On 8/11/07, Mike Zupan <[EMAIL PROTECTED]> wrote:
>
> Yes it will take a long time depending o
Yes it will take a long time depending on the size of the index also
On 8/10/07, x x <[EMAIL PROTECTED]> wrote:
>
> I'm attempting to add an index to a table with 70
> million rows.
>
> DB server running on a reasonably fast Intel Mac on a
> relatively slow drive.
>
> Should this take hours to com
I'm attempting to add an index to a table with 70
million rows.
DB server running on a reasonably fast Intel Mac on a
relatively slow drive.
Should this take hours to complete?
So far it's taken 5 hours and counting. I'm afraid
it's going to destroy my drive!
I have a large InnoDB table which I want to add a TIMESTAMP column to
and a new index. It's actually a few tables and in some cases I only
want to index the new column but in others it's a multi-key index with
the new column and another column.
The problem is that this is taking many, many hours
tgID WHERE
ctg='' GROUP BY ctg;
- Original Message -
From: "Shadow" <[EMAIL PROTECTED]>
To:
Sent: Thursday, March 29, 2007 6:59 PM
Subject: Querying large table
Hey, guys.
I have 2 tables: categories and items.
COUNT(*) categories = 63 833
COUNT(*) ite
In news:[EMAIL PROTECTED],
Shadow <[EMAIL PROTECTED]> wrote:
> I need to get number of items in a specific category, so I use
> SELECT COUNT(*) FROM items WHERE ctg=''
>
> But each query takes ~10 seconds.
> It's really slow.
You may add an index on `items`.`ctg` if there is none.
Maciek
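A concrete form of Maciek's suggestion (the index name is an assumption):

ALTER TABLE items ADD INDEX idx_ctg (ctg);
-- COUNT(*) ... WHERE ctg = '...' can then be answered from the index
-- instead of scanning all ~743,000 rows.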
It's doing a table scan on items because there isn't an
index defined for ctg on the items TABLE.
Again, just one man's thought.
-Original Message-
From: Michael Gargiullo [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 29, 2007 6:10 PM
To: Shadow; mysql@lists.mysql.com
Subject:
Hey, guys.
I have 2 tables: categories and items.
COUNT(*) categories = 63 833
COUNT(*) items = 742 993
I need to get the number of items in a specific category, so I use
SELECT COUNT(*) FROM items WHERE ctg=''
But each query takes ~10 seconds.
It's really slow.
Can anybody propose some optimization?
MySQL will rebuild all indexes in parallel by assigning an index to be
rebuilt on a dedicated thread. I think this feature is currently in beta in
MySQL 5.1.
The suggestion I made is in all probability the best way to handle your large table.
It is, therefore, imperative that you tune your server to
1) create table T1 like T;
This creates an empty table T1 with indexes ndx1,ndx2,ndx3 and ndx4.
2) alter table T1 drop index ndx3;
This drops index ndx3 on the empty T1, which should be instantaneous.
3) insert into T1 select * from T;
This will populate table T1 and load all three (3) indexes for
.
Just make sure that empty table T1 has all the indexes you want before loading.
Please try this method and let me know how it worked for you.
Thank you for reading this whole thing.
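The same recipe as a single block, using the example names T, T1 and ndx3 from the post; the final table swap is not visible in the truncated text and is only the presumed last step:

CREATE TABLE T1 LIKE T;           -- empty copy with indexes ndx1..ndx4
ALTER TABLE T1 DROP INDEX ndx3;   -- instantaneous on the empty table
INSERT INTO T1 SELECT * FROM T;   -- reload the data, building the remaining indexes
RENAME TABLE T TO T_old, T1 TO T; -- presumed final step, not shown in the archived post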
- Original Message -
From: Chris Wagner (GE Infra, Non-GE, US) <[EMAIL PROTECTED]>
To: bowen <[EMAIL
A workaround is to use mysqlhotcopy to snapshot the table and also only copy
the header to the MYI file. Then delete the original and rename the copy back
to the original. This will effectively drop all indexes and should take no
more time than what the disk takes to copy the .my* files.
Why does MySQL drop an index very, very slowly on a large table?
I have a large table with more than 5M rows, and many indexes. Now I
want to drop some of them. But it seems that MySQL cannot handle this
very well. It takes a very long time (more than half an hour) to do,
which is unbearable
There must be something wrong with your settings in the my.cnf
file. For 3 million records, 24 hrs is bad :(
> myisam_sort_buffer_size value in my.cnf ?
Also you can approximately check the status by doing a file size check on the
table in the database dir and the temporary table created in the data ba
I have a table with about 3 million rows and I'm having trouble altering it
to add 4 more columns. I'm using the MySQL Administrator and I've left it
more than 24 hours and it's still not finished. I was going to try using a
login to the console next, but is there a better way? Any advice much
hile before management approves hardware
improvements.
Could anyone please advise whether MySQL really needs to store the entire
index in memory?
Or is there some way to configure this?
Thanks very much!
Well, you'd still have to use LIMIT and OFFSET with your search table.
Would you store a different table for each unique query? That sounds
like a lot of [temporary?] tables.
Are you doing more than 3-4 table joins on this one fulltext search
query? If not, it's probably more work.
If your q
Hi Sheeri:
Yes, you are misunderstanding my question. I certainly know how to limit
my resultset to certain rows. I'm asking more about the efficiency of
searching large volumes of data. Is making a search table like vBulletin
does a good mechanism to avoid resource contention on the main tabl
Sounds like you want LIMIT and OFFSET --
everything after my name and before your post is copied from the doc at
http://dev.mysql.com/doc/refman/4.1/en/select.html
(or am I misunderstanding your question?)
-Sheeri
The LIMIT clause can be used to constrain the number of rows returned
by the SEL
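A small example of the LIMIT syntax being pointed to, with hypothetical table and column names (page 3 of a result set at 20 rows per page, i.e. an offset of 40):

SELECT doc_id, title
FROM results
ORDER BY score DESC
LIMIT 40, 20;        -- skip 40 rows, return the next 20
-- equivalent form: LIMIT 20 OFFSET 40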
All:
I am developing a search engine and using MySQL as the backend database
management system. Under normal circumstances, when users search through
large volumes of records on a search engine site, the results are broken
down into pages. When a user clicks on the 'Next' link, the system will
r
Hello.
If you're not using per-file tablespaces, running OPTIMIZE on the table
most probably won't decrease the size of the ibdata files. See:
http://dev.mysql.com/doc/refman/5.0/en/adding-and-removing.html
about how to decrease the size of the tablespace. OPTIMIZE table
is mapped to ALTER TABLE for I
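For later readers, the sequence being described, sketched with a hypothetical table name; with the shared ibdata tablespace the freed space is only reused internally and never returned to the filesystem:

-- With innodb_file_per_table enabled in my.cnf (servers of that era needed a
-- restart for it to take effect), rebuilding moves the table into its own
-- .ibd file sized to the remaining data:
ALTER TABLE big_table ENGINE=InnoDB;   -- what OPTIMIZE TABLE maps to for InnoDB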
Hello.
We had an InnoDB table of a very large size, 47 GB. Recently we deleted a
lot of rows from the table in order to reduce the size. The number of rows
now stands at 164,000+; however, the disk usage remains the same. I tried
OPTIMIZEing the table, but the optimize fails after running for abou
> > It's probably a case of not having the cardinality of indexes right and
> > thus making wrong decisions for queries.
> - Currently there is not a single query in the application that does not use
> the correct index. We only have key-reads. Which would mean that MySQL is
> creating these incorre
Hi Jigal, Mathias,
Thanks for the time you took to reply to my issues!
I would like to clear up some things.
> It's probably a case of not having the cardinality of indexes right and
> thus making wrong decisions for queries.
- Currently there is not a single query in the application that does n
k using or
not of internal temporary tables, ...
Mathias
-Original Message-
From: Jigal van Hemert [mailto:[EMAIL PROTECTED]
Sent: Sunday, 24 April 2005 17:05
To: Almar van Pel; mysql@lists.mysql.com
Subject: Re: Performance issues when deleting and reading on large table
From: &quo
, but cannot
> understand,
> why this action is so heavy for the system.
Deleting a lot of records will have impact on the indexes, so it's quite a
job. The inserts/updates/deletes will also block the table for reading in
case of MyISAM.
Such a large table in a high concurrency situ
Hi,
I'm managing a large free website running on Linux, with MySQL 4.0.23 and
2 GB memory (PIV 2.6 GHz).
MySQL is configured to have a 382M key buffer.
There are two problems I have, which have to do with a large table. This
table contains 5 million records
and is the core of our applic
Not a bad suggestion... but when I try it, I get the following output:
Checking MyISAM file: theTable
Data records: 22906970 Deleted blocks: 0
myisamchk: warning: Table is marked as crashed and last repair failed
- check file-size
myisamchk: error: Size of indexfile is: 2049552384Sh
On Wednesday 06 April 2005 20:05, jon wrote:
> Normal recovery seems to grab 490 rows... but, originally there were
> some 22 million rows in there.
Seems your data file was corrupted too, not only the indexes, and it probably broke
when updating the 491st record... try using myisamchk -e
-e, --ext
Hey folks...
While we weren't paying attention, one of the tables we were logging to
got big. Really big... like over 2 gigs... and then the server crashed
and the data became corrupt.
'Course, we'd like that data back...
Normal recovery seems to grab 490 rows... but, originally there were
some
ow that better than any of us.
>
> A query like this should take no longer than 0.1 seconds to execute in most
> cases, even with a few gigs of data.
>
> Doonny
>
> > -Original Message-
> > From: news [mailto:[EMAIL PROTECTED] On Behalf Of Rob Keeling
ailto:[EMAIL PROTECTED] On Behalf Of Rob Keeling
> Sent: Thursday, December 23, 2004 11:55 AM
> To: mysql@lists.mysql.com
> Subject: Optimising a query on a large table.
>
> I have a 152MB MyISAM table that I am trying to execute a simple select
> statement on,
> I need to
Hi Rob!
On Thu, 2004-12-23 at 18:54, Rob Keeling wrote:
> I have a 152MB MyISAM table that I am trying to execute a simple select
> statement on,
> I need to retrieve all rows with a given index, sorted by date.
>
> This is taking a very long period of time to execute.
Can you give the time
"kernel" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Rob,
> Have you run "show full processlist;" to see if it is copying to a tmp
> table ??
>
> walt
Nothing listed other than the query itself. Should have said it's on MySQL
3.21.
Rob Keeling
--
I love deadlines. I love the
I have a 152MB MyISAM table that I am trying to execute a simple select
statement on.
I need to retrieve all rows with a given index, sorted by date.
This is taking a very long period of time to execute.
What can I do to speed up the query?
The SQL is:
SELECT * FROM table WHERE (person_id LIK
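The SQL is cut off, but for this shape of query (filter on person_id, sort by date) the usual advice is a composite index. person_id is from the post; the table name, date column and index name below are assumptions:

ALTER TABLE mytable ADD INDEX idx_person_date (person_id, date_col);
-- With an equality match on person_id the rows come back already ordered by
-- date_col, avoiding a filesort. With a LIKE, only a prefix pattern such as
-- 'abc%' can use the index; a leading wildcard ('%abc') cannot.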
On Friday 22 October 2004 11:10, Mechain Marc wrote:
> If you don't want to have those error messages:
>
> Increase the value of "Open File" using ulimit shell command.
>
> Do a ulimit -a to see the current value
> Do a ulimit -n newvalue to increase it
>
> You can also modify the /etc/limits file
hard Bennett [mailto:[EMAIL PROTECTED]
Sent: Friday, 22 October 2004 00:48
To: [EMAIL PROTECTED]
Subject: Re: mysql optimizing large table
Hi,
On Thursday 21 October 2004 22:00, Dathan Vance Pattishall wrote:
> My.cnf.huge is not good enough for your system specs. Calculate the Key
> efficiency from the show status command. I bet the key efficiency is less
> than 90% or so.
By my count it is 99.88, the status output is:
Key_blocks_u
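The status output is cut off here; for archive readers, this is how that key-efficiency figure is normally computed:

SHOW STATUS LIKE 'Key_read%';
-- key efficiency = (1 - Key_reads / Key_read_requests) * 100
-- 99.88% means nearly every index block request was served from the key buffer.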
Hi,
I have a table containing logfiles in mysql v4.0, myISAM.
The table has about 8.5 million records.
I'm using the my_huge.cnf file on mandrake10 Linux with 1 gig ram and 250gig
HD space.
Some Info:
Space usage :
Type Usage
Data 3,063 MB
Index 660,855 KB
Total 3,708 MB
R
re rarely used can be moved into near-line storage (a Network
> >share or a SAN device) so that you save the faster local disk for the
> >other 95% of your queries.
> >
> >Shawn Green
> >Database Administrator
> >Unimin Corporation - Spruce Pine
> >
"Does splitting a large table (20 Million rows) growing at 5 million or more
a month into smaller tables improve performance given that the table can be
split in a logical way such that 95% of queries don't need to look at data
spanning across the split tables?"
Table Desc
Hanno Fietz <[EMAIL PROTECTED]> wrote:
> Jul 2 03:10:28 t56 kernel: hda: dma_intr: status=0x51 { DriveReadySeekComplete
> Error }
> Jul 2 03:10:28 t56 kernel: hda: dma_intr: error=0x40 { UncorrectableError },
> LBAsect=429367, sector=316864
> Jul 2 03:10:28 t56 kernel: end_request: I/O error,
Try this:
dd if=/dev/hda of=/dev/null
This will exercise the entire drive.
You should see lots of errors if your drive is failing.
Another possibility is a bad cable. Cables don't usually go bad if they are
not disturbed. Drives do.
A failing IDE controller is another unlikely possibility.
I wo
Yes, I was suspecting that as well, but: Why do I get these messages
whenever I run myisamchk and (almost) never at any other time? Is
myisamchk using the hd more extensively than e. g. MySQL itself? Can the
rather large demand for temporary disk space account for that?
Thanks,
Hanno
gerald_cla
It is telling you that your hard drive is failing.
Replace it.
Hanno Fietz wrote:
Hello everybody,
I'm experiencing problems with a 4.0.15 MySQL-Server running on a SuSE
Linux 8.2 box with 512 MB RAM, some one-point-something GHz CPU and 40
GB IDE Harddisk.
We have a database with some administr
Hello everybody,
I'm experiencing problems with a 4.0.15 MySQL-Server running on a SuSE
Linux 8.2 box with 512 MB RAM, some one-point-something GHz CPU and 40
GB IDE Harddisk.
We have a database with some administrative tables and one large data
table (now ~ 30 M rows, ~ 1GB index file and ~ 80
Take a look at
http://lists.mysql.com/mysql/158737
for an interesting 'trick' to optimize index creation.
Basically it amounts to:
* Create the data without indexes
* Move (rename) the table.MYD file to a backup
* Create an empty MYD file using TRUNCATE TABLE
* Add indexes
* Move the backed up MYD
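A related built-in shortcut for MyISAM bulk loads, not from the linked post but commonly used to the same effect; the table name and file path are placeholders:

ALTER TABLE big_table DISABLE KEYS;                      -- defer non-unique index maintenance
LOAD DATA INFILE '/tmp/dump.txt' INTO TABLE big_table;   -- bulk load without index overhead
ALTER TABLE big_table ENABLE KEYS;                       -- rebuild the deferred indexes by sorting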
http://www.geocrawler.com/archives/3/8/2002/11/50/10245455/
Kyle
- Original Message -
From: "Jeffrey Horner" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, March 29, 2004 12:05 PM
Subject: Creating index on very large table
> Hi folks,
>
>
8 hours to
reindex
-pete
-Original Message-
From: Richard Davey [mailto:[EMAIL PROTECTED]
Sent: Monday, March 29, 2004 10:12 AM
To: [EMAIL PROTECTED]
Subject: Re: Creating index on very large table
Hello Jeffrey,
Monday, March 29, 2004, 7:05:27 PM, you wrote:
JH> I've got a problem creating an index on a MYISAM table with 302,000,000 lines,
JH> roughly 58 GB on disk. Attached you will find the table definition, output
JH> of "mysqladmin variables", and "mysql -e 'show status'".
JH> and it'
Hi folks,
I've got a problem creating an index on a MYISAM table with 302,000,000 lines,
roughly 58 GB on disk. Attached you will find the table definition, output
of "mysqladmin variables", and "mysql -e 'show status'".
After creating the above-mentioned table, I ran:
ALTER TABLE test_table ENA
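A note for archive readers, since the post is cut off here: on MyISAM the difference between an index build that takes hours and one that takes days is usually whether it can run as "Repair by sorting" rather than "Repair with keycache", which depends on the sort settings mentioned elsewhere in these threads. The value below is only illustrative:

SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;   -- illustrative value
-- myisam_max_sort_file_size must also be large enough for the temporary sort
-- files (a server-level setting in versions of that era). While the ALTER or
-- index build runs, SHOW PROCESSLIST should report "Repair by sorting",
-- not "Repair with keycache".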
> Subject: large table performance for WHERE queries
>
>
> I'm using MySQL for an information retrieval application where word
> occurrences are indexed. It seems that performance is not as good as
> I would expect (it seems nearly linear with the number of r
e details follow. The basic scenario is:
> - there are over 498M rows. This seems to qualify it as a
> rather large table
> - the table has 6 fixed-length fields
> - I have created indexes on each field
> - I set MAX_ROW_SIZE to 550,000,000
> - I've tried this with b
On Thu, Jan 15, 2004 at 02:52:30PM -0500, Brent Baisley wrote:
> It sounds like you are trying to do full text searching, but you
> implemented it "manually". Was MySQL's full text indexing not
> sufficient for your needs or am I totally missing what you are trying
> to do?
You're absolutely ri
It sounds like you are trying to do full text searching, but you
implemented it "manually". Was MySQL's full text indexing not
sufficient for your needs or am I totally missing what you are trying
to do?
On Jan 15, 2004, at 1:53 PM, Gregory Newby wrote:
I'm using MySQL for an information retri
The basic scenario is:
- there are over 498M rows. This seems to qualify it as a
rather large table
- the table has 6 fixed-length fields
- I have created indexes on each field
- I set MAX_ROW_SIZE to 550,000,000
- I've tried this with both MyISAM and INNODB tables, with very
similar perfo
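Since the reply above points to MySQL's built-in full-text indexing as the alternative to a hand-built word-occurrence table, a minimal sketch of that (MyISAM-era syntax; the table and column names are assumptions):

CREATE TABLE documents (
  doc_id INT UNSIGNED NOT NULL PRIMARY KEY,
  body   TEXT NOT NULL,
  FULLTEXT KEY ft_body (body)
) ENGINE=MyISAM;

SELECT doc_id, MATCH(body) AGAINST ('search terms') AS score
FROM documents
WHERE MATCH(body) AGAINST ('search terms')
ORDER BY score DESC
LIMIT 20;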