Select data from large tables

2011-11-15 Thread Adarsh Sharma
Dear all, I have a doubt regarding fetching data from large tables. I need to fetch selected columns from a 90 GB table with a 5 GB index on it. CREATE TABLE `content_table` ( `c_id` bigint(20) NOT NULL DEFAULT '0', `link_level` tinyint(4) DEFAULT NULL, `u_id` bigint(20) NOT NULL, `heading` varchar
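
Where only a few columns are needed from a table this size, a covering index over exactly those columns lets the server answer from the index without touching the 90 GB data file. A minimal sketch against the `content_table` shown above; the index name and range predicate are illustrative:

```sql
-- Covering index over the columns actually selected: the query below can
-- be answered from the index alone, without reading the 90 GB data file.
ALTER TABLE content_table ADD INDEX idx_uid_heading (u_id, heading);

SELECT u_id, heading
FROM content_table
WHERE u_id BETWEEN 1000 AND 2000;   -- hypothetical range predicate
```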

Re: Select data from large tables

2011-11-15 Thread Adarsh Sharma
a doubt regarding fetching data from large tables. I need to fetch selected columns from a 90Gb Table 5Gb index on it. CREATE TABLE `content_table` ( `c_id` bigint(20) NOT NULL DEFAULT '0', `link_level` tinyint(4) DEFAULT NULL, `u_id` bigint(20) NOT NULL, `heading` varchar(150) DEFAULT NULL

Re: Select data from large tables

2011-11-15 Thread Johan De Meersman
More than 20163845 rows are there, and my application continuously inserts data into the table; daily I think there is an increase of 2.5 GB in that table. Thanks

Re: splitting large tables vertically

2009-05-10 Thread Simon J Mudd
kimky...@fhda.edu (Kyong Kim) writes: I was wondering about a scale out problem. Lets say you have a large table with 3 cols and 500+ million rows. Would there be much benefit in splitting the columns into different tables based on INT type primary keys across the tables? To answer your

Re: splitting large tables vertically

2009-05-10 Thread Kyong Kim
Simon, Thanks for the feedback. I don't have all the details of the schema and workload. Just an interesting idea that was presented to me. I think the idea is to split a lengthy secondary key lookup into 2 primary key lookups and reduce the cost of clustering secondary key with primary key data
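
The idea described here can be sketched as two tables sharing one primary key: the narrow table carries the secondary key, so a secondary lookup becomes an index hit on the narrow table followed by a primary-key fetch from the wide one. All names below are hypothetical:

```sql
-- Narrow table: primary key plus the secondary key only.
CREATE TABLE item_keys (
  id      INT UNSIGNED NOT NULL PRIMARY KEY,
  sec_key VARCHAR(64)  NOT NULL,
  INDEX (sec_key)
);

-- Wide table: same primary key, bulky payload columns.
CREATE TABLE item_payload (
  id      INT UNSIGNED NOT NULL PRIMARY KEY,
  payload TEXT
);

-- A secondary-key lookup resolved as two cheap primary-key-style steps:
SELECT p.payload
FROM item_keys k
JOIN item_payload p ON p.id = k.id
WHERE k.sec_key = 'abc';
```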

Re: splitting large tables vertically

2009-05-10 Thread Simon J Mudd
kimky...@fhda.edu (Kyong Kim) writes: I don't have all the details of the schema and workload. Just an interesting idea that was presented to me. I think the idea is to split a lengthy secondary key lookup into 2 primary key lookups and reduce the cost of clustering secondary key with primary

Re: splitting large tables vertically

2009-05-10 Thread Kyong Kim
That's why you really need to be more precise in the data structures you are planning on using. This can change the results significantly. So no, I don't have any specific answers to your questions as you don't provide any specific information in what you ask. Yeah. Let me see if I can

splitting large tables vertically

2009-05-09 Thread Kyong Kim
I was wondering about a scale out problem. Lets say you have a large table with 3 cols and 500+ million rows. Would there be much benefit in splitting the columns into different tables based on INT type primary keys across the tables? The split tables will be hosted on a same physical instance

Re: splitting large tables vertically

2009-05-09 Thread mos
Do the 3 tables have different column structures? Or do they all have the same table structure? For example, is Table1 storing only data for year 1990 and table 2 storing data for 1991 etc? If so you could use a merge table. (Or do you need transactions, in which case you will need to use
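
The merge-table suggestion, sketched for the year-per-table case (MyISAM only; table and column names are illustrative, and a MERGE table cannot enforce uniqueness across its members):

```sql
-- One MyISAM table per year, identical structure.
CREATE TABLE sales_1990 (
  id     INT NOT NULL,
  amount DECIMAL(10,2),
  KEY (id)
) ENGINE=MyISAM;
CREATE TABLE sales_1991 LIKE sales_1990;

-- MERGE table presenting the yearly tables as one; new rows go to the
-- last member because of INSERT_METHOD=LAST.
CREATE TABLE sales_all (
  id     INT NOT NULL,
  amount DECIMAL(10,2),
  KEY (id)
) ENGINE=MERGE UNION=(sales_1990, sales_1991) INSERT_METHOD=LAST;
```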

Re: MyISAM large tables and indexes managing problems

2009-03-01 Thread Baron Schwartz
Claudio, http://www.mysqlperformanceblog.com/2007/10/29/hacking-to-make-alter-table-online-for-certain-changes/ Your mileage may vary, use at your own risk, etc. Basically: convince MySQL that the indexes have already been built but need to be repaired, then run REPAIR TABLE. As long as the
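
The linked hack rebuilds indexes after the data is already in place. The supported relative of that idea, for MyISAM bulk loads, is to suspend non-unique index maintenance during the load and rebuild at the end (table name illustrative):

```sql
ALTER TABLE big_table DISABLE KEYS;   -- stop maintaining non-unique indexes

-- ... bulk INSERT / LOAD DATA INFILE here ...

ALTER TABLE big_table ENABLE KEYS;    -- rebuild non-unique indexes in one pass
```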

Re: MyISAM large tables and indexes managing problems

2009-03-01 Thread Claudio Nanni
Hi Baron, I need to try some trick like that, a sort of offline index building. Luckily I have a slave on that is basically a backup server. Tomorrow I am going to play more with the dude. Do you think that there would be any improvement in converting the table to InnoDB forcing to use multiple

Re: MyISAM large tables and indexes managing problems

2009-03-01 Thread Brent Baisley
Be careful with using InnoDB with large tables. Performance drops quickly and quite a bit once the size exceeds your RAM capabilities. On Mar 1, 2009, at 3:41 PM, Claudio Nanni wrote: Hi Baron, I need to try some trick like that, a sort of offline index building. Luckily I have a slave

Re: MyISAM large tables and indexes managing problems

2009-02-28 Thread Claudio Nanni
Subject: MyISAM large tables and indexes managing problems Hi, I have one 15GB table with 250 million records and just the primary key, it is a very simple table but when a report is run (query) it just takes hours, and sometimes the application hangs. I was trying to play a little with indexes

Re: MyISAM large tables and indexes managing problems

2009-02-28 Thread Claudio Nanni
Yes, I killed the query several times but no way: the server kept hogging disk space and not even shutdown worked! Thanks! Claudio 2009/2/27 Brent Baisley brentt...@gmail.com MySQL can handle large tables no problem, it's large queries that it has issues with. You couldn't just kill

MyISAM large tables and indexes managing problems

2009-02-27 Thread Claudio Nanni
Hi, I have one 15GB table with 250 million records and just the primary key, it is a very simple table but when a report is run (query) it just takes hours, and sometimes the application hangs. I was trying to play a little with indexes and tuning (there is not great indexes to be done though) but

Re: MyISAM large tables and indexes managing problems

2009-02-27 Thread Claudio Nanni
Great Brent, helps a lot! it is very good to know your experience. I will speak to developers and try to see if there is the opportunity to apply the 'Divide et Impera' principle! I am sorry to say MySQL it is a little out of control when dealing with huge tables, it is the first time I had to

RE: MyISAM large tables and indexes managing problems

2009-02-27 Thread Rolando Edwards
-Original Message- From: Claudio Nanni [mailto:claudio.na...@gmail.com] Sent: Friday, February 27, 2009 4:43 PM To: mysql@lists.mysql.com Subject: MyISAM large tables and indexes managing problems Hi, I have one 15GB table with 250 million records and just the primary key, it is a very

Dealing With Very Large Tables

2008-03-03 Thread rjcarr
. -- View this message in context: http://www.nabble.com/Dealing-With-Very-Large-Tables-tp15812712p15812712.html Sent from the MySQL - General mailing list archive at Nabble.com. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com

Mysql 4.0 Adding fields to large tables

2007-12-31 Thread James Sherwood
Hello all, I am trying to add a field to a very large table. The problem is that mysql locks up when trying to do so. I even tried deleting the foreign keys on the table and it wont even let me do that, again locking up. It works for around 5 minutes or so then just either locks or the

Restoring Large Tables Created with --extended-insert in mysqldump

2007-06-06 Thread dpgirago
Using mysqldump and mysql (Distribution 5.0.22) on CentOS: [?] Is it theoretically possible to create a mysqldump file using the default --opt option (i.e., with extended-inserts...) that would create packet sizes so large that the restore of the backup would fail because max_allowed_packet
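
In principle yes: each extended INSERT statement must fit within max_allowed_packet on the restoring server, so a dump produced under a larger limit can fail to reload. Raising the limit on the receiving server avoids this; the 64 MB value is illustrative, and both mysqldump and the mysql client also accept a --max_allowed_packet option:

```sql
-- On the server restoring the dump (requires SUPER; also settable in my.cnf):
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- 64 MB
```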

Re: query on a very big table [MySQL partitioning of large tables]

2005-07-26 Thread Josh Chamas
Christos Andronis wrote: Hi all, we are trying to run the following query on a table that contains over 600 million rows: 'ALTER TABLE `typed_strengths` CHANGE `entity1_id` `entity1_id` int(10) UNSIGNED DEFAULT NULL FIRST' The query takes ages to run (has been running for over 10 hours

Re: slow count(1) behavior with large tables

2005-07-16 Thread pow
In this case, you require 2 indexes on table b. 1. WHERE b.basetype = 0 (requires index on b.basetype) 2. b.BoardID = m.BoardID (requires index on b.BoardID) However, you are only allowed one index per table in a join. Hence you need ONE composite index on table b with the fields b.basetype and

Re: slow count(1) behavior with large tables

2005-07-16 Thread Michael Stassen
pow wrote: In this case, u require 2 indexes on table b. 1. WHERE b.basetype = 0 (requires index on b.basetype) 2. b.BoardID = m.BoardID (requires index on b.BoardID) No, this requires an index on m.BoardID, which he already has and mysql is using. However, you are only allowed one index

Re: slow count(1) behavior with large tables

2005-07-16 Thread pow
Rereading his initial query, you are right: this is not a situation of not having the right composite index. Yup, you are counting many rows, and hence it will take a while. Michael Stassen wrote: pow wrote: In this case, you require 2 indexes on table b. 1. WHERE b.basetype = 0 (requires

slow count(1) behavior with large tables

2005-07-15 Thread Jon Drukman
i'm trying to run this query: SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE b.BaseType = 0 AND m.BoardID = b.BoardID; MSGS has 9.5 million rows, and is indexed on BoardID MBOARD has 69K rows and is indexed on BaseType EXPLAIN shows: mysql explain SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE

Re: slow count(1) behavior with large tables

2005-07-15 Thread Andrew Braithwaite
Hi, You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the BoardID field indexed on the MSGS table too? If not then that may be your problem. Cheers, Andrew On 15/7/05 23:31, Jon Drukman [EMAIL PROTECTED] wrote: i'm trying to run this query: SELECT COUNT(1) FROM MSGS m,

Re: slow count(1) behavior with large tables

2005-07-15 Thread Jon Drukman
Andrew Braithwaite wrote: Hi, You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the BoardID field indexed on the MSGS table too? If not then that may be your problem. MSGS.BoardID is indexed, and the EXPLAIN output I included in the original message shows that it is

Re: slow count(1) behavior with large tables

2005-07-15 Thread Michael Stassen
Andrew Braithwaite wrote: Hi, You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the BoardID field indexed on the MSGS table too? If not then that may be your problem. Cheers, Andrew He said, MSGS ... is indexed on BoardID. Did you look at the EXPLAIN output? The query

Re: slow count(1) behavior with large tables

2005-07-15 Thread Andrew Braithwaite
Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table too? Cheers, A On 16/7/05 00:01, Andrew Braithwaite [EMAIL PROTECTED] wrote: Hi, You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the BoardID field indexed on the MSGS table too? If not then that

Re: slow count(1) behavior with large tables

2005-07-15 Thread Jon Drukman
Andrew Braithwaite wrote: Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table too? yes, BoardID is the primary key. BaseType is also indexed. from the EXPLAIN output i can see that mysql is choosing to use BaseType as the index for MBOARD (as we know, mysql can only use

Re: slow count(1) behavior with large tables

2005-07-15 Thread Michael Stassen
Jon Drukman wrote: Andrew Braithwaite wrote: Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table too? yes, BoardID is the primary key. BaseType is also indexed. from the EXPLAIN output i can see that mysql is choosing to use BaseType as the index for MBOARD (as we
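
One concrete change discussed in this thread is a composite index so the MBOARD side of the join is resolved entirely from the index, removing per-row table lookups; counting 9.5 million joined rows will still take time regardless. The index name is illustrative:

```sql
-- Composite index covers both the BaseType filter and the BoardID join
-- value, so MBOARD rows never need to be fetched for this count.
ALTER TABLE MBOARD ADD INDEX idx_basetype_board (BaseType, BoardID);

SELECT COUNT(1)
FROM MSGS m
JOIN MBOARD b ON m.BoardID = b.BoardID
WHERE b.BaseType = 0;
```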

Myisamchk on really large tables.

2005-07-13 Thread RV Tec
replication, and everything works perfectly. Our database is now close to 20GB, divided in 160 tables. There are only 2 tables that are larger than 1GB, all others are below 300MB. These two large tables, they have about 30.000.000 rows and 11 keys of indexing (each). Every now and then, I

Re: Myisamchk on really large tables.

2005-07-13 Thread Gleb Paharenko
in 160 tables. There are only 2 tables that are larger than 1GB, all others are below 300MB. These two large tables, they have about 30.000.000 rows and 11 keys of indexing (each). Every now and then, I used to run myisamchk to fix and optimize this table (myisamchk -r, -S, -a). All

Re: Large tables

2005-04-11 Thread Gleb Paharenko
Hello. See also these links: http://dev.mysql.com/doc/mysql/en/table-size.html http://dev.mysql.com/tech-resources/crash-me.php and maybe this one :) http://www.mysql.com/news-and-events/success-stories/ Daniel Kiss [EMAIL PROTECTED] wrote: Hi All, I would like

Large tables

2005-04-09 Thread Daniel Kiss
Hi All, I would like to know how big is the biggest database that can be handled effectively by MySQL/InnoDB. Like physical size, number of tables, number of rows per table, average row lenght, number of indexes per table, etc. Practically, what if I have a master/detail table-pair, where the

Re: Large tables

2005-04-09 Thread olli
hi, if your table is indexed, i think it can theoretically hold 4.294.967.295 rows, because that's the maximum for an unsigned integer value in mysql and indexing doesn't work with bigint types as far as i know. but, i'm not really sure about that. Am 09.04.2005 um 11:42 schrieb Daniel Kiss:

Re: Large tables

2005-04-09 Thread Michael Stassen
On Apr 9, 2005, at 8:05 AM, olli wrote: hi, if your table is indexed, i think it can theoretically hold 4.294.967.295 rows, because that's the maximum for an unsigned integer value in mysql and indexing doesn't work with bigint types as far as i know. but, i'm not really sure about that. I

Re: Large tables

2005-04-09 Thread Daniel Kiss
Hi, Thanks, but I already checked the manual about these aspects, and I'm doing heavy tests for a while about the performance of MySQL (with InnoDB tables) with big databases. By the way, the indexing seems to be working on bigint fields, so probably the size of the int field is not a limit.

Re: Large tables

2005-04-09 Thread Kirchy
Daniel Kiss schrieb: However, I'm more interested in practical experience with huge databases. How effective is MySQL (with InnoDB) working with tables containing millions or rather billions of rows? How about the response time of queries, which return a few dozens of rows from these big tables

SELECT WHERE LIKE on large tables

2004-08-18 Thread KSTrainee
Hi, i've a rather large table (~ 1.800.000 rows) with five CHAR columns - let's say col1, col2, , col5. Col1has the primary key. The columns col2,col3,col4,col5 hold strings of variable length. I need to find duplicate entries that have the same value for col2,col3,col4 col5 but (and
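
A standard way to surface such duplicates is to group on the four payload columns and keep the groups with more than one row. The sketch below uses `t` as a placeholder table name; GROUP_CONCAT requires MySQL 4.1 or later:

```sql
-- Any group with more than one row is a duplicate set; GROUP_CONCAT
-- lists the primary keys (col1) of the rows in each set.
SELECT col2, col3, col4, col5,
       COUNT(*)           AS copies,
       GROUP_CONCAT(col1) AS ids
FROM t
GROUP BY col2, col3, col4, col5
HAVING COUNT(*) > 1;
```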

RE: Full Text Index on Large Tables - Not Answered

2004-06-21 Thread SGreen

RE: Full Text Index on Large Tables - Not Answered

2004-06-20 Thread Robert A. Rosenberg
At 19:02 -0700 on 06/18/2004, Paul Chu wrote about Re: Full Text Index on Large Tables - Not Answered: Appreciate any help at all Thanks, Paul -Original Message- From: Paul Chu [mailto:[EMAIL PROTECTED] Sent: Friday, June 18, 2004 10:16 AM To: [EMAIL PROTECTED] Subject: Full Text Index

Full Text Index on Large Tables

2004-06-18 Thread Paul Chu
Hi, If I have a table with 100 - 200 million rows and I want to search For records with specific characteristics. Ex. Skills varchar(300) Skill id's 10 15 Accounting finance etc. Is it advisable to created a field with skill ids and then use the
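
For keyword search over a text column like this, MySQL's built-in option is a full-text index (MyISAM only in versions of this era). Table and column names below are hypothetical:

```sql
-- Build a full-text index over the skills text.
ALTER TABLE candidates ADD FULLTEXT idx_skills (Skills);

-- Natural-language search; rows come back ranked by relevance.
SELECT c_id, Skills
FROM candidates
WHERE MATCH(Skills) AGAINST ('accounting finance');
```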

RE: Full Text Index on Large Tables - Not Answered

2004-06-18 Thread Paul Chu
Appreciate any help at all Thanks, Paul -Original Message- From: Paul Chu [mailto:[EMAIL PROTECTED] Sent: Friday, June 18, 2004 10:16 AM To: [EMAIL PROTECTED] Subject: Full Text Index on Large Tables Hi, If I have a table with 100 - 200 million rows and I want to search

Index Building on Large Tables Stalling

2004-06-16 Thread Tim Brody
binary MySQL 4.0.20-standard on Redhat 7.2/Linux Dual-proc, 4Gb ram, raid I'm trying to change an index on a 12Gb table (270 million rows). Within an hour or so the entire table is copied, and the index reaches 3.7Gb of data. Then the database appears to do nothing more, except for touching the

Re: Index Building on Large Tables Stalling

2004-06-16 Thread gerald_clark
Are you running out of temp space? Tim Brody wrote: binary MySQL 4.0.20-standard on Redhat 7.2/Linux Dual-proc, 4Gb ram, raid I'm trying to change an index on a 12Gb table (270 million rows). Within an hour or so the entire table is copied, and the index reaches 3.7Gb of data. Then the database

Re: Index Building on Large Tables Stalling

2004-06-16 Thread Tim Brody
Nope. As far as I'm aware the only disk space being used is in the database's directory, and that file system has 200Gb spare. (/tmp has 19Gb free anyway) Regards, Tim. gerald_clark wrote: Are you running out of temp space? Tim Brody wrote: binary MySQL 4.0.20-standard on Redhat 7.2/Linux

Managing Very Large Tables

2004-03-30 Thread Chad Attermann
Hello, I am trying to determine the best way to manage very large (MyISAM) tables, ensuring that they can be queried in reasonable amounts of time. One table in particular has over 18 million records (8GB data) and is growing by more than 150K records per day, and that rate is increasing.

Re: Managing Very Large Tables

2004-03-30 Thread Victor Medina
hi! Chad Attermann wrote: Hello, I am trying to determine the best way to manage very large (MyISAM) tables, ensuring that they can be queried in reasonable amounts of time. --8 Why insisting in using myIsam, and not use some table format that can assure you some degree of crash recovery

RE: Managing Very Large Tables

2004-03-30 Thread Henrik Schröder
key is created and used. /Henrik -Original Message- From: Chad Attermann [mailto:[EMAIL PROTECTED] Sent: den 30 mars 2004 19:42 To: [EMAIL PROTECTED] Subject: Managing Very Large Tables Hello, I am trying to determine the best way to manage very large (MyISAM) tables, ensuring

RE: Managing Very Large Tables

2004-03-30 Thread Dathan Vance Pattishall
Tips on managing very large tables for MyISAM: 1) Ensure that the table type is not DYNAMIC but Fixed. = Issue the SHOW TABLE STATUS command. = Look at Row Format. = If Row Format != Dynamic then you're OK; else get rid of varchar-type columns. = Reason: Your MyISAM table can
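
Checking and converting a table along these lines might look like the following; `big_table` and the column are placeholders:

```sql
-- Inspect the Row_format column of the output (Fixed vs Dynamic).
SHOW TABLE STATUS LIKE 'big_table';

-- CHAR instead of VARCHAR gives fixed-length rows, which MyISAM can
-- locate by simple offset arithmetic instead of following row links.
ALTER TABLE big_table MODIFY descr CHAR(100) NOT NULL DEFAULT '';
```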

RE: Managing Very Large Tables

2004-03-30 Thread Keith C. Ivey
On 30 Mar 2004 at 10:30, Dathan Vance Pattishall wrote: 1) Ensure that the table type is not DYNAMIC but Fixed. = Issue the show table status command. = Look at Row Format = if Row Format != Dynamic the your ok else get rid of varchar type columns = Reason: Your myISAM table

Re: Managing Very Large Tables

2004-03-30 Thread Jeremy Zawodny
On Tue, Mar 30, 2004 at 10:30:03AM -0800, Dathan Vance Pattishall wrote: Tips on managing very large tables for myISAM: 1) Ensure that the table type is not DYNAMIC but Fixed. = Issue the show table status command. = Look at Row Format = if Row Format != Dynamic the your ok else get

copy data between very large tables

2003-10-16 Thread virtual user for ouzounis cgi
Hi, We copy data from one table to another using: insert into TBL1 select * from TBL 2; The current database hangs and the process never finish when copying huge tables (around 25million rows). Looking at the processlist it states that the process stays in closing table or wait on cond

re: copy data between very large tables

2003-10-16 Thread mhlists
gt;We copy data from one table to another using: gt;insert into TBL1 select * from TBL 2; gt;The current database hangs and the process never finish when copying gt;huge gt;tables (around 25million rows). Looking at the processlist it states that gt;the process stays in \closing table\ or

Adding indexes on large tables

2003-10-07 Thread Brendan J Sherar
Greetings to all, and thanks for the excellent resource! I have a question regarding indexing large tables (150M+ rows, 2.6G). The tables in question have a format like this: word_id mediumint unsigned doc_id mediumint unsigned Our indexes are as follows: PRIMARY KEY (word_id, doc_id) INDEX

RE: Adding indexes on large tables

2003-10-07 Thread Dan Greene
: Adding indexes on large tables Greetings to all, and thanks for the excellent resource! I have a question regarding indexing large tables (150M+ rows, 2.6G). The tables in question have a format like this: word_id mediumint unsigned doc_id mediumint unsigned Our indexes

RE: Adding indexes on large tables

2003-10-07 Thread Brad Teale
to another drive like Dan, said. Brad -Original Message- From: Brendan J Sherar [mailto:[EMAIL PROTECTED] Sent: Tuesday, October 07, 2003 6:27 AM To: [EMAIL PROTECTED] Subject: Adding indexes on large tables Greetings to all, and thanks for the excellent resource! I have a question regarding

GROUP BY performance on large tables

2003-09-26 Thread jantorres
Hi: Issuing a simple group by like this: select C_SF, count(*), sum(je) as sum_je from kp_data group by C_SF; against a large (1.4G) table holding a 5 mln records with 60 columns takes about 330 secs on my Win2000 development box, a 2.0GHz P4 w/ 1G RAM and an IDE

RE: GROUP BY performance on large tables

2003-09-26 Thread Dan Greene
total_count = total_count - 1, je_total = je_total - :old.je_total; hope this helps, Dan Greene -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Friday, September 26, 2003 7:34 AM To: [EMAIL PROTECTED] Subject: GROUP BY performance on large tables Hi: Issuing

How to monitor indexing progress on large tables

2003-09-24 Thread Jeff Neuenschwander
would increase exponentially based on the number of records being indexed, or if it is a linear relationship. Also, does anyone know of a method to get the faster indexing method to work on large tables? I tried bumping up the myisam_max_sort_file_size in my.cnf, but it tops out at 4G

Re: Optimizing Mysql for large tables

2003-08-22 Thread Kayra Otaner
Joseph, How big your table files are? Are they MyISAM or Innodb ? Do you have indexes? How much memory do you have? Is your MySQL running on a dedicated server or do you run anything else on your db server? This questions needs to be answered before suggesting anything logical. But general

Optimizing Mysql for large tables

2003-08-21 Thread Joseph Norris
Group, I have been working with Mysql for about 5 years - mainly in LAMP shops. The tables have been between 20-100 thousand records size. Now I have a project where the tables are in the millions of records. This is very new to me and I am noticing that my queries are really sloww! What

speedup 'alter table' for large tables

2003-02-07 Thread Johannes Ullrich
I just had to alter a large (25 Gig, 100 million rows) table to increase the max_rows parameter. The 'alter table' query is now running 60+ hours; the last 30+ hours it spent in 'repair with keycache' mode. Is there any way to speed up this operation? I realize it is probably too late now. But next

Re: proposal: new back end tailored to data mining very large tables

2003-02-03 Thread Steven Roussey
First of all, I'd try optimizing your app before writing a whole new back-end. As such, I'd keep to the normal mysql list. For example, even if the indexes are big, try to index all the columns that might be searched. Heck, start by indexing all of them. If the data is read-only, try myisampack.

proposal: new back end tailored to data mining very large tables

2003-02-02 Thread Heitzso
We (Centers for Disease Control and Prevention) want to mount relatively large read only tables that rarely change on modest ($10K ??) hardware and get 10 second response time. For instance, 10 years of detailed natality records for the United States in which each record has some 200 fields and,

Recreating indexes on large tables

2003-01-20 Thread Salvesen, Jens-Petter
Hello, everyone I have the following situation: After enjoying problems related to deleting a large portion of a table, subsequent slow selects and such, I decided to do an alternate route when removing data from a table: The table had transactions for one year, and the table really only needs

Re: Recreating indexes on large tables

2003-01-20 Thread Joseph Bueno
Hi, Instead of using separate CREATE INDEX statements, you can build all your index at once with ALTER TABLE: ALTER TABLE my_table ADD INDEX ..., ADD INDEX ... , ADD INDEX ... ; Hope this helps, -- Joseph Bueno Salvesen, Jens-Petter wrote: Hello, everyone I have the following
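
A concrete instance of the single-ALTER approach, with placeholder table and index names; the table is copied and re-indexed once rather than once per CREATE INDEX statement:

```sql
-- All three indexes are built in one table rebuild.
ALTER TABLE transactions
  ADD INDEX idx_date    (tx_date),
  ADD INDEX idx_account (account_id),
  ADD INDEX idx_type    (tx_type);
```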

Creating indexes on large tables -- tmp directory is full

2002-09-05 Thread heath boutwell
Is there a way to change the directory used when MySQL copies the table for creating indexes on large tables? My tmp directory is partitioned for 509 megs, and adding an index via ALTER TABLE or CREATE TABLE yields this: ERROR 3: Error writing file '/tmp/STFgNG04' (Errcode: 28) the .MYI file
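
The sort files for index builds go to the server's tmpdir, so pointing it at a roomier partition avoids Errcode 28 (disk full). A my.cnf fragment with an illustrative path; myisamchk takes an analogous --tmpdir option when run directly:

```ini
[mysqld]
# Use a partition with enough free space for index sort files.
tmpdir = /bigdisk/mysqltmp
```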

Re: Bug related to large tables and it's indexes on Win2k

2002-06-05 Thread Jared Richardson
- Original Message - From: Keith C. Ivey [EMAIL PROTECTED] To: [EMAIL PROTECTED] Cc: Jared Richardson [EMAIL PROTECTED] Sent: Tuesday, June 04, 2002 5:24 PM Subject: Re: Bug related to large tables and it's indexes on Win2k | On 4 Jun 2002, at 15:43, Jared Richardson wrote

Re: Bug related to large tables and it's indexes on Win2k

2002-06-05 Thread Keith C. Ivey
On 5 Jun 2002, at 7:50, Jared Richardson wrote: This table is part of a product that contains publicly available (and always expanding) publicly avilable biological data in addition to large companies internal data. A one terrabyte cap very well could come back to haunt us one day! (sadly

Bug related to large tables and it's indexes on Win2k

2002-06-04 Thread Jared Richardson
Hi all, When large tables are being addressed, we seem to have encountered a bug related to having large indexes on the table. We have several tables in our system that have reached 4 gigs in size. We altered the table definition to allow it to get larger... this is our current table creation

Re: Bug related to large tables and it's indexes on Win2k

2002-06-04 Thread Jared Richardson
]; [EMAIL PROTECTED] Cc: Jared Richardson [EMAIL PROTECTED] Sent: Tuesday, June 04, 2002 9:40 AM Subject: Re: Bug related to large tables and it's indexes on Win2k At 08:17 4/6/2002 -0400, Jared Richardson wrote: Hi, When large tables are being addressed, we seem to have encountered a bug related

Re: Bug related to large tables and it's indexes on Win2k

2002-06-04 Thread Jared Richardson
The table type is the default, MYISAM - Original Message - From: Schneck Walter [EMAIL PROTECTED] To: 'Jared Richardson ' [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Tuesday, June 04, 2002 10:11 AM Subject: AW: Bug related to large tables and it's indexes on Win2k

Re: Bug related to large tables and it's indexes on Win2k

2002-06-04 Thread Jared Richardson
[EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Tuesday, June 04, 2002 10:20 AM Subject: AW: Bug related to large tables and it's indexes on Win2k | Well, | | im not an expert in MYSQL tabletypes, | but what if seen yet InnoDB is the most | preferred tabletyp for real appis. | if possible

Re: Bug related to large tables and it's indexes on Win2k

2002-06-04 Thread Ian Gilfillan
indexes are healthy 2) Try using a MERGE table regards, ian gilfillan - Original Message - From: Jared Richardson [EMAIL PROTECTED] To: Schneck Walter [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Tuesday, June 04, 2002 4:24 PM Subject: Re: Bug related to large tables

Re: Bug related to large tables and it's indexes on Win2k

2002-06-04 Thread Roger Baklund
* Jared Richardson [...] CREATE TABLE IcAlias( IcAliasID BIGINT NOT NULL PRIMARY KEY, mID VARCHAR(255) NOT NULL, IcEntityID BIGINT NOT NULL, IcTypeID SMALLINT NOT NULL, IcDupSortID VARCHAR(255) NOT NULL, INDEX mIDIdx (mID), INDEX IcTypeIDIdx (IcTypeID), INDEX

Re: Bug related to large tables and it's indexes on Win2k

2002-06-04 Thread Jared Richardson
I replied below - Original Message - From: Roger Baklund [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Tuesday, June 04, 2002 3:24 PM Subject: Re: Bug related to large tables and it's indexes on Win2k | * Jared Richardson | [...] | CREATE TABLE IcAlias( | IcAliasID BIGINT NOT NULL

RE: Large tables on FreeBSD

2002-05-15 Thread Jay Blanchard
[snip] I watch the .MYD file grow to about 4.2 GB and stop with this error from mysqlimport. mysqlimport: Error: Can't open file: 'temp.MYD'. (errno: 144) mysqlimport: Error: The table 'temp' is full, when using table: temp I've tried starting safe_mysqld with the --big-tables option, and that

Re: Large tables on FreeBSD

2002-05-15 Thread Ken Menzel
] To: [EMAIL PROTECTED] Sent: Tuesday, May 14, 2002 6:44 PM Subject: Large tables on FreeBSD Hello, I need to create a table that is around 15 GB on a FreeBSD 4.5RELEASE system. I compiled mysql-3.23.49 without any extraneous flags such as (--disable-largefile) I use mysqlimport to import

Large tables on FreeBSD

2002-05-14 Thread Ovanes Manucharyan
Hello, I need to create a table that is around 15 GB on a FreeBSD 4.5RELEASE system. I compiled mysql-3.23.49 without any extraneous flags such as (--disable-largefile) I use mysqlimport to import the table from a flatfile which is about 9GB. I watch the .MYD file grow to about 4.2 GB and

Large Tables

2002-04-30 Thread Nigel Edwards
I need to use some large tables using MySQL under Linux and would welcome suggestions from anyone with any experience. Currently one months data creates a table of 3M records about 750Mb in size. I should like to consolidate 12 Months data (which is expected to grow to say 50M records per month

Re: Large Tables

2002-04-30 Thread Jörgen Winqvist
, fiberchannel and SAN w 1 TB raid 5. /Jörgen Nigel Edwards wrote: I need to use some large tables using MySQL under Linux and would welcome suggestions from anyone with any experience. Currently one months data creates a table of 3M records about 750Mb in size. I should like to consolidate 12 Months

RE: Large Tables

2002-04-30 Thread Jon Frisby
such a limitation I'd be sure that you use a large-file capable system regardless of the table type you choose. -JF -Original Message- From: Nigel Edwards [mailto:[EMAIL PROTECTED]] Sent: Tuesday, April 30, 2002 7:19 AM To: [EMAIL PROTECTED] Subject: Large Tables I need to use some large

RE: MySQL + Access + MyODBC + LARGE Tables

2002-02-22 Thread Bill Adams
All, there were many emails posted about this on the MyODBC list which, of course, can be viewed via the archive on the mysql.com site. For the most part I will neither quote nor repeat the information from those emails here. The conclusion is that MySQL + Merge Tables is perfectly capable of

RE: MySQL + Access + MyODBC + LARGE Tables

2002-02-22 Thread Venu
Hi, -Original Message- From: Bill Adams [mailto:[EMAIL PROTECTED]] Sent: Friday, February 22, 2002 2:04 PM To: MyODBC Mailing List; MySQL List Subject: RE: MySQL + Access + MyODBC + LARGE Tables All, there were many emails posted about this on the MyODBC list which, of course

RE: MySQL + Access + MyODBC + LARGE Tables

2002-02-15 Thread Bill Adams
Spoiler: Venu's Suggestion about Dynamic Cursor is the answer On Thu, 2002-02-14 at 20:34, Venu wrote: MyODBC, as compiled today, uses mysql_store_result to get records. This is fine for reasonably sized tables. However, if the table has millions of records, writing the results to a

RV: MySQL + Access + MyODBC + LARGE Tables

2002-02-15 Thread Eugenio Ricart
-Mensaje original- De: Eugenio Ricart [mailto:[EMAIL PROTECTED]] Enviado el: viernes, 15 de febrero de 2002 7:00 Para: MyODBC Mailing List Asunto: RE: MySQL + Access + MyODBC + LARGE Tables Hello, I work with VB 6.0 ADO 2.5 Access , I am trying work with MySql and the Last MyODBC

MySQL + Access + MyODBC + LARGE Tables

2002-02-14 Thread Bill Adams
Monty, Venu, I hope you read this... :) I really, really want to use MySQL as the database backend for my datawarehouse. Mind you I have played around with merge tables quite a bit and know that MySQL is more than up to the task. There are numerous (not necessarily cost related) reasons as to

RE: MySQL + Access + MyODBC + LARGE Tables

2002-02-14 Thread Keith A. Calaman
of this issue but it didn't hurt to say it (*_*). I hope this helped at least a little. -Original Message- From: Bill Adams [mailto:[EMAIL PROTECTED]] Sent: Thursday, February 14, 2002 6:05 PM To: MySQL List; MyODBC Mailing List Subject: MySQL + Access + MyODBC + LARGE Tables Monty, Venu, I

RE: MySQL + Access + MyODBC + LARGE Tables

2002-02-14 Thread Venu
Hi, Monty, Venu, I hope you read this... :) I really, really want to use MySQL as the database backend for my datawarehouse. Mind you I have played around with merge tables quite a bit and know that MySQL is more than up to the task. There are numerous (not necessarily cost

Re: MySQL + Access + MyODBC + LARGE Tables

2002-02-14 Thread BD
Bill, Some databases can use a live result set when retrieving a lot of records, and I really, really wish MySQL could do the same. A live result set does not create a temporary table or use memory to retrieve all the records. It will grab 50 or so records at a time, and when scrolling

Combining and sorting large tables

2002-01-30 Thread Brendan Pirie
Hi, I have two tables I wish to combine into a single table. Both tables use the same format and include a unique key (ID) auto-increment field. Each row also contains a date field. Whoever managed this database in the past at some point set up a new server, and initially had new data sent to
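The merge described here (two identically-structured tables whose auto-increment IDs collide, each row carrying a date) can be done by copying both into a fresh table and letting the new table reassign IDs, inserting in date order so the new keys follow the timeline. A hedged sketch using sqlite3 as a serverless stand-in; the table and column names are invented, not from the original poster's schema.

```python
# Sketch: merge two tables with overlapping auto-increment ids into
# a new table, reassigning ids in chronological order.
import sqlite3

conn = sqlite3.connect(":memory:")
for name in ("log_old", "log_new"):
    conn.execute(f"CREATE TABLE {name} "
                 "(id INTEGER PRIMARY KEY AUTOINCREMENT, "
                 " created DATE, msg TEXT)")

conn.executemany("INSERT INTO log_old (created, msg) VALUES (?, ?)",
                 [("2001-11-01", "a"), ("2001-12-01", "c")])
conn.executemany("INSERT INTO log_new (created, msg) VALUES (?, ?)",
                 [("2001-11-15", "b"), ("2001-12-15", "d")])

conn.execute("CREATE TABLE log_merged "
             "(id INTEGER PRIMARY KEY AUTOINCREMENT, "
             " created DATE, msg TEXT)")
# UNION ALL keeps every row (no dedup); ORDER BY makes the freshly
# assigned ids follow the date column across both source tables.
conn.execute("""
    INSERT INTO log_merged (created, msg)
    SELECT created, msg FROM log_old
    UNION ALL
    SELECT created, msg FROM log_new
    ORDER BY created
""")

merged = [r[2] for r in conn.execute(
    "SELECT id, created, msg FROM log_merged ORDER BY id")]
print(merged)
```

Omitting the id column from the INSERT is what sidesteps the key collisions; the same shape of INSERT ... SELECT ... ORDER BY works in MySQL.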

Re: optimize for SELECTs on multiple large tables

2002-01-25 Thread Florin Andrei
On Wed, 2001-12-05 at 16:33, Arjen G. Lentz wrote: - Original Message - From: Florin Andrei [EMAIL PROTECTED] SELECT event.cid, iphdr.ip_src, iphdr.ip_dst, tcphdr.tcp_dport FROM event, iphdr, tcphdr WHERE event.cid = iphdr.cid AND event.cid = tcphdr.cid AND tcphdr.tcp_flags =

Re: optimize for SELECTs on multiple large tables

2001-12-06 Thread Jeremy Zawodny
tables. Maybe tweak parameters like join_buffer_size? table_cache? Anyone have experience with these?... What's the best way to optimize MySQL for running SELECTs on multiple large tables? Yes, server settings are important, and the 'right' settings depend on your system (amount of RAM

optimize for SELECTs on multiple large tables

2001-12-05 Thread Florin Andrei
that I will be able to actually do a SELECT on those tables. Maybe tweak parameters like join_buffer_size? table_cache? Anyone have experience with these?... What's the best way to optimize MySQL for running SELECTs on multiple large tables? This is the only thing that keeps me from deploying

Re: optimize for SELECTs on multiple large tables

2001-12-05 Thread Robert Alexander
At 14:45 -0800 2001/12/05, Florin Andrei wrote: The problem is, MySQL-3.23.46 takes forever to return from SELECT (I let it run overnight; in the morning I still didn't get any results, so I killed the query). Hi Florin, It would help if you could also provide: the hardware and OS the

Re: optimize for SELECTs on multiple large tables

2001-12-05 Thread Florin Andrei
On Wed, 2001-12-05 at 15:01, Robert Alexander wrote: At 14:45 -0800 2001/12/05, Florin Andrei wrote: The problem is, MySQL-3.23.46 takes forever to return from SELECT (I let it run overnight; in the morning I still didn't get any results, so I killed the query). the hardware and OS SGI

Re: optimize for SELECTs on multiple large tables

2001-12-05 Thread Carl Troein
Florin Andrei writes: SELECT tbl1.col1, tbl2.col2 FROM tbl1, tbl2 WHERE \ tbl1.col3 = tbl2.col4 AND tbl1.col5 = '123'; The problem is, MySQL-3.23.46 takes forever to return from SELECT (I let it run overnight; in the morning I still didn't get any results, so I killed the query).
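The usual reason a join like the one quoted here stalls is a missing index on the join column: each row of tbl1 then forces a full scan of tbl2. A minimal sketch of checking this, using SQLite's EXPLAIN QUERY PLAN as a serverless stand-in for MySQL's EXPLAIN (the tbl1/tbl2 names follow the query in the thread; the plan wording varies between SQLite versions).

```python
# Sketch: inspect the join plan before and after indexing the join
# column. SQLite's EXPLAIN QUERY PLAN stands in for MySQL's EXPLAIN.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (col1 TEXT, col3 INTEGER, col5 TEXT)")
conn.execute("CREATE TABLE tbl2 (col2 TEXT, col4 INTEGER)")

query = ("SELECT tbl1.col1, tbl2.col2 FROM tbl1, tbl2 "
         "WHERE tbl1.col3 = tbl2.col4 AND tbl1.col5 = '123'")

def plan(conn, sql):
    """Return the plan's human-readable detail strings."""
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

before = plan(conn, query)   # tbl2 side: a scan, or an index SQLite
                             # builds automatically just for this query
conn.execute("CREATE INDEX idx_tbl2_col4 ON tbl2 (col4)")
after = plan(conn, query)    # tbl2 is now searched via the named index

print(before)
print(after)
```

In MySQL terms the fix is the same: `CREATE INDEX` on tbl2.col4 (and on tbl1.col5 for the constant filter), then re-check with EXPLAIN before reaching for join_buffer_size.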

Re: optimize for SELECTs on multiple large tables

2001-12-05 Thread Arjen G. Lentz
join_buffer_size? table_cache? Anyone have experience with these?... What's the best way to optimize MySQL for running SELECTs on multiple large tables? Yes, server settings are important, and the 'right' settings depend on your system (amount of RAM, etc) as well as on your database
