> Subject: Re: Select data from large tables
>
> The table has more than 20,163,845 rows and my application continuously
> inserts data into it.
> Daily, I think the table grows by about 2.5 GB.
>
> Thanks
>
--
Beer with grenadine
Is like mustard with d
doubt regarding fetching data from large tables.
I need to fetch selected columns from a 90Gb Table & 5Gb index on it.
CREATE TABLE `content_table` (
`c_id` bigint(20) NOT NULL DEFAULT '0',
`link_level` tinyint(4) DEFAULT NULL,
`u_id` bigint(20) NOT NULL,
`heading` varchar(150
Dear all,
I have a doubt regarding fetching data from large tables.
I need to fetch selected columns from a 90Gb Table & 5Gb index on it.
CREATE TABLE `content_table` (
`c_id` bigint(20) NOT NULL DEFAULT '0',
`link_level` tinyint(4) DEFAULT NULL,
`u_id` bigint(20) NOT N
> That's why you really need to be more precise in the data structures
> you are planning on using. This can change the results significantly.
>
> So no, I don't have any specific answers to your questions as you don't
> provide any specific information in what you ask.
Yeah. Let me see if I can f
kimky...@fhda.edu ("Kyong Kim") writes:
> I don't have all the details of the schema and workload. Just an
> interesting idea that was presented to me.
> I think the idea is to split a lengthy secondary key lookup into 2 primary
> key lookups and reduce the cost of clustering secondary key with pr
Simon,
Thanks for the feedback.
I don't have all the details of the schema and workload. Just an
interesting idea that was presented to me.
I think the idea is to split a lengthy secondary key lookup into 2 primary
key lookups and reduce the cost of clustering secondary key with primary
key data by
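For what it's worth, a minimal sketch of how that idea could look; the table and column names are made up, and whether it actually beats a plain secondary index depends entirely on the real schema and workload:

-- narrow lookup table keyed by the secondary value, pointing at the wide table's primary key
CREATE TABLE person (
  person_id BIGINT NOT NULL PRIMARY KEY,
  name VARCHAR(100) NOT NULL,
  bio TEXT
) ENGINE=InnoDB;
CREATE TABLE person_by_name (
  name VARCHAR(100) NOT NULL,
  person_id BIGINT NOT NULL,
  PRIMARY KEY (name, person_id)
) ENGINE=InnoDB;
-- the secondary-key search becomes a primary-key range scan plus primary-key point lookups
SELECT p.*
FROM person_by_name n
JOIN person p ON p.person_id = n.person_id
WHERE n.name = 'Kim';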
kimky...@fhda.edu ("Kyong Kim") writes:
> I was wondering about a scale out problem.
> Let's say you have a large table with 3 columns and 500+ million rows.
>
> Would there be much benefit in splitting the columns into different tables
> based on INT type primary keys across the tables?
To answer y
Do the 3 tables have different column structures? Or do they all have the
same table structure? For example, is Table1 storing only data for year
1990 and table 2 storing data for 1991 etc? If so you could use a merge
table. (Or do you need transactions, in which case you will need to use
Inno
I was wondering about a scale out problem.
Let's say you have a large table with 3 columns and 500+ million rows.
Would there be much benefit in splitting the columns into different tables
based on INT type primary keys across the tables? The split tables will be
hosted on a same physical instance but
Be careful with using InnoDB with large tables. Performance drops
quickly and quite a bit once the size exceeds your RAM capabilities.
On Mar 1, 2009, at 3:41 PM, Claudio Nanni wrote:
Hi Baron,
I need to try some trick like that, a sort of offline index building.
Luckily I have a slave on
Hi Baron,
I need to try some trick like that, a sort of offline index building.
Luckily I have a slave that is basically a backup server.
Tomorrow I am going to play more with the dude.
Do you think that there would be any improvement in converting the table
to InnoDB
forcing to use multiple
Claudio,
http://www.mysqlperformanceblog.com/2007/10/29/hacking-to-make-alter-table-online-for-certain-changes/
Your mileage may vary, use at your own risk, etc.
Basically: convince MySQL that the indexes have already been built but
need to be repaired, then run REPAIR TABLE. As long as the ind
Yes, I killed the query several times but no way: the server kept hogging
disk space and not even a shutdown worked!
Thanks!
Claudio
2009/2/27 Brent Baisley
> MySQL can handle large tables no problem, it's large queries that it
> has issues with. You couldn't jus
inal Message-
> From: Claudio Nanni [mailto:claudio.na...@gmail.com]
> Sent: Friday, February 27, 2009 4:43 PM
> To: mysql@lists.mysql.com
> Subject: MyISAM large tables and indexes managing problems
>
> Hi,
> I have one 15GB table with 250 million records and just the primar
-Original Message-
From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Friday, February 27, 2009 4:43 PM
To: mysql@lists.mysql.com
Subject: MyISAM large tables and indexes managing problems
Hi,
I have one 15GB table with 250 million records and just the primary key,
it is a very
Great Brent, that helps a lot!
It is very good to know your experience.
I will speak to the developers and try to see if there is an opportunity to
apply the 'Divide et Impera' (divide and conquer) principle!
I am sorry to say MySQL is a little out of control when dealing with
huge tables; it is the first time I had to k
Hi,
I have one 15GB table with 250 million records and just the primary key,
it is a very simple table but when a report is run (query) it just takes
hours,
and sometimes the application hangs.
I was trying to play a little with indexes and tuning (there are not many
useful indexes to add, though)
but
ed it. Any feedback is appreciated.
Hello all,
I am trying to add a field to a very large table. The problem is that MySQL
locks up when trying to do so. I even tried deleting the foreign keys on
the table and it won't even let me do that, again locking up.
It works for around 5 minutes or so and then just either locks, or the data
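For reference, the usual workaround when ALTER TABLE locks up like that (before online DDL existed) is to build a shadow copy and swap it in. This is only a sketch with made-up names; rows written to the original table during the copy are not carried over unless you handle them separately:

CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new ADD COLUMN status TINYINT NOT NULL DEFAULT 0;
INSERT INTO orders_new SELECT orders.*, 0 FROM orders;
RENAME TABLE orders TO orders_old, orders_new TO orders;
-- keep orders_old around until the new table has been verified, then DROP it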
Using mysqldump and mysql (Distribution 5.0.22) on CentOS:
[?] Is it theoretically possible to create a mysqldump file using the
default --opt option (i.e., with extended-inserts...) that would create
packet sizes so large that the restore of the backup would fail because
max_allowed_packet wo
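It can fail that way if a single extended INSERT in the dump ends up larger than the restoring server's max_allowed_packet (easiest to hit when individual rows contain big BLOB/TEXT values). The usual fixes are to raise that setting on the server doing the restore, or to keep the dump's statements smaller via mysqldump's --net_buffer_length option. A rough sketch, with 64 MB as a purely illustrative value:

SET GLOBAL max_allowed_packet = 64*1024*1024;
SHOW VARIABLES LIKE 'max_allowed_packet';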
Christos Andronis wrote:
Hi all,
we are trying to run the following query on a table that contains over 600 million rows:
'ALTER TABLE `typed_strengths` CHANGE `entity1_id` `entity1_id` int(10)
UNSIGNED DEFAULT NULL FIRST'
The query takes ages to run (has been running for over 10 hours now).
Rereading his initial query, you are right: this is not a situation of not
having the right composite index.
Yup, you are counting many rows, and hence it will take a while.
Michael Stassen wrote:
pow wrote:
In this case, you require 2 indexes on table b.
1. WHERE b.basetype = 0 (requires index
pow wrote:
In this case, you require 2 indexes on table b.
1. WHERE b.basetype = 0 (requires index on b.basetype)
2. b.BoardID = m.BoardID (requires index on b.BoardID)
No, this requires an index on m.BoardID, which he already has and mysql is
using.
However, MySQL will only use one index p
In this case, you require 2 indexes on table b.
1. WHERE b.basetype = 0 (requires index on b.basetype)
2. b.BoardID = m.BoardID (requires index on b.BoardID)
However, MySQL will only use one index per table in a join.
Hence you need ONE composite index on table b with the fields b.basetype
and b.Boa
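Spelled out, the composite index being suggested for table b (MBOARD) would look something like the following; whether the optimizer actually prefers it over the existing BaseType index is something EXPLAIN has to confirm:

ALTER TABLE MBOARD ADD INDEX basetype_boardid (BaseType, BoardID);
EXPLAIN SELECT COUNT(1) FROM MSGS m, MBOARD b
WHERE b.BaseType = 0 AND m.BoardID = b.BoardID;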
Jon Drukman wrote:
Andrew Braithwaite wrote:
Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table
too?
yes, BoardID is the primary key. BaseType is also indexed.
from the EXPLAIN output I can see that MySQL is choosing to use BaseType
as the index for MBOARD (as we kno
Andrew Braithwaite wrote:
Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table
too?
yes, BoardID is the primary key. BaseType is also indexed.
from the EXPLAIN output I can see that MySQL is choosing to use BaseType
as the index for MBOARD (as we know, MySQL can only use
Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table
too?
Cheers,
A
On 16/7/05 00:01, "Andrew Braithwaite" <[EMAIL PROTECTED]> wrote:
> Hi,
>
> You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the
> BoardID field indexed on the MSGS table too? If not th
Andrew Braithwaite wrote:
Hi,
You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the
BoardID field indexed on the MSGS table too? If not then that may be your
problem.
Cheers,
Andrew
He said, "MSGS ... is indexed on BoardID". Did you look at the EXPLAIN
output? The query
Andrew Braithwaite wrote:
Hi,
You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the
BoardID field indexed on the MSGS table too? If not then that may be your
problem.
MSGS.BoardID is indexed, and the EXPLAIN output I included in the
original message shows that it is indeed
Hi,
You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the
BoardID field indexed on the MSGS table too? If not then that may be your
problem.
Cheers,
Andrew
On 15/7/05 23:31, "Jon Drukman" <[EMAIL PROTECTED]> wrote:
> i'm trying to run this query:
>
> SELECT COUNT(1) FROM M
I'm trying to run this query:
SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE b.BaseType = 0 AND m.BoardID
= b.BoardID;
MSGS has 9.5 million rows, and is indexed on BoardID
MBOARD has 69K rows and is indexed on BaseType
EXPLAIN shows:
mysql> explain SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE b.
tly.
>
> Our database is now close to 20GB, divided in 160 tables. There are only 2
> tables that are larger than 1GB, all others are below 300MB.
>
> These two large tables have about 30,000,000 rows and 11 indexes
> each. Every now and then, I used to r
replication, and everything works perfectly.
Our database is now close to 20GB, divided in 160 tables. There are only 2
tables that are larger than 1GB, all others are below 300MB.
These two large tables have about 30,000,000 rows and 11 indexes
each. Every now and then, I
Hello.
See also these links:
http://dev.mysql.com/doc/mysql/en/table-size.html
http://dev.mysql.com/tech-resources/crash-me.php
and maybe this one :)
http://www.mysql.com/news-and-events/success-stories/
Daniel Kiss <[EMAIL PROTECTED]> wrote:
> Hi All,
>
> I would li
Daniel Kiss schrieb:
However, I'm more interested in practical experience with huge databases.
How effective is MySQL (with InnoDB) at working with tables containing millions or
even billions of rows? And how about the response time of queries that return a
few dozen rows from these big tables
Hi,
Thanks, but I already checked the manual about these aspects, and I have been
doing heavy tests for a while on the performance of MySQL (with InnoDB tables)
with big databases. By the way, the indexing seems to be working on bigint
fields, so probably the size of the int field is not a limit.
On Apr 9, 2005, at 8:05 AM, olli wrote:
hi,
if your table is indexed, I think it can theoretically hold
4,294,967,295 rows, because that's the maximum for an unsigned integer
value in MySQL, and indexing doesn't work with bigint types as far as I
know. But I'm not really sure about that.
I woul
hi,
if your table is indexed, I think it can theoretically hold
4,294,967,295 rows, because that's the maximum for an unsigned integer
value in MySQL, and indexing doesn't work with bigint types as far as I
know. But I'm not really sure about that.
Am 09.04.2005 um 11:42 schrieb Daniel Kiss:
H
Hi All,
I would like to know how big the biggest database is that can be handled
effectively by MySQL/InnoDB.
Like physical size, number of tables, number of rows per table, average row
length, number of indexes per table, etc.
Practically, what if I have a master/detail table-pair, where the
Hi,
I've a rather large table (~1,800,000 rows) with five CHAR columns - let's say col1,
col2, ..., col5. Col1 has the primary key. The columns col2, col3, col4, col5 hold
strings of variable length. I need to find duplicate entries that have the same value
for col2, col3, col4 & col5 but (and that
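In case a concrete starting point helps, the usual way to find such rows is a GROUP BY over the four columns with a HAVING filter; the table name below is made up since the post doesn't give one:

SELECT col2, col3, col4, col5, COUNT(*) AS dupes, MIN(col1) AS first_id
FROM my_table
GROUP BY col2, col3, col4, col5
HAVING COUNT(*) > 1;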
At 19:02 -0700 on 06/18/2004, Paul Chu wrote about Re: Full Text
Index on Large Tables - Not Answered:
Appreciate any help at all
Thanks, Paul
-Original Message-
From: Paul Chu [mailto:[EMAIL PROTECTED]
Sent: Friday, June 18, 2004 10:16 AM
To: [EMAIL PROTECTED]
Subject: Full Text Index
Appreciate any help at all
Thanks, Paul
-Original Message-
From: Paul Chu [mailto:[EMAIL PROTECTED]
Sent: Friday, June 18, 2004 10:16 AM
To: [EMAIL PROTECTED]
Subject: Full Text Index on Large Tables
Hi,
If I have a table with 100 - 200 million rows and I want to search
For
Hi,
If I have a table with 100 - 200 million rows and I want to search
for records with specific characteristics.
Ex.
Skills varchar(300)
Skill ids: 10 15
Accounting, finance, etc.
Is it advisable to create a field with skill ids and then use the
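A hedged sketch of the full-text route being asked about, with a made-up table name and id column. One caveat: with the default ft_min_word_len of 4, short tokens such as '10' or '15' are not indexed at all, so the skill ids would need a prefix (e.g. 'skill10') or the variable would have to be lowered for this to work:

ALTER TABLE candidates ADD FULLTEXT INDEX ft_skills (Skills);
SELECT id FROM candidates
WHERE MATCH(Skills) AGAINST('+skill10 +skill15' IN BOOLEAN MODE);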
Nope.
As far as I'm aware the only disk space being used is in the database's
directory, and that file system has 200Gb spare.
(/tmp has 19Gb free anyway)
Regards,
Tim.
gerald_clark wrote:
Are you running out of temp space?
Tim Brody wrote:
binary MySQL 4.0.20-standard on Redhat 7.2/Linux
Dual-pr
Are you running out of temp space?
Tim Brody wrote:
binary MySQL 4.0.20-standard on Redhat 7.2/Linux
Dual-proc, 4Gb ram, raid
I'm trying to change an index on a 12Gb table (270 million rows). Within an
hour or so the entire table is copied, and the index reaches 3.7Gb of data.
Then the database app
binary MySQL 4.0.20-standard on Redhat 7.2/Linux
Dual-proc, 4Gb ram, raid
I'm trying to change an index on a 12Gb table (270 million rows). Within an
hour or so the entire table is copied, and the index reaches 3.7Gb of data.
Then the database appears to do nothing more, except for touching the In
On Tue, Mar 30, 2004 at 10:30:03AM -0800, Dathan Vance Pattishall wrote:
> Tips on managing very large tables for myISAM:
>
> 1) Ensure that the table type is not DYNAMIC but Fixed.
> => Issue the show table status command.
> => Look at Row Format
> => if Row Fo
On 30 Mar 2004 at 10:30, Dathan Vance Pattishall wrote:
> 1) Ensure that the table type is not DYNAMIC but Fixed.
> => Issue the show table status command.
> => Look at Row Format
> => if Row Format != Dynamic then you're OK, else get rid of varchar-type
> columns
> => Reason:
> Your myISA
Tips on managing very large tables for MyISAM:
1) Ensure that the table type is not DYNAMIC but Fixed.
=> Issue the SHOW TABLE STATUS command.
=> Look at Row Format
=> if Row Format != Dynamic then you're OK, else get rid of varchar-type
columns
=> Reason:
Your
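A quick way to check and act on that tip (table and column names invented):

SHOW TABLE STATUS LIKE 'my_big_table';   -- look at the Row_format column
ALTER TABLE my_big_table MODIFY notes CHAR(100) NOT NULL DEFAULT '';   -- swap VARCHARs for CHAR
-- or, on MyISAM, force fixed-length rows in one statement (only possible without BLOB/TEXT columns):
ALTER TABLE my_big_table ROW_FORMAT=FIXED;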
how the
primary key is created and used.
/Henrik
-Original Message-
From: Chad Attermann [mailto:[EMAIL PROTECTED]
Sent: den 30 mars 2004 19:42
To: [EMAIL PROTECTED]
Subject: Managing Very Large Tables
Hello,
I am trying to determine the best way to manage very large (MyISAM) tables
hi!
Chad Attermann wrote:
Hello,
I am trying to determine the best way to manage very large (MyISAM) tables, ensuring that they can be queried in reasonable amounts of time.
--8<
Why insist on using MyISAM, and not use some table format that can
assure you some degree of crash recovery a
Hello,
I am trying to determine the best way to manage very large (MyISAM) tables, ensuring
that they can be queried in reasonable amounts of time. One table in particular has
over 18 million records (8GB data) and is growing by more than 150K records per day,
and that rate is increasing. Bes
>We copy data from one table to another using:
>insert into TBL1 select * from TBL2;
>The current database hangs and the process never finishes when copying huge
>tables (around 25 million rows). Looking at the processlist it states that
>the process stays in "closing table" or "wait on cond
Hi,
We copy data from one table to another using:
insert into TBL1 select * from TBL2;
The current database hangs and the process never finishes when copying huge
tables (around 25 million rows). Looking at the processlist it states that
the process stays in "closing table" or "wait on cond" st
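For what it's worth, one alternative to a single giant statement is to copy in primary-key ranges, so each chunk finishes (and commits) on its own instead of building one enormous operation; this assumes an integer primary key, here called id:

INSERT INTO TBL1 SELECT * FROM TBL2 WHERE id BETWEEN 1 AND 1000000;
INSERT INTO TBL1 SELECT * FROM TBL2 WHERE id BETWEEN 1000001 AND 2000000;
-- ...and so on up to MAX(id)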
another drive, like Dan said.
Brad
-Original Message-
From: Brendan J Sherar [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 07, 2003 6:27 AM
To: [EMAIL PROTECTED]
Subject: Adding indexes on large tables
Greetings to all, and thanks for the excellent resource!
I have a question regarding
IL PROTECTED]
> Subject: Adding indexes on large tables
>
>
> Greetings to all, and thanks for the excellent resource!
>
> I have a question regarding indexing large tables (150M+ rows, 2.6G).
>
> The tables in question have a format like this:
>
> word_id mediumint uns
Greetings to all, and thanks for the excellent resource!
I have a question regarding indexing large tables (150M+ rows, 2.6G).
The tables in question have a format like this:
word_id mediumint unsigned
doc_id mediumint unsigned
Our indexes are as follows:
PRIMARY KEY (word_id, doc_id)
INDEX
_total;
on delete
update count_table set total_count = total_count - 1, je_total = je_total -
:old.je_total;
hope this helps,
Dan Greene
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Friday, September 26, 2003 7:34 AM
> To: [EMAIL PROTE
Hi: Issuing a simple group by like this:
select C_SF, count(*), sum(je) as sum_je
from kp_data
group by C_SF;
against a large (1.4G) table holding 5 million records with 60 columns
takes about 330 secs on my Win2000 development box,
a 2.0GHz P4 w/ 1G RAM and an IDE MAXT
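A covering index over the grouped and summed columns often helps with exactly this shape of query, because the GROUP BY can then be resolved from the index alone instead of reading 60-column rows; the column names are taken from the query above:

ALTER TABLE kp_data ADD INDEX c_sf_je (C_SF, je);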
ure if the time would
increase exponentially based on the number of records being indexed, or if it
is a linear relationship.
Also, does anyone know of a method to get the faster indexing method to work
on large tables? I tried bumping up the myisam_max_sort_file_size in my.cnf,
but it tops
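For anyone hitting the same wall: if myisam_max_sort_file_size (or free space in tmpdir) is smaller than the finished index, MySQL silently falls back from the fast sort method to the much slower keycache method, so the variables and the build path are worth checking; a quick way to see what is going on:

SHOW VARIABLES LIKE 'myisam_max_sort_file_size';
SHOW VARIABLES LIKE 'myisam_sort_buffer_size';
SHOW PROCESSLIST;   -- during the ALTER/CREATE INDEX, the State column should read 'Repair by sorting', not 'Repair with keycache'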
Joseph,
How big are your table files? Are they MyISAM or InnoDB? Do you have
indexes? How much memory do you have? Is your MySQL running on a
dedicated server, or do you run anything else on your db server?
These questions need to be answered before suggesting anything logical.
But general sugges
Group,
I have been working with MySQL for about 5 years - mainly in LAMP shops. The
tables have been between 20,000 and 100,000 records in size. Now I have a project
where the tables are in the millions of records.
This is very new to me and I am noticing that my queries are really
slow!
What ar
R_TABLE_problems.html
* http://www.mysql.com/doc/en/ALTER_TABLE.html
* http://www.mysql.com/doc/en/Packet_too_large.html
* http://www.mysql.com/doc/en/Data_Definition.html
This was an automated response to your email 'speedup 'alter table' for large tables'.
Final se
I just had to alter a large (25 GB, 100 million rows) table to
increase the max_rows parameter. The 'alter table' query has now been
running for 60+ hours, the last 30+ hours of which it has spent in 'repair with
keycache' mode. Is there any way to speed up this operation?
I realize it is probably too late now. But next
First of all, I'd try optimizing your app before writing a whole new
back-end. As such, I'd keep to the normal mysql list.
For example, even if the indexes are big, try to index all the columns that
might be searched. Heck, start by indexing all of them. If the data is
read-only, try myisampack.
We (Centers for Disease Control and Prevention) want to mount
relatively large read only tables that rarely change on
modest ($10K ??) hardware and get < 10 second response time.
For instance, 10 years of detailed natality records for the United
States in which each record has some 200 fields and,
Hi,
Instead of using separate "CREATE INDEX" statements, you can build all
your index at once with "ALTER TABLE":
ALTER TABLE my_table
ADD INDEX ...,
ADD INDEX ... ,
ADD INDEX ... ;
Hope this helps,
--
Joseph Bueno
Salvesen, Jens-Petter wrote:
> Hello, everyone
>
> I have the following
Hello, everyone
I have the following situation:
After "enjoying" problems related to deleting a large portion of a table,
subsequent slow selects and such, I decided to take an alternate route when
removing data from a table:
The table had transactions for one year, and the table really only needs
Is there a way to change the directory used when MySQL
copies the table for creating indexes on large tables?
My tmp directory is partitioned for 509 megs and
adding an index via ATLER TABLE or CREATE TABLE yields
this:
ERROR 3: Error writing file '/tmp/STFgNG04' (Errcode: 28)
the
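For what it's worth, the directory used for those sort/copy files is controlled by the tmpdir setting (also reachable via the --tmpdir option or the TMPDIR environment variable), so pointing it at a larger partition and restarting is the usual fix; the path below is only an example:

[mysqld]
tmpdir = /bigdisk/mysqltmp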
On 5 Jun 2002, at 7:50, Jared Richardson wrote:
> This table is part of a product that contains publicly available (and
> always expanding) biological data in addition to
> large companies' internal data. A one terabyte cap very well could
> come back to haunt us one day! (sadl
- Original Message -
From: "Keith C. Ivey" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "Jared Richardson" <[EMAIL PROTECTED]>
Sent: Tuesday, June 04, 2002 5:24 PM
Subject: Re: Bug related to large tables and its indexes on Win2k
| On 4
On 4 Jun 2002, at 15:43, Jared Richardson wrote:
> | >AVG_ROW_LENGTH=4096 MAX_ROWS=4294967295;
> |
> | Why do you use AVG_ROW_LENGTH=4096? It seems to me the max record |
> length is 528...? |
>
> According to the MySQL docs, the max table size is AVG_ROW_LENGTH *
> MAX_ROWS
>
> We were try
I replied below
- Original Message -
From: "Roger Baklund" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, June 04, 2002 3:24 PM
Subject: Re: Bug related to large tables and its indexes on Win2k
| * Jared Richardson
| [...]
| > CREATE TABLE Ic
| Jared, I can't help solve your problem, but I'd be very interested if you
| got an answer!
|
| Two suggestions though that may be of use:
| 1) Make sure your indexes are healthy
| 2) Try using a MERGE table
Thanks! The indexes are the problem as I understand it... it appears that
MySQL is
* Jared Richardson
[...]
> CREATE TABLE IcAlias(
>IcAliasID BIGINT NOT NULL PRIMARY KEY,
>mID VARCHAR(255) NOT NULL,
>IcEntityID BIGINT NOT NULL,
>IcTypeID SMALLINT NOT NULL,
>IcDupSortID VARCHAR(255) NOT NULL,
>INDEX mIDIdx (mID),
>INDEX IcTypeIDIdx (IcTypeID),
>IN
bles for our application.
>
> - Original Message -
> From: "Schneck Walter" <[EMAIL PROTECTED]>
> To: "'Jared Richardson '" <[EMAIL PROTECTED]>; "Schneck Walter"
> <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
&g
ed Richardson '" <[EMAIL PROTECTED]>; "Schneck Walter"
<[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Tuesday, June 04, 2002 10:20 AM
Subject: AW: Bug related to large tables and its indexes on Win2k
| Well,
|
| im not an exper
The table type is the default, MYISAM
- Original Message -
From: "Schneck Walter" <[EMAIL PROTECTED]>
To: "'Jared Richardson '" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Tuesday, June 04, 2002 10:11 AM
Subjec
t;[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Cc: "Jared Richardson" <[EMAIL PROTECTED]>
Sent: Tuesday, June 04, 2002 9:40 AM
Subject: Re: Bug related to large tables and its indexes on Win2k
At 08:17 4/6/2002 -0400, Jared Richardson wrote:
Hi,
>
Hi all,
When large tables are being addressed, we seem to have encountered a bug
related to having large indexes on the table.
We have several tables in our system that have reached 4 gigs in size. We
altered the table definition to allow it to get larger... this is our
current table creation
t; <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, May 14, 2002 6:44 PM
Subject: Large tables on FreeBSD
> Hello,
>
> I need to create a table that is around 15 GB on a FreeBSD
4.5RELEASE system.
>
> I compiled mysql-3.23.49 without any extraneous flags such as
[snip]
I watch the .MYD file grow to about 4.2 GB and stop with this error from
mysqlimport.
mysqlimport: Error: Can't open file: 'temp.MYD'. (errno: 144)
mysqlimport: Error: The table 'temp' is full, when using table: temp
I've tried starting safe_mysqld with the --big-tables option, and that
d
Hello,
I need to create a table that is around 15 GB on a FreeBSD 4.5RELEASE system.
I compiled mysql-3.23.49 without any extraneous flags such as (--disable-largefile)
I use mysqlimport to import the table from a flatfile which is about 9GB.
I watch the .MYD file grow to about 4.2 GB and stop
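That stall right around 4 GB is the classic MyISAM default pointer limit rather than a file-system problem; the usual fix is to raise the table's limit before importing (the numbers below are only placeholders, sized for the data being loaded):

ALTER TABLE temp MAX_ROWS=200000000 AVG_ROW_LENGTH=100;
SHOW TABLE STATUS LIKE 'temp';   -- Max_data_length should now be far beyond 4 GB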
k around such a limitation I'd be sure that you use a
large-file capable system regardless of the table type you choose.
-JF
> -Original Message-
> From: Nigel Edwards [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 30, 2002 7:19 AM
> To: [EMAIL PROTECTED]
> Subjec
, fiberchannel and SAN w 1 TB raid 5.
/Jörgen
Nigel Edwards wrote:
>I need to use some large tables using MySQL under Linux and would welcome
>suggestions from anyone with any experience. Currently one month's data
>creates a table of 3M records about 750Mb in size. I should like to
>co
I need to use some large tables using MySQL under Linux and would welcome
suggestions from anyone with any experience. Currently one month's data
creates a table of 3M records, about 750 MB in size. I should like to
consolidate 12 months' data (which is expected to grow to say 50M records per
month
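One pattern that fits this month-by-month growth, and that others on the list have reported working well, is one MyISAM table per month plus a MERGE table over them; the definitions below are only illustrative:

CREATE TABLE data_2002_01 (id INT UNSIGNED NOT NULL, ts DATETIME NOT NULL, val CHAR(40), KEY(ts)) TYPE=MyISAM;
CREATE TABLE data_2002_02 (id INT UNSIGNED NOT NULL, ts DATETIME NOT NULL, val CHAR(40), KEY(ts)) TYPE=MyISAM;
CREATE TABLE data_all (id INT UNSIGNED NOT NULL, ts DATETIME NOT NULL, val CHAR(40), KEY(ts))
  TYPE=MERGE UNION=(data_2002_01, data_2002_02) INSERT_METHOD=LAST;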
Hi,
> -Original Message-
> From: Bill Adams [mailto:[EMAIL PROTECTED]]
> Sent: Friday, February 22, 2002 2:04 PM
> To: MyODBC Mailing List; MySQL List
> Subject: RE: MySQL + Access + MyODBC + LARGE Tables
>
>
> All, there were many emails posted about this
All, there were many emails posted about this on the MyODBC list which,
of course, can be viewed via the archive on the mysql.com site. For the
most part I will neither quote nor repeat the information from those
emails here.
The conclusion is that MySQL + Merge Tables is perfectly capable of
b
-Original Message-
From: Eugenio Ricart [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 15, 2002 7:00
To: MyODBC Mailing List
Subject: RE: MySQL + Access + MyODBC + LARGE Tables
Hello,
I work with VB 6.0, ADO 2.5, and Access. I am trying to work with MySQL and the
Last My
Spoiler: Venu's Suggestion about "Dynamic Cursor" is the answer
On Thu, 2002-02-14 at 20:34, Venu wrote:
> > MyODBC, as compiled today, uses mysql_store_result to get records. This
> > is fine for reasonably sized tables. However, if the table has millions
> > of records, writing the resul
Bill,
Some databases can use a live result set when retrieving a lot of
records and I really really wish MySQL could do the same. A live result set
does not create a temporary table or use memory to retrieve all the
records. It will grab 50 or so records at a time, and when scrolling
f
Hi,
>
>
> Monty, Venu, I hope you read this... :)
>
>
> I really, really want to use MySQL as the database backend for my
> datawarehouse. Mind you I have played around with merge tables quite a
> bit and know that MySQL is more than up to the task. There are numerous
> (not necessarily co
know you probably are aware of this issue but it
didn't hurt to say it (*_*).
I hope this helped at least a little.
-Original Message-
From: Bill Adams [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 14, 2002 6:05 PM
To: MySQL List; MyODBC Mailing List
Subject: MySQL + Access + MyO
Monty, Venu, I hope you read this... :)
I really, really want to use MySQL as the database backend for my
datawarehouse. Mind you I have played around with merge tables quite a
bit and know that MySQL is more than up to the task. There are numerous
(not necessarily cost related) reasons as to
Hi,
I have two tables I wish to combine into a single table. Both tables use
the same format and include a unique key (ID) auto-increment field. Each
row also contains a date field. Whoever managed this database in the past
at some point set up a new server, and initially had new data sent to
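Since both tables use an auto-increment ID, the simplest way to combine them without key collisions is to let the target table hand out fresh IDs and copy everything else; the column names here are invented because the real schema isn't shown:

INSERT INTO table_a (entry_date, payload)
SELECT entry_date, payload FROM table_b
ORDER BY entry_date;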
On Wed, 2001-12-05 at 16:33, Arjen G. Lentz wrote:
>
> - Original Message -
> From: "Florin Andrei" <[EMAIL PROTECTED]>
>
> > SELECT event.cid, iphdr.ip_src, iphdr.ip_dst, tcphdr.tcp_dport FROM
> > event, iphdr, tcphdr WHERE event.cid = iphdr.cid AND event.cid =
> > tcphdr.cid AND tcphdr
ill be
> > able to actually do a SELECT on those tables. Maybe tweak
> > parameters like join_buffer_size? table_cache? Anyone has some
> > experience with these?... What's the best way to optimize MySQL
> > for running SELECTs on multiple large tables?
>
> Yes
Hi again Florin,
- Original Message -
From: "Florin Andrei" <[EMAIL PROTECTED]>
> SELECT event.cid, iphdr.ip_src, iphdr.ip_dst, tcphdr.tcp_dport FROM
> event, iphdr, tcphdr WHERE event.cid = iphdr.cid AND event.cid =
> tcphdr.cid AND tcphdr.tcp_flags = '2';
Your only search condition i
there's any way to optimise MySQL so that I will be able
> to actually do a SELECT on those tables.
> Maybe tweak parameters like join_buffer_size? table_cache? Anyone has
> some experience with these?... What's the best way to optimize MySQL for
> running SELECTs on multiple
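Before reaching for join_buffer_size or table_cache, it is usually worth confirming with EXPLAIN that every table in the join has a usable index on its join and filter columns; the statements below reuse the column names from the query earlier in the thread and are only a sketch:

EXPLAIN SELECT event.cid, iphdr.ip_src, iphdr.ip_dst, tcphdr.tcp_dport
FROM event, iphdr, tcphdr
WHERE event.cid = iphdr.cid AND event.cid = tcphdr.cid AND tcphdr.tcp_flags = '2';
ALTER TABLE iphdr ADD INDEX (cid);
ALTER TABLE tcphdr ADD INDEX (cid), ADD INDEX (tcp_flags);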