Dear all,
I have a question regarding fetching data from large tables.
I need to fetch selected columns from a 90 GB table with a 5 GB index on it.
CREATE TABLE `content_table` (
`c_id` bigint(20) NOT NULL DEFAULT '0',
`link_level` tinyint(4) DEFAULT NULL,
`u_id` bigint(20) NOT NULL,
`heading` varchar(150) DEFAULT NULL
Subject: Re: Select data from large tables
The table has more than 20,163,845 rows, and my application continuously
inserts data into it.
I think the table grows by about 2.5 GB daily.
Thanks
kimky...@fhda.edu (Kyong Kim) writes:
I was wondering about a scale out problem.
Let's say you have a large table with 3 cols and 500+ million rows.
Would there be much benefit in splitting the columns into different tables
based on INT type primary keys across the tables?
To answer your
Simon,
Thanks for the feedback.
I don't have all the details of the schema and workload. Just an
interesting idea that was presented to me.
I think the idea is to split a lengthy secondary key lookup into 2 primary
key lookups and reduce the cost of clustering secondary key with primary
key data
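That idea can be sketched in SQL. All table and column names below are hypothetical, since the actual schema wasn't posted; the point is only that a narrow mapping table turns a lookup on a long secondary key into two primary-key lookups, so the main table's wide rows no longer need a secondary index carrying primary-key data:

```sql
-- Narrow mapping table: secondary key -> surrogate id, by primary key.
CREATE TABLE doc_lookup (
  doc_key VARCHAR(100) NOT NULL,
  doc_id  BIGINT       NOT NULL,
  PRIMARY KEY (doc_key)
) ENGINE=InnoDB;

-- Main table, addressed only by its short primary key.
CREATE TABLE doc_data (
  doc_id  BIGINT NOT NULL,
  payload TEXT,
  PRIMARY KEY (doc_id)
) ENGINE=InnoDB;

-- One secondary-key lookup becomes two primary-key lookups:
SELECT d.payload
FROM doc_lookup l
JOIN doc_data d ON d.doc_id = l.doc_id
WHERE l.doc_key = 'some-key';
```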
kimky...@fhda.edu (Kyong Kim) writes:
I don't have all the details of the schema and workload. Just an
interesting idea that was presented to me.
I think the idea is to split a lengthy secondary key lookup into 2 primary
key lookups and reduce the cost of clustering secondary key with primary
That's why you really need to be more precise in the data structures
you are planning on using. This can change the results significantly.
So no, I don't have any specific answers to your questions as you don't
provide any specific information in what you ask.
Yeah. Let me see if I can
I was wondering about a scale out problem.
Let's say you have a large table with 3 cols and 500+ million rows.
Would there be much benefit in splitting the columns into different tables
based on INT type primary keys across the tables? The split tables will be
hosted on the same physical instance
Do the 3 tables have different column structures? Or do they all have the
same table structure? For example, is Table1 storing only data for year
1990 and table 2 storing data for 1991 etc? If so you could use a merge
table. (Or do you need transactions, in which case you will need to use
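For the year-per-table case mentioned above, a MERGE table sketch might look like this (hypothetical names; MERGE requires identically structured MyISAM tables):

```sql
-- Identical MyISAM tables, one per year.
CREATE TABLE sales_1990 (id INT NOT NULL, amount DECIMAL(10,2), PRIMARY KEY (id)) ENGINE=MyISAM;
CREATE TABLE sales_1991 (id INT NOT NULL, amount DECIMAL(10,2), PRIMARY KEY (id)) ENGINE=MyISAM;

-- One logical table over both; inserts go to the last table in the union.
CREATE TABLE sales_all (
  id INT NOT NULL,
  amount DECIMAL(10,2),
  KEY (id)
) ENGINE=MERGE UNION=(sales_1990, sales_1991) INSERT_METHOD=LAST;

-- Queries against sales_all fan out over the underlying tables:
SELECT SUM(amount) FROM sales_all;
```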
Claudio,
http://www.mysqlperformanceblog.com/2007/10/29/hacking-to-make-alter-table-online-for-certain-changes/
Your mileage may vary, use at your own risk, etc.
Basically: convince MySQL that the indexes have already been built but
need to be repaired, then run REPAIR TABLE. As long as the
Hi Baron,
I need to try some trick like that, a sort of offline index building.
Luckily I have a slave that is basically a backup server.
Tomorrow I am going to play more with the dude.
Do you think that there would be any improvement in converting the table
to InnoDB
forcing to use multiple
Be careful with using InnoDB with large tables. Performance drops
quickly and quite a bit once the size exceeds your RAM capabilities.
On Mar 1, 2009, at 3:41 PM, Claudio Nanni wrote:
Hi Baron,
I need to try some trick like that, a sort of offline index building.
Luckily I have a slave
Subject: MyISAM large tables and indexes managing problems
Hi,
I have one 15GB table with 250 million records and just the primary key,
it is a very simple table but when a report is run (query) it just takes
hours,
and sometimes the application hangs.
I was trying to play a little with indexes
Yes, I killed the query several times, but no way: the server kept
hogging disk space and not even a shutdown worked!
Thanks!
Claudio
2009/2/27 Brent Baisley brentt...@gmail.com
MySQL can handle large tables no problem, it's large queries that it
has issues with. You couldn't just kill
Hi,
I have one 15GB table with 250 million records and just the primary key,
it is a very simple table but when a report is run (query) it just takes
hours,
and sometimes the application hangs.
I was trying to play a little with indexes and tuning (there are no great
indexes to be added, though)
but
Great Brent, helps a lot!
it is very good to know your experience.
I will speak to developers and try to see if there is the opportunity to
apply the 'Divide et Impera' principle!
I am sorry to say MySQL is a little out of control when dealing with
huge tables; it is the first time I had to
-Original Message-
From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Friday, February 27, 2009 4:43 PM
To: mysql@lists.mysql.com
Subject: MyISAM large tables and indexes managing problems
Hi,
I have one 15GB table with 250 million records and just the primary key,
it is a very
--
View this message in context:
http://www.nabble.com/Dealing-With-Very-Large-Tables-tp15812712p15812712.html
Sent from the MySQL - General mailing list archive at Nabble.com.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com
Hello all,
I am trying to add a field to a very large table. The problem is that MySQL
locks up when trying to do so. I even tried deleting the foreign keys on
the table and it won't even let me do that, again locking up.
It works for around 5 minutes or so then just either locks or the
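One common workaround for an ALTER that locks a huge table is to build the altered table on the side and swap it in with an atomic rename. A sketch with hypothetical names; note that rows written during the copy have to be reconciled separately:

```sql
-- Clone the structure, apply the change to the empty clone.
CREATE TABLE mytable_new LIKE mytable;
ALTER TABLE mytable_new ADD COLUMN new_field INT DEFAULT NULL;

-- Copy the rows; the NULL fills the appended column.
INSERT INTO mytable_new SELECT *, NULL FROM mytable;

-- Atomic swap; the old table is kept around until verified.
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;
```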
Using mysqldump and mysql (Distribution 5.0.22) on CentOS:
[?] Is it theoretically possible to create a mysqldump file using the
default --opt option (i.e., with extended-inserts...) that would create
packet sizes so large that the restore of the backup would fail because
max_allowed_packet
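If packets do exceed the limit on restore, two knobs usually help: raising max_allowed_packet on both the server and the clients, or dumping with --skip-extended-insert (one row per INSERT, at the cost of a slower restore). A my.cnf sketch; the 64M value is illustrative, not a recommendation:

```ini
[mysqld]
max_allowed_packet = 64M

[mysqldump]
max_allowed_packet = 64M

[mysql]
max_allowed_packet = 64M
```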
Christos Andronis wrote:
Hi all,
we are trying to run the following query on a table that contains over 600 million rows:
'ALTER TABLE `typed_strengths` CHANGE `entity1_id` `entity1_id` int(10)
UNSIGNED DEFAULT NULL FIRST'
The query takes ages to run (has been running for over 10 hours
In this case, you require 2 indexes on table b.
1. WHERE b.basetype = 0 (requires index on b.basetype)
2. b.BoardID = m.BoardID (requires index on b.BoardID)
However, you are only allowed one index per table join.
Hence you need ONE composite index on table b with the fields b.basetype
and
pow wrote:
In this case, you require 2 indexes on table b.
1. WHERE b.basetype = 0 (requires index on b.basetype)
2. b.BoardID = m.BoardID (requires index on b.BoardID)
No, this requires an index on m.BoardID, which he already has and mysql is
using.
However, you are only allowed one index
Rereading his initial query, you are right: this is not a situation of not
having the right composite index.
Yup, you are counting many rows, and hence it will take a while.
Michael Stassen wrote:
pow wrote:
In this case, you require 2 indexes on table b.
1. WHERE b.basetype = 0 (requires
i'm trying to run this query:
SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE b.BaseType = 0 AND m.BoardID
= b.BoardID;
MSGS has 9.5 million rows, and is indexed on BoardID
MBOARD has 69K rows and is indexed on BaseType
EXPLAIN shows:
mysql> explain SELECT COUNT(1) FROM MSGS m, MBOARD b WHERE
Hi,
You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the
BoardID field indexed on the MSGS table too? If not then that may be your
problem.
Cheers,
Andrew
On 15/7/05 23:31, Jon Drukman [EMAIL PROTECTED] wrote:
i'm trying to run this query:
SELECT COUNT(1) FROM MSGS m,
Andrew Braithwaite wrote:
Hi,
You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the
BoardID field indexed on the MSGS table too? If not then that may be your
problem.
MSGS.BoardID is indexed, and the EXPLAIN output I included in the
original message shows that it is
Andrew Braithwaite wrote:
Hi,
You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the
BoardID field indexed on the MSGS table too? If not then that may be your
problem.
Cheers,
Andrew
He said, MSGS ... is indexed on BoardID. Did you look at the EXPLAIN
output? The query
Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table
too?
Cheers,
A
On 16/7/05 00:01, Andrew Braithwaite [EMAIL PROTECTED] wrote:
Hi,
You're doing a join on 'BoardID' on the tables MSGS and MBOARD. Is the
BoardID field indexed on the MSGS table too? If not then that
Andrew Braithwaite wrote:
Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table
too?
yes, BoardID is the primary key. BaseType is also indexed.
from the EXPLAIN output i can see that mysql is choosing to use BaseType
as the index for MBOARD (as we know, mysql can only use
Jon Drukman wrote:
Andrew Braithwaite wrote:
Sorry, I meant to say is the 'BoardID' field indexed on the MBOARD table
too?
yes, BoardID is the primary key. BaseType is also indexed.
from the EXPLAIN output i can see that mysql is choosing to use BaseType
as the index for MBOARD (as we
replication, and everything works perfectly.
Our database is now close to 20GB, divided in 160 tables. There are only 2
tables that are larger than 1GB, all others are below 300MB.
These two large tables have about 30,000,000 rows and 11 indexes
(each). Every now and then, I
in 160 tables. There are only 2
tables that are larger than 1GB, all others are below 300MB.
These two large tables have about 30,000,000 rows and 11 indexes
(each). Every now and then, I used to run myisamchk to fix and
optimize this table (myisamchk -r, -S, -a). All
Hello.
See also these links:
http://dev.mysql.com/doc/mysql/en/table-size.html
http://dev.mysql.com/tech-resources/crash-me.php
and maybe this one :)
http://www.mysql.com/news-and-events/success-stories/
Daniel Kiss [EMAIL PROTECTED] wrote:
Hi All,
I would like
Hi All,
I would like to know how big is the biggest database that can be handled
effectively by MySQL/InnoDB.
Like physical size, number of tables, number of rows per table, average row
length, number of indexes per table, etc.
Practically, what if I have a master/detail table-pair, where the
hi,
if your table is indexed, i think it can theoretically hold
4,294,967,295 rows, because that's the maximum for an unsigned integer
value in mysql and indexing doesn't work with bigint types as far as i
know. but, i'm not really sure about that.
Am 09.04.2005 um 11:42 schrieb Daniel Kiss:
On Apr 9, 2005, at 8:05 AM, olli wrote:
hi,
if your table is indexed, i think it can theoretically hold
4,294,967,295 rows, because that's the maximum for an unsigned integer
value in mysql and indexing doesn't work with bigint types as far as i
know. but, i'm not really sure about that.
I
Hi,
Thanks, but I already checked the manual about these aspects, and I've been
doing heavy tests for a while on the performance of MySQL (with InnoDB tables)
with big databases. By the way, the indexing seems to be working on bigint
fields, so probably the size of the int field is not a limit.
Daniel Kiss schrieb:
However, I'm more interested in practical experience with huge databases.
How effective is MySQL (with InnoDB) working with tables containing millions or
rather billions of rows? How about the response time of queries that return a
few dozen rows from these big tables
Hi,
I have a rather large table (~1,800,000 rows) with five CHAR columns - let's say col1,
col2, ..., col5. Col1 has the primary key. The columns col2, col3, col4, col5 hold
strings of variable length. I need to find duplicate entries that have the same value
for col2, col3, col4, col5 but (and
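For reference, the standard way to surface such duplicates is to group on the four columns and keep groups with more than one row (the table name here is hypothetical; the column names are from the post):

```sql
-- Each returned row is one set of values shared by 2+ entries.
SELECT col2, col3, col4, col5, COUNT(*) AS cnt
FROM mytable
GROUP BY col2, col3, col4, col5
HAVING COUNT(*) > 1;
```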
06/18/2004 10:02 PM
Subject: RE: Full Text Index on Large Tables - Not Answered
At 19:02 -0700 on 06/18/2004, Paul Chu wrote about Re: Full Text
Index on Large Tables - Not Answered:
Appreciate any help at all
Thanks, Paul
-Original Message-
From: Paul Chu [mailto:[EMAIL PROTECTED]
Sent: Friday, June 18, 2004 10:16 AM
To: [EMAIL PROTECTED]
Subject: Full Text Index on Large Tables
Hi,
If I have a table with 100 - 200 million rows and I want to search
For records with specific characteristics.
Ex.
Skills varchar(300)
Skill id's 10 15
Accounting finance etc.
Is it advisable to create a field with skill ids and then use the
binary MySQL 4.0.20-standard on Redhat 7.2/Linux
Dual-proc, 4Gb ram, raid
I'm trying to change an index on a 12Gb table (270 million rows). Within an
hour or so the entire table is copied, and the index reaches 3.7Gb of data.
Then the database appears to do nothing more, except for touching the
Are you running out of temp space?
Tim Brody wrote:
binary MySQL 4.0.20-standard on Redhat 7.2/Linux
Dual-proc, 4Gb ram, raid
I'm trying to change an index on a 12Gb table (270 million rows). Within an
hour or so the entire table is copied, and the index reaches 3.7Gb of data.
Then the database
Nope.
As far as I'm aware the only disk space being used is in the database's
directory, and that file system has 200Gb spare.
(/tmp has 19Gb free anyway)
Regards,
Tim.
gerald_clark wrote:
Are you running out of temp space?
Tim Brody wrote:
binary MySQL 4.0.20-standard on Redhat 7.2/Linux
Hello,
I am trying to determine the best way to manage very large (MyISAM) tables, ensuring
that they can be queried in reasonable amounts of time. One table in particular has
over 18 million records (8GB data) and is growing by more than 150K records per day,
and that rate is increasing.
hi!
Chad Attermann wrote:
Hello,
I am trying to determine the best way to manage very large (MyISAM) tables, ensuring that they can be queried in reasonable amounts of time.
--8<--
Why insist on using MyISAM, rather than some table format that can
assure you some degree of crash recovery
key is created and used.
/Henrik
-Original Message-
From: Chad Attermann [mailto:[EMAIL PROTECTED]
Sent: den 30 mars 2004 19:42
To: [EMAIL PROTECTED]
Subject: Managing Very Large Tables
Hello,
I am trying to determine the best way to manage very large (MyISAM) tables,
ensuring
Tips on managing very large tables for MyISAM:
1) Ensure that the table type is not DYNAMIC but Fixed.
=> Issue the SHOW TABLE STATUS command.
=> Look at Row Format.
=> If Row Format != Dynamic then you're OK; else get rid of varchar-type
columns.
=> Reason:
Your MyISAM table can
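The check and the fix from tip 1 can be expressed as follows (sketch; the table and column are hypothetical, and replacing VARCHAR with CHAR trades disk space for a fixed row length):

```sql
-- Check the current row format (look at the Row_format column).
SHOW TABLE STATUS LIKE 'mytable';

-- CHAR instead of VARCHAR makes MyISAM rows fixed-length.
ALTER TABLE mytable MODIFY heading CHAR(150);
```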
On 30 Mar 2004 at 10:30, Dathan Vance Pattishall wrote:
1) Ensure that the table type is not DYNAMIC but Fixed.
=> Issue the SHOW TABLE STATUS command.
=> Look at Row Format.
=> If Row Format != Dynamic then you're OK; else get rid of varchar-type
columns.
=> Reason:
Your MyISAM table
On Tue, Mar 30, 2004 at 10:30:03AM -0800, Dathan Vance Pattishall wrote:
Tips on managing very large tables for MyISAM:
1) Ensure that the table type is not DYNAMIC but Fixed.
=> Issue the SHOW TABLE STATUS command.
=> Look at Row Format.
=> If Row Format != Dynamic then you're OK; else get
Hi,
We copy data from one table to another using:
insert into TBL1 select * from TBL2;
The current database hangs and the process never finishes when copying huge
tables (around 25 million rows). Looking at the processlist, it states that
the process stays in 'closing table' or 'wait on cond'
> We copy data from one table to another using:
> insert into TBL1 select * from TBL2;
> The current database hangs and the process never finishes when copying huge
> tables (around 25 million rows). Looking at the processlist, it states that
> the process stays in 'closing table' or
Greetings to all, and thanks for the excellent resource!
I have a question regarding indexing large tables (150M+ rows, 2.6G).
The tables in question have a format like this:
word_id mediumint unsigned
doc_id mediumint unsigned
Our indexes are as follows:
PRIMARY KEY (word_id, doc_id)
INDEX
to another drive like Dan said.
Brad
-Original Message-
From: Brendan J Sherar [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 07, 2003 6:27 AM
To: [EMAIL PROTECTED]
Subject: Adding indexes on large tables
Greetings to all, and thanks for the excellent resource!
I have a question regarding
Hi: Issuing a simple group by like this:
select C_SF, count(*), sum(je) as sum_je
from kp_data
group by C_SF;
against a large (1.4G) table holding 5 million records with 60 columns
takes about 330 secs on my Win2000 development box,
a 2.0GHz P4 w/ 1G RAM and an IDE
total_count = total_count - 1, je_total = je_total -
:old.je_total;
hope this helps,
Dan Greene
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, September 26, 2003 7:34 AM
To: [EMAIL PROTECTED]
Subject: GROUP BY performance on large tables
Hi: Issuing
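The trigger fragment quoted above appears to maintain a running summary; without triggers, the same effect can be had by periodically rebuilding a small pre-aggregated table so the report never groups the 5 million rows. A sketch, with names taken from the query where possible (the column types are guesses):

```sql
-- Pre-aggregated summary, one row per C_SF value.
CREATE TABLE kp_summary (
  C_SF        VARCHAR(20)   NOT NULL PRIMARY KEY,
  total_count INT           NOT NULL,
  sum_je      DECIMAL(14,2) NOT NULL
);

-- Periodic refresh; REPLACE overwrites rows by primary key.
REPLACE INTO kp_summary
SELECT C_SF, COUNT(*), SUM(je) FROM kp_data GROUP BY C_SF;

-- The report then reads the small table instead of grouping kp_data:
SELECT C_SF, total_count, sum_je FROM kp_summary;
```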
would
increase exponentially based on the number of records being indexed, or if it
is a linear relationship.
Also, does anyone know of a method to get the faster indexing method to work
on large tables? I tried bumping up the myisam_max_sort_file_size in my.cnf,
but it tops out at 4G
Joseph,
How big are your table files? Are they MyISAM or InnoDB? Do you have
indexes? How much memory do you have? Is your MySQL running on a
dedicated server or do you run anything else on your db server?
These questions need to be answered before suggesting anything logical.
But general
Group,
I have been working with Mysql for about 5 years - mainly in LAMP shops. The
tables have been between 20-100 thousand records size. Now I have a project
where the tables are in the millions of records.
This is very new to me and I am noticing that my queries are really
slow!
What
I just had to alter a large (25Gig, 100Million rows) table to
increase the max_rows parameter. The 'alter table' query is now
running 60+ hours, the last 30+hours it spend in 'repair with
keycache' mode. Is there any way to speed up this operation?
I realize, it is probably too late now. But next
First of all, I'd try optimizing your app before writing a whole new
back-end. As such, I'd keep to the normal mysql list.
For example, even if the indexes are big, try to index all the columns that
might be searched. Heck, start by indexing all of them. If the data is
read-only, try myisampack.
We (Centers for Disease Control and Prevention) want to mount
relatively large read only tables that rarely change on
modest ($10K ??) hardware and get 10 second response time.
For instance, 10 years of detailed natality records for the United
States in which each record has some 200 fields and,
Hello, everyone
I have the following situation:
After enjoying problems related to deleting a large portion of a table,
subsequent slow selects and such, I decided to do an alternate route when
removing data from a table:
The table had transactions for one year, and the table really only needs
Hi,
Instead of using separate CREATE INDEX statements, you can build all
your index at once with ALTER TABLE:
ALTER TABLE my_table
ADD INDEX ...,
ADD INDEX ... ,
ADD INDEX ... ;
Hope this helps,
--
Joseph Bueno
Salvesen, Jens-Petter wrote:
Hello, everyone
I have the following
Is there a way to change the directory used when mySQL
copies the table for creating indexes on large tables?
My tmp directory is partitioned for 509 megs and
adding an index via ALTER TABLE or CREATE TABLE yields
this:
ERROR 3: Error writing file '/tmp/STFgNG04' (Errcode:
28)
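Errcode 28 is "no space left on device". The scratch directory MySQL uses for index builds and table copies is controlled by the tmpdir variable, which can be pointed at a larger partition (the path below is hypothetical):

```ini
[mysqld]
tmpdir = /home/mysql-tmp
```

The same setting can be given on the command line as --tmpdir.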
the .MYI file
- Original Message -
From: Keith C. Ivey [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Jared Richardson [EMAIL PROTECTED]
Sent: Tuesday, June 04, 2002 5:24 PM
Subject: Re: Bug related to large tables and it's indexes on Win2k
| On 4 Jun 2002, at 15:43, Jared Richardson wrote
On 5 Jun 2002, at 7:50, Jared Richardson wrote:
This table is part of a product that contains publicly available (and
always expanding) biological data in addition to
large companies' internal data. A one-terabyte cap very well could
come back to haunt us one day! (sadly
Hi all,
When large tables are being addressed, we seem to have encountered a bug
related to having large indexes on the table.
We have several tables in our system that have reached 4 gigs in size. We
altered the table definition to allow it to get larger... this is our
current table creation
[EMAIL PROTECTED]
Cc: Jared Richardson [EMAIL PROTECTED]
Sent: Tuesday, June 04, 2002 9:40 AM
Subject: Re: Bug related to large tables and it's indexes on Win2k
At 08:17 4/6/2002 -0400, Jared Richardson wrote:
Hi,
When large tables are being addressed, we seem to have encountered a bug
related
The table type is the default, MYISAM
- Original Message -
From: Schneck Walter [EMAIL PROTECTED]
To: 'Jared Richardson ' [EMAIL PROTECTED];
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Tuesday, June 04, 2002 10:11 AM
Subject: AW: Bug related to large tables and it's indexes on Win2k
[EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Tuesday, June 04, 2002 10:20 AM
Subject: AW: Bug related to large tables and it's indexes on Win2k
| Well,
|
| I'm not an expert in MySQL table types,
| but from what I've seen, InnoDB is the most
| preferred table type for real applications,
| if possible
indexes are healthy
2) Try using a MERGE table
regards,
ian gilfillan
- Original Message -
From: Jared Richardson [EMAIL PROTECTED]
To: Schneck Walter [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Tuesday, June 04, 2002 4:24 PM
Subject: Re: Bug related to large tables
* Jared Richardson
[...]
CREATE TABLE IcAlias(
IcAliasID BIGINT NOT NULL PRIMARY KEY,
mID VARCHAR(255) NOT NULL,
IcEntityID BIGINT NOT NULL,
IcTypeID SMALLINT NOT NULL,
IcDupSortID VARCHAR(255) NOT NULL,
INDEX mIDIdx (mID),
INDEX IcTypeIDIdx (IcTypeID),
INDEX
I replied below
- Original Message -
From: Roger Baklund [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, June 04, 2002 3:24 PM
Subject: Re: Bug related to large tables and it's indexes on Win2k
| * Jared Richardson
| [...]
| CREATE TABLE IcAlias(
| IcAliasID BIGINT NOT NULL
[snip]
I watch the .MYD file grow to about 4.2 GB and stop with this error from
mysqlimport.
mysqlimport: Error: Can't open file: 'temp.MYD'. (errno: 144)
mysqlimport: Error: The table 'temp' is full, when using table: temp
I've tried starting safe_mysqld with the --big-tables option, and that
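The stop at ~4.2 GB matches MyISAM's default 4 GB data-file limit (4-byte row pointers); --big-tables does not address it. Raising MAX_ROWS makes MySQL use wider pointers (sketch; the numbers are illustrative, not tuned for this table):

```sql
-- Tell MyISAM to allocate wider row pointers for the data file.
ALTER TABLE temp MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 100;

-- Afterwards, SHOW TABLE STATUS should report a much larger Max_data_length.
SHOW TABLE STATUS LIKE 'temp';
```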
To: [EMAIL PROTECTED]
Sent: Tuesday, May 14, 2002 6:44 PM
Subject: Large tables on FreeBSD
Hello,
I need to create a table that is around 15 GB on a FreeBSD
4.5RELEASE system.
I compiled mysql-3.23.49 without any extraneous flags such as
(--disable-largefile)
I use mysqlimport to import
Hello,
I need to create a table that is around 15 GB on a FreeBSD 4.5RELEASE system.
I compiled mysql-3.23.49 without any extraneous flags such as (--disable-largefile)
I use mysqlimport to import the table from a flatfile which is about 9GB.
I watch the .MYD file grow to about 4.2 GB and
I need to use some large tables with MySQL under Linux and would welcome
suggestions from anyone with any experience. Currently one month's data
creates a table of 3M records, about 750 MB in size. I should like to
consolidate 12 months' data (which is expected to grow to say 50M records per
month
, fiberchannel and SAN w 1 TB raid 5.
/Jörgen
Nigel Edwards wrote:
I need to use some large tables with MySQL under Linux and would welcome
suggestions from anyone with any experience. Currently one month's data
creates a table of 3M records, about 750 MB in size. I should like to
consolidate 12 months'
such a limitation I'd be sure that you use a
large-file capable system regardless of the table type you choose.
-JF
-Original Message-
From: Nigel Edwards [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 30, 2002 7:19 AM
To: [EMAIL PROTECTED]
Subject: Large Tables
I need to use some large
All, there were many emails posted about this on the MyODBC list which,
of course, can be viewed via the archive on the mysql.com site. For the
most part I will neither quote nor repeat the information from those
emails here.
The conclusion is that MySQL + Merge Tables is perfectly capable of
Hi,
-Original Message-
From: Bill Adams [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 22, 2002 2:04 PM
To: MyODBC Mailing List; MySQL List
Subject: RE: MySQL + Access + MyODBC + LARGE Tables
All, there were many emails posted about this on the MyODBC list which,
of course
Spoiler: Venu's Suggestion about Dynamic Cursor is the answer
On Thu, 2002-02-14 at 20:34, Venu wrote:
MyODBC, as compiled today, uses mysql_store_result to get records. This
is fine for reasonably sized tables. However, if the table has millions
of records, writing the results to a
-Original Message-
From: Eugenio Ricart [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 15, 2002 7:00
To: MyODBC Mailing List
Subject: RE: MySQL + Access + MyODBC + LARGE Tables
Hello,
I work with VB 6.0, ADO 2.5, and Access; I am trying to work with MySQL and the
latest MyODBC
Monty, Venu, I hope you read this... :)
I really, really want to use MySQL as the database backend for my
datawarehouse. Mind you I have played around with merge tables quite a
bit and know that MySQL is more than up to the task. There are numerous
(not necessarily cost related) reasons as to
of this issue but it
didn't hurt to say it (*_*).
I hope this helped at least a little.
-Original Message-
From: Bill Adams [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 14, 2002 6:05 PM
To: MySQL List; MyODBC Mailing List
Subject: MySQL + Access + MyODBC + LARGE Tables
Monty, Venu, I
Hi,
Monty, Venu, I hope you read this... :)
I really, really want to use MySQL as the database backend for my
datawarehouse. Mind you I have played around with merge tables quite a
bit and know that MySQL is more than up to the task. There are numerous
(not necessarily cost
Bill,
Some databases can use a live result set when retrieving a lot of
records and I really really wish MySQL could do the same. A live result set
does not create a temporary table or use memory to retrieve all the
records. It will grab 50 or so records at a time, and when scrolling
Hi,
I have two tables I wish to combine into a single table. Both tables use
the same format and include a unique key (ID) auto-increment field. Each
row also contains a date field. Whoever managed this database in the past
at some point set up a new server, and initially had new data sent to
On Wed, 2001-12-05 at 16:33, Arjen G. Lentz wrote:
- Original Message -
From: Florin Andrei [EMAIL PROTECTED]
SELECT event.cid, iphdr.ip_src, iphdr.ip_dst, tcphdr.tcp_dport FROM
event, iphdr, tcphdr WHERE event.cid = iphdr.cid AND event.cid =
tcphdr.cid AND tcphdr.tcp_flags =
tables. Maybe tweak
parameters like join_buffer_size? table_cache? Anyone has some
experience with these?... What's the best way to optimize MySQL
for running SELECTs on multiple large tables?
Yes, server settings are important, and the 'right' settings depend
on your system (amount of RAM
that i will be able
to actually do a SELECT on those tables.
Maybe tweak parameters like join_buffer_size? table_cache? Anyone has
some experience with these?... What's the best way to optimize MySQL for
running SELECTs on multiple large tables?
This is the only thing that keeps me from deploying
At 14:45 -0800 2001/12/05, Florin Andrei wrote:
The problem is, MySQL-3.23.46 takes forever to return from SELECT (I let
it run overnight; in the morning I still didn't have any results, so I
killed the query).
Hi Florin,
It would help if you could also provide:
the hardware and OS
the
On Wed, 2001-12-05 at 15:01, Robert Alexander wrote:
At 14:45 -0800 2001/12/05, Florin Andrei wrote:
The problem is, MySQL-3.23.46 takes forever to return from SELECT (I let
it run overnight; in the morning I still didn't have any results, so I
killed the query).
the hardware and OS
SGI
Florin Andrei writes:
SELECT tbl1.col1, tbl2.col2 FROM tbl1, tbl2 WHERE \
tbl1.col3 = tbl2.col4 AND tbl1.col5 = '123';
The problem is, MySQL-3.23.46 takes forever to return from SELECT (I let
it run overnight; in the morning I still didn't have any results, so I
killed the query).
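Given that query shape, the usual first step is to index both sides of the join condition and the constant filter, then re-check the plan (sketch; whether these indexes already exist wasn't stated in the post):

```sql
-- Index the join column on tbl2 and the filter column on tbl1.
CREATE INDEX idx_tbl2_col4 ON tbl2 (col4);
CREATE INDEX idx_tbl1_col5 ON tbl1 (col5);

-- Verify that the optimizer now picks ref lookups instead of full scans.
EXPLAIN SELECT tbl1.col1, tbl2.col2
FROM tbl1, tbl2
WHERE tbl1.col3 = tbl2.col4
  AND tbl1.col5 = '123';
```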
join_buffer_size? table_cache? Anyone has
some experience with these?... What's the best way to optimize MySQL for
running SELECTs on multiple large tables?
Yes, server settings are important, and the 'right' settings depend on your
system (amount of RAM, etc) as well as on your database