Hello.
See also these links:
http://dev.mysql.com/doc/mysql/en/table-size.html
http://dev.mysql.com/tech-resources/crash-me.php
and maybe this one :)
http://www.mysql.com/news-and-events/success-stories/
Daniel Kiss <[EMAIL PROTECTED]> wrote:
> Hi All,
>
> I would li
Daniel Kiss wrote:
However, I'm more interested in practical experience with huge databases.
How effectively does MySQL (with InnoDB) handle tables containing millions, or
even billions, of rows? And what are the response times of queries that return
a few dozen rows from these big tables?
Hi,
Thanks, but I have already checked the manual on these points, and I have been
running heavy performance tests for a while on MySQL (with InnoDB tables) and
big databases. By the way, indexing does seem to work on BIGINT fields, so the
size of the INT type is probably not a limit.
On Apr 9, 2005, at 8:05 AM, olli wrote:
hi,
if your table is indexed, i think it can theoretically hold
4,294,967,295 rows, because that's the maximum for an unsigned integer
value in mysql, and indexing doesn't work with bigint types as far as i
know. but i'm not really sure about that.
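For reference, the figure quoted above is the UNSIGNED range of MySQL's 32-bit INT; BIGINT UNSIGNED goes up to 2^64 - 1. A quick sanity check of both numbers, assuming a MySQL server (in MySQL, ~0 evaluates as the all-ones BIGINT UNSIGNED value):

```sql
-- 2^32 - 1 and 2^64 - 1, the UNSIGNED ranges of INT and BIGINT:
SELECT ~0 >> 32 AS int_unsigned_max,   -- 4294967295
       ~0       AS bigint_unsigned_max; -- 18446744073709551615
```

Whether the row count of an indexed table is actually capped by the INT range is a separate question; the table-size manual page linked earlier covers the real limits.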
I woul
On 09.04.2005 at 11:42, Daniel Kiss wrote:
H
Hi Ovanes,
Even if you have large file support, you must tell MySQL to use long
pointers when creating the table. The way to accomplish this is to add
the MAX_ROWS table option when creating the table; just make it a
really large number like 50. You can verify this effect by
BEFORE mak
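A minimal sketch of the MAX_ROWS hint described above (table and column names invented for illustration): declaring a large expected row count at CREATE time lets MyISAM allocate wider row pointers, lifting the default ~4 GB data-file limit.

```sql
-- Hypothetical example: hint a large row count so MyISAM uses
-- longer row pointers instead of the default 4-byte ones.
CREATE TABLE temp (
  id   INT UNSIGNED NOT NULL,
  data VARCHAR(255)
) ENGINE=MyISAM
  MAX_ROWS=1000000000;

-- An existing table can be converted the same way:
ALTER TABLE temp MAX_ROWS=1000000000;

-- SHOW TABLE STATUS reports the resulting Max_data_length.
SHOW TABLE STATUS LIKE 'temp';
```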
[snip]
I watched the .MYD file grow to about 4.2 GB and then stop, with this error
from mysqlimport:
mysqlimport: Error: Can't open file: 'temp.MYD'. (errno: 144)
mysqlimport: Error: The table 'temp' is full, when using table: temp
I've tried starting safe_mysqld with the --big-tables option, and that
d
It has been my (unfortunate) experience that OLAP-type applications are not
MySQL's strong point. Large-dataset applications involving queries that
perform aggregations and scan most or all of the dataset tend to take a very,
very long time to execute on MySQL, even when using a star schema (although
Hi,
I run something similar, approx. 11 GB of MyISAM data + 9 GB of index per
month, and I use a MERGE table on top of several 9-11 GB tables. I don't think
you need a merge table if it doesn't bring you any other advantages, e.g.
cycling of tables.
RH 7.2 + 2.4.18 kernel, ReiserFS
Compaq 4-way Xeon, fiber
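A minimal sketch of the MERGE-table setup described above, with made-up monthly table names; every underlying table must be MyISAM with an identical structure:

```sql
-- Hypothetical monthly tables (identical MyISAM structure):
CREATE TABLE log_2005_03 (dDate DATE, zPrice DECIMAL(10,2)) ENGINE=MyISAM;
CREATE TABLE log_2005_04 LIKE log_2005_03;

-- The MERGE table presents them as one logical table; new rows go
-- to the last table in the UNION list.
CREATE TABLE log_all (dDate DATE, zPrice DECIMAL(10,2))
  ENGINE=MERGE UNION=(log_2005_03, log_2005_04) INSERT_METHOD=LAST;

-- "Cycling" a month out is just an ALTER of the UNION list:
ALTER TABLE log_all UNION=(log_2005_04);
```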
"Tadej Guzej" <[EMAIL PROTECTED]> wrote:
> select year(dDate), month(dDate), avg(zPrice), sum(zPrice)
> ... from tmp group by year(dDate), month(dDate);
>
> Is there a way to do this without using a temporary table?
You can use a join.
SELECT *
FROM table1, table2, table3
WHERE month(dDate)=8
AN
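For what it's worth, the quoted aggregation needs no user-created temporary table at all; GROUP BY buckets the rows in a single statement (MySQL may still build an internal temporary table behind the scenes). A sketch, assuming tmp has the dDate and zPrice columns from the quoted query, with a hypothetical date filter added:

```sql
-- One statement, no user-created temp table:
SELECT year(dDate), month(dDate), avg(zPrice), sum(zPrice)
FROM tmp
WHERE dDate >= '2005-01-01'   -- hypothetical filter
GROUP BY year(dDate), month(dDate);
```

Note that a range predicate on dDate can use an index on that column, whereas a condition like month(dDate)=8 in the WHERE clause wraps the column in a function and cannot.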
Hi.
I find that 1-to-1 relationships are often useful and appropriate, and they
would help you reduce the number of columns per row.
For instance, in some db of people, addresses, salary info, medical info,
etc. -- although they could all be jammed into one giant row per person,
separate tables make perfect sense
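A minimal sketch of the 1-to-1 split being suggested (all names invented for illustration): each side table reuses the person's primary key, so at most one row exists per person and the wide row is split into narrow ones.

```sql
-- Hypothetical vertical split: narrow core table plus optional
-- 1-to-1 side tables keyed by the same id.
CREATE TABLE person (
  person_id INT UNSIGNED NOT NULL PRIMARY KEY,
  name      VARCHAR(100)
);

CREATE TABLE person_salary (
  person_id INT UNSIGNED NOT NULL PRIMARY KEY,  -- same key: 1-to-1
  salary    DECIMAL(12,2)
);

CREATE TABLE person_medical (
  person_id INT UNSIGNED NOT NULL PRIMARY KEY,
  notes     TEXT
);

-- Pull a full record back together only when needed:
SELECT p.name, s.salary
FROM person p
LEFT JOIN person_salary s ON s.person_id = p.person_id;
```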