Smart software is the key to many problems that hardware running
'standard' software cannot solve.

I believe that MySQL will, by 2012, be able to handle it
gracefully. It will be able to do so much more by then (easy and
robust clustering / HA, for example) and even incorporate technologies
and ideas that we (as in, people of our time) have not yet thought of.
It might even become the dominant database on the market. Seven
years is a century's worth of time in our world.

It is the evolution of software. Regarding hardware, you can be
certain technological advances will make it more than possible. Just
look back at what was thought possible 7 years ago, and compare it
with today's standards.


On Fri, 12 Nov 2004 10:42:32 -0800, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> Adequate data warehouse performance requires more than just hardware. Two
> crucial make-or-break software features are partitioning and parallel
> query.
> 
> On very large tables, accessing a large slice of the data via an index is
> completely infeasible. A table scan is the only option. Partitioning allows
> you to scan only the necessary segments instead of reading the whole table
> and rejecting massive numbers of rows. Parallel query breaks the job up so
> that multiple OS processes can participate and speed up the
> process.
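The benefit of partitioning described above can be sketched with a toy example (plain Python, not MySQL syntax; the month-based partition key is a hypothetical choice for illustration). A query for one month reads only its partition, while the unpartitioned equivalent scans every row and rejects the non-matching ones.

```python
# Toy illustration of partition pruning: rows are routed into per-month
# "partitions", and a query for one month touches only that partition
# instead of scanning (and rejecting) every row in the table.
from collections import defaultdict

partitions = defaultdict(list)            # month -> list of rows

def insert(row):
    partitions[row["month"]].append(row)  # route row to its partition

def query_month(month):
    # Pruned scan: read only the matching partition.
    return partitions.get(month, [])

def full_scan(month):
    # Unpartitioned equivalent: read every row, reject non-matching ones.
    return [r for p in partitions.values() for r in p if r["month"] == month]

for m in range(1, 13):
    for i in range(1000):
        insert({"month": m, "value": i})

# Both return the same rows, but the pruned scan reads 1/12 of the data.
assert query_month(3) == full_scan(3)
```

Parallel query is the complementary trick: each partition (or table segment) can be scanned by a separate process, so the scans above could also run concurrently.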
> 
> These features are an absolute necessity if we are to migrate our large
> databases from Oracle to MySQL. We are eager for MySQL to make them
> priority features. MySQL's market appeal would simply explode. We will do
> our best to contribute to the effort if we can. I'd like to urge others
> who plan to use MySQL with large databases to consider doing the same.
> 
> Thanks,
> 
> Udi
> 
> "Heikki Tuuri" <[EMAIL PROTECTED]>
> 11/12/2004 06:57 AM
> 
>         To:     <[EMAIL PROTECTED]>
>         cc:
>         Subject:        Re: scalability of MySQL - future plans?
> 
> 
> 
> 
> Jacek,
> 
> ----- Original Message -----
> From: "Jacek Becla" <[EMAIL PROTECTED]>
> Newsgroups: mailing.database.myodbc
> Sent: Friday, November 12, 2004 2:30 AM
> Subject: scalability of MySQL - future plans?
> 
> > Hello,
> >
> > What are the plans regarding improving scalability of MySQL? We are
> > currently trying to decide what technology/product to use for a large
> > project that will generate ~600 TB/year starting in 2012. Any pointers to
> > related articles, or hints on how safe it is to assume that MySQL will
> > be able to handle petabyte-scale datasets in 8-10 years, would be
> > greatly appreciated.
> 
> hmm... this mostly depends on hardware. With the innodb_file_per_table
> option, a single InnoDB table can be 64 TB in size, and you can have 4
> billion such tables.
> 
> With current PC hardware, the speed of a single CPU allows you to insert
> 10 000 rows per second, if the load is not disk-bound. Let us assume that a
> single row is 100 bytes. That makes 1 MB/s, which is 30 TB/year. CPU speed
> will probably double every 4 years or so. Thus, CPU speed will suffice if
> you use a multiprocessor.
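Heikki's ingest arithmetic can be checked directly. A back-of-the-envelope sketch (the 10 000 rows/s and 100-byte row figures are his stated assumptions):

```python
# CPU-side ingest estimate: 10 000 rows/s at 100 bytes/row,
# sustained for a full year.
rows_per_sec = 10_000
row_bytes = 100
seconds_per_year = 365 * 24 * 3600

bytes_per_sec = rows_per_sec * row_bytes              # 1 MB/s
tb_per_year = bytes_per_sec * seconds_per_year / 1e12

print(f"{bytes_per_sec / 1e6:.0f} MB/s -> {tb_per_year:.1f} TB/year")
# ~31.5 TB/year per CPU, so 600 TB/year needs roughly 20 CPUs' worth
cpus_needed = 600 / tb_per_year
print(f"CPU cores needed for 600 TB/year: ~{cpus_needed:.0f}")
```

This is why the reply concludes a multiprocessor suffices on the CPU side: ~20 of the 2004-era CPUs cover the load even before any speed doubling.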
> 
> Normally, a database server has main memory of at least 1 % of the data
> size. Is 6000 GB of RAM realistic in 2012? Memory sizes will probably
> double every 2 to 3 years. If a high-end server today has 32 GB of RAM, in
> year 2012 it might have 512 GB of RAM. You will need a huge server.
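The memory-sizing estimate follows the same pattern (a sketch; the 1 % rule of thumb and the 2-year doubling period are assumptions taken from the message above):

```python
# RAM rule of thumb: ~1% of data size, plus a doubling projection
# from a 32 GB high-end server in 2004 out to 2012.
data_tb = 600                                  # one year of data
ram_needed_gb = data_tb * 1e12 * 0.01 / 1e9    # 1% of data size
print(f"RAM needed: {ram_needed_gb:.0f} GB")   # 6000 GB for 600 TB

ram_2004_gb = 32
doubling_years = 2
years = 2012 - 2004
ram_2012_gb = ram_2004_gb * 2 ** (years // doubling_years)
print(f"Projected high-end RAM in 2012: {ram_2012_gb} GB")  # 512 GB
```

The gap between the 6000 GB the rule of thumb asks for and the 512 GB a single projected server offers is exactly why the reply says "you will need a huge server".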
> 
> The worst problem is the disk seek time. If your tables have secondary
> indexes where the insertion order is random, a modern disk, in combination
> with the InnoDB insert buffer, can insert maybe 200 random records per
> second. That is 100 rows/s for a typical table. You are going to insert
> 200 000 rows/s. You may need a disk farm of 4000 physical disks. Such disk
> farms exist today, but they are expensive, and we have no experience of
> how Linux performs on them. Probably by 2012, Linux will be good enough,
> if not yet today.
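The random-I/O arithmetic behind the disk-farm figure can be sketched like this (the 100 rows/s per disk is the per-table rate quoted above; the naive result lands near the lower end of the 600-4000 range Heikki gives, before any headroom or extra secondary indexes):

```python
# Required ingest rate for 600 TB/year at 100 bytes/row, and the number
# of disks needed if each disk sustains ~100 random row insertions per
# second (figures from the message above).
import math

data_bytes_per_year = 600e12
row_bytes = 100
seconds_per_year = 365 * 24 * 3600

rows_per_sec = data_bytes_per_year / row_bytes / seconds_per_year
print(f"required rate: ~{rows_per_sec:,.0f} rows/s")   # ~190 000 rows/s

rows_per_disk_per_sec = 100
disks = math.ceil(rows_per_sec / rows_per_disk_per_sec)
print(f"disks for random inserts: ~{disks}")
# ~1900 disks at the quoted per-disk rate; more secondary indexes or
# safety headroom pushes this toward the 4000-disk figure in the message.
```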
> 
> If you insert rows in large batches to tables smaller than your main
> memory, or if you insert in the order of the primary key and you do not
> have secondary keys, then there are no random accesses to disks, and you
> do not need a disk farm.
> 
> A typical disk in 2012 may store 1 TB. Thus, you will need at least 600
> disks anyway.
> 
> How long does it take to build an index on a 64 TB table if you have 6 TB
> of memory? If the index completely fits in memory, then this is sequential
> disk I/O. With today's high-end disks, you can read 60 MB/s. Building an
> index with a single disk would take 2 weeks. In 2012, it might take only 3
> days.
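The index-build estimate is plain sequential-I/O arithmetic (a sketch; the 64 TB table size and 60 MB/s read rate are from the message, and the 4x disk-speedup factor for 2012 is an assumption chosen to match its 3-day figure):

```python
# Time to stream a 64 TB table off a single disk at 60 MB/s, assuming
# the index build is limited by sequential reads.
table_bytes = 64e12
disk_mb_per_sec = 60

seconds = table_bytes / (disk_mb_per_sec * 1e6)
days = seconds / 86_400
print(f"single-disk index build: ~{days:.1f} days")   # ~12 days, i.e. 2 weeks

# If disks are ~4x faster by 2012, the same build takes ~3 days.
print(f"at 4x faster disks: ~{days / 4:.1f} days")
```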
> 
> Conclusion: MySQL/InnoDB is able to handle that workload of 600 TB/year in
> year 2012. But you will need a huge server which has 10 x the memory of a
> high-end server, and 600 - 4000 physical disk drives.
> 
> The following link describes a system with 512 GB of memory and 2000 disk
> drives:
> http://www.tpc.org/results/individual_results/IBM/IBM_690_040217_es.pdf
> The system costs 5.6 million US dollars.
> 
> > Best regards,
> > Jacek Becla
> > Stanford University
> 
> Best regards,
> 
> Heikki Tuuri
> Innobase Oy
> Foreign keys, transactions, and row level locking for MySQL
> InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up
> MyISAM
> tables
> http://www.innodb.com/order.php
> 
> Order MySQL technical support from https://order.mysql.com/
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 


-- 
Mark Papadakis
Head of R&D
Phaistos Networks, S.A
