> I doubt that you understand the Toolserver. The Toolserver cluster is made
> up of 18 different servers. Some run Solaris, some run Debian, some are
> (big) database servers, some are userland servers, some are web servers,
> some are HA nodes and some are aux servers (not to mention details like
> virtual machines, network devices and external storage). It's a grown
> infrastructure and you cannot just move it to Labs or rebuild it there as a
> simple virtual machine or cloud instance. Even if I started this very
> instant, I doubt I would be finished by December 2013; and it would need the
> same 18 servers (or better: more).
>

The database servers (replicas and user databases) are going to be
provided by the Labs infrastructure, and those make up the majority of
Toolserver's servers. We already have shared storage in place.

The capacity of Labs is far greater than Toolserver's. We have a
cluster in each of the pmtpa and eqiad datacenters. Each cluster has 7
compute nodes for virtual machines, 2 database servers for replicas, 1
database server for user databases and 4 storage nodes for shared
storage access.

In total that's:

* 14 compute nodes with a total of 2.5TB of RAM, 224 CPU cores, and
14TB of storage for virtual machine images
* 4 database nodes for replicas with a total of 740GB of RAM and 64 CPU cores
* 2 database nodes for user databases with a total of 370GB of RAM
and 32 CPU cores
* 8 storage nodes with a total of 128TB of storage
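
If you want to sanity-check those totals, here's a rough
back-of-the-envelope sketch in Python (the per-node figures at the end
are derived from the totals above, not taken from any hardware spec):

    # Two identical clusters (pmtpa and eqiad); node counts per cluster
    # are taken straight from the description above.
    clusters = 2
    per_cluster = {"compute": 7, "replica_db": 2, "user_db": 1, "storage": 4}
    totals = {role: count * clusters for role, count in per_cluster.items()}
    print(totals)  # {'compute': 14, 'replica_db': 4, 'user_db': 2, 'storage': 8}

    # Implied per-node figures (rounded), back-derived from the totals:
    print(2.5 * 1024 / totals["compute"])  # ~183GB of RAM per compute node
    print(224 / totals["compute"])         # 16 CPU cores per compute node
    print(128 / totals["storage"])         # 16TB of storage per storage node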

Within a project you can create as many instances as you need. The
default quota is 10 instances per project, but we can raise that limit
as high as necessary.

It's definitely possible to re-create Toolserver inside of Labs. I'm
not suggesting that's what should be done for a migration, but it's an
option.

- Ryan
