Hi,
The attached project has been proposed to the OpenSolaris PM community.
thanks,
-jane
--
This message posted from opensolaris.org

Storage PM Project
==
Currently, the main challenges in power managing disks on server
platforms are the issues of latency and reliability:
* Latency, as defined by time to first data access, is incurred when
powering up a disk to put it in service. While latency for a disk
in operation is measured in milliseconds, a disk may take seconds or
even tens of seconds to come on line if it needs to be powered up
and spin up.
* Reliability is an issue in multiple contexts. With RAID
configurations, which guard against random media errors, all disks
in the RAID group must be online or offline -- it is not possible to
achieve power savings by powering down some of the drives.
In addition, excessive head load/unload operations (to save a
limited amount of power) can cause disk failure over time. Any disk
power management must take this into account.
With the advent of ZFS, the latency issue and part of the
reliability issue can now be addressed. Specifically, ZFS's built-in
Volume Manager and its software RAID-Z feature are among the key
enablers of a possible breakthrough in the area of power managing
server disks.
This project is the first step toward enabling power savings through
more intelligent management of storage. It offers potentially
substantial power savings with minimal impact on storage I/O
performance for server or storage platforms utilizing ZFS. The project
positions ZFS as a Resource Manager that interacts with Solaris's
Common Power Management software to provide a shrink-to-fit Elastic
policy on non-virtualized platforms.
Future projects will provide the above functionality in virtualized
environments and, further on, will explore opportunities to provide a
similar feature set on non-ZFS filesystems.
Modern SAS and SATA disks provide a variety of power states enabling a
reduction in power consumption. In some cases, these reduced power
consumption states allow data to remain on line for access at lower
throughput and/or higher latency (by slowing head seeks, for
example). In most cases, however, these states leave the data
example). In most cases, however these states result in the data
stored on the disk being rendered inaccessible until the host takes
action to return the disk to normal operation.
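To make the tradeoff concrete, the classic ATA-style power states can be
modeled as follows. This is an illustrative Python sketch only: the
state names follow the ATA convention, but the wattage and
resume-latency figures are invented placeholders, not measurements of
any real drive.

```python
from dataclasses import dataclass

# Illustrative model of ATA-style disk power states. The wattage and
# resume-latency numbers below are invented placeholders, not
# measurements from any particular SAS or SATA drive.
@dataclass(frozen=True)
class PowerState:
    name: str
    watts: float           # assumed draw in this state
    resume_ms: int         # assumed time to return to full operation
    data_accessible: bool  # can I/O be served without a state change?

STATES = [
    PowerState("active",  8.0,     0, True),
    PowerState("idle",    5.0,    50, True),   # reduced power, still spinning
    PowerState("standby", 1.0,  8000, False),  # spun down; seconds to come on line
    PowerState("sleep",   0.5, 15000, False),  # deepest state; host must wake it
]

def deepest_within(budget_ms):
    """Pick the lowest-power state whose recovery fits a latency budget."""
    candidates = [s for s in STATES if s.resume_ms <= budget_ms]
    return min(candidates, key=lambda s: s.watts)
```

The policy question the host faces is exactly this selection: how much
resume latency can it tolerate in exchange for a lower-power state.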
In order to allow the disk Resource Manager (i.e., ZFS in the current
project phase) to regulate the disk power consumption, it must be
possible for the software to:
* Identify the storage devices it is using (this information is
already available by other means).
* Identify the set of power saving states and their characteristics
(e.g., power requirement in each state, time to bring the device
online). Note that different storage devices in the same system may
offer different power states, so it must be possible to discover the
power states available for each storage device in use.
* Identify the state that a storage device is currently in.
* Request that the device place itself in a specified low power state.
* Request that the device recover from the reduced power state into
the normal functional state.
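The five capabilities above can be summarized as a small interface.
This is a hypothetical sketch, not an actual Solaris API: the class and
method names are illustrative, and FakeDiskControl is an in-memory
stand-in included only to show intended usage.

```python
from abc import ABC, abstractmethod

# Hypothetical interface the disk Resource Manager (ZFS in this phase)
# would need from the PM infrastructure. Names are illustrative only.
class DiskPowerControl(ABC):
    @abstractmethod
    def list_devices(self): ...           # storage devices in use
    @abstractmethod
    def available_states(self, dev): ...  # per-device states + characteristics
    @abstractmethod
    def current_state(self, dev): ...     # state the device is in now
    @abstractmethod
    def set_state(self, dev, state): ...  # request a low-power state
    @abstractmethod
    def wake(self, dev): ...              # return to normal operation

# Trivial in-memory stand-in, for illustration only.
class FakeDiskControl(DiskPowerControl):
    def __init__(self, devs):
        self._state = {d: "active" for d in devs}

    def list_devices(self):
        return sorted(self._state)

    def available_states(self, dev):
        # Real devices may each offer a different set of states.
        return ["active", "idle", "standby"]

    def current_state(self, dev):
        return self._state[dev]

    def set_state(self, dev, state):
        self._state[dev] = state

    def wake(self, dev):
        self._state[dev] = "active"
```

Note that available_states() is queried per device: as the proposal
says, different devices in one system may offer different state sets.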
The specific components that this project will deliver are as follows:
* Provide infrastructure that allows the disk driver and SATA
framework to retrieve industry-standard SAS or SATA power state
information from disks, set and change disk power states, and report
information on available power states to higher levels of software.
In this phase of the project, this interface will only function in a
non-virtualized environment; operation in virtualized environments
such as xVM or LDoms will be deferred to a future project.
* Provide a Resource Power Manager (RPM) software layer that interacts
with the Resource Manager and the existing PM framework to give the
Resource Manager the ability to adjust the power states of disks to
achieve power savings.
* Enhance ZFS to provide Elastic mode power savings by setting disks
not currently in use to a lower power state and by optionally
configuring its available storage to minimize the number of drives
in use.
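The Elastic mode described in the last bullet can be sketched as a
simple per-disk policy decision. This is an assumed illustration, not
the ZFS implementation: the threshold value and state names are
invented, and a real policy would also bound head load/unload cycles
to protect reliability, as noted earlier.

```python
# Hypothetical sketch of an Elastic policy decision for one disk: the
# Resource Manager demotes a disk that has seen no I/O for longer than
# a threshold, and restores it when I/O resumes. The 300-second
# threshold is an assumption for illustration only.
IDLE_THRESHOLD_S = 300

def elastic_policy(now, last_io, current_state):
    """Return the target power state for a disk given its last I/O time."""
    idle_for = now - last_io
    if idle_for >= IDLE_THRESHOLD_S and current_state == "active":
        return "standby"   # long idle: demote to save power
    if idle_for < IDLE_THRESHOLD_S and current_state != "active":
        return "active"    # recent I/O: bring the disk back on line
    return current_state   # no change needed
```

The second lever mentioned in the bullet, steering allocations so that
fewer drives are in use at all, is what makes long idle periods (and
hence this demotion) likely in the first place.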
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss