Enterprise Volume Management System
EVMS Terminology
Because different operating systems use different terms to describe volume management, we developed a set of terms specific to EVMS. This section defines some general terms used in EVMS and describes the different layers used within EVMS.
- General Terms
Sector
The lowest level of addressability on a block device. This definition is in keeping with the standard meaning found in other management systems. In most situations, a sector is 512 bytes.
Storage Object
Any memory structure in EVMS that is capable of being a block device. An ordered set of sectors.
Logical Disk
The ordered set of sectors that represents a physical device. IDE and SCSI disks appear as Logical Disks in EVMS.
Disk Segment
An ordered set of physically contiguous sectors residing on a logical disk or on another disk segment. The general analogy for a segment is to a traditional disk partition, such as in DOS or OS/2®.
Storage Region
An ordered set of logically contiguous sectors (that are not necessarily physically contiguous). The underlying mapping can be to logical disks, segments, or other regions. Linux LVM and AIX LVM LVs, as well as MD devices, are represented as regions in EVMS.
Storage Container
A collection of storage objects. Storage containers provide a re-mapping from this collection to a new set of storage objects that the container exports. The appropriate analogy for a storage container is to volume groups, such as in the AIX® LVM and the Linux LVM. However, EVMS containers are not restricted to any one remapping scheme, as is the case with volume groups in LVM or AIX. The remapping could be completely arbitrary.
EVMS Feature Object
A logically contiguous address space created from one or more disks, segments, regions, or other feature objects through the use of an EVMS native feature. Feature objects are essentially the same as regions, except that feature objects contain EVMS-specific metadata.
EVMS Logical Volume
A mountable storage object. EVMS volumes contain metadata at the end of the underlying object, and at a minimum will have a static name and static minor number. Any object in EVMS can be made into an EVMS volume.
Compatibility Logical Volume
A mountable storage object that does not contain any EVMS native metadata. Many plug-ins in EVMS provide support for the capabilities of other volume management schemes. Volumes that are designated as "compatibility" are ensured to be backward compatible with that particular scheme because they do not contain any EVMS native metadata. Any disk, segment, or region can be a compatibility volume. However, feature objects cannot become compatibility volumes.
- Layer Definitions
Logical Device Managers
The first layer is the logical device managers. These plug-ins communicate with the hardware device drivers to create the first EVMS objects. Currently, all local devices (most IDE and SCSI disks) are handled by a single plug-in. Future releases of EVMS might have additional device managers to do network device management, such as for disks on a storage area network (SAN).
Segment Managers
The second layer is the segment managers. In general, these plug-ins handle the segmenting, or partitioning, of disk drives. The engine components can replace partitioning programs, such as fdisk and disk druid, and the kernel components can replace the in-kernel disk partitioning code. Segment managers can also be "stacked," meaning that one segment manager can take input from another segment manager.
Currently, there are three plug-ins in this layer. The most commonly used is the DOS Segment Manager. This plug-in handles the DOS partitioning scheme, which is the scheme traditionally used by Linux. This plug-in also handles some special cases that arise when using OS/2 partitions. There is also a plug-in to handle the new GPT partitioning scheme on IA-64 machines, and a plug-in to handle S/390 partitions (CDL/LDL/CMS). Both of these plug-ins are still in development, and only support discovery and the I/O path. Other segment manager plug-ins may be added to support other partitioning schemes (e.g. Macintosh, Sun, and SGI).
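The DOS partitioning scheme that this segment manager discovers is compact enough to sketch: the first sector of the disk carries four 16-byte partition entries starting at byte 446, followed by the 0x55AA signature. The following is a minimal, hypothetical Python decoder for illustration only; it is not EVMS code:

```python
import struct

def parse_mbr(sector0: bytes):
    """Decode the four primary partition entries from a 512-byte MBR."""
    assert len(sector0) == 512 and sector0[510:512] == b"\x55\xaa"
    parts = []
    for i in range(4):
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]                                   # partition type byte
        start, size = struct.unpack_from("<II", entry, 8)  # start LBA, sector count
        if ptype != 0:                                     # type 0 marks an unused slot
            parts.append({"type": ptype, "start": start, "sectors": size})
    return parts
```

A real segment manager would then export each discovered entry as a disk segment object; stacked managers (for example, one handling extended partitions) could take those segments as input in turn.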
Region Managers
The third layer in EVMS is the region managers. This layer is intended to provide a place for plug-ins that ensure compatibility with existing volume management schemes in Linux or other operating systems. Region managers are intended to model systems that provide a logical abstraction above disks or partitions.
Like the segment managers, region managers can also be stacked. Therefore, the input object(s) to a region manager can be disks, segments, or other regions.
There are currently four region manager plug-ins in EVMS. The first is the LVM plug-in, which provides compatibility with the Linux LVM and allows the creation of volume groups (containers) and logical volumes (regions).
Two more plug-ins are the AIX and OS/2 region managers. The AIX LVM is very similar in functionality to the Linux LVM, and uses volume groups and logical volumes. The AIX plug-in is still under development. It currently provides most necessary kernel functionality, but is still limited in user-space. The OS/2 plug-in provides compatibility with volumes created under OS/2. Unlike the Linux and AIX LVMs, the OS/2 LVM is based on the linear linking of disk partitions, as well as bad-block-relocation.
The fourth region manager plug-in is the Multi-Disk (MD) plug-in for RAID. This plug-in provides RAID levels linear, 0, 1, 4, and 5 in software. The ability to stack region managers allows combinations of RAID and LVM. For instance, a stripe set (RAID 0) could be used as a PV in LVM, or two LVM LVs could be mirrored using RAID 1.
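At its core, a RAID 0 region is an address remapping: logical sectors are striped chunk-by-chunk across the member disks in round-robin order. The following toy function sketches that mapping under assumed names; the real MD plug-in is, of course, far more involved:

```python
def raid0_map(logical_sector: int, chunk_sectors: int, num_disks: int):
    """Map a logical sector of a RAID-0 region to (disk index, sector on disk).

    chunk_sectors is the stripe-chunk size in sectors. Whole chunks are
    distributed round-robin across the member disks.
    """
    chunk = logical_sector // chunk_sectors    # which chunk, counting across the region
    offset = logical_sector % chunk_sectors    # offset within that chunk
    disk = chunk % num_disks                   # round-robin choice of member disk
    disk_chunk = chunk // num_disks            # chunk index on the chosen disk
    return disk, disk_chunk * chunk_sectors + offset
```

Stacking then follows naturally: the (disk, sector) pairs this mapping produces could themselves address LVM regions rather than raw disks, which is exactly the RAID-over-LVM combination described above.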
EVMS Features
The next layer is EVMS Features. This layer is where new EVMS-native functionality is implemented. EVMS Features can be built on any object in the system, including disks, segments, regions, or other feature objects. EVMS Features all share a common type of metadata, which makes discovery of feature objects much more efficient, and recovery of broken feature objects much more reliable.
There are three Features currently available in EVMS. The first Feature is Drive Linking. This plug-in simply allows any number of objects to be linearly concatenated together into a single object.
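The remapping performed by drive linking can be sketched in a few lines: walk the ordered child objects, subtracting each child's size until the requested sector falls inside one. A toy model (not EVMS code):

```python
def link_map(logical_sector: int, link_sizes: list[int]):
    """Map a sector of a drive-link object to (link index, sector on that link).

    link_sizes lists the size in sectors of each linked child object,
    concatenated in order.
    """
    for i, size in enumerate(link_sizes):
        if logical_sector < size:
            return i, logical_sector
        logical_sector -= size          # skip past this child object
    raise ValueError("sector beyond end of linked object")
```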
The second Feature is Bad-Block-Relocation (BBR). BBR monitors its I/O path and detects write failures (which may be caused by a damaged disk). In the event of such a failure, the data from that request is stored in a new location. BBR keeps track of this remapping, and any additional I/Os to that location are redirected to the new location.
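The bookkeeping BBR performs can be modeled as a table from failed sectors to reserved spare sectors. The class below is a hypothetical illustration of that idea, not the actual plug-in:

```python
class BadBlockRelocation:
    """Toy model of BBR: remap sectors whose writes fail onto spare sectors."""

    def __init__(self, spares):
        self.spares = list(spares)   # sector numbers reserved for relocation
        self.remap = {}              # failed sector -> replacement sector

    def resolve(self, sector):
        """Redirect I/O for a sector that has already been relocated."""
        return self.remap.get(sector, sector)

    def write_failed(self, sector):
        """Record a relocation after a write failure; return the new location."""
        if sector not in self.remap:
            self.remap[sector] = self.spares.pop(0)
        return self.remap[sector]
```

In the real Feature, this remapping table is part of the BBR metadata, so relocations survive across reboots.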
The third Feature is Snapshotting. Snapshotting provides a mechanism for creating a "frozen" copy of a volume at a single instant in time, without having to take that volume off-line. This is very useful for performing backups on a live system. Snapshots work with any volume (EVMS or compatibility), and can use any other available object as a backing store. After a snapshot is created, writes to the "original" volume cause the original contents of that location to be copied to the snapshot's storage object. Then, I/Os to the snapshot volume look like they come from the original at the time the snapshot was created.
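The copy-on-write behavior just described can be captured in a small model: the first write to each original sector preserves its old contents in the snapshot's backing store, and snapshot reads prefer the saved copy. A hypothetical sketch, not EVMS code:

```python
class Snapshot:
    """Toy copy-on-write snapshot of a volume modeled as a list of sectors."""

    def __init__(self, original):
        self.original = original   # the live volume's sector contents
        self.saved = {}            # sector -> contents at snapshot time

    def write_original(self, sector, data):
        # First write to this sector since the snapshot: preserve the old data.
        if sector not in self.saved:
            self.saved[sector] = self.original[sector]
        self.original[sector] = data

    def read_snapshot(self, sector):
        # Use the saved copy if the original has changed, else the live data.
        return self.saved.get(sector, self.original[sector])
```

Note that unchanged sectors are never copied, which is why the snapshot's backing object can be much smaller than the original volume.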
File System Interface Modules
File System Interface Modules, or FSIMs, are the one layer of EVMS that only exists in the user-space engine. These plug-ins are used to provide coordination with the filesystems during certain volume management operations. For instance, when expanding or shrinking a volume, the filesystem must also be expanded or shrunk to the appropriate size. Ordering in this example is also important; a filesystem cannot be expanded before the volume, and a volume cannot be shrunk before the filesystem. The FSIMs allow EVMS to ensure this coordination and ordering.
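The ordering rule can be stated in a few lines of code. This sketch uses hypothetical callback names standing in for the volume manager and the FSIM; it is not the EVMS engine API:

```python
def resize(current_size, new_size, set_volume_size, set_fs_size):
    """Apply the resize-ordering rule the FSIMs enforce.

    set_volume_size and set_fs_size are callbacks standing in for the
    volume manager and the filesystem interface module, respectively.
    """
    if new_size >= current_size:
        set_volume_size(new_size)   # expand: grow the volume first...
        set_fs_size(new_size)       # ...then grow the filesystem into it
    else:
        set_fs_size(new_size)       # shrink: shrink the filesystem first...
        set_volume_size(new_size)   # ...then the volume can safely shrink
```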
FSIMs also provide the ability to perform filesystem operations from one of the EVMS user interfaces. For instance, a user can make new filesystems and check existing filesystems by interacting with the FSIM.