In this post, I offer advice on how to configure your ftScalable™ Storage Array (hereafter abbreviated to ftScalable) to optimize its performance. By understanding your I/O workloads and access patterns and following some simple guidelines, you can configure your ftScalable to meet your availability and performance requirements.

Terminology

The term “logical disk” refers to a VOS logical disk, which is composed of one or more member disks. VOS stripes data across all of the member disks. Prior to the introduction of ftScalable, each member disk was a pair of physical disk drives. With the advent of ftScalable, each member disk is now associated with a single LUN. A LUN, or “logical unit”, is a subdivision of a virtual disk (“VDISK”) on the ftScalable.

A VDISK is a collection of one or more physical disk drives, organized into a virtual disk using a specific RAID type.

“Degraded mode” refers to the operation of the VDISK after one of its physical disk drives has failed but before the recovery operation starts.

“Recovery mode” refers to the operation of the VDISK while it is rebuilding after a drive failure.

RAID Types

While there are many RAID types supported by the ftScalable, I will describe only the commonly used ones here.

RAID-0:  A RAID-0 VDISK stripes data across all the physical disk drives in the set. It provides the highest degree of I/O performance, but offers NO fault tolerance. Loss of any physical disk drive will cause total loss of data. In addition, the ftScalable cannot automatically take marginal physical drives out of service and proactively rebuild the data using a spare drive. This RAID type should never be used in mission-critical environments.
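
To make the striping arithmetic concrete, here is a small Python sketch of how a logical byte offset maps onto one drive of a striped set. The chunk size and drive count are assumptions chosen for illustration; they are not ftScalable defaults.

    # Minimal sketch of RAID-0 striping arithmetic. The chunk size and
    # drive count are illustrative assumptions, not ftScalable defaults.

    CHUNK_SIZE = 64 * 1024   # bytes per stripe unit (assumed)
    NUM_DRIVES = 4           # physical drives in the VDISK (assumed)

    def raid0_location(byte_offset: int) -> tuple[int, int]:
        """Map a logical byte offset to (drive index, offset on that drive)."""
        chunk = byte_offset // CHUNK_SIZE      # which stripe unit
        drive = chunk % NUM_DRIVES             # round-robin across the drives
        stripe = chunk // NUM_DRIVES           # stripe row on that drive
        return drive, stripe * CHUNK_SIZE + byte_offset % CHUNK_SIZE

    # Consecutive chunks land on consecutive drives, so large sequential
    # I/O is spread over every spindle -- but losing any one drive destroys
    # every stripe, which is why RAID-0 offers no fault tolerance.
    for offset in range(0, 4 * CHUNK_SIZE, CHUNK_SIZE):
        print(offset, raid0_location(offset))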

RAID-1: A RAID-1 VDISK is a simple pair of mirrored physical disk drives. It offers good read and write performance and can survive the loss of a single drive. Reads can be handled by either physical drive, while writes must be written to both drives.  Recovery from a failed drive is easy, requiring only a re-mirroring from the surviving partner. There is typically minimal impact on performance while running in degraded or recovery mode.

RAID-10: A RAID-10 VDISK consists of multiple sets of RAID-1 disks, allowing data to be striped across all the RAID-1 pairs. A RAID-10 VDISK offers high performance and the ability to potentially survive multiple physical drive failures without losing data. The impact on performance while running in degraded or recovery mode is similar to that of a RAID-1 VDISK.

RAID-5/RAID-6:  These RAID types use parity-based algorithms and striping to offer high availability at a reduced cost compared to mirroring. A RAID-5 VDISK uses the equivalent of one physical disk drive's capacity for parity, while a RAID-6 VDISK uses the equivalent of two drives. A RAID-5 VDISK can survive the failure of a single disk drive without data loss, while a RAID-6 VDISK can survive two drive failures.  Both types offer excellent read performance, but write performance suffers because each write requires not only writing the data block, but also the read / modify / re-write operations needed to update the parity block(s). Drive failure (degraded mode) has a medium impact on throughput, while recovery mode has a high impact on throughput.
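
The write penalty is easier to see with the parity arithmetic written out. The Python sketch below models a RAID-5 small write: the controller must read the old data and old parity before it can write the new data and new parity, so one logical write becomes two reads plus two writes. This is a simplified illustration, not the ftScalable firmware's actual algorithm.

    # Simplified model of the RAID-5 read / modify / re-write sequence.
    # For illustration only; not the ftScalable controller's implementation.

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # One stripe of a 4-drive RAID-5: three data blocks plus one parity block.
    data = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4]
    parity = xor_blocks(xor_blocks(data[0], data[1]), data[2])

    def raid5_small_write(index: int, new_data: bytes) -> None:
        """Update one data block while keeping parity consistent.

        Costs two reads (old data, old parity) and two writes
        (new data, new parity) for a single logical write.
        """
        global parity
        old_data = data[index]    # read 1: old data block
        old_parity = parity       # read 2: old parity block
        # new parity = old parity XOR old data XOR new data
        parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
        data[index] = new_data    # write 1: data block (write 2 is parity)

    raid5_small_write(1, b"\x44" * 4)
    # Any single lost block can be rebuilt by XOR-ing the survivors:
    assert xor_blocks(xor_blocks(data[0], data[2]), parity) == data[1]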

Picking a RAID type

Each RAID type has specific benefits and drawbacks. By understanding them, you can select the RAID type best suited to your environment.

For data and applications where write speed is not critical, or where the utmost speed of access is not required, RAID-5 is a good choice.  In return for accepting lower write throughput and higher latency, you can use fewer disks for a given capacity, yet still achieve a high degree of fault tolerance. However, you must also consider the impact of running with a degraded RAID set (i.e., a failed disk drive) on your application.  I/O performance and latency in parity-based RAID types suffer more during degraded mode and recovery mode than in mirror-based RAID types.
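
To make the capacity side of that trade-off concrete, here is a back-of-the-envelope comparison in Python; the drive size and drive counts are hypothetical.

    # Back-of-the-envelope usable-capacity comparison. The 300 GB drive
    # size and six-drive VDISK are hypothetical, purely for illustration.

    DRIVE_GB = 300   # capacity of one physical drive (assumed)

    def usable_gb(raid_type: str, n_drives: int) -> int:
        """Usable capacity in GB for a VDISK of n_drives."""
        if raid_type == "RAID-5":
            return (n_drives - 1) * DRIVE_GB   # one drive's worth of parity
        if raid_type == "RAID-6":
            return (n_drives - 2) * DRIVE_GB   # two drives' worth of parity
        if raid_type == "RAID-10":
            return (n_drives // 2) * DRIVE_GB  # half the drives hold mirrors
        raise ValueError(raid_type)

    # Six 300 GB drives: RAID-5 yields 1500 GB usable versus 900 GB for
    # RAID-10 -- the reason parity RAID needs fewer disks per unit capacity.
    for raid_type in ("RAID-5", "RAID-6", "RAID-10"):
        print(raid_type, usable_gb(raid_type, 6), "GB")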

For data and applications whose performance depends upon optimizing the speed and latency of writes, or which perform more writes than reads, or which must not encounter degraded performance in the event of a drive failure, mirror-based RAID types (RAID-1 or RAID-10) offer a better solution. Both of these RAID types eliminate the read-before-write penalty of RAID-5 or RAID-6, so writing data is a simple operation. RAID-10 is generally a better choice than RAID-1 because it allows you to stripe the data over multiple physical drives, which can significantly increase the overall read and write performance. (But please read the section titled “VOS Multi-Member Logical Disks versus ftScalable RAID-10 VDISKs”, below).

If you can’t decide whether to select a parity-based or mirror-based RAID type, then the safest choice is to use one of the mirror-based RAID types.

Assigning LUNs to VDISKs

To review: one or more physical disks make up a VDISK. A VDISK can be divided into one or more LUNs. Each LUN is assigned to a specific VOS member disk.  One or more member disks are combined into a single VOS logical disk.
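
If it helps to keep the layering straight, here is a minimal sketch of those relationships as Python data structures. The class names are mine, for illustration only; they are not VOS or ftScalable types.

    # Sketch of the storage hierarchy described above. Class names are
    # hypothetical, for illustration; they are not VOS or ftScalable types.

    from dataclasses import dataclass, field

    @dataclass
    class Lun:
        name: str                    # each LUN becomes one VOS member disk

    @dataclass
    class VDisk:
        raid_type: str               # e.g. "RAID-1"
        physical_drives: list[str]   # one or more physical disk drives
        luns: list[Lun] = field(default_factory=list)  # carved from the VDISK

    @dataclass
    class VosLogicalDisk:
        name: str
        members: list[Lun]           # VOS stripes data across all members

    # The recommended shape: one LUN per VDISK, with several such LUNs
    # striped together into a single VOS logical disk.
    vd1 = VDisk("RAID-1", ["drive-a", "drive-b"], [Lun("lun0")])
    vd2 = VDisk("RAID-1", ["drive-c", "drive-d"], [Lun("lun1")])
    d01 = VosLogicalDisk("d01", members=[vd1.luns[0], vd2.luns[0]])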

I strongly advise that you assign a single LUN to a VDISK.  While the ftScalable supports carving up a VDISK into multiple LUNs, using this option can introduce significant performance penalties that affect both I/O throughput and latency.

There are several reasons for these penalties, but the basic one is easy to understand. Each time the ftScalable has to access one of the LUNs in a multiple-LUN-per-VDISK configuration, it has to seek the disk drive heads. The more LUNs that make up a VDISK, the more head movement. The more head movement, the greater the latencies. Remember, all I/O must eventually be handled by the physical drives that make up the VDISK; the array’s cache memory cannot replace this physical I/O.
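
A toy model makes the effect visible. In the Python sketch below, the same six requests are serviced against a single LUN and against two LUNs laid out in different regions of the same drive; the track numbers are arbitrary, and this is an illustration rather than a benchmark.

    # Toy model of why multiple LUNs per VDISK increase seek latency.
    # Track numbers are arbitrary; this is an illustration, not a benchmark.

    def total_seek(requests: list[int], start: int = 0) -> int:
        """Sum of head travel (in 'tracks') to service requests in order."""
        travel, pos = 0, start
        for track in requests:
            travel += abs(track - pos)
            pos = track
        return travel

    # One LUN: all requests fall within one region of the drive.
    one_lun = [100, 105, 102, 108, 101, 106]

    # Two LUNs on the same VDISK: the same requests, but half now land in
    # a second region 10,000 tracks away, so the heads ping-pong between them.
    two_luns = [100, 10105, 102, 10108, 101, 10106]

    print("1 LUN :", total_seek(one_lun), "tracks of head travel")
    print("2 LUNs:", total_seek(two_luns), "tracks of head travel")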

Stratus has run benchmarks demonstrating that the aggregate I/O throughput of a 4-LUN VDISK is about half the performance of the same VDISK configured as a single LUN, while the latency can be over four times greater!

Assigning VOS Logical Disks to LUNs

The simplest approach is to assign one VOS logical disk to each LUN. If you need a VOS logical disk that is bigger than a single LUN, or if you want to take advantage of the performance benefits of striping, then you can create a VOS multi-member logical disk, where each member disk is a single LUN.

VOS Multi-Member Logical Disks versus ftScalable RAID-10 VDISKs

You can implement striping at the VOS level (by creating a VOS multi-member logical disk), at the ftScalable level (by creating a RAID-10 VDISK), or even with a combination of both methods (say, by combining multiple LUNs, each of which is a RAID-5 VDISK, into a single VOS multi-member logical disk). If you wish to use striping, I recommend that you use RAID-1 or RAID-5 for the VDISKs, with one LUN per VDISK, and combine these LUNs into VOS multi-member logical disks.  VOS uses a separate queue of disk requests for each LUN, so maximizing the number of LUNs maximizes throughput and minimizes latency.
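
Here is a minimal sketch of that queuing argument in Python. The round-robin service model is my simplification for illustration; it is not the actual VOS disk scheduler.

    # Minimal sketch: VOS keeps a separate request queue per LUN, so a
    # logical disk striped over more LUNs can have more requests in flight.
    # The round-robin service loop is a simplification, not the real scheduler.

    from collections import deque

    NUM_MEMBERS = 4   # member LUNs in the logical disk (assumed)
    queues = [deque() for _ in range(NUM_MEMBERS)]

    def submit(block: int) -> None:
        """Route a striped block to the queue of the member LUN that owns it."""
        queues[block % NUM_MEMBERS].append(block)

    for block in range(12):
        submit(block)

    # Each service round retires one request per member LUN, so 12 requests
    # finish in 12 / NUM_MEMBERS = 3 rounds instead of 12.
    rounds = 0
    while any(queues):
        for q in queues:
            if q:
                q.popleft()
        rounds += 1
    print(f"{NUM_MEMBERS} member LUNs: serviced in {rounds} rounds")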

Assigning Files to VOS Logical Disks

When possible, assign randomly-accessed files and sequentially-accessed files to separate logical disks. Mixing the two access patterns on the same logical disk increases the worst-case time needed to access the randomly-accessed files and reduces the maximum possible throughput of the sequentially-accessed files.

Summary

You can achieve reliable, high-throughput, low-latency disk access using these simple guidelines.

If you think you have a good case for using a different configuration than we have recommended here, please contact your account team. We are always available to review existing ftScalable configurations and offer guidance for specific customer situations.

I hope this information proves useful. If you have questions or comments, please respond to this post.

Acknowledgements

Joe Sanzio provided invaluable assistance during the writing of this post. Any errors that remain are mine.
