What can IT professionals learn from the history of flight?
Scientific advances in flight, medicine and technology generally follow a revolution/evolution model.
Pioneers create unique technology, overcoming barriers consumers never thought possible. Flying machines. Antibiotics. Shared storage.
Technological evolution then occurs over many decades, leveraging disruptive technologies, continuous refinements and competition.
Since the Wright Flyer first left the ground for 12 seconds at Kitty Hawk in 1903, aviation has evolved to the point where cost-effective, reliable 17-hour intercontinental flights are routine.
By exploring the history of flying machines, we find parallels with storage architectures, modern application storage needs and datacentres in general that provide valuable takeaways for IT administrators today.
When the Wright Brothers first contemplated applying their skills as master bicycle craftsmen to conquering human flight, they tested hundreds of design variations.
Much like the early pioneers of data storage, the Wright Brothers were looking to solve a problem. As air travel matured, airplanes, blimps and helicopters improved in versatility, efficiency and safety. But, ultimately, the field of contenders consolidated and many of the pioneers lost out, much like what has happened in the enterprise storage industry.
Like the evolution of modern flight, enterprise data storage has recently undergone a heightened pace of innovation and the current crop of flash vendors have, since 2008, completely changed the economics, performance, scaling, reliability and management of storage in the datacentre.
In aviation, today's dominant manufacturers, such as Boeing and Airbus, compete by appealing to customers who need greater:
- Performance, efficiency and passenger capacity
- Ability to scale
- Ease of operation
Networked storage advances in the modern datacentre are being driven by these same factors, pushing aside the incumbents.
A brief look at the evolution of shared storage reveals why those who were the pioneers are no longer at the forefront of storage advances.
Many storage architectures still compete today, with some overlap. Storage administrators, IT procurement teams and other consumers of storage face higher complexity, cost and migration downtime because they are forced to operate multi-vendor, multi-architecture islands of storage.
The only advantage of silos is that you can apply a best-of-breed solution to each storage service level. But, for most IT shops, the negatives outweigh this benefit.
This heterogeneous mix of solutions exacts a high toll on all aspects of a company’s IT, including:
- Separate purchasing process
- Separate education/administration
- Separate installation and provisioning
- Separate maintenance and management
- Separate firmware/hardware upgrade processes
- Islands of utilisation
- Lack of data mobility
- Inefficient data protection
Today, there's an assortment of storage architectures to support IT requirements with varying strengths and weaknesses:
Legacy Hybrid Architectures
Legacy solutions bolt on a tier of flash, with complex algorithms that move data from tier to tier based on activity and performance needs. Built on decades-old architectures and forced to support older platforms, this storage architecture has not seen major enhancements. While offering comprehensive capabilities, these solutions can be expensive, inefficient, difficult to manage and no longer the highest performing.
New Hybrid Architectures
Modern storage vendors are building arrays with flash tightly integrated into the original design. They tend to be very fast, highly efficient and very easy to manage. Some offer large capacity scaling and full sets of data protection and management capabilities built-in.
All Flash Arrays (AFA)
This category includes new vendors as well as legacy vendors, in many ways playing catch-up by releasing AFAs. As a group, AFAs tend to be very fast, relying on data reduction methods such as inline compression and deduplication. They tend to have the highest cost per gigabyte and do not scale capacity as well as systems using rotational disk drives.
Hyperconverged Infrastructure
Offering combination "blocks" of compute, storage and network in one modular purchase makes procurement very easy, in addition to condensing support to a single vendor. Drawbacks include that many platforms cannot scale CPU and storage independently, and the end user loses the ability to procure best-of-breed storage. Storage is often the weakest link in these platforms.
Converged Reference Architectures
Storage vendors have partnered with server, network and virtualisation vendors to create reference architectures. By leveraging pre-validated compute, storage and networking expertise from partners, end users can deploy much faster than architecting from scratch. These offer extensive choices at both the compute and storage layers, as well as independent scaling of compute and storage. However, management is less integrated than with hyperconverged architecture.
As storage architectures have evolved, only the New Hybrid category excels at the broad-based application consolidation long dreamed of by storage professionals.
Flash prices are continuously dropping, even relative to traditional rotating disk drives, and deduplication and compression implementations improve each year. We can imagine that one day the storage platform of the future will be much like the Boeing 787 Dreamliner: capable of vast speed, performance and uptime, all at the highest efficiency in the industry.
For more insights on data storage, go to the Nimble Storage blog. You can talk to the Asia Pacific team at Nimble Storage by emailing:
firstname.lastname@example.org or calling 1-800-646-253 (1800 NIMBLE).
Article by Bede Hackney, Managing Director Nimble Storage ANZ