You will have heard the term software-defined. It is now being applied to “everything”: environments, storage, networking, data centres, W-LANs. In all cases, it is about decoupling the control software from the hardware.
Perhaps the earliest and best example of software-defined was VMware: creating a virtual server or desktop that could run almost any operating system on generic x86 hardware. Software-defined (SD) has taken that further, and it is the precursor to delivering everything as a service (XaaS).
Driven by server virtualisation, software-defined storage (SDS) has gained strong momentum over the past five years. It offers a simpler alternative to traditional data storage because the software that controls the storage-related capabilities is separate from the physical storage hardware.
iTWire asked Craig Waters, virtualisation architect – APJ, Pure Storage, to give his take on SDS and why it is taking over data centres.
Before we start, this guy has more TLAs after his name than Prince Philip, except that he earned all of his.
- VCAP-DCD - VMware Certified Advanced Professional Data Centre Design
- VCAP-DCA - VMware Certified Advanced Professional Data Centre Administration
- VCP - VMware Certified Professional
- vExpert - Awarded VMware vExpert 2012, 2013, 2014 & 2015
- NPP - Nutanix Platform Professional
- MCP - Server Virtualization with Windows Server Hyper-V and System Center
- MCSE - Microsoft Certified Systems Engineer
- CCNA - Cisco Certified Network Associate
- CNE - Certified Novell Engineer
- EMCPA - EMC Proven Associate
- ITIL - Service Management Essentials (Service Support & Service Delivery)
- VMUG Leader Melbourne
- Bachelor of Science (BS), Computing for Real-Time Systems
He writes:
SDS primarily reduces complexity: storage hardware no longer needs to be custom made. Innovation is no longer tied solely to the manufacture of hardware components; it is also born of software development, which is more agile, shortens development cycles and delivers a quicker time to market.
For business users who rely on IT infrastructure, the storage element of “software-defined” enables greater levels of responsiveness and agility. Customers want greater flexibility with their storage, from the physical footprint to simplification of deployment and ongoing management. So, removing the complexity from the hardware means we can also simplify the software.
A good example is the way data is protected on your standard hard drive. With a single physical disk, a mechanical failure of that disk means your data is lost. A Redundant Array of Independent Disks (RAID) protects data by spreading it, along with parity information, across several physical disks; the parity is used to recreate the data in the event of a disk failure.
Traditional storage solutions use ‘hot spare’ disks that sit idle waiting for a failure to occur; when a disk fails, the lost data is rebuilt onto the spare using the parity information.
Instead of providing availability at the physical disk layer, a more efficient and reliable approach uses hardware such as solid state drives (SSDs). Because SSDs are far less prone to failure, RAID can be abstracted away from the physical disk into segments (which include parity) that are spread across multiple SSDs within the storage array. The key benefit is that when an SSD fails, only the data actually in use on it is rebuilt from parity, in minutes, rather than the whole physical disk, which can take days.
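To make the parity idea concrete, here is a minimal Python sketch of XOR parity, the basic principle behind RAID-style protection. It is purely illustrative and not how Pure Storage or any particular array implements it; real systems use far more sophisticated layouts and checksums.

```python
# Illustrative XOR parity: the lost block can be rebuilt from the survivors.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together; over data blocks this yields the parity."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three "disks" each hold one 4-byte block.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])   # written to a fourth disk (or a parity segment)

# Disk 1 fails: rebuild its block from the surviving blocks plus the parity.
recovered = xor_blocks([d0, d2, parity])
assert recovered == d1              # the lost data is recreated exactly
print("Recovered:", recovered)
```

In a segment-based scheme like the one described above, only the segments the failed device actually held need to be recomputed this way, which is why the rebuild takes minutes rather than days.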
Providing these capabilities in software means vendors can be much more agile in how they deliver new features to customers. The ability to complete such upgrades non-disruptively, during business hours and on a repeatable basis, helps build customer confidence. The process still takes time and resources, but software-defined means skilled support staff can monitor the change, and any back-up or maintenance requirements, remotely. This reduces operational costs, since staff no longer need to work weekends or evenings to complete storage maintenance.
Another important benefit of software-defined is that changing the components comprising the storage solution doesn’t impact the system’s availability. You can start your investment small and grow into it over time, without any disruption to the business. This creates a subscription-style model where customers only buy what they need, as they need it. As requirements and capacity planning forecasts change, you can introduce additional components to scale capacity or performance independently.
Gone are the days of sizing a solution for three to five years and building as much scale as possible into the configuration from the beginning to get the best price from a vendor. This process has traditionally been fraught with uncertainties: Is the solution sized correctly for the performance and capacity I need over the lifecycle of these assets?
Did the architects take my unknown changing business requirements into consideration when sizing the solution? Will I be required to pay a huge ongoing maintenance fee to keep the solution supported after the warranty expires? What if I want to push the asset beyond its intended lifecycle? What if that lifecycle could be 10 years instead of 5? Will I be forced to repurchase the solution again and have to repeat the whole process?
Software-defined means you can change every component in the storage solution non-disruptively, without any impact on the availability or performance of production applications.
When new storage technologies are introduced, NVMe for example, they can easily be integrated into the existing solution. You won't be required to upgrade to storage product 2.0 and repurchase storage capacity you already own, nor commit the skills, resources and financial investment needed to migrate data to a new platform.
Not only does SDS prove valuable for your IT team by saving time that can be redeployed back into the business, it also supports the overall growth of your organisation. It's clear that software-defined is the future of infrastructure components, and we haven't even started looking at how this affects orchestration and automation: that's the next story.