While the term SDDC (Software Defined Data Center) has been cropping up in discussion more often, the concept behind the catch-all term is still rather vague. Many have an idea of what they want it to be, and naturally those ideas don't all agree. Perhaps it is thought of as a fully flexible data center, a virtualized version of everything standing behind whatever is needed. But this may be a little too abstract; IT is looking for a simpler means to roll out services rapidly, and the SDDC is the means to achieve the needed requirements of redundancy, provisioning and more.
In the past there were dedicated infrastructure components for each application, with little sharing. Efficiency wasn't easily achievable, since assets were over-provisioned to meet demand that might never arrive. Additionally, changes meant additional infrastructure that would take months to implement. Then virtualization allowed applications to share the infrastructure, and IT managers could respond dynamically within days, sometimes sooner. Now the applications themselves are asking the infrastructure for the resources they need. This is the faster cloud we are progressing toward: agile applications that work fluidly with the infrastructure, the whole infrastructure, to meet their location, space, and reliability needs.
However, the physical data centers themselves, with an average age approaching 18 years, need to respond to allow this smarter, faster cloud to operate. They may come to be ranked by their cost, reliability, security, computing abilities and more, allowing applications and their users to decide what the balance should be. In this way, SDDCs become dashboards for users to choose from. Behind the scenes, the equipment will need to remain operational to maintain that reputation, which may be the biggest disconnect between the software defined layer and actual data center operations over the coming decades.