In the pursuit of data protection, businesses today face more hurdles in the security landscape than ever before. Many of the trends in the recent Global Data Center Survey Report come as a surprise: demand for reliable, scalable infrastructure keeps growing, yet downtime problems are undermining businesses’ confidence in their existing systems and putting all-too-precious data at risk.
For example, 31% of respondents experienced severe and damaging downtime, and almost 80% said the downtime they experienced could have been avoided. Before IoT, organizations only had to protect their datacenter and remote office/branch office (ROBO) locations. With the emergence of IoT, they must also protect infrastructure at the edge and ensure reliability beyond the core of the datacenter alone.
Luckily, trends in data protection modernization are emerging in response to these and other challenges. To evaluate or adopt one of these solutions, it’s important to first understand what the trends are. From there, you can identify whether you’re facing the same pitfalls many businesses are, and then develop application-specific solutions to meet your data protection needs. Let’s analyze the trends:
Where’s your data going? How much do you have? How many kinds of data do you have? Given this variety, choosing where to store data is a difficult decision: it can live on servers and desktops; in block, file, or object storage; and in various clouds.
According to the Enterprise Strategy Group (ESG), two-thirds of services must be recovered within 2 hours of an outage. Recovery time objectives (RTOs, how quickly a service must be restored) and recovery point objectives (RPOs, how much data loss is tolerable) keep shrinking because businesses demand higher levels of protection.
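As a rough illustration of how an RPO constrains a protection schedule (a simplified sketch; the numbers and function names are hypothetical, not from the survey): worst-case data loss equals the gap between protection points, so the interval between snapshots or backups must not exceed the RPO.

```python
from datetime import timedelta

def meets_rpo(protection_interval: timedelta, rpo: timedelta) -> bool:
    """Worst-case data loss is the gap between protection points,
    so the interval between them must not exceed the RPO."""
    return protection_interval <= rpo

# A 2-hour RPO (the ESG figure above) rules out a nightly backup
# but is satisfied by hourly snapshots.
print(meets_rpo(timedelta(hours=24), timedelta(hours=2)))  # False
print(meets_rpo(timedelta(hours=1), timedelta(hours=2)))   # True
```

This is why the shift away from once-a-day streaming backups, discussed below, follows directly from tighter RPOs.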
Virtualization, while a valuable “must-have” for enterprises today, has complicated data protection, which may force IT to change or add solutions. Moreover, multiple hypervisors contribute to the challenge, with over 72% of enterprises in 2015 using more than one hypervisor, according to IDC.
The time available for data protection tasks, such as replicating data, creating snapshots, and distributing copies to different locations, keeps shrinking; in many environments the backup window has effectively reached zero. Traditional data protection is tedious, and businesses simply don’t have the time.
Multiple protection solutions, servers, appliances, and disk and tape media mean increased complexity for businesses.
Costs rise with more data, longer retention periods, and complex hardware and software. Both the cost and the challenges of datacenter downtime have risen as well. The Ponemon Institute reports that since 2010, the cost of downtime has increased by 38%. In particular, the maximum downtime cost increased 32% since 2013 and 81% since 2010, reaching $2,409,991 in 2016.
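The growth percentages can be checked against the 2016 figure; a quick back-calculation (rounded to whole dollars) recovers the implied earlier maximums:

```python
# Back-calculate earlier maximum downtime costs from the Ponemon figures:
# the 2016 maximum grew 32% over 2013 and 81% over 2010.
max_2016 = 2_409_991

max_2013 = max_2016 / 1.32
max_2010 = max_2016 / 1.81

print(f"2013: ${max_2013:,.0f}")  # 2013: $1,825,751
print(f"2010: ${max_2010:,.0f}")  # 2010: $1,331,487
```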
Copies of data are necessary. Businesses use them for backups, archives, and disaster recovery, among other purposes. However, the more copies there are, the greater the cost and the harder the decisions about what to discard.
In response to these challenges, the following protection responses have been developed:
- Snapshots: If an RPO of 2 hours is the goal, traditional streaming backups won’t make the cut. Snapshots are the answer. They’re space-efficient, can be taken often, and only consume storage as data changes.
- App-centric protection: While traditional unified backup methods have not disappeared, more organizations have adopted virtualization and cloud computing. Therefore, it’s important to not only protect the data, but also the applications. For a more comprehensive backup strategy, many companies are replacing their old backup methods with an app-centric approach.
- Replication: If a primary datacenter goes offline, replication ensures recovery happens quickly, meeting RTO and RPO goals in the process. Asynchronous and synchronous replication are key players in this trend: asynchronous replication enables rapid recovery with minimal data loss, while synchronous replication eliminates the possibility of data loss entirely.
- Cloud: Because the cloud provides an effectively unlimited pool of capacity, it decreases cost and complexity and minimizes reliance on tape. It also eliminates the need for provisioning and capacity planning for data protection.
- Copy-data management: Too many copies create clutter and make management difficult. Alternatives such as snapshots and clones address those issues, streamlining management and reducing storage consumption. According to IDC research, between 45% and 60% of total storage capacity is dedicated to accommodating copy data. One of the most prominent examples of copy data is database cloning for different use cases (e.g., BI, analytics, dev/test) across multiple groups in the organization. Leveraging technologies like space- and time-efficient snapshots based on a redirect-on-write implementation is therefore crucial to optimizing use of infrastructure resources.
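To see why redirect-on-write snapshots are space-efficient, here is a toy model (a deliberately simplified sketch, not any vendor’s implementation): a snapshot just freezes the block map, and new writes go to fresh blocks, so storage is consumed only as data changes.

```python
class RedirectOnWriteVolume:
    """Toy model: a volume maps logical blocks to physical blocks.
    A snapshot copies only the (small) block map; block data is shared.
    New writes are redirected to fresh blocks, leaving snapshots intact."""

    def __init__(self):
        self.blocks = {}      # physical store: block_id -> data
        self.map = {}         # live view: logical block -> block_id
        self.snapshots = {}   # snapshot name -> frozen block map
        self._next = 0

    def write(self, logical, data):
        self.blocks[self._next] = data   # redirect: always a fresh block
        self.map[logical] = self._next
        self._next += 1

    def snapshot(self, name):
        self.snapshots[name] = dict(self.map)  # copies the map, not the data

    def read(self, logical, snapshot=None):
        view = self.snapshots[snapshot] if snapshot else self.map
        return self.blocks[view[logical]]

vol = RedirectOnWriteVolume()
vol.write(0, "v1")
vol.snapshot("before")
vol.write(0, "v2")                      # does not disturb the snapshot
print(vol.read(0))                      # v2
print(vol.read(0, snapshot="before"))   # v1
```

Because taking a snapshot copies only the map, snapshots can be taken frequently, which is exactly what tight RPOs and clone-heavy dev/test workflows demand.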
Data protection requirements, as a whole, are continuously increasing, but different applications have unique needs, and some require a higher level of protection than others. Not every solution will match your RTO and RPO goals either, so juggling these constraints can muddy the protection picture.
Depending on the type of application, your protection strategy will need to adapt to meet its requirements. Let’s look at a few key enterprise applications and some considerations for determining their data protection requirements.
Enterprise Applications/Tier 0 and Tier 1 Databases and Applications
We’re starting at the top with mission-critical databases and associated applications. Because they are so critical, they require more complete, dedicated protection strategies.
- Backup/restore: These applications need regular, frequent backups. For applications in this tier, going offline or losing data has serious consequences. In the event of human error or software bugs, backups ensure a quicker recovery.
- Disaster recovery: Choosing between synchronous and asynchronous replication depends on your RPO and RTO. Near-synchronous replication, for instance, meets granular RPO requirements as small as 1 minute.
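The synchronous/asynchronous trade-off behind that choice can be sketched in a few lines (a simplified model with hypothetical names; real replication handles ordering, failures, and network transport): synchronous replication acknowledges a write only once the remote copy has it, giving a zero RPO, while asynchronous replication acknowledges locally and ships changes later, so the RPO equals the replication lag.

```python
class Replicator:
    """Toy model contrasting synchronous and asynchronous replication."""

    def __init__(self, synchronous: bool):
        self.synchronous = synchronous
        self.primary = []
        self.replica = []
        self.pending = []   # writes not yet shipped (async mode only)

    def write(self, record):
        self.primary.append(record)
        if self.synchronous:
            self.replica.append(record)  # remote copy updated before write returns
        else:
            self.pending.append(record)  # acknowledged locally, shipped later

    def ship(self):
        """Async mode: periodically drain pending writes to the replica."""
        self.replica.extend(self.pending)
        self.pending.clear()

# If the primary fails before ship() runs, async loses the pending
# writes (RPO > 0); sync never does (RPO = 0), at the cost of write latency.
sync = Replicator(synchronous=True)
sync.write("order-1")
print(sync.replica)        # ['order-1']

async_r = Replicator(synchronous=False)
async_r.write("order-1")
print(async_r.replica)     # [] until ship() runs
```

Near-synchronous replication sits between the two: it ships changes so frequently that the lag, and therefore the RPO, shrinks to around a minute.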
Custom Databases and Applications
These applications are developed in Java, .NET, and other languages and are backed by relational database management systems. While not mission-critical, they still require specialized protection.
- Backup/restore: Regular backups and frequent snapshots are necessary to keep these applications protected and recoverable.
- Disaster recovery: Asynchronous replication is most appropriate, since these apps can tolerate RPOs and RTOs of one hour or more.
Next-Generation Web-Based Applications
Web-based application frameworks now lead in the enterprise, most often backed by NoSQL databases. REST APIs can be used to integrate Nutanix data protection functions with an application.
- Backup/restore: To defend against human error and software bugs, backup and restore are vital. Snapshots and clones, for example, are convenient for organizations doing continuous delivery (CD).
- Disaster recovery: Resiliency is built-in, and multiple instances of each application service exist across a cluster and across locations.
Application and Desktop Virtualization: For applications moving to VDI, it’s common to provide the same level of data protection as on the traditional desktop. However, a failure in VDI could idle a large portion of employees, so it’s important to ensure the VDI environment is recoverable via backups.
The data protection landscape can be confusing, tedious, and daunting. Deciding what your applications need is imperative for choosing the best strategy to secure your data. We’ve only scratched the surface; if you’re ready to start safeguarding your data, check out The Definitive Guide to Data Protection and Disaster Recovery for more information on the state of data protection now, strategies for recovery, and how to get started with the Nutanix solutions that can help.