August 13, 2019

6 Ways to Remove Blockers for Continuous Database Deployment

Continuous database deployment isn’t easy.

The first step to tackling any problem is to see the problem. With databases, that's difficult because many problems overlap and obscure what's really going on.

Here’s what you need to do to optimize your database deployment process and remove blockers to continuous deployment:

1. Treat everything as code

Our applications, infrastructure, database schemas… everything is code. The moment we start treating different kinds of code differently, we get into trouble. All code should flow through a deployment pipeline the same way, moving from Development to Production.

However, many companies make changes to the Production database first. ‘Production first’ is used for changes to schemas, reference data, and database logic, leading to database drift between the Production database and versions used by development and test teams. As you can guess (and are probably aware from first-hand experience), this is going to make continuous database deployments difficult.
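When schema changes are treated as code, they live in version control and move through the same pipeline as application code. Here's a minimal sketch of what that can look like using Liquibase's formatted SQL changelog syntax; the author and table names are hypothetical:

    --liquibase formatted sql

    --changeset jane.doe:create-customer-table
    --comment: schema change tracked in version control and deployed by the pipeline
    CREATE TABLE customer (
        id    BIGINT PRIMARY KEY,
        name  VARCHAR(255) NOT NULL,
        email VARCHAR(255)
    );
    --rollback DROP TABLE customer;

Because the change is just a file in the repository, the exact same changeset is applied in Development, then Test, then Production.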

2. Minimize changes in Production

Ideally, database changes should be made in Development and flow down the pipeline. Adding new indexes and all other performance tuning should happen at the start of that pipeline, not in Production. What's the point of testing along the way if you're not testing what will actually happen in Production?

When every environment is as close to Production as possible, you can increase the amount and effectiveness of testing. This increases confidence in database changes, encouraging a higher rate of change, and in turn, reducing fear around database deployment (not to mention reducing surprises). This also improves your team’s ability to track changes and understand which changes aren’t working.
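For example, rather than adding an index directly in Production after a performance problem appears, the index can be captured as a changeset in Development and promoted through every test environment first. A minimal sketch, again using Liquibase's formatted SQL syntax with hypothetical table and index names:

    --liquibase formatted sql

    --changeset dba.team:add-orders-order-date-index
    --comment: performance tuning made in Development and flowed down, not Production-first
    CREATE INDEX idx_orders_order_date ON orders (order_date);
    --rollback DROP INDEX idx_orders_order_date;

If the index doesn't behave as expected, that shows up in a test environment instead of during a Production incident.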

3. Nix Production-only technology and configurations

Special Production-only database technologies, configurations, and routines are the enemy of database deployability. Even if the volumes of Production data can’t be re-created in upstream environments, the database technologies, configurations, and features need to be in every environment.

If you can’t test every element of your software – including the database – in an environment that simulates Production, you can’t have confidence that your software will deploy, let alone work as expected. Setting up Production-only technology and configurations works against database deployability. This translates into high rates of failed deployments and buggy releases.

Using the same technology and configuration across all environments reduces uncertainty and fear within teams, aids collaboration and diagnosis, and even helps keep data size and complexity low, all of which make the databases as deployable as possible.

4. Don’t feed the database blob

Many companies that have been around a long time have reached a point where their core database effectively dictates the rate of change of their systems. These organizations all have a large, central database that started out small and gradually grew to become an unwieldy blob that’s risky to change and difficult to deploy.

The database was seen as the ‘single source of truth’ and allowed to grow out of control with live records, indexes, and views. And then every application at the company depended on it directly.

At first, it seems like a great idea. You might think ‘all data in the same place is great for rapid development’. Over time, the complexity of the data requires special skills to maintain. Changes become more painful and risky as multiple teams vie to make changes to suit their application’s goals.

This leads to very expensive and complicated database technology just to manage the blob, perhaps available only in Production, while other systems in the business suffer as budgets and resources are drained away.

Sound familiar? You’ve been feeding the blob that has now taken over your IT group and it’s gumming up the works of your software delivery system. And it feels like there’s no way out. This is not an easy place to start from if your company wants to adopt Continuous Delivery.

Maybe a small portion of this is irreducible once your organization or application reaches a certain scale. But, a lot of it is opaque legacy processes, organizational red tape, and inexperience.

All of this complexity makes it difficult for individuals or teams to understand the database. If it's hard to understand, then every database deployment is difficult and diagnosing a failed database deployment is nearly impossible.

5. Reduce complexity

In short, the smaller and less complex an individual database, the easier it becomes to deploy. As a general rule, it's preferable to reduce the internal complexity of any system, especially any unnecessary complexity, down to just what is truly irreducible.

Moving non-core data and logic out of the main database and into separate databases helps to make the core database less complex and easier to understand. This reduces fear around database changes and reduces the number of possible failure states for database changes.

Reducing the size of databases by stripping out accidental, non-essential data allows for more rapid and frequent upstream testing.
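As a sketch of what moving non-core data out can look like: suppose an order-audit history table is only read by reporting jobs. It can be relocated to a separate reporting database so the core schema stays small. The database and table names below are hypothetical, and the exact syntax for copying across databases varies by engine:

    -- Copy the non-core audit table into a separate reporting database
    -- (assumes both are reachable from one connection, e.g. as separate schemas).
    CREATE TABLE reporting_db.order_audit AS
        SELECT * FROM core_db.order_audit;

    -- Once reporting jobs point at reporting_db, drop the copy in the core database.
    DROP TABLE core_db.order_audit;

The core database now has one less large, rarely-changing table to back up, restore, and copy into upstream environments.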

6. Use the right tools to make both DBAs and developers happy

The DBA team wants to make the database as easy as possible to administer, and sometimes that comes at the expense of other priorities. For example, keeping track of data for audits is very important, and data protection and privacy regulations increase every year. But making auditing easy at the database level, say by aggregating audit data in the same transactional database, is a local optimization that works against easy database deployments.

Many developers feel like they have to hurry up and develop feature code for apps only to wait on DBAs to review their database code. Implementing a solution that helps both DBAs and developers is crucial if your team is going to achieve continuous database deployments.

Liquibase has features that help with audits and database management.
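One concrete example: Liquibase records every changeset it applies in its DATABASECHANGELOG tracking table, so auditors can see what was deployed, by whom, and when, without folding audit data into your transactional tables. A simple query over that table might look like this:

    -- Deployment history from Liquibase's standard tracking table.
    SELECT id, author, filename, dateexecuted, exectype, md5sum
    FROM databasechangelog
    ORDER BY orderexecuted;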

Wrapping up

Tackling continuous database deployments requires close, effective collaboration between developers, DBAs, and operations teams to achieve the right balance between fast deployments and access to data. Datical works with other DevOps partner companies with integrations and expertise that help teams overcome these common blockers. Contact us directly or set up a demo to see how we can help your team.
