How to Implement dbt CI/CD

If you want to test a dbt change, you need to generate staging data that represents the outcome of that change. Typically, dbt users do this in a CI/CD pipeline.

Spectacles will direct Looker to query this staging data to determine whether any SQL references in Looker will break when the dbt changes are merged.

Configure your dbt CI runs

It's entirely up to you how you set up the dbt CI jobs. Since we're going to point Looker at the staging dataset, the most important thing is that your CI pipeline produces data that's reflective of what a full dbt run would produce (as of that commit).
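For instance, the most straightforward approach is to rebuild the whole project into a schema dedicated to the pull request. The sketch below assumes a `ci` target in your profiles.yml that reads its schema name from an environment variable; the variable names are placeholders, not anything dbt requires:

```bash
# Full rebuild into a PR-specific schema (a minimal sketch, assuming your
# profiles.yml `ci` target does something like: schema: "{{ env_var('DBT_CI_SCHEMA') }}")
export DBT_CI_SCHEMA="dbt_ci_pr_${PR_NUMBER}"   # PR_NUMBER is exposed by your CI tool
dbt deps
dbt build --target ci                           # builds and tests every model as of this commit
```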

Depending on your warehouse, you might be able to leverage dbt's Slim CI functionality as long as you can copy the unmodified tables to create a complete dataset.

For example, on Snowflake, you could do a zero-copy clone of your production data into your CI schema or database and then only run the modified models.
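Here's a rough sketch of what that could look like, assuming SnowSQL is available in the CI environment, a `ci` target pointed at the cloned database, and the manifest from your last production run downloaded to ./prod-artifacts (database names, targets, and paths are placeholders):

```bash
# 1. Zero-copy clone production into the CI database (metadata-only, so it's cheap)
snowsql -q "CREATE OR REPLACE DATABASE analytics_ci CLONE analytics;"

# 2. Build only the models modified in this PR (plus anything downstream),
#    using the production manifest to detect what changed
dbt build --target ci \
  --select state:modified+ \
  --state ./prod-artifacts
```

The clone gives you a complete copy of the unmodified tables, and the Slim CI run overwrites just the models the PR touches, so the resulting dataset looks like a full run as of that commit.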

Trigger CI runs with dbt Cloud

The easiest way to set up a dbt CI job is with dbt Cloud. You can follow the dbt Labs guide, which explains how to set it up.

Each time you open a new dbt PR or add a commit to an existing PR, dbt Cloud will run the job automatically, creating the tables and views in a schema prefixed with dbt_cloud_pr_.

Trigger CI runs with dbt CLI

You can also set up your dbt CI jobs using a CI tool of your choice, such as GitHub Actions, CircleCI, GitLab CI, or Bitbucket Pipelines.

dbt Labs provides a guide for Bitbucket Pipelines. You can take a similar approach for GitHub Actions, CircleCI, or another CI tool of your choice.
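As a rough illustration, the job itself boils down to a few shell commands that you can drop into a GitHub Actions `run:` step, a CircleCI job, or a Bitbucket Pipelines script triggered on pull requests. The adapter, paths, and variable names below are placeholders, and your warehouse credentials would come from the CI tool's secret store:

```bash
#!/usr/bin/env bash
set -euo pipefail

pip install dbt-snowflake            # or the dbt adapter for your warehouse
dbt deps

# Build the PR's staging dataset (full rebuild or Slim CI, as shown above)
export DBT_CI_SCHEMA="dbt_ci_pr_${PR_NUMBER}"
dbt build --target ci --profiles-dir ./ci    # a CI-only profiles.yml committed to the repo
```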