Fixing things with… terraform destroy

I like Terraform because it’s clean, fast and elegant; OK, I also suck at it, but hey – I’m trying! 🙂

The long story

Usually, Terraform and its providers are very good at doing things in the order they should be done. But sometimes people come up with silly ideas, and mine (of course) was one of those – I decided to rename something, and it broke things. Just a little.

I have a simple lab in Azure, with a couple of virtual machines behind the Azure Load Balancer, no big deal. All this is being deployed (and redeployed) via Terraform, using the official azurerm provider. It’s actually a Standard SKU Azure Load Balancer (don’t ask why), with a single backend pool and a few probes and rules. Nothing special.
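For context, the relevant part of my configuration looked roughly like this – a minimal sketch only, where everything except the load balancer and pool names is illustrative, and attribute names may differ slightly depending on your azurerm provider version:

```hcl
# Sketch of the lab's load balancer setup (illustrative names,
# attribute names vary between azurerm provider versions).
resource "azurerm_lb" "k8sthw-lb" {
  name                = "k8sthw-lb"
  location            = azurerm_resource_group.k8sthw-rg.location
  resource_group_name = azurerm_resource_group.k8sthw-rg.name
  sku                 = "Standard"

  frontend_ip_configuration {
    name                 = "k8sthw-lb-fe"
    public_ip_address_id = azurerm_public_ip.k8sthw-lb-ip.id
  }
}

resource "azurerm_lb_backend_address_pool" "k8sthw-controllers-lb-pool" {
  loadbalancer_id = azurerm_lb.k8sthw-lb.id
  name            = "k8sthw-lb-pool"
}

# The rules reference the pool above, which is what later
# got in the way of renaming it.
resource "azurerm_lb_rule" "k8sthw-lb-443-rule" {
  loadbalancer_id                = azurerm_lb.k8sthw-lb.id
  name                           = "k8sthw-lb-443-rule"
  protocol                       = "Tcp"
  frontend_port                  = 443
  backend_port                   = 443
  frontend_ip_configuration_name = "k8sthw-lb-fe"
  backend_address_pool_ids       = [azurerm_lb_backend_address_pool.k8sthw-controllers-lb-pool.id]
}
```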

I deployed this lab a few days ago (thanks to the good people at Microsoft, I have some free credits to spare) and everything worked just fine, but today I got a wild idea – I decided to rename my backend pool.

With all the automation in place, this shouldn’t be a problem… one would think. 🙂
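The rename itself was just a one-line change to the pool’s name attribute – something along these lines (the resource address matches my apply output; the new name here is illustrative):

```hcl
resource "azurerm_lb_backend_address_pool" "k8sthw-controllers-lb-pool" {
  loadbalancer_id = azurerm_lb.k8sthw-lb.id
  # was: name = "k8sthw-lb-pool"
  name            = "k8sthw-controllers-lb-pool" # new name (illustrative)
}
```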

So, I updated my code, and during the make it so phase (terraform apply) I got some errors (truncated, with only the useful stuff):

Error:
Error Creating/Updating LoadBalancer: network.LoadBalancersClient
CreateOrUpdate:
Failure sending request:
StatusCode=400
--
Original Error:
Code="InvalidResourceReference... .../k8sthw-lb/backendAddressPools/k8sthw-lb-pool referenced by resource
/subscriptions/fake-azure-subscription-id/resourceGroups/k8sthw-rg/providers/Microsoft.Network/... ... or verify that both resources are in the same region." Details=[]

Issue

After going through these errors, I realized that my resources are indeed in the same region, but the existing rules reference the current backend pool by name, which was blocking Terraform from renaming it.

There are a couple of options at this stage – you can destroy the whole deployment and run it again (as I normally would), which should be fine, or you can try to fix only the dependent resources and make the rename work within the existing deployment.

With some spare time on my hands, I tried the second option, and it actually worked.

Resolution

Terraform has a nice option for destroying just parts of a deployment.

If you look at help for the terraform destroy command, you can see the target option:

terraform destroy -help
# ...
# -target=resource Resource to target. Operation will be limited to this
#                  resource and its dependencies. This flag can be used
#                  multiple times.
# ...

And if you run it to fix your issues, you’ll get a nice red warning saying that this is only for exceptional situations (and they mean it!):

terraform destroy -target azurerm_lb_outbound_rule.k8sthw-lb-out
# Warning: Resource targeting is in effect
#
# You are creating a plan with the -target option, which means that the result
# of this plan may not represent all of the changes requested by the current
# configuration.
# 
# The -target option is not for routine use, and is provided only for
# exceptional situations such as recovering from errors or mistakes, or when
# Terraform specifically suggests to use it as part of an error message.
# 
# Do you really want to destroy all resources?
# Terraform will destroy all your managed infrastructure, as shown above.
# There is no undo. Only 'yes' will be accepted to confirm.
# 
# Enter a value: yes
# ...

So… BE CAREFUL! (I can’t stress this enough!)
Anyhow, I destroyed the rules that were preventing my rename operation:

terraform destroy -target azurerm_lb_outbound_rule.k8sthw-lb-out -target azurerm_lb_rule.k8sthw-lb-80-rule -target azurerm_lb_rule.k8sthw-lb-443-rule --auto-approve
# ...
#
# Destroy complete! Resources: 3 destroyed.

And then another terraform apply -auto-approve recreated everything that was needed and, finally, my backend pool got renamed:

# ...
# azurerm_lb_backend_address_pool.k8sthw-controllers-lb-pool: Refreshing state... [id=/subscriptions/fake-azure-subscription-id/resourceGroups/k8sthw-rg/providers/Micro...
# ...
# 
# Apply complete! Resources: 2 added, 0 changed, 3 destroyed.

Another idea I had was to taint the resources instead (see terraform taint -help), which would probably have been a lot nicer. Oh, well… maybe next time. 🙂
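For reference, the taint route would look something like this – marking the dependent rules as tainted so that a single apply recreates them along with the renamed pool (same resource addresses as in my destroy command; I haven’t actually tested this path):

```shell
# Mark the dependent rules for recreation on the next apply
terraform taint azurerm_lb_outbound_rule.k8sthw-lb-out
terraform taint azurerm_lb_rule.k8sthw-lb-80-rule
terraform taint azurerm_lb_rule.k8sthw-lb-443-rule

# One apply then destroys and recreates them, together with the rename
terraform apply -auto-approve
```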

As things are constantly improving, it shouldn’t be long until this is fixed (should it even be fixed?!). Until then, I hope this helps you with similar issues!

Cheers!
