Scaling Splunk Securely by Building a Custom Terraform Provider

July 30, 2020 | Esha Mallya

Terraform lets DoorDash programmatically manage certain types of access across our infrastructure. However, the lack of a Terraform provider for Splunk, the tool we use to search, analyze, and visualize our data, made it difficult to manage user access programmatically and at scale.

The efficiency and security gains from integrating access management through Terraform led us to create a Terraform provider for Splunk.

Terraform providers, which currently exist for products as diverse as Akamai, GitHub, and MySQL, integrate services with Terraform. Before creating our Terraform provider for Splunk, we had to manage Splunk access separately, a process requiring more work for administrators without any improvement in data security.

We concluded that writing a new Terraform provider for Splunk was worth the effort for DoorDash, and the process worth documenting for the engineering community at large. As we will see in this post, Terraform provides a flexible and straightforward way to manage both existing and custom services. Furthermore, we intend to open source our Terraform provider so other organizations can benefit from this work and run more efficient, secure infrastructure.

What are custom Terraform providers? 

Terraform is a popular tool used for building, changing, and managing infrastructure in a safe, repeatable way by using files for configuration instead of manually editing service settings.

Terraform is logically divided into two parts, Terraform Core and Terraform Plugins: 

The Terraform Core component communicates with Plugins using RPC (remote procedure calls) and is responsible for discovering and loading plugins.

The Terraform Plugins component provides a mechanism for managing a specific service (Splunk, in this article). These service-specific plugins are known as custom Terraform providers.

The primary responsibilities of custom Terraform providers are:

  1. Initialization of any included libraries used to make API calls
  2. Authentication with the Infrastructure Provider
  3. Definition of resources that map to specific services

Each Terraform provider can manage one or more resources, where a resource represents a component within the provider that has a set of configurable attributes and a CRUD (create, read, update, delete) lifecycle. 

While it is possible to write a plugin in a different language, almost all Terraform plugins are written in Go to take advantage of the Terraform-provided helper libraries. These plugins are then distributed in the form of Go binaries.

How do you write a custom Terraform provider? 

As we saw in the previous section, custom Terraform providers are nothing more than Go binaries written to manage certain resources within a service. In this section, we will provide step-by-step guidance on how to create your own Terraform provider.

Before you get started, make sure that you have access to an API client library for managing Splunk roles and SAML Groups. According to Terraform's documentation, client library logic should not be implemented in the provider. Instead, Terraform should consume an independent client library which implements the core logic for communicating upstream. 
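
To make the later examples concrete, below is a minimal sketch of what such a client library's surface could look like. The splunkclient.Client type and its DeleteRole method are referenced by the provider code later in this post; every other name here (NewClient, Role, CreateRole, ReadRole, UpdateRole) is hypothetical and simply stands in for whatever your actual client library exposes.

// Package splunkclient is a hypothetical, independent client library for the
// Splunk management API. Only the shape matters here: the HTTP details live
// in this library and stay out of the Terraform provider.
package splunkclient

// Client holds the connection details for a single Splunk instance.
type Client struct {
	Hostname string
	Port     int
	Username string
	Password string
}

// NewClient returns a client configured to talk to one Splunk instance.
func NewClient(hostname string, port int, username, password string) *Client {
	return &Client{Hostname: hostname, Port: port, Username: username, Password: password}
}

// Role models the Splunk role attributes that the provider manages.
type Role struct {
	Name         string
	Capabilities []string
	// ...remaining role attributes
}

// The provider only needs CRUD-style calls; each of these would wrap the
// corresponding Splunk REST endpoint.
func (c *Client) CreateRole(r Role) error             { return nil }
func (c *Client) ReadRole(name string) (*Role, error) { return nil, nil }
func (c *Client) UpdateRole(r Role) error             { return nil }
func (c *Client) DeleteRole(name string) error        { return nil }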

Listed below are the steps to create your own Terraform provider:

  1. Define the provider entrypoint.
  2. Define the provider itself.
  3. Define the resources that the provider will control.
  4. Build and test your Terraform Plugin binary.

Once these components are defined, the Terraform framework automatically manages all provider operations. The components utilize a Terraform-provided helper library that offers a high-level interface for writing resource providers.

Defining the provider entrypoint

The first step in creating a custom Terraform provider is to define the provider entrypoint. As mentioned above, custom Terraform providers are distributed as Go binaries, and Terraform requires each of these binaries to be prefixed with “terraform-provider-”. Hence, when we create a directory in our GOPATH, it should be named “terraform-provider-splunk”. Inside this directory, we specify the entrypoint to our provider in the main.go file. Below is a code sample that demonstrates the provider entrypoint.


package main


import (
	splunkplugin "github.com/doordash/terraform-provider-splunk/splunk"
	"github.com/hashicorp/terraform-plugin-sdk/plugin"
	"github.com/hashicorp/terraform-plugin-sdk/terraform"
)



func main() {
	plugin.Serve(&plugin.ServeOpts{
		ProviderFunc: func() terraform.ResourceProvider {
			return splunkplugin.Provider()
		},
	})
}

The code sample above imports three libraries into main.go: two from the Terraform Plugin SDK and one custom library. The two SDK libraries are:

  • The "github.com/hashicorp/terraform-plugin-sdk/plugin" library acts as the Terraform provider's gRPC server. It executes resource logic based on the parsed configuration it receives and returns to Terraform Core either the updated state of a resource or appropriate feedback in the form of validation or other errors.
  • The "github.com/hashicorp/terraform-plugin-sdk/terraform" library defines the interface that every resource provider must implement; Terraform uses it to create and manage resources in a configuration.

The custom library contains the provider implementation, which we will cover in the next section.

Defining the provider itself

In this section, we will cover the implementation details of the provider. A provider is responsible for understanding API interactions with a service and exposing its resources. Here we define the provider's entrypoint along with its properties, which are used to initialize the API client, declare the resources that will be controlled, and capture any configuration details. These properties may consist of the following:

  1. Schema: This is an optional schema for the provider's own configuration. The helper/schema library is used to define it.
  2. ResourcesMap: This property declares the list of resources that this provider will manage.
  3. ConfigureFunc: This is an optional function for configuring the provider. It can be used to authenticate to the service and initialize the API client.

import (
	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
)



func Provider() *schema.Provider {
	return &schema.Provider{
		Schema: map[string]*schema.Schema{
			"hostname": &schema.Schema{...
			},
			"port": &schema.Schema{...
			},
			"username": &schema.Schema{...
			},
			"password": &schema.Schema{...
			},
		},
		ResourcesMap: map[string]*schema.Resource{
			"splunk_role": resourceRole(),
			"splunk_saml_group": resourceSamlGroup(),
		},
		ConfigureFunc: providerConfigure,
	}
}
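
The ConfigureFunc referenced above, providerConfigure, is not shown in the snippet. A minimal sketch, assuming the hypothetical splunkclient.NewClient constructor from the client library sketch earlier and a schema in which port is declared as schema.TypeInt, could look like this:

// providerConfigure reads the provider-level configuration and returns an
// initialized API client. Whatever is returned here is passed to every
// resource CRUD function as the meta argument (m).
func providerConfigure(d *schema.ResourceData) (interface{}, error) {
	// A real implementation would typically also verify the credentials
	// here, for example by making a test call to Splunk.
	client := splunkclient.NewClient(
		d.Get("hostname").(string),
		d.Get("port").(int),
		d.Get("username").(string),
		d.Get("password").(string),
	)
	return client, nil
}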

Defining the resources that the provider controls

In this section, we will define the Splunk resources and their attributes that we want the provider to control: Splunk roles and Splunk SAML Groups. Splunk roles determine the level of access users have and the tasks they can perform, whereas Splunk SAML Groups allow authorized groups on your SAML server to log in to your Splunk instance. A major part of Splunk authentication and authorization is managed through these two resources and their attributes, which we define below. 

The resource schema from the helper library provides an abstraction that leaves only the CRUD operations for us to implement, while off-loading concerns such as validation and diff generation to the library.

Each resource consists of the following:

1) Define Create, Read, Update, and Delete functions.

  • Each function is passed the schema resource data and the provider's configured meta value (the API client in our case).
  • The Create and Update functions set the resource data ID using SetId; a non-blank ID tells Terraform that the resource was created and is the value used to read the resource again (a sketch of a Create function appears after the resource schema example in step 2 below).
  • The Delete function sets the ID to a blank string, as in the following example:
func resourceRoleDelete(d *schema.ResourceData, m interface{}) error {
	apiClient := m.(*splunkclient.Client)

	err := apiClient.DeleteRole(d.Get("rolename").(string))
	if err != nil {
		return err
	}
	d.SetId("")
	return nil
}

2) Resource schema: the resource's attributes. For each attribute, you can define the following (among others): 

  • ForceNew: if this attribute changes, delete and recreate the resource rather than updating it in place.
  • Required/Optional
  • Type
  • Description
  • Default
import (
	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
)

func resourceRole() *schema.Resource {
	return &schema.Resource{
		Create: resourceRoleCreate,
		Read:   resourceRoleRead,
		Update: resourceRoleUpdate,
		Delete: resourceRoleDelete,

		Schema: map[string]*schema.Schema{
			"rolename": &schema.Schema{
				Type:        schema.TypeString,
				Required:    true,
				Description: "The name of the role, also acts as its unique ID. Required.",
				ForceNew:    true,
			},
			"capabilities": &schema.Schema{...
			},
			"cumulativertsrchjobsquota": &schema.Schema{...
			},
			"cumulativesrchjobsquota": &schema.Schema{...
			},
			"defaultapp": &schema.Schema{...
			},
			"srchindexesallowed": &schema.Schema{...
			},
			"srchindexesdefault": &schema.Schema{...
			},
			"srchjobsquota": &schema.Schema{...
			},
			"rtsrchjobsquota": &schema.Schema{...
			},
			"srchdiskquota": &schema.Schema{...
			},
			"srchfilter": &schema.Schema{...
			},
			"importedroles": &schema.Schema{...
			},
		},
	}
}
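
Putting the two pieces together, a Create function for the role resource could look like the sketch below. It reuses the hypothetical splunkclient.Role type and CreateRole call from the client library sketch earlier, and the toStringSlice helper is introduced here purely for illustration. The important detail is the d.SetId call: once the role exists upstream, the resource gets a non-blank ID that Terraform uses for subsequent reads.

func resourceRoleCreate(d *schema.ResourceData, m interface{}) error {
	apiClient := m.(*splunkclient.Client)

	// Build the role from the configured attributes. Only two attributes
	// are shown; the real function would map all of them. This assumes
	// "capabilities" is declared as a schema.TypeList of strings.
	role := splunkclient.Role{
		Name:         d.Get("rolename").(string),
		Capabilities: toStringSlice(d.Get("capabilities").([]interface{})),
	}

	if err := apiClient.CreateRole(role); err != nil {
		return err
	}

	// The role name doubles as the resource's unique ID. A non-blank ID
	// tells Terraform the resource now exists.
	d.SetId(role.Name)
	return resourceRoleRead(d, m)
}

// toStringSlice converts the []interface{} that the schema returns for a
// list attribute into the []string the client library expects.
func toStringSlice(raw []interface{}) []string {
	out := make([]string, 0, len(raw))
	for _, v := range raw {
		out = append(out, v.(string))
	}
	return out
}

The Read function works in the other direction: it looks the role up by d.Id(), copies its attributes back into the state with d.Set, and calls d.SetId("") if the role no longer exists upstream so Terraform knows it must be recreated.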

Build and test your Terraform Plugin binary

Now that all components of our custom Terraform provider are defined, let us build and test the binary on our local machines. Build the Go binary (which is the Terraform provider Plugin) by running `go build -o terraform-provider-splunk`. This command outputs a binary named `terraform-provider-splunk`.

To test the binary on your local machine, place it in $HOME/.terraform.d/plugins (on macOS), then follow these steps:

1) Initialize the provider and define a SAML Group resource and a role resource. A sample Terraform configuration file looks like this:

provider "splunk" {}

resource "splunk_role" "testrole" {
  rolename = "testrole"
  capabilities = [
    "search"
  ]
  srchindexesallowed = [
    "*"
  ]
  srchindexesdefault = [
    "test"
  ]
  cumulativertsrchjobsquota = 1
  cumulativesrchjobsquota   = 2
  defaultapp                = ""
  srchjobsquota             = 3
  rtsrchjobsquota           = 4
  srchdiskquota             = 5
  srchfilter                = "*"
}

resource "splunk_saml_group" "test" {
  samlgroupname = "testgroup"
  rolenames     = ["testrole"]
}

2) Initialize Terraform by running `terraform init`, then run `terraform plan`. You should see the following output:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # splunk_role.testrole will be created
  + resource "splunk_role" "testrole" {
      + capabilities              = [
          + "search",
        ]
      + cumulativertsrchjobsquota = 1
      + cumulativesrchjobsquota   = 2
      + id                        = (known after apply)
      + rolename                  = "testrole"
      + rtsrchjobsquota           = 4
      + srchdiskquota             = 5
      + srchfilter                = "*"
      + srchindexesallowed        = [
          + "*",
        ]
      + srchindexesdefault        = [
          + "test",
        ]
      + srchjobsquota             = 3
    }

  # splunk_saml_group.test will be created
  + resource "splunk_saml_group" "test" {
      + id            = (known after apply)
      + rolenames     = [
          + "testrole",
        ]
      + samlgroupname = "testgroup"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

3) Apply the changes with `terraform apply`, or discard the local plan.

Conclusion

By now you hopefully have a good understanding of how to build a custom Terraform provider for Splunk. In the future, we hope to open source this Splunk Terraform provider so it can be downloaded as a Go binary. Using such a provider will help your company increase visibility, auditability, and scalability in a standardized and secure manner. 

Once you build and implement this solution, every permission change to a Splunk role, as well as every Splunk SAML Group mapping, can be made through code and tracked. Provisioning and deprovisioning these managed resources becomes simple and fast, and anyone new to managing Splunk authorization can take over quickly and easily.

In addition, you can leverage the steps covered in this article to write a Terraform provider of your own for another system.

