Difficulty: intermediate
Estimated Time: 30 minutes

This tutorial (v1.0.5) accompanies the information in the Monitoring Telemetry with Splunk guide.


Vault emits rich operational and usage data to provide its users with insight and intelligence about the server lifecycle. Two such streams of data available from an operating Vault server are telemetry metrics and audit device logs.

Diagram of Vault operational data

In addition to its operational and audit logging, Vault can export operational data in the form of audit device logs and telemetry metrics.

You can configure an agent like Fluentd to read a Vault file audit device log and export it to Splunk, and you can also configure telemetry to either push metrics or allow them to be pulled, depending on the solution you use. Aggregated metrics can then be visually analyzed with dashboards and alerted on based on your business and operational criteria.
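
One pull-based option, for example, is Vault's own metrics endpoint: if the server's telemetry configuration enables Prometheus retention, a collector (or you, for a quick check) can scrape metrics over HTTP. The address and token below are placeholders and are not part of this lab's configuration.

# Hypothetical pull-based check; adjust the address and token for your own setup.
curl --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/sys/metrics?format=prometheus"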

This scenario helps you become familiar with Vault telemetry in Splunk. It uses Terraform to automate deployment of a simple Docker-based infrastructure featuring a Vault server (v1.4.3), a Fluentd td-agent (v1.11), a Telegraf agent (v1.12.6), and a Splunk server (v8.0.4.1).

By assembling and using the solution in this scenario, you will gain a basic idea of what is possible and help inform your own telemetry aggregation and analysis solution.

You will perform these tasks while you work through the scenario:

  1. Start the containers - this is where Terraform helps to automate deployment. By defining and applying a plan, you can orchestrate the entire deployment, down to all required configuration items for each component of the stack.
  2. Prepare Vault - Initialize and unseal Vault, then authenticate with the initial root token and enable a file audit device (see the command sketch after this list).
  3. Access Splunk Web - Access the Splunk Web UI and view initial Vault telemetry metrics.
  4. Perform actions to generate metrics - Vault generates some runtime metrics even in an uninitialized and sealed state, but in this step you will generate additional metrics that you can then revisit in Splunk.
  5. Analyze generated metrics - Use Splunk Web to analyze, search, and visualize your Vault telemetry metrics including a simple dashboard.
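
As a preview of the Vault preparation task, the commands involved typically resemble the sketch below. The key-share counts, unseal key, root token, and audit log path are illustrative placeholders; the values used in this lab may differ.

# Illustrative sketch only; substitute the values returned by your own initialization.
vault operator init -key-shares=1 -key-threshold=1
vault operator unseal <unseal_key>
vault login <initial_root_token>
vault audit enable file file_path=/var/log/vault-audit.log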

Click START SCENARIO to begin.

🎉

You have successfully completed this scenario!

Learn more about secrets management and data protection with HashiCorp Vault:

https://learn.hashicorp.com/vault

Vault Telemetry with Splunk

Step 1 of 5

Start containers

Click a command to automatically copy it into the terminal and execute it.

The first step in this lab is to use Terraform to start the containers.

You will do this with 3 terraform commands, which accomplish the following tasks:

  1. Initialize the Terraform configuration
  2. Define a plan
  3. Apply the defined plan

Once the plan is applied, the infrastructure will be fully configured and ready to go after the Splunk provisioning completes. The entire process usually takes approximately 4 minutes.

Begin by initializing the Terraform configuration.

terraform init

Successful output includes the message "Terraform has been successfully initialized!". If instead you encounter an error about terraform missing, try the command once again.
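
If you want to confirm that the terraform binary is available before retrying, a quick version check can help.

# Optional sanity check; confirms terraform is installed and on the PATH.
terraform version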

Next, define a plan that will be written to the file vault-metrics-lab.plan.

terraform plan -out vault-metrics-lab.plan

If this step is successful you will find the following message in the terraform output.

This plan was saved to: vault-metrics-lab.plan
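
If you are curious about what the saved plan contains before applying it, you can optionally render it in human-readable form.

# Optional: inspect the saved plan file before applying it.
terraform show vault-metrics-lab.plan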

Finally, apply the plan.

NOTE: The apply process can require more than 3 minutes to complete. The moment after you apply the plan would be a great time to grab a fresh beverage or take a short break.

terraform apply vault-metrics-lab.plan

If all goes according to plan, you should observe an "Apply complete!" message like this in the output.

Apply complete! Resources: 7 added, 0 changed, 0 destroyed.
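
If you would like to see which resources Terraform created, you can optionally list them from state.

# Optional: list the resources Terraform now tracks in its state.
terraform state list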

Although Terraform has succeeded in deploying the infrastructure, the vtl-splunk container will still be provisioning and that takes additional time. To wait for Splunk to become fully ready with a healthy status, use this command.

# Poll Docker every 5 seconds until the vtl-splunk container reports a healthy status.
splunk_ready=0
while [ "$splunk_ready" = 0 ]
  do
    if docker ps -f name=vtl-splunk --format "{{.Status}}" \
    | grep -q '(healthy)'
        then
            splunk_ready=1
            echo "Splunk is ready."
        else
            # Print a progress dot while Splunk is still provisioning.
            printf "."
    fi
    sleep 5s
done

You can also manually confirm the container status with docker ps like this.

docker ps -f name=vtl --format "table {{.Names}}\t{{.Status}}"

Output should resemble this example.

NAMES               STATUS
vtl-splunk          Up About a minute (healthy)
vtl-vault           Up About a minute (unhealthy)
vtl-telegraf        Up 2 minutes

NOTE: The Vault container, vtl-vault, is listed as unhealthy while it is sealed; in this case, you have not yet initialized or unsealed Vault, so this status is correct and expected.
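
If you would like to verify this yourself, Vault's health endpoint reports distinct status codes for these states. The address below is a placeholder; the lab may map the Vault listener differently.

# Hypothetical check; 501 = not initialized, 503 = sealed, 200 = initialized, unsealed, and active.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8200/v1/sys/health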

Once your vtl-splunk container has a healthy status, click Continue to proceed to step 2, where you will initialize and unseal Vault, then log in to begin using it.