Getting Started
Installation
Currently, you can install Lula from Repository Releases, Homebrew, or by building from source. Lula is compatible with Linux and macOS only.
Repository Release
Navigate to the Latest Release Page:
Open your web browser and go to the following URL to access the latest release of Lula:
https://github.com/defenseunicorns/lula/releases/latest
Download the Binary:
On the latest release page, find and download the appropriate binary for your operating system. E.g., lula_<version>_Linux_amd64
Download the checksums.txt:
In the list of assets on the release page, locate and download the checksums.txt file. This file contains the checksums for all the binaries in the release.
Verify the Download:
After downloading the binary and checksums.txt, you should verify the integrity of the binary using the checksum provided:
Open a terminal and navigate to the directory where you downloaded the binary and checksums.txt.
Run the following command to verify the checksum if using Linux:
sha256sum -c checksums.txt --ignore-missing
Run the following command to verify the checksum if using macOS:
shasum -a 256 -c checksums.txt --ignore-missing
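A successful check prints the verified file name followed by OK, similar to the following (the file name will match the binary you downloaded):
lula_<version>_Linux_amd64: OK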
On most Linux distributions, install the binary onto your $PATH by moving the downloaded binary to the /usr/local/bin directory:
sudo mv ./download/path/lula_<version>_Linux_amd64 /usr/local/bin/lula
Homebrew
Homebrew is a package manager for macOS and Linux. You can install Lula with Homebrew by running the following:
brew tap defenseunicorns/tap && brew install lula
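To confirm Lula is installed and on your PATH, you can print the version (the version subcommand shown here is an assumption; lula --help will also confirm the install):
lula version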
From Source
Clone the repository to your local machine and change into the lula directory:
git clone https://github.com/defenseunicorns/lula.git && cd lula
While in the lula directory, compile the tool into an executable binary. This outputs the lula binary to the bin directory:
make build
On most Linux distributions, install the binary onto your $PATH by moving the compiled binary to the /usr/local/bin directory:
sudo mv ./bin/lula /usr/local/bin/lula
Quick Start
See the following tutorials for some introductory lessons on how to use Lula. If you are unfamiliar with Lula, the best place to start is the “Simple Demo”.
Tutorials
Lula Validations
Lula Validation manifests are the underlying mechanisms that dictate the evaluation of a system against a control, resulting in satisfied or not-satisfied. A Lula Validation is linked to a control within a component definition via the OSCAL-specific property, links.
Developing Lula Validations can sometimes be more art than science, but generally they should aim to be clear, concise, and robust to system changes. The following guides will help you get started with developing Lula Validations.
See our references documentation for additional information about Lula Validations.
Configuration
Lula supports the addition of a configuration file for specifying CLI flags and templating values. See our configuration guide for more information.
1 - Configuration
Lula allows the use and specification of a config file in the following ways:
- Checking the current working directory for a lula-config.yaml file
- Specification with the environment variable LULA_CONFIG=<path>
Environment variables can be used to specify configuration values through the use of LULA_<VAR>, e.g., LULA_TARGET=il5.
Identification
If a configuration file is identified, Lula will log which file is used to stdout:
Using config file /home/dev/work/lula/lula-config.yaml
Precedence
The precedence for configuring settings, such as target, follows this hierarchy:
Command Line Flag > Environment Variable > Configuration File
Command Line Flag:
When a setting like target is specified using a command line flag, this value takes the highest precedence, overriding any environment variable or configuration file settings.
Environment Variable:
If the setting is not provided via a command line flag, an environment variable (e.g., export LULA_TARGET=il5) will take precedence over the configuration file.
Configuration File:
In the absence of both a command line flag and an environment variable, the value specified in the configuration file will be used. This will override system defaults.
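As a sketch of this hierarchy in practice (the --target flag spelling is an assumption for illustration):
# lula-config.yaml sets target: il4
# The environment variable overrides the config file value:
export LULA_TARGET=il5
# A command line flag would override both (hypothetical invocation):
lula validate -f oscal-component.yaml --target il5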
Support
Command variables can be set in the configuration file:
lula-config.yaml
log_level: debug
target: il4
summary: true
Templating Configuration Fields
Templating values are set in the configuration file via the use of constants
and variables
fields.
Constants
A sample constants section of a lula-config.yaml file is as follows:
constants:
  type: software
  title: lula
  resources:
    name: test-pod-label
    namespace: validation-test
    imagelist:
      - nginx
      - nginx2
Constants will respect the structure of a map[string]interface{} and can be referenced as follows:
# validation.yaml
metadata:
  name: sample {{ .const.type }} validation for {{ .const.title }}
domain:
  type: kubernetes
  kubernetes-spec:
    resources:
      - name: myPod
        resource-rule:
          name: {{ .const.resources.name }}
          version: v1
          resource: pods
          namespaces: [{{ .const.resources.namespace }}]
provider:
  type: opa
  opa-spec:
    rego: |
      package validate
      import rego.v1

      validate if {
        input.myPod.metadata.name == "{{ .const.resources.name }}"
        input.myPod.containers[_].image in { {{ .const.resources.imagelist | concatToRegoList }} }
      }
And will be rendered as:
metadata:
  name: sample software validation for lula
domain:
  type: kubernetes
  kubernetes-spec:
    resources:
      - name: myPod
        resource-rule:
          name: test-pod-label
          version: v1
          resource: pods
          namespaces: [validation-test]
provider:
  type: opa
  opa-spec:
    rego: |
      package validate
      import rego.v1

      validate if {
        input.myPod.metadata.name == "test-pod-label"
        input.myPod.containers[_].image in { "nginx", "nginx2" }
      }
The constant's keys should be referenced in the format .const.<key> and should not contain any '-' or '.' characters, as these will not respect the go text/template format.
[!IMPORTANT]
Due to viper limitations, all constants should be referenced in the template as lowercase values.
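For example, a hypothetical constants section illustrating the naming rule:
constants:
  # OK: referenced as {{ .const.podname }}
  podname: test-pod
  # Avoid: a key like pod-name cannot be referenced, since
  # {{ .const.pod-name }} does not parse as a go template field access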
Variables
A sample variables section of a lula-config.yaml file is as follows:
variables:
  - key: some_lula_secret
    sensitive: true
  - key: some_env_var
    default: this-should-be-overridden
The variables section is a list of key, default, and sensitive fields, where sensitive and default are optional. The key and default fields are strings, and the sensitive field is a boolean.
A default value can be specified in the case where an environment variable may or may not be set; however, an environment variable will always take precedence over a default value.
The environment variable should follow the pattern of LULA_VAR_<key> (not case sensitive), where <key> is the key specified in the variables section.
When using sensitive variables, the default behavior is to mask the value in the output of the template.
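As a sketch, the variables above could be supplied at run time like so (values are illustrative):
# Overrides the default for some_env_var:
export LULA_VAR_SOME_ENV_VAR=my-env-var
# Supplies the sensitive variable (masked in templated output):
export LULA_VAR_SOME_LULA_SECRET=s3cr3t-value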
2 - Develop a Validation
This document describes some best practices for developing and using a Lula Validation, the primary mechanism for evaluating a system's compliance against a specified control.
About
Lula Validations are constructed by establishing the collection of measurements about a system, given by the specified Domain, and the evaluation of adherence, performed by the Provider.
The currently supported domains are:
- Kubernetes
- API
The currently supported providers are:
- OPA (Open Policy Agent)
- Kyverno
Creating a Sample Validation
Here, we will step through creating a sample validation using the Kubernetes domain and OPA provider. Generating a validation is in the scope of answering some control or standard. For instance, our control might be something like “system implements test application as target for development purposes”. Our validation should then seek to prove that some “test application” is running in our domain, i.e., Kubernetes.
Pre-Requisites
Steps
[!NOTE]
Demo files can be found in the lula repository under demo/develop-validation
Assume we have some component definition for Podinfo with the associated standard we are trying to prove the system satisfies:
component-definition:
  uuid: a506014d-cb8a-4db9-ac48-ef72f7209a60
  metadata:
    last-modified: 2024-07-11T13:38:09.633174-04:00
    oscal-version: 1.1.2
    published: 2024-07-11T13:38:09.633174-04:00
    remarks: Lula Generated Component Definition
    title: Component Title
    version: 0.0.1
  components:
    - uuid: 75859c1e-30f5-4fde-9ad4-c79f863b049f
      type: software
      title: podinfo
      description: Sample application
      control-implementations:
        - uuid: a3039927-839c-5745-ac4e-a9993bcd60ed
          source: https://github.com/defenseunicorns/lula
          description: Control Implementation Description
          implemented-requirements:
            - uuid: 257d2b2a-fda7-49c5-9a2b-acdc995bc8e5
              control-id: ID-1
              description: >-
                Podinfo, a sample application, is deployed into the cluster and exposed for testing purposes.
              remarks: >-
                System implements test application as target for development purposes.
We recognize that we can satisfy this control by proving that podinfo is alive in the cluster. If we know nothing about podinfo, we may first want to identify which Kubernetes constructs are used in its configuration:
$ kubectl get all -n podinfo
NAME READY STATUS RESTARTS AGE
pod/my-release-podinfo-fb6d4888f-ptlss 1/1 Running 0 17m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-release-podinfo ClusterIP 10.43.172.65 <none> 9898/TCP,9999/TCP 17m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-release-podinfo 1/1 1 1 17m
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-release-podinfo-fb6d4888f 1 1 1 17m
Now that we know what resources are in the podinfo namespace, we can use our Kubernetes knowledge to deduce that proving podinfo is healthy in the cluster could be performed by looking at the status of the podinfo deployment for the replicas value to match readyReplicas:
$ kubectl get deployment my-release-podinfo -n podinfo -o json | jq '.status'
{
  "availableReplicas": 1,
  "conditions": [
    {
      "lastTransitionTime": "2024-07-11T17:36:53Z",
      "lastUpdateTime": "2024-07-11T17:36:53Z",
      "message": "Deployment has minimum availability.",
      "reason": "MinimumReplicasAvailable",
      "status": "True",
      "type": "Available"
    },
    {
      "lastTransitionTime": "2024-07-11T17:36:53Z",
      "lastUpdateTime": "2024-07-11T17:36:56Z",
      "message": "ReplicaSet \"my-release-podinfo-fb6d4888f\" has successfully progressed.",
      "reason": "NewReplicaSetAvailable",
      "status": "True",
      "type": "Progressing"
    }
  ],
  "observedGeneration": 1,
  "readyReplicas": 1,
  "replicas": 1,
  "updatedReplicas": 1
}
With this we should now have enough information to write our Lula Validation! First construct the top-matter metadata:
Run lula tools uuidgen to get a unique ID for your validation:
$ lula tools uuidgen
ad38ef57-99f6-4ac6-862e-e0bc9f55eebe
Add a validation.yaml file with the following:
metadata:
  name: check-podinfo-health
  uuid: ad38ef57-99f6-4ac6-862e-e0bc9f55eebe
Construct the domain:
Since we are extracting Kubernetes manifest data as validation “proof”, the domain we use should be kubernetes.
domain:
  type: kubernetes
  kubernetes-spec:
    resources:
      - name: podinfoDeployment
        resource-rule:
          name: my-release-podinfo
          namespaces: [podinfo]
          group: apps
          version: v1
          resource: deployments
Note a few things about the specification for obtaining these kubernetes resources:
- resources is used as an array of resources we are asking for from the cluster
- name is the keyword that will be used as an input to the policy, stated below in the provider. Note - to play nicely with the policy, it is best to make this a single word, camel-cased if desired.
- resource-rule is the api specification for the resource being extracted
- name is the name of our deployment, my-release-podinfo
- namespaces is the list of namespaces; one can be provided, but it must be in list format
- group, version, resource are the values required to access the given resource from the kubernetes API
See reference for more information about the Lula Validation schema and kubernetes domain.
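The group/version/resource triple mirrors kubectl's fully-qualified resource syntax, so you can sanity-check the rule against the cluster; for instance, the rule above corresponds roughly to:
kubectl get deployments.v1.apps my-release-podinfo -n podinfo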
Construct the provider and write the OPA policy:
Any provider should be compatible with the domain outputs; here we’ve decided to use OPA and rego, so our provider section is as follows:
provider:
  type: opa
  opa-spec:
    rego: |
      package validate
      import rego.v1

      # Default values
      default validate := false
      default msg := "Not evaluated"

      # Validation result
      validate if {
        check_podinfo_healthy.result
      }
      msg = check_podinfo_healthy.msg

      check_podinfo_healthy = {"result": true, "msg": msg} if {
        input.podinfoDeployment.spec.replicas > 0
        input.podinfoDeployment.status.availableReplicas == input.podinfoDeployment.status.replicas
        msg := "Number of replicas > 0 and all replicas are available."
      } else = {"result": false, "msg": msg} if {
        msg := "Podinfo not available."
      }
    output:
      validation: validate.validate
      observations:
        - validate.msg
The Rego policy language can be a little funny looking at first glance; check out both the rego docs and the OPA Provider reference for more information about rego.
With that said, some things are important to highlight about the policy:
- package validate is mandatory at the top (you can use any package name you want, but if a different value is used, the output.validation needs to be updated accordingly)
- import rego.v1 is optional, but recommended as OPA looks to upgrade to v1
- The “Default values” section is best practice to set these to protect against a result that yields undefined values for these variables
- The “Validation result” section defines the rego evaluation on the input.podinfoDeployment - checking that both the number of replicas is greater than 0 and the available and requested replicas are equal.
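If you have the opa CLI installed, you can also iterate on the policy outside Lula by saving the rego to a file and evaluating it directly against the collected resources (validate.rego is an assumed filename here):
opa eval --input resources.json --data validate.rego "data.validate.validate"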
Putting it all together, we are left with the validation.yaml; let’s run some commands to validate our validation:
Get the resources to visually inspect that they are what you expect from the domain and in the right structure for the provider’s policy:
$ lula dev get-resources -f validation.yaml -o resources.json
The result should be a resources.json file that looks roughly as follows:
{
  "podinfoDeployment": {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    # ... rest of the json
  }
}
Now check that the validation results in the expected outcome:
$ lula dev validate -f validation.yaml
• Observations:
• --> validate.msg: Number of replicas > 0 and all replicas are available.
• Validation completed with 1 passing and 0 failing results
If we expected this validation to fail, we would have added the -e=false flag.
Now that we have our baseline validation, and we know it is returning an expected result for our current cluster configuration, we should probably ensure that the policy results are successful when other resource cases exist. There are a few options here:
- Manually modify the resources in your cluster and re-run lula dev validate
- Manually modify the resources.json and test those
If we have a test cluster, perhaps changing some things about it is acceptable, but for this case I’m just going to take the path of least resistance and modify the resources.json:
Copy your resources.json and rename it to resources-bad.json. Find podinfoDeployment.status.replicas and change the value to 0. Run lula dev validate with those resources as the input, along with our expected failure outcome:
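An abridged sketch of what the edited resources-bad.json should contain (only the changed field shown):
{
  "podinfoDeployment": {
    # ... rest of the json
    "status": {
      # ... rest of the status
      "availableReplicas": 1,
      "replicas": 0
    }
  }
}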
$ lula dev validate -f validation.yaml -r resources-bad.json -e=false
• Observations:
• --> validate.msg: Podinfo not available.
• Validation completed with 0 passing and 1 failing results
Success! Additional conditions can be tested this way to fully stress-test the validity of the policy.
Preferred Approach: Implement tests natively in the validation, using the testing guide. See the next tutorial, Test a Validation, for more information.
Finally, we can bring this back into the component-definition. This validation should be added as a link to the respective implemented-requirement:
# ... rest of component definition
implemented-requirements:
  - uuid: 257d2b2a-fda7-49c5-9a2b-acdc995bc8e5
    control-id: ID-1
    description: >-
      Podinfo, a sample application, is deployed into the cluster and exposed for testing purposes.
    remarks: >-
      System implements test application as target for development purposes.
    links:
      - href: 'file:./validation.yaml'
        rel: lula
        text: Check that Podinfo is healthy
Now that we have our full OSCAL Component Definition model specified, we can take this off to validate and evaluate the system!
Limitations
We are aware that many of these validations are brittle to environment changes, for instance if namespaces change. See the templating doc for more information on how to create modular validations.
Additionally, since we are adding these validations to OSCAL yaml documents, there is some ugliness with having to compose strings of yaml into yaml. We support “remote” validations, where instead of a reference to a backmatter uuid, a link to a file is provided. A current limitation is that this does not support authentication if the remote link is in a protected location.
3 - Lula in CI
Lula is designed to evaluate the continual compliance of a system, and as such is a valuable tool to implement in a CI environment to provide rapid feedback to developers if a system moves out of compliance. The Lula-Action repo supports the use of Lula in GitHub workflows, and this document provides an outline for implementation.
Pre-Requisites
To use Lula to validate and evaluate a system in development, a pre-requisite is having an OSCAL Component Definition model, along with linked Lula Validations, existing in the repository. A sample structure follows:
.
|-- .github
|   |-- workflows
|   |   |-- lint.yaml          # Existing workflow to lint
|   |   |-- test.yaml          # Existing workflow to test system
|-- README.md
|-- LICENSE
|-- compliance
|   |-- oscal-component.yaml   # OSCAL Component Definition
|-- src
|   |-- main
|   |-- test
Steps
Add Lula linting to .github/workflows/lint.yaml:
name: Lint
on:
  pull_request:
    branches: [ "main" ]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      # ... Other steps
      - name: Setup Lula
        uses: defenseunicorns/lula-action/setup@main
        with:
          version: v0.4.1
      - name: Lint OSCAL file
        uses: defenseunicorns/lula-action/lint@main
        with:
          oscal-target: ./compliance/oscal-component.yaml
      # ... Other steps
Additional linting targets may be added to this list as comma-separated values, e.g., component1.yaml,component2.yaml. Note that linting only validates the correctness of the OSCAL.
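For example, a hypothetical multi-target lint step:
- name: Lint OSCAL files
  uses: defenseunicorns/lula-action/lint@main
  with:
    oscal-target: ./compliance/component1.yaml,./compliance/component2.yaml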
Add Lula validation and evaluation to the testing workflow, .github/workflows/test.yaml:
name: Test
on:
  pull_request:
    branches: [ "main" ]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # ... Other steps
      - name: Setup Lula
        uses: defenseunicorns/lula-action/setup@main
        with:
          version: v0.4.1
      - name: Validate Component Definition
        uses: defenseunicorns/lula-action/validate@main
        with:
          oscal-target: ./compliance/oscal-component.yaml
          threshold: ./assessment-results.yaml
      # ... Other steps
  test-upgrade:
    runs-on: ubuntu-latest
    steps:
      # ... Steps to deploy previous system version
      - name: Setup Lula
        uses: defenseunicorns/lula-action/setup@main
        with:
          version: v0.4.1
      - name: Validate Component Definition
        uses: defenseunicorns/lula-action/validate@main
        with:
          oscal-target: ./compliance/oscal-component.yaml
          threshold: ./assessment-results.yaml
      # ... Steps to upgrade system to current version
The first validate under test outputs an assessment-results model that provides the assessment of the system in its current state. The second validate, in the test-upgrade job, runs a validation on the previous version of the system prior to upgrade. It then compares the old and new assessment results to either pass or fail the job - failure occurs when the current system’s compliance is worse than the old system’s.
4 - Simple Demo
The following simple demo will step through a process to validate and evaluate Kubernetes cluster resources for a simple component-definition. The following pre-requisites are required to successfully run this demo:
Pre-Requisites
- Lula installed
- Kubectl
- A running Kubernetes cluster
  - Kind: kind create cluster -n lula-test
  - K3d: k3d cluster create lula-test
Steps
Clone the Lula repository to your local machine and change into the lula/demo/simple directory:
git clone https://github.com/defenseunicorns/lula.git && cd lula/demo/simple
Apply the namespace.yaml file to create a namespace for the demo:
kubectl apply -f namespace.yaml
Apply the pod.fail.yaml to create a pod in your cluster:
kubectl apply -f pod.fail.yaml
The oscal-component-opa.yaml is a simple OSCAL Component Definition model which establishes a sample control <> validation mapping. The validation that provides a satisfied or not-satisfied result to the control simply checks if the required label value is set for pod.fail. Run the following command to validate the component given by the failing pod:
lula validate -f oscal-component-opa.yaml
The output in your terminal should inform you that the single control validated is not-satisfied:
NOTE Saving log file to
/var/folders/6t/7mh42zsx6yv_3qzw2sfyh5f80000gn/T/lula-2024-07-08-10-22-57-840485568.log
• changing cwd to .
🔍 Collecting Requirements and Validations
• Found 1 Implemented Requirements
• Found 1 runnable Lula Validations
📐 Running Validations
✔ Running validation a7377430-2328-4dc4-a9e2-b3f31dc1dff9 -> evaluated -> not-satisfied
💡 Findings
• UUID: c80f76c5-4c86-4773-91a3-ece127f3d55a
• Status: not-satisfied
• OSCAL artifact written to: assessment-results.yaml
This will also produce an assessment-results model - review the findings and observations:
assessment-results:
  results:
    - description: Assessment results for performing Validations with Lula version v0.4.1-1-gc270673
      findings:
        - description: This control validates that the demo-pod pod in the validation-test namespace contains the required pod label foo=bar in order to establish compliance.
          related-observations:
            - observation-uuid: f03ffcd9-c18d-40bf-85f5-d0b1a8195ddb
          target:
            status:
              state: not-satisfied
            target-id: ID-1
            type: objective-id
          title: 'Validation Result - Component:A9D5204C-7E5B-4C43-BD49-34DF759B9F04 / Control Implementation: A584FEDC-8CEA-4B0C-9F07-85C2C4AE751A / Control: ID-1'
          uuid: c80f76c5-4c86-4773-91a3-ece127f3d55a
      observations:
        - collected: 2024-07-08T10:22:57.219213-04:00
          description: |
            [TEST]: a7377430-2328-4dc4-a9e2-b3f31dc1dff9 - lula-validation
          methods:
            - TEST
          relevant-evidence:
            - description: |
                Result: not-satisfied
          uuid: f03ffcd9-c18d-40bf-85f5-d0b1a8195ddb
      props:
        - name: threshold
          ns: https://docs.lula.dev/oscal/ns
          value: "false"
      reviewed-controls:
        control-selections:
          - description: Controls Assessed by Lula
            include-controls:
              - control-id: ID-1
        description: Controls validated
        remarks: Validation performed may indicate full or partial satisfaction
      start: 2024-07-08T10:22:57.219371-04:00
      title: Lula Validation Result
      uuid: f9ae56df-8709-49be-a230-2d3962bbd5f9
  uuid: 5bf89b23-6172-47c9-9d1c-d308fa543d61
Now, apply the pod.pass.yaml file to your cluster to configure the pod to pass compliance validation:
kubectl apply -f pod.pass.yaml
Run the following command in the lula directory:
lula validate -f oscal-component-opa.yaml
The output should now show the pod as passing the compliance requirement:
NOTE Saving log file to
/var/folders/6t/7mh42zsx6yv_3qzw2sfyh5f80000gn/T/lula-2024-07-08-10-25-47-3097295143.log
• changing cwd to .
🔍 Collecting Requirements and Validations
• Found 1 Implemented Requirements
• Found 1 runnable Lula Validations
📐 Running Validations
✔ Running validation a7377430-2328-4dc4-a9e2-b3f31dc1dff9 -> evaluated -> satisfied
💡 Findings
• UUID: 5a991d1f-745e-4acb-9435-373174816fcc
• Status: satisfied
• OSCAL artifact written to: assessment-results.yaml
This will append a new result to the assessment-results file:
- description: Assessment results for performing Validations with Lula version v0.4.1-1-gc270673
  findings:
    - description: This control validates that the demo-pod pod in the validation-test namespace contains the required pod label foo=bar in order to establish compliance.
      related-observations:
        - observation-uuid: a1d55b82-c63f-47da-8fab-87ae801357ac
      target:
        status:
          state: satisfied
        target-id: ID-1
        type: objective-id
      title: 'Validation Result - Component:A9D5204C-7E5B-4C43-BD49-34DF759B9F04 / Control Implementation: A584FEDC-8CEA-4B0C-9F07-85C2C4AE751A / Control: ID-1'
      uuid: 5a991d1f-745e-4acb-9435-373174816fcc
  observations:
    - collected: 2024-07-08T10:25:47.633634-04:00
      description: |
        [TEST]: a7377430-2328-4dc4-a9e2-b3f31dc1dff9 - lula-validation
      methods:
        - TEST
      relevant-evidence:
        - description: |
            Result: satisfied
      uuid: a1d55b82-c63f-47da-8fab-87ae801357ac
  props:
    - name: threshold
      ns: https://docs.lula.dev/oscal/ns
      value: "false"
  reviewed-controls:
    control-selections:
      - description: Controls Assessed by Lula
        include-controls:
          - control-id: ID-1
    description: Controls validated
    remarks: Validation performed may indicate full or partial satisfaction
  start: 2024-07-08T10:25:47.6341-04:00
  title: Lula Validation Result
  uuid: a9736e32-700d-472f-96a3-4dacf36fa9ce
Now that two assessment-results are established, the threshold can be evaluated. Perform an evaluate to compare the old and new state of the cluster:
lula evaluate -f assessment-results.yaml
The output will show that the new threshold for the system assessment is the more compliant evaluation of the control - i.e., the satisfied value of Control ID-1 is the threshold.
NOTE Saving log file to
/var/folders/6t/7mh42zsx6yv_3qzw2sfyh5f80000gn/T/lula-2024-07-08-10-29-53-4238890270.log
• New passing finding Target-Ids:
• ID-1
• New threshold identified - threshold will be updated to result a9736e32-700d-472f-96a3-4dacf36fa9ce
• Evaluation Passed Successfully
✔ Evaluating Assessment Results f9ae56df-8709-49be-a230-2d3962bbd5f9 against a9736e32-700d-472f-96a3-4dacf36fa9ce
• OSCAL artifact written to: ./assessment-results.yaml
5 - Templating
Lula supports composition of both Component Definition and Lula Validation template files. See the configuration documentation for more information on how to configure Lula to use templating. See the compose CLI command documentation for more information on the lula tools compose command flags to control how templating is applied.
Component Definition Templating
Component Definition templates can be used to create modular component definitions using values from the lula-config.yaml file.
Example:
component-definition:
  uuid: E6A291A4-2BC8-43A0-B4B2-FD67CAAE1F8F
  metadata:
    title: {{ .const.title }}
    last-modified: "2022-09-13T12:00:00Z"
    version: "20220913"
    oscal-version: 1.1.2
    parties:
      - uuid: C18F4A9F-A402-415B-8D13-B51739D689FF
        type: organization
        name: Lula Development
        links:
          - href: {{ .const.website }}
            rel: website
lula-config.yaml:
constants:
  title: Lula Demo
  website: https://github.com/defenseunicorns/lula
When this is composed with templating applied (lula tools compose -f <file> --render all) with the associated lula-config.yaml, the resulting component definition will be:
component-definition:
  uuid: E6A291A4-2BC8-43A0-B4B2-FD67CAAE1F8F
  metadata:
    title: Lula Demo
    last-modified: "2022-09-13T12:00:00Z"
    version: "20220913"
    oscal-version: 1.1.2
    parties:
      - uuid: C18F4A9F-A402-415B-8D13-B51739D689FF
        type: organization
        name: Lula Development
        links:
          - href: https://github.com/defenseunicorns/lula
            rel: website
Validation Templating
Validation templates can be used to create modular Lula Validations using values from the lula-config.yaml file. These can be composed into the component definition using the lula tools compose command.
Example:
component-definition:
  uuid: E6A291A4-2BC8-43A0-B4B2-FD67CAAE1F8F
  metadata:
    title: Lula Demo
    last-modified: "2022-09-13T12:00:00Z"
    version: "20220913"
    oscal-version: 1.1.2 # This version should remain one version behind latest version for `lula dev upgrade` demo
    parties:
      # Should be consistent across all of the packages, but where is ground truth?
      - uuid: C18F4A9F-A402-415B-8D13-B51739D689FF
        type: organization
        name: Lula Development
        links:
          - href: https://github.com/defenseunicorns/lula
            rel: website
  components:
    - uuid: A9D5204C-7E5B-4C43-BD49-34DF759B9F04
      type: {{ .const.type }}
      title: {{ .const.title }}
      description: |
        Lula - the Compliance Validator
      purpose: Validate compliance controls
      responsible-roles:
        - role-id: provider
          party-uuids:
            - C18F4A9F-A402-415B-8D13-B51739D689FF # matches parties entry for Defense Unicorns
      control-implementations:
        - uuid: A584FEDC-8CEA-4B0C-9F07-85C2C4AE751A
          source: https://raw.githubusercontent.com/usnistgov/oscal-content/master/nist.gov/SP800-53/rev5/json/NIST_SP-800-53_rev5_catalog.json
          description: Validate generic security requirements
          implemented-requirements:
            - uuid: 42C2FFDC-5F05-44DF-A67F-EEC8660AEFFD
              control-id: ID-1
              description: >-
                This control validates that the demo-pod pod in the validation-test namespace contains the required pod label foo=bar in order to establish compliance.
              links:
                - href: "./validation.tmpl.yaml"
                  text: local path template validation
                  rel: lula
Where ./validation.tmpl.yaml is:
metadata:
  name: Test validation with templating
  uuid: 99fc662c-109a-4e26-8398-75f3db67f862
domain:
  type: kubernetes
  kubernetes-spec:
    resources:
      - name: podvt
        resource-rule:
          name: {{ .const.resources.name }}
          version: v1
          resource: pods
          namespaces: [{{ .const.resources.namespace }}]
provider:
  type: opa
  opa-spec:
    rego: |
      package validate
      import rego.v1

      # Default values
      default validate := false
      default msg := "Not evaluated"

      # Validation result
      validate if {
        { "one", "two", "three" } == { {{ .const.resources.exemptions | concatToRegoList }} }
        "{{ .var.some_env_var }}" == "my-env-var"
        "{{ .var.some_lula_secret }}" == "********"
      }
      msg = validate.msg

      value_of_my_secret := {{ .var.some_lula_secret }}
Executing lula tools compose -f ./component-definition.yaml --render all --render-validations will result in:
component-definition:
  back-matter:
    resources:
      - description: |
          domain:
            kubernetes-spec:
              create-resources: null
              resources:
                - description: ""
                  name: podvt
                  resource-rule:
                    group: ""
                    name: test-pod-label
                    namespaces:
                      - validation-test
                    resource: pods
                    version: v1
            type: kubernetes
          lula-version: ""
          metadata:
            name: Test validation with templating
            uuid: 99fc662c-109a-4e26-8398-75f3db67f862
          provider:
            opa-spec:
              rego: |
                package validate
                import rego.v1

                # Default values
                default validate := false
                default msg := "Not evaluated"

                # Validation result
                validate if {
                  { "one", "two", "three" } == { "one", "two", "three" }
                  "this-should-be-overridden" == "my-env-var"
                  "" == "********"
                }
                msg = validate.msg

                value_of_my_secret :=
            type: opa
        title: Test validation with templating
        uuid: 99fc662c-109a-4e26-8398-75f3db67f862
  components:
    - control-implementations:
        - description: Validate generic security requirements
          implemented-requirements:
            - control-id: ID-1
              description: This control validates that the demo-pod pod in the validation-test namespace contains the required pod label foo=bar in order to establish compliance.
              links:
                - href: '#99fc662c-109a-4e26-8398-75f3db67f862'
                  rel: lula
                  text: local path template validation
              uuid: 42C2FFDC-5F05-44DF-A67F-EEC8660AEFFD
          source: https://raw.githubusercontent.com/usnistgov/oscal-content/master/nist.gov/SP800-53/rev5/json/NIST_SP-800-53_rev5_catalog.json
          uuid: A584FEDC-8CEA-4B0C-9F07-85C2C4AE751A
      description: |
        Lula - the Compliance Validator
      purpose: Validate compliance controls
      responsible-roles:
        - party-uuids:
            - C18F4A9F-A402-415B-8D13-B51739D689FF
          role-id: provider
      title: lula
      type: software
      uuid: A9D5204C-7E5B-4C43-BD49-34DF759B9F04
  metadata:
    last-modified: XXX
    oscal-version: 1.1.2
    parties:
      - links:
          - href: https://github.com/defenseunicorns/lula
            rel: website
        name: Lula Development
        type: organization
        uuid: C18F4A9F-A402-415B-8D13-B51739D689FF
    title: Lula Demo
    version: "20220913"
  uuid: E6A291A4-2BC8-43A0-B4B2-FD67CAAE1F8F
Composing Validation Templates
If validations are composed into a component definition AND the validation is still intended to be a template, it must be a valid yaml document. For example, the above validation.tmpl.yaml is invalid yaml, as the resource-rule.name field is not encapsulated in quotes. A valid yaml version of the above template would be:
metadata:
  name: Test validation with templating
  uuid: 99fc662c-109a-4e26-8398-75f3db67f862
domain:
  type: kubernetes
  kubernetes-spec:
    resources:
      - name: podvt
        resource-rule:
          name: "{{ .const.resources.name }}"
          version: v1
          resource: pods
          namespaces: ["{{ .const.resources.namespace }}"]
provider:
  type: opa
  opa-spec:
    rego: |
      package validate
      import rego.v1

      # Default values
      default validate := false
      default msg := "Not evaluated"

      # Validation result
      validate if {
        { "one", "two", "three" } == { {{ .const.resources.exemptions | concatToRegoList }} }
        "{{ .var.some_env_var }}" == "my-env-var"
        "{{ .var.some_lula_secret }}" == "********"
      }
      msg = validate.msg

      value_of_my_secret := {{ .var.some_lula_secret }}
6 - Test a Validation
Writing tests for a Lula Validation should be a key part of the validation development process. The purpose of testing is to ensure that the Domain is returning expected data AND the Provider is correctly interpreting and “validating” that data. The testing framework is valuable both to document the tests the domain/provider passes (e.g., to aid in validation review) and to set up a repeatable test suite for the validations to be verified when the environment changes.
About
This document will guide you through the process of writing tests for a Lula Validation. It will build on the Develop a Validation guide, so it is recommended to read that first. Additional documentation on the testing framework can be found in the testing reference.
Writing Tests for a Lula Validation
Pre-Requisites
Steps
[!NOTE]
Demo files can be found in the lula repository under demo/validation-tests
- Assume we have the following Lula Validation:
metadata:
  name: check-podinfo-health
  uuid: ad38ef57-99f6-4ac6-862e-e0bc9f55eebe
domain:
  type: kubernetes
  kubernetes-spec:
    resources:
      - name: podinfoDeployment
        resource-rule:
          name: my-release-podinfo
          namespaces: [podinfo]
          group: apps
          version: v1
          resource: deployments
provider:
  type: opa
  opa-spec:
    rego: |
      package validate
      import rego.v1

      # Default values
      default validate := false
      default msg := "Not evaluated"

      # Validation result
      validate if {
        check_podinfo_healthy.result
      }
      msg = check_podinfo_healthy.msg

      check_podinfo_healthy = {"result": true, "msg": msg} if {
        input.podinfoDeployment.spec.replicas > 0
        input.podinfoDeployment.status.availableReplicas == input.podinfoDeployment.status.replicas
        msg := "Number of replicas > 0 and all replicas are available."
      } else = {"result": false, "msg": msg} if {
        msg := "Podinfo not available."
      }
    output:
      validation: validate.validate
      observations:
        - validate.msg
We’d like to verify that our rego policy is going to correctly evaluate the podinfoDeployment resource if it should change.
- We need to identify the types of changes we could expect to occur to the podinfoDeployment resource:
  - If the resource is not found, we expect the policy to be not-satisfied
  - If the resource is found, but the number of replicas is 0, we expect the policy to be not-satisfied
  - If the resource is found, and the number of replicas is greater than 0, but the available replicas are not equal to the requested replicas, we expect the policy to be not-satisfied
- Now that we’ve enumerated the possible outcomes, we can write our tests. We’ll start with the first test, which is to verify that the policy is not-satisfied if the resource is not found. We know that if the podinfoDeployment is not found, the following JSON will result from the domain spec:
{
  "podinfoDeployment": {}
}
To mimic this JSON structure in our test, we need to add the following to the changes section:
- path: podinfoDeployment
  type: delete
- path: "."
  type: add
  value-map:
    podinfoDeployment: {}
These changes generate the above JSON structure by first removing the podinfoDeployment from the resources, and then adding it back with an empty map.
[!NOTE]
This is an interesting case that highlights the limitations of the change types - due to the way the underlying merge functionality works, it is not possible to update a map with empty keys. If a key exists, the only way to set it as empty is to first delete it, and then add it back with an empty map.
So we can add the following tests section to the validation.yaml:
tests:
  - name: missing-podinfo-deployment
    expected-result: not-satisfied
    changes:
      - path: podinfoDeployment
        type: delete
      - path: "."
        type: add
        value-map:
          podinfoDeployment: {}
- For the second test case, we want to verify that the policy is not-satisfied if the resource is found, but the number of replicas is 0. This mimics a scenario where the deployment is in the cluster, but there are no pods.
An abridged version of the JSON manifest we expect for this scenario is:
{
  "podinfoDeployment": {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
      "name": "podinfo"
      // Rest of the metadata
    },
    "spec": {
      "replicas": 0
      // Rest of the spec
    }
  }
}
On first glance, we might be tempted to set the podinfoDeployment.spec.replicas to 0 using the following change:
# invalid change for our resource!
- path: podinfoDeployment.spec.replicas
  type: update
  value: "0"
However, this will NOT correctly generate the expected JSON structure, since the replicas field is a number, and not a string. Instead, we need to use the value-map change type, which allows us to set the value of a field to any type of value, as follows:
- path: podinfoDeployment.spec
  type: update
  value-map:
    replicas: 0
Now the tests, containing both test cases, will become:
tests:
  - name: missing-podinfo-deployment
    expected-result: not-satisfied
    changes:
      - path: podinfoDeployment
        type: delete
      - path: "."
        type: add
        value-map:
          podinfoDeployment: {}
  - name: zero-replicas
    expected-result: not-satisfied
    changes:
      - path: podinfoDeployment.spec
        type: update
        value-map:
          replicas: 0
- Finally, the last test case checks the scenario where replicas are requested, but the number of available replicas is not equal to the number of requested replicas. This case yields a structure similar to the previous case, where the change is:
- path: podinfoDeployment.status
  type: update
  value-map:
    availableReplicas: 0
- We can bring this back together and compose our validation:
metadata:
  name: check-podinfo-health
  uuid: ad38ef57-99f6-4ac6-862e-e0bc9f55eebe
domain:
  type: kubernetes
  kubernetes-spec:
    resources:
      - name: podinfoDeployment
        resource-rule:
          name: my-release-podinfo
          namespaces: [podinfo]
          group: apps
          version: v1
          resource: deployments
provider:
  type: opa
  opa-spec:
    rego: |
      package validate
      import rego.v1

      # Default values
      default validate := false
      default msg := "Not evaluated"

      # Validation result
      validate if {
        check_podinfo_healthy.result
      }
      msg = check_podinfo_healthy.msg

      check_podinfo_healthy = {"result": true, "msg": msg} if {
        input.podinfoDeployment.spec.replicas > 0
        input.podinfoDeployment.status.availableReplicas == input.podinfoDeployment.status.replicas
        msg := "Number of replicas > 0 and all replicas are available."
      } else = {"result": false, "msg": msg} if {
        msg := "Podinfo not available."
      }
    output:
      validation: validate.validate
      observations:
        - validate.msg
tests:
  - name: missing-podinfo-deployment
    expected-result: not-satisfied
    changes:
      - path: podinfoDeployment
        type: delete
      - path: "."
        type: add
        value-map:
          podinfoDeployment: {}
  - name: zero-replicas
    expected-result: not-satisfied
    changes:
      - path: podinfoDeployment.spec
        type: update
        value-map:
          replicas: 0
  - name: not-equal-replicas
    expected-result: not-satisfied
    changes:
      - path: podinfoDeployment.status
        type: update
        value-map:
          availableReplicas: 0
- Now that we have our validation and appropriate tests, we can run lula dev validate from our demo/validation-tests directory:
lula dev validate -f validation.yaml -r resources.json --run-tests --print-test-resources
And we should see the following output:
✔ Pass: missing-podinfo-deployment
• Result: not-satisfied
• --> validate.msg: Podinfo not available.
• Test Resources File Path: missing-podinfo-deployment.json
✔ Pass: zero-replicas
• Result: not-satisfied
• --> validate.msg: Podinfo not available.
• Test Resources File Path: zero-replicas.json
✔ Pass: not-equal-replicas
• Result: not-satisfied
• --> validate.msg: Podinfo not available.
• Test Resources File Path: not-equal-replicas.json
[!NOTE]
The --print-test-resources flag is useful for debugging, as it will print the resources used for each test to the validation directory.
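You can then inspect any of the printed test resource files to confirm the changes were applied as intended, for example with jq (assuming it is installed):
jq '.podinfoDeployment.spec.replicas' zero-replicas.json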