Getting Started

Installation

There are currently a couple of ways to install Lula. A Homebrew formula is planned but not yet implemented. Lula is currently only compatible with Linux and macOS.

From Source

  1. Clone the repository to your local machine and change into the lula directory

    git clone https://github.com/defenseunicorns/lula.git && cd lula
    
  2. While in the lula directory, compile the tool into an executable binary. This outputs the lula binary to the bin directory.

    make build
    
  3. On most Linux distributions, install the binary onto your $PATH by moving the downloaded binary to the /usr/local/bin directory:

    sudo mv ./bin/lula /usr/local/bin/lula
    

Download

  1. Navigate to the Latest Release Page: Open your web browser and go to the following URL to access the latest release of Lula: https://github.com/defenseunicorns/lula/releases/latest

  2. Download the Binary: On the latest release page, find and download the appropriate binary for your operating system. E.g., lula_<version>_Linux_amd64

  3. Download the checksums.txt: In the list of assets on the release page, locate and download the checksums.txt file. This file contains the checksums for all the binaries in the release.

  4. Verify the Download: After downloading the binary and checksums.txt, you should verify the integrity of the binary using the checksum provided:

    • Open a terminal and navigate to the directory where you downloaded the binary and checksums.txt.
    • Run the following command to verify the checksum if using Linux:
      sha256sum -c checksums.txt --ignore-missing
      
    • Run the following command to verify the checksum if using MacOS:
      shasum -a 256 -c checksums.txt --ignore-missing
      
  5. On most Linux distributions, install the binary onto your $PATH by moving the downloaded binary to the /usr/local/bin directory:

    sudo mv ./download/path/lula_<version>_Linux_amd64 /usr/local/bin/lula
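
After installing by either method, confirm the binary is discoverable on your PATH:

    which lula
    lula --help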
    

Quick Start

See the following tutorials for some introductory lessons on how to use Lula. If you are unfamiliar with Lula, the best place to start is the “Simple Demo”.

Tutorials

Lula Validations

Lula Validation manifests are the underlying mechanism that dictates how a system is evaluated against a control, resulting in a satisfied or not-satisfied determination. A Lula Validation is linked to a control within a component definition via the OSCAL-specific property, links.

Developing Lula Validations can sometimes be more art than science, but generally they should aim to be clear, concise, and robust to system changes. See our guide for developing Lula Validations and the references for additional information.

1 - Develop a Validation

This document describes some best practices for developing and using a Lula Validation, the primary mechanism for evaluating a system’s compliance against a specified control.

About

Lula Validations are constructed by establishing the collection of measurements about a system, given by the specified Domain, and the evaluation of adherence, performed by the Provider.

The currently supported domains are:

  • API
  • Kubernetes

The currently supported providers are:

  • OPA (Open Policy Agent)
  • Kyverno

Creating a Sample Validation

Here, we will step through creating a sample validation using the Kubernetes domain and OPA provider. A validation is written in the scope of answering some control or standard. For instance, our control might be something like “system implements test application as target for development purposes”. Our validation should then seek to prove that some “test application” is running in our domain, i.e., Kubernetes.

Pre-Requisites

  • Lula installed
  • Kubectl
  • Helm
  • A running Kubernetes cluster
    • Kind
      • kind create cluster -n lula-test
    • K3d
      • k3d cluster create lula-test
  • Podinfo deployed in the cluster
    $ helm repo add podinfo https://stefanprodan.github.io/podinfo
    
    $ helm upgrade -i my-release podinfo/podinfo -n podinfo --create-namespace
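
    Optionally, wait for podinfo to become ready before proceeding (the deployment name matches the Helm release installed above):

      kubectl rollout status deployment/my-release-podinfo -n podinfo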
    

Steps

[!NOTE] Demo files can be found in the lula repository under demo/develop-validation

  1. Assume we have some component definition for Podinfo with the associated standard we are trying to prove the system satisfies:

    component-definition:
      uuid: a506014d-cb8a-4db9-ac48-ef72f7209a60
      metadata:
        last-modified: 2024-07-11T13:38:09.633174-04:00
        oscal-version: 1.1.2
        published: 2024-07-11T13:38:09.633174-04:00
        remarks: Lula Generated Component Definition
        title: Component Title
        version: 0.0.1
      components:
      - uuid: 75859c1e-30f5-4fde-9ad4-c79f863b049f
        type: software
        title: podinfo
        description: Sample application
        control-implementations:
        - uuid: a3039927-839c-5745-ac4e-a9993bcd60ed
          source: https://github.com/defenseunicorns/lula
          description: Control Implementation Description
          implemented-requirements:
          - uuid: 257d2b2a-fda7-49c5-9a2b-acdc995bc8e5
            control-id: ID-1
            description: >-
              Podinfo, a sample application, is deployed into the cluster and exposed for testing purposes.
            remarks: >-
              System implements test application as target for development purposes.
    
  2. We recognize that we can satisfy this control by proving that podinfo is alive in the cluster. If we know nothing about podinfo, we may first want to identify which Kubernetes constructs are used in its configuration:

    $ kubectl get all -n podinfo 
    
    NAME                                     READY   STATUS    RESTARTS   AGE
    pod/my-release-podinfo-fb6d4888f-ptlss   1/1     Running   0          17m
    
    NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
    service/my-release-podinfo   ClusterIP   10.43.172.65   <none>        9898/TCP,9999/TCP   17m
    
    NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/my-release-podinfo   1/1     1            1           17m
    
    NAME                                           DESIRED   CURRENT   READY   AGE
    replicaset.apps/my-release-podinfo-fb6d4888f   1         1         1       17m
    
  3. Now that we know what resources are in the podinfo namespace, we can use our kubernetes knowledge to deduce that proving podinfo is healthy in the cluster could be performed by looking at the status of the podinfo deployment for the replicas value to match readyReplicas:

    $ kubectl get deployment my-release-podinfo -n podinfo -o json | jq '.status'
    
    {
    "availableReplicas": 1,
    "conditions": [
      {
        "lastTransitionTime": "2024-07-11T17:36:53Z",
        "lastUpdateTime": "2024-07-11T17:36:53Z",
        "message": "Deployment has minimum availability.",
        "reason": "MinimumReplicasAvailable",
        "status": "True",
        "type": "Available"
      },
      {
        "lastTransitionTime": "2024-07-11T17:36:53Z",
        "lastUpdateTime": "2024-07-11T17:36:56Z",
        "message": "ReplicaSet \"my-release-podinfo-fb6d4888f\" has successfully progressed.",
        "reason": "NewReplicaSetAvailable",
        "status": "True",
        "type": "Progressing"
      }
    ],
    "observedGeneration": 1,
    "readyReplicas": 1,
    "replicas": 1,
    "updatedReplicas": 1
    }
    
  4. With this we should now have enough information to write our Lula Validation! First construct the top-matter metadata:

    Run lula tools uuidgen to get a unique ID for your validation

    $ lula tools uuidgen              
    ad38ef57-99f6-4ac6-862e-e0bc9f55eebe
    

    Add a validation.yaml file with the following

    metadata:
      name: check-podinfo-health
      uuid: ad38ef57-99f6-4ac6-862e-e0bc9f55eebe
    
  5. Construct the domain:

    Since we are extracting Kubernetes manifest data as validation “proof”, the domain we use should be kubernetes.

    domain:
      type: kubernetes
      kubernetes-spec:
        resources:
          - name: podinfoDeployment
            resource-rule:
              name: my-release-podinfo
              namespaces: [podinfo]
              group: apps
              version: v1
              resource: deployments
    

    Note a few things about the specification for obtaining these kubernetes resources:

    • resources key is used as an array of resources we are asking for from the cluster
      • name is the keyword that will be used as an input to the policy, stated below in the provider. Note - to play nicely with the policy, it is best to make this a single word, camel-cased if desired.
      • resource-rule is the api specification for the resource being extracted
        • name is the name of our deployment, my-release-podinfo
        • namespaces is the list of namespaces; only one is needed here, but it must be provided in list format
        • group, version, resource are the values used to access the resource via the Kubernetes API (the GVR)

    See reference for more information about the Lula Validation schema and kubernetes domain.

  6. Construct the provider and write the OPA policy:

    The provider must be compatible with the domain outputs. Here, we’ve decided to use OPA and Rego, so our provider section is as follows:

    provider:
      type: opa
      opa-spec:
        rego: |
          package validate
          import rego.v1
    
          # Default values
          default validate := false
          default msg := "Not evaluated"
    
          # Validation result
          validate if {
            check_podinfo_healthy.result
          }
          msg = check_podinfo_healthy.msg
    
          check_podinfo_healthy = {"result": true, "msg": msg} if {
            input.podinfoDeployment.status.replicas > 0
            input.podinfoDeployment.status.availableReplicas == input.podinfoDeployment.status.replicas
            msg := "Number of replicas > 0 and all replicas are available."
          } else = {"result": false, "msg": msg} {
            msg := "Podinfo not available."
          }
        output:
          validation: validate.validate
          observations:
            - validate.msg
    

    The Rego policy language can be a little funny looking at first glance, check out both the rego docs and the OPA Provider reference for more information about rego.

    With that said, some things are important to highlight about the policy

    • package validate is mandatory at the top (you can use any package name you want, but if a different value is used the output.validation needs to be updated accordingly)
    • import rego.v1 is optional, but recommended as OPA looks to upgrade to v1
    • The “Default values” section is a best practice that protects against a result where these variables would otherwise be undefined
    • The “Validation result” section defines the rego evaluation on the input.podinfoDeployment - checking that both the number of replicas is greater than 0 and the available and requested replicas are equal.
  7. Putting it all together, we are left with the complete validation.yaml, shown below for reference. We can then run some commands to check our validation.
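
    The assembled file, combining the metadata, domain, and provider sections from the previous steps, should look roughly like this:

    metadata:
      name: check-podinfo-health
      uuid: ad38ef57-99f6-4ac6-862e-e0bc9f55eebe
    domain:
      type: kubernetes
      kubernetes-spec:
        resources:
          - name: podinfoDeployment
            resource-rule:
              name: my-release-podinfo
              namespaces: [podinfo]
              group: apps
              version: v1
              resource: deployments
    provider:
      type: opa
      opa-spec:
        rego: |
          package validate
          import rego.v1

          # Default values
          default validate := false
          default msg := "Not evaluated"

          # Validation result
          validate if {
            check_podinfo_healthy.result
          }
          msg = check_podinfo_healthy.msg

          check_podinfo_healthy = {"result": true, "msg": msg} if {
            input.podinfoDeployment.status.replicas > 0
            input.podinfoDeployment.status.availableReplicas == input.podinfoDeployment.status.replicas
            msg := "Number of replicas > 0 and all replicas are available."
          } else = {"result": false, "msg": msg} {
            msg := "Podinfo not available."
          }
        output:
          validation: validate.validate
          observations:
            - validate.msg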

    Get the resources to visually inspect that they are what you expect from the domain and in the right structure for the provider’s policy:

    $ lula dev get-resources -f validation.yaml -o resources.json
    

    The result should be a resources.json file that looks roughly as follows:

    {
      "podinfoDeployment": {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        # ... rest of the json
      }
    }
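
    As an aside, if you want to iterate on the Rego policy outside of Lula, you can copy the policy body into its own file (validate.rego is just an illustrative name) and query it directly against the same resources file with the OPA CLI, assuming a recent opa binary is installed:

      # Evaluate the policy package against the gathered resources (hypothetical local workflow)
      opa eval --input resources.json --data validate.rego 'data.validate' --format pretty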
    

    Now check the validation is resulting in the expected outcome:

    $ lula dev validate -f validation.yaml                        
    •  Observations:
    •  --> validate.msg: Number of replicas > 0 and all replicas are available.
    •  Validation completed with 1 passing and 0 failing results
    

    If we expected this validation to fail, we would have added -e=false

  8. Now that we have our baseline validation, and we know it returns the expected result for our current cluster configuration, we should ensure the policy behaves as expected when the resources are in other states. There are two options here:

    1. Manually modify the resources in your cluster and re-run lula dev validate

    2. Manually modify the resources.json and test those

      If we have a test cluster, perhaps changing some things about it is acceptable, but for this case I’m just going to take the path of least resistance and modify the resources.json:

      Copy your resources.json and rename it resources-bad.json. Find podinfoDeployment.status.replicas and change the value to 0. Then run lula dev validate with those resources as the input, along with our expected failure outcome:
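
      If you prefer not to hand-edit the file, jq (already used above) can make the same change, writing the modified copy to resources-bad.json:

        # Set the deployment's reported replicas to 0 to simulate an unhealthy podinfo
        jq '.podinfoDeployment.status.replicas = 0' resources.json > resources-bad.json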

      $ lula dev validate -f validation.yaml -r resources-bad.json -e=false                    
      • Observations: 
      •  --> validate.msg: Podinfo not available.                                                              
      •  Validation completed with 0 passing and 1 failing results 
      

      Success! Additional conditions can be tested this way to fully stress-test the validity of the policy.

  9. Finally, we can bring this back into the component-definition. This validation should be added as a link to the respective implemented-requirement:

    # ... rest of component definition
        implemented-requirements:
          - uuid: 257d2b2a-fda7-49c5-9a2b-acdc995bc8e5
            control-id: ID-1
            description: >-
              Podinfo, a sample application, is deployed into the cluster and exposed for testing purposes.
            remarks: >-
              System implements test application as target for development purposes.
            links:
              - href: 'file:./validation.yaml'
                ref: lula
                text: Check that Podinfo is healthy
    
  10. Now that we have our full OSCAL Component Definition model specified, we can take this off to validate and evaluate the system!

Limitations

We are aware that many of these validations are brittle to environment changes, for instance if namespaces change. This is a known limitation, and offering a templating solution for it is on our roadmap.

Additionally, since we are adding these validations to OSCAL YAML documents, there is some awkwardness in composing strings of YAML inside YAML. We support “remote” validations, where a link to a file is provided instead of a reference to a back-matter UUID. A current limitation is that remote links do not support authentication if they point to a protected location.

2 - Lula in CI

Lula is designed to evaluate the continual compliance of a system, and as such is a valuable tool to implement in a CI environment, providing rapid feedback to developers if a system moves out of compliance. The Lula-Action repository supports the use of Lula in GitHub workflows, and this document provides an outline for implementation.

Pre-Requisite

To use Lula to validate and evaluate a system in development, an OSCAL Component Definition model, along with linked Lula Validations, must exist in the repository. A sample structure follows:

.
|-- .github
|   |-- workflows
|   |   |-- lint.yaml              # Existing workflow to lint
|   |   |-- test.yaml              # Existing workflow to test system
|-- README.md
|-- LICENSE
|-- compliance
|   |-- oscal-component.yaml       # OSCAL Component Definition
|-- src
|   |-- main
|   |-- test

Steps

  1. Add Lula linting to .github/workflows/lint.yaml:

    name: Lint
    
    on:
        pull_request:
            branches: [ "main" ]
    
    jobs:
        lint:
            runs-on: ubuntu-latest
    
            steps:
                # ... Other steps
    
                - name: Setup Lula
                  uses: defenseunicorns/lula-action/setup@main
                  with:
                    version: v0.4.1
    
                - name: Lint OSCAL file
                  uses: defenseunicorns/lula-action/lint@main
                  with:
                    oscal-target: ./compliance/oscal-component.yaml
    
                # ... Other steps
    

    Additional lint targets may be added to this list as comma-separated values, e.g., component1.yaml,component2.yaml, as shown in the example below. Note that linting only validates the correctness of the OSCAL.
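
    For example, with two (hypothetical) component definition files under compliance/:

        - name: Lint OSCAL files
          uses: defenseunicorns/lula-action/lint@main
          with:
            oscal-target: ./compliance/component1.yaml,./compliance/component2.yaml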

  2. Add Lula validation and evaluation to the testing workflow, .github/workflows/test.yaml:

    name: Test
    
    on:
        pull_request:
            branches: [ "main" ]
    
    jobs:
        test:
            runs-on: ubuntu-latest
    
            steps:
                # ... Other steps
    
                - name: Setup Lula
                  uses: defenseunicorns/lula-action/setup@main
                  with:
                    version: v0.4.1
    
                - name: Validate Component Definition
                  uses: defenseunicorns/lula-action/validate@main
                  with:
                    oscal-target: ./compliance/oscal-component.yaml
                    threshold: ./assessment-results.yaml
    
                # ... Other steps
        test-upgrade:
            runs-on: ubuntu-latest
    
            steps:
                # ... Steps to deploy previous system version
    
                - name: Setup Lula
                  uses: defenseunicorns/lula-action/setup@main
                  with:
                    version: v0.4.1
    
                - name: Validate Component Definition
                  uses: defenseunicorns/lula-action/validate@main
                  with:
                    oscal-target: ./compliance/oscal-component.yaml
                    threshold: ./assessment-results.yaml
    
                # ... Steps to upgrade system to current version
    

    The first validate, in the test job, outputs an assessment-results model that provides the assessment of the system in its current state. The second validate, in the test-upgrade job, runs a validation on the previous version of the system prior to upgrade. It then compares the old and new assessment results to either pass or fail the job - failure occurs when the current system’s compliance is worse than that of the old system.
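
    To reproduce this flow locally before wiring up CI, a rough approximation using only the Lula commands shown elsewhere in these docs (the action may differ in details) is:

        # Validate the component definition against the running system; writes assessment-results.yaml
        lula validate -f ./compliance/oscal-component.yaml

        # Compare the accumulated assessment results against the established threshold
        lula evaluate -f assessment-results.yaml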

3 - Simple Demo

This simple demo steps through validating and evaluating Kubernetes cluster resources against a simple component-definition. The following pre-requisites are required to run the demo successfully:

Pre-Requisites

  • Lula installed
  • Kubectl
  • A running Kubernetes cluster
    • Kind
      • kind create cluster -n lula-test
    • K3d
      • k3d cluster create lula-test

Steps

  1. Clone the Lula repository to your local machine and change into the lula/demo/simple directory

    git clone https://github.com/defenseunicorns/lula.git && cd lula/demo/simple
    
  2. Apply the namespace.yaml file to create a namespace for the demo

    kubectl apply -f namespace.yaml
    
  3. Apply the pod.fail.yaml to create a pod in your cluster

    kubectl apply -f pod.fail.yaml
    
  4. The oscal-component-opa.yaml is a simple OSCAL Component Definition model which establishes a sample control <> validation mapping. The validation that provides a satisfied or not-satisfied result to the control simply checks if the required label value is set for pod.fail. Run the following command to validate the component given by the failing pod:

    lula validate -f oscal-component-opa.yaml
    

    The output in your terminal should inform you that the single control validated is not-satisfied:

    NOTE  Saving log file to
        /var/folders/6t/7mh42zsx6yv_3qzw2sfyh5f80000gn/T/lula-2024-07-08-10-22-57-840485568.log
    •  changing cwd to .
    
    🔍 Collecting Requirements and Validations   
    •  Found 1 Implemented Requirements
    •  Found 1 runnable Lula Validations
    
    📐 Running Validations   
    ✔  Running validation a7377430-2328-4dc4-a9e2-b3f31dc1dff9 -> evaluated -> not-satisfied                                            
    
    💡 Findings   
    •  UUID: c80f76c5-4c86-4773-91a3-ece127f3d55a
    •  Status: not-satisfied
    •  OSCAL artifact written to: assessment-results.yaml
    

    This will also produce an assessment-results model - review the findings and observations:

      assessment-results:
        results:
          - description: Assessment results for performing Validations with Lula version v0.4.1-1-gc270673
            findings:
              - description: This control validates that the demo-pod pod in the validation-test namespace contains the required pod label foo=bar in order to establish compliance.
                related-observations:
                  - observation-uuid: f03ffcd9-c18d-40bf-85f5-d0b1a8195ddb
                target:
                  status:
                    state: not-satisfied
                  target-id: ID-1
                  type: objective-id
                title: 'Validation Result - Component:A9D5204C-7E5B-4C43-BD49-34DF759B9F04 / Control Implementation: A584FEDC-8CEA-4B0C-9F07-85C2C4AE751A / Control:  ID-1'
                uuid: c80f76c5-4c86-4773-91a3-ece127f3d55a
            observations:
              - collected: 2024-07-08T10:22:57.219213-04:00
                description: |
                  [TEST]: a7377430-2328-4dc4-a9e2-b3f31dc1dff9 - lula-validation
                methods:
                  - TEST
                relevant-evidence:
                  - description: |
                      Result: not-satisfied
                uuid: f03ffcd9-c18d-40bf-85f5-d0b1a8195ddb
            props:
              - name: threshold
                ns: https://docs.lula.dev/oscal/ns
                value: "false"
            reviewed-controls:
              control-selections:
                - description: Controls Assessed by Lula
                  include-controls:
                    - control-id: ID-1
              description: Controls validated
              remarks: Validation performed may indicate full or partial satisfaction
            start: 2024-07-08T10:22:57.219371-04:00
            title: Lula Validation Result
            uuid: f9ae56df-8709-49be-a230-2d3962bbd5f9
        uuid: 5bf89b23-6172-47c9-9d1c-d308fa543d61
    
  5. Now, apply the pod.pass.yaml file to your cluster to configure the pod to pass compliance validation:

    kubectl apply -f pod.pass.yaml
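
    Optionally, confirm the label change before re-running the validation (the pod name and namespace below come from the findings shown above):

      kubectl get pod demo-pod -n validation-test --show-labels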
    
  6. Run the following command in the lula directory:

    lula validate -f oscal-component-opa.yaml
    

    The output should now show the pod as passing the compliance requirement:

    NOTE  Saving log file to
        /var/folders/6t/7mh42zsx6yv_3qzw2sfyh5f80000gn/T/lula-2024-07-08-10-25-47-3097295143.log
    •  changing cwd to .
    
    🔍 Collecting Requirements and Validations   
    •  Found 1 Implemented Requirements
    •  Found 1 runnable Lula Validations
    
    📐 Running Validations   
    ✔  Running validation a7377430-2328-4dc4-a9e2-b3f31dc1dff9 -> evaluated -> satisfied                                                
    
    💡 Findings   
    •  UUID: 5a991d1f-745e-4acb-9435-373174816fcc
    •  Status: satisfied
    •  OSCAL artifact written to: assessment-results.yaml
    

    This will append a new result to the assessment-results file:

      - description: Assessment results for performing Validations with Lula version v0.4.1-1-gc270673
        findings:
          - description: This control validates that the demo-pod pod in the validation-test namespace contains the required pod label foo=bar in order to establish compliance.
            related-observations:
              - observation-uuid: a1d55b82-c63f-47da-8fab-87ae801357ac
            target:
              status:
                state: satisfied
              target-id: ID-1
              type: objective-id
            title: 'Validation Result - Component:A9D5204C-7E5B-4C43-BD49-34DF759B9F04 / Control Implementation: A584FEDC-8CEA-4B0C-9F07-85C2C4AE751A / Control:  ID-1'
            uuid: 5a991d1f-745e-4acb-9435-373174816fcc
        observations:
          - collected: 2024-07-08T10:25:47.633634-04:00
            description: |
              [TEST]: a7377430-2328-4dc4-a9e2-b3f31dc1dff9 - lula-validation
            methods:
              - TEST
            relevant-evidence:
              - description: |
                  Result: satisfied
            uuid: a1d55b82-c63f-47da-8fab-87ae801357ac
        props:
          - name: threshold
            ns: https://docs.lula.dev/oscal/ns
            value: "false"
        reviewed-controls:
          control-selections:
            - description: Controls Assessed by Lula
              include-controls:
                - control-id: ID-1
          description: Controls validated
          remarks: Validation performed may indicate full or partial satisfaction
        start: 2024-07-08T10:25:47.6341-04:00
        title: Lula Validation Result
        uuid: a9736e32-700d-472f-96a3-4dacf36fa9ce
    
  7. Now that two assessment-results are established, the threshold can be evaluated. Perform an evaluate to compare the old and new state of the cluster:

    lula evaluate -f assessment-results.yaml
    

    The output will show that the new threshold for the system assessment is the more compliant evaluation of the control - i.e., the satisfied result for control ID-1 is now the threshold.

     NOTE  Saving log file to
        /var/folders/6t/7mh42zsx6yv_3qzw2sfyh5f80000gn/T/lula-2024-07-08-10-29-53-4238890270.log
    •  New passing finding Target-Ids:
    •  ID-1
    •  New threshold identified - threshold will be updated to result a9736e32-700d-472f-96a3-4dacf36fa9ce
    •  Evaluation Passed Successfully
    ✔  Evaluating Assessment Results f9ae56df-8709-49be-a230-2d3962bbd5f9 against a9736e32-700d-472f-96a3-4dacf36fa9ce
    •  OSCAL artifact written to: ./assessment-results.yaml