Running automated tests with containerized workbench and agents on IBM® Cloud Private

To adopt IBM® Cloud Private fully and manage the entire development-to-deployment workflow on the cloud, you want to be able to start and stop test capabilities with a few clicks. Because both the workbench and the agents are provided as containers, you can dynamically provision capability as required and run the test automation suites without procuring machines and installing the products.

Before you begin

You must have configured IBM® Cloud Private according to the instructions in Configuring IBM Cloud Private.

About this task

You must use only floating licenses for the product and VT-pack when you play back tests. These licenses must be hosted on a license server that the workbench can access.
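For example, the workbench reads the license location from the RATIONAL_LICENSE_FILE environment variable (shown later in the sample deployment.yml) in the <port>@<host> form. The values below are assumptions for illustration only; use your own license server details:
    RATIONAL_LICENSE_FILE=27000@licenses.example.com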
Note: The version number of the container images and the desktop products must match. If you have a previous version of the container image, uninstall it and install the current version. To uninstall the image, use these commands:
  1. Stop the container by running docker stop <container_ID>.
  2. Uninstall the image by running docker rmi -f <image_ID>.
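Tip: To find the container ID and image ID that these commands require, you can list them first. In this sketch, <imageName> is a placeholder for the workbench or agent image name:
    docker ps -a
    docker images <imageName>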

Procedure

  1. In IBM® Cloud Private, create services for the workbench and agents by creating a services.yml file. A service is a logical set of pods that provides a single IP address and DNS name by which the pods can be accessed. Creating the services only reserves the IP addresses; it does not create the actual workbench or agent pods. See the sample services.yml file.
    Sample services file:
    cat services.yml
    
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        io.kompose.service: <workbench_name>
      name: <workbench_name>
    spec:
      type: NodePort
      ports:
      - name: "7080"
        port: 7080
        targetPort: 7080
      - name: "7443"
        port: 7443
        targetPort: 7443
      selector:
        io.kompose.service: <workbench_name>
    status:
      loadBalancer: {}
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        io.kompose.service: agent1
      name: agent1
    spec:
      ports:
      - name: "7080"
        port: 7080
        targetPort: 7080
      selector:
        io.kompose.service: agent1
    status:
      loadBalancer: {}
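    The workbench service uses type: NodePort, which also exposes the workbench ports on a port of each cluster node. After you create the services in the next step, you can look up the node port that is mapped to the secure workbench port 7443 with a jsonpath query such as this sketch:
    kubectl get service <workbench_name> -o jsonpath='{.spec.ports[?(@.port==7443)].nodePort}'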
  2. Run the command to create the services.
    kubectl create -f services.yml
  3. Run the command to get the cluster IP addresses of the workbench and agent services so that you can use them in the deployment.yml file to connect the agents to the workbench.
    kubectl get services
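    For example, if you need only the cluster IP address of the agent service (agent1 in the sample), which you later use as the AGENT_IP value in deployment.yml, a jsonpath query returns it directly:
    kubectl get service agent1 -o jsonpath='{.spec.clusterIP}'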
  4. Create a deployment.yml file to specify the license, agent, workbench, and test asset information. Replace the placeholders with your own values; a substitution sketch follows the sample file.
    Sample deployment file:
    cat deployment.yml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      labels:
        io.kompose.service: <workbench_name>
        pt.classification: workbench
      name: <workbench_name>
    spec:
      replicas: 1
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            io.kompose.service: <workbench_name>
            pt.classification: workbench
            pt.name: <workbench_name>
        spec:
          containers:
          - command:
            - cmdline
            - -workspace
            - /runData/<WORKSPACE_NAME>
            - -project
            - <TEST_PROJECT_NAME>
            - -suite
            - Tests/<TEST_SUITE>.testsuite
            - -results
            - autoResults
            - -stdout
            - -exportlog
            - /runData/<TEST_LOG>.txt
            - -protocolinput
            - distributed.tests=/Tests/<AFT_INPUT>.xml
    
            env:
              - name: RATIONAL_LICENSE_FILE
                value: <licenseServerPort>@<licenseServerIPAddress>
              - name: TEST_IMPORT_PATH
                value: /Tests/<TEST_ASSET_NAME>.zip
            image: mycluster.icp:8500/default/<imageName>:<imageVersion>
            name: <workbench_name>
            ports:
            - containerPort: 7080
            - containerPort: 7443
            resources: {}
            volumeMounts:
            - mountPath: /Tests
              name: ft-wb-claim0
    #       Optional
    #        - mountPath: /runData
    #          name: ft-wb-claim1
          restartPolicy: Always
          volumes:
          - name: ft-wb-claim0
            hostPath:
              path: /pathToTestAsset.zip
    #      - name: ft-wb-claim1
    #        hostPath:
    #          path: /pathForWorkspace
    
    status: {}
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      labels:
        io.kompose.service: agent1
        pt.classification: agent
      name: agent1
    spec:
      replicas: 1
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            io.kompose.service: agent1
            pt.classification: agent
            pt.name: agent1
        spec:
          containers:
          - env:
            - name: AGENT_NAME
              value: agent1
            - name: AGENT_IP
              value: <ClusterIPAddress>
            - name: MASTER_NAME
              value: <workbench_name>
            image: mycluster.icp:8500/default/<imageName>:<imageVersion>
            name: agent1
            resources: {}
          restartPolicy: Always
    status: {}
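    Before you create the deployment, replace the remaining placeholders in deployment.yml with real values. The following sketch shows one way to do the substitution with sed; the workbench name ptwb, the agent cluster IP 10.0.0.12, and the license value 27000@licenses.example.com are assumptions for illustration only:
    sed -i -e 's/<workbench_name>/ptwb/g' \
      -e 's/<ClusterIPAddress>/10.0.0.12/g' \
      -e 's/<licenseServerPort>@<licenseServerIPAddress>/27000@licenses.example.com/g' \
      deployment.yml
    Replace <imageName>, <imageVersion>, and the test asset placeholders in the same way.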
  5. Run the command to create the workbench and agent containers from the deployment.yml file.
    kubectl create -f deployment.yml
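    You can confirm that both deployments rolled out before you continue, for example:
    kubectl get deployments
    kubectl rollout status deployment/<workbench_name>
    kubectl rollout status deployment/agent1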
  6. Create a PersistentVolume and a PersistentVolumeClaim in IBM® Cloud Private. For instructions, see the IBM Cloud Private documentation; a minimal sketch of the volume definitions follows.
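    This sketch of a hostPath PersistentVolume and a matching PersistentVolumeClaim is for illustration only; the names, capacity, and path are assumptions, so adjust them to your environment. If you use the claim, change the ft-wb-claim0 volume in deployment.yml from hostPath to a persistentVolumeClaim reference.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: ft-test-assets-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: /pathToTestAssets
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ft-wb-claim0
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi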
  7. Get the list of pods and assign the workbench pod name to a variable.
    kubectl get pods
  8. Run the command to copy the test assets to the workbench container.
    kubectl cp tests/HelloWorldDocker.zip ${WB_POD}:.
    Note: You can assign the name of the workbench pod to the ${WB_POD} variable by using this command:
    WB_POD=$(kubectl get pods --selector pt.name=<workbench_name> -o jsonpath='{.items[*].metadata.name}')
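    You can verify that the archive reached the container before you run the test, for example:
    kubectl exec ${WB_POD} -- ls -l HelloWorldDocker.zip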
  9. Run the command to execute the test.
    kubectl exec -it ${WB_POD} -- bash -c 'export TEST_IMPORT_PATH=HelloWorldDocker.zip && cmdline -workspace /tmp/ws -project HelloWorldDocker -schedule Schedules/vuSch -exportlog tl.log'
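    The same pattern works with the other command-line options that are shown in the sample deployment.yml, such as -suite, -results, and -stdout. For example, a suite run looks like this sketch, where the suite name mySuite is an assumption:
    kubectl exec -it ${WB_POD} -- bash -c 'export TEST_IMPORT_PATH=HelloWorldDocker.zip && cmdline -workspace /tmp/ws -project HelloWorldDocker -suite Tests/mySuite.testsuite -results autoResults -stdout -exportlog tl.log'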
  10. Run the commands to copy the test results and logs out of the container.
    kubectl cp ${WB_POD}:/tmp/CommandLineLog.txt CommandLineLog.txt
    kubectl cp ${WB_POD}:/tmp/ws/.metadata/.log log.txt
    kubectl cp ${WB_POD}:tl.log tl.log
  11. Use the IBM® Cloud Private console to verify that the workbench and agent deployments are created and running successfully. Refer to the test execution logs in the workbench and agent pod deployments.
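    You can also view the same logs from the command line, for example:
    kubectl logs ${WB_POD}
    kubectl logs --selector pt.classification=agent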
  12. Run the commands to stop the workbench and agent containers and verify the test run status in the exported logs.
    kubectl delete -f deployment.yml
    kubectl delete -f services.yml
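    You can confirm that the pods and services are removed:
    kubectl get pods
    kubectl get services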