Developer Workflows

This section describes a typical workflow for developing the CORD control plane. This workflow does not include any data plane elements (e.g., the underlying switching fabric or access devices).

Setting Up a Local Development Environment

It is straightforward to set up a local Kubernetes environment on your laptop. The recommended way to do this is to use Minikube. This guide assumes you have done that. See the Single-Node case in the Installation Guide for more information, or you can go directly to the documentation for Minikube: https://kubernetes.io/docs/getting-started-guides/minikube/#installation

Note: If you are going to do development on Minikube, you may want to increase its memory from the default of 512MB. You can do this by passing additional flags when starting Minikube:

minikube start --cpus 2 --memory 4096

In addition to Minikube running on your laptop, you will also need to install Helm: https://docs.helm.sh/using_helm/#installing-helm.
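Once Helm is installed, you will also need to initialize Tiller (the server-side component of Helm 2, consistent with the helm del --purge commands used later in this guide) in your Minikube cluster. A minimal sketch, assuming a Helm 2 installation:

helm init --wait
helm version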

Once both Helm and Minikube are installed, you can deploy the core components of XOS, along with the services that make up, for example, the SEBA profile. This uses images published on DockerHub:

cd ~/cord/helm-charts

In this directory you can choose which of the available charts to deploy. For example, to deploy SEBA you can follow these instructions. Alternatively, if you are working on a new profile or a new service that is not part of any existing profile, you can install just the CORD Platform.
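For instance, a minimal sketch of installing just the XOS core chart from this repository (the chart and release names here follow the pattern used by the commands later in this section; adapt them to the chart you are working on) is:

helm dep update xos-core
helm install xos-core -n xos-core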

Making and Deploying Changes

Assuming you have downloaded the CORD source code and the entire source tree for CORD is under ~/cord, you can edit and re-deploy the code as follows.

Note: To develop a single synchronizer you may not need the full CORD source, but this assumes that you have a good knowledge of the system and know what you're doing.

First you will need to point your Docker client at the Docker daemon provided by Minikube (note that you don't need Docker installed locally, as it comes with the Minikube installation):

eval $(minikube docker-env)
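You can verify that your shell is now talking to Minikube's Docker daemon; the list of running containers should show the Kubernetes system containers rather than anything from your local daemon:

docker ps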

You will then need to build the containers from source:

cd ~/cord/automation-tools/developer
python imagebuilder.py -f ../../helm-charts/examples/filter-images.yaml -x

At this point, the images containing your changes will be available in the Docker environment used by Minikube.
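For example, assuming the images are published under the xosproject namespace (as in the registry examples later in this guide), you can list the rebuilt images:

docker images | grep xosproject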

Note: In some cases you can rebuild a single Docker image to make the process faster, but this assumes that you have a good knowledge of the system and know what you're doing.
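As a sketch, a single-image rebuild might look like the following; the service directory, Dockerfile name, and image name here are illustrative and should be matched to the service you are actually changing:

cd ~/cord/orchestration/xos_services/exampleservice
docker build -t xosproject/exampleservice-synchronizer:candidate -f Dockerfile.synchronizer .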

All that is left is to tear down and re-deploy the containers:

helm del --purge <chart-name>
helm dep update <chart-name>
helm install <chart-name> -n <chart-name> -f examples/image-tag-candidate.yaml -f examples/imagePullPolicy-IfNotPresent.yaml

In some cases it is possible to use the helm upgrade command instead, but if you have made changes to the XOS models we suggest you redeploy everything.
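For reference, an in-place upgrade with the same value files would look something like this (Helm 2 syntax, with the release name first and the chart second):

helm upgrade <chart-name> <chart-name> -f examples/image-tag-candidate.yaml -f examples/imagePullPolicy-IfNotPresent.yaml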

Note: if your changes are only in the synchronizer steps, then after rebuilding the containers you can simply delete the corresponding pod, and Kubernetes will restart it with the new image.
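For example, first find the pod name and then delete it (the exact pod name depends on the chart that deployed the synchronizer):

kubectl get pods
kubectl delete pod <synchronizer-pod-name>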

Pushing Changes to a Docker Registry

If you have a remote POD that you want to test your changes on, you need to push your Docker images to a registry that can be accessed from the POD.

The way we recommend doing this is via a private Docker registry. You can find more information about what a Docker registry is in the offline installation section.
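For quick local experiments, a throwaway registry can be run directly with Docker; this uses the plain upstream registry image (with the host port matching the examples below) and is not the CORD-recommended production setup described in the offline installation section:

docker run -d -p 30500:5000 --name registry registry:2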

Tag and Push Images to the Docker Registry

For the images to be consumed on the Kubernetes cluster, they first need to be tagged and pushed to the local registry.

Suppose your Docker registry address is:

192.168.0.1:30500

and your original image is named:

xosproject/vsg-synchronizer

you'll need to tag the image as:

192.168.0.1:30500/xosproject/vsg-synchronizer

For example, you can use the docker tag command to do this:

docker tag xosproject/vsg-synchronizer:candidate 192.168.0.1:30500/xosproject/vsg-synchronizer:candidate

Now, you can push the image to the registry. For example, with docker push:

docker push 192.168.0.1:30500/xosproject/vsg-synchronizer:candidate

The image should now be in the local Docker registry on your cluster.
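You can confirm the push succeeded by querying the registry's standard v2 HTTP API:

curl http://192.168.0.1:30500/v2/_catalog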

Identify, download, tag and push images

Sometimes you may need to identify, download, tag, and push lots of images. This can become a long and error-prone operation if done manually. For this reason, we provide a set of tools that automates the procedure. The scripts can be found here.

images_from_charts.sh: identify images

The images_from_charts.sh script prints the list of images used by one or multiple charts. The script needs to be executed from the helm-charts directory. More information is available by invoking the command with the -h or --help option. The output can be piped into the other utility scripts.
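For example, to list the images used by a single chart (the exact output format may differ between versions):

bash images_from_charts.sh xos-core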

pull_images.sh: pull images from DockerHub

The pull_images.sh script pulls the images listed on its input from DockerHub, printing each image name as it is pulled successfully. More information is available by invoking the command with the -h or --help option. The output can be piped into the other utility scripts.
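For example, to identify and pull all the images used by a chart in one step:

bash images_from_charts.sh xos-core | bash pull_images.sh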

tag_and_push.sh: tag and push images to a target Docker registry

The tag_and_push.sh script tags images and pushes them to a target Docker registry (including DockerHub itself). It can add a prefix to image names (useful, for example, when pushing to local registries) and change the tag of an image. More information is available by invoking the command with the -h or --help option. The output can be piped into the other utility scripts.

Examples:

Assume you'd like to prepare an offline SEBA installation. To do so, you need to identify all the images used in the charts, download them, tag them, and push them to a local Docker registry (in this example, 192.168.0.100, port 30500). From the helm-charts folder, this can be done in one command:

bash images_from_charts.sh voltha onos xos-core xos-profiles/att-workflow nem-monitoring logging | bash pull_images.sh | bash tag_and_push.sh -r 192.168.0.100:30500
