# ExampleService

`ExampleService` is a service intended to demonstrate integration with the XOS OpenStack service. `ExampleService` provides an `ExampleServiceInstance` model that generates and hosts a web page, displaying two text strings on the page: a `service_message` and a `tenant_message`. Each time an `ExampleServiceInstance` is created, a corresponding `OpenStackServiceInstance` is also created, which in turn causes an OpenStack VM to be created that runs an Apache web server hosting the web page. Destroying the `ExampleServiceInstance` causes the linked `OpenStackServiceInstance` to also be destroyed, which in turn causes the OpenStack VM to be cleaned up.
## Implementation

Inside the `ExampleService` repository's `xos/synchronizer` directory, there are three key parts to the service (a directory-listing sketch follows the list below):
* The `models` directory contains the models that comprise `ExampleService`. The full text of the models is specified in a file, `exampleservice.xproto`. A summary of the models is below:

    * `ExampleService` holds global service-wide settings, including a `service_message`, which appears in all web pages generated by `ExampleService`, and a `service_secret` that is installed into all containers that run the web servers.

    * `ExampleServiceInstance` holds per-tenant settings, including a `tenant_message`. Each `ExampleServiceInstance` corresponds to one web server serving one web page. This model has relations for `foreground_color` and `background_color` that allow some additional customization of the served page. `tenant_secret` is a secret that is installed into the container running the web server.

    * `Color` implements the color model used by the `foreground_color` and `background_color` fields of `ExampleServiceInstance`.

    * `EmbeddedImage` allows embedded images to be attached to web pages. As the foreign key relation is from the embedded image to the service instance, this forms a many-to-one relation that allows many images to be attached to a single web page.

* The `model_policies` directory contains a model policy. This model policy is responsible for automatically creating and deleting the `OpenStackServiceInstance` associated with each `ExampleServiceInstance`.

* The `sync_steps` directory contains a sync step that uses Ansible to provision the web server and configure the web page.
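To see this layout on disk, here is a minimal sketch; the checkout path is an assumption and will vary with where you cloned the repository.

```shell
# Assumption: the exampleservice repository is cloned at ~/exampleservice.
cd ~/exampleservice/xos/synchronizer

# The three key directories described above.
ls -d models model_policies sync_steps

# Peek at the model definitions.
head -40 models/exampleservice.xproto
```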
## Demonstration

The following subsections work through a quick demonstration of `ExampleService`.
### Prerequisites

This document assumes that you have already installed OpenStack-Helm.

> **Note:** Depending on the method that was used to deploy your Kubernetes installation, interacting with Kubernetes may require root privilege. If so, then you may need to use `sudo` with many of the commands in this tutorial, for example `sudo helm init` instead of `helm init`.
### Simulating fabric Internet connectivity

The `ExampleServiceInstance` sync step requires connectivity to the public Internet so that it can fetch some apt packages. To support this, your fabric must be properly connected to the Internet. This subsection describes how to set up a simulated fabric bridge, for example on a bare-metal Ubuntu machine. If your deployment already contains a physical fabric with Internet connectivity, then you may skip this subsection.

First we need to set up the fabric bridge. Make sure to replace `my.host.name` with the hostname of your head node.
```shell
# Create an inventory file.
cd ~/cord/automation-tools/interface-config
cat > my_inventory.yaml <<EOF
all:
  hosts:
    my.host.name:
      fabric_net_ip_cidr: "10.8.1.1/24"
EOF

# Run playbook to setup fabric bridge
ansible-playbook -v -i my_inventory.yaml prep-interfaces-playbook.yaml
```
> **Note:** Some environments do not like network interface files with dots in the name and/or place their interface files in unusual locations. If you have errors running the above playbook, then the following modifications to the playbook may be useful: `sed -i -e "s:/etc/network/interfaces.d:/etc/interfaces.d:g" roles/interface-config/tasks/main.yml` and `sed -i -e "s:/etc/interfaces.d/fabric.cfg:/etc/interfaces.d/fabriccfg:g" roles/interface-config/tasks/main.yml`.
After the playbook has successfully completed, it's time to set up a veth pair. The reason for this is that VTN will place packets onto the fabric interface, but without a veth pair those packets will never be "received" by Linux, and therefore never have an opportunity to be forwarded and masqueraded. The following commands set up the veth pair:
```shell
sudo ip link add fabricveth1 type veth peer name fabricveth2
sudo ip link set fabricveth2 address a4:23:05:06:01:01
sudo ifconfig fabricveth2 10.8.1.1/24 up
sudo ifconfig fabricveth1 up
sudo brctl addif fabricbridge fabricveth1
```
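As an optional sanity check (not part of the original procedure), you can inspect the interfaces created above; the `fabricbridge`, `fabricveth1`, and `fabricveth2` names are the ones used in the previous commands.

```shell
# Confirm the veth pair exists and is up.
ip link show fabricveth1
ip link show fabricveth2

# Confirm fabricveth2 carries the 10.8.1.1/24 address.
ip addr show fabricveth2

# Confirm fabricveth1 is attached to the fabric bridge.
brctl show fabricbridge
```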
We also need to enable masquerading so the packets will be NATed. In our case the physical Ethernet device is `eno1`. Adjust for your configuration as necessary:
```shell
sudo iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
```
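Masquerading only takes effect if the kernel is forwarding packets, so the following optional check verifies both the NAT rule added above and the forwarding setting.

```shell
# Confirm the MASQUERADE rule is present in the POSTROUTING chain.
sudo iptables -t nat -L POSTROUTING -n -v

# Masquerading requires IP forwarding; this should print 1.
sysctl net.ipv4.ip_forward

# If it prints 0, enable forwarding (not persistent across reboots).
sudo sysctl -w net.ipv4.ip_forward=1
```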
### Deploy the necessary profiles

It's necessary for us to deploy three helm charts: `xos-core`, `base-openstack`, and `demo-exampleservice`.
> **Note:** If you've already installed a different set of XOS profile helm charts, such as the `base-openstack` profile or the `mcord` profile, then it will be necessary to first delete those helm charts (using `helm del --purge <chartname>`). Please also delete the `onos-cord` chart. Deleting and redeploying these charts is recommended so that the new fabric bridge configuration is used in VTN.
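If you do need that cleanup, the following is a minimal sketch, assuming Helm 2 (matching the `helm init` usage below); the release names shown are examples, so check `helm ls` for the names actually installed in your environment.

```shell
# List the currently installed helm releases.
helm ls

# Delete any previously installed XOS profile charts, plus onos-cord.
# Substitute the release names reported by `helm ls`.
helm del --purge base-openstack
helm del --purge onos-cord
```

With any old releases removed, proceed with the installation below.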
```shell
# Go into the helm-charts repository
cd ~/cord/helm-charts

# Initialize helm
helm init

# Install the onos-cord helm chart
helm dep update onos
helm install onos -n onos-cord

# Install the xos-core helm chart
helm dep update xos-core
helm install xos-core -n xos-core

# Install the base-openstack helm chart
helm dep update xos-profiles/base-openstack
helm install xos-profiles/base-openstack -n base-openstack \
    --set computeNodes.master.name="$( hostname )" \
    --set vtn-service.sshUser="$( whoami )" \
    --set computeNodes.master.dataPlaneIntf=fabricbridge

# Install the demo-exampleservice helm chart
helm dep update xos-profiles/demo-exampleservice
helm install xos-profiles/demo-exampleservice -n demo-exampleservice \
    --set global.proxySshUser="$( whoami )"
```
The helm charts above install successive layers of CORD. The first chart, `xos-core`, installs core components such as the XOS core, database, TOSCA engine, etc. The second chart, `base-openstack`, installs the XOS OpenStack Service, which provides modeling and synchronizers for instantiating OpenStack resources using the XOS data model. The argument `--set computeNodes.master.dataPlaneIntf=fabricbridge` was passed to helm when deploying the `base-openstack` helm chart, causing the `fabricbridge` device to be used instead of the default. The final helm chart, `demo-exampleservice`, installs the synchronizer for `ExampleService`, including registering models with the core.
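As a quick sanity check (not part of the original procedure), you can confirm that all of the releases installed above are reported as deployed:

```shell
# onos-cord, xos-core, base-openstack, and demo-exampleservice
# should all appear with STATUS "DEPLOYED".
helm ls
```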
> **Note:** It will take some time for the various helm charts to deploy and the containers to come online. We recommend using `kubectl get pods` to explore the state of the system during deployment. In particular, note the presence of `tosca-loader` containers. These containers are responsible for running TOSCA that configures services in the stack. The `tosca-loader` containers may error and retry several times as they wait for services to be dynamically loaded. This is normal, and eventually the `tosca-loader` containers will enter the `Completed` state.
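For example, a minimal way to watch the rollout; the `grep` filter is just a convenience for picking out the TOSCA loader pods, not an official selector:

```shell
# Watch pod status until everything is Running or Completed (Ctrl-C to stop).
kubectl get pods -w

# Or check just the tosca-loader pods.
kubectl get pods | grep tosca-loader
```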
Use `kubectl get pods` to verify that all containers in the profile are successful and none are in an error state. At this point, we've installed all of the necessary infrastructure to support `ExampleService`. The chart also automatically creates an `ExampleServiceInstance`, and the OpenStack synchronizer will bring up a VM.
### Wait for OpenStack VM to be created

Issue the following commands:

```shell
export OS_CLOUD=openstack_helm
openstack server list --all-projects
```
It may take some time for the instance to be created, but eventually you will see an instance, for example `exampleservice-1`. Note the management IP address of that instance.
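If you want to pull the address out directly, the following sketch assumes the instance is named `exampleservice-1`; the management IP appears in the server's `addresses` field, and the network name within it may differ in your deployment.

```shell
# Show the instance's details; the management IP is listed in the "addresses" field.
openstack server show exampleservice-1

# Optionally narrow the output to just that field.
openstack server show exampleservice-1 -c addresses
```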
### SSH into the VM

```shell
# Adjust ssh key permissions
cp ~/cord/helm-charts/xos-services/exampleservice/files/id_rsa ~/exampleservice_rsa
chmod 0600 ~/exampleservice_rsa

# SSH into the VM
ssh -i ~/exampleservice_rsa ubuntu@<management-ip-of-vm>
```
You can view the created web page by doing the following:

```shell
curl http://localhost/
```

You should see a web page that contains the "hello" and "world" strings embedded in it.
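If the page is not served yet, the sync step may still be provisioning the VM. The checks below are a hedged sketch to run inside the VM, assuming Ubuntu's `apache2` service name and the default `/var/www/html` document root.

```shell
# Confirm the web server is installed and running (assumes the apache2 service name).
systemctl status apache2

# Inspect the generated page directly (assumes the default document root).
cat /var/www/html/index.html
```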