
Notes from the VNF instantiation and config-drive demo.

The environment is the Intel POD 7 server “overcloud-controller-1”, which I use for testing, demos, and development of the Models and Copper tests, running under a basic virtual OPNFV deployment (no SDN controller).
Using username "root".
root@10.2.117.113's password:
Last login: Fri Sep 16 14:38:06 2016 from 10.2.117.250
[root@overcloud-controller-1 ~]# opnfv-util undercloud
Last login: Fri Sep 16 13:40:32 2016
[stack@undercloud ~]$ source overcloudrc
[stack@undercloud ~]$ ls
apex-undercloud-install.log instackenv.json nics overcloud-full.qcow2 tempest-deployer-input.conf
bryan ironic-python-agent.initramfs nova_id_rsa overcloud-full.vmlinuz tripleo-overcloud-passwords
build_perf_image.sh ironic-python-agent.kernel nova_id_rsa.pub overcloudrc undercloud.conf
copper jumphost_id_rsa.pub opnfv-environment.yaml setenv.sh undercloud-passwords.conf
deploy_command models overcloud-env.json set_perf_images.sh virtual-environment.yaml
deploy_logs network-environment.yaml overcloud-full.initrd stackrc

(showed the models repo, cloned via git clone https://gerrit.opnfv.org/gerrit/models)
[stack@undercloud ~]$ cd models
[stack@undercloud models]$ git status
# On branch master
nothing to commit, working directory clean
[stack@undercloud models]$ git pull
Already up-to-date.
[stack@undercloud models]$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c34fec17629 copper-webapp "/bin/sh -c '/usr/sbi" 2 days ago Up 2 days 0.0.0.0:8257->80/tcp tiny_lalande
c95a7ed9aa4d ubuntu:xenial "/bin/bash" 2 days ago Up 2 days tacker
[stack@undercloud models]$ ls
INFO LICENSE tests

(showed the copper repo, cloned via git clone https://gerrit.opnfv.org/gerrit/copper)
[stack@undercloud models]$ cd ../copper/tests/adhoc
[stack@undercloud adhoc]$ ls
smoke01-clean.sh smoke01.sh

(ran the cleanup script https://git.opnfv.org/cgit/copper/plain/tests/adhoc/smoke01-clean.sh)


(showed the network topology view changing in Horizon, as the resources were released)
(initial view)
[stack@undercloud adhoc]$ bash smoke01-clean.sh
--2016-09-16 16:16:25-- https://git.opnfv.org/cgit/copper/plain/components/congress/install/bash/setenv.sh
Resolving git.opnfv.org (git.opnfv.org)... 198.145.29.81
Connecting to git.opnfv.org (git.opnfv.org)|198.145.29.81|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3122 (3.0K) [text/plain]
Saving to: ‘/home/stack/setenv.sh’

100%[===========================================================================================================================>] 3,122 --.-K/s in 0s

2016-09-16 16:16:26 (758 MB/s) - ‘/home/stack/setenv.sh’ saved [3122/3122]

Centos-based install
Setup undercloud environment so we can get overcloud Controller server address
Get address of Controller node
Create the environment file
Delete cirros1 instance
Request to delete server 0db8ced3-23b5-4b21-b63c-bf3ebd91e0df has been accepted.
Delete cirros2 instance
Request to delete server e3bc7e50-fb2a-483f-8816-59728b3cd2ca has been accepted.
Wait for cirros1 and cirros2 to terminate
Delete 'smoke01' security group
Deleted security_group: 88cf2f77-c6aa-4be4-9dcf-55b1d2cec854
Delete floating ip
Deleted floatingip: 80ac1580-53bf-4f29-bb63-c88a7462e8bf
Delete smoke01 key pair
Get 'public_router' ID
Get internal port ID with subnet 10.0.0.1 on 'public_router'
If found, delete the port with subnet 10.0.0.1 on 'public_router'
Removed interface from router 9deb584f-206c-4f05-8693-c059eb3ba7eb.
Clear the router gateway
Removed gateway from router public_router
Delete the router
Deleted router: public_router
Delete neutron port with fixed_ip 10.0.0.1
Delete neutron port with fixed_ip 10.0.0.2
Delete internal subnet
Deleted subnet: internal
Delete internal network
Deleted network: internal
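The deletion order above is deliberate: Neutron will not delete a router that still has an interface or gateway attached, nor a network whose subnet is still in use. A minimal sketch of that ordering, with 'neutron' stubbed out so the sketch runs standalone (on the undercloud it would be the real CLI):

```shell
# Teardown order used by smoke01-clean.sh. The neutron function is a stub
# that just echoes the command it was given, so this runs without OpenStack.
neutron() { echo "neutron $*"; }

neutron router-interface-delete public_router internal   # 1. detach the subnet from the router
neutron router-gateway-clear    public_router            # 2. drop the external gateway
neutron router-delete           public_router            # 3. now the router can be deleted
neutron subnet-delete           internal                 # 4. then its subnet
neutron net-delete              internal                 # 5. finally the network itself
```

Running the steps in any other order makes Neutron refuse with an "in use" error.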

(view after the VMs etc were cleaned up)

(moved back to the models repo to clean up the Tacker VNF)


[stack@undercloud adhoc]$ cd ../../../models/tests
[stack@undercloud tests]$ ls
blueprints utils vHello_Cloudify.sh vHello.sh vHello_Tacker.sh

(executed the script to clean up the Tacker VNF: https://git.opnfv.org/cgit/models/plain/tests/vHello_Tacker.sh)


(showed the network topology view changing in Horizon, as the resources were released)
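vHello_Tacker.sh runs on the undercloud but does its Tacker CLI work inside the 'tacker' container; the trace below shows it locating the container and re-invoking itself there via 'docker exec'. A minimal sketch of that pattern, with 'docker' stubbed so it runs standalone:

```shell
# Sketch of the forward_to_container pattern seen in the trace. The docker
# function is a stub standing in for `sudo docker`, so no daemon is needed;
# the container ID matches the `docker ps -a` output shown earlier.
docker() {
  echo "c95a7ed9aa4d ubuntu:xenial tacker"
}

forward_to_container() {
  # pick the container whose `docker ps -a` row mentions "tacker"
  CONTAINER=$(docker ps -a | awk '/tacker/ { print $1 }')
  # real script: sudo docker exec $CONTAINER /bin/bash /tmp/tacker/vHello_Tacker.sh "$@"
  echo "forwarding '$*' to container $CONTAINER"
}

forward_to_container tacker-cli stop
```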
[stack@undercloud tests]$ bash vHello_Tacker.sh tacker-cli stop
+ trap fail ERR
++ awk -F = '{print $2}'
++ grep DISTRIB_ID /etc/centos-release /etc/os-release /etc/redhat-release /etc/system-release
+ dist=
+ case "$2" in
+ [[ 2 -eq 2 ]]
+ forward_to_container tacker-cli stop
+ echo 'vHello_Tacker.sh: pass stop command to vHello.sh in tacker container'
vHello_Tacker.sh: pass stop command to vHello.sh in tacker container
++ sudo docker ps -a
++ awk '/tacker/ { print $1 }'
+ CONTAINER=c95a7ed9aa4d
+ sudo docker exec c95a7ed9aa4d /bin/bash /tmp/tacker/vHello_Tacker.sh tacker-cli stop stop
+ trap fail ERR
/tmp/tacker/vHello_Tacker.sh: setup OpenStack CLI environment
++ grep DISTRIB_ID /etc/lsb-release /etc/os-release
++ awk -F = '{print $2}'
+ dist=Ubuntu
+ case "$2" in
+ [[ 3 -eq 2 ]]
+ stop tacker-cli
+ echo '/tmp/tacker/vHello_Tacker.sh: setup OpenStack CLI environment'
+ source /tmp/tacker/admin-openrc.sh
++ export CONGRESS_HOST=192.0.2.7
++ CONGRESS_HOST=192.0.2.7
++ export KEYSTONE_HOST=192.0.2.7
++ KEYSTONE_HOST=192.0.2.7
++ export CEILOMETER_HOST=192.0.2.7
++ CEILOMETER_HOST=192.0.2.7
++ export CINDER_HOST=192.0.2.7
++ CINDER_HOST=192.0.2.7
++ export GLANCE_HOST=192.0.2.7
++ GLANCE_HOST=192.0.2.7
++ export NEUTRON_HOST=192.0.2.7
++ NEUTRON_HOST=192.0.2.7
++ export NOVA_HOST=192.0.2.7
++ NOVA_HOST=192.0.2.7
++ export HEAT_HOST=192.0.2.7
++ HEAT_HOST=192.0.2.7
++ export OS_NO_CACHE=True
++ OS_NO_CACHE=True
++ export OS_CLOUDNAME=overcloud
++ OS_CLOUDNAME=overcloud
++ export OS_AUTH_URL=http://192.168.37.10:5000/v2.0
++ OS_AUTH_URL=http://192.168.37.10:5000/v2.0
++ export NOVA_VERSION=1.1
++ NOVA_VERSION=1.1
++ export COMPUTE_API_VERSION=1.1
++ COMPUTE_API_VERSION=1.1
++ export OS_USERNAME=admin
++ OS_USERNAME=admin
++ export no_proxy=,192.168.37.10,192.0.2.3
++ no_proxy=,192.168.37.10,192.0.2.3
++ export OS_PASSWORD=zuQZ8pra3E3DMtxm4jsxA4rqK
++ OS_PASSWORD=zuQZ8pra3E3DMtxm4jsxA4rqK
++ export 'PYTHONWARNINGS=ignore:Certificate has no, ignore:A true SSLContext object is not available'
++ PYTHONWARNINGS='ignore:Certificate has no, ignore:A true SSLContext object is not available'
++ export OS_TENANT_NAME=admin
++ OS_TENANT_NAME=admin
+ [[ tacker-cli == \t\a\c\k\e\r\-\a\p\i ]]
+ echo '/tmp/tacker/vHello_Tacker.sh: uninstall vHello blueprint via CLI'
+ vid=($(tacker vnf-list|grep hello-world-tacker|awk '{print $2}'))
/tmp/tacker/vHello_Tacker.sh: uninstall vHello blueprint via CLI
++ tacker vnf-list
++ grep hello-world-tacker
++ awk '{print $2}'
+ for id in '${vid[@]}'
+ tacker vnf-delete 38b9286d-9298-4407-8f87-d5076ce31833
Deleted vnf: 38b9286d-9298-4407-8f87-d5076ce31833
+ vid=($(tacker vnfd-list|grep hello-world-tacker|awk '{print $2}'))
++ tacker vnfd-list
++ grep hello-world-tacker
++ awk '{print $2}'
+ for id in '${vid[@]}'
+ tacker vnfd-delete 3e35072e-27cd-48b5-ba55-2a8950aa29a6
Deleted vnfd: 3e35072e-27cd-48b5-ba55-2a8950aa29a6
+ fip=($(neutron floatingip-list|grep -v "+"|grep -v id|awk '{print $2}'))
++ neutron floatingip-list
++ grep -v +
++ grep -v id
++ awk '{print $2}'
+ for id in '${fip[@]}'
+ neutron floatingip-delete afb09392-c3a9-4d65-9f9a-2a68305f997e
Deleted floatingip(s): afb09392-c3a9-4d65-9f9a-2a68305f997e
+ sg=($(openstack security group list|grep vHello|awk '{print $2}'))
++ openstack security group list
++ grep vHello
++ awk '{print $2}'
+ for id in '${sg[@]}'
+ openstack security group delete 3346359b-d5ea-4b7d-ac5a-a2059392e1d2
/tmp/tacker/vHello_Tacker.sh: Hooray!
+ pass
+ echo '/tmp/tacker/vHello_Tacker.sh: Hooray!'
+ set +x
+ '[' 0 -eq 1 ']'
+ pass
+ echo 'vHello_Tacker.sh: Hooray!'
vHello_Tacker.sh: Hooray!
+ set +x

(view after the resources were cleaned up)


(only the VM is missing, since the test network environment, i.e. two internal networks with routers connected to the public network, must be retained)

(also showed that the Heat stack was deleted)
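The stop run's deletion order also matters: Tacker will not delete a VNFD while VNFs instantiated from it still exist, so the script removes the VNF instances first and only then the VNFD. A sketch of the ordering, with 'tacker' stubbed so it runs standalone (the IDs are the ones from the trace above):

```shell
# Sketch of the stop-run teardown order. The tacker function is a stub that
# echoes its arguments; on the real host it is the python-tackerclient CLI.
tacker() { echo "ran: tacker $*"; }

# 1. VNF instances first -- a VNFD with live VNFs cannot be deleted
tacker vnf-delete 38b9286d-9298-4407-8f87-d5076ce31833
# 2. then the VNFD they were created from
tacker vnfd-delete 3e35072e-27cd-48b5-ba55-2a8950aa29a6
```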


(executed the script to recreate the Tacker VNF: https://git.opnfv.org/cgit/models/plain/tests/vHello_Tacker.sh)
[stack@undercloud tests]$ bash vHello_Tacker.sh tacker-cli start
+ trap fail ERR
++ awk -F = '{print $2}'
++ grep DISTRIB_ID /etc/centos-release /etc/os-release /etc/redhat-release /etc/system-release
+ dist=
+ case "$2" in
+ [[ 2 -eq 2 ]]
+ forward_to_container tacker-cli start
+ echo 'vHello_Tacker.sh: pass start command to vHello.sh in tacker container'
vHello_Tacker.sh: pass start command to vHello.sh in tacker container
++ sudo docker ps -a
++ awk '/tacker/ { print $1 }'
+ CONTAINER=c95a7ed9aa4d
+ sudo docker exec c95a7ed9aa4d /bin/bash /tmp/tacker/vHello_Tacker.sh tacker-cli start start
+ trap fail ERR
++ awk -F = '{print $2}'
++ grep DISTRIB_ID /etc/lsb-release /etc/os-release
/tmp/tacker/vHello_Tacker.sh: setup OpenStack CLI environment
+ dist=Ubuntu
+ case "$2" in
+ [[ 3 -eq 2 ]]
+ start tacker-cli
+ echo '/tmp/tacker/vHello_Tacker.sh: setup OpenStack CLI environment'
+ source /tmp/tacker/admin-openrc.sh
++ export CONGRESS_HOST=192.0.2.7
++ CONGRESS_HOST=192.0.2.7
++ export KEYSTONE_HOST=192.0.2.7
++ KEYSTONE_HOST=192.0.2.7
++ export CEILOMETER_HOST=192.0.2.7
++ CEILOMETER_HOST=192.0.2.7
++ export CINDER_HOST=192.0.2.7
++ CINDER_HOST=192.0.2.7
++ export GLANCE_HOST=192.0.2.7
++ GLANCE_HOST=192.0.2.7
++ export NEUTRON_HOST=192.0.2.7
++ NEUTRON_HOST=192.0.2.7
++ export NOVA_HOST=192.0.2.7
++ NOVA_HOST=192.0.2.7
++ export HEAT_HOST=192.0.2.7
++ HEAT_HOST=192.0.2.7
++ export OS_NO_CACHE=True
++ OS_NO_CACHE=True
++ export OS_CLOUDNAME=overcloud
++ OS_CLOUDNAME=overcloud
++ export OS_AUTH_URL=http://192.168.37.10:5000/v2.0
++ OS_AUTH_URL=http://192.168.37.10:5000/v2.0
++ export NOVA_VERSION=1.1
++ NOVA_VERSION=1.1
++ export COMPUTE_API_VERSION=1.1
++ COMPUTE_API_VERSION=1.1
++ export OS_USERNAME=admin
++ OS_USERNAME=admin
++ export no_proxy=,192.168.37.10,192.0.2.3
++ no_proxy=,192.168.37.10,192.0.2.3
++ export OS_PASSWORD=zuQZ8pra3E3DMtxm4jsxA4rqK
++ OS_PASSWORD=zuQZ8pra3E3DMtxm4jsxA4rqK
++ export 'PYTHONWARNINGS=ignore:Certificate has no, ignore:A true SSLContext object is not available'
++ PYTHONWARNINGS='ignore:Certificate has no, ignore:A true SSLContext object is not available'
++ export OS_TENANT_NAME=admin
++ OS_TENANT_NAME=admin
+ [[ tacker-cli == \t\a\c\k\e\r\-\a\p\i ]]
+ echo '/tmp/tacker/vHello_Tacker.sh: Get external network for Floating IP allocations'
+ echo '/tmp/tacker/vHello_Tacker.sh: create VNFD'
+ cd /tmp/tacker/blueprints/tosca-vnfd-hello-world-tacker
+ tacker vnfd-create --vnfd-file blueprint.yaml --name hello-world-tacker
/tmp/tacker/vHello_Tacker.sh: Get external network for Floating IP allocations
/tmp/tacker/vHello_Tacker.sh: create VNFD
Created a new vnfd:
+---------------+------------------------------------------------------------------------+
| Field | Value |
+---------------+------------------------------------------------------------------------+
| description | Hello World |
| id | 800d391d-42b6-4571-a0e7-43a531ce634c |
| infra_driver | heat |
| mgmt_driver | noop |
| name | hello-world-tacker |
| service_types | {"service_type": "vnfd", "id": "3373b89f-a4e8-4b8e-b242-6ddaf1a0381f"} |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
+---------------+------------------------------------------------------------------------+
+ '[' 0 -eq 1 ']'
+ echo '/tmp/tacker/vHello_Tacker.sh: create VNF'
+ tacker vnf-create --vnfd-name hello-world-tacker --name hello-world-tacker
/tmp/tacker/vHello_Tacker.sh: create VNF
Created a new vnf:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| description | Hello World |
| id | 156b4353-8f37-463a-ab59-b3fcb331463e |
| instance_id | fe03972c-4961-4c25-a7cc-197945ab67f5 |
| mgmt_url | |
| name | hello-world-tacker |
| placement_attr | {"vim_name": "VIM0"} |
| status | PENDING_CREATE |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
| vim_id | 087e006d-1ee4-4ac8-b4b9-14d8b790f3ae |
| vnfd_id | 800d391d-42b6-4571-a0e7-43a531ce634c |
+----------------+--------------------------------------+
/tmp/tacker/vHello_Tacker.sh: wait for hello-world-tacker to go ACTIVE
+ '[' 0 -eq 1 ']'
+ echo '/tmp/tacker/vHello_Tacker.sh: wait for hello-world-tacker to go ACTIVE'
+ active=
+ [[ -z '' ]]
++ tacker vnf-show hello-world-tacker
++ grep ACTIVE
+ active=
++ tacker vnf-show hello-world-tacker
++ grep -c ERROR
+ '[' 0 == 1 ']'
+ sleep 10
+ [[ -z '' ]]
++ tacker vnf-show hello-world-tacker
++ grep ACTIVE
+ active=
++ tacker vnf-show hello-world-tacker
++ grep -c ERROR
+ '[' 0 == 1 ']'
+ sleep 10
+ [[ -z '' ]]
++ tacker vnf-show hello-world-tacker
++ grep ACTIVE
+ active=
++ tacker vnf-show hello-world-tacker
++ grep -c ERROR
+ '[' 0 == 1 ']'
+ sleep 10
+ [[ -z '' ]]
++ tacker vnf-show hello-world-tacker
++ grep ACTIVE
+ active='| status | ACTIVE
|'
++ tacker vnf-show hello-world-tacker
++ grep -c ERROR
+ '[' 0 == 1 ']'
+ sleep 10
/tmp/tacker/vHello_Tacker.sh: directly set port security on ports (bug/unsupported in Mitaka Tacker?)
+ [[ -z | status | ACTIVE
| ]]
+ echo '/tmp/tacker/vHello_Tacker.sh: directly set port security on ports (bug/unsupported in Mitaka Tacker?)'
++ tacker vnf-show hello-world-tacker
++ awk '/instance_id/ { print $4 }'
+ HEAT_ID=fe03972c-4961-4c25-a7cc-197945ab67f5
++ openstack stack resource list fe03972c-4961-4c25-a7cc-197945ab67f5
++ awk '/VDU1 / { print $4 }'
+ SERVER_ID=fed3fd15-20de-46d7-b58e-35031a4a95e9
+ id=($(neutron port-list|grep -v "+"|grep -v name|awk '{print $2}'))
++ neutron port-list
++ grep -v +
++ awk '{print $2}'
++ grep -v name
+ for id in '${id[@]}'
++ neutron port-show 04146caa-80cc-4042-a996-ff5354dc39c5
++ grep fed3fd15-20de-46d7-b58e-35031a4a95e9
+ [[ -n '' ]]
+ for id in '${id[@]}'
++ neutron port-show 24cdb9a0-739b-41b0-bd97-54083656b0c4
++ grep fed3fd15-20de-46d7-b58e-35031a4a95e9
+ [[ -n '' ]]
+ for id in '${id[@]}'
++ neutron port-show 2c02f6bc-bccb-4d7a-a3bc-b94d975ccc63
++ grep fed3fd15-20de-46d7-b58e-35031a4a95e9
+ [[ -n | device_id | fed3fd15-20de-46d7-b58e-35031a4a95e9 | ]]
+ neutron port-update 2c02f6bc-bccb-4d7a-a3bc-b94d975ccc63 --port-security-enabled=True
Updated port: 2c02f6bc-bccb-4d7a-a3bc-b94d975ccc63
+ for id in '${id[@]}'
++ neutron port-show 57a665e0-bf2c-4cf6-90a1-e3c3b3e4e9d2
++ grep fed3fd15-20de-46d7-b58e-35031a4a95e9
+ [[ -n '' ]]
+ for id in '${id[@]}'
++ neutron port-show 6d8fbcb6-0bdc-4a41-9c92-cb4136ad0280
++ grep fed3fd15-20de-46d7-b58e-35031a4a95e9
+ [[ -n '' ]]
+ for id in '${id[@]}'
++ neutron port-show c8e90541-e1ea-497d-8ada-d8235c027377
++ grep fed3fd15-20de-46d7-b58e-35031a4a95e9
+ [[ -n | device_id | fed3fd15-20de-46d7-b58e-35031a4a95e9 | ]]
+ neutron port-update c8e90541-e1ea-497d-8ada-d8235c027377 --port-security-enabled=True
Updated port: c8e90541-e1ea-497d-8ada-d8235c027377
+ for id in '${id[@]}'
++ neutron port-show cedd5e74-9df7-499f-92fa-30b95993fc4a
++ grep fed3fd15-20de-46d7-b58e-35031a4a95e9
+ [[ -n '' ]]
+ for id in '${id[@]}'
++ neutron port-show d3071bd9-6377-4950-b1cd-0325bd267d73
++ grep fed3fd15-20de-46d7-b58e-35031a4a95e9
/tmp/tacker/vHello_Tacker.sh: directly assign security group (unsupported in Mitaka Tacker)
+ [[ -n '' ]]
+ echo '/tmp/tacker/vHello_Tacker.sh: directly assign security group (unsupported in Mitaka Tacker)'
++ neutron security-group-list
++ awk '/ vHello / { print $2 }'
+ [[ -n '' ]]
+ neutron security-group-create vHello
Created a new security_group:
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Field                | Value                                                                                                                                      |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| description          |                                                                                                                                            |
| id                   | 07ca8d1e-8b58-45c2-a1a8-238f5a3873dd                                                                                                       |
| name                 | vHello                                                                                                                                     |
| security_group_rules | {"remote_group_id": null, "direction": "egress", "protocol": null, "description": "", "ethertype": "IPv4", "remote_ip_prefix": null,       |
|                      | "port_range_max": null, "security_group_id": "07ca8d1e-8b58-45c2-a1a8-238f5a3873dd", "port_range_min": null, "tenant_id":                  |
|                      | "0636394ba51f4014896ea4146cc445fd", "id": "f3e200ea-9ffb-49e8-bb12-94acf6c36607"}                                                          |
|                      | {"remote_group_id": null, "direction": "egress", "protocol": null, "description": "", "ethertype": "IPv6", "remote_ip_prefix": null,       |
|                      | "port_range_max": null, "security_group_id": "07ca8d1e-8b58-45c2-a1a8-238f5a3873dd", "port_range_min": null, "tenant_id":                  |
|                      | "0636394ba51f4014896ea4146cc445fd", "id": "e0d386d7-0d3c-4905-be2f-8955e9973f40"}                                                          |
| tenant_id            | 0636394ba51f4014896ea4146cc445fd                                                                                                           |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
+ neutron security-group-rule-create --direction ingress --protocol=TCP --port-range-min=22 --port-range-max=22 vHello
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| description | |
| direction | ingress |
| ethertype | IPv4 |
| id | b1f411cb-8d41-425c-8bfb-0239f0f49c7c |
| port_range_max | 22 |
| port_range_min | 22 |
| protocol | tcp |
| remote_group_id | |
| remote_ip_prefix | |
| security_group_id | 07ca8d1e-8b58-45c2-a1a8-238f5a3873dd |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
+-------------------+--------------------------------------+
+ neutron security-group-rule-create --direction ingress --protocol=TCP --port-range-min=80 --port-range-max=80 vHello
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| description | |
| direction | ingress |
| ethertype | IPv4 |
| id | 9dcee8d1-a93a-4934-b827-dd72f467cc6c |
| port_range_max | 80 |
| port_range_min | 80 |
| protocol | tcp |
| remote_group_id | |
| remote_ip_prefix | |
| security_group_id | 07ca8d1e-8b58-45c2-a1a8-238f5a3873dd |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
+-------------------+--------------------------------------+
+ openstack server add security group fed3fd15-20de-46d7-b58e-35031a4a95e9 vHello
+ openstack server add security group fed3fd15-20de-46d7-b58e-35031a4a95e9 default
/tmp/tacker/vHello_Tacker.sh: associate floating IP
+ echo '/tmp/tacker/vHello_Tacker.sh: associate floating IP'
+ get_floating_net
+ network_ids=($(neutron net-list|grep -v "+"|grep -v name|awk '{print $2}'))
++ neutron net-list
++ grep -v +
++ awk '{print $2}'
++ grep -v name
+ for id in '${network_ids[@]}'
++ neutron net-show fb6ff3ca-79cc-42ad-b502-d23c4bb90996
++ grep router:external
++ grep -i true
+ [[ | router:external | True | != '' ]]
+ floating_network_id=fb6ff3ca-79cc-42ad-b502-d23c4bb90996
+ for id in '${network_ids[@]}'
++ neutron net-show 0cce3f27-6902-457c-a023-1b24b59652d2
++ grep router:external
++ grep -i true
+ [[ '' != '' ]]
+ for id in '${network_ids[@]}'
++ neutron net-show 42c9e40e-8ad3-4ea1-9198-66f80bd55af9
++ grep router:external
++ grep -i true
+ [[ '' != '' ]]
+ [[ -n fb6ff3ca-79cc-42ad-b502-d23c4bb90996 ]]
++ openstack network show fb6ff3ca-79cc-42ad-b502-d23c4bb90996
++ awk '/ name / { print $4 }'
+ floating_network_name=external
++ neutron floatingip-create external
++ awk '/floating_ip_address/ { print $4 }'
+ fip=192.168.37.212
+ nova floating-ip-associate fed3fd15-20de-46d7-b58e-35031a4a95e9 192.168.37.212
+ echo '/tmp/tacker/vHello_Tacker.sh: get vHello server address'
/tmp/tacker/vHello_Tacker.sh: get vHello server address
++ openstack server show fed3fd15-20de-46d7-b58e-35031a4a95e9
++ awk '/ addresses / { print $6 }'
/tmp/tacker/vHello_Tacker.sh: wait 30 seconds for vHello server to startup
+ SERVER_IP=192.168.37.212
+ SERVER_URL=http://192.168.37.212
+ echo '/tmp/tacker/vHello_Tacker.sh: wait 30 seconds for vHello server to startup'
+ sleep 30
/tmp/tacker/vHello_Tacker.sh: start vHello web server
+ echo '/tmp/tacker/vHello_Tacker.sh: start vHello web server'
+ chown root /tmp/tacker/vHello.pem
+ ssh -i /tmp/tacker/vHello.pem -x -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ubuntu@192.168.37.212
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '192.168.37.212' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-36-generic x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage

Get cloud support with Ubuntu Advantage Cloud Guest:
  http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

sudo: unable to resolve host ta-4353-8f37-463a-ab59-b3fcb331463e-vdu1-eerkezglh4ph


<!DOCTYPE html>
<html>
<head>
<title>Hello World!</title>
<meta name="viewport" content="width=device-width, minimum-scale=1.0, initial-scale=1"/>
<style>
body { width: 100%; background-color: white; color: black; padding: 0px; margin: 0px; font-family: sans-serif; font-size:100%; }
</style>
</head>
<body>
Hello World!<br>
<a href="http://wiki.opnfv.org"><img src="https://www.opnfv.org/sites/all/themes/opnfv/logo.png"></a>
</body></html>
+ echo '/tmp/tacker/vHello_Tacker.sh: wait 10 seconds for vHello web server to startup'
+ sleep 10
/tmp/tacker/vHello_Tacker.sh: wait 10 seconds for vHello web server to startup
/tmp/tacker/vHello_Tacker.sh: verify vHello server is running
+ echo '/tmp/tacker/vHello_Tacker.sh: verify vHello server is running'
+ apt-get install -y curl
Reading package lists...
Building dependency tree...
Reading state information...
curl is already the newest version (7.47.0-1ubuntu2.1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
++ curl http://192.168.37.212
++ grep -c 'Hello World'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 442 100 442 0 0 134k 0 --:--:-- --:--:-- --:--:-- 143k
/tmp/tacker/vHello_Tacker.sh: Hooray!
+ [[ 2 == 0 ]]
+ pass
+ echo '/tmp/tacker/vHello_Tacker.sh: Hooray!'
+ set +x
+ '[' 0 -eq 1 ']'
+ pass
+ echo 'vHello_Tacker.sh: Hooray!'
vHello_Tacker.sh: Hooray!
+ set +x
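The start run above polls 'tacker vnf-show' until the VNF leaves PENDING_CREATE, bailing out if it hits ERROR. A sketch of that wait loop, with 'vnf_show' stubbed so it runs standalone (here it reports PENDING_CREATE until the third poll):

```shell
# Sketch of the wait-for-ACTIVE loop in the trace. vnf_show stands in for
# `tacker vnf-show hello-world-tacker`; the stub flips to ACTIVE on poll 3.
vnf_show() {
  if [ "$1" -lt 3 ]; then echo "| status | PENDING_CREATE |"
  else echo "| status | ACTIVE |"; fi
}

wait_active() {
  checks=0
  while true; do
    checks=$((checks + 1))
    out=$(vnf_show "$checks")
    case "$out" in
      *ACTIVE*) echo "ACTIVE after $checks polls"; return 0 ;;
      *ERROR*)  echo "VNF went to ERROR"; return 1 ;;
    esac
    sleep 1   # the real script sleeps 10s between polls
  done
}

wait_active
```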

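Mitaka Tacker did not set port security on the VDU's ports itself, so the start run patched it directly: list all Neutron ports, keep those whose device_id is the VDU server, and enable port security on each. A sketch of that filter, with 'neutron' stubbed and made-up short port IDs so it runs standalone:

```shell
# Sketch of the port-security workaround in the trace. The neutron function
# is a stub; only port-a and port-c "belong" to the VDU server here.
SERVER_ID=fed3fd15-20de-46d7-b58e-35031a4a95e9
neutron() {
  case "$1" in
    port-list)   printf '%s\n' port-a port-b port-c ;;
    port-show)
      case "$2" in
        port-a|port-c) echo "| device_id | $SERVER_ID |" ;;
        *)             echo "| device_id | other-server |" ;;
      esac ;;
    port-update) echo "Updated port: $2" ;;
  esac
}

updated=0
for id in $(neutron port-list); do
  # keep only ports attached to the VDU server, then enable port security
  if neutron port-show "$id" | grep -q "$SERVER_ID"; then
    neutron port-update "$id" --port-security-enabled=True
    updated=$((updated + 1))
  fi
done
echo "enabled port security on $updated of 3 ports"
```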
(showed that Tacker, the VNFM, was running in a Docker container)


[stack@undercloud tests]$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c34fec17629 copper-webapp "/bin/sh -c '/usr/sbi" 2 days ago Up 2 days 0.0.0.0:8257->80/tcp tiny_lalande
c95a7ed9aa4d ubuntu:xenial "/bin/bash" 2 days ago Up 2 days tacker

(attached to the Docker container running Tacker)


[stack@undercloud tests]$ sudo docker attach tacker
root@c95a7ed9aa4d:/#
root@c95a7ed9aa4d:/# ls /var/log/tacker
tacker.log
root@c95a7ed9aa4d:/# less /var/log/tacker/tacker.log

(In tacker.log, showed the process of heat-translator conversion of the TOSCA blueprint to Heat, and the invocation of Heat to create the stack which was then
visible through Horizon.)
root@c95a7ed9aa4d:/#

(hit CTRL-p CTRL-q to exit the container and leave it running)


[stack@undercloud tests]$

(showed the environment variables needed so the tests can be run. These were set up through "source overcloudrc" earlier; see above.)
[stack@undercloud tests]$ cat ~/overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.168.37.10:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export no_proxy=,192.168.37.10,192.0.2.3
export OS_PASSWORD=zuQZ8pra3E3DMtxm4jsxA4rqK
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
export OS_TENANT_NAME=admin

(showed the copper project test which creates two VMs, one with a config drive containing the VM ID)
[stack@undercloud tests]$ cd ../../copper
[stack@undercloud copper]$ cd tests/adhoc
[stack@undercloud adhoc]$ ls
smoke01-clean.sh smoke01.sh
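smoke01.sh boots cirros1 with config_drive=True (visible in the boot output below), so the guest can read its own identity without the metadata service. A sketch of how a guest could pull its VM ID off the config drive; the mount point and meta_data.json content here are stand-ins so the sketch runs standalone:

```shell
# Sketch of reading the VM ID from an OpenStack config drive. On a real VM
# the drive is mounted read-only from /dev/disk/by-label/config-2; here a
# temp directory stands in for the mount point, with made-up JSON content.
read_vm_id() {
  mnt=$1
  # the instance UUID lives in openstack/latest/meta_data.json
  sed -n 's/.*"uuid": "\([^"]*\)".*/\1/p' "$mnt/openstack/latest/meta_data.json"
}

mnt=$(mktemp -d)
mkdir -p "$mnt/openstack/latest"
cat > "$mnt/openstack/latest/meta_data.json" <<'EOF'
{"uuid": "eb0488dc-d2ed-4daa-a132-9796a7aa2a0a", "name": "cirros1"}
EOF
# real VM: sudo mount -o ro /dev/disk/by-label/config-2 "$mnt"

echo "VM ID from config drive: $(read_vm_id "$mnt")"
rm -rf "$mnt"
```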

(executed the script to set up the VMs etc.: https://git.opnfv.org/cgit/copper/plain/tests/adhoc/smoke01.sh)


[stack@undercloud adhoc]$ bash smoke01.sh
--2016-09-16 16:34:36-- https://git.opnfv.org/cgit/copper/plain/components/congress/install/bash/setenv.sh
Resolving git.opnfv.org (git.opnfv.org)... 198.145.29.81
Connecting to git.opnfv.org (git.opnfv.org)|198.145.29.81|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3122 (3.0K) [text/plain]
Saving to: ‘/home/stack/setenv.sh’

100%[===========================================================================================================================>] 3,122 --.-K/s in 0s

2016-09-16 16:34:36 (787 MB/s) - ‘/home/stack/setenv.sh’ saved [3122/3122]

Centos-based install
Setup undercloud environment so we can get overcloud Controller server address
Get address of Controller node
Create the environment file
Create cirros-0.3.3-x86_64 image
Create floating IP for external subnet
Create internal network
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-09-16T16:34:56 |
| description | |
| id | 273081c8-4c3c-4f3b-b16f-3f544b14a6c3 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1450 |
| name | internal |
| port_security_enabled | True |
| provider:network_type | vxlan |
| provider:physical_network | |
| provider:segmentation_id | 67 |
| qos_policy_id | |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
| updated_at | 2016-09-16T16:34:56 |
+---------------------------+--------------------------------------+
Create internal subnet
Created a new subnet:
+-------------------+--------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------+
| allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr | 10.0.0.0/24 |
| created_at | 2016-09-16T16:34:57 |
| description | |
| dns_nameservers | 8.8.8.8 |
| enable_dhcp | True |
| gateway_ip | 10.0.0.1 |
| host_routes | |
| id | 01d2aae4-8a4b-4052-a54a-d3300be345f0 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | internal |
| network_id | 273081c8-4c3c-4f3b-b16f-3f544b14a6c3 |
| subnetpool_id | |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
| updated_at | 2016-09-16T16:34:57 |
+-------------------+--------------------------------------------+
Create router
Created a new router:
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| description | |
| distributed | False |
| external_gateway_info | |
| ha | False |
| id | 21cf97bd-4d32-4f3e-bb01-31aa4d907ab2 |
| name | public_router |
| routes | |
| status | ACTIVE |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
+-------------------------+--------------------------------------+
Create router gateway
Set gateway for router public_router
Add router interface for internal network
Added interface a4fc6160-583e-4c77-a146-35b15786b632 to router public_router.
Wait up to a minute as 'neutron router-interface-add' blocks the neutron-api for some time...
Get the internal network ID: try 1
Create smoke01 security group
Created a new security_group:
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| description | |
| id | 10c560f4-7f0a-4330-aaad-3545f2990a89 |
| name | smoke01 |
| security_group_rules | {"remote_group_id": null, "direction": "egress", "protocol": null, "description": "", "ethertype": "IPv4", "remote_ip_prefix": null, |
| | "port_range_max": null, "security_group_id": "10c560f4-7f0a-4330-aaad-3545f2990a89", "port_range_min": null, "tenant_id": |
| | "0636394ba51f4014896ea4146cc445fd", "id": "2e726615-51ce-4376-a6dc-81256b6125d7"} |
| | {"remote_group_id": null, "direction": "egress", "protocol": null, "description": "", "ethertype": "IPv6", "remote_ip_prefix": null, |
| | "port_range_max": null, "security_group_id": "10c560f4-7f0a-4330-aaad-3545f2990a89", "port_range_min": null, "tenant_id": |
| | "0636394ba51f4014896ea4146cc445fd", "id": "a15dd8b9-9dcb-4122-aec4-f55ea1a7cd8d"} |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
Add rule to smoke01 security group
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| description | |
| direction | ingress |
| ethertype | IPv4 |
| id | 9e02d60b-87a8-4117-bceb-ec149b6a5015 |
| port_range_max | 22 |
| port_range_min | 22 |
| protocol | tcp |
| remote_group_id | |
| remote_ip_prefix | 0.0.0.0/0 |
| security_group_id | 10c560f4-7f0a-4330-aaad-3545f2990a89 |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
+-------------------+--------------------------------------+
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| description | |
| direction | ingress |
| ethertype | IPv4 |
| id | 8b2d8e7d-c716-4f61-9794-299d163f19bb |
| port_range_max | |
| port_range_min | |
| protocol | icmp |
| remote_group_id | |
| remote_ip_prefix | 0.0.0.0/0 |
| security_group_id | 10c560f4-7f0a-4330-aaad-3545f2990a89 |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
+-------------------+--------------------------------------+
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| description | |
| direction | egress |
| ethertype | IPv4 |
| id | 78ff53c2-4bfd-4d58-baee-1610755f7e57 |
| port_range_max | 22 |
| port_range_min | 22 |
| protocol | tcp |
| remote_group_id | |
| remote_ip_prefix | 0.0.0.0/0 |
| security_group_id | 10c560f4-7f0a-4330-aaad-3545f2990a89 |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
+-------------------+--------------------------------------+
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| description | |
| direction | egress |
| ethertype | IPv4 |
| id | 0525f7e3-a8c5-4c9f-8012-dcf6bef0a53b |
| port_range_max | |
| port_range_min | |
| protocol | icmp |
| remote_group_id | |
| remote_ip_prefix | 0.0.0.0/0 |
| security_group_id | 10c560f4-7f0a-4330-aaad-3545f2990a89 |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
+-------------------+--------------------------------------+
Create Nova key pair
/home/stack/.ssh/known_hosts updated.
Original contents retained as /home/stack/.ssh/known_hosts.old
Boot cirros1
+--------------------------------------+------------------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance-00000023 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | iby6ecuH4TaG |
| config_drive | True |
| created | 2016-09-16T16:35:26Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | eb0488dc-d2ed-4daa-a132-9796a7aa2a0a |
| image | cirros-0.3.3-x86_64 (5607ca64-e21f-4137-b4d7-f7ba381ec076) |
| key_name | smoke01 |
| name | cirros1 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 0636394ba51f4014896ea4146cc445fd |
| properties | |
| security_groups | [{u'name': u'smoke01'}] |
| status | BUILD |
| updated | 2016-09-16T16:35:26Z |
| user_id | 448364052ccc4117912e2def95066b23 |
+--------------------------------------+------------------------------------------------------------+
Get cirros1 instance ID
Wait for cirros1 to go ACTIVE
Associate floating IP to cirros1
Boot cirros2
+--------------------------------------+------------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hostname | cirros2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000024 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-a1zegbty |
| OS-EXT-SRV-ATTR:root_device_name | - |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | yDLs5fFkRAf4 |
| config_drive | |
| created | 2016-09-16T16:35:48Z |
| description | - |
| flavor | m1.tiny (1) |
| hostId | |
| host_status | |
| id | b74b44c5-5db2-448d-a108-6bd64c9330cf |
| image | cirros-0.3.3-x86_64 (5607ca64-e21f-4137-b4d7-f7ba381ec076) |
| key_name | - |
| locked | False |
| metadata | {} |
| name | cirros2 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | smoke01 |
| status | BUILD |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
| updated | 2016-09-16T16:35:48Z |
| user_id | 448364052ccc4117912e2def95066b23 |
+--------------------------------------+------------------------------------------------------------+
Verify internal network connectivity
Warning: Permanently added '192.168.37.216' (RSA) to the list of known hosts.
Verify internal network connectivity
Warning: Permanently added '192.168.37.216' (RSA) to the list of known hosts.
Verify public network connectivity
Warning: Permanently added '192.168.37.216' (RSA) to the list of known hosts.
Hooray!

(connected to the cirros1 VM to show the metadata and config drive)


(noted that all that was needed to create the config drive was the "--config-drive True" flag on the Nova boot command)
(noted that it's not yet clear how a config drive can be requested through the TOSCA/Tacker blueprints, so for now we may have to use scripts such as smoke01.sh directly)
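A minimal sketch of the boot command smoke01.sh presumably issues (the flavor, image, and key names are taken from the transcript above; the exact flags in smoke01.sh may differ). The helper only assembles the command string so it can be inspected without a live cloud:

```shell
#!/bin/sh
# Sketch: the "Boot cirros1" output above shows config_drive=True, which is
# produced simply by passing --config-drive True to nova boot.
# build_boot_cmd assembles (but does not run) the command, so the flags can
# be reviewed offline; pipe to sh to actually execute against a cloud.
build_boot_cmd() {
  name=$1
  echo "nova boot --config-drive True --flavor m1.tiny" \
       "--image cirros-0.3.3-x86_64 --key-name smoke01" \
       "--security-groups smoke01 $name"
}
build_boot_cmd cirros1
```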
[stack@undercloud adhoc]$ ssh -i /tmp/smoke01 -x -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no cirros@192.168.37.216
Warning: Permanently added '192.168.37.216' (RSA) to the list of known hosts.
$

(mounted the config drive)


$ sudo mount /dev/sr0 /mnt/

(showed that the metadata is available through a local web server)


$ curl http://169.254.169.254
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
latest$
$ curl http://169.254.169.254/latest
$ curl http://169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups$

(showed that the instance-id in this metadata is *NOT* the Nova VM ID. Still trying to figure out how to add the VM ID here.)
$ curl http://169.254.169.254/latest/meta-data/instance-id
i-00000023$
$ hostname
cirros1
$ curl http://169.254.169.254/latest/meta-data/public-hostname
cirros1$
$
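A possible workaround for the missing VM ID, assuming the metadata service also serves the OpenStack-format tree (the same format seen on the config drive below) at http://169.254.169.254/openstack/latest/meta_data.json, whose "uuid" field is the Nova VM ID. CirrOS has no jq, so a sed scrape is shown here against a stand-in payload; on the VM you would feed it the curl output instead:

```shell
#!/bin/sh
# On the VM the real call would be (assumption, not shown in this demo):
#   curl http://169.254.169.254/openstack/latest/meta_data.json
# Here a sample payload stands in for the curl output so the sed extraction
# of the "uuid" field can be demonstrated offline.
json='{"uuid": "eb0488dc-d2ed-4daa-a132-9796a7aa2a0a", "name": "cirros1"}'
echo "$json" | sed -n 's/.*"uuid": "\([^"]*\)".*/\1/p'
```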

(showed the similar metadata available in the config drive, which *does* contain the Nova VM ID, as the "uuid" field)
$ ls /mnt/openstack/latest
meta_data.json network_data.json vendor_data.json
$ cat /mnt/openstack/latest/meta_data.json
{"admin_pass": "iby6ecuH4TaG", "random_seed":
"PLkEJQD0jLttZFy7p5XGiLG3pJNb0hNNhUAPdhTk5zicBvY50ceydHRhzC/ybvpFnIn1N4YIKXzb+HFF4Lf9lKhDROwZbHKE5cr2GPuLbEV9kYLFFqwq1W7t2A5YHTtu4vbWHHTGC8MqLiXtlCvI7G/EDTii7mU2SCcT7l3qtTbVGtxSvxY/v4BQCIGfSNo+z9OLb7l7yV5/GV4b66uAKpJ1C
2bJodreVK8Ii2wC/lf6n2pj3pPeatqm1oNL4JLiHbdqtpARVbznv7iPhfv113rf/KNhgLasFc2z3YXXVKwqBMY8OtgKrtBEFAdWM9/lGV0idRkjakWvGlp1vAyCxgnEZkHumcRQsVB23Ct2rZ1H5/ECTZzvf2wlHlSB/i/lpLlXve7Q1gx/Y9Un2m3cFo1EaN9kmT9uI0Igj32xnSOO3VDFzc8
Nai/RaZQv/xtv3MCOPUi/v0HxsDQbh+vtNEhshjX3C+jNRYRiVEPsS1nnmKEPLxK7F/trXAPsiTJwkSIgBf7uISQc+6Ucjt6xtGxA4rqrC7JOTrZBf8Vdp+BxRXpfwLjDlBKF9Ufr6I/9Q6kmlNTR4wgMREQ4rLdR1K6SKuc5NRPHChopgaGViCqK3exjY0EO2fPe8LXE1weqvDO2TKrsV7Wsx
LmK7ZdcB7QRTFX4B9qbs6GjTEJsw9k=", "uuid": "eb0488dc-d2ed-4daa-a132-9796a7aa2a0a", "availability_zone": "nova", "keys": [{"data": "ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQCwCmOAd4AxhIQl/Fvr8HPT6UMSMGadS/cn15x1lE6hjP+IPp3n3baCfMQrNy3AbXA2VzA4EAfEjLqO0HiFJloMdwwXBUliT5cKOHxRHocj3RRVxeKqoAiQP6hxPmOUvIFZZ/GYbFH6frpu6GhiT4WFZ6BxtRSFvLsM5y5/zJk8K8gOJc0rtHGqIu5bhv
/zRNP1d+KmGYkMA8xgpdDENYWY9GVUPSsnRgP6V7lDfTurzGNSEh0GTvWzEmbxlY/qxbT9atCzomMFnrFFZWKHIZQVcjLoDzic3/S4IGsY0jR3ENdizRLnJKvLmxMtoP1k5ND3/8BAcWIys0LXNI0dzQd/ Generated-by-Nova", "type": "ssh", "name": "smoke01"}],
"hostname": "cirros1", "launch_index": 0, "public_keys": {"smoke01": "ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQCwCmOAd4AxhIQl/Fvr8HPT6UMSMGadS/cn15x1lE6hjP+IPp3n3baCfMQrNy3AbXA2VzA4EAfEjLqO0HiFJloMdwwXBUliT5cKOHxRHocj3RRVxeKqoAiQP6hxPmOUvIFZZ/GYbFH6frpu6GhiT4WFZ6BxtRSFvLsM5y5/zJk8K8gOJc0rtHGqIu5bhv
/zRNP1d+KmGYkMA8xgpdDENYWY9GVUPSsnRgP6V7lDfTurzGNSEh0GTvWzEmbxlY/qxbT9atCzomMFnrFFZWKHIZQVcjLoDzic3/S4IGsY0jR3ENdizRLnJKvLmxMtoP1k5ND3/8BAcWIys0LXNI0dzQd/ Generated-by-Nova"}, "project_id":
"0636394ba51f4014896ea4146cc445fd", "name": "cirros1"}$
$ exit
Connection to 192.168.37.216 closed.

(showed the Tacker data for the VNF. The "instance_id" here is the *Heat* stack ID, not the Nova VM ID. Through this Heat ID you can get the details of the stack,
e.g. its VMs.)
[stack@undercloud adhoc]$ sudo docker attach tacker
root@c95a7ed9aa4d:/#
root@c95a7ed9aa4d:/# source /tmp/tacker/admin-openrc.sh
root@c95a7ed9aa4d:/# tacker vnf-list
+-----------------------------+--------------------+-------------+---------------------------+--------+-----------------------------+------------------------+
| id | name | description | mgmt_url | status | vim_id | placement_attr |
+-----------------------------+--------------------+-------------+---------------------------+--------+-----------------------------+------------------------+
| 156b4353-8f37-463a- | hello-world-tacker | Hello World | {"VDU1": "192.168.200.4"} | ACTIVE | 087e006d- | {u'vim_name': u'VIM0'} |
| ab59-b3fcb331463e | | | | | 1ee4-4ac8-b4b9-14d8b790f3ae | |
+-----------------------------+--------------------+-------------+---------------------------+--------+-----------------------------+------------------------+
root@c95a7ed9aa4d:/# tacker vnf-show hello-world-tacker
+----------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+----------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| attributes | {"heat_template": "heat_template_version: 2013-05-23\ndescription: 'Hello World\n\n '\nparameters: {}\nresources:\n VDU1:\n type: |
| | OS::Nova::Server\n properties:\n availability_zone: nova\n config_drive: false\n flavor: {get_resource: VDU1_flavor}\n |
| | image: models-xenial-server\n networks:\n - port:\n get_resource: CP1\n - port:\n get_resource: CP2\n |
| | user_data_format: SOFTWARE_CONFIG\n CP1:\n type: OS::Neutron::Port\n properties:\n network: vnf_mgmt\n port_security_enabled: |
| | false\n CP2:\n type: OS::Neutron::Port\n properties:\n network: vnf_private\n port_security_enabled: false\n VDU1_flavor:\n |
| | properties: {disk: 4, ram: 1024, vcpus: 1}\n type: OS::Nova::Flavor\noutputs:\n mgmt_ip-VDU1:\n value:\n get_attr: [CP1, fixed_ips, |
| | 0, ip_address]\n", "monitoring_policy": "{\"vdus\": {}}"} |
| description | Hello World |
| id | 156b4353-8f37-463a-ab59-b3fcb331463e |
| instance_id | fe03972c-4961-4c25-a7cc-197945ab67f5 |
| mgmt_url | {"VDU1": "192.168.200.4"} |
| name | hello-world-tacker |
| placement_attr | {"vim_name": "VIM0"} |
| status | ACTIVE |
| tenant_id | 0636394ba51f4014896ea4146cc445fd |
| vim_id | 087e006d-1ee4-4ac8-b4b9-14d8b790f3ae |
+----------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
root@c95a7ed9aa4d:/#
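A sketch of how the Heat stack ID above could be resolved to the underlying Nova server, assuming the `openstack stack resource` commands are available (they require a live cloud, so the helper below only assembles the commands for review; the VDU1 resource name comes from the heat_template shown above):

```shell
#!/bin/sh
# Sketch: walk from Tacker's instance_id (a Heat stack) to the Nova VM.
# stack_to_commands prints the commands rather than running them, since
# they need cloud credentials; the physical_resource_id of VDU1 should be
# the Nova server UUID.
stack_to_commands() {
  stack_id=$1
  echo "openstack stack resource list $stack_id"
  echo "openstack stack resource show $stack_id VDU1 -c physical_resource_id"
}
stack_to_commands fe03972c-4961-4c25-a7cc-197945ab67f5
```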
