From: Mathieu Lemay Date: Thu, 24 Apr 2014 20:33:39 +0000 (-0400) Subject: Initial Documentation Commit X-Git-Tag: release/helium-sr1~82 X-Git-Url: https://git.opendaylight.org/gerrit/gitweb?a=commitdiff_plain;h=refs%2Fchanges%2F94%2F6394%2F3;p=docs.git Initial Documentation Commit Signed-off-by: Mathieu Lemay Change-Id: Ib8aa0d053db544c783a0c344ef91c19169120a43 --- diff --git a/.gitignore b/.gitignore new file mode 100644 index 000000000..eb5a316cb --- /dev/null +++ b/.gitignore @@ -0,0 +1 @@ +target diff --git a/README.md b/README.md new file mode 100644 index 000000000..fce48f389 --- /dev/null +++ b/README.md @@ -0,0 +1 @@ +This project provides manuals and documentation for OpenDaylight diff --git a/manuals/common/app_support.xml b/manuals/common/app_support.xml new file mode 100644 index 000000000..756492318 --- /dev/null +++ b/manuals/common/app_support.xml @@ -0,0 +1,109 @@ + + + + Community support + Many resources are available to help you run and use OpenDaylight. Members of the + OpenDaylight community can answer questions and help confirm suspected bugs. We are constantly + improving and adding to the main features of OpenDaylight, but if you have any problems, do + not hesitate to ask. Use the following resources to get OpenDaylight support and + troubleshoot your existing installations. +
+ Documentation + For the available OpenDaylight documentation, see docs.opendaylight.org. + The following books explain how to install and operate the OpenDaylight SDN Controller: + + + + Installation Guide + + + + + End + User Guide + + + + + Command Line Interface Reference + + + + The following documentation provides reference and guidance information for the + OpenDaylight APIs: + + + OpenDaylight API + Reference + + +
+
+ OpenDaylight mailing lists + A great way to get answers and insights is to post your question or problematic + scenario to an OpenDaylight mailing list. You can learn from and help others who might + have similar issues. To subscribe or view the archives, go to https://lists.opendaylight.org/mailman/listinfo. You might be interested in + the other mailing lists for specific projects or development, which you can find on the wiki. + +
+
+ The OpenDaylight wiki + The OpenDaylight + wiki contains a broad range of topics, but some of the information can be + difficult to find or is a few pages deep. Fortunately, the wiki search feature enables + you to search by title or content. If you search for specific information, such as + networking or a particular project, you can find lots of relevant material. More is being added all the + time, so be sure to check back often. You can find the search box in the upper right + corner of any OpenDaylight wiki page. +
+
+ Bugzilla Bugs + The OpenDaylight community values your setup and testing efforts and wants your + feedback. To log a bug, you must sign up for an OpenDaylight account. You + can view existing bugs and report bugs in the Bugzilla bug tracker at . Use the search feature to determine + whether the bug has already been reported or, even better, already fixed. If it still + seems like your bug is unreported, fill out a bug report. + Some tips: + + + Give a clear, concise summary! + + + Provide as much detail as possible in the description. Paste in your command + output or stack traces, links to screenshots, and so on. + + + Be sure to include the software and package versions that you are using, + especially if you are using a development branch, such as the "Hydrogen + release" vs git commit + bc79c3ecc55929bac585d04a03475b72e06a3208. + + + Any deployment-specific information is helpful, such as Ubuntu 12.04 or a + multi-node install. + +
+
+ The OpenDaylight IRC channel + The OpenDaylight community is usually available in the #opendaylight IRC channel on + the Freenode network. You can hang out, ask questions, or get immediate feedback for + urgent and pressing issues. To get started, install an IRC client or use the browser-based client + at http://webchat.freenode.net/. + You can also use Colloquy (Mac OS X, http://colloquy.info/), mIRC (Windows, http://www.mirc.com/), or XChat (Linux). + When you are in the IRC channel and want to share code or command output, the generally + accepted method is to use a pastebin service. +
+
diff --git a/manuals/glossary/glossary-terms.xml b/manuals/glossary/glossary-terms.xml new file mode 100644 index 000000000..a6ed01698 --- /dev/null +++ b/manuals/glossary/glossary-terms.xml @@ -0,0 +1,4953 @@ + + + Glossary + + + Licensed under the Apache License, Version 2.0 (the + "License"); you may not use this file except in + compliance with the License. You may obtain a copy of + the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in + writing, software distributed under the License is + distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR + CONDITIONS OF ANY KIND, either express or implied. See + the License for the specific language governing + permissions and limitations under the License. + + + + + A + + absolute limit + + Impassable limits for guest VMs. Settings include + total RAM size, maximum number of vCPUs, and + maximum disk size. + + + + access control list + + A list of permissions attached to an object. An ACL specifies which users or system processes have access to objects. It also +defines which operations can be performed on specified objects. +Each entry in a typical ACL specifies a subject and an operation. For instance, ACL entry, (Alice, delete), for a file gives Alice permission to delete the file. + + + + access key + + Alternative term for an Amazon EC2 access key. + See EC2 Access key. + + + + account + + The Object Storage context of an account. Do not confuse + with a user account from an authentication service such + as Active Directory, /etc/passwd, OpenLDAP, + OpenStack Identity Service, and so on. + + + + account auditor + + Checks for missing replicas and incorrect or + corrupted objects in a specified Object Storage account by + running queries against the back-end SQLite + database. + + + + account database + + A SQLite database that contains Object Storage accounts + and related metadata and that the + accounts server accesses. 
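The access control list entry described above — a (subject, operation) pair such as (Alice, delete) — can be sketched as a minimal check. This is an illustrative sketch only; the `acl_allows` name and the sample entries are invented for the example, not taken from OpenDaylight or OpenStack code:

```python
# Minimal sketch of an access control list as a set of
# (subject, operation) pairs; the names here are illustrative.
acl = {
    ("Alice", "delete"),  # Alice may delete the file
    ("Bob", "read"),      # Bob may only read it
}

def acl_allows(acl, subject, operation):
    """Return True if the ACL grants `operation` to `subject`."""
    return (subject, operation) in acl
```

With these entries, `acl_allows(acl, "Alice", "delete")` is true while `acl_allows(acl, "Bob", "delete")` is false, matching the (Alice, delete) example in the definition.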
+ + + + account reaper + + An Object Storage worker that scans for and deletes + account databases that the account server has marked + for deletion. + + + + + account server + + Lists containers in Object Storage and stores container + information in the account database. + + + + account service + + An Object Storage component that provides account services + such as list, create, modify, and audit. Do not + confuse with OpenStack Identity Service, OpenLDAP, or + similar user account services. + + + + accounting + + The Compute Service provides accounting information + through the event notification and system usage data + facilities. + + + + ACL + + See access control list. + + + + active/active configuration + + In a high availability setup with an active/active + configuration, several systems share the load and, + if one fails, the load is distributed to the + remaining systems. + + + + + Active Directory + + Authentication and Identity Service by + Microsoft, based on LDAP. Supported in + OpenStack. + + + + active/passive configuration + + In a high availability setup with an + active/passive configuration, systems are set up to + bring additional resources online to replace those that + have failed. + + + + + address pool + + A group of fixed and/or floating IP addresses + that are assigned to a project and can be + used by or assigned to the VM instances in a + project. + + + + admin API + + A subset of API calls that are accessible to + authorized administrators and are generally not + accessible to end users or the public internet. It + can exist as a separate service (keystone) or can + be a subset of another API (nova). + + + + admin server + + In the context of the Identity Service, the worker + process that provides access to the admin API. + + + + Advanced Message Queuing Protocol + (AMQP) + + The open standard messaging protocol used by + OpenStack components for intra-service communications, + provided by RabbitMQ, Qpid, or + ZeroMQ.
+ + + + Advanced RISC Machine (ARM) + + Lower power consumption CPU often found in + mobile and embedded devices. Supported by + OpenStack. + + + + alert + + The Compute Service can send alerts through its + notification system, which includes a facility to + create custom notification drivers. Alerts can be + sent to and displayed on the horizon dashboard. + + + + allocate + + The process of taking floating IP address from + the address pool so it can be associated with a + fixed IP on a guest VM instance. + + + + Amazon Kernel Image (AKI) + + Both a VM container format and disk format. + Supported by Image Service. + + + + Amazon Machine Image (AMI) + + Both a VM container format and disk format. + Supported by Image Service. + + + + Amazon Ramdisk Image (ARI) + + Both a VM container format and disk format. + Supported by Image Service. + + + + AMQP + + Advanced Message Queue Protocol. An open + Internet protocol for reliably sending and + receiving messages. It enables building a diverse, + coherent messaging ecosystem. + + + + Anvil + + A project that ports the shell script-based + project named devstack to Python. + + + + Apache + + The Apache Software Foundation supports + the Apache community of open-source software + projects. These projects provide software products for the + public good. + + + + Apache License 2.0 + + All OpenStack core projects are provided under + the terms of the Apache License 2.0 + license. + + + + Apache Web Server + + The most common web server software currently + used on the Internet. + + + + API + + Application programming interface. + + + + API endpoint + + The daemon, worker, or service that a client + communicates with to access an API. API endpoints + can provide any number of services such as + authentication, sales data, performance + metrics, Compute VM commands, census data, and so + on. + + + + API extension + + + Custom modules that extend some OpenStack core APIs. 
+ + + + + API extension plug-in + + Alternative term for a Networking plug-in or + Networking API extension. + + + + API key + + Alternative term for an API token. + + + + API server + + Any node running a daemon or worker that + provides an API endpoint. + + + + API token + + Passed to API requests and used by OpenStack to + verify that the client is authorized to run the + requested operation. + + + + API version + + In OpenStack, the API version for a project is + part of the URL. For example, + example.com/nova/v1/foobar. + + + + applet + + A Java program that can be embedded into a web + page. + + + + Application Programming Interface + (API) + + A collection of specifications used to access a + service, application, or program. Includes service + calls, required parameters for each call, and the + expected return values. + + + + application server + + A piece of software that makes available another + piece of software over a network. + + + + Application Service Provider (ASP) + + Companies that rent specialized applications + that help businesses and organizations provide + additional services with less cost. + + + + arptables + + Tool used for maintaining Address Resolution Protocol + packet filter rules in the Linux kernel firewall modules. + Used along with iptables, ebtables, and ip6tables in + Compute to provide firewall services for VMs. + + + + associate + + The process associating a Compute floating IP + address with a fixed IP address. + + + + Asynchronous JavaScript and XML + (AJAX) + + A group of interrelated web development + techniques used on the client-side to create + asynchronous web applications. Used extensively in + horizon. + + + + ATA over Ethernet (AoE) + + A disk storage protocol tunneled within + Ethernet. + + + + attach + + The process of connecting a VIF or vNIC to a L2 + network in Networking. In the context of Compute, this + process connects a storage volume to an + instance. 
+ + + + attachment (network) + + Association of an interface ID to a logical + port. Plugs an interface into a port. + + + + auditing + + Provided in Compute through the system usage data + facility. + + + + auditor + + A worker process that verifies the integrity + of Object Storage objects, containers, and accounts. + Auditors is the collective term for the Object Storage + account auditor, container auditor, and object + auditor. + + + + Austin + + Project name for the initial release of + OpenStack. + + + + auth node + + Alternative term for an Object Storage authorization + node. + + + + authentication + + The process that confirms that the user, + process, or client is really who they say they are + through private key, secret token, password, + fingerprint, or similar method. + + + + authentication token + + A string of text provided to the client after + authentication. Must be provided by the user or + process in subsequent requests to the API + endpoint. + + + + AuthN + + The Identity Service component that provides + authentication services. + + + + authorization + + The act of verifying that a user, process, or + client is authorized to perform an action. + + + + authorization node + + An Object Storage node that provides authorization + services. + + + + AuthZ + + The Identity Service component that provides high-level + authorization services. + + + + Auto ACK + + Configuration setting within RabbitMQ that + enables or disables message acknowledgment. + Enabled by default. + + + + auto declare + + A Compute RabbitMQ setting that determines if a + message exchange is automatically created when the + program starts. + + + + availability zone + + An Amazon EC2 concept of an isolated area that + is used for fault tolerance. Do not confuse with + an OpenStack Compute zone or cell. + + + + AWS + + Amazon Web Services. 
+ + + + + + B + + back-end + + Interactions and processes that are obfuscated + from the user, such as Compute volume mount, data + transmission to an iSCSI target by a daemon, or + Object Storage object integrity checks. + + + + + back-end catalog + + The storage method used by the Identity Service catalog + service to store and retrieve information about + API endpoints that are available to the client. + Examples include a SQL database, LDAP database, or + KVS back end. + + + + back-end store + + The persistent data store used to save and + retrieve information for a service, such as lists of + Object Storage objects, current state of guest VMs, lists + of user names, and so on. Also, the method that the + Image Service uses to get and store VM images. + Options include Object Storage, local file system, S3, and + HTTP. + + + + bandwidth + + The amount of available data used by + communication resources such as the Internet. + Represents the amount of data that is used to + download things or the amount of data available to + download. + + + + bare + + An Image Service container format that indicates that no + container exists for the VM image. + + + + base image + + An OpenStack-provided image. + + + + Bexar + + A grouped release of projects related to + OpenStack that came out in February of 2011. It + included Compute (nova) and Object Storage (swift) + only. + + + + binary + + Information that consists solely of ones and + zeroes, which is the language of computers. + + + + bit + + A bit is a single-digit number in base 2 + (either a zero or a one). Bandwidth usage is + measured in bits-per-second. + + + + bit-per-second (BPS) + + The universal measurement of how quickly data is + transferred from place to place. + + + + block device + + A device that moves data in the form of blocks. + These device nodes interface the devices, such as + hard disks, CD-ROM drives, flash drives, and other + addressable regions of memory.
+ + + + block migration + + A method of VM live migration used by KVM to + evacuate instances from one host to another with + very little downtime during a user-initiated + switch-over. Does not require shared storage. + Supported by Compute. + + + + Block Storage + + The OpenStack core project that enables management + of volumes, volume snapshots, and volume types. The + project name of Block Storage is cinder. + + + + + BMC + + Baseboard Management Controller. The + intelligence in the IPMI architecture, which is a + specialized micro-controller that is embedded on + the motherboard of a computer and acts as a + server. Manages the interface between system + management software and platform hardware. + + + + bootable disk image + + A type of VM image that exists as a single, + bootable file. + + + + Bootstrap Protocol (BOOTP) + + A network protocol used by a network client to + obtain an IP address from a configuration server. + Provided in Compute through the dnsmasq daemon when + using either the FlatDHCP manager or VLAN manager + network manager. + + + + browser + + Any client software that enables a computer or + device to access the Internet. + + + + builder file + + Contains configuration information that Object + Storage uses to reconfigure a ring or recreate it from + scratch after a serious failure. + + + + + button class + + A group of related button types within horizon. + Buttons to start, stop, and suspend VMs are in one + class. Buttons to associate and disassociate + floating IP addresses are in another class, and so + on. + + + + byte + + Set of bits that make up a single character; + there are usually 8 bits to a byte. + + + + + + C + + CA + + Certificate Authority or Certification + Authority. In cryptography, an entity that issues + digital certificates. The digital certificate + certifies the ownership of a public key by the + named subject of the certificate. 
This enables + others (relying parties) to rely upon signatures + or assertions made by the private key that + corresponds to the certified public key. In this + model of trust relationships, a CA is a trusted + third party for both the subject (owner) of the + certificate and the party relying upon the + certificate. CAs are characteristic of many public + key infrastructure (PKI) schemes. + + + + cache pruner + + A program that keeps the Image Service VM image + cache at or below its configured maximum size. + + + + Cactus + + An OpenStack grouped release of projects that + came out in the spring of 2011. It included + Compute (nova), Object Storage (swift), and the + Image service (glance). + + + + CALL + + One of the RPC primitives used by the OpenStack + message queue software. Sends a message and waits + for a response. + + + + capability + + Defines resources for a cell, including CPU, + storage, and networking. Can apply to the specific + services within a cell or a whole cell. + + + + capacity cache + + A Compute back-end database table that contains + the current workload, amount of free RAM, and + number of VMs running on each host. Used to + determine on which host a VM starts. + + + + capacity updater + + A notification driver that monitors VM instances + and updates the capacity cache as needed. + + + + CAST + + One of the RPC primitives used by the OpenStack + message queue software. Sends a message and does + not wait for a response. + + + + catalog + + + A list of API endpoints that are available to a user + after authentication with the Identity Service. + + + + + catalog service + + + An Identity Service service that lists API endpoints + that are available to a user after authentication + with the Identity Service. + + + + + ceilometer + + The project name for the Telemetry service, which + is an integrated project that provides metering and + measuring facilities for OpenStack.
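The CALL and CAST primitives defined above differ only in whether the sender waits for a reply. A toy in-process dispatcher — not the actual message-queue implementation, just a sketch of the contrast — makes the distinction concrete:

```python
# Toy illustration of CALL (send and wait for a response) versus
# CAST (send and forget). The Dispatcher class is invented for this
# example; it is not the OpenStack RPC code.
class Dispatcher:
    def __init__(self):
        self.handlers = {}

    def register(self, topic, fn):
        """Associate a handler function with a topic."""
        self.handlers[topic] = fn

    def call(self, topic, payload):
        # CALL semantics: invoke the handler and wait for its result.
        return self.handlers[topic](payload)

    def cast(self, topic, payload):
        # CAST semantics: invoke the handler but discard any response.
        self.handlers[topic](payload)
        return None
```

A caller using `call()` blocks until the handler returns a value, while `cast()` always hands back nothing, mirroring the "does not wait for a response" wording in the CAST entry.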
+ + + + cell + + Provides logical partitioning of Compute resources + in a child and parent relationship. Requests are + passed from parent cells to child cells if the + parent cannot provide the requested + resource. + + + + cell forwarding + + A Compute option that enables parent cells to pass + resource requests to child cells if the parent + cannot provide the requested resource. + + + + cell manager + + The Compute component that contains a list of the + current capabilities of each host within the cell + and routes requests as appropriate. + + + + CentOS + + A Linux distribution that is compatible with + OpenStack. + + + + Ceph + + Massively scalable distributed storage system + that consists of an object store, block store, and + POSIX-compatible distributed file system. + Compatible with OpenStack. + + + + CephFS + + The POSIX-compliant file system provided by + Ceph. + + + + certificate authority + + A simple certificate authority provided by Compute + for cloudpipe VPNs and VM image decryption. + + + + Challenge-Handshake Authentication Protocol + (CHAP) + + An iSCSI authentication method supported by + Compute. + + + + chance scheduler + + A scheduling method used by Compute that randomly + chooses an available host from the pool. + + + + changes since + + A Compute API parameter that downloads + changes to the requested item since your last + request, instead of downloading a new, fresh set + of data and comparing it against the old + data. + + + + Chef + + An operating system configuration management + tool supporting OpenStack deployments. + + + + child cell + + If a requested resource such as CPU time, disk + storage, or memory is not available in the parent + cell, the request is forwarded to its associated + child cells. If the child cell can fulfill the + request, it does. Otherwise, it attempts to pass + the request to any of its children. + + + + cinder + + A core OpenStack project that provides block + storage services for VMs. 
+ + + + Cisco neutron plug-in + + A Networking plug-in for Cisco devices and + technologies including UCS and Nexus. + + + + cloud architect + + A person who plans, designs, and oversees the + creation of clouds. + + + + cloud computing + + A model that enables access to a shared pool of + configurable computing resources, such as + networks, servers, storage, applications, and + services, that can be rapidly provisioned and + released with minimal management effort or service + provider interaction. + + + + cloud controller + + Collection of Compute components that represent the + global state of the cloud, talks to services such + as Identity Service authentication, Object Storage, + and node/storage workers through a queue. + + + + cloud controller node + + A node that runs network, volume, API, scheduler + and image services. Each service may be broken out + into separate nodes for scalability or + availability. + + + + Cloud Data Management Interface + (CDMI) + + SINA standard that defines a RESTful API for + managing objects in the cloud, currently + unsupported in OpenStack. + + + + Cloud Infrastructure Management Interface + (CIMI) + + An in-progress specification for cloud + management. Currently unsupported in + OpenStack. + + + + cloud-init + + A package commonly installed in VM images that + performs initialization of an instance after boot + using information that it retrieves from the + metadata service such as the SSH public key and + user data. + + + + cloudadmin + + One of the default roles in the Compute RBAC + system. Grants complete system access. + + + + cloudpipe + + A Compute service that creates VPNs on a + per-project basis. + + + + cloudpipe image + + A pre-made VM image that serves as a cloudpipe + server. Essentially, OpenVPN running on + Linux. + + + + CMDB + + Configuration Management Database. + + + + command filter + + Lists allowed commands within the Compute rootwrap + facility. 
+ + + + community project + + A project that is not officially endorsed by the + OpenStack Foundation. If the project is successful + enough, it might be elevated to an incubated + project and then to a core project, or it might be + merged with the main code trunk. + + + + compression + + The process of reducing the size of files by special encoding; the file + can be decompressed again to its original content. + OpenStack supports compression at the Linux file + system level but does not support compression for + things such as Object Storage objects or Image Service VM + images. + + + + Compute + + The OpenStack core project that provides compute + services. The project name of the Compute Service is nova. + + + + + Compute API + + The nova-api + daemon provides + access to nova services. Can communicate with + other APIs, such as the Amazon EC2 API. + + + + compute controller + + The Compute component that chooses suitable hosts + on which to start VM instances. + + + + compute host + + Physical host dedicated to running compute + nodes. + + + + compute node + + A node that runs the nova-compute daemon and the VM + instances that provide a wide range of services, + such as web services and analytics. + + + + compute service + + Name for the Compute component that + manages VMs. + + + + compute worker + + The Compute component that runs on each compute + node and manages the VM instance life cycle, + including run, reboot, terminate, attach/detach + volumes, and so on. Provided by the + nova-compute + daemon. + + + + concatenated object + + A set of segment objects that Object Storage combines + and sends to the client. + + + + + conductor + + In Compute, conductor is the process that proxies + database requests from the compute process. Using + conductor improves security as compute nodes do not + need direct access to the database. + + + + consistency window + + The amount of time it takes for a new Object Storage + object to become accessible to all clients.
+ + + + console log + + Contains the output from a Linux VM console in + Compute. + + + + container + + Organizes and stores objects in Object Storage. + Similar to the concept of a Linux directory but + cannot be nested. Alternative term for an Image Service + container format. + + + + container auditor + + Checks for missing replicas or incorrect objects + in specified Object Storage containers through queries + to the SQLite back-end database. + + + + container database + + + A SQLite database that stores Object Storage + containers and container metadata. The container + server accesses this database. + + + + + container format + + + A wrapper used by the Image Service that contains a + VM image and its associated metadata, such as + machine state, OS disk size, and so on. + + + + + container server + + An Object Storage server that manages containers. + + + + container service + + The Object Storage component that provides container + services, such as create, delete, list, and so + on. + + + + controller node + + Alternative term for a cloud controller + node. + + + + core API + + Depending on context, the core API is either the + OpenStack API or the main API of a specific core + project, such as Compute, Networking, Image Service, + and so on. + + + + core project + + An official OpenStack project. Currently consists of Compute (nova), Object + Storage (swift), Image Service (glance), Identity (keystone), Dashboard + (horizon), Networking (neutron), and Block Storage (cinder). The Telemetry + module (ceilometer) and Orchestration module (heat) are integrated projects as + of the Havana release. In the Icehouse release, the Database module (trove) + gains integrated project status. + + + + cost + + Under the Compute distributed scheduler this is + calculated by looking at the capabilities of each + host relative to the flavor of the VM instance + being requested. 
+ + + + credentials + + Data that is only known to or accessible by a + user that is used to verify the user is who they + say they are and presented to the server during + authentication. Examples include a password, + secret key, digital certificate, fingerprint, and + so on. + + + + Crowbar + + An open source community project by Dell that + aims to provide all necessary services to quickly + deploy clouds. + + + + current workload + + An element of the Compute capacity cache that is + calculated based on the number of build, snapshot, + migrate, and resize operations currently in + progress on a given host. + + + + customer + + Alternative term for tenant. + + + + customization module + + A user-created Python module that is loaded by + horizon to change the look and feel of the + dashboard. + + + + + + D + + daemon + + A process that runs in the background and waits + for requests. May or may not listen on a TCP or + UDP port. Do not confuse with a worker. + + + + DAC + + Discretionary access control. Governs the + ability of subjects to access objects, while + enabling users to make policy decisions and assign + security attributes. The traditional UNIX system + of users, groups, and read-write-execute + permissions is an example of DAC. + + + + dashboard + + The web-based management interface for + OpenStack. An alternative name for horizon. + + + + data encryption + + Both Image Service and Compute support encrypted virtual + machine (VM) images (but not instances). + In-transit data encryption is supported in + OpenStack using technologies such as HTTPS, SSL, + TLS, and SSH. Object Storage does not support object + encryption at the application level but may support storage + that uses disk encryption. + + + + database ID + + A unique ID given to each replica of an Object Storage + database. + + + + database replicator + + An Object Storage component that copies changes in the + account, container, and object databases to other + nodes. 
+ + + + deallocate + + The process of removing the association between + a floating IP address and a fixed IP address. + Once this association is removed, the floating IP + returns to the address pool. + + + + + Debian + + A Linux distribution that is compatible with + OpenStack. + + + + deduplication + + The process of finding duplicate data at the + disk block, file, and/or object level to minimize + storage use, currently unsupported within + OpenStack. + + + + default panel + + The default panel that is displayed when a user + accesses the horizon dashboard. + + + + default tenant + + New users are assigned to this tenant + if no tenant is specified when a user is + created. + + + + default token + + An Identity Service token that is not associated with a + specific tenant and is exchanged for a scoped + token. + + + + delayed delete + + An option within Image Service so that rather than + immediately delete an image, it is deleted after a + pre-defined number of seconds. + + + + delivery mode + + Setting for the Compute RabbitMQ message delivery + mode, can be set to either transient or + persistent. + + + + deprecated auth + + An option within Compute that enables administrators + to create and manage users through the + nova-manage + command as opposed to using the Identity Service. + + + + developer + + One of the default roles in the Compute RBAC system + and is the default role assigned to a new + user. + + + + device ID + + Maps Object Storage partitions to physical storage + devices. + + + + device weight + + + Distributes partitions proportionately across + Object Storage devices based on the storage + capacity of each device. + + + + + DevStack + + Community project that uses shell scripts to + quickly build complete OpenStack development + environments. + + + + DHCP + + Dynamic Host Configuration Protocol. 
A network + protocol that configures devices that are + connected to a network so they can communicate on + that network by using the Internet Protocol (IP). + The protocol is implemented in a client-server + model where DHCP clients request configuration + data such as an IP address, a default route, and + one or more DNS server addresses from a DHCP + server. + + + + Diablo + + A grouped release of projects related to + OpenStack that came out in the fall of 2011, the + fourth release of OpenStack. It included Compute + (nova 2011.3), Object Storage (swift 1.4.3), and + the Image service (glance). + + + + direct consumer + + An element of the Compute RabbitMQ that comes to + life when an RPC call is executed. It connects to a + direct exchange through a unique exclusive queue, + sends the message, and terminates. + + + + direct exchange + + A routing table that is created within the Compute + RabbitMQ during RPC calls; one is created for each + RPC call that is invoked. + + + + direct publisher + + Element of RabbitMQ that provides a response to + an incoming MQ message. + + + + disassociate + + The process of removing the association between + a floating IP address and a fixed IP address and thus + returning the floating IP address to the address + pool. + + + + disk encryption + + The ability to encrypt data at the file system, + disk partition, or whole disk level. Supported + within Compute VMs. + + + + disk format + + The underlying format that a disk image for a VM + is stored as within the Image Service back-end store. For + example, AMI, ISO, QCOW2, VMDK, and so on. + + + + dispersion + + In Object Storage, tools to test and ensure dispersion of + objects and containers to ensure fault + tolerance. + + + + Django + + A web framework used extensively in + horizon. + + + + DNS + + Domain Name Server. A hierarchical and + distributed naming system for computers, services, + and resources connected to the Internet or a + private network.
Associates human-friendly names + to IP addresses. + + + + DNS record + + A record that specifies information about a + particular domain and belongs to the + domain. + + + + dnsmasq + + Daemon that provides DNS, DHCP, BOOTP, and TFTP + services, used by the Compute VLAN manager and + FlatDHCP manager. + + + + domain + + Separates a web site from other sites. Often, + the domain name has two or more parts that are + separated by dots. For example, yahoo.com, + usa.gov, Harvard.edu, or mail.yahoo.com. + A domain is an entity or container of all + DNS-related information containing one or more + records. + + + + Domain Name Service (DNS) + + In Compute, the support that enables associating + DNS entries with floating IP addresses, nodes, or + cells so host names are consistent across + reboots. + + + + Domain Name System (DNS) + + A system by which Internet domain + name-to-address and address-to-name resolutions + are determined. + DNS helps navigate the Internet by translating + the IP address into an address that is easier to + remember. For example, translating 111.111.111.1 + into www.yahoo.com. + All domains and their components, such as mail + servers, utilize DNS to resolve to the appropriate + locations. DNS servers are usually set up in a + master-slave relationship such that failure of the + master invokes the slave. DNS servers might also + be clustered or replicated such that changes made + to one DNS server are automatically propagated to + other active servers. + + + + download + + The transfer of data, usually in the form of + files, from one computer to another. + + + + DRTM + + Dynamic root of trust measurement. + + + + durable exchange + + The Compute RabbitMQ message exchange that remains + active when the server restarts. + + + + durable queue + + A Compute RabbitMQ message queue that remains + active when the server restarts.
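The durable exchange and durable queue entries above describe messaging state that survives a broker restart. As a rough stdlib-only sketch of that idea (not the actual RabbitMQ API), a file-backed queue recovers its messages when a new instance is created after a simulated restart:

```python
import json
import os
import tempfile

class DurableQueue:
    """Toy queue that persists messages to disk, loosely analogous to a
    durable RabbitMQ queue. Hypothetical sketch, not the real broker API."""

    def __init__(self, path):
        self.path = path
        self.messages = []
        if os.path.exists(path):          # recover state after a "restart"
            with open(path) as f:
                self.messages = json.load(f)

    def publish(self, msg):
        self.messages.append(msg)
        with open(self.path, "w") as f:   # write-through so nothing is lost
            json.dump(self.messages, f)

    def consume(self):
        msg = self.messages.pop(0)
        with open(self.path, "w") as f:
            json.dump(self.messages, f)
        return msg

# Simulate a broker restart: a new instance reading the same file
# sees the messages that were published before the "crash".
path = os.path.join(tempfile.mkdtemp(), "queue.json")
q1 = DurableQueue(path)
q1.publish("run_instance")
q1.publish("terminate_instance")

q2 = DurableQueue(path)                   # the queue after the restart
recovered = [q2.consume(), q2.consume()]
```

A non-durable queue, by contrast, would keep `self.messages` only in memory and lose it across the restart.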
+ + + + Dynamic Host Configuration Protocol + (DHCP) + + A method to automatically configure networking + for a host at boot time. Provided by both Networking + and Compute. + + + + Dynamic HyperText Markup Language + (DHTML) + + Pages that use HTML, + JavaScript, and CCS to enable users to interact + with a web page or show simple animation. + + + + + + E + + EBS boot volume + + An Amazon EBS storage volume that contains a + bootable VM image, currently unsupported in + OpenStack. + + + + ebtables + + Used in Compute along with arptables, iptables, and + ip6tables to create firewalls and to ensure + isolation of network communications. + + + + EC2 + + The Amazon commercial compute product, similar + to Compute. + + + + EC2 access key + + Used along with an EC2 secret key to access the + Compute EC2 API. + + + + EC2 API + + OpenStack supports accessing the Amazon EC2 API + through Compute. + + + + EC2 Compatibility API + + A Compute component that enables OpenStack to + communicate with Amazon EC2. + + + + EC2 secret key + + Used along with an EC2 access key when + communicating with the Compute EC2 API, is used to + digitally sign each request. + + + + Elastic Block Storage (EBS) + + The Amazon commercial block storage + product. + + + + encryption + + OpenStack supports encryption technologies such + as HTTPS, SSH, SSL, TLS, digital certificates, and + data encryption. + + + + endpoint + + See API endpoint. + + + + endpoint registry + + Alternative term for an Identity Service catalog. + + + + endpoint template + + A list of URL and port number endpoints that + indicate where a service, such as object storage, + compute, identity, and so on, can be + accessed. + + + + entity + + Any piece of hardware or software that wants to + connect to the network services provided by + Networking, the Network Connectivity service. An + entity can make use of Networking by implementing a + VIF. 
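The EC2 access key and EC2 secret key entries above note that the secret key digitally signs each request. A simplified, hypothetical sketch of that scheme (the real EC2 signature also covers the HTTP method, host, and path, and uses a defined canonicalization):

```python
import hashlib
import hmac

def sign_request(secret_key: str, params: dict) -> str:
    """Sketch of EC2-style request signing: the sorted query string is
    signed with HMAC-SHA256 so the server, which also holds the secret
    key, can recompute and verify the signature. Simplified illustration."""
    canonical = "&".join(f"{k}={params[k]}" for k in sorted(params))
    digest = hmac.new(secret_key.encode(), canonical.encode(),
                      hashlib.sha256).digest()
    return digest.hex()

# Hypothetical parameters; the access key identifies the caller,
# the secret key never travels over the wire.
params = {"Action": "RunInstances", "AWSAccessKeyId": "AKIDEXAMPLE"}
signature = sign_request("my-secret-key", params)
```

The server recomputes the same HMAC from its copy of the secret key and rejects the request if the signatures differ.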
+ + + + ephemeral image + + A VM image that does not save changes made to + its volumes and reverts them to their original + state after the instance is terminated. + + + + ephemeral volume + + Volume that does not save the changes made to it + and reverts to its original state when the current + user relinquishes control. + + + + Essex + + A grouped release of projects related to + OpenStack that came out in April 2012, the fifth + release of OpenStack. It included Compute (nova + 2012.1), Object Storage (swift 1.4.8), Image + (glance), Identity (keystone), and Dashboard + (horizon). + + + + ESX + + An OpenStack-supported hypervisor. + + + + ESXi + + An OpenStack-supported hypervisor. + + + + ebtables + + Filtering tool for a Linux bridging firewall, enabling + filtering of network traffic passing through a Linux bridge. + Used to restrict communications between hosts and/or nodes + in OpenStack Compute along with iptables, arptables, and + ip6tables. + + + + ETag + + MD5 hash of an object within Object Storage, used to + ensure data integrity. + + + + euca2ools + + A collection of command line tools for + administering VMs, most are compatible with + OpenStack. + + + + Eucalyptus Kernel Image (EKI) + + Used along with an ERI to create an EMI. + + + + Eucalyptus Machine Image (EMI) + + VM image container format supported by + Image Service. + + + + Eucalyptus Ramdisk Image (ERI) + + Used along with an EKI to create an EMI. + + + + evacuate + + The process of migrating one or all virtual + machine (VM) instances from one host to another, + compatible with both shared storage live migration + and block migration. + + + + exchange + + Alternative term for a RabbitMQ message + exchange. + + + + exchange type + + A routing algorithm in the Compute RabbitMQ. + + + + exclusive queue + + Connected to by a direct consumer in RabbitMQ / + Compute, the message can only be consumed by the + current connection. 
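The exchange, exchange type, and exclusive queue entries above can be made concrete with a minimal routing model (a conceptual sketch, not the RabbitMQ wire protocol): a direct exchange delivers only on an exact routing-key match, while a fan-out exchange copies every message to all bound queues.

```python
class Exchange:
    """Minimal model of AMQP-style routing. A 'direct' exchange matches
    the routing key exactly; a 'fanout' exchange ignores keys and copies
    each message to every bound queue. Illustrative sketch only."""

    def __init__(self, kind):
        self.kind = kind
        self.bindings = []            # (routing_key, queue) pairs

    def bind(self, queue, routing_key=None):
        self.bindings.append((routing_key, queue))

    def publish(self, message, routing_key=None):
        for key, queue in self.bindings:
            if self.kind == "fanout" or key == routing_key:
                queue.append(message)

compute, network = [], []

direct = Exchange("direct")
direct.bind(compute, "compute.host1")
direct.bind(network, "network.host1")
direct.publish("run_instance", routing_key="compute.host1")

fanout = Exchange("fanout")
fanout.bind(compute)
fanout.bind(network)
fanout.publish("capability_report")   # delivered to both queues
```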
+ + + + extended attributes (xattrs) + + File system option that enables storage of + additional information beyond owner, group, + permissions, modification time, and so on. The + underlying Object Storage file system must support extended + attributes. + + + + extension + + Alternative term for a Compute API extension or + plug-in. In the context of Identity Service this is a call + that is specific to the implementation, such as + adding support for OpenID. + + + + extra specs + + Specifies additional requirements + when Compute determines where to start a new + instance. Examples include a minimum amount of + network bandwidth or a GPU. + + + + + + F + + FakeLDAP + + An easy method to create a local LDAP directory + for testing Identity Service and Compute. Requires + Redis. + + + + fan-out exchange + + Within RabbitMQ and Compute it is the messaging + interface that is used by the scheduler service to + receive capability messages from the compute, + volume, and network nodes. + + + + Fedora + + A Linux distribution compatible with + OpenStack. + + + + Fibre Channel + + Storage protocol similar in concept to TCP/IP, + encapsulates SCSI commands and data. + + + + Fibre Channel over Ethernet (FCoE) + + The fibre channel protocol tunneled within + Ethernet. + + + + fill-first scheduler + + The Compute scheduling method that attempts to fill + a host with VMs rather than starting new VMs on a + variety of hosts. + + + + filter + + The step in the Compute scheduling process when + hosts that cannot run VMs are eliminated and not + chosen. + + + + firewall + + Used to restrict communications between hosts + and/or nodes, implemented in Compute using iptables, + arptables, ip6tables, and ebtables. + + + + fixed IP address + + An IP address that is associated with the same + instance each time that instance boots, generally + not accessible to end users or the public + internet, used for management of the + instance.
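The filter and fill-first scheduler entries above describe a two-step placement process. A simplified sketch (the real Compute scheduler uses configurable filter and weigher classes; the host data here is hypothetical):

```python
def filter_hosts(hosts, vm_ram_mb):
    """Filter step: eliminate hosts that cannot run the VM."""
    return [h for h in hosts if h["free_ram_mb"] >= vm_ram_mb]

def fill_first(hosts, vm_ram_mb):
    """Fill-first step: among the remaining hosts, prefer the most-loaded
    one that still fits, packing VMs onto as few hosts as possible."""
    candidates = filter_hosts(hosts, vm_ram_mb)
    return min(candidates, key=lambda h: h["free_ram_mb"])["name"]

hosts = [
    {"name": "node1", "free_ram_mb": 512},    # too small, filtered out
    {"name": "node2", "free_ram_mb": 4096},
    {"name": "node3", "free_ram_mb": 2048},   # fits and is fullest
]
chosen = fill_first(hosts, vm_ram_mb=1024)
```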
+ + + + Flat Manager + + The Compute component that gives IP addresses to + authorized nodes and assumes DHCP, DNS, and + routing configuration and services are provided by + something else. + + + + flat mode injection + + A Compute networking method where the OS network + configuration information is injected into the VM + image before the instance starts. + + + + flat network + + The Network Controller provides virtual networks + to enable compute servers to interact with each + other and with the public network. All machines + must have a public and private network interface. + A flat network is a private network interface, + which is controlled by the flat_interface option + with flat managers. + + + + FlatDHCP Manager + + The Compute component that provides dnsmasq (DHCP, + DNS, BOOTP, TFTP) and radvd (routing) + services. + + + + flavor + + Alternative term for a VM instance type. + + + + flavor ID + + UUID for each Compute or Image Service VM flavor or + instance type. + + + + floating IP address + + An IP address that a project can associate + with a VM so the instance has the same public IP + address each time that it boots. You create a pool + of floating IP addresses and assign them to + instances as they are launched to maintain a + consistent IP address for maintaining DNS + assignment. + + + + Folsom + + A grouped release of projects related to + OpenStack that came out in the fall of 2012, the + sixth release of OpenStack. It includes Compute + (nova), Object Storage (swift), Identity + (keystone), Networking (neutron), Image service + (glance) and Volumes or Block Storage + (cinder). + + + + FormPost + + Object Storage middleware that uploads + (posts) an image through a form on a web + page. + + + + front-end + + The point where a user interacts with a service, + can be an API endpoint, the horizon dashboard, or + a command line tool. + + + + + + G + + gateway + + Hardware or software that translates between two + different protocols. 
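The floating IP address entry above, together with the earlier deallocate and disassociate entries, describes a lifecycle: an address is drawn from a pool, associated with an instance's fixed IP, and returned to the pool when the association is removed. A toy model of that lifecycle (a hypothetical sketch, not the nova-network implementation):

```python
class FloatingIPPool:
    """Toy floating IP pool: associate() draws an address from the pool
    and maps it to a fixed IP; disassociate() removes the mapping and
    returns the address to the pool, as the glossary entries describe."""

    def __init__(self, addresses):
        self.available = list(addresses)
        self.associations = {}        # floating IP -> fixed IP

    def associate(self, fixed_ip):
        floating_ip = self.available.pop(0)
        self.associations[floating_ip] = fixed_ip
        return floating_ip

    def disassociate(self, floating_ip):
        del self.associations[floating_ip]
        self.available.append(floating_ip)   # back to the address pool

pool = FloatingIPPool(["203.0.113.10", "203.0.113.11"])
public_ip = pool.associate("10.0.0.5")       # instance's fixed IP
pool.disassociate(public_ip)
```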
+ + + + glance + + A core project that provides the OpenStack Image + Service. + + + + glance API server + + Processes client requests for VMs, updates + Image Service metadata on the registry server, and + communicates with the store adapter to upload VM + images from the back-end store. + + + + glance registry + + Alternative term for the Image Service image + registry. + + + + global endpoint template + + The Identity Service endpoint template that contains + services available to all tenants. + + + + GlusterFS + + A file system designed to aggregate NAS hosts, + compatible with OpenStack. + + + + golden image + + A method of operating system installation where + a finalized disk image is created and then used by + all nodes without modification. + + + + Graphic Interchange Format (GIF) + + A type of image file that is commonly used for + animated images on web pages. + + + + Graphics Processing Unit (GPU) + + Choosing a host based on the existence of a GPU + is currently unsupported in OpenStack. + + + + Green Threads + + The cooperative threading model used by Python, + reduces race conditions, and only context switches + when specific library calls are made. Each + OpenStack service is its own thread. + + + + Grizzly + + Project name for the seventh release of + OpenStack. + + + + guest OS + + An operating system instance running under the + control of a hypervisor. + + + + + + H + + Hadoop + + Apache Hadoop is an open-source software + framework that supports data-intensive distributed + applications. + + + + handover + + An object state in Object Storage where a new replica of + the object is automatically created due to a drive + failure. + + + + hard reboot + + A type of reboot where a physical or virtual + power button is pressed as opposed to a graceful, + proper shutdown of the operating system. + + + + Havana + + Project name for the eighth release of + OpenStack. 
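The Green Threads entry above can be illustrated with a generator-based cooperative scheduler (a stdlib-only sketch; OpenStack services actually build on libraries such as eventlet). Context switches happen only at explicit yield points, which is why this model reduces pre-emptive race conditions:

```python
def worker(name, steps, log):
    """A task that yields control voluntarily, as a green thread does."""
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield                          # explicit context-switch point

def run(tasks):
    """Round-robin cooperative scheduler: switches between tasks only
    when one yields, never in the middle of a step."""
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)
            tasks.append(task)         # requeue until the task finishes
        except StopIteration:
            pass

log = []
run([worker("api", 2, log), worker("scheduler", 2, log)])
```

The log interleaves the two workers step by step, exactly at their yield points.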
+ + + + heat + + An integrated project that aims to orchestrate + multiple cloud applications for OpenStack. + + + + horizon + + OpenStack project that provides a dashboard, + which is a web interface. + + + + horizon plug-in + + A plug-in for the OpenStack dashboard + (horizon). + + + + host + + A physical computer, not a VM instance + (node). + + + + host aggregate + + A method to further subdivide availability zones + into hypervisor pools, a collection of common + hosts. + + + + Host Bus Adapter (HBA) + + Device plugged into a PCI slot such as a fibre + channel or network card. + + + + HTTP + + Hypertext Transfer Protocol. HTTP is an + application protocol for distributed, + collaborative, hypermedia information systems. It + is the foundation of data communication for the + World Wide Web. Hypertext is structured text that + uses logical links (hyper links) between nodes + containing text. HTTP is the protocol to exchange + or transfer hypertext. + + + + HTTPS + + Hypertext Transfer Protocol Secure (HTTPS) is a + communications protocol for secure communication + over a computer network, with especially wide + deployment on the Internet. Technically, it is not + a protocol in and of itself; rather, it is the + result of simply layering the Hypertext Transfer + Protocol (HTTP) on top of the SSL/TLS protocol, + thus adding the security capabilities of SSL/TLS + to standard HTTP communications. + + + + + Hyper-V + + One of the hypervisors supported by + OpenStack. + + + + hyper link + + Any kind of text that contains a link to some + other site, commonly found in documents where + clicking on a word or words opens up a different + web site. + + + + HyperText Transfer Protocol (HTTP) + + The protocol that tells browsers where to go to + find information. + + + + Hypertext Transfer Protocol Secure + (HTTPS) + + Encrypted HTTP communications using SSL or TLS, + most OpenStack API endpoints and many + inter-component communications support HTTPS + communication. 
+ + + + hypervisor + + Software that arbitrates and controls VM access + to the actual underlying hardware. + + + + hypervisor pool + + A collection of hypervisors grouped together + through host aggregates. + + + + + + I + + IaaS + + Infrastructure as a Service. IaaS is a provision + model in which an organization outsources the + equipment used to support operations, including + storage, hardware, servers and networking + components. The service provider owns the + equipment and is responsible for housing, running + and maintaining it. The client typically pays on a + per-use basis. IaaS is a model for providing cloud + services. + + + + Icehouse + + Project name for the ninth release of OpenStack. + + + + ID number + + Unique numeric ID associated with each user in + Identity Service, conceptually similar to a Linux or LDAP + UID. + + + + Identity API + + Alternative term for the Identity Service + API. + + + + Identity back-end + + The source used by Identity Service to retrieve user + information an OpenLDAP server for example. + + + + Identity Service + + The OpenStack core project that provides a + central directory of users mapped to the OpenStack + services they can access. It also registers endpoints + for OpenStack services. It acts as a common + authentication system. The project name of the + Identity Service is keystone. + + + + Identity Service API + + The API used to access the OpenStack Identity + Service provided through keystone. + + + + IDS + + Intrusion Detection System + + + + + image + + A collection of files for a specific operating + system (OS) that you use to create or rebuild a + server. OpenStack provides pre-built images. You + can also create custom images, or snapshots, from + servers that you have launched. Custom images can + be used for data backups or as "gold" images for + additional servers. + + + + Image API + + The Image Service API endpoint for management of VM + images. 
+ + + + image cache + + Used by Image Service to obtain images on the local host + rather than re-downloading them from + the image server each time one is + requested. + + + + image ID + + Combination of URI and UUID used to access + Image Service VM images through the image API. + + + + image membership + + A list of tenants that can access a given VM + image within Image Service. + + + + image owner + + The tenant who owns an Image Service virtual + machine image. + + + + image registry + + A list of VM images that are available through + Image Service. + + + + Image Service + + An OpenStack core project that provides + discovery, registration, and delivery services for disk + and server images. The project name of the Image + Service is glance. + + + + Image Service API + + Alternative name for the glance image + API. + + + + image status + + The current status of a VM image in Image Service, not + to be confused with the status of a running + instance. + + + + image store + + The back-end store used by Image Service to store VM + images, options include Object Storage, local file system, + S3, or HTTP. + + + + image UUID + + UUID used by Image Service to uniquely identify each VM + image. + + + + incubated project + + A community project may be elevated to this + status and is then promoted to a core + project. + + + + ingress filtering + + The process of filtering incoming network + traffic. Supported by Compute. + + + + injection + + The process of putting a file into a virtual + machine image before the instance is + started. + + + + instance + + A running VM, or a VM in a known state such as + suspended that can be used like a hardware server. + + + + + instance ID + + Alternative term for instance UUID. + + + + instance state + + The current state of a guest VM image. + + + + instance type + + Describes the parameters of the various virtual + machine images that are available to users, + includes parameters such as CPU, storage, and + memory. 
Alternative term for flavor. + + + + instance type ID + + Alternative term for a flavor ID. + + + + instance UUID + + Unique ID assigned to each guest VM + instance. + + + + interface ID + + Unique ID for a Networking VIF or vNIC in the form + of a UUID. + + + + Internet Service Provider (ISP) + + Any business that provides Internet access to + individuals or businesses. + + + + ironic + + OpenStack project that provisions bare metal, as + opposed to virtual, machines. + + + + IP address + + Number that is unique to every computer system + on the Internet. Two versions of the Internet + Protocol (IP) are in use for addresses: IPv4 and + IPv6. + + + + IP Address Management (IPAM) + + The process of automating IP address allocation, + deallocation, and management. Currently provided + by Compute, melange, and Networking. + + + + IPL + + Initial Program Loader. + + + + IPMI + + Intelligent Platform Management Interface. IPMI + is a standardized computer system interface used + by system administrators for out-of-band + management of computer systems and monitoring of + their operation. In layman's terms, it is a way to + manage a computer using a direct network + connection, whether it is turned on or not; + connecting to the hardware rather than an + operating system or login shell. + + + + ip6tables + + Tool used to set up, maintain, and inspect the tables of + IPv6 packet filter rules in the Linux kernel. In OpenStack + Compute, ip6tables is used along with arptables, ebtables, + and iptables to create firewalls for both nodes and + VMs. + + + + iptables + + Used along with arptables and ebtables, iptables + create firewalls in Compute. iptables are the tables + provided by the Linux kernel firewall (implemented + as different Netfilter modules) and the chains and + rules it stores. Different kernel modules and + programs are currently used for different + protocols; iptables applies to IPv4, ip6tables to + IPv6, arptables to ARP, and ebtables to Ethernet + frames. 
Requires root privilege to + manipulate. + + + + iSCSI + + The SCSI disk protocol tunneled within Ethernet, + supported by Compute, Object Storage, and Image Service. + + + + + ISO9660 + + One of the VM image disk formats supported by + Image Service. + + + + itsec + + A default role in the Compute RBAC system that + can quarantine an instance in any + project. + + + + + + J + + Java + + A programming language that is used to create + systems that involve more than one computer by way + of a network. + + + + JavaScript + + A scripting language that is used to build web + pages. + + + + JavaScript Object Notation (JSON) + + One of the supported response formats in + OpenStack. + + + + Jenkins + + Tool used to run jobs automatically for + OpenStack development. + + + + Juno + + Project name for the tenth release of OpenStack. + + + + + + K + + kernel-based VM (KVM) + + An OpenStack-supported hypervisor. + + + + keystone + + The project that provides OpenStack Identity + services. + + + + Kickstart + + A tool to automate system configuration and + installation on Red Hat, Fedora, and CentOS based + Linux distributions. + + + + + + L + + large object + + An object within Object Storage that is larger than 5 + GBs. + + + + Launchpad + + The collaboration site for OpenStack. + + + + Layer-2 network + + Term used for OSI network architecture for the + data link layer. + + + + libvirt + + Virtualization API library used by OpenStack to + interact with many of its supported + hypervisors. + + + + Linux bridge + + Software that enables multiple VMs to share a + single physical NIC within Compute. + + + + Linux Bridge neutron plug-in + + Enables a Linux bridge to understand + a Networking port, interface attachment, and other + abstractions. + + + + Linux containers (LXC) + + An OpenStack-supported hypervisor.
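The ETag entry above (MD5 hash of an object, used to ensure data integrity) can be shown directly with the standard library:

```python
import hashlib

def compute_etag(data: bytes) -> str:
    """MD5 hex digest of an object's content, as used for the
    Object Storage ETag described in the glossary."""
    return hashlib.md5(data).hexdigest()

stored = b"object contents"
etag_at_upload = compute_etag(stored)

# On download, recomputing the ETag and comparing it with the stored
# value detects corruption of the object's bytes.
assert compute_etag(stored) == etag_at_upload
assert compute_etag(b"corrupted bits") != etag_at_upload
```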
+ + + + live migration + + The ability within Compute to move running virtual + machine instances from one host to another with + only a small service interruption during + switch-over. + + + + load balancer + + A load balancer is a logical device which + belongs to a cloud account. It is used to + distribute workloads between multiple back-end + systems or services, based on the criteria defined + as part of its configuration. + + + + load balancing + + The process of spreading client requests between + two or more nodes to improve performance and + availability. + + + + + + M + + management API + + Alternative term for an admin API. + + + + management network + + A network segment used for administration, not + accessible to the public internet. + + + + manager + + Logical groupings of related code such as the + Block Storage volume manager or network manager. + + + + manifest + + Used to track segments of a large object within + Object Storage. + + + + manifest object + + A special Object Storage object that contains the + manifest for a large object. + + + + marconi + + OpenStack project that provides a queue service + to applications. + + + + melange + + Project name for OpenStack Network Information + Service. To be merged with Networking. + + + + membership + + The association between an Image Service VM image and a + tenant. Enables images to be shared with specified + tenants. + + + + membership list + + A list of tenants that can access a given VM + image within Image Service. + + + + memcached + + A distributed memory object caching system that + is used by Object Storage for caching. + + + + memory overcommit + + The ability to start new VM instances based on + the actual memory usage of a host, as opposed to + basing the decision on the amount of RAM each + running instance thinks it has available. Also + known as RAM overcommit. + + + + message broker + + The software package used to provide AMQP + messaging capabilities within Compute. 
Default + package is RabbitMQ. + + + + message bus + + The main virtual communication line used by all + AMQP messages for inter-cloud communications + within Compute. + + + + message queue + + Passes requests from clients to the appropriate + workers and returns the output to the client after + the job completes. + + + + Meta-Data Server (MDS) + + Stores CephFS metadata. + + + + migration + + The process of moving a VM instance from one + host to another. + + + + multinic + + Facility in Compute that allows each virtual + machine instance to have more than one VIF + connected to it. + + + + Modular Layer 2 (ML2) neutron plug-in + + Can concurrently use multiple + layer 2 networking technologies, such as 802.1Q and + VXLAN, in Networking. + + + + Monitor (Mon) + + A Ceph component that communicates with external + clients, checks data state and consistency, and + performs quorum functions. + + + + multi-factor authentication + + Authentication method that uses two or more + credentials, such as a password and a private key. + Currently not supported in Identity Service. + + + + MultiNic + + Facility in Compute that enables a virtual + machine instance to have more than one VIF + connected to it. + + + + + + N + + Nebula + + Released as open source by NASA in 2010 and is + the basis for Compute. + + + + netadmin + + One of the default roles in the Compute RBAC + system. Enables the user to allocate publicly + accessible IP addresses to instances and change + firewall rules. + + + + NetApp volume driver + + Enables Compute to communicate with NetApp storage + devices through the NetApp OnCommand Provisioning + Manager. + + + + network + + A virtual network that provides connectivity + between entities. For example, a collection of + virtual ports that share network connectivity. In + Networking terminology, a network is always a Layer-2 + network. + + + + Network Address Translation (NAT) + + The process of modifying IP address information + while in-transit. 
Supported by Compute and + Networking. + + + + network controller + + A Compute daemon that orchestrates the network + configuration of nodes, including IP + addresses, VLANs, and bridging, and manages routing + for both public and private networks. + + + + Network File System (NFS) + + A method for making file systems available over + the network. Supported by OpenStack. + + + + network ID + + Unique ID assigned to each network segment + within Networking. Same as network UUID. + + + + network manager + + The Compute component that manages various network + components, such as firewall rules, IP address + allocation, and so on. + + + + network node + + Any Compute node that runs the network worker + daemon. + + + + network segment + + Represents a virtual, isolated OSI layer 2 + subnet in Networking. + + + + Network Time Protocol (NTP) + + A method of keeping a clock for a host or node + correct through communications with a trusted, + accurate time source. + + + + network UUID + + Unique ID for a Networking network segment. + + + + network worker + + The nova-network worker daemon, provides + services such as giving an IP address to a booting + nova instance. + + + + Networking + + A core OpenStack project that provides a network + connectivity abstraction layer to OpenStack + Compute. The project name of Networking is + neutron. + + + + Networking API + + API used to access OpenStack Networking. Provides an + extensible architecture to enable custom plug-in + creation. + + + + neutron + + A core OpenStack project that provides a network + connectivity abstraction layer to OpenStack + Compute. + + + + neutron API + + An alternative name for Networking API. + + + + neutron manager + + Enables Compute and Networking integration, which + enables Networking to perform network management for + guest VMs. + + + + neutron plug-in + + Interface within Networking that enables + organizations to create custom plug-ins for + advanced features such as QoS, ACLs, or + IDS.
+ + + + Nexenta volume driver + + Provides support for NexentaStor devices in + Compute. + + + + No ACK + + Disables server-side message acknowledgment in + the Compute RabbitMQ. Increases performance but + decreases reliability. + + + + node + + A VM instance that runs on a host. + + + + non-durable exchange + + Message exchange that is cleared when the + service restarts. Its data is not written to + persistent storage. + + + + non-durable queue + + Message queue that is cleared when the service + restarts. Its data is not written to persistent + storage. + + + + non-persistent volume + + Alternative term for an ephemeral volume. + + + + nova + + OpenStack project that provides compute + services. + + + + Nova API + + Alternative term for the Compute + API. + + + + nova-network + + A Compute component that manages IP address + allocation, firewalls, and other network-related + tasks. This is the legacy networking option and an + alternative to Networking. + + + + + + + O + + object + + A BLOB of data held by Object Storage, can be in any + format. + + + + object auditor + + Opens all objects for an object server and + verifies the MD5 hash, size, and metadata for each + object. + + + + object expiration + + A configurable option within Object Storage to + automatically delete objects after a specified + amount of time has passed or a certain date is + reached. + + + + object hash + + Unique ID for an Object Storage object. + + + + object path hash + + Used by Object Storage to determine the location of an + object in the ring. Maps objects to + partitions. + + + + object replicator + + An Object Storage component that copies an object to + remote partitions for fault tolerance. + + + + object server + + An Object Storage component that is responsible for + managing objects. + + + + Object Storage + + The OpenStack core project that + provides eventually consistent and redundant + storage and retrieval of fixed digital + content.
The project name of OpenStack + Object Storage is swift. + + + + + Object Storage API + + API used to access OpenStack Object Storage. + + + + Object Storage Device (OSD) + + The Ceph storage daemon. + + + + object versioning + + Allows a user to set a flag on an Object Storage container + so all objects within the container are + versioned. + + + + Oldie + + Term for an Object Storage process that runs + for a long time. Can indicate a hung + process. + + + + Open Cloud Computing Interface + (OCCI) + + A standardized interface for managing compute, + data, and network resources, currently unsupported + in OpenStack. + + + + Open Virtualization Format (OVF) + + Standard for packaging VM images. Supported in + OpenStack. + + + + Open vSwitch neutron plug-in + + Provides support for + Open vSwitch in Networking. + + + + OpenLDAP + + An open source LDAP server. Supported by both + Compute and Identity Service. + + + + OpenStack + + + OpenStack is a cloud operating system that controls + large pools of compute, storage, and networking + resources throughout a datacenter, all managed through + a dashboard that gives administrators control while + empowering their users to provision resources through + a web interface. OpenStack is an Open Source project licensed + under the Apache License 2.0. + + + + + openSUSE + + A Linux distribution that is compatible with + OpenStack. + + + + operator + + The person responsible for planning and + maintaining an OpenStack installation. + + + + Orchestration + + An integrated project that + orchestrates multiple cloud applications for + OpenStack. The project name of Orchestration is + heat. + + + + orphan + + In the context of Object Storage this is a process that + is not terminated after an upgrade, restart, or + reload of the service. + + + + + + P + + parent cell + + If a requested resource, such as CPU time, disk + storage, or memory, is not available in the parent + cell, the request is forwarded to associated child + cells. 
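The parent cell entry above describes forwarding a request to child cells when the parent lacks capacity. A hypothetical sketch of that fallback (the cell and host structures here are invented for illustration, not the Compute cells API):

```python
def schedule_in_cell(cell, ram_mb):
    """Try to place a request on a host in this cell; if no host has
    the capacity, forward the request to child cells, as the
    'parent cell' entry describes. Conceptual sketch only."""
    for host in cell.get("hosts", []):
        if host["free_ram_mb"] >= ram_mb:
            return host["name"]
    for child in cell.get("children", []):
        placed = schedule_in_cell(child, ram_mb)
        if placed:
            return placed
    return None

top = {
    "hosts": [{"name": "api-cell-host", "free_ram_mb": 512}],
    "children": [
        {"hosts": [{"name": "child-host1", "free_ram_mb": 8192}]},
    ],
}
placed = schedule_in_cell(top, ram_mb=2048)   # parent is full, child is not
```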
+ + + + partition + + A unit of storage within Object Storage used to store + objects, exists on top of devices, replicated for + fault tolerance. + + + + partition index + + Contains the locations of all Object Storage partitions + within the ring. + + + + partition shift value + + Used by Object Storage to determine which partition data + should reside on. + + + + pause + + A VM state where no changes occur (no changes in + memory, network communications stop, etc), the VM + is frozen but not shut down. + + + + PCI passthrough + + Gives guest VMs exclusive access to a + PCI device. Currently supported in OpenStack Havana + and later releases. + + + + persistent message + + A message that is stored both in memory and on + disk, the message is not lost after a failure or + restart. + + + + persistent volume + + Changes to these types of disk volumes are + saved. + + + + personality file + + A file used to customize a Compute instance, can be + used to inject SSH keys or a specific network + configuration. + + + + plug-in + + Software component providing the actual + implementation for Networking APIs, or for Compute + APIs, depending on the context. + + + + policy service + + Component of Identity Service that provides a rule + management interface and a rule based + authorization engine. + + + + port + + A virtual network port within Networking, VIFs / + vNICs are connected to a port. + + + + port UUID + + Unique ID for a Networking port. + + + + preseed + + A tool to automate system configuration and + installation on Debian-based Linux + distributions. + + + + private image + + An Image Service VM image that is only available to + specified tenants. + + + + private IP address + + An IP address used for management and + administration, not available to the public + internet. + + + + private network + + The Network Controller provides virtual networks + to enable compute servers to interact with each + other and with the public network. 
All machines + must have a public and private network interface. + A private network interface can be a flat or VLAN + network interface. A flat network interface is + controlled by the flat_interface with flat + managers. A VLAN network interface is controlled + by the vlan_interface option with VLAN managers. + + + + + project + + A logical grouping of users within Compute, used to + define quotas and access to VM images. + + + + project ID + + User defined alpha-numeric string in Compute, the + name of a project. + + + + project VPN + + Alternative term for a cloudpipe. + + + + provider + + An administrator who has access to all hosts and + instances. + + + + proxy node + + A node that provides the Object Storage proxy + service. + + + + proxy server + + Users of Object Storage interact with the service through + the proxy server which in-turn looks up the + location of the requested data within the ring and + returns the results to the user. + + + + public API + + An API endpoint used for both service to service + communication and end user interactions. + + + + public image + + An Image Service VM image that is available to all + tenants. + + + + public IP address + + An IP address that is accessible to + end-users. + + + + public network + + The Network Controller provides virtual networks + to enable compute servers to interact with each + other and with the public network. All machines + must have a public and private network interface. + The public network interface is controlled by the + public_interface option. + + + + Puppet + + An operating system configuration management + tool supported by OpenStack. + + + + Python + + Programming language used extensively in + OpenStack. + + + + + + Q + + QEMU Copy On Write 2 (QCOW2) + + One of the VM image disk formats supported by + Image Service. + + + + Qpid + + Message queue software supported by OpenStack, + an alternative to RabbitMQ. 
+ + + + quarantine + + If Object Storage finds objects, containers, or accounts + that are corrupt they are placed in this state, + are not replicated, cannot be read by clients, and + a correct copy is re-replicated. + + + + Quick EMUlator (QEMU) + + QEMU is a generic and open source machine + emulator and virtualizer. + One of the hypervisors supported by OpenStack, + generally used for development purposes. + + + + quota + + In Compute and Block Storage, the ability to set + resource limits on a per-project basis. + + + + + + + R + + RabbitMQ + + The default message queue software used by + OpenStack. + + + + Rackspace Cloud Files + + Released as open source by Rackspace in 2010, + the basis for Object Storage. + + + + RADOS Block Device (RBD) + + Ceph component that enables a Linux block + device to be striped over multiple distributed + data stores. + + + + radvd + + The router advertisement daemon, used by the + Compute VLAN manager and FlatDHCP manager to provide + routing services for VM instances. + + + + RAM filter + + The Compute setting that enables or disables RAM + overcommitment. + + + + RAM overcommit + + The ability to start new VM instances based on + the actual memory usage of a host, as opposed to + basing the decision on the amount of RAM each + running instance thinks it has available. Also + known as memory overcommit. + + + + rate limit + + Configurable option within Object Storage to limit + database writes on a per-account and/or + per-container basis. + + + + raw + + One of the VM image disk formats supported by + Image Service, an unstructured disk image. + + + + rebalance + + The process of distributing Object Storage partitions + across all drives in the ring, used during initial + ring creation and after ring + reconfiguration. + + + + reboot + + Either a soft or hard reboot of a server. With a + soft reboot, the operating system is signaled to + restart, which enables a graceful shutdown of + all processes. 
A hard reboot is the equivalent of + power cycling the server. The virtualization + platform should ensure that the reboot action has + completed successfully even in cases in which the + underlying domain/vm is paused or halted/stopped. + + + + + rebuild + + Removes all data on the server and replaces it + with the specified image. Server ID and IP + addresses remain the same. + + + + Recon + + An Object Storage component that collects metrics. + + + + record + + Belongs to a particular domain and is used to + specify information about the domain. There are + several types of DNS records. Each record type + contains particular information used to describe + the purpose of that record. Examples include mail + exchange (MX) records, which specify the mail + server for a particular domain, and name server + (NS) records, which specify the authoritative name + servers for a domain. + + + + record ID + + A number within a database that is incremented + each time a change is made. Used by Object Storage when + replicating. + + + + Red Dwarf Lite + + Community project that aims to provide database + as a service. + + + + Red Hat Enterprise Linux (RHEL) + + A Linux distribution that is compatible with + OpenStack. + + + + reference architecture + + A recommended architecture for an OpenStack + cloud. + + + + region + + A Discrete OpenStack environment with dedicated API + endpoints that typically shares only the Identity + Service (keystone) with other regions. + + + + registry + + Alternative term for the Image Service + registry. + + + + registry server + + An Image Service that provides VM image metadata + information to clients. + + + + Reliable, Autonomic Distributed Object Store + (RADOS) + + A collection of components that provides object + storage within Ceph. Similar to OpenStack Object + Storage. + + + + Remote Procedure Call (RPC) + + The method used by the Compute RabbitMQ for + intra-service communications. 
+ + + + replica + + Provides data redundancy and fault tolerance by + creating copies of Object Storage objects, accounts, and + containers so they are not lost when the + underlying storage fails. + + + + replica count + + The number of replicas of the data in an Object Storage + ring. + + + + replication + + The process of copying data to a separate + physical device for fault tolerance and + performance. + + + + replicator + + The Object Storage back-end process that creates and + manages object replicas. + + + + request ID + + Unique ID assigned to each request sent to + Compute. + + + + rescue image + + A special type of VM image that is booted when + an instance is placed into rescue mode. Allows an + administrator to mount the file systems for an + instance to correct the problem. + + + + resize + + Converts an existing server to a different + flavor, which scales the server up or down. + The original server is saved to enable rollback if + a problem occurs. All resizes must be tested + and explicitly confirmed, at which time the + original server is removed. + + + + RESTful + + A kind of web service API that uses REST, or + Representational State Transfer. REST is the style + of architecture for hypermedia systems that is + used for the World Wide Web. + + + + ring + + An entity that maps Object Storage data to partitions. A + separate ring exists for each service, such as + account, object, and container. + + + + ring builder + + Builds and manages rings within Object Storage, assigns + partitions to devices, and pushes the + configuration to other storage nodes. + + + + Role Based Access Control (RBAC) + + Provides a predefined list of actions that the + user can perform such as start or stop VMs, reset + passwords, and so on. Supported in both Identity Service + and Compute and can be configured using the horizon + dashboard. + + + + role + + A personality that a user assumes that enables + them to perform a specific set of operations. 
A + role includes a set of rights and privileges. A + user assuming that role inherits those rights and + privileges. + + + + role ID + + Alpha-numeric ID assigned to each Identity Service + role. + + + + rootwrap + + A feature of Compute that allows the unprivileged + "nova" user to run a specified list of commands as + the Linux root user. + + + + round-robin scheduler + + Type of Compute scheduler that evenly distributes + instances among available hosts. + + + + routing key + + The Compute direct exchanges, fanout exchanges, and + topic exchanges use this to determine how to + process a message, processing varies depending on + exchange type. + + + + RPC driver + + Modular system that allows the underlying + message queue software of Compute to be changed. For + example, from RabbitMQ to ZeroMQ or Qpid. + + + + + rsync + + Used by Object Storage to push object replicas. + + + + RXTX cap + + Absolute limit on the amount of network traffic + a Compute VM instance can send and receive. + + + + RXTX quota + + Soft limit on the amount of network traffic a + Compute VM instance can send and receive. + + + + Ryu neutron plug-in + + Enables the Ryu network operating system to + function as a Networking OpenFlow controller. + + + + + + S + + S3 + + Object storage service by Amazon, similar in + function to Object Storage, can act as a back-end store for + Image Service VM images. + + + + savanna + + OpenStack project that provisions Hadoop on top + of OpenStack to provide a data processing + service. + + + + scheduler manager + + A Compute component that determines where VM + instances should start. Uses modular design to + support a variety of scheduler types. + + + + scoped token + + An Identity Service API access token that is associated + with a specific tenant. + + + + scrubber + + Checks for and deletes unused VM, the component + of Image Service that implements delayed delete. 
+ + + + secret key + + String of text only known by the user, used + along with an access key to make requests to the + Compute API. + + + + secure shell (SSH) + + Open source tool used to access remote hosts + through an encrypted communications channel, SSH + key injection is supported by Compute. + + + + security group + + A set of network traffic filtering rules that + are applied to a Compute instance. + + + + segmented object + + An Object Storage large object that has been broken up + into pieces, the re-assembled object is called a + concatenated object. + + + + server + + Computer that provides explicit services to the + client software running on that system, often + managing a variety of computer operations. + A server is a VM instance in the compute system. + Flavor and image are requisite elements when + creating a server. + + + + server image + + Alternative term for a VM image. + + + + server UUID + + Unique ID assigned to each guest VM + instance. + + + + service + + + An OpenStack service, such as Compute, Object + Storage, or Image Service. Provides one or more + endpoints through which users can access resources + and perform operations. + + + + + service catalog + + Alternative term for the Identity Service + catalog. + + + + service ID + + Unique ID assigned to each service that is + available in the Identity Service catalog. + + + + service registration + + An Identity Service feature that enables services, + such as Compute, + to automatically register with the + catalog. + + + + service tenant + + Special tenant that contains all + services that are listed in the catalog. + + + + service token + + An administrator defined token used by Compute to + communicate securely with the Identity Service. + + + + session back-end + + The method of storage used by horizon to track + client sessions such as local memory, cookies, a + database, or memcached. + + + + session persistence + + A feature of the load balancing service. 
It + attempts to force subsequent connections to a + service to be redirected to the same node as long + as it is online. + + + + session storage + + A horizon component that stores and tracks + client session information. Implemented through + the Django sessions framework. + + + + shared IP address + + An IP address that can be assigned to a VM + instance within the shared IP group. Public IP + addresses can be shared across multiple servers + for use in various high availability scenarios. + When an IP address is shared to another server, + the cloud network restrictions are modified to + enable each server to listen to and respond on that + IP address. You can optionally specify that the + target server network configuration be modified. + Shared IP addresses can be used with many standard + heartbeat facilities, such as keepalive, that + monitor for failure and manage IP failover. + + + + + shared IP group + + A collection of servers that can share IPs with + other members of the group. Any server in a group + can share one or more public IPs with any other + server in the group. With the exception of the + first server in a shared IP group, servers must be + launched into shared IP groups. A server may only + be a member of one shared IP group. + + + + shared storage + + Block storage that is simultaneously accessible + by multiple clients. For example, NFS. + + + + Sheepdog + + Distributed block storage system for QEMU, + supported by OpenStack. + + + + Simple Cloud Identity Management + (SCIM) + + Specification for managing identity in the + cloud, currently unsupported by OpenStack. + + + + Single-root I/O Virtualization + (SR-IOV) + + A specification that when implemented by a physical PCIe + device enables it to appear as multiple separate + PCIe devices. This enables multiple virtualized guests + to share direct access to the physical device, offering + improved performance over an equivalent virtual device. 
+ Currently supported in OpenStack Havana and later + releases. + + + + SmokeStack + + Runs automated tests against the core OpenStack + API, written in Rails. + + + + snapshot + + A point-in-time copy of an OpenStack storage + volume or image. Use storage volume snapshots to + back up volumes. Use image snapshots to back up + data, or as "gold" images for additional servers. + + + + + soft reboot + + A controlled reboot where a VM instance is + properly restarted through operating system + commands. + + + + SolidFire Volume Driver + + The Block Storage driver for the SolidFire iSCSI + storage appliance. + + + + SPICE + + + The Simple Protocol for Independent Computing + Environments (SPICE) provides remote desktop access + to guest virtual machines. It is an alternative to + VNC. SPICE is supported by OpenStack. + + + + + spread-first scheduler + + The Compute VM scheduling algorithm that attempts + to start new VM on the host with the least amount + of load. + + + + SQL-Alchemy + + An open source SQL toolkit for Python, used in + OpenStack. + + + + SQLite + + A lightweight SQL database, used as the default + persistent storage method in many OpenStack + services. + + + + StackTach + + Community project that captures Compute AMQP + communications, useful for debugging. + + + + static IP address + + Alternative term for a fixed IP address. + + + + StaticWeb + + WSGI middleware component of Object Storage that serves + container data as a static web page. + + + + storage back-end + + The method that a service uses for persistent + storage such as iSCSI, NFS, or local disk. + + + + storage node + + An Object Storage node that provides container services, + account services, and object services, controls + the account databases, container databases, and + object storage. + + + + storage manager + + A XenAPI component that provides a pluggable + interface to support a wide variety of persistent + storage back-ends. 
+ + + + storage manager back-end + + A persistent storage method supported by XenAPI + such as iSCSI or NFS. + + + + storage services + + Collective name for the Object Storage object services, + container services, and account services. + + + + strategy + + Specifies the authentication source used by + Image Service or Identity Service. + + + + subdomain + + A domain within a parent domain. Subdomains + cannot be registered. Subdomains enable you to + delegate domains. Subdomains can themselves have + subdomains, so third-level, fourth-level, + fifth-level, and deeper levels of nesting are + possible. + + + + SUSE Linux Enterprise Server (SLES) + + A Linux distribution that is compatible with + OpenStack. + + + + suspend + + Alternative term for a paused VM + instance. + + + + swap + + Disk-based virtual memory, used by operating + systems to provide more memory than is actually + available on the system. + + + + swawth + + An authentication and authorization service for + Object Storage, implemented through WSGI middleware, uses + Object Storage itself as the persistent backing + store. + + + + swift + + An OpenStack core project that provides object + storage services. + + + + swift All in One (SAIO) + + Creates a full Object Storage development environment + within a single VM. + + + + swift middleware + + Collective term for Object Storage components that + provide additional functionality. + + + + swift proxy server + + Acts as the gatekeeper to Object Storage and is + responsible for authenticating the user. + + + + swift storage node + + A node that runs Object Storage account, container, and + object services. + + + + sync point + + Point in time since the last container and + accounts database sync among nodes within + Object Storage. + + + + sysadmin + + One of the default roles in the Compute RBAC + system. 
Enables a user to add other users to a project, + interact with VM images that are + associated with the project, and start and stop VM + (VM) instances. + + + + system usage + + A Compute component that, along with the + notification system, collects metrics and usage + information. This information can be used for billing. + + + + + + + T + + Telemetry + + An integrated project that provides + metering and measuring facilities for OpenStack. The + project name of Telemetry is ceilometer. + + + + TempAuth + + An authentication facility within Object Storage that + enables Object Storage itself to perform authentication and + authorization. Frequently used in testing and + development. + + + + Tempest + + Automated software test suite designed to run + against the trunk of the OpenStack core + project. + + + + TempURL + + An Object Storage middleware component that enables creation of URLs for temporary object access. + + + + tenant + + A group of users, used to isolate access to Compute + resources. An alternative term for a + project. + + + + Tenant API + + An API that is accessible to tenants. + + + + tenant endpoint + + An Identity Service API endpoint that is associated with + one or more tenants. + + + + tenant ID + + Unique ID assigned to each tenant within + the Identity Service, the project IDs map to the + tenant IDs. + + + + token + + An alpha-numeric string of text used to access + OpenStack APIs and resources. + + + + token services + + An Identity Service component that manages and validates + tokens after a user or tenant has been + authenticated. + + + + tombstone + + Used to mark Object Storage objects that have been + deleted, ensures the object is not updated on + another node after it has been deleted. + + + + topic publisher + + A process that is created when a RPC call is + executed, used to push the message to the topic + exchange. + + + + Torpedo + + Community project used to run automated tests + against the OpenStack API. 
+ + + + transaction ID + + Unique ID assigned to each Object Storage request, used + for debugging and tracing. + + + + transient + + Alternative term for non-durable. + + + + transient exchange + + Alternative term for a non-durable + exchange. + + + + transient message + + A message that is stored in memory and is lost + after the server is restarted. + + + + transient queue + + Alternative term for a non-durable queue. + + + + trove + + OpenStack project that provides database + services to applications. + + + + + + U + + Ubuntu + + A Debian-based Linux distribution. + + + + unscoped token + + Alternative term for an Identity Service default + token. + + + + updater + + Collective term for a group of Object Storage components + that processes queued and failed updates for + containers and objects. + + + + user + + In Identity Service each user is associated with one or + more tenants, and in Compute they can be associated + with roles, projects, or both. + + + + user data + + A blob of data that can be specified by the user + when launching an instance. This data can be + accessed by the instance through the metadata + service or config drive. Commonly used for passing + a shell script that is executed by the instance on + boot. + + + + User Mode Linux (UML) + + An OpenStack-supported hypervisor. + + + + + + V + + VIF UUID + + Unique ID assigned to each Networking VIF. + + + + Virtual Central Processing Unit + (vCPU) + + Sub-divides physical CPUs. Instances can then use those + divisions. + + + + Virtual Disk Image (VDI) + + One of the VM image disk formats supported by + Image Service. + + + + Virtual Hard Disk (VHD) + + One of the VM image disk formats supported by + Image Service. + + + + virtual IP + + An Internet Protocol (IP) address configured on + the load balancer for use by clients connecting to + a service that is load balanced. Incoming + connections are distributed to back-end nodes + based on the configuration of the load balancer. 
+ + + + + virtual machine (VM) + + An operating system instance that runs on top of + a hypervisor. Multiple VMs can run at the same + time on the same physical host. + + + + virtual network + + An L2 network segment within Networking. + + + + Virtual Network Computing (VNC) + + Open source GUI and CLI tools used for remote + console access to VMs. Supported by Compute. + + + + Virtual Network InterFace (VIF) + + An interface that is plugged into a port in a + Networking network. Typically a virtual network + interface belonging to a VM. + + + + virtual port + + Attachment point where a virtual interface + connects to a virtual network. + + + + virtual private network (VPN) + + Provided by Compute in the form of cloudpipes, + specialized instances that are used to create VPNs + on a per-project basis. + + + + virtual server + + Alternative term for a VM or guest. + + + + virtual switch (vSwitch) + + Software that runs on a host or node and + provides the features and functions of a hardware + based network switch. + + + + virtual VLAN + + Alternative term for a virtual network. + + + + VirtualBox + + An OpenStack-supported hypervisor. + + + + VLAN manager + + A Compute component that provides dnsmasq, radvd, + and sets up forwarding to and from cloudpipe + instances. + + + + VLAN network + + The Network Controller provides virtual networks + to enable compute servers to interact with each + other and with the public network. All machines + must have a public and private network interface. + A VLAN network is a private network interface, + which is controlled by the vlan_interface option + with VLAN managers. + + + + VM disk (VMDK) + + One of the VM image disk formats supported by + Image Service. + + + + VM image + + Alternative term for an image. + + + + VM Remote Control (VMRC) + + Method to access VM instance consoles using a + web browser. Supported by Compute. + + + + VMware API + + Supports interaction with VMware products in + Compute. 
+ + + + VMware NSX Neutron plugin + + Provides support for VMware NSX in Neutron. + + + + VNC proxy + + A Compute component that provides users access to + the consoles of their VM instances through VNC or + VMRC. + + + + volume + + Disk-based data storage generally represented as + an iSCSI target with a file system that supports + extended attributes, can be persistent or + ephemeral. + + + + Volume API + + An API on a separate endpoint for attaching, + detaching, and creating block storage for compute + VMs. + + + + volume controller + + A Block Storage component that oversees and coordinates + storage volume actions. + + + + volume driver + + Alternative term for a volume plug-in. + + + + volume ID + + Unique ID applied to each storage volume under + the Block Storage control. + + + + volume manager + + A Block Storage component that creates, attaches, and + detaches persistent storage volumes. + + + + volume node + + A Block Storage node that runs the + cinder-volume + daemon. + + + + volume plug-in + + Provides + support for new and specialized types of + back-end storage for the Block Storage + volume manager. + + + + Volume Service API + + Alternative term for the Compute volume API. + + + + volume worker + + A cinder component that interacts with back-end + storage to manage the creation and deletion of + volumes and the creation of compute volumes, + provided by the cinder-volume daemon. + + + + vSphere + + An OpenStack-supported hypervisor. + + + + + + W + + weighing + + A Compute process that determines the suitability + of the VM instances for a job for a particular + host. For example, not enough RAM on the host, too + many CPUs on the host, and so on. + + + + weight + + Used by Object Storage storage devices to determine which + storage devices are suitable for the job. Devices + are weighted by size. + + + + weighted cost + + The sum of each cost used when deciding where to + start a new VM instance in Compute. 
+ + + + worker + + A daemon that listens to a queue and carries out tasks in response to messages. For example, + the cinder-volume worker attaches + storage to instances. + + + + + + X + + Xen API + + The Xen administrative API, which is supported + by Compute. + + + + Xen Cloud Platform (XCP) + + An OpenStack-supported hypervisor. + + + + Xen Storage Manager Volume Driver + + A Block Storage volume plug-in that enables + communication with the Xen Storage Manager + API. + + + + XenServer + + An OpenStack-supported hypervisor. + + + + + + Y + + + + + + + + + + Z + + ZeroMQ + + Message queue software supported by OpenStack. + An alternative to RabbitMQ. Also spelled + 0MQ. + + + + Zuul + + Tool used in OpenStack development to ensure + correctly ordered testing of changes in + parallel. + + + + + diff --git a/manuals/glossary/opendaylight-glossary.xml b/manuals/glossary/opendaylight-glossary.xml new file mode 100644 index 000000000..71754a8de --- /dev/null +++ b/manuals/glossary/opendaylight-glossary.xml @@ -0,0 +1,17 @@ + + + OpenDaylight Glossary + + + OpenDaylight Glossary + Use this glossary to get definitions of OpenDaylight-related words and + phrases. + + + diff --git a/manuals/glossary/pom.xml b/manuals/glossary/pom.xml new file mode 100644 index 000000000..27ad0547c --- /dev/null +++ b/manuals/glossary/pom.xml @@ -0,0 +1,70 @@ + + + org.opendaylight.documentation + manuals + 0.1.0-SNAPSHOT + ../pom.xml + + 4.0.0 + glossary + OpenDaylight Docs - Manuals - Glossary + + + local + + + + + + + + target/docbkx/pdf + + **/*.fo + + + + + + com.inocybe.api + sdndocs-maven-plugin + + + + generate-webhelp + + generate-webhelp + + generate-sources + + + 0 + 1 + UA-17511903-1 + + false + 0 + 0 + 0 + 0 + glossary + + + + + + true + glossary-terms.xml + . 
+ + opendaylight-glossary.xml + + http://docs.opendaylight.org/glossary/content/ + opendaylight + + + + + diff --git a/manuals/howto-openstack/UserGuide.xpr b/manuals/howto-openstack/UserGuide.xpr new file mode 100644 index 000000000..fab6175e5 --- /dev/null +++ b/manuals/howto-openstack/UserGuide.xpr @@ -0,0 +1,19 @@ + + + + + + + + + validation.scenarios + + + + + + + + + + \ No newline at end of file diff --git a/manuals/howto-openstack/bk-howto-openstack.xml b/manuals/howto-openstack/bk-howto-openstack.xml new file mode 100644 index 000000000..01907e14b --- /dev/null +++ b/manuals/howto-openstack/bk-howto-openstack.xml @@ -0,0 +1,69 @@ + + + + OpenDaylight User Guide + End User Guide + + + + + + Linux Foundation + + + + 2014 + Linux Foundation + + hydrogen + OpenDaylight + + + + Copyright details are filled in by the template. + + + + OpenDaylight is an open platform for network programmability to enable SDN and create a + solid foundation for NFV for networks at any size and scale. OpenDaylight software is a + combination of components including a fully pluggable controller, interfaces, protocol + plug-ins and applications. + + + + 2014-02-24 + + + + First edition of this document. + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/manuals/howto-openstack/ch_install.xml b/manuals/howto-openstack/ch_install.xml new file mode 100644 index 000000000..aaf6e716a --- /dev/null +++ b/manuals/howto-openstack/ch_install.xml @@ -0,0 +1,23 @@ + + + + + OpenDaylight Installation + + The OpenDaylight Installation process is straight forward and self contained. OpenDaylight + can be installed in your environment by using release archives, RPM, VirtualBox images or + even via Docker containers. 
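The archive-based install, for example, can be sketched as follows. This is a minimal sketch, not taken from this guide: the download URL is a placeholder, and the archive layout (an opendaylight/ directory containing run.sh) is an assumption based on the Hydrogen base distribution.

```shell
# Minimal sketch of an archive-based install. The URL and archive name are
# placeholders -- substitute the release artifact for your OpenDaylight edition.
wget <release-archive-url>/distribution-base-osgipackage.zip
unzip distribution-base-osgipackage.zip

# The archive is assumed to unpack into an opendaylight/ directory.
cd opendaylight

# Start the controller; the OSGi console runs in the foreground and the
# web UI listens on port 8080 by default.
./run.sh
```

The RPM, VirtualBox image, and Docker methods follow the same pattern: obtain the artifact for your release, then start the controller service it provides.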
+ + + + + + + + + diff --git a/manuals/howto-openstack/images/Horizon-OpenDaylight-e1392513990486.jpg b/manuals/howto-openstack/images/Horizon-OpenDaylight-e1392513990486.jpg new file mode 100644 index 000000000..fcd9e176b Binary files /dev/null and b/manuals/howto-openstack/images/Horizon-OpenDaylight-e1392513990486.jpg differ diff --git a/manuals/howto-openstack/images/OVSDB-Architecture.png b/manuals/howto-openstack/images/OVSDB-Architecture.png new file mode 100644 index 000000000..e494b04bd Binary files /dev/null and b/manuals/howto-openstack/images/OVSDB-Architecture.png differ diff --git a/manuals/howto-openstack/images/Overlay-OpenDaylight-OVSDB-OpenFlow.png b/manuals/howto-openstack/images/Overlay-OpenDaylight-OVSDB-OpenFlow.png new file mode 100644 index 000000000..87402c490 Binary files /dev/null and b/manuals/howto-openstack/images/Overlay-OpenDaylight-OVSDB-OpenFlow.png differ diff --git a/manuals/howto-openstack/images/VirtualBox-HostOnly-Networks.png b/manuals/howto-openstack/images/VirtualBox-HostOnly-Networks.png new file mode 100644 index 000000000..6db60610a Binary files /dev/null and b/manuals/howto-openstack/images/VirtualBox-HostOnly-Networks.png differ diff --git a/manuals/howto-openstack/images/VirtualBox-HostOnly-Nics.png b/manuals/howto-openstack/images/VirtualBox-HostOnly-Nics.png new file mode 100644 index 000000000..fafad243c Binary files /dev/null and b/manuals/howto-openstack/images/VirtualBox-HostOnly-Nics.png differ diff --git a/manuals/howto-openstack/pom.xml b/manuals/howto-openstack/pom.xml new file mode 100644 index 000000000..2c3d74539 --- /dev/null +++ b/manuals/howto-openstack/pom.xml @@ -0,0 +1,82 @@ + + + org.opendaylight.integration + manuals + 0.1.0-SNAPSHOT + ../pom.xml + + 4.0.0 + opendaylight-howto-openstack + jar + OpenDaylight HowTo OpenStack + + + local + 1 + + + + + + + + com.inocybe.api + sdndocs-maven-plugin + 0.1.0-SNAPSHOT + + + + + generate-webhelp + + generate-webhelp + + generate-sources + + enduser + 
bk-howto-openstack.xml + + appendix toc,title + article/appendix nop + article toc,title + book toc,title,figure,table,example,equation + chapter toc,title + section toc + part toc,title + qandadiv toc + qandaset toc + reference toc,title + set toc,title + + howto-openstack + howto-openstack + + + + + enduser + 1 + 0 + 1 + 0 + false + true + true + . + mlemay@inocybe.com + opendaylight + 2.6in + 0 + http://docs.opendaylight.org/howto-openstack/content/ + ${basedir}/../glossary/glossary-terms.xml + + + + + diff --git a/manuals/howto-openstack/section_configure_devstack.xml b/manuals/howto-openstack/section_configure_devstack.xml new file mode 100644 index 000000000..2664d3c43 --- /dev/null +++ b/manuals/howto-openstack/section_configure_devstack.xml @@ -0,0 +1,173 @@ + + + + + +]> +
Configure DevStack for the OpenStack Controller
If you have previously “stacked”, make sure all bridges have been removed:

$ sudo ovs-vsctl show

Once the OpenDaylight controller is running, stack the OpenStack controller:
Fedora 19:

$ cd ~/
$ cd devstack
$ cp local.conf.control local.conf
$ vi local.conf

Fedora 20:

$ cd ~/
$ cp local.conf.control devstack/local.conf
$ cd devstack
$ vi local.conf

Edit the local.conf you just copied and fill in the appropriate IP addresses: replace each bracketed placeholder with the OpenDaylight SDN controller IP, the OpenStack controller IP, or the OpenStack compute IP (the compute ethX address applies only to the compute node).
In the local.conf you will see four lines that require an IP address to be hardcoded:

SERVICE_HOST=
HOST_IP=
VNCSERVER_PROXYCLIENT_ADDRESS=
url=http://:8080/controller/nb/v2/neutron

The following is the OpenStack controller local.conf for this tutorial:

[[local|localrc]]
LOGFILE=stack.sh.log
# Logging Section
SCREEN_LOGDIR=/opt/stack/data/log
LOG_COLOR=False
# Prevent refreshing of dependencies and DevStack recloning
OFFLINE=True
#RECLONE=yes

disable_service rabbit
enable_service qpid
enable_service n-cpu
enable_service n-cond
disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service quantum
enable_service tempest

Q_HOST=$SERVICE_HOST
HOST_IP=172.16.86.129

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
ENABLE_TENANT_TUNNELS=True
NEUTRON_REPO=https://github.com/CiscoSystems/neutron.git
NEUTRON_BRANCH=odl_ml2

VNCSERVER_PROXYCLIENT_ADDRESS=172.16.86.129
VNCSERVER_LISTEN=0.0.0.0

HOST_NAME=fedora-odl-1
SERVICE_HOST_NAME=${HOST_NAME}
SERVICE_HOST=172.16.86.129

FLOATING_RANGE=192.168.210.0/24
PUBLIC_NETWORK_GATEWAY=192.168.75.254
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
QPID_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[agent]
minimize_polling=True

[ml2_odl]
url=http://172.16.86.129:8080/controller/nb/v2/neutron
username=admin
password=admin

Verify the local.conf by grepping for the IP prefix used:

$ grep 172 local.conf
HOST_IP=172.16.86.129
VNCSERVER_PROXYCLIENT_ADDRESS=172.16.86.129
SERVICE_HOST=172.16.86.129
url=http://172.16.86.129:8080/controller/nb/v2/neutron

Finally, execute the stack.sh shell script:

$ ./stack.sh

You should see activity in your OSGi console as Neutron adds the default private and public networks, like so:

osgi> 2014-02-06 20:58:27.418 UTC [http-bio-8080-exec-1] INFO o.o.c.u.internal.UserManager - Local Authentication Succeeded for User: "admin"
2014-02-06 20:58:27.419 UTC [http-bio-8080-exec-1] INFO o.o.c.u.internal.UserManager - User "admin" authorized for the following role(s): [Network-Admin]

You will see more activity as ODL programs the OVSDB server running on the OpenStack node.
Here is the state of Open vSwitch after the stack completes and prior to booting a VM instance. 
If you do not see the is_connected: true boolean after Manager (OVSDB) and Controller (OpenFlow), an error has occurred; check that the controller/manager IPs are reachable and that the ports are bound, using the lsof command listed earlier:

[odl@fedora-odl-1 devstack]$sudo ovs-vsctl show

17074e89-2ac5-4bba-997a-1a5a3527cf56
Manager "tcp:172.16.86.129:6640"
is_connected: true
Bridge br-int
Controller "tcp:172.16.86.129:6633"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "tap1e3dfa54-9c"
Interface "tap1e3dfa54-9c"
Bridge br-ex
Controller "tcp:172.16.86.129:6633"
is_connected: true
Port "tap9301c38d-d8"
Interface "tap9301c38d-d8"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.0.0"

Here are the OpenFlow v1.3 flow rules for the default namespace ports in OVS (qdhcp / qrouter):

OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x0, duration=202.138s, table=0, n_packets=0, n_bytes=0, send_flow_rem in_port=1,dl_src=fa:16:3e:fb:4a:32 actions=set_field:0x2->tun_id,goto_table:10
cookie=0x0, duration=202.26s, table=0, n_packets=0, n_bytes=0, send_flow_rem in_port=1,dl_src=fa:16:3e:2e:29:d3 actions=set_field:0x1->tun_id,goto_table:10
cookie=0x0, duration=202.246s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=8192,in_port=1 actions=drop
cookie=0x0, duration=202.302s, table=0, n_packets=0, n_bytes=0, send_flow_rem dl_type=0x88cc actions=CONTROLLER:56
cookie=0x0, duration=202.186s, table=10, n_packets=0, n_bytes=0, send_flow_rem priority=8192,tun_id=0x1 actions=goto_table:20
cookie=0x0, duration=202.063s, table=10, n_packets=0, n_bytes=0, send_flow_rem priority=8192,tun_id=0x2 actions=goto_table:20
cookie=0x0, duration=202.14s, table=20, n_packets=0, n_bytes=0, send_flow_rem priority=8192,tun_id=0x1 actions=drop
cookie=0x0, duration=202.046s, table=20, n_packets=0, n_bytes=0, send_flow_rem priority=8192,tun_id=0x2 actions=drop
cookie=0x0, duration=202.2s, table=20, n_packets=0, n_bytes=0, send_flow_rem priority=16384,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:1
cookie=0x0, duration=202.083s, table=20, n_packets=0, n_bytes=0, send_flow_rem priority=16384,tun_id=0x2,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:1
cookie=0x0, duration=202.211s, table=20, n_packets=0, n_bytes=0, send_flow_rem tun_id=0x1,dl_dst=fa:16:3e:2e:29:d3 actions=output:1
cookie=0x0, duration=202.105s, table=20, n_packets=0, n_bytes=0, send_flow_rem tun_id=0x2,dl_dst=fa:16:3e:fb:4a:32 actions=output:1
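To automate the is_connected check described above, a small parser over the `ovs-vsctl show` output can flag any Manager (OVSDB) or Controller (OpenFlow) entry that is not connected. This is a hypothetical helper, not part of the guide's tooling; it parses a captured transcript here, and on a live node you would pipe in `sudo ovs-vsctl show` instead.

```shell
# Captured sample of `ovs-vsctl show` output (replace with a live pipe).
ovs_output='Manager "tcp:172.16.86.129:6640"
    is_connected: true
Bridge br-int
    Controller "tcp:172.16.86.129:6633"
    is_connected: true'

# For each Manager/Controller line, inspect the following line for
# "is_connected: true" and report the result.
status=$(printf '%s\n' "$ovs_output" | awk '
  /Manager|Controller/ { pending = 1; next }
  pending == 1 { if ($0 ~ /is_connected: true/) print "connected";
                 else print "NOT CONNECTED"; pending = 0 }')
echo "$status"
```

Any "NOT CONNECTED" line points at the unreachable-IP or unbound-port condition described above.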
diff --git a/manuals/howto-openstack/section_configure_fedora_images.xml b/manuals/howto-openstack/section_configure_fedora_images.xml new file mode 100644 index 000000000..5ae7bf518 --- /dev/null +++ b/manuals/howto-openstack/section_configure_fedora_images.xml @@ -0,0 +1,182 @@ + + + + + +]> +
Configure the Fedora Images for Your Environment
You have two image options: Fedora 19 and Fedora 20. Fedora 19 is recommended because Fedora 20 has an issue with MariaDB and hostnames. For assistance with getting the stack going, ping the OVSDB listserv and check the archives for answers.
Download the pre-built image that contains OpenDaylight, DevStack (installing Icehouse OpenStack), and Open vSwitch, all on Fedora:
Fedora 19:

curl -O https://wiki.opendaylight.org/images/HostedFiles/ODL_Devstack_Fedora19.zip
$ unzip ODL_Devstack_Fedora19.zip
# Two files contained
ODL-Devstack-Fedora19-disk1.vmdk
ODL-Devstack-Fedora19.ovf

Fedora 20:

$ curl -O https://wiki.opendaylight.org/images/HostedFiles/OpenDaylight_DevStack_Fedora20.ova

Clone this virtual machine image into two images: one for the control node (this VM runs both the OpenStack Controller and the OpenDaylight Controller) and the other for the compute instance. If you use VM Fusion, the vanilla image works as is with no need to change any adaptor settings. Use the ‘ip addr’ configuration output as a reference in the next section. I recommend using SSH to connect to the host rather than using the TTY interface.

Here are two screenshots with VirtualBox network adaptor examples. The first shows the two networks you can create; vboxnet0 is there by default. Create the second network as pictured in the following example. Note: you have to manually fill in the DHCP server settings on the new network. Refer to the existing network if unsure of the values to use. When complete, the host OS should be able to reach the guest OS.

The second example shows what the VirtualBox NIC setup can look like without having to deal with the NAT Network option in VirtualBox. VM Fusion has integrated hooks that resolve the need for host-only networking. NAT and host-only both work fine; with NAT the host can reach your network's default gateway and get to the Internet as needed. 
With host only that is not the case + but it is plenty to run the stack and integration. + + + + + Boot both guest VMs write down the four IP addresses from both NICs. You will primarily + only use one of them other then a gateway or out of band SSH connectivity etc. + Fedora 19: + + +Login: fedora +Passwd: opendaylight + + Fedora 20: + + +Login: odl +Passwd: odl + + In this example the configuration of the IP addresses are as follows: + + +Openstack Controller IP == 172.16.86.129 +Openstack Compute IP == 172.16.86.128 +OpenDaylight Controller IP == 172.16.86.129 + + Record the IP addresses of both of the hosts: + Controller IP addresses: + + [odl@fedora-odl-1 devstack]$ip addr + +1: lo: loopback,up,lower_up, mtu 65536 qdisc noqueue state UNKNOWN group default +link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 +inet 127.0.0.1/8 scope host lo +valid_lft forever preferred_lft forever +inet6 ::1/128 scope host +valid_lft forever preferred_lft forever +2: eth0: <broadcast,multicast,up,lower_up> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 +link/ether 00:0c:29:35:0b:65 brd ff:ff:ff:ff:ff:ff +inet 172.16.47.134/24 brd 172.16.47.255 scope global dynamic eth0 +valid_lft 1023sec preferred_lft 1023sec +inet6 fe80::20c:29ff:fe35:b65/64 scope link +valid_lft forever preferred_lft forever +3: eth1: <broadcast,multicast,up,lower_up> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 +link/ether 00:0c:29:35:0b:6f brd ff:ff:ff:ff:ff:ff +inet 172.16.86.129/24 brd 172.16.86.255 scope global dynamic eth1 +valid_lft 1751sec preferred_lft 1751sec +inet6 fe80::20c:29ff:fe35:b6f/64 scope link +valid_lft forever preferred_lft forever + + Compute IP addresses: + + [odl@fedora-odl-2 ~]$ip addr + +1: lo: <loopback,up,lower_up> mtu 65536 qdisc noqueue state UNKNOWN group default +link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 +inet 127.0.0.1/8 scope host lo +valid_lft forever preferred_lft forever +inet6 ::1/128 scope host +valid_lft forever preferred_lft 
forever
2: eth0: <broadcast,multicast,up,lower_up> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:85:2d:f2 brd ff:ff:ff:ff:ff:ff
inet 172.16.47.133/24 brd 172.16.47.255 scope global dynamic eth0
valid_lft 1774sec preferred_lft 1774sec
inet6 fe80::20c:29ff:fe85:2df2/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <broadcast,multicast,up,lower_up> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:85:2d:fc brd ff:ff:ff:ff:ff:ff
inet 172.16.86.128/24 brd 172.16.86.255 scope global dynamic eth1
valid_lft 1716sec preferred_lft 1716sec
inet6 fe80::20c:29ff:fe85:2dfc/64 scope link
valid_lft forever preferred_lft forever

Go to the home directory of the user id odl:

$ cd ~/

Start the OVS service (DevStack should start this service). The startup script can also be enabled so that OVS loads at OS init.

sudo /sbin/service openvswitch start

Configure the /etc/hosts file to reflect your controller and compute hostname mappings. While not strictly required, omitting this can cause issues with Nova output.
Verify the OpenStack Controller /etc/hosts file. The only edit is adding the compute IP-to-hostname mapping, e.g. 
x.x.x.x fedora-odl-2

[odl@fedora-odl-1 ~]$sudo vi /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 fedora-odl-1
172.16.86.128 fedora-odl-2
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

Edit the compute node's /etc/hosts, changing fedora-odl-1 to fedora-odl-2:

[odl@fedora-odl-2 ~]$sudo vi /etc/hosts

$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 fedora-odl-2
172.16.86.129 fedora-odl-1
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

Then, change the compute hostname (compute only):

$ sudo vi /etc/hostname
# Change to:
$ cat /etc/hostname
fedora-odl-2
$sudo vi /etc/sysconfig/network
#Change HOSTNAME=fedora-odl-1 to HOSTNAME=fedora-odl-2
$sudo hostname -b fedora-odl-2

Then, reboot the cloned compute node for the change to take effect:

sudo shutdown -r now

After the host restarts, verify the hostname like so:

$ hostname
fedora-odl-2

Note: in the Fedora 20 VM, commenting out “127.0.0.1 localhost fedora-odl-1” will result in a crash of MySQL. Avoid any change that stops the hostname from locally resolving to 127.0.0.1; otherwise you will see errors like the following:

An unexpected error prevented the server from fulfilling your request. 
(OperationalError) (1045, "Access denied for user 'root'@'fedora-odl-1' (using password: YES)") None None (HTTP 500) +2014-02-10 04:03:28 + KEYSTONE_SERVICE= +2014-02-10 04:03:28 + keystone endpoint-create --region RegionOne --service_id --publicurl http://172.16.86.129:5000/v2.0 --adminurl http://172.16.86.129:35357/v2.0 --internalurl http://172.16.86.129:5000/v2.0 +2014-02-10 04:03:28 usage: keystone endpoint-create [--region ] --service +2014-02-10 04:03:28 --publicurl 2014-02-10 04:03:28 [--adminurl ] +2014-02-10 04:03:28 [--internalurl ] +2014-02-10 04:03:28 keystone endpoint-create: error: argument --service/--service-id/--service_id: expected one argument +2014-02-10 04:03:28 ++ failed + +
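The Fedora 20 pitfall above can be caught before stacking: the node's own hostname must stay on the 127.0.0.1 line of /etc/hosts or the database access fails as shown. The following is a hypothetical sanity check (not part of the guide's tooling) that tests an inline copy of the file; on a real node you would read /etc/hosts and `hostname` directly.

```shell
# Inline stand-in for /etc/hosts and the node's hostname.
hosts='127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 fedora-odl-1
172.16.86.128 fedora-odl-2
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6'
node=fedora-odl-1

# Warn if the hostname is missing from the loopback line (the MariaDB pitfall).
if printf '%s\n' "$hosts" | grep '^127\.0\.0\.1' | grep -qw "$node"; then
  result="OK: $node resolves locally via 127.0.0.1"
else
  result="WARNING: $node is missing from the 127.0.0.1 line"
fi
echo "$result"
```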
diff --git a/manuals/howto-openstack/section_configuring_openstack.xml b/manuals/howto-openstack/section_configuring_openstack.xml new file mode 100644 index 000000000..195eaf03d --- /dev/null +++ b/manuals/howto-openstack/section_configuring_openstack.xml @@ -0,0 +1,114 @@ + + + + + +]> +
Configuring the OpenStack Compute Node
The compute configuration steps are virtually identical to the controller's, other than the configuration values and the fact that the compute node does not run the OpenDaylight controller.
Fedora 19:

$ cd ~/
$ cd devstack
$ cp local.conf.compute local.conf
$ vi local.conf

Fedora 20:

$ cd /home/odl/
$ cp local.conf.compute devstack/local.conf
$ cd devstack
$ vi local.conf

Edit the local.conf you just copied into the devstack directory on the compute host with the appropriate controller and compute host IPs, as in the following example:

[[local|localrc]]
LOGFILE=stack.sh.log
#LOG_COLOR=False
#SCREEN_LOGDIR=/opt/stack/data/log
OFFLINE=true
#RECLONE=yes

disable_all_services
enable_service neutron nova n-cpu quantum n-novnc qpid

HOST_NAME=fedora-odl-2
HOST_IP=172.16.86.128
SERVICE_HOST_NAME=fedora-odl-1
SERVICE_HOST=172.16.86.129
VNCSERVER_PROXYCLIENT_ADDRESS=172.16.86.128
VNCSERVER_LISTEN=0.0.0.0

FLOATING_RANGE=192.168.210.0/24

NEUTRON_REPO=https://github.com/CiscoSystems/neutron.git
NEUTRON_BRANCH=odl_ml2
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,linuxbridge
ENABLE_TENANT_TUNNELS=True
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
QPID_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[agent]
minimize_polling=True

[ml2_odl]
url=http://172.16.86.129:8080/controller/nb/v2/neutron
username=admin
password=admin

Or check the conf file quickly by grepping it. 
[odl@fedora-odl-2 devstack]$grep 172 local.conf

HOST_IP=172.16.86.128
SERVICE_HOST=172.16.86.129
VNCSERVER_PROXYCLIENT_ADDRESS=172.16.86.128
url=http://172.16.86.129:8080/controller/nb/v2/neutron

And now stack the compute host:

$ ./stack.sh

Once you get the stack working, snapshot the image; it can be a handy timesaver. So is leaving DevStack set to “OFFLINE=true” and “RECLONE=no”, except when you need to pull a patch.
The state of OVS after the stack should be the following:

[odl@fedora-odl-2 devstack]$sudo ovs-vsctl show

17074e89-2ac5-4bba-997a-1a5a3527cf56
Manager "tcp:172.16.86.129:6640"
is_connected: true
Bridge br-int
Controller "tcp:172.16.86.129:6633"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
ovs_version: "2.0.0"
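The grep check above can be made slightly stricter by confirming each required address setting is present and non-empty, which catches a placeholder left blank. This is a hypothetical helper, not part of DevStack; an inline sample stands in for devstack/local.conf.

```shell
# Inline stand-in for the relevant lines of devstack/local.conf.
conf='HOST_IP=172.16.86.128
SERVICE_HOST=172.16.86.129
VNCSERVER_PROXYCLIENT_ADDRESS=172.16.86.128
url=http://172.16.86.129:8080/controller/nb/v2/neutron'

# Each key must appear with at least one character after the "=".
missing=0
for key in HOST_IP SERVICE_HOST VNCSERVER_PROXYCLIENT_ADDRESS url; do
  if printf '%s\n' "$conf" | grep -q "^${key}=."; then
    echo "$key set"
  else
    echo "$key MISSING"
    missing=1
  fi
done
echo "missing=$missing"
```

A non-zero `missing` means stack.sh would run with an unset address and fail later in a less obvious way.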
diff --git a/manuals/howto-openstack/section_create_multi_network.xml b/manuals/howto-openstack/section_create_multi_network.xml new file mode 100644 index 000000000..d651c2086 --- /dev/null +++ b/manuals/howto-openstack/section_create_multi_network.xml @@ -0,0 +1,242 @@ + + + + + +]> +
Create Multiple Network Types: GRE and VXLAN
Create some hosts in an overlay using the VXLAN encapsulation with specified segmentation IDs (VNIs):

neutron net-create vxlan-net1 --tenant_id $(keystone tenant-list | grep '\sadmin' | awk '{print $2}') --provider:network_type vxlan --provider:segmentation_id 1600
neutron subnet-create vxlan-net1 10.100.1.0/24 --name vxlan-net1

neutron net-create vxlan-net2 --tenant_id $(keystone tenant-list | grep '\sadmin' | awk '{print $2}') --provider:network_type vxlan --provider:segmentation_id 1601
neutron subnet-create vxlan-net2 10.100.2.0/24 --name vxlan-net2

neutron net-create vxlan-net3 --tenant_id $(keystone tenant-list | grep '\sadmin' | awk '{print $2}') --provider:network_type vxlan --provider:segmentation_id 1603
neutron subnet-create vxlan-net3 10.100.3.0/24 --name vxlan-net3

Next, take a look at the networks that were just created:

[odl@fedora-odl-1 devstack]$neutron net-list

+--------------------------------------+------------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------------+-------------------------------------------------------+
| 03e3f964-8bc8-48fa-b4c9-9b8390f37b93 | private | b06d716b-527f-4da2-adda-5fc362456d34 10.0.0.0/24 |
| 4eaf08d3-2234-4632-b1e7-d11704b1238a | vxlan-net2 | b54c30fd-e157-4935-b9c2-cefa145162a8 10.100.2.0/24 |
| af8aa29d-a302-4ecf-a0b1-e52ff9c10b63 | vxlan-net1 | c44f9bee-adca-4bca-a197-165d545bcef9 10.100.1.0/24 |
| e6f3c605-6c0b-4f7d-a64f-6e593c5e647a | vxlan-net3 | 640cf2d1-b470-41dd-a4d8-193d705ea73e 10.100.3.0/24 |
| f6aede62-67a5-4fe6-ad61-2c1a88b08874 | public | 1e945d93-caeb-4890-8b58-ed00297a7f03 192.168.210.0/24 |
+--------------------------------------+------------+-------------------------------------------------------+

Now, boot the VMs:

nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list 
| grep vxlan-net1 | awk '{print $2}') vxlan-host1 --availability_zone=nova:fedora-odl-2 + +nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep vxlan-net2 | awk '{print $2}') vxlan-host2 --availability_zone=nova:fedora-odl-2 + +nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep vxlan-net2 | awk '{print $2}') vxlan-host3 --availability_zone=nova:fedora-odl-2 + + To pull up the Horizon UI to verify the nodes you have, point your web browser at the + controller IP (port 80). +
+ Horizon-OpenDaylight-e1392513990486.jpg + + + + + +
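The three net-create/subnet-create pairs above follow one pattern, so they can be generated from a list of segmentation IDs. The sketch below is a hypothetical dry-run generator: it echoes the commands instead of executing them (and omits the guide's --tenant_id lookup for brevity); drop the `echo` wrapper to run against a live Neutron endpoint.

```shell
# Emit one net-create/subnet-create pair per segmentation ID.
gen_vxlan_cmds() {
  i=1
  for seg in 1600 1601 1603; do
    echo "neutron net-create vxlan-net$i --provider:network_type vxlan --provider:segmentation_id $seg"
    echo "neutron subnet-create vxlan-net$i 10.100.$i.0/24 --name vxlan-net$i"
    i=$((i + 1))
  done
}

cmds=$(gen_vxlan_cmds)
echo "$cmds"
```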
Now, ping one of the hosts you just created to verify that it is functional:

[odl@fedora-odl-1 devstack]$ip netns

qdhcp-4eaf08d3-2234-4632-b1e7-d11704b1238a
qdhcp-af8aa29d-a302-4ecf-a0b1-e52ff9c10b63
qrouter-bed7005f-4c51-4c3a-b23b-3830b5e7663a
[odl@fedora-odl-1 devstack]$ nova list
+--------------------------------------+-------------+--------+------------+-------------+-----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------+--------+------------+-------------+-----------------------+
| f34ed046-5daf-42f5-9b2c-644f5ab6b2bc | vxlan-host1 | ACTIVE | - | Running | vxlan-net1=10.100.1.2 |
| 6b65d0f2-c621-4dc5-87ca-82a2c44734b2 | vxlan-host2 | ACTIVE | - | Running | vxlan-net2=10.100.2.2 |
| f3d5179a-e974-4eb4-984b-399d1858ab76 | vxlan-host3 | ACTIVE | - | Running | vxlan-net2=10.100.2.4 |
+--------------------------------------+-------------+--------+------------+-------------+-----------------------+
[odl@fedora-odl-1 devstack]$ sudo ip netns exec qdhcp-af8aa29d-a302-4ecf-a0b1-e52ff9c10b63 ping 10.100.1.2
PING 10.100.1.2 (10.100.1.2) 56(84) bytes of data.
64 bytes from 10.100.1.2: icmp_seq=1 ttl=64 time=2.63 ms
64 bytes from 10.100.1.2: icmp_seq=2 ttl=64 time=1.15 ms
^C
--- 10.100.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.151/1.892/2.633/0.741 ms

Now, create three new Neutron networks using the GRE encapsulation. (Note: booting too many VMs can exhaust memory and crash them.) 
### Create the Networks and corresponding Subnets ###
neutron net-create gre-net1 --tenant_id $(keystone tenant-list | grep '\sadmin' | awk '{print $2}') --provider:network_type gre --provider:segmentation_id 1700
neutron subnet-create gre-net1 10.100.1.0/24 --name gre-net1

neutron net-create gre-net2 --tenant_id $(keystone tenant-list | grep '\sadmin' | awk '{print $2}') --provider:network_type gre --provider:segmentation_id 1701
neutron subnet-create gre-net2 10.100.2.0/24 --name gre-net2

neutron net-create gre-net3 --tenant_id $(keystone tenant-list | grep '\sadmin' | awk '{print $2}') --provider:network_type gre --provider:segmentation_id 1703
neutron subnet-create gre-net3 10.100.3.0/24 --name gre-net3

### Boot the VMs ###

nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep gre-net1 | awk '{print $2}') gre-host1 --availability_zone=nova:fedora-odl-2

nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep gre-net2 | awk '{print $2}') gre-host2 --availability_zone=nova:fedora-odl-2

nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep gre-net2 | awk '{print $2}') gre-host3 --availability_zone=nova:fedora-odl-2

Here is an example of an OVS configuration. (Note: since the tunnel ID is being set, the OpenFlow OXM metadata field OFPXMT_OFB_TUNNEL_ID, implemented in OpenFlow v1.3, is used to set the logical port.) 
+ + [odl@fedora-odl-1 devstack]$nova list + ++--------------------------------------+-------------+--------+------------+-------------+-----------------------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+-------------+--------+------------+-------------+-----------------------+ +| 8db56e44-36db-4447-aeb9-e6679ca420b6 | gre-host1 | ACTIVE | - | Running | gre-net1=10.100.1.2 | +| 36fec86d-d9e6-462c-a686-f3c0929a2c21 | gre-host2 | ACTIVE | - | Running | gre-net2=10.100.2.2 | +| 67d97a8e-ecd3-4913-886c-423170ef3635 | gre-host3 | ACTIVE | - | Running | gre-net2=10.100.2.4 | +| f34ed046-5daf-42f5-9b2c-644f5ab6b2bc | vxlan-host1 | ACTIVE | - | Running | vxlan-net1=10.100.1.2 | +| 6b65d0f2-c621-4dc5-87ca-82a2c44734b2 | vxlan-host2 | ACTIVE | - | Running | vxlan-net2=10.100.2.2 | +| f3d5179a-e974-4eb4-984b-399d1858ab76 | vxlan-host3 | ACTIVE | - | Running | vxlan-net2=10.100.2.4 | ++--------------------------------------+-------------+--------+------------+-------------+-----------------------+ + + Neutron mappings from the Neutron client output: + + [odl@fedora-odl-1 devstack]$neutron net-list + ++--------------------------------------+------------+-------------------------------------------------------+ +| id | name | subnets | ++--------------------------------------+------------+-------------------------------------------------------+ +| 03e3f964-8bc8-48fa-b4c9-9b8390f37b93 | private | b06d716b-527f-4da2-adda-5fc362456d34 10.0.0.0/24 | +| 4eaf08d3-2234-4632-b1e7-d11704b1238a | vxlan-net2 | b54c30fd-e157-4935-b9c2-cefa145162a8 10.100.2.0/24 | +| a33c5794-3830-4220-8724-95752d8f94bd | gre-net1 | d32c8a70-70c6-4bdc-b741-af718b3ba4cd 10.100.1.0/24 | +| af8aa29d-a302-4ecf-a0b1-e52ff9c10b63 | vxlan-net1 | c44f9bee-adca-4bca-a197-165d545bcef9 10.100.1.0/24 | +| e6f3c605-6c0b-4f7d-a64f-6e593c5e647a | vxlan-net3 | 640cf2d1-b470-41dd-a4d8-193d705ea73e 10.100.3.0/24 | +| f6aede62-67a5-4fe6-ad61-2c1a88b08874 | public | 
1e945d93-caeb-4890-8b58-ed00297a7f03 192.168.210.0/24 |
| fa44d171-4935-4fae-9507-0ecf2d521b49 | gre-net2 | f8151c73-cda4-47e4-bf7c-8a73a7b4ef5f 10.100.2.0/24 |
| ffc7da40-8252-4cdf-a9a2-d538f4986215 | gre-net3 | 146931d8-9146-4abf-9957-d6a8a3db43e4 10.100.3.0/24 |
+--------------------------------------+------------+-------------------------------------------------------+

Next, verify the Open vSwitch configuration. Worthy of note: the tunnel IPv4 source/destination endpoints are defined using OVSDB, but the tunnel ID is set by the OpenFlow flowmod using key=flow. This tells OVSDB to look for the tunnel ID in the flowmod. There is also a similar concept for the IPv4 tunnel source/destination using the Nicira extensions NXM_NX_TUN_IPV4_SRC and NXM_NX_TUN_IPV4_DST, implemented in OVS 2.0. The NXM code points are referenced in the OF v1.3 specification, but it seems pretty nascent whether the ONF is looking to handle tunnel operations with OF-Config or via flowmods such as the NXM references. The NXM code points are defined in the ODL openflowjava project, which implements the library model for OF v1.3, and would just need to be plumbed through the MD-SAL convertor. 
+ + [odl@fedora-odl-2 devstack]$sudo ovs-vsctl show +17074e89-2ac5-4bba-997a-1a5a3527cf56 +Manager "tcp:172.16.86.129:6640" +is_connected: true +Bridge br-int +Controller "tcp:172.16.86.129:6633" +is_connected: true +fail_mode: secure +Port "tap8b31df39-d4" +Interface "tap8b31df39-d4" +Port br-int +Interface br-int +Port "gre-172.16.86.129" +Interface "gre-172.16.86.129" +type: gre +options: {key=flow, local_ip="172.16.86.128", remote_ip="172.16.86.129"} +ovs_version: "2.0.0" + + And then the OF v1.3 flowmods: + + [odl@fedora-odl-2 devstack]$sudo ovs-ofctl -O OpenFlow13 dump-flows br-int + +OFPST_FLOW reply (OF1.3) (xid=0x2): +cookie=0x0, duration=2415.341s, table=0, n_packets=30, n_bytes=2586, send_flow_rem in_port=4,dl_src=fa:16:3e:1a:49:61 actions=set_field:0x641->tun_id,goto_table:10 +cookie=0x0, duration=2425.095s, table=0, n_packets=39, n_bytes=3300, send_flow_rem in_port=2,dl_src=fa:16:3e:93:20:1e actions=set_field:0x640->tun_id,goto_table:10 +cookie=0x0, duration=2415.981s, table=0, n_packets=37, n_bytes=2880, send_flow_rem in_port=5,dl_src=fa:16:3e:02:28:8d actions=set_field:0x641->tun_id,goto_table:10 +cookie=0x0, duration=877.732s, table=0, n_packets=27, n_bytes=2348, send_flow_rem in_port=6,dl_src=fa:16:3e:20:cd:8e actions=set_field:0x6a4->tun_id,goto_table:10 +cookie=0x0, duration=878.981s, table=0, n_packets=31, n_bytes=2908, send_flow_rem in_port=7,dl_src=fa:16:3e:86:08:5f actions=set_field:0x6a5->tun_id,goto_table:10 +cookie=0x0, duration=882.297s, table=0, n_packets=32, n_bytes=2670, send_flow_rem in_port=8,dl_src=fa:16:3e:68:40:4a actions=set_field:0x6a5->tun_id,goto_table:10 +cookie=0x0, duration=884.983s, table=0, n_packets=16, n_bytes=1888, send_flow_rem tun_id=0x6a4,in_port=3 actions=goto_table:20 +cookie=0x0, duration=2429.719s, table=0, n_packets=33, n_bytes=3262, send_flow_rem tun_id=0x640,in_port=1 actions=goto_table:20 +cookie=0x0, duration=881.723s, table=0, n_packets=29, n_bytes=3551, send_flow_rem tun_id=0x6a5,in_port=3 
actions=goto_table:20 +cookie=0x0, duration=2418.434s, table=0, n_packets=33, n_bytes=3866, send_flow_rem tun_id=0x641,in_port=1 actions=goto_table:20 +cookie=0x0, duration=2426.048s, table=0, n_packets=0, n_bytes=0, send_flow_rem tun_id=0x2,in_port=3 actions=goto_table:20 +cookie=0x0, duration=2428.34s, table=0, n_packets=0, n_bytes=0, send_flow_rem tun_id=0x1,in_port=3 actions=goto_table:20 +cookie=0x0, duration=878.961s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=8192,in_port=7 actions=drop +cookie=0x0, duration=882.211s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=8192,in_port=8 actions=drop +cookie=0x0, duration=877.562s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=8192,in_port=6 actions=drop +cookie=0x0, duration=2415.941s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=8192,in_port=5 actions=drop +cookie=0x0, duration=2415.249s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=8192,in_port=4 actions=drop +cookie=0x0, duration=2425.04s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=8192,in_port=2 actions=drop +cookie=0x0, duration=2711.147s, table=0, n_packets=970, n_bytes=88270, send_flow_rem dl_type=0x88cc actions=CONTROLLER:56 +cookie=0x0, duration=873.508s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=1,in_port=3,dl_dst=00:00:00:00:00:00 actions=output:1 +cookie=0x0, duration=873.508s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=1,in_port=1,dl_dst=00:00:00:00:00:00 actions=output:1 +cookie=0x0, duration=877.224s, table=10, n_packets=0, n_bytes=0, send_flow_rem priority=8192,tun_id=0x6a4 actions=goto_table:20 +cookie=0x0, duration=2415.783s, table=10, n_packets=7, n_bytes=294, send_flow_rem priority=8192,tun_id=0x641 actions=goto_table:20 +cookie=0x0, duration=881.907s, table=10, n_packets=3, n_bytes=169, send_flow_rem priority=8192,tun_id=0x6a5 actions=goto_table:20 +cookie=0x0, duration=2424.811s, table=10, n_packets=0, n_bytes=0, send_flow_rem 
priority=8192,tun_id=0x640 actions=goto_table:20 +cookie=0x0, duration=881.623s, table=10, n_packets=37, n_bytes=3410, send_flow_rem priority=16384,tun_id=0x6a5,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:3,goto_table:20 +cookie=0x0, duration=2429.661s, table=10, n_packets=18, n_bytes=1544, send_flow_rem priority=16384,tun_id=0x640,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:1,goto_table:20 +cookie=0x0, duration=2418.33s, table=10, n_packets=36, n_bytes=3088, send_flow_rem priority=16384,tun_id=0x641,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:1,goto_table:20 +cookie=0x0, duration=2428.227s, table=10, n_packets=0, n_bytes=0, send_flow_rem priority=16384,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:3,goto_table:20 +cookie=0x0, duration=884.854s, table=10, n_packets=15, n_bytes=1306, send_flow_rem priority=16384,tun_id=0x6a4,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:3,goto_table:20 +cookie=0x0, duration=2425.966s, table=10, n_packets=0, n_bytes=0, send_flow_rem priority=16384,tun_id=0x2,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:3,goto_table:20 +cookie=0x0, duration=885.097s, table=10, n_packets=12, n_bytes=1042, send_flow_rem tun_id=0x6a4,dl_dst=fa:16:3e:5d:3d:cd actions=output:3,goto_table:20 +cookie=0x0, duration=2426.083s, table=10, n_packets=0, n_bytes=0, send_flow_rem tun_id=0x2,dl_dst=fa:16:3e:fa:77:36 actions=output:3,goto_table:20 +cookie=0x0, duration=2429.782s, table=10, n_packets=21, n_bytes=1756, send_flow_rem tun_id=0x640,dl_dst=fa:16:3e:f8:d0:96 actions=output:1,goto_table:20 +cookie=0x0, duration=873.509s, table=10, n_packets=23, n_bytes=1999, send_flow_rem tun_id=0x6a5,dl_dst=fa:16:3e:21:eb:65 actions=output:3,goto_table:20 +cookie=0x0, duration=2418.518s, table=10, n_packets=24, n_bytes=2084, send_flow_rem tun_id=0x641,dl_dst=fa:16:3e:9b:c1:c7 actions=output:1,goto_table:20 +cookie=0x0, duration=2428.443s, table=10, n_packets=0, n_bytes=0, send_flow_rem 
tun_id=0x1,dl_dst=fa:16:3e:ea:1d:9d actions=output:3,goto_table:20 +cookie=0x0, duration=877.119s, table=20, n_packets=12, n_bytes=1042, send_flow_rem priority=8192,tun_id=0x6a4 actions=drop +cookie=0x0, duration=2415.73s, table=20, n_packets=31, n_bytes=2378, send_flow_rem priority=8192,tun_id=0x641 actions=drop +cookie=0x0, duration=881.815s, table=20, n_packets=26, n_bytes=2168, send_flow_rem priority=8192,tun_id=0x6a5 actions=drop +cookie=0x0, duration=2424.74s, table=20, n_packets=21, n_bytes=1756, send_flow_rem priority=8192,tun_id=0x640 actions=drop +cookie=0x0, duration=882.005s, table=20, n_packets=37, n_bytes=3410, priority=16384,tun_id=0x6a5,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:8,output:7 +cookie=0x0, duration=2424.884s, table=20, n_packets=22, n_bytes=1864, send_flow_rem priority=16384,tun_id=0x640,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:2 +cookie=0x0, duration=2415.83s, table=20, n_packets=38, n_bytes=3228, send_flow_rem priority=16384,tun_id=0x641,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:5,output:4 +cookie=0x0, duration=877.333s, table=20, n_packets=15, n_bytes=1306, send_flow_rem priority=16384,tun_id=0x6a4,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:6 +cookie=0x0, duration=878.799s, table=20, n_packets=15, n_bytes=1818, send_flow_rem tun_id=0x6a5,dl_dst=fa:16:3e:86:08:5f actions=output:7 +cookie=0x0, duration=2415.884s, table=20, n_packets=15, n_bytes=1818, send_flow_rem tun_id=0x641,dl_dst=fa:16:3e:02:28:8d actions=output:5 +cookie=0x0, duration=877.468s, table=20, n_packets=15, n_bytes=1818, send_flow_rem tun_id=0x6a4,dl_dst=fa:16:3e:20:cd:8e actions=output:6 +cookie=0x0, duration=882.102s, table=20, n_packets=14, n_bytes=1733, send_flow_rem tun_id=0x6a5,dl_dst=fa:16:3e:68:40:4a actions=output:8 +cookie=0x0, duration=2415.171s, table=20, n_packets=15, n_bytes=1818, send_flow_rem tun_id=0x641,dl_dst=fa:16:3e:1a:49:61 actions=output:4 +cookie=0x0, duration=2424.998s, table=20, 
n_packets=24, n_bytes=2532, send_flow_rem tun_id=0x640,dl_dst=fa:16:3e:93:20:1e actions=output:2

For more on TEPs, see the document authored by Ben Pfaff, who needs no introduction; it can be found here.
Next, take a look at the flowmods. The pipeline has been broken down into three tables: a classifier, egress, and ingress. Over the next six months we will be adding services into the pipeline for a much more complete implementation. We are looking for user contributions to the roadmap and, even better, code pushed upstream as the project continues to grow.
Lastly, if you want to force availability zones from, say, the “demo” UID, you can add the admin role to different UIDs using the following Keystone client calls:

$ keystone user-role-add --user $(keystone user-list | grep '\sdemo' | awk '{print $2}') \
--role $(keystone role-list | grep 'admin' | awk '{print $2}') \
--tenant_id $(keystone tenant-list | grep '\sdemo' | awk '{print $2}')
$ . ./openrc demo demo
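The role-add call above leans on a `grep | awk` idiom to pull IDs out of keystone's table output. The sketch below shows what that idiom extracts, run against a fabricated sample table ('[[:space:]]' is the portable spelling of the guide's '\sdemo' anchor, and $2 is the id column between the pipes).

```shell
# Fabricated `keystone tenant-list` output for illustration only.
tenant_list='+----------------------------------+---------+
|                id                |   name  |
+----------------------------------+---------+
| 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d | admin   |
| ffeeddccbbaa99887766554433221100 | demo    |
+----------------------------------+---------+'

# Anchor on whitespace before the exact name, then print the id field.
demo_id=$(printf '%s\n' "$tenant_list" | grep '[[:space:]]demo' | awk '{print $2}')
echo "$demo_id"
```

The whitespace anchor matters: without it, a grep for "demo" would also match names that merely contain the substring.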
diff --git a/manuals/howto-openstack/section_ovsdb_project.xml b/manuals/howto-openstack/section_ovsdb_project.xml new file mode 100644 index 000000000..7b708379e --- /dev/null +++ b/manuals/howto-openstack/section_ovsdb_project.xml @@ -0,0 +1,42 @@ + + + + + +]> +
+ OVSDB Project Control and Management Logic
 + OpenFlow v1.3 and OVSDB were used in the OVSDB project's OpenStack implementation. We chose
 not to use any extensions or agents. Open vSwitch supported the necessary
 OpenFlow v1.3 and OVSDB functionality we required for this architecture. Those of us in the
 OVSDB project are fairly agnostic about southbound protocols, as long as there is healthy
 adoption and they are based on open standards such as OpenFlow v1.3, RFC
 7047 (the informational OVSDB RFC) and/or de facto drafts like
 draft-mahalingam-dutt-dcops-vxlan (VXLAN framing). We are keen to see NXM extension
 functionality merged upstream into the OpenFlow specification. The OVS ARP responder is
 something we are beginning to prove out now. Merging the NXM and OXM extensions for ARP and
 tunnel feature parity would make our design and coding lives easier. The overall architecture
 looks something like the following; note the hardware TEPs in the diagram. We have cycles to
 help hardware vendors implement the hardware_vtep database schema (assuming they subscribe to
 open operating systems):
 +
 +
 +
 +
 + The provider segmentation key used in the encap (GRE key/VNI) is a hash of the network and
 tenant IDs: as long as we are subnet bound, networks will need to support
 multi-tenant logical networks until we eradicate L2 altogether. The design is flexible and
 as generic as possible, allowing any vendor to add differentiation on top of the base
 network virtualization. Of course, we have plenty to do between now and stability, so moving
 right along.
 + A quick visual of the OVSDB Neutron implementation code flow itself and how it ties into
 the controller project and OpenStack:
 +
 +
 +
 +
 +
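The text above says the segmentation key is a hash of the network and tenant IDs. Purely as an illustration of that idea (the real derivation lives in the OVSDB project's Java code and is not this hash; the UUIDs below are examples), a shell sketch of mapping the pair into the 24-bit VNI space:

```shell
# Example Neutron network and tenant UUIDs (fabricated for illustration).
network_id="3f0cfbd2-f23c-481a-8698-3b2dcb7c2657"
tenant_id="9f8e7d6c5b4a39281706f5e4d3c2b1a0"
# Hash the concatenated pair, then fold it into the 24-bit range that a
# VXLAN VNI (or GRE key) can carry: 0 .. 2^24 - 1.
hash=$(printf '%s%s' "$network_id" "$tenant_id" | cksum | awk '{print $1}')
vni=$((hash % 16777216))
echo "tunnel key: $vni"
```

Whatever the concrete hash, the point is that the same (network, tenant) pair always maps to the same per-tenant segmentation key.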
diff --git a/manuals/howto-openstack/section_start_odl_controller.xml b/manuals/howto-openstack/section_start_odl_controller.xml new file mode 100644 index 000000000..ab4786bf8 --- /dev/null +++ b/manuals/howto-openstack/section_start_odl_controller.xml @@ -0,0 +1,54 @@ + + + + + +]> +
+ Starting the ODL Controller on the OpenStack Node
 +
 + $ cd odl/opendaylight/
 +
 + Check that the configuration is set for OpenFlow v1.3 with the following, to ensure that
 ovsdb.of.version=1.3 is uncommented:
 +
 + $ grep ovsdb.of.version configuration/config.ini
+ovsdb.of.version=1.3
 +
 + If it is not uncommented, adjust the config.ini file to uncomment the line
 ovsdb.of.version=1.3. The file is located at
 /home/odl/opendaylight/configuration/config.ini
 +
 + ### Before ###
+# ovsdb.of.version=1.3
+### After ###
+ovsdb.of.version=1.3
 +
 + Or, paste the following:
 +
 + sudo sed -i 's/#\ ovsdb.of.version=1.3/ovsdb.of.version=1.3/' /home/odl/opendaylight/configuration/config.ini
 +
 + Lastly, start the ODL controller with the following:
 +
 + ./run.sh -XX:MaxPermSize=384m -virt ovsdb -of13
 +
 + When the controller has finished loading, you will see messages like the following in the
 OSGi console:
 +
 +
+2014-02-06 20:41:22.458 UTC [pool-2-thread-4] INFO o.o.controller.frm.flow.FlowProvider - Flow Config Provider started.
+2014-02-06 20:41:22.461 UTC [pool-2-thread-4] INFO o.o.c.frm.group.GroupProvider - Group Config Provider started.
+2014-02-06 20:41:22.507 UTC [pool-2-thread-4] INFO o.o.c.frm.meter.MeterProvider - Meter Config Provider started.
+2014-02-06 20:41:22.515 UTC [pool-2-thread-6] INFO o.o.c.m.s.manager.StatisticsProvider - Statistics Provider started.
 +
 + You can verify the sockets/ports are bound with the following command. Ports 6633, 6640
 and 6653 should all be bound and listening:
 +
 + $ lsof -iTCP | grep 66
+java 1330 odl 154u IPv6 15262 0t0 TCP *:6640 (LISTEN)
+java 1330 odl 330u IPv6 15392 0t0 TCP *:6633 (LISTEN)
+java 1330 odl 374u IPv6 14306 0t0 TCP *:6653 (LISTEN)
 +
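Before pointing the sed one-liner at the real config.ini, you can sanity-check the substitution on a throwaway copy (a hedged sketch: the temp file stands in for /home/odl/opendaylight/configuration/config.ini):

```shell
# Scratch copy containing the commented-out line as shipped.
cfg=$(mktemp)
printf '%s\n' '# ovsdb.of.version=1.3' > "$cfg"
# Same substitution as the guide's command, applied to the scratch copy.
sed -i 's/#\ ovsdb.of.version=1.3/ovsdb.of.version=1.3/' "$cfg"
result=$(grep ovsdb.of.version "$cfg")
echo "$result"
rm -f "$cfg"
```

If the grep prints the uncommented line, the expression is safe to run against the real file.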
diff --git a/manuals/howto-openstack/section_unstack_and_cleanup.xml b/manuals/howto-openstack/section_unstack_and_cleanup.xml new file mode 100644 index 000000000..8161c2ece --- /dev/null +++ b/manuals/howto-openstack/section_unstack_and_cleanup.xml @@ -0,0 +1,30 @@ + + + + + +]> +
+ Unstack and Cleanup DevStack
 + Use the following to tear down the stack and reset the state of the VM to pre-stack.
 Running unstack.sh will kill the stack. Also, look at the OVS config and make sure all
 bridges have been deleted:
 +
 + sudo ovs-vsctl show
 +
 + A handy cleanup is to run a few commands to ensure the stack was effectively torn down.
 Paste the following to create a shell script called ./reallyunstack.sh.
 +
 + echo 'sudo killall nova-api nova-conductor nova-cert nova-scheduler nova-consoleauth nova-compute
+sudo pkill -9 -f qemu
+sudo ovs-vsctl del-manager
+sudo ovs-vsctl del-br br-int
+sudo ovs-vsctl del-br br-tun
+sudo pkill /usr/bin/python
+sudo systemctl restart qpidd.service' > reallyunstack.sh
+chmod +x reallyunstack.sh
+./reallyunstack.sh
 +
diff --git a/manuals/howto-openstack/section_verifying_openstack.xml b/manuals/howto-openstack/section_verifying_openstack.xml new file mode 100644 index 000000000..45dd3e629 --- /dev/null +++ b/manuals/howto-openstack/section_verifying_openstack.xml @@ -0,0 +1,129 @@ + + + + + +]> +
+ Verifying OpenStack is Functioning
 + Verify the stack with the following on either host.
 + There are two KVM hypervisors registered with Nova. Note: openrc will populate the proper
 Keystone credentials for service client commands. These can be viewed using the export
 command from your shell:
 +
 + [odl@fedora-odl-1 devstack]$ . ./openrc admin admin
+[odl@fedora-odl-1 devstack]$ nova hypervisor-list
 +
++----+---------------------+
+| ID | Hypervisor hostname |
++----+---------------------+
+| 1 | fedora-odl-1 |
+| 2 | fedora-odl-2 |
++----+---------------------+
 +
 + Note: when booting the VM instances, there is a minor configuration difference between
 Fedora 19 and Fedora 20.
 + Fedora 19:
 +
 + ~/devstack/addimage.sh
+export IMAGE=cirros-0.3.0-i386-disk.img
 +
 + Fedora 20:
 +
 + export IMAGE=cirros-0.3.1-x86_64-uec
 +
 + Next, boot a couple of VMs and verify the network overlay is created by
 ODL/OVSDB.
 +
 + nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep private | awk '{print $2}') admin-private1
 +
 + Boot a 2nd node:
 +
 + nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep private | awk '{print $2}') admin-private2
 +
 + You can also force a host to boot to a particular hypervisor using the following
 (note: this requires an admin role, which is implicitly granted to the admin
 user):
 +
 + nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep private | awk '{print $2}') demo-private --availability_zone=nova:fedora-odl-1
 +
 + View the state of the VMs:
 +
 + [odl@fedora-odl-1 devstack]$ nova list
 +
++--------------------------------------+----------------+--------+------------+-------------+------------------+
+| ID | Name | Status | Task State | 
Power State | Networks | ++--------------------------------------+----------------+--------+------------+-------------+------------------+ +| 01c30219-255a-4376-867a-45d52e349e87 | admin-private1 | ACTIVE | - | Running | private=10.0.0.2 | +| bdcfd05b-ebaf-452d-b8c8-81f391a0bb75 | admin-private2 | ACTIVE | - | Running | private=10.0.0.4 | ++--------------------------------------+----------------+--------+------------+-------------+------------------+ + + To determine where the host is located, look directly at Libvirt using Virsh: + + [odl@fedora-odl-2 devstack]$sudo virsh list +Id Name State +---------------------------------------------------- +2 instance-00000002 running + + Ping the endpoints by grabbing a namespace for qdhcp or qrouter. This provides an L3 source + to ping the VMs. These will only exist on the controller or wherever you are running those + services in your cloud: + + [odl@fedora-odl-1 devstack]$ ip netns +qdhcp-3f0cfbd2-f23c-481a-8698-3b2dcb7c2657 +qrouter-992e450a-875c-4721-9c82-606c283d4f92 +[odl@fedora-odl-1 devstack]$ sudo ip netns exec qdhcp-3f0cfbd2-f23c-481a-8698-3b2dcb7c2657 ping 10.0.0.2 +PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. +64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.737 ms +64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.578 ms +^C +--- 10.0.0.2 ping statistics --- +2 packets transmitted, 2 received, 0% packet loss, time 1001ms +rtt min/avg/max/mdev = 0.578/0.657/0.737/0.083 ms +[odl@fedora-odl-1 devstack]$ sudo ip netns exec qdhcp-3f0cfbd2-f23c-481a-8698-3b2dcb7c2657 ping 10.0.0.4 +PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. +64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=2.02 ms +64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=1.03 ms +^C +--- 10.0.0.4 ping statistics --- +2 packets transmitted, 2 received, 0% packet loss, time 1001ms +rtt min/avg/max/mdev = 1.037/1.530/2.023/0.493 ms + + Verify the OF13 flow modifications. 
+ + [odl@fedora-odl-2 devstack]$ sudo ovs-ofctl -O OpenFlow13 dump-flows br-int +OFPST_FLOW reply (OF1.3) (xid=0x2): +cookie=0x0, duration=2044.758s, table=0, n_packets=23, n_bytes=2292, send_flow_rem in_port=2,dl_src=fa:16:3e:f5:03:2e actions=set_field:0x1->tun_id,goto_table:10 +cookie=0x0, duration=2051.364s, table=0, n_packets=30, n_bytes=3336, send_flow_rem tun_id=0x1,in_port=1 actions=goto_table:20 +cookie=0x0, duration=2049.553s, table=0, n_packets=0, n_bytes=0, send_flow_rem tun_id=0x2,in_port=1 actions=goto_table:20 +cookie=0x0, duration=2044.724s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=8192,in_port=2 actions=drop +cookie=0x0, duration=2576.478s, table=0, n_packets=410, n_bytes=36490, send_flow_rem dl_type=0x88cc actions=CONTROLLER:56 +cookie=0x0, duration=2044.578s, table=10, n_packets=0, n_bytes=0, send_flow_rem priority=8192,tun_id=0x1 actions=goto_table:20 +cookie=0x0, duration=2051.322s, table=10, n_packets=10, n_bytes=1208, send_flow_rem priority=16384,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:1,goto_table:20 +cookie=0x0, duration=2049.477s, table=10, n_packets=0, n_bytes=0, send_flow_rem priority=16384,tun_id=0x2,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:1,goto_table:20 +cookie=0x0, duration=2050.621s, table=10, n_packets=11, n_bytes=944, send_flow_rem tun_id=0x1,dl_dst=fa:16:3e:00:c4:97 actions=output:1,goto_table:20 +cookie=0x0, duration=2049.641s, table=10, n_packets=0, n_bytes=0, send_flow_rem tun_id=0x2,dl_dst=fa:16:3e:c6:00:e1 actions=output:1,goto_table:20 +cookie=0x0, duration=2051.415s, table=10, n_packets=2, n_bytes=140, send_flow_rem tun_id=0x1,dl_dst=fa:16:3e:f7:3d:96 actions=output:1,goto_table:20 +cookie=0x0, duration=2048.058s, table=10, n_packets=0, n_bytes=0, send_flow_rem tun_id=0x1,dl_dst=fa:16:3e:e1:a7:e1 actions=output:1,goto_table:20 +cookie=0x0, duration=2044.517s, table=20, n_packets=13, n_bytes=1084, send_flow_rem priority=8192,tun_id=0x1 actions=drop +cookie=0x0, 
duration=2044.608s, table=20, n_packets=21, n_bytes=2486, send_flow_rem priority=16384,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:2
+cookie=0x0, duration=2044.666s, table=20, n_packets=17, n_bytes=1898, send_flow_rem tun_id=0x1,dl_dst=fa:16:3e:f5:03:2e actions=output:2
 +
 + Define new networks with VXLAN or GRE encapsulation, specifying the segmentation ID.
 In this case, GRE:
 +
 + neutron net-create gre1 --tenant_id $(keystone tenant-list | grep '\sadmin' | awk '{print $2}') --provider:network_type gre --provider:segmentation_id 1300
+neutron subnet-create gre1 10.200.1.0/24 --name gre1
 +
 +
 + neutron net-create gre2 --tenant_id $(keystone tenant-list | grep '\sadmin' | awk '{print $2}') --provider:network_type gre --provider:segmentation_id 1310
+neutron subnet-create gre2 10.200.2.0/24 --name gre2
 +
 + Then boot instances using those networks:
 +
 + nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep 'gre1' | awk '{print $2}') gre1-host
+nova boot --flavor m1.tiny --image $(nova image-list | grep $IMAGE'\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep 'gre2' | awk '{print $2}') gre2-host
 +
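If you need more than two such networks, the pair of net-create calls above generalizes naturally. A dry-run sketch that only prints one command per "name VNI" pair (the names and segmentation IDs are arbitrary examples; pipe the output to sh, or remove the echo, to actually create them):

```shell
# Build one neutron net-create command per line of the here-doc.
cmds=$(
  while read -r name vni; do
    echo "neutron net-create $name --provider:network_type gre --provider:segmentation_id $vni"
  done <<'EOF'
gre1 1300
gre2 1310
gre3 1320
EOF
)
printf '%s\n' "$cmds"
```

Each printed line still needs the --tenant_id lookup shown above if you want the networks owned by a specific tenant.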
diff --git a/manuals/install-guide/bk-install-guide.xml b/manuals/install-guide/bk-install-guide.xml new file mode 100644 index 000000000..f5fce2688 --- /dev/null +++ b/manuals/install-guide/bk-install-guide.xml @@ -0,0 +1,64 @@
 +
 +
 +
 + OpenDaylight Installation Guide
 +
 + Installation Guide
 +
 +
 +
 +
 +
 +
 +
 + Linux Foundation
 +
 +
 +
 +
 + 2014
 +
 + Linux Foundation
 +
 +
 + hydrogen
 +
 + OpenDaylight
 +
 +
 +
 +
 +
 + Copyright details are filled in by the template.
 +
 +
 +
 +
 + OpenDaylight is an open platform for network programmability to enable SDN and create a
 + solid foundation for NFV for networks at any size and scale. OpenDaylight software is a
 + combination of components including a fully pluggable controller, interfaces, protocol
 + plug-ins and applications.
 +
 +
 +
 +
 + 2014-02-24
 +
 +
 +
 +
 + First edition of this document.
 +
 +
 +
 +
 +
 +
 +
 +
 +
 diff --git a/manuals/install-guide/ch_install.xml b/manuals/install-guide/ch_install.xml new file mode 100644 index 000000000..aaf6e716a --- /dev/null +++ b/manuals/install-guide/ch_install.xml @@ -0,0 +1,23 @@
 +
 +
 +
 +
 + OpenDaylight Installation
 +
 + The OpenDaylight installation process is straightforward and self-contained. OpenDaylight
 + can be installed in your environment by using release archives, RPM, VirtualBox images or
 + even via Docker containers. 
+ + + + + + + + + diff --git a/manuals/install-guide/pom.xml b/manuals/install-guide/pom.xml new file mode 100644 index 000000000..bc801af47 --- /dev/null +++ b/manuals/install-guide/pom.xml @@ -0,0 +1,82 @@ + + + org.opendaylight.documentation + manuals + 0.1.0-SNAPSHOT + ../pom.xml + + 4.0.0 + installguide + jar + OpenDaylight Docs - Manuals - Install Guide + + + local + 1 + + + + + + + + com.inocybe.api + sdndocs-maven-plugin + 0.1.0 + + + + + generate-webhelp + + generate-webhelp + + generate-sources + + enduser + bk-install-guide.xml + + appendix toc,title + article/appendix nop + article toc,title + book toc,title,figure,table,example,equation + chapter toc,title + section toc + part toc,title + qandadiv toc + qandaset toc + reference toc,title + set toc,title + + user-guide + user-guide + + + + + enduser + 1 + 0 + 1 + 0 + false + true + true + . + mlemay@inocybe.com + opendaylight + 2.6in + 0 + http://docs.opendaylight.org/user-guide/content/ + ${basedir}/../glossary/glossary-terms.xml + + + + + diff --git a/manuals/install-guide/section_install_docker.xml b/manuals/install-guide/section_install_docker.xml new file mode 100644 index 000000000..81ca0b913 --- /dev/null +++ b/manuals/install-guide/section_install_docker.xml @@ -0,0 +1,191 @@ + + + + + +]> +
+ Installing using Docker Image +
+ What is Docker
 + Docker, provided by docker.io and
 available in most Linux distributions as well as on Mac OS X and Windows, is an
 open-source project for easily creating lightweight, portable, self-sufficient containers
 from any application. The same container that a developer builds and tests on a laptop
 can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds
 and more.
 + For more information on Docker, please read docker.io's
 documentation.
 +
+
+ The sudo command and the docker Group + (reprinted from docker.io's basic documentation): + The docker daemon always runs as the root user, and since Docker version 0.5.2, the + docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket + is owned by the user root, and so, by default, you can access it with sudo. + Starting in version 0.5.3, if you (or your Docker installer) create a Unix group + called docker and add users to it, then the docker daemon will make the ownership of the + Unix socket read/writable by the docker group when the daemon starts. The docker daemon + must always run as the root user, but if you run the docker client as a user in the + docker group then you don't need to add sudo to all the client commands. +
+
+ OpenDaylight Docker Images
 + There are public images available via the public Docker registry. You can find the
 images by issuing a docker search command looking for 'opendaylight', e.g.
 +
 + $docker search opendaylight
+Found 3 results matching your query ("opendaylight")
+ NAME DESCRIPTION
+ opendaylight/base-edition The base OpenDaylight SDN controller
+ opendaylight/serviceprovider-edition The service provider version of the OpenDaylight SDN controller
+ opendaylight/virtualization-edition The virtualization version of the OpenDaylight SDN controller
 +
 + Each of these images has version tags that allow a specific version to be selected by
 name. latest is also a supported tag identifying the latest official release. For
 the first release of OpenDaylight, the version tag is hydrogen.
 +
+
+ Using the Image
 + The OpenDaylight docker image is meant to be used to start an instance of the
 OpenDaylight SDN controller, and that process will be invoked when the docker image is
 run. Any command line options you append to the docker run command will be passed on to
 the OpenDaylight run.sh startup script. In its simplest form you can invoke an
 instance of the OpenDaylight controller using the command:
 +
 + $docker run -d <image-identifier>
 +
 + Where <image-identifier> can be one of the pre-built image references, e.g.
 opendaylight/base-edition. Additional information and options for 'running' a docker
 image can be found at docker.io's run
 documentation.
 +
+ Ports
 + The OpenDaylight controller image will expose the following ports from the
 container to the host system:
 +
 +
 +
 + 1088 - JMX access
 +
 +
 + 1830 - Netconf use
 +
 +
 + 2400 - OSGi console
 +
 +
 + 4342 - Lisp Flow Mapping (for Service Provider Edition only)
 +
 +
 + 5666 - ODL Internal clustering RPC
 +
 +
 + 6633 - OpenFlow use
 +
 +
 + 6653 - OpenFlow use
 +
 +
 + 7800 - ODL Clustering
 +
 +
 + 8000 - Java debug access
 +
 +
 + 8080 - OpenDaylight web portal
 +
 +
 + 8383 - Netconf use
 +
 +
 + 12001 - ODL Clustering
 +
 +
 +
 + By default these ports will not be mapped to ports on the host system (i.e. the
 system on which the docker run command is
 invoked). To understand how to enable docker container instances to communicate
 without having to 'hard wire' the port information, see docker.io's documentation on linking.
 + If you wish to map these ports to specific port numbers on the host system, this
 can be accomplished as command line options to the docker run command using the
 'port map' option specified using the -p option.
 + The syntax for this option is documented in docker.io's run documentation, but is
 essentially -p
 <host-port>:<container-port>.
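Putting the port map option together for a few commonly needed ports from the list above, a hedged dry-run sketch (which ports you publish, and the host-side numbers, are your choice; this builds and prints the command rather than running it):

```shell
# Web portal plus both OpenFlow ports, taken from the table above.
ports="8080 6633 6653"
cmd="docker run -d"
for p in $ports; do
  # Use the same number on the host and in the container for simplicity.
  cmd="$cmd -p $p:$p"
done
cmd="$cmd opendaylight/base-edition"
echo "$cmd"
```

Running the printed command publishes those three container ports on the host; any port left off the list stays reachable only on the container's own address.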
+
+
+ Clustering
 + OpenDaylight supports the concept of clustering using a command line option -Dsupernodes to support high availability.
 + The docker images can be used to set up a cluster on a single docker server (host)
 using the docker naming and linking capability along with some modifications that were
 made to OpenDaylight's processing of the supernodes host specifications.
 + NOTE: The cluster configuration setup described in this
 document does not work for containers that are running on separate hosts. Supporting
 clustering using docker images across hosts is an advanced topic that relies on
 setting up virtual networks between the containers and is beyond the scope of this
 introduction.
 + To support docker based clustering, the syntax of the supernodes parameter has been extended. The important changes
 are:
 +
 +
 +
 + +self - interpreted as a reference to
 the local host's address (not 127.0.0.1) and will be resolved to an IP
 address through the environment variable HOSTNAME.
 +
 +
 + +<name> - interpreted as a
 reference to another container, <name>, and will be resolved using the environment variables
 defined by docker when the -link command
 line option is used
 +
 +
 +
 + It is important to note that these extensions will only be used if OpenDaylight
 determines that it is running inside a container.
 + This is determined by the value of the environment variable container being set to
 lxc.
 + All values not prefixed by a + will be interpreted
 normally. 
Below is an example of starting up a three-node cluster using this syntax:
 + $ docker run -d -name node1 opendaylight/base -Dsupernodes=+self
 + a8435cc23e13cb4e04c3c9788789e7e831af61c735d14a33025b3dd6c76e2938
 + $ docker run -d -name node2 -link node1:n1 opendaylight/base -Dsupernodes=+self:+n1
 + fa0b37dfd216291e36fd645a345751a1a6079123c99d75326a5775dce8414a93
 + $ docker run -d -name node3 -link node1:n1 opendaylight/base -Dsupernodes=+self:+n1
 + 9ad6874aa85cad29736030239baf836f46ceb0c242baf873ab455674040d96b1
 +
 + The cluster can be verified through the OpenDaylight user interface. This can be
 accomplished by first determining the IP address of one of the nodes:
 + $ docker inspect -format='{{.NetworkSettings.IPAddress}}' node1
 + 172.17.0.46
 +
 + After determining the IP address you can view the web interface by typing http://172.17.0.46:8080 in the browser
 address bar, authenticating with the default user name and password (admin/admin), and
 then viewing the cluster information by selecting Cluster from the right hand drop down
 menu. A popup window should be displayed that shows all the nodes in the cluster with
 the master marked with a C and the node to which you are currently connected marked with
 a * (asterisk).
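The three docker run invocations above follow one pattern, which can be captured in a hypothetical dry-run helper that prints the command for each node instead of executing it (node names, link aliases, and supernodes strings are taken from the example above):

```shell
# Print the docker run command for one cluster node.
# $1 = container name, $2 = supernodes string, $3 = optional link spec.
start_node() {
  name=$1; super=$2; link=${3:-}
  cmd="docker run -d -name $name"
  if [ -n "$link" ]; then
    cmd="$cmd -link $link"
  fi
  cmd="$cmd opendaylight/base -Dsupernodes=$super"
  echo "$cmd"
}
n1=$(start_node node1 +self)
n2=$(start_node node2 +self:+n1 node1:n1)
n3=$(start_node node3 +self:+n1 node1:n1)
printf '%s\n' "$n1" "$n2" "$n3"
```

Pipe each printed line to sh (or replace the final echo with an eval) to actually start the containers.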
+
diff --git a/manuals/install-guide/section_install_rpm.xml b/manuals/install-guide/section_install_rpm.xml new file mode 100644 index 000000000..917258640 --- /dev/null +++ b/manuals/install-guide/section_install_rpm.xml @@ -0,0 +1,62 @@ + + + + + +]> +
+ Installing from RPM + When using a RedHat-based distribution the easiest way to install OpenDaylight is to use + the prebuilt RPM packages. This can be done by downloading the packages or installing from + the YUM repository. +
+ Installing from YUM Repository + Use this method to install OpenDaylight using the yum repo: + + Install the yum repo: Download the yum repository file + + + Install OpenDaylight Edition of your choice + OpenDaylight Base Edition + + $sudo yum install opendaylight + + OpenDaylight Virtualization + Edition + + $sudo yum install opendaylight-virtualization + + OpenDaylight Service Provider + Edition + + $sudo yum install opendaylight-serviceprovider + + + +
+
+ Installing from downloaded RPM
 + If you have downloaded the RPM artifacts directly, you can install OpenDaylight from the
 RPMs with
 +
 + $sudo rpm -Uvh /path/to/rpms/*.rpm
 +
 + or with
 +
 + $sudo yum localinstall /path/to/rpms/*.rpm
 +
+
+ Managing Services
 + The main content of OpenDaylight Hydrogen is in a directory called opendaylight, where you will see the following
 files:
 +
+
+ Configuration + Add configuration instructions here +
+
diff --git a/manuals/install-guide/section_install_virtualbox.xml b/manuals/install-guide/section_install_virtualbox.xml new file mode 100644 index 000000000..2350bd58e --- /dev/null +++ b/manuals/install-guide/section_install_virtualbox.xml @@ -0,0 +1,159 @@ + + + + + +]> +
+ Installing using the VirtualBox Image + You can find the OpenDaylight Hydrogen Release VirtualBox Image on the download page here +
+ VM description + + + + Installed SW: + + Java 1.7 + OpenJDK + + + OpenDaylight release distributions + + + mininet + 2.1.0 + + + Open vSwitch + 2.0.0 + + CPqD + ofsoftswitch13 + Robot framework 2.8.3 + integration test scripts + VTN coordinator + + Wireshark + + + + + VM configuration: + + odl_server: Ubuntu 13.04 server, 8GB HDD, ova file + size=2GB + + + odl_desktop: Ubuntu 13.04 desktop (GUI), 20GB HDD, ova file + size=3GB + + + + + +
+
+ Installation Procedure
 +
+ Prerequisites + + + + + + Virtualbox (if you use QEMU or + VMware you can find instructions online on how to convert ova + file to these) + + + + + +
+
+ Installation Steps
 +
 +
 +
 + Download the VM ova file from the link above
 +
 +
 + Open VirtualBox and import the appliance
 +
 +
 + Configure the VM with the following recommended settings
 +
 +
 +
 + Processor: 4x CPU if
 you plan to run the controller in the VM, just 1 if you
 don't
 +
 +
 + RAM: 4GB if you plan to
 run the controller in the VM, or just 1GB if you
 don't
 +
 +
 + Network: 1x NIC, bridge
 mode is recommended, otherwise NAT (to share your Internet
 connection) or host-only (creates internal network)
 +
 +
 +
 +
 +
 + Start the VM
 +
 +
 + Log in
 +
 +
 +
 + for the Ubuntu VM, log in with mininet/mininet
 +
 +
 + for Fedora (where available), log in with odl/odl; the
 root password is "password"
 +
 +
 +
 +
 +
 + Open README.txt
 +
 +
 +
+
+
+ Using the VM + This VM can be used in two scenarios: + + + + Self contained: Both OpenDaylight and the mininet network emulator will + run in the this VM + + + Network Emulator: The mininet network emulator will run in this VM, and + OpenDaylight can be run on an external machine or another VM + + + +
+
diff --git a/manuals/install-guide/section_install_zip.xml b/manuals/install-guide/section_install_zip.xml new file mode 100644 index 000000000..f16403236 --- /dev/null +++ b/manuals/install-guide/section_install_zip.xml @@ -0,0 +1,147 @@ + + + + + +]> +
+ Installing from Zip
 + Installing from zip is an easy way to get started with OpenDaylight. When installing from
 zip, the process is as simple as running the completely packaged environment. However, unlike
 the official distribution packages, OpenDaylight will have to be upgraded manually after each
 release when installing from zip files.
+ Prerequisites
 + In order to be able to install and run the zip file, the following prerequisites have
 to be fulfilled:
 +
 + A Java 1.7 compatible JDK or JRE has to be installed (e.g. Oracle JDK 1.7
 or OpenJDK 1.7)
 +
 +
 + In general, OpenDaylight requires appropriate setting of the JAVA_HOME
 directory
 +
 +
 + More information can be found in the OpenDaylight Hydrogen Release Notes
 +
 +
 +
 +
 + On some platforms, there are known issues with Oracle Java 1.7.0_21 and
 1.7.0_25, but 1.7.0_45 and 1.7.0_51 have worked fine
 +
 +
+
+ Getting the Zip File + You can find the OpenDaylight Hydrogen Release Base Edition zip file on the download + page here: + http://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/distributions-base/0.1.1/distributions-base-0.1.1-osgipackage.zip +
+
+ Understanding Structure + The main content of OpenDaylight Hydrogen is in a directory called opendaylight, where you will see the following files: + + run.sh + + launches OpenDaylight on Linux/Mac/Unix systems + + + + run.bat + + launches OpenDaylight on Windows systems + + + + version.properties + + indicates the build version + + + + configuration + + basic initialization files (internal to OpenDaylight) + + + + lib + + Java libraries + + + +
+
+ Running OpenDaylight
 + To launch OpenDaylight, follow these easy steps from the root
 opendaylight directory
 +
 +
 + Enter ./run.sh or ./run.bat with
 administrator privileges to launch OpenDaylight.
 +
 +
 +
 + Starting
 +
 + To run OpenDaylight in the background, enter ./run.sh
 -start with administrator privileges.
 +
 +
 +
 +
 +
 + Stopping
 +
 + To stop OpenDaylight running in the background,
 enter ./run.sh -stop with administrator
 privileges.
 +
 +
 +
 +
 +
 + Status
 +
 + To see the status of OpenDaylight,
 enter ./run.sh -status. It will show whether it is
 running or has been stopped.
 +
 +
 +
 +
 +
 +
 +
 + Navigate to http://<ip-address-of-machine-where-you-ran-opendaylight>:8080
 to open the web interface, then use the following credentials to log in:
 +
 + User: admin
 +
 +
 + Password: admin
 +
 +
 +
 + If you are running OpenDaylight on the same machine as your browser, you
 can browse to http://localhost:8080 or
 http://127.0.0.1:8080 to avoid
 needing to know the IP address of the machine you are using.
 +
 + You will now have a completely running OpenDaylight installation.
 +
 +
+
\ No newline at end of file diff --git a/manuals/pom.xml b/manuals/pom.xml new file mode 100644 index 000000000..f387f1254 --- /dev/null +++ b/manuals/pom.xml @@ -0,0 +1,32 @@ + + + org.opendaylight.documentation + root + 0.1.0-SNAPSHOT + ../pom.xml + + 4.0.0 + org.opendaylight.documentation + manuals + 0.1.0-SNAPSHOT + OpenDaylight Docs - Manuals + pom + https://wiki.opendaylight.org/view/CrossProject:Documentation_Group + + scm:git:ssh://git.opendaylight.org:29418/documentation.git + scm:git:ssh://git.opendaylight.org:29418/documentation.git + https://wiki.opendaylight.org/view/CrossProject:Integration_Group + HEAD + + + http://nexus.opendaylight.org/content + UTF-8 + + + glossary + install-guide + + + diff --git a/manuals/user-guide/Linux_Foundation_logo.png b/manuals/user-guide/Linux_Foundation_logo.png new file mode 100644 index 000000000..e7322293b Binary files /dev/null and b/manuals/user-guide/Linux_Foundation_logo.png differ diff --git a/manuals/user-guide/bk-user-guide.xml b/manuals/user-guide/bk-user-guide.xml new file mode 100644 index 000000000..f5fce2688 --- /dev/null +++ b/manuals/user-guide/bk-user-guide.xml @@ -0,0 +1,64 @@ + + + + OpenDaylight User Guide + + End User Guide + + + + + + + + Linux Foundation + + + + + 2014 + + Linux Foundation + + + hydrogen + + OpenDaylight + + + + + + Copyright details are filled in by the template. + + + + + OpenDaylight is an open platform for network programmability to enable SDN and create a + solid foundation for NFV for networks at any size and scale. OpenDaylight software is a + combination of components including a fully pluggable controller, interfaces, protocol + plug-ins and applications. + + + + + 2014-02-24 + + + + + First edition of this document. + + + + + + + + + + diff --git a/manuals/user-guide/ch_base_edition.xml b/manuals/user-guide/ch_base_edition.xml new file mode 100644 index 000000000..a1821d2e6 --- /dev/null +++ b/manuals/user-guide/ch_base_edition.xml @@ -0,0 +1,96 @@ + + + + + +]> +
+ Hydrogen Base User Guide + The Base edition of OpenDaylight is designed for testing and experimental purposes. Please + see the following sections for more information: + + + + Installation and Configuration + + + OpenFlow + + + NetConf + The table below shows the components that are included in the controller + platform: + + + + <tgroup cols="2"> + <colspec colname="Components" colnum="1" colwidth=".2*"/> + <colspec colname="Description" colnum="2" colwidth=".75*"/> + <thead> + <row> + <entry>Components</entry> + <entry>Descriptions</entry> + </row> + </thead> + <tbody> + <row> + <entry>Clustering Manager</entry> + <entry>Manages shared cache across controller instances</entry> + </row> + <row> + <entry>Container Manager</entry> + <entry>Manages Network Slicing</entry> + </row> + <row> + <entry>Switch Manager</entry> + <entry>Handles SB devices Information</entry> + </row> + <row> + <entry>Statistics Manager</entry> + <entry>Collects Statistics information</entry> + </row> + <row> + <entry>Topology Manager</entry> + <entry>Builds network topology</entry> + </row> + <row> + <entry>Host Tracker</entry> + <entry>Tracks about connected hosts</entry> + </row> + <row> + <entry>Forwarding Rules Manager</entry> + <entry>Installs Flows on SB devices</entry> + </row> + <row> + <entry>ARP Handler</entry> + <entry>Handles ARP messages</entry> + </row> + <row> + <entry>Forwarding Manager</entry> + <entry>Installs Routes and tracks next-hop</entry> + </row> + <row> + <entry>OpenFlow Plugin</entry> + <entry>Interacts with OF switches</entry> + </row> + <row> + <entry>Netconf Plugin</entry> + <entry>Interacts with Netconf switches</entry> + </row> + </tbody> + </tgroup> + </table> + </listitem> + </itemizedlist> + </para> + <para>This edition includes only OpenFlow, OVSDB and NetConf southbound with only the Base + Network Service functions. 
The following diagram shows the OpenDaylight Base edition + architecture in detail: </para> + <para><inlinemediaobject> + <imageobject> + <imagedata fileref="../../../../800px-Opendaylight_Base_Edition.png"/> + </imageobject> + </inlinemediaobject></para> + </section> diff --git a/manuals/user-guide/ch_service_provider_edition.xml b/manuals/user-guide/ch_service_provider_edition.xml new file mode 100644 index 000000000..c001a923f --- /dev/null +++ b/manuals/user-guide/ch_service_provider_edition.xml @@ -0,0 +1,143 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!DOCTYPE section [ + <!-- Some useful entities borrowed from HTML --> +<!ENTITY ndash "–"> +<!ENTITY mdash "—"> +<!ENTITY hellip "…"> +]> +<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" + xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ovsdb_project"> + <title>Hydrogen Service Provider User Guide + Overview and + Architecture + The Service Provider edition of OpenDaylight is designed for network operator use. It does + not include OVSDB, VTN or DOVE, but does include the SNMP, BGP-LS, PCEP, and LISP southbound plugins, along with + the Affinity Service and the LISP Service northbound. The following diagram shows the + OpenDaylight Service Provider edition architecture in detail: + + + + + + Installation + Guide + The installation instructions for the Service Provider edition can be found here. + Configuration + To configure the OpenDaylight Service Provider edition to use the OpenFlow 1.3 plugin, start + the OpenDaylight controller with the -of13 option. If you do not use this option, the controller + will default to OpenFlow 1.0.
+ + + + To start Mininet for the OpenFlow 1.3 simulation, use the following command: + $ sudo mn --controller=remote,ip=a.b.c.d + --topo tree,2 --switch ovsk,protocols=OpenFlow13 + + + To start Mininet for the OpenFlow 1.0 simulation, use the following command: + $ sudo mn + --controller=remote,ip=a.b.c.d --topo tree,2 + + + + Web / Graphical + Interface + The graphical user interface is the same as the one for the Base edition. + Release Notes + (link) + The release notes for the Service Provider edition can be found here. + SNMP4SDN + Overview and Architecture + Current SDN technology is usually assumed to be based on network infrastructures built from + OpenFlow switches. However, SDN is not limited to OpenFlow; the OpenDaylight SAL, for example, + can support multiple southbound protocols. To broaden the range of underlying switches + supported in OpenDaylight, Ethernet switches should also be considered. + Commodity Ethernet switches have the advantage of low price and are programmable to some + extent (for example, using the CLI and SNMP to modify the ACL, MAC table, forwarding table, and so on). In an + SDN built on commodity Ethernet switches, the upper-layer applications could be responsible + for making all the forwarding decisions for each switch, and the switches execute data plane + forwarding as assigned. Therefore, we believe that commodity Ethernet switches have their + advantages and warrant a position in SDN technology development. + Off-the-shelf commodity Ethernet switches can commonly be configured via SNMP, + and an Ethernet switch can actively report its status to the administrative computer (i.e. the + OpenDaylight controller) using SNMP traps. Therefore, we propose an SNMP southbound plugin to + control underlying SNMP-capable devices such as off-the-shelf commodity Ethernet switches. In + addition to SNMP support, this plugin will provide capabilities to manage configurations + that can only be accessed via the CLI, e.g.
ACL, disabling flooding, etc., since such + configurations are necessary when using Ethernet switches for SDN. The project therefore has + three phases: (1) an SNMP southbound plugin to + configure Ethernet switches via SNMP; (2) CLI support in the plugin + for settings that SNMP cannot access; and (3) SAL extensions: for (1) and (2) we will contribute + extensions to the SAL configuration APIs to provide additional APIs supporting the SNMP and CLI + usage described above. + The diagram below shows the described components: + + + + + + An overview of the project can be found here. + Installation Guide + A guide to installation and testing can be found here. + Tutorial / How-To + + + + User Guide + + + Developer Guide + + + + Programmatic Interfaces + The proposed SAL API for the SNMP southbound plugin can be found here. + Help + Sign up for the snmp4sdn-dev mailing list. + BGP-LS PCEP + You can find a basic howto and guide here. + Lisp Flow Mapping + Overview and Architecture + The Locator/ID Separation Protocol (LISP) is a technology that provides a flexible + map-and-encap framework that can be used for overlay network applications, such as data + center network virtualization and Network Function Virtualization (NFV). LISP introduces + two namespaces: Endpoint Identifiers (EIDs) and Routing Locators (RLOCs). In a + virtualization environment, EIDs can be viewed as a virtual address space and RLOCs can be + viewed as the physical network address space. + The LISP framework decouples the network control plane from the forwarding plane by providing: + (1) a data plane that specifies how the virtualized network addresses are encapsulated in + addresses from the underlying physical network, and (2) a control plane that stores the + mapping of the virtual-to-physical address spaces and the associated forwarding policies, + and serves this information to the data plane on demand.
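The EID-to-RLOC resolution just described can be sketched as a toy map-server that stores prefix-to-locator mappings and answers lookups with a longest-prefix match. The class and method names below are illustrative only, not the project's actual API:

```python
import ipaddress

class ToyMapServer:
    """Toy LISP map-server: stores EID-prefix -> RLOC mappings and
    resolves an EID with a longest-prefix match (illustrative only)."""

    def __init__(self):
        self._mappings = {}  # ipaddress.ip_network -> RLOC identifier

    def register(self, eid_prefix, rloc):
        # A "map-register": associate an EID prefix with a routing locator.
        self._mappings[ipaddress.ip_network(eid_prefix)] = rloc

    def resolve(self, eid):
        # A "map-request": return the RLOC of the most specific covering prefix.
        addr = ipaddress.ip_address(eid)
        best = None
        for prefix, rloc in self._mappings.items():
            if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
                best = (prefix, rloc)
        return best[1] if best else None
```

A server registered with `10.0.0.0/8 -> rloc-a` and `10.1.0.0/16 -> rloc-b` answers `rloc-b` for the EID `10.1.2.3`, mirroring how data plane elements fetch the most specific mapping on demand as new flows arrive.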
Network programmability is achieved + by programming forwarding policies such as transparent mobility, service chaining, and + traffic engineering in the mapping system, where the data plane elements can fetch these + policies on demand as new flows arrive. In this section we explain how the LISP Flow + Mapping project in OpenDaylight can be used to enable advanced SDN and NFV use cases. + The LISP Flow Mapping service provides LISP Mapping System services. This includes LISP + Map-Server and LISP Map-Resolver services that store and serve mapping data to data plane + nodes as well as to OpenDaylight applications. Mapping data can include mappings of virtual + addresses to the physical network addresses where the virtual nodes are reachable or hosted. + Mapping data can also include a variety of routing policies, including traffic engineering + and load balancing. To leverage this service, a northbound API allows OpenDaylight + applications and services to define the mappings and policies in the LISP Mapping Service. + This project also includes a southbound LISP plugin that enables LISP data plane devices to + interact with OpenDaylight via the LISP protocol. + The diagram below shows the described components: + + + + + + An overview of the project can be found here. + Please see the LISP Flow Mapping User Guide for more details on how to install and use + this project. + Tutorial / How-To + Please see the Tutorial section of the LISP Flow Mapping User Guide for more details on + how to use this project. + Programmatic Interfaces + The LISP Flow Mapping API can be found here. + Help + Sign up for the lispflowmapping-dev mailing list. + diff --git a/manuals/user-guide/ch_virtualization_edition.xml b/manuals/user-guide/ch_virtualization_edition.xml new file mode 100644 index 000000000..5887d6d09 --- /dev/null +++ b/manuals/user-guide/ch_virtualization_edition.xml @@ -0,0 +1,239 @@ + + + + + +]> +
Hydrogen Virtualization User Guide + Overview and + Architecture + + + + + + Installation + The installation instructions for the Virtualization edition can be found here. + Configuration + To configure the OpenDaylight Virtualization edition to use the OpenFlow 1.3 plugin, start + the OpenDaylight controller with the -of13 option. If you do not use this option, the controller + will default to OpenFlow 1.0. + + + + To start Mininet for the OpenFlow 1.3 simulation, use the following command: + $ sudo mn --controller=remote,ip=a.b.c.d + --topo tree,2 --switch ovsk,protocols=OpenFlow13 + + + To start Mininet for the OpenFlow 1.0 simulation, use the following command: + $ sudo mn + --controller=remote,ip=a.b.c.d --topo tree,2 + + + + Web / Graphical + Interface + The graphical user interface is the same as the one for the Base edition. + Release + Notes + The release notes for the Virtualization edition can be found here. + OVSDB + + + + OVSDB Integration is a bundle for OpenDaylight that implements the Open + vSwitch Database (OVSDB) management protocol, allowing southbound configuration of + vSwitches. It is a critical protocol for network virtualization with Open + vSwitch forwarding elements. + + + The OVSDB Neutron bundle in the Virtualization edition supports network virtualization + using VXLAN and GRE tunnels for OpenStack and CloudStack deployments. + + + + The diagram below shows the OVSDB architecture for Neutron: + + + + + + + + + The feature list and other project documentation can be found here. + + + The design of the project can be found here. + + + + Tutorial / + How-To + An introduction and tutorial for the project can be found here. + Help + + + + Sign up for the ovsdb-dev mailing list.
+ + + Join us in the #opendaylight-ovsdb channel on freenode.net. + + + Visit us at https://plus.google.com/u/0/+opendaylightovsdb/ + + + + Virtual Tenant Network + (VTN) + OpenDaylight Virtual Tenant Network (VTN) is an application that provides multi-tenant + virtual networks on an SDN controller. + The diagram below shows the VTN architecture: + + + + + + The VTN User Guide can be found here: + + + + VTN User Guide + + + + Affinity + Overview and Architecture + + + + The Affinity service provides an API that allows the controller and higher-level + applications to create and share an abstract, topology- and implementation-independent + description of the infrastructure needs, preferences, and behaviors + of workloads that use the network to "talk" to one another. + + + A detailed description of the project can be found here. + + + + Programmatic Interfaces + The Affinity API and its implementation in OpenDaylight can be found here. Additional information + can be found in the Affinity Developer Guide. + Help + Sign up for the affinity-dev mailing list. + OpenDOVE + Overview and Architecture + + + + DOVE (Distributed Overlay Virtual Ethernet) is a network virtualization + platform that provides isolated multi-tenant networks on any IP network in a + virtualized data center. DOVE provides each tenant with a virtual network + abstraction providing layer-2 or layer-3 connectivity and the ability to control + communication using access control policies. Address dissemination and policy + enforcement in DOVE are provided by a clustered directory service. It also + includes a gateway function to enable virtual machines on a virtual network to + communicate with hosts outside the virtual network domain. + + + Users interact with Open DOVE to create and manage virtual networks through + the Open DOVE Management Console (DMC), which provides a REST API for + programmatic virtual network management and a basic graphical UI.
The DMC is + also used to configure the Open DOVE Gateway, which provides connectivity to + external, non-virtualized networks. + + + The Open DOVE Connectivity Server (DCS) supplies address and policy + information to individual Open DOVE vswitches, which implement virtual networks + by encapsulating tenant traffic in overlays that span virtualized hosts in the + data center. The DCS also includes support for high-availability and scale-out + deployments through a lightweight clustering protocol between replicated DCS + instances. The Open DOVE vswitches serve as policy enforcement points for + traffic entering virtual networks. Open DOVE uses the VXLAN encapsulation format + but implements a scalable control plane that does not require the use of IP + multicast in the data center. + + + The DOVE technology was originally developed by IBM Research and has also been + included in commercial products. + + + The diagram below shows the overall DOVE architecture: + + + + + + + + + + + + The Hydrogen release review of the project can be found here. + + + + Installation Guide + The installation instructions for the project can be found here. + Tutorial / How-To + A step-by-step tutorial for this project, covering zero-day tasks, configuring a + basic overlay, and externalizing the basic overlay, can be found here. + Programmatic Interface(s) (link) + + + + The OpenDOVE northbound and southbound APIs can be found here. + + + The OpenDOVE developer guide can be found here. + + + + Help + Sign up for the opendove-dev mailing list. + Defense4All + Overview and Architecture + + + + Defense4All is an SDN application for detecting and mitigating DDoS + attacks. + + + The diagram below shows the overall Defense4All architecture: + + + + + + + + + + + + An overview of the project can be found here. + + + + Installation Guide (link) + The installation instructions for the project can be found here.
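Defense4All's general approach in the overview above — learn a traffic baseline, then flag large deviations as a possible attack — can be illustrated with a toy detector. This is a stand-in to convey the idea only, not the project's actual algorithm; the class name, window size, and threshold are all made up for the example:

```python
from collections import deque

class ToyRateDetector:
    """Flags a traffic sample as anomalous when it exceeds a multiple
    of the moving average over recent samples (illustrative only)."""

    def __init__(self, window=5, threshold=3.0):
        self.samples = deque(maxlen=window)  # recent packets-per-second readings
        self.threshold = threshold           # multiple of baseline that triggers an alert

    def observe(self, packets_per_sec):
        # Baseline is the average of the samples seen before this one.
        baseline = sum(self.samples) / len(self.samples) if self.samples else None
        self.samples.append(packets_per_sec)
        return baseline is not None and packets_per_sec > self.threshold * baseline
```

With steady readings around 100 packets/s, a sudden reading of 500 exceeds three times the baseline and is flagged; the real system additionally diverts suspect traffic for mitigation rather than merely reporting it.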
+ Installation Troubleshooting (link) + Installation troubleshooting information for the project can be found here. + Configuration + Configuration information for the project can be found here. + Tutorial / How-To + An introduction and tutorial for the project can be found here. + Programmatic Interface(s) (link) + The Defense4All API can be found here. + Help + Sign up for the defense4all-dev mailing list. +
diff --git a/manuals/user-guide/images/717px-SNMP4SDN_Architecture.jpg b/manuals/user-guide/images/717px-SNMP4SDN_Architecture.jpg new file mode 100644 index 000000000..caaf68a3b Binary files /dev/null and b/manuals/user-guide/images/717px-SNMP4SDN_Architecture.jpg differ diff --git a/manuals/user-guide/images/800px-D4A_in_odl.jpg b/manuals/user-guide/images/800px-D4A_in_odl.jpg new file mode 100644 index 000000000..e7fecc34c Binary files /dev/null and b/manuals/user-guide/images/800px-D4A_in_odl.jpg differ diff --git a/manuals/user-guide/images/800px-OVSDB-Neutron.png b/manuals/user-guide/images/800px-OVSDB-Neutron.png new file mode 100644 index 000000000..47e205a84 Binary files /dev/null and b/manuals/user-guide/images/800px-OVSDB-Neutron.png differ diff --git a/manuals/user-guide/images/800px-Opendaylight_Base_Edition.png b/manuals/user-guide/images/800px-Opendaylight_Base_Edition.png new file mode 100644 index 000000000..6d157ca04 Binary files /dev/null and b/manuals/user-guide/images/800px-Opendaylight_Base_Edition.png differ diff --git a/manuals/user-guide/images/800px-Opendove-arch.png b/manuals/user-guide/images/800px-Opendove-arch.png new file mode 100644 index 000000000..2e3b46841 Binary files /dev/null and b/manuals/user-guide/images/800px-Opendove-arch.png differ diff --git a/manuals/user-guide/images/800px-Serv-arch.png b/manuals/user-guide/images/800px-Serv-arch.png new file mode 100644 index 000000000..13cae569d Binary files /dev/null and b/manuals/user-guide/images/800px-Serv-arch.png differ diff --git a/manuals/user-guide/images/800px-Virt_edition.png b/manuals/user-guide/images/800px-Virt_edition.png new file mode 100644 index 000000000..07fb6d4d2 Binary files /dev/null and b/manuals/user-guide/images/800px-Virt_edition.png differ diff --git a/manuals/user-guide/images/LISP-ODL-02.jpg b/manuals/user-guide/images/LISP-ODL-02.jpg new file mode 100644 index 000000000..2bb12a6b6 Binary files /dev/null and b/manuals/user-guide/images/LISP-ODL-02.jpg 
differ diff --git a/manuals/user-guide/images/VTN_APPLICATION_ARCHITECTURE.png b/manuals/user-guide/images/VTN_APPLICATION_ARCHITECTURE.png new file mode 100644 index 000000000..49e2bb9dc Binary files /dev/null and b/manuals/user-guide/images/VTN_APPLICATION_ARCHITECTURE.png differ diff --git a/manuals/user-guide/pom.xml b/manuals/user-guide/pom.xml new file mode 100644 index 000000000..75a1cad8b --- /dev/null +++ b/manuals/user-guide/pom.xml @@ -0,0 +1,82 @@ + + + org.opendaylight.documentation + manuals + 0.1.0-SNAPSHOT + ../pom.xml + + 4.0.0 + userguide + jar + OpenDaylight User Guide + + + local + 1 + + + + + + + + com.inocybe.api + sdndocs-maven-plugin + 0.1.0-SNAPSHOT + + + + + generate-webhelp + + generate-webhelp + + generate-sources + + enduser + bk-user-guide.xml + + appendix toc,title + article/appendix nop + article toc,title + book toc,title,figure,table,example,equation + chapter toc,title + section toc + part toc,title + qandadiv toc + qandaset toc + reference toc,title + set toc,title + + user-guide + user-guide + + + + + enduser + 1 + 0 + 1 + 0 + false + true + true + . 
+ mlemay@inocybe.com + opendaylight + 2.6in + 0 + http://docs.opendaylight.org/user-guide/content/ + ${basedir}/../glossary/glossary-terms.xml + + + + + diff --git a/pom.xml b/pom.xml new file mode 100644 index 000000000..5b92d1eca --- /dev/null +++ b/pom.xml @@ -0,0 +1,101 @@ + + 4.0.0 + org.opendaylight.documentation + root + 0.1.0-SNAPSHOT + OpenDaylight Docs + pom + https://wiki.opendaylight.org/view/CrossProject:Documentation_Group + + scm:git:ssh://git.opendaylight.org:29418/documentation.git + scm:git:ssh://git.opendaylight.org:29418/documentation.git + https://wiki.opendaylight.org/view/CrossProject:Integration_Group + HEAD + + + http://nexus.opendaylight.org/content + UTF-8 + + + manuals + + + + + + + opendaylight-mirror + opendaylight-mirror + ${nexusproxy}/groups/public/ + + false + + + true + never + + + + + opendaylight-snapshot + opendaylight-snapshot + ${nexusproxy}/repositories/opendaylight.snapshot/ + + true + + + false + + + + + + + central2 + central2 + ${nexusproxy}/repositories/central2/ + + + opendaylight-snapshot + central2 + ${nexusproxy}/repositories/opendaylight.snapshot/ + + + oss-sonatype + oss-sonatype + https://oss.sonatype.org/content/repositories/snapshots/ + + true + + + + + + + + opendaylight-release + ${nexusproxy}/repositories/opendaylight.release/ + + + + opendaylight-snapshot + ${nexusproxy}/repositories/opendaylight.snapshot/ + + + ${project.artifactId}-site + ./ + + + + + + com.inocybe.api + sdndocs-maven-plugin + 0.1.0 + + + + + diff --git a/tools/README b/tools/README new file mode 100644 index 000000000..e028dc7bf --- /dev/null +++ b/tools/README @@ -0,0 +1 @@ +Location for Tools for the Build diff --git a/web/README b/web/README new file mode 100644 index 000000000..1d7a9c89f --- /dev/null +++ b/web/README @@ -0,0 +1 @@ +Website related artifacts