--- /dev/null
+<center>
+<div class="btn-group" role="group" aria-label="...">
+ {% if prev %}
+ <a class="btn btn-default" href="{{ prev.link|e }}">Prev Page</a>
+ {% else %}
+ <button type="button" class="btn btn-default disabled">Prev Page</button>
+ {% endif %}
+
+ {% if next %}
+ <a class="btn btn-default" href="{{ next.link|e }}">Next Page</a>
+ {% else %}
+ <button type="button" class="btn btn-default disabled">Next Page</button>
+ {% endif %}
+</div>
+</center>
+
#html_theme_options = {}
html_theme_options = {
'bootswatch_theme': "united",
+ 'navbar_sidebarrel': False,
'source_link_position': "footer",
}
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
- '**': ['localtoc.html'],
+ '**': ['localtoc.html', 'relations.html'],
}
# Additional templates that should be rendered to pages, maps page names to
Mode with active resolution of if-features makes YANG statements
containing an if-feature statement conditional based on the supported
features. These features are provided in the form of a QName-based
-java.util.function.Predicate object. In the example below, only two
+java.util.Set object. In the example below, only two
features are supported: example-feature-1 and example-feature-2. The
-Predicate which contains this information is passed to the method
+Set which contains this information is passed to the method
newBuild() and the mode is activated.
.. code:: java
- Predicate<QName> isFeatureSupported == qName -> {
- Set<QName> supportedFeatures == new HashSet<>();
- supportedFeatures.add(QName.create("example-namespace", "2016-08-31", "example-feature-1"));
- supportedFeatures.add(QName.create("example-namespace", "2016-08-31", "example-feature-2"));
- return supportedFeatures.contains(qName);
- }
+ Set<QName> supportedFeatures = ImmutableSet.of(
+ QName.create("example-namespace", "2016-08-31", "example-feature-1"),
+ QName.create("example-namespace", "2016-08-31", "example-feature-2"));
- CrossSourceStatementReactor.BuildAction reactor == YangInferencePipeline.RFC6020_REACTOR.newBuild(isFeatureSupported);
+ CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(supportedFeatures);
-In case when no features should be supported, you should provide a
-Predicate<QName> object whose test() method just returns false.
+If no features should be supported, you should provide an
+empty Set<QName> object.
.. code:: java
- Predicate<QName> isFeatureSupported == qName -> false;
+ Set<QName> supportedFeatures = ImmutableSet.of();
- CrossSourceStatementReactor.BuildAction reactor == YangInferencePipeline.RFC6020_REACTOR.newBuild(isFeatureSupported);
+ CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(supportedFeatures);
When this mode is not activated, all features in the processed YANG
sources are supported.
creating: distribution-karaf-0.5.x-Boron/deploy/
creating: distribution-karaf-0.5.x-Boron/etc/
creating: distribution-karaf-0.5.x-Boron/externalapps/
- ...
- inflating: distribution-karaf-0.5.x-Boron/bin/start.bat
- inflating: distribution-karaf-0.5.x-Boron/bin/status.bat
- inflating: distribution-karaf-0.5.x-Boron/bin/stop.bat
+ ...
+ inflating: distribution-karaf-0.5.x-Boron/bin/start.bat
+ inflating: distribution-karaf-0.5.x-Boron/bin/status.bat
+ inflating: distribution-karaf-0.5.x-Boron/bin/stop.bat
$ cd distribution-karaf-0.5.x-Boron
$ ./bin/karaf
- all
* - Time Series Data Repository (TSDR)
- - Enables support for storing and querying time series data with the
- default data collector for OpenFlow statistics the default data store
- for HSQLDB
- - odl-tsdr-hsqldb-all
+ - Enables support for collecting, storing and querying time series data.
+ TSDR supports collecting the following data:
+
+ * OpenFlow statistics
+ * NETFLOW statistics
+ * sFlow statistics
+ * OpenFlow Controller metrics
+ * SNMP data
+ * SysLog data
+ * RESTCONF data
+
+ TSDR supports the following data stores:
+
+ * HSQLDB
+ * HBase
+ * Cassandra
+
+ TSDR supports the default OpenDaylight RESTCONF and API interfaces and an
+ ElasticSearch interface for all data stores.
+ - odl-tsdr-core, odl-tsdr-hsqldb
- all
* - TSDR Data Collectors
- - Enables support for various TSDR data sources (collectors) including
- OpenFlow statistics, NetFlow statistics, NetFlow statistics, SNMP data,
- Syslog, and OpenDaylight (controller) metrics
- - odl-tsdr-openflow-statistics-collector,
- odl-tsdr-netflow-statistics-collector,
- odl-tsdr-snmp-data-collector,
- odl-tsdr-syslog-collector,
- odl-tsdr-controller-metrics-collector
+ - TSDR collector features include support for collecting the following
+ data:
+
+ * OpenFlow statistics
+ * NETFLOW statistics
+ * sFlow statistics
+ * OpenFlow Controller metrics
+ * SNMP data
+ * SysLog data
+ * RESTCONF data
+
+ - * odl-tsdr-openflow-statistics-collector
+ * odl-tsdr-netflow-statistics-collector
+ * odl-tsdr-sflow-statistics-collector
+ * odl-tsdr-controller-metrics-collector
+ * odl-tsdr-snmp-data-collector
+ * odl-tsdr-syslog-collector
+ * odl-tsdr-restconf-collector
- all
* - TSDR Data Stores
- - Enables support for TSDR data stores including HSQLDB, HBase, and
- Cassandra
- - odl-tsdr-hsqldb, odl-tsdr-hbase, or odl-tsdr-cassandra
+ - TSDR enables support for the following data stores:
+
+ * HSQLDB
+ * HBase
+ * Cassandra
+
+ - * odl-tsdr-hsqldb
+ * odl-tsdr-hbase
+ * odl-tsdr-cassandra
+ - all
+
+ * - TSDR Data Query
+ - TSDR supports the default OpenDaylight RESTCONF and ODL API interfaces
+ for queries to all data stores. It also supports an integrated ElasticSearch query.
+ - odl-tsdr-elasticsearch
- all
* - Topology Processing Framework
* Data Query Service - For external data-driven applications to query data from
TSDR through REST APIs
+* ElasticSearch - Use an external ElasticSearch engine with TSDR integrated support.
* NBI integration with Grafana - Allows visualization of data collected in TSDR
using Grafana
* Data Aggregation Service - Periodically aggregates raw data into larger time granularities
* Cassandra data store - Cassandra implementation of TSDR SPIs
* NetFlow data collector - Collect NetFlow data from network elements
* NetFlowV9 - NetFlow version 9 collector
-* SNMP Data Collector - Integrates with SNMP plugin to bring SNMP data into TSDR
* sFlowCollector - Collects sFlow data from network elements
+* SNMP Data Collector - Integrates with SNMP plugin to bring SNMP data into TSDR
* Syslog data collector - Collects syslog data from network elements
+* Web Activity data collector - Collects ODL RESTCONF queries made to TSDR
TSDR has multiple features to enable the functionality above. After selecting
one of the data stores, enable any of these collectors:
* odl-tsdr-openflow-statistics-collector
* odl-tsdr-netflow-statistics-collector
-* odl-tsdr-controller-metrics-collector
* odl-tsdr-sflow-statistics-collector
+* odl-tsdr-controller-metrics-collector
* odl-tsdr-snmp-data-collector
* odl-tsdr-syslog-collector
+* odl-tsdr-restconf-collector
+
+Enable ElasticSearch support:
+
+* odl-tsdr-elasticsearch
See these TSDR_Directions_ for more information.
rm <cassandra-installation-directory>
It is recommended to restart the Karaf container after uninstallation of the TSDR data store.
+
+
+.. include:: ../../user-guide/tsdr-elastic-search.rst
branch-cutting
simultaneous-release
milestone-readouts
+
+************************
+Supporting Documentation
+************************
+
+The release management team maintains several documents in Google Drive to
+track releases. These documents can be found at this link:
+
+https://drive.google.com/drive/folders/0ByPlysxjHHJaUXdfRkJqRGo4aDg
+
cd scripts/
./odlsign-bulk STAGING_REPO_ID # eg. autorelease-1367
+Verify the distribution-karaf file with the signature.
+
+.. code-block:: bash
+
+ gpg2 --verify distribution-karaf-x.y.z-${RELEASE}.tar.gz.asc distribution-karaf-x.y.z-${RELEASE}.tar.gz
+
+
Releasing OpenDaylight
======================
-- Block submit permissions for registered users and elevate RE's committer rights on gerrit.
+- Block submit permissions for registered users and elevate RE's committer rights on gerrit. **(Helpdesk)**
.. figure:: images/gerrit-update-committer-rights.png
-Subproject commit dd924a1850c79eff2a7c4e430265153f3c5ee67b
+Subproject commit 4b68122061f13a3b6e3834e4c96efea0958f8f8b
-Subproject commit c52902472b21687a6f7dacff2030701750cbca53
+Subproject commit 75c340785f84c8bb3eb678d0a708fa4afe14ab6d
-Subproject commit 58024f9fd65dcc310b01723f6c75bb8298523ec1
+Subproject commit 7fe0b2fdf87a971afa6af18cade219a5b00bc42d
-Subproject commit e1ff28a28c54583059b83c23883350b0641ddc92
+Subproject commit e59ad39056b21ab857257358549702a8e17caa4a
-Subproject commit c0ad3da700b5df7e4482399930c40576689128a9
+Subproject commit 2a4287de2dbc8b7d2924031564d566c0009e3f62
-Subproject commit 8ba3cfaec519fca30b30626d01b64ec842f17e2a
+Subproject commit 855980926d53f158e8d3e76eb0fbe9e13661fb43
-Subproject commit 8a75dc835b6ff248da8762081f8bcc44e432e679
+Subproject commit 047f95d91df694eaa1a9afa77fb1a281f9ababd0
-Subproject commit cc497a5837bb2bb232a83ce1182dda602000b58d
+Subproject commit d616023d09bb2e4238a62fa91814748adf422f6c
-Subproject commit cd0c25e87a2fbfa14bb15be4439b079db2f61b49
+Subproject commit 3f5447275daba8288347865d130af62761087946
-Subproject commit 667f8b3df89af040b3dcbd30b3f4ef2ef9a199f6
+Subproject commit fb987ee16028acb17d58e855882d2d76f0556329
TLS
Transport Layer Security
+CLI
+ Command Line Interface
+
Security Framework for AAA services
-----------------------------------
is automatically installed as part of them. In the cases that APIs are *not*
protected by AAA, this will be noted in the per-project release notes.
-
How to disable AAA
------------------
- Useful for applications in which roles determine allowed
capabilities.
-OpenDaylight contains four implementations:
+OpenDaylight contains five implementations:
- TokenAuthRealm
enhanced logging as well as isolation of all realms in a single package,
which enables easier import by consuming servlets.
+- KeystoneAuthRealm
+
+ - This realm authenticates OpenDaylight users against an OpenStack
+ Keystone server.
+
+ - Disabled out of the box.
+
.. note::
More than one Realm implementation can be specified. Realms are attempted
in order until authentication succeeds or all realm sources are exhausted.
-
+ Edit the **securityManager.realms = $tokenAuthRealm** property in shiro.ini
+ and add all the realms needed separated by commas.
TokenAuthRealm
^^^^^^^^^^^^^^
# Stacked realm configuration; realms are round-robbined until authentication succeeds or realm sources are exhausted.
securityManager.realms = $tokenAuthRealm, $ldapRealm
+KeystoneAuthRealm
+^^^^^^^^^^^^^^^^^
+
+How it works
+~~~~~~~~~~~~
+
+This realm authenticates OpenDaylight users against an OpenStack Keystone
+server. It uses
+`Keystone's Identity API v3 <https://developer.openstack.org/api-ref/identity/v3/>`_
+or later.
+
+.. figure:: ./images/aaa/keystonerealm-authentication.png
+ :alt: KeystoneAuthRealm authentication mechanism
+
+ KeystoneAuthRealm authentication/authorization mechanism
+
+As shown in the diagram above, once configured, all RESTCONF API calls
+will require sending a **user**, **password** and optionally a **domain** (1). These
+credentials are used to authenticate the call against the Keystone server (2) and,
+if authentication succeeds, the call proceeds to the MDSAL (3). The
+credentials must be provisioned in advance in the Keystone server. The user
+and password are mandatory, while the domain is optional; if it is not
+provided within the REST call, the realm defaults to the hard-coded domain
+**Default**. The default domain can also be configured through the
+*shiro.ini* file (see the :doc:`AAA User Guide <../user-guide/authentication-and-authorization-services>`).
+
+The protocol between the Controller and the Keystone Server (2) can be either
+HTTPS or HTTP. In order to use HTTPS the Keystone Server's certificate
+must be exported and imported on the Controller (see the :ref:`Certificate Management <certificate-management>` section).
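As an illustration of step (1), the sketch below builds the HTTP Basic ``Authorization`` header that carries the user and password on a RESTCONF call. This is a standalone illustration only: the credentials are placeholders, and how the optional domain is encoded in the request is omitted, since that is realm-specific.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch only: builds the HTTP Basic Authorization header that carries the
// user and password in step (1). "admin"/"secret" are placeholder
// credentials; domain encoding is intentionally left out.
public class BasicAuthHeader {

    static String basicAuth(String user, String password) {
        String raw = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // A client would send this value in the "Authorization" HTTP header.
        System.out.println(basicAuth("admin", "secret"));
    }
}
```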
+
+Configuring KeystoneAuthRealm
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Edit the "etc/shiro.ini" file and modify the following:
+
+::
+
+ # The KeystoneAuthRealm allows for authentication/authorization against an
+ # OpenStack Keystone server. It uses the Identity API v3 or later.
+ keystoneAuthRealm = org.opendaylight.aaa.shiro.realm.KeystoneAuthRealm
+ # The URL where the Keystone server exposes the Identity API v3. The URL
+ # can be either HTTP or HTTPS and it is mandatory for this realm.
+ keystoneAuthRealm.url = https://<host>:<port>
+ # Optional parameter to make the realm verify the certificates in case of HTTPS
+ #keystoneAuthRealm.sslVerification = true
+ # Optional parameter to set up a default domain for requests using credentials
+ # without domain, uncomment in case you want a different value from the hard-coded
+ # one "Default"
+ #keystoneAuthRealm.defaultDomain = Default
+
+Once the realm is configured, the mandatory fields are the fully qualified name of
+the class implementing the realm, *keystoneAuthRealm*, and the endpoint where the
+Keystone server is listening, *keystoneAuthRealm.url*.
+
+The optional parameter *keystoneAuthRealm.sslVerification* specifies whether the
+realm verifies the SSL certificate when HTTPS is used. The optional parameter
+*keystoneAuthRealm.defaultDomain* allows a default domain other than the
+hard-coded *"Default"* to be used.
+
Authorization Configuration
---------------------------
- Shiro-Based Authorization
-- MDAL-Based Dynamic Authorization
+- MDSAL-Based Dynamic Authorization
.. note::
authentication attempts. Custom AuthenticationListener(s) must implement
the org.apache.shiro.authc.AuthenticationListener interface.
+.. _certificate-management:
Certificate Management
----------------------
The Certificate Manager Service RPCs are allowed only to the Role Admin Users
and it could be completely disabled through the shiro.ini config file. Check
the URL section at the shiro.ini.
+
+Encryption Service
+------------------
+
+The **AAA Encryption Service** is used to encrypt OpenDaylight users'
+passwords and TLS communication certificates. This section shows how to use the
+AAA Encryption Service with an OpenDaylight distribution project to encrypt data.
+
+The following are the steps to configure the Encryption Service:
+
+1. After starting the distribution, the *aaa-encryption-service* feature must
+ be installed. Use the following command at the Karaf CLI to check:
+
+ .. code-block:: bash
+
+ opendaylight-user@root>feature:list -i | grep aaa-encryption-service
+ odl-aaa-encryption-service | 0.5.0-SNAPSHOT | x | odl-aaa-0.5.0-SNAPSHOT | OpenDaylight :: AAA :: Encryption Service
+
+2. The initial configuration of the Encryption Service exists under the
+ distribution directory etc/opendaylight/datastore/initial/config/aaa-encrypt-service-config.xml
+
+ .. code-block:: xml
+
+ <aaa-encrypt-service-config xmlns="config:aaa:authn:encrypt:service:config">
+ <encrypt-key/>
+ <encrypt-salt/>
+ <encrypt-method>PBKDF2WithHmacSHA1</encrypt-method>
+ <encrypt-type>AES</encrypt-type>
+ <encrypt-iteration-count>32768</encrypt-iteration-count>
+ <encrypt-key-length>128</encrypt-key-length>
+ <cipher-transforms>AES/CBC/PKCS5Padding</cipher-transforms>
+ </aaa-encrypt-service-config>
+
+ .. note::
+
+ Both the initial encryption key and encryption salt are randomly generated
+ when the *aaa-encryption-service* feature is installed.
+
+3. Finally, the new configuration will take effect after restarting the
+ distribution.
--- /dev/null
+ElasticSearch
+-------------
+
+Setting up the environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To set up and run the TSDR data store ElasticSearch feature, you need to have
+an ElasticSearch node (or a cluster of such nodes) running. You can use a
+customized ElasticSearch docker image for this purpose.
+
+Your ElasticSearch (ES) setup must have the "Delete By Query Plugin" installed.
+Without this, some of the ES functionality won't work properly.
+
+
+Creating a custom ElasticSearch docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(You can skip this section if you already have an instance of ElasticSearch running)
+
+Run the following set of commands:
+
+.. code-block:: bash
+
+ cat << EOF > Dockerfile
+ FROM elasticsearch:2
+ RUN /usr/share/elasticsearch/bin/plugin install --batch delete-by-query
+ EOF
+
+To build the image, run the following command in the directory where the
+Dockerfile was created:
+
+.. code-block:: bash
+
+ docker build . -t elasticsearch-dd
+
+You can check whether the image was properly created by running:
+
+.. code-block:: bash
+
+ docker images
+
+This should print all your container images including the elasticsearch-dd.
+
+Now we can create and run a container from our image by typing:
+
+.. code-block:: bash
+
+ docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch-dd elasticsearch-dd
+
+To see whether the container is running, run the following command:
+
+.. code-block:: bash
+
+ docker ps
+
+The output should include a row with elasticsearch-dd in the NAMES column.
+To check the standard output of this container, use:
+
+.. code-block:: bash
+
+ docker logs elasticsearch-dd
+
+Running the ElasticSearch feature
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Once the feature has been installed, you can change some of its properties. For
+example, to set up the URL where your ElasticSearch installation runs,
+change the *serverUrl* parameter in tsdr/persistence-elasticsearch/src/main/resources/configuration/initial/:
+
+.. code-block:: bash
+
+ tsdr-persistence-elasticsearch.properties
+
+All data is stored in the TSDR index under a type. Metric data is stored
+under the metric type and log data is stored under the log type.
+You can modify the files in tsdr/persistence-elasticsearch/src/main/resources/configuration/initial/:
+
+.. code-block:: bash
+
+ tsdr-persistence-elasticsearch_metric_mapping.json
+ tsdr-persistence-elasticsearch_log_mapping.json
+
+to change or tune the mapping for those types. Changes to these files take
+effect after the feature is reloaded or the distribution is restarted.
+
+Testing the setup
+^^^^^^^^^^^^^^^^^
+
+We can now test whether the setup is correct by downloading and installing mininet,
+which we use to send some data to the running ElasticSearch instance.
+
+Installing the necessary features:
+
+.. code-block:: bash
+
+ start OpenDaylight
+ feature:install odl-restconf odl-l2switch-switch odl-tsdr-core odl-tsdr-openflow-statistics-collector
+ feature:install odl-tsdr-elasticsearch
+
+We can check whether the distribution is now listening on port 6653:
+
+.. code-block:: bash
+
+ netstat -an | grep 6653
+
+Run mininet
+
+.. code-block:: bash
+
+ sudo mn --topo single,3 --controller 'remote,ip=distro_ip,port=6653' --switch ovsk,protocols=OpenFlow13
+
+where distro_ip is the IP address of the machine where the OpenDaylight
+distribution is running. This command will create three hosts connected to one
+OpenFlow capable switch.
+
+We can check if data was stored by ElasticSearch in TSDR by running the
+following command:
+
+.. code-block:: bash
+
+ tsdr:list FLOWTABLESTATS
+
+The output should look similar to the following::
+
+ [NID=openflow:1][DC=FLOWTABLESTATS][MN=ActiveFlows][RK=Node:openflow:1,Table:50][TS=1473427383598][3]
+ [NID=openflow:1][DC=FLOWTABLESTATS][MN=PacketMatch][RK=Node:openflow:1,Table:50][TS=1473427383598][12]
+ [NID=openflow:1][DC=FLOWTABLESTATS][MN=PacketLookup][RK=Node:openflow:1,Table:50][TS=1473427383598][12]
+ [NID=openflow:1][DC=FLOWTABLESTATS][MN=ActiveFlows][RK=Node:openflow:1,Table:80][TS=1473427383598][3]
+ [NID=openflow:1][DC=FLOWTABLESTATS][MN=PacketMatch][RK=Node:openflow:1,Table:80][TS=1473427383598][17]
+ [NID=openflow:1][DC=FLOWTABLESTATS][MN=PacketMatch][RK=Node:openflow:1,Table:246][TS=1473427383598][19]
+ ...
+
+Or you can query your ElasticSearch instance:
+
+.. code-block:: bash
+
+ curl -XPOST "http://elasticsearch_ip:9200/_search?pretty" -d'{ "from": 0, "size": 10000, "query": { "match_all": {} } }'
+
+Here elasticsearch_ip is the IP address of the server where ElasticSearch is running.
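Each record printed by ``tsdr:list`` is a sequence of bracketed ``KEY=value`` fields (node id, data category, metric name, record keys and timestamp) followed by the metric value in a final bracket. A small hypothetical parser, written here only to illustrate the record layout and not part of TSDR itself, could split such a line:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TsdrRecordParser {

    // Splits a record such as
    // [NID=openflow:1][DC=FLOWTABLESTATS][MN=ActiveFlows][RK=...][TS=...][3]
    // into named fields; the trailing bracket without '=' is the value.
    static Map<String, String> parse(String line) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher m = Pattern.compile("\\[([^\\]]*)\\]").matcher(line);
        while (m.find()) {
            String part = m.group(1);
            int eq = part.indexOf('=');
            if (eq >= 0) {
                fields.put(part.substring(0, eq), part.substring(eq + 1));
            } else {
                fields.put("VALUE", part);
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        String rec = "[NID=openflow:1][DC=FLOWTABLESTATS][MN=ActiveFlows]"
                + "[RK=Node:openflow:1,Table:50][TS=1473427383598][3]";
        System.out.println(parse(rec));
    }
}
```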
+
+
+Web Activity Collector
+----------------------
+
+The Web Activity Collector records the meaningful REST requests made through the
+OpenDaylight RESTCONF interface.
+
+
+How to test the RESTCONF Collector
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- Install some other feature that has a RESTCONF interface, for example
+ "odl-tsdr-syslog-collector".
+- Issue a RESTCONF command that uses POST, PUT or DELETE.
+ For example, you could call the register-filter RPC of tsdr-syslog-collector.
+- Look up data in TSDR database from Karaf.
+
+ .. code-block:: bash
+
+ tsdr:list RESTCONF
+
+- You should see the request that you sent, along with its information
+ (URL, HTTP method, requesting IP address and request body).
+- Try sending a GET request, then check again; your request should not be
+ recorded, because the collector does not record GET requests by default.
+- Open the file: "etc/tsdr.restconf.collector.cfg", and add GET to the list of
+ METHODS_TO_LOG, so that it becomes:
+
+ ::
+
+ METHODS_TO_LOG=POST,PUT,DELETE,GET
+
+ - Try issuing your GET request again and check whether it was recorded;
+ this time it should be.
+ - Try manipulating the other properties: PATHS_TO_LOG (which URLs to log),
+ REMOTE_ADDRESSES_TO_LOG (which requesting IP addresses to log) and
+ CONTENT_TO_LOG (what must appear in the request body for it to be
+ logged), and see whether the requests are logged.
+ - Try providing invalid properties (unknown methods for the METHODS_TO_LOG
+ parameter, the same method repeated multiple times, or invalid regular
+ expressions for the other parameters), then check Karaf's log using
+ "log:display". It should tell you that the value is invalid and that it
+ will use the default value instead.
This document describes how to use HSQLDB, HBase, and Cassandra data
stores to capture time series data using Time Series Data Repository
-(TSDR) features in OpenDaylight. This document contains configuration,
-administration, management, usage, and troubleshooting sections for the
+(TSDR) features in OpenDaylight. This document contains configuration,
+administration, management, usage, and troubleshooting sections for these
features.
Overview
The Time Series Data Repository (TSDR) project in OpenDaylight (ODL)
creates a framework for collecting, storing, querying, and maintaining
-time series data. TSDR provides the framework for plugging in proper
+time series data. TSDR provides the framework for plugging in
data collectors to collect various time series data and store the data
into TSDR Data Stores. With a common data model and generic TSDR data
persistence APIs, the user can choose various data stores to be plugged
into the TSDR persistence framework. Currently, three types of data
-stores are supported: HSQLDB relational database, HBase NoSQL database,
-and Cassandra NoSQL database.
+stores are supported: HSQLDB relational database (default installed),
+HBase NoSQL database and Cassandra NoSQL database.
With the capabilities of data collection, storage, query, aggregation,
and purging provided by TSDR, network administrators can leverage
-various data driven appliations built on top of TSDR for security risk
+various data driven applications built on top of TSDR for security risk
detection, performance analysis, operational configuration optimization,
-traffic engineering, and network analytics with automated intelligence.
+traffic engineering and network analytics with automated intelligence.
TSDR Architecture
-----------------
various data stores to be plugged in. The Data Aggregation Service
aggregates time series fine-grained raw data into coarse-grained roll-up
data to control the size of the data. The Data Purging Service
-periodically purges both fine-grained raw data and course-granined
+periodically purges both fine-grained raw data and coarse-grained
aggregated data according to user-defined schedules.
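The roll-up described above can be sketched as bucketing fine-grained samples into a coarser interval and averaging them. The fixed interval and the averaging function below are illustrative assumptions, not the Data Aggregation Service's actual configuration.

```java
import java.util.Map;
import java.util.TreeMap;

public class RollUpSketch {

    // Averages (timestamp -> value) samples into fixed-size buckets, keyed
    // by the start of each interval (interval in the same unit as the
    // timestamps, e.g. seconds).
    static Map<Long, Double> rollUp(Map<Long, Double> samples, long interval) {
        Map<Long, double[]> acc = new TreeMap<>(); // bucket -> {sum, count}
        for (Map.Entry<Long, Double> e : samples.entrySet()) {
            long bucket = (e.getKey() / interval) * interval;
            acc.computeIfAbsent(bucket, k -> new double[2]);
            acc.get(bucket)[0] += e.getValue();
            acc.get(bucket)[1] += 1;
        }
        Map<Long, Double> out = new TreeMap<>();
        for (Map.Entry<Long, double[]> e : acc.entrySet()) {
            out.put(e.getKey(), e.getValue()[0] / e.getValue()[1]);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<Long, Double> raw = new TreeMap<>();
        raw.put(5L, 2.0);   // falls in the first 60-unit bucket
        raw.put(30L, 4.0);  // same bucket
        raw.put(65L, 10.0); // second bucket
        System.out.println(rollUp(raw, 60)); // {0=3.0, 60=10.0}
    }
}
```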
-We have implemented The Data Collection Service, Data Storage Service,
-TSDR Persistence Layer, TSDR HSQLDB Data Store, TSDR HBase Data Store,
-and TSDR Cassandra Datastore. Among these services and components, time
-series data is communicated using a common TSDR data model, which is
-designed and implemented for the abstraction of time series data
-commonalities. With these functions, TSDR is able to collect the data
-from the data sources and store them into one of the TSDR data stores:
-HSQLDB Data Store, HBase Data Store or Cassandra Data Store. Besides a
-simple query command from Karaf console to retrieve data from the TSDR
-data stores, we also provided a Data Query Service for the user to use
-REST API to query the data from the data stores. Moreover, the user can
-use Grafana, which is a time series visualization tool to view the data
-stored in TSDR in various charting formats.
+TSDR provides component-based services on a common data model. These
+services include the data collection service, data storage service and
+data query service. The TSDR data storage service supports the HSQLDB
+(default), HBase and Cassandra data stores. Between these services and
+components, time series data is communicated using a common TSDR data
+model, designed around the commonalities of time series data. With these
+services, TSDR is able to collect data from the data sources and store
+it into one of the TSDR data stores: HSQLDB, HBase or Cassandra. Data can
+be retrieved with the Data Query service using the default OpenDaylight
+RESTCONF interface or its ODL API interface. TSDR also has integrated
+support for ElasticSearch. TSDR data can also be viewed directly with
+Grafana for time series visualization in various chart formats.
Configuring TSDR Data Stores
----------------------------
feature:install odl-tsdr-openflow-statistics-collector
-- SNMP Data Collector
+- NetFlow Data Collector
::
- feature:install odl-tsdr-snmp-data-collector
+ feature:install odl-tsdr-netflow-statistics-collector
-- NetFlow Data Collector
+- sFlow Data Collector
::
- feature:install odl-tsdr-netflow-statistics-collector
+ feature:install odl-tsdr-sflow-statistics-collector
+
+- SNMP Data Collector
+
+ ::
-- sFlow Data Collector feature:install
- odl-tsdr-sflow-statistics-colletor
+ feature:install odl-tsdr-snmp-data-collector
- Syslog Data Collector
feature:install odl-tsdr-controller-metrics-collector
+- Web Activity Collector
+
+ ::
+
+ feature:install odl-tsdr-restconf-collector
+
+
In order to use the controller metrics collector, the user needs to install
the Sigar library.
*remote,ip=172.17.252.210,port=6653* --switch
ovsk,protocols=OpenFlow13
-- Install tsdr hbase feature from Karaf:
+- Install the TSDR HBase feature from Karaf:
- feature:install odl-tsdr-hbase
categories. For example, "tsdr:list FlowStats" will output the Flow
statistics data collected from the switch(es).
+
+.. include:: tsdr-elastic-search.rst
+
+
Troubleshooting
---------------
metric-persistency=true
log-persistency=false
binary-persistency=false
-