This document describes how to use HSQLDB, HBase, and Cassandra data
stores to capture time series data using Time Series Data Repository
(TSDR) features in OpenDaylight. It contains configuration,
administration, management, usage, and troubleshooting sections for
these features.
The Time Series Data Repository (TSDR) project in OpenDaylight (ODL)
creates a framework for collecting, storing, querying, and maintaining
time series data. TSDR provides the framework for plugging in proper
data collectors to collect various time series data and store the data
into TSDR Data Stores. With a common data model and generic TSDR data
persistence APIs, the user can choose various data stores to be plugged
into the TSDR persistence framework. Currently, three types of data
stores are supported: HSQLDB relational database, HBase NoSQL database,
and Cassandra NoSQL database.
With the capabilities of data collection, storage, query, aggregation,
and purging provided by TSDR, network administrators can leverage
various data-driven applications built on top of TSDR for security risk
detection, performance analysis, operational configuration optimization,
traffic engineering, and network analytics with automated intelligence.
TSDR has the following major components:

- Data Collection Service

- Data Storage Service

- TSDR Persistence Layer with data stores as plugins

- Grafana integration for time series data visualization

- Data Aggregation Service

- Data Purging Service
The Data Collection Service handles the collection of time series data
into TSDR and hands it over to the Data Storage Service. The Data
Storage Service stores the data into TSDR through the TSDR Persistence
Layer. The TSDR Persistence Layer provides generic Service APIs allowing
various data stores to be plugged in. The Data Aggregation Service
aggregates time series fine-grained raw data into coarse-grained roll-up
data to control the size of the data. The Data Purging Service
periodically purges both fine-grained raw data and coarse-grained
aggregated data according to user-defined schedules.
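The roll-up performed by the Data Aggregation Service can be sketched as
follows. This is an illustrative sketch only, assuming simple averaging
into fixed time windows; the `roll_up` helper and its window size are
hypothetical names, not TSDR APIs.

```python
from collections import defaultdict

def roll_up(points, window_seconds):
    """Aggregate fine-grained (timestamp, value) points into
    coarse-grained per-window averages. Illustrative sketch only;
    TSDR's actual aggregation granularities are configurable."""
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each raw point to the start of its window.
        buckets[ts - ts % window_seconds].append(value)
    return {start: sum(vals) / len(vals)
            for start, vals in sorted(buckets.items())}

raw = [(0, 10.0), (10, 20.0), (70, 30.0)]
print(roll_up(raw, 60))  # two 60-second windows: {0: 15.0, 60: 30.0}
```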
We have implemented the Data Collection Service, Data Storage Service,
TSDR Persistence Layer, TSDR HSQLDB Data Store, TSDR HBase Data Store,
and TSDR Cassandra Data Store. Among these services and components, time
series data is communicated using a common TSDR data model, which is
designed and implemented for the abstraction of time series data
commonalities. With these functions, TSDR is able to collect the data
from the data sources and store it into one of the TSDR data stores:
HSQLDB Data Store, HBase Data Store, or Cassandra Data Store. Besides a
simple query command from the Karaf console to retrieve data from the
TSDR data stores, we also provide a Data Query Service that allows the
user to query the data from the data stores through a REST API.
Moreover, the user can use Grafana, a time series visualization tool, to
view the data stored in TSDR in various charting formats.
Configuring TSDR Data Stores
----------------------------
To Configure HSQLDB Data Store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The HSQLDB-based storage files are stored automatically in the <karaf
install folder>/tsdr/ directory. If you want to change the default
storage location, the configuration file to change can be found in the
<karaf install folder>/etc directory. The filename is
org.ops4j.datasource-metric.cfg. Change the last portion of
url=jdbc:hsqldb:./tsdr/metric to point to a different directory.
To Configure HBase Data Store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After installing HBase Server on the same machine as OpenDaylight, if
the user accepts the default configuration of the HBase Data Store, the
user can directly proceed with the installation of the HBase Data Store
from the Karaf console.

Optionally, the user can configure the TSDR HBase Data Store following
the HBase Data Store Configuration Procedure.
- HBase Data Store Configuration Steps

- Open the file etc/tsdr-persistence-hbase.properties under the Karaf
  distribution directory.

- Edit the following parameters:

- HBase client connection pool size

- HBase client write buffer size

After the configuration of the HBase Data Store is complete, proceed
with the installation of the HBase Data Store from the Karaf console.
- HBase Data Store Installation Steps

- Start the Karaf console

- Run the following command from the Karaf console: feature:install
  odl-tsdr-hbase
To Configure Cassandra Data Store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently, no configuration is needed for the Cassandra Data Store. The
user can use the Cassandra data store directly after installing the
feature odl-tsdr-cassandra from the Karaf console.

Additionally, separate commands have been implemented to install the
various data collectors.
Administering or Managing TSDR Data Stores
------------------------------------------
To Administer HSQLDB Data Store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the TSDR default datastore feature (odl-tsdr-hsqldb-all) is
enabled, the OpenFlow statistics metrics captured by TSDR can be
accessed from the Karaf console by executing the command:

tsdr:list <metric-category> <starttimestamp> <endtimestamp>

- <metric-category> = any one of the supported categories, such as
  FlowGroupStats, FlowMeterStats, FlowStats, FlowTableStats, PortStats

- <starttimestamp> = to filter the list of metrics starting at this
  timestamp

- <endtimestamp> = to filter the list of metrics ending at this
  timestamp

- <starttimestamp> and <endtimestamp> are optional.

- A maximum of 1000 records will be displayed.
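Because of this 1000-record cap, retrieving a long history means issuing
several queries over consecutive time ranges. The helper below is an
illustrative sketch, not part of TSDR: it splits a [start, end) range
into sub-ranges small enough that each tsdr:list or REST query stays
under the cap.

```python
def time_windows(start, end, step):
    """Split [start, end) into consecutive (sub_start, sub_end)
    ranges of at most `step` seconds each, to query one at a time.
    Illustrative helper; the step size must be tuned so that each
    window holds fewer than 1000 records."""
    windows = []
    t = start
    while t < end:
        windows.append((t, min(t + step, end)))
        t += step
    return windows

print(time_windows(0, 250, 100))  # [(0, 100), (100, 200), (200, 250)]
```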
To Administer HBase Data Store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Using Karaf commands to retrieve data from the HBase Data Store

The user first needs to install the HBase data store from the Karaf
console:

feature:install odl-tsdr-hbase

The user can then retrieve the data from the HBase data store using the
following command from the Karaf console:

tsdr:list <CategoryName> <StartTime> <EndTime>

Pressing Tab while typing the command in the Karaf console shows
context-sensitive prompts for the arguments.
To Administer Cassandra Data Store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The user first needs to install the Cassandra data store from the Karaf
console:

feature:install odl-tsdr-cassandra

The user can then retrieve the data from the Cassandra data store using
the following command from the Karaf console:

tsdr:list <CategoryName> <StartTime> <EndTime>

Pressing Tab while typing the command in the Karaf console shows
context-sensitive prompts for the arguments.
Installing TSDR Data Collectors
-------------------------------

When the user uses the HSQLDB data store and installs the
"odl-tsdr-hsqldb-all" feature from the Karaf console, the OpenFlow data
collector is installed along with the HSQLDB data store. However, if the
user needs other collectors, such as the NetFlow Collector, Syslog
Collector, SNMP Collector, and Controller Metrics Collector, the user
needs to install them with separate commands. If the user uses the HBase
or Cassandra data store, no collectors are installed when the data store
is installed. Instead, the user needs to install each collector
separately using the feature:install command from the Karaf console.

The following is the list of supported TSDR data collectors with the
associated feature:install commands:
- OpenFlow Data Collector

feature:install odl-tsdr-openflow-statistics-collector

- SNMP Data Collector

feature:install odl-tsdr-snmp-data-collector

- NetFlow Data Collector

feature:install odl-tsdr-netflow-statistics-collector
- sFlow Data Collector

feature:install odl-tsdr-sflow-statistics-collector
- Syslog Data Collector

feature:install odl-tsdr-syslog-collector

- Controller Metrics Collector

feature:install odl-tsdr-controller-metrics-collector
In order to use the Controller Metrics Collector, the user needs to
install the Sigar library.

The following are the instructions for installing the Sigar library on
Debian-based Linux distributions:

- Install the back end library with "sudo apt-get install
  libhyperic-sigar-java"

- Run "export
  LD_LIBRARY_PATH=/usr/lib/jni/:/usr/lib:/usr/local/lib" to set the
  path of the JNI (you can add this to the ".bashrc" in your home
  directory)

- Download the file "sigar-1.6.4.jar". It might also be in your ".m2"
  directory under "~/.m2/resources/org/fusesource/sigar/1.6.4"

- Create the directory "org/fusesource/sigar/1.6.4" under the "system"
  directory in your controller home directory and place
  "sigar-1.6.4.jar" there
Configuring TSDR Data Collectors
--------------------------------
- SNMP Data Collector Device Credential Configuration

After installing the SNMP Data Collector, a configuration file is
generated under the etc/ directory of the ODL distribution:
etc/tsdr.snmp.cfg.

The following is a sample tsdr.snmp.cfg file:

credentials=[192.168.0.2,public],[192.168.0.3,public]

The above credentials indicate that the TSDR SNMP Collector is going to
connect to two devices. The IP address and read community string of
these two devices are (192.168.0.2, public) and (192.168.0.3, public)
respectively.

The user can make changes to this configuration file at any time during
runtime. The configuration will be picked up by TSDR in the next polling
cycle.
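A credentials line of this form is easy to process programmatically. The
parser below is an illustrative sketch, not TSDR's own code; it turns
the bracketed pairs into (IP address, community string) tuples.

```python
import re

def parse_credentials(line):
    """Parse a tsdr.snmp.cfg credentials line such as
    credentials=[192.168.0.2,public],[192.168.0.3,public]
    into (ip, community) pairs. Illustrative sketch only."""
    _, _, value = line.partition("=")
    # Each [...] group holds "ip,community".
    return [tuple(m.group(1).split(","))
            for m in re.finditer(r"\[([^\]]+)\]", value)]

print(parse_credentials("credentials=[192.168.0.2,public],[192.168.0.3,public]"))
```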
Polling interval configuration for SNMP Collector and OpenFlow Stats Collector
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The default polling intervals of the SNMP Collector and OpenFlow Stats
Collector are 30 seconds and 15 seconds respectively. The user can
change the polling interval through RESTCONF APIs at any time. The new
polling interval will be picked up by TSDR in the next collection cycle.
- Retrieve Polling Interval API for SNMP Collector

http://localhost:8181/restconf/config/tsdr-snmp-data-collector:TSDRSnmpDataCollectorConfig

- Update Polling Interval API for SNMP Collector

http://localhost:8181/restconf/operations/tsdr-snmp-data-collector:setPollingInterval

- Content Type: application/json

- Retrieve Polling Interval API for OpenFlowStats Collector

http://localhost:8181/restconf/config/tsdr-openflow-statistics-collector:TSDROSCConfig

- Update Polling Interval API for OpenFlowStats Collector

http://localhost:8181/restconf/operations/tsdr-openflow-statistics-collector:setPollingInterval

- Content Type: application/json
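The update APIs above can also be driven from a script instead of curl.
The sketch below only builds the request body; the "interval" field name
inside the RPC input is an assumption to verify against the RESTCONF
model in your build, and no authentication is shown.

```python
import json

# Illustrative sketch of preparing a setPollingInterval call in Python.
url = ("http://localhost:8181/restconf/operations/"
       "tsdr-snmp-data-collector:setPollingInterval")
payload = {"input": {"interval": 60}}  # "interval" is a hypothetical field name
body = json.dumps(payload)
print(body)

# To actually send it (requires a running controller):
# import urllib.request
# req = urllib.request.Request(url, data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```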
Querying TSDR from REST APIs
----------------------------

TSDR provides two REST APIs for querying data stored in the TSDR data
stores.

- Query of TSDR Metrics

- URL: http://localhost:8181/tsdr/metrics/query

- tsdrkey=[NID=][DC=][MN=][RK=]

The TSDRKey format indicates the NodeID (NID), DataCategory (DC),
MetricName (MN), and RecordKey (RK) of the monitored objects. For
example, the following is a valid tsdrkey:
[NID=openflow:1][DC=FLOWSTATS][MN=PacketCount][RK=Node:openflow:1,Table:0,Flow:3]
The following is also a valid tsdrkey:
tsdrkey=[NID=][DC=FLOWSTATS][MN=][RK=]
When sections of the tsdrkey are empty, the query returns all the
records in the TSDR data store that match the filled sections. In the
above example, the query returns all the data in the FLOWSTATS data
category. The query returns only the first 1000 records that match the
query criteria.
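A tsdrkey can be assembled programmatically rather than by hand. The
helper below is a minimal sketch (its name is hypothetical, not a TSDR
API); empty sections act as wildcards, exactly as described above.

```python
def make_tsdrkey(nid="", dc="", mn="", rk=""):
    """Build a tsdrkey string in the [NID=][DC=][MN=][RK=] format.
    Empty sections act as wildcards in the query."""
    return f"[NID={nid}][DC={dc}][MN={mn}][RK={rk}]"

key = make_tsdrkey(nid="openflow:1", dc="FLOWSTATS", mn="PacketCount",
                   rk="Node:openflow:1,Table:0,Flow:3")
print(key)
print(make_tsdrkey(dc="FLOWSTATS"))  # wildcard query for one category
```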
- from=<time_in_seconds>

- until=<time_in_seconds>

The following is an example curl command for querying metric data from
the TSDR data store:

curl -G -v -H "Accept: application/json" -H "Content-Type:
application/json" "http://localhost:8181/tsdr/metrics/query"
--data-urlencode "tsdrkey=[NID=][DC=FLOWSTATS][MN=][RK=]"
--data-urlencode "from=0" --data-urlencode "until=240000000000" | more
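The same query can be built in Python, which handles the URL encoding
that curl's --data-urlencode performs. This sketch only constructs the
final URL; sending it requires a running controller.

```python
from urllib.parse import urlencode

# Build the metrics query URL shown in the curl example above.
params = {
    "tsdrkey": "[NID=][DC=FLOWSTATS][MN=][RK=]",
    "from": "0",
    "until": "240000000000",
}
url = "http://localhost:8181/tsdr/metrics/query?" + urlencode(params)
print(url)  # brackets and '=' inside the tsdrkey are percent-encoded
```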
- Query of TSDR Log type of data

- URL: http://localhost:8181/tsdr/logs/query

- tsdrkey=[NID=][DC=][RK=]

The TSDRKey format indicates the NodeID (NID), DataCategory (DC), and
RecordKey (RK) of the monitored objects. For example, the following is a
valid tsdrkey:
[NID=openflow:1][DC=NETFLOW][RK=]
The query returns only the first 1000 records that match the query
criteria.

- from=<time_in_seconds>

- until=<time_in_seconds>

The following is an example curl command for querying log type of data
from the TSDR data store:

curl -G -v -H "Accept: application/json" -H "Content-Type:
application/json" "http://localhost:8181/tsdr/logs/query"
--data-urlencode "tsdrkey=[NID=][DC=NETFLOW][RK=]" --data-urlencode
"from=0" --data-urlencode "until=240000000000" | more
Grafana integration with TSDR
-----------------------------

TSDR provides northbound integration with the Grafana time series data
visualization tool. All the metric type of data stored in the TSDR data
store can be visualized using Grafana.

For detailed instructions on how to install and configure Grafana to
work with TSDR, please refer to the following link:

https://wiki.opendaylight.org/view/Grafana_Integration_with_TSDR_Step-by-Step
Purging Service configuration
-----------------------------

After the data stores are installed from the Karaf console, the Purging
Service is installed as well. A configuration file called
tsdr.data.purge.cfg is generated under the etc/ directory of the ODL
distribution.

The following is the sample default content of the tsdr.data.purge.cfg
file:

host=127.0.0.1
data_purge_enabled=true
data_purge_time=23:59:59
data_purge_interval_in_minutes=1440
retention_time_in_hours=168

The host indicates the IP address of the data store. When the data store
runs on the same machine as the ODL controller, 127.0.0.1 is the right
value for the host IP. The other attributes are self-explanatory. The
user can change these attributes at any time. The configuration change
will be picked up right away by the TSDR Purging Service at runtime.
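Since the file is plain key=value pairs, its effective settings are easy
to inspect: the defaults purge once a day (1440 minutes) and retain 7
days (168 hours) of data. The parser below is an illustrative sketch,
not TSDR's own configuration reader.

```python
def parse_purge_cfg(text):
    """Parse the key=value pairs of tsdr.data.purge.cfg into a dict.
    Illustrative sketch only; TSDR reads this file itself at runtime."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            cfg[key] = value
    return cfg

sample = """host=127.0.0.1
data_purge_enabled=true
data_purge_time=23:59:59
data_purge_interval_in_minutes=1440
retention_time_in_hours=168
"""
cfg = parse_purge_cfg(sample)
# 1440 minutes = purge once every 24 hours; 168 hours = 7 days retained
print(int(cfg["data_purge_interval_in_minutes"]) // 60,
      int(cfg["retention_time_in_hours"]) // 24)
```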
How to use TSDR to collect, store, and view OpenFlow Interface Statistics
-------------------------------------------------------------------------

This tutorial describes an example of using TSDR to collect, store, and
view one type of time series data in an OpenDaylight environment.

You will need the following prerequisites:

- One or multiple OpenFlow-enabled switches. Alternatively, you can use
  mininet to simulate such a switch.

- A successfully installed OpenDaylight Controller.

- A successfully installed HBase Data Store, following the TSDR HBase
  Data Store Installation Guide.

- The OpenFlow-enabled switch(es) connected to the OpenDaylight
  Controller.

The HBase data store is only supported on Linux operating systems.
- Start OpenDaylight.

- Connect the OpenFlow-enabled switch(es) to the controller.

- If using mininet, run the following command from the mininet command
  line:

  - mn --topo single,3 --controller remote,ip=172.17.252.210,port=6653
    --switch ovsk,protocols=OpenFlow13

- Install the TSDR HBase feature from Karaf:

  - feature:install odl-tsdr-hbase

- Install the OpenFlow Statistics Collector from Karaf:

  - feature:install odl-tsdr-openflow-statistics-collector

- Run the following command from the Karaf console:

  - tsdr:list PORTSTATS

You should be able to see the interface statistics of the switch(es)
from the HBase Data Store. If there are too many rows, you can use
"tsdr:list PORTSTATS | more" to view them page by page.

By pressing Tab after "tsdr:list", you will see all the supported data
categories. For example, "tsdr:list FlowStats" will output the flow
statistics data collected from the switch(es).
Troubleshooting
---------------

All TSDR features and components write logging information, including
informational messages, warnings, errors, and debug messages, into the
Karaf log.

HBase and Cassandra logs
~~~~~~~~~~~~~~~~~~~~~~~~

For the HBase and Cassandra data stores, the database-level logs are
written into the HBase and Cassandra logs respectively.

- The HBase log is under <HBase-installation-directory>/logs/.

- The Cassandra log is under {cassandra.logdir}/system.log. The default
  {cassandra.logdir} is /var/log/cassandra/.
TSDR gets the data from a variety of sources, which can be secured in
different ways:

- The OpenFlow data can be configured with Transport Layer Security
  (TLS), since the OpenFlow Plugin that TSDR depends on provides this
  support.

- SNMP version 3 has security support. However, since the ODL SNMP
  Plugin that TSDR depends on does not support version 3, TSDR will not
  have security support at this moment.

- NetFlow cannot be configured with security, so we recommend making
  sure it flows only over a secured management network.

- Syslog cannot be configured with security, so we recommend making
  sure it flows only over a secured management network.
Support multiple data stores simultaneously at runtime
------------------------------------------------------

TSDR supports running multiple data stores simultaneously at runtime.
For example, it is possible to configure TSDR to push log type of data
into the Cassandra data store, while pushing metrics type of data into
HBase.

When you install a TSDR data store from the Karaf console, for example
with feature:install odl-tsdr-hsqldb, a properties file is generated
under <Karaf-distribution-directory>/etc/. For example, when you install
hsqldb, a file called tsdr-persistence-hsqldb.properties is generated
under that directory.
By default, all the types of data are supported in the data store. For
example, the default content of tsdr-persistence-hsqldb.properties is as
follows:

metric-persistency=true
log-persistency=true
binary-persistency=true

When the user would like to use different data stores to support
different types of data, he/she can enable or disable a particular type
of data persistence in the data stores by configuring the properties
file accordingly.
For example, if the user would like to store the log and binary types of
data in HBase, and the metric type of data in Cassandra, he/she needs to
install both the hbase and cassandra data stores from the Karaf console.
Then the user needs to modify the properties files under
<Karaf-distribution-directory>/etc as follows:

- tsdr-persistence-hbase.properties

metric-persistency=false
log-persistency=true
binary-persistency=true

- tsdr-persistence-cassandra.properties

metric-persistency=true
log-persistency=false
binary-persistency=false
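A quick sanity check for such a split configuration is to confirm that
each data type ends up enabled in exactly one store. The helper below is
an illustrative sketch using the property keys shown above; it is not
part of TSDR.

```python
def enabled_types(properties_text):
    """Return the set of data types a tsdr-persistence-*.properties
    file enables (keys of the form <type>-persistency=true)."""
    enabled = set()
    for line in properties_text.splitlines():
        key, _, value = line.strip().partition("=")
        if key.endswith("-persistency") and value == "true":
            enabled.add(key[: -len("-persistency")])
    return enabled

hbase = "metric-persistency=false\nlog-persistency=true\nbinary-persistency=true"
cassandra = "metric-persistency=true\nlog-persistency=false\nbinary-persistency=false"
overlap = enabled_types(hbase) & enabled_types(cassandra)
print(enabled_types(hbase), enabled_types(cassandra), overlap)
```

An empty overlap means no data type is written to both stores at once.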