diff --git a/doc/source/internal/apimon_training/faq/what_are_the_annotations.rst b/doc/source/internal/apimon_training/faq/what_are_the_annotations.rst
index be59af8..0a1359e 100644
--- a/doc/source/internal/apimon_training/faq/what_are_the_annotations.rst
+++ b/doc/source/internal/apimon_training/faq/what_are_the_annotations.rst
@@ -2,7 +2,6 @@
 What Are The Annotations?
 #########################
 
-
 Annotations provide a way to mark points on the graph with rich events. When
 you hover over an annotation you can get event description and event tags. The
 text field can include links to other systems with more detail.
diff --git a/doc/source/internal/apimon_training/logs.rst b/doc/source/internal/apimon_training/logs.rst
index a5ed0eb..68d46f9 100644
--- a/doc/source/internal/apimon_training/logs.rst
+++ b/doc/source/internal/apimon_training/logs.rst
@@ -5,7 +5,6 @@
 Logs
 ====
 
-
 - Every single job run log is stored on OpenStack Swift object storage.
 - Each single job log file provides unique URL which can be accessed to see
   log details
diff --git a/doc/source/internal/apimon_training/test_scenarios.rst b/doc/source/internal/apimon_training/test_scenarios.rst
index 206b850..fdbb821 100644
--- a/doc/source/internal/apimon_training/test_scenarios.rst
+++ b/doc/source/internal/apimon_training/test_scenarios.rst
@@ -43,7 +43,7 @@ New Test Scenario introduction
 ==============================
 
 As already mentioned playbook scenarios are stored in separate repository on
-`github `_. Due to the
+`Github `_. Due to the
 fact that we have various environments which differ between each other by
 location, supported services, different flavors, etc it's required to have
 monitoring configuration matrix which defines the monitoring standard and scope
@@ -114,7 +114,7 @@ Custom metrics in Test Scenarios
 
 OpenStack SDK and otcextensions (otcextensions covers services which are out of
 scope of OpenStack SDK and extends its functionality with services provided by
-OTC) support metric generation natively for every single API call and ApiMon 
+OTC) support metric generation natively for every single API call and ApiMon
 executor supports collection of ansible playbook statistics so every single
 scenario and task can store its result, duration and name in metric database.
@@ -126,8 +126,10 @@
 be complicated to transfer processing logic of metrics on grafana. Therefore
 tags feature on task level introduces possibility to address custom metrics.
 
-In following example the custom metric stores the result of multiple tasks in
-special metric name create_server::
+In following example (snippet from `scenario2_simple_ece.yaml
+`_)
+custom metric stores the result of multiple tasks in special metric name
+create_server::
 
   - name: Create Server in default AZ
     openstack.cloud.server:
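
The last hunk above is truncated right after ``openstack.cloud.server:``, so the
``create_server`` custom metric it describes is not fully visible in the patch.
As a hedged illustration only (this is not the content of
``scenario2_simple_ece.yaml``), a task-level tag that the ApiMon executor could
map to the custom metric name might look like the sketch below. The module name
``openstack.cloud.server`` and the task name come from the diff; the module
parameters, variable names, and the exact ``metric=...`` tag syntax are
assumptions for illustration.

```yaml
# Hypothetical sketch -- not the real scenario file content.
# Parameters, variables, and the tag string format are assumed.
- name: Create Server in default AZ
  openstack.cloud.server:
    name: "{{ server_name }}"    # assumed variable
    image: "{{ image_name }}"    # assumed variable
    flavor: "{{ flavor_name }}"  # assumed variable
    auto_ip: false
  tags:
    # Several tasks can carry the same tag, so their results are
    # aggregated under one custom metric name (create_server).
    - "metric=create_server"
```

Because Ansible ``tags`` are plain strings attached to a task, the same tag can
be repeated on related tasks (create, wait, verify), letting one metric name
cover the whole logical operation rather than a single task.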