From e81cf6db24f95ee9d769bc9862ffe5e3c230880e Mon Sep 17 00:00:00 2001
From: Vladimir Hasko
Date: Wed, 24 May 2023 13:45:41 +0000
Subject: [PATCH] fixing reference issue

---
 doc/source/internal/apimon_training/test_scenarios.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/source/internal/apimon_training/test_scenarios.rst b/doc/source/internal/apimon_training/test_scenarios.rst
index 660543c..206b850 100644
--- a/doc/source/internal/apimon_training/test_scenarios.rst
+++ b/doc/source/internal/apimon_training/test_scenarios.rst
@@ -12,8 +12,8 @@ python script). With Ansible on its own having nearly limitless capability and
 the ability to execute anything else, ApiMon can do pretty much anything. The
 only expectation is that whatever is being done produces some form of metric
 for further analysis and evaluation; otherwise there is no point in monitoring. The
-scenarios are collected in a `Git repository
+scenarios are collected in a `GitHub repository
 `_
 and updated in real time.
 In general, the mentioned test jobs do not need to take care of generating data
 explicitly. Since the API-related tasks in the playbooks rely on the Python
@@ -25,8 +25,8 @@ the playbook names, results and duration time ('ansible_stats' metrics) and
 stores them in the :ref:`PostgreSQL relational database `.
 
 The playbooks with monitoring scenarios are stored in a separate repository on
-`github `_ (the location
+`GitHub `_ (the location
 will change with the CloudMon replacement in the `future
 `_).
 The playbooks address the most common use cases that end customers perform
 with cloud services.
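
For illustration, a minimal monitoring scenario of the kind the patched section
describes might look like the sketch below. This is an assumption-laden example,
not a playbook taken from the actual repository: it uses the openstack.cloud
Ansible collection, and the task and resource names are invented. Metric
generation is left implicit, relying on the SDK exporter and the
'ansible_stats' callback mentioned in the text.

    # Hypothetical ApiMon-style scenario (sketch only, not from the real
    # repository). API calls go through the OpenStack SDK, so per-request
    # metrics are emitted implicitly; the playbook name, result, and
    # duration are captured by the ansible_stats callback.
    - name: Keypair lifecycle scenario
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Create a test keypair
          openstack.cloud.keypair:
            name: apimon-test-keypair   # illustrative resource name
            state: present

        - name: Delete the test keypair
          openstack.cloud.keypair:
            name: apimon-test-keypair
            state: absent

Run periodically, a scenario like this exercises the keypair API end to end;
a failed task or an unusually long duration then shows up in the stored
metrics for further analysis and evaluation.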