This guide walks you through the process of adding a new scraper type.
You can follow our code tour with the CodeTour extension for VS Code and/or in GitHub Codespaces.

📢 Before implementing a new scraping type, please open an issue to discuss your scenario.
- Add your new scraping type to the `ResourceType` enum in `Promitor.Core.Contracts`.
- Describe the resource for which you're scraping metrics by creating `<New-Type>ResourceDefinition`, inheriting from `Promitor.Core.Scraping.Configuration.Model.Metrics.AzureResourceDefinition` - this class should go in `.\src\Promitor.Core.Contracts\ResourceTypes`.
- Describe the resource configuration for which you're scraping metrics by creating `<New-Type>ResourceV1`, inheriting from `Promitor.Core.Scraping.Configuration.Serialization.v1.Model.AzureResourceDefinitionV1` - this class should go in `.\src\Promitor.Core.Scraping\Configuration\Serialization\v1\Model\ResourceTypes`.
- Create a new deserializer in `.\src\Promitor.Core.Scraping\Configuration\Serialization\v1\Providers`. This must inherit from `ResourceDeserializer`.
- Update `Promitor.Core.Scraping.Configuration.v1.Core.AzureResourceDeserializerFactory` to handle your new resource type by returning a new instance of the deserializer you created in the previous step.
- Update the `Promitor.Core.Scraping.Configuration.Serialization.v1.Mapping.V1MappingProfile` so that it:
    - Is able to map your new resource type by mapping `<New-Type>ResourceV1` to `<New-Type>ResourceDefinition`
    - Annotates how to map it with `IAzureResourceDefinition` (`Include`).
- Provide a unit test in `.\src\Promitor.Tests.Unit\Serialization\v1\Providers` that tests the deserialization based on our sample. Your test class must inherit from `ResourceDeserializerTest` to ensure the inherited functionality is tested.
Going forward in this guide, `TResourceDefinition` will refer to your newly created configuration type.
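As a sketch, the two classes from the steps above might look as follows for a fictitious "Cache" resource type. All names here are illustrative, and the exact base-class constructor signatures may differ per Promitor version, so copy the pattern from an existing resource type (such as Service Bus Queue) rather than this verbatim:

```csharp
// Hypothetical example for a fictitious "Cache" resource type.

// Goes in .\src\Promitor.Core.Contracts\ResourceTypes
public class CacheResourceDefinition : AzureResourceDefinition
{
    public CacheResourceDefinition(string subscriptionId, string resourceGroupName, string cacheName)
        : base(ResourceType.Cache, subscriptionId, resourceGroupName, cacheName)
    {
        CacheName = cacheName;
    }

    public string CacheName { get; }
}

// Goes in .\src\Promitor.Core.Scraping\Configuration\Serialization\v1\Model\ResourceTypes
public class CacheResourceV1 : AzureResourceDefinitionV1
{
    public string CacheName { get; set; }
}
```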
For every scraper type we provide validation of the configuration, so that Promitor fails fast at startup when the configuration is invalid.
This requires the following steps:

- Create a new validator that implements `IMetricValidator`. This validator should reside in `.\src\Promitor.Agents.Scraper\Validation\MetricDefinitions\ResourceTypes`. You can look at the contents of `ServiceBusQueueMetricValidator` for an idea of the validation inputs, steps, and outputs typical of a validator implementation.
- Add construction and usage of this validator to `.\src\Promitor.Agents.Scraper\Validation\Factories\MetricValidatorFactory.cs` for the `ResourceType` you created in step #1 above.
- Provide a unit test for every validation rule that was added, in `.\src\Promitor.Tests.Unit\Validation\Scraper\Metrics\ResourceTypes`.
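For illustration, a validator for a hypothetical "Cache" resource type could look roughly like the sketch below. The `Validate` signature shown is an assumption; check `ServiceBusQueueMetricValidator` for the real `IMetricValidator` contract:

```csharp
// Hypothetical sketch - verify the actual IMetricValidator signature in
// your checkout before using this shape.
public class CacheMetricValidator : IMetricValidator
{
    public IEnumerable<string> Validate(MetricDefinition metricDefinition)
    {
        // Emit one message per violated rule; an empty result means "valid".
        foreach (var resourceDefinition in metricDefinition.Resources.OfType<CacheResourceDefinition>())
        {
            if (string.IsNullOrWhiteSpace(resourceDefinition.CacheName))
            {
                yield return "No cache name is configured";
            }
        }
    }
}
```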
We'll add a new scraper that pulls the metrics from Azure Monitor:

- Implement a scraper that inherits from `AzureMonitorScraper<TResourceDefinition>` and specifies what resource to scrape with Azure Monitor.
    - You can find it in `.\src\Promitor.Core.Scraping\ResourceTypes`.
- Hook your new scraper into our `MetricScraperFactory`, which determines what scraper to use for the passed configuration.
    - You can find it in `.\src\Promitor.Core.Scraping\Factories`.
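A minimal scraper for a hypothetical "Cache" resource type might look like the following. The overridable members of `AzureMonitorScraper<TResourceDefinition>` vary between versions, so treat this as a sketch and compare with an existing scraper in the same folder:

```csharp
// Hypothetical sketch; compare with an existing scraper in
// .\src\Promitor.Core.Scraping\ResourceTypes for the exact base-class contract.
public class CacheScraper : AzureMonitorScraper<CacheResourceDefinition>
{
    // Azure Resource Manager URI template for the resource being scraped.
    private const string ResourceUriTemplate = "subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Cache/Redis/{2}";

    public CacheScraper(ScraperConfiguration scraperConfiguration)
        : base(scraperConfiguration)
    {
    }

    protected override string BuildResourceUri(string subscriptionId, ScrapeDefinition<IAzureResourceDefinition> scrapeDefinition, CacheResourceDefinition resource)
    {
        return string.Format(ResourceUriTemplate, subscriptionId, scrapeDefinition.ResourceGroupName, resource.CacheName);
    }
}
```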
We'll add dynamic resource discovery support by using Azure Resource Graph:

- Implement a new discovery query that creates an Azure Resource Graph query. It should inherit from `ResourceDiscoveryQuery` and be located in `.\src\Promitor.Agents.ResourceDiscovery\Graph\ResourceTypes`.
- Support the new resource type in `ResourceDiscoveryFactory`.
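As a rough sketch, a discovery query for a hypothetical "Cache" resource type could start like this. The abstract members of `ResourceDiscoveryQuery` differ between versions, so mirror an existing query under `Graph\ResourceTypes` for the real contract:

```csharp
// Hypothetical, incomplete sketch of a resource discovery query.
public class CacheDiscoveryQuery : ResourceDiscoveryQuery
{
    // Azure Resource Graph filters on the ARM resource type.
    public override string[] ResourceTypes => new[] { "microsoft.cache/redis" };

    // ResourceDiscoveryQuery declares further abstract members (e.g. parsing
    // query results into a resource definition); implement them by following
    // an existing query in the same folder.
}
```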
📝 Currently we don't have integration tests.

Every new scraper should be automatically tested to ensure that it can be scraped.
To achieve this, the steps are fairly simple:
- Provision a new test resource in our testing infrastructure (GitHub).
- Define a new resource discovery group.
    - You can find it in `config\promitor\resource-discovery\resource-discovery-declaration.yaml`.
- Define a new metric in the scraper configuration with a valid Azure Monitor metric for the service.
    - You can find it in `config\promitor\scraper\metrics.yaml`.
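As a sketch, the two configuration entries for a hypothetical "Cache" resource type could look like this. The group name, metric name, and Azure Monitor metric are all illustrative; verify the actual metric name for the service in Azure Monitor:

```yaml
# config\promitor\resource-discovery\resource-discovery-declaration.yaml
# (hypothetical resource discovery group)
resourceDiscoveryGroups:
  - name: cache-landscape
    type: Cache

# config\promitor\scraper\metrics.yaml
# (hypothetical metric scraped for every resource in the discovery group)
metrics:
  - name: promitor_demo_cache_used_memory
    description: "Amount of memory used by the cache"
    resourceType: Cache
    azureMetricConfiguration:
      metricName: usedmemory
      aggregation:
        type: Average
    resourceDiscoveryGroups:
      - name: cache-landscape
```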
Our testing infrastructure will pick up the new metric automatically and ensure that it is being reported!
Features are great to have, but without clear, well-written documentation they are somewhat useless.
The documentation for Promitor is hosted on docs.promitor.io and is maintained in promitor/docs.
Please follow the instructions in the docs contribution guide for documenting a new scraper.
New scrapers are a great addition, and we should make sure that they are listed in our changelog.
Learn about our changelog in our contribution guide.
Now that you are done, make sure you run Promitor locally to verify that it generates the correct metrics!
When opening the pull request (PR), feel free to copy the generated Prometheus metrics for review.
Learn how to run it in our development guide.