Component(s)

exporter/kafka

Describe the issue you're reporting

I deployed a simple Collector using the OpenTelemetry Operator, and its configuration is as follows:
Since my Kubernetes version is 1.20.11, I used v0.40.0 of the Collector.
My Collector is configured with 1 CPU core and 2 GB of memory.
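For reference, with the Operator the CPU/memory limits, the image version, and the collector pipeline are all declared on the OpenTelemetryCollector custom resource. The manifest below is an illustrative sketch only, not the actual manifest from this issue (the name, OTLP receiver, and logging exporter are assumptions based on the logs shown below):

```yaml
# Illustrative sketch of an OpenTelemetryCollector CR, not the actual manifest from this issue.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simple-collector          # hypothetical name
spec:
  mode: deployment
  image: otel/opentelemetry-collector-contrib:0.40.0
  resources:
    limits:
      cpu: "1"
      memory: 2Gi
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]
```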
Part of the collector's log output is as follows:
2024-12-16T14:36:04.690Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:04.700Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:05.691Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:05.691Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:06.692Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:06.693Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:07.693Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:07.696Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:08.696Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:08.699Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:09.699Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:09.702Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:10.700Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:10.701Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:11.700Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:11.701Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:12.702Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:12.703Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:13.710Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:13.712Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:14.719Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:14.722Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:15.726Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:15.729Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:16.730Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:16.733Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:17.735Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:17.738Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:18.739Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:18.742Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:19.743Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:19.746Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:20.747Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
2024-12-16T14:36:20.751Z INFO loggingexporter/logging_exporter.go:40 TracesExporter {"#spans": 500}
To test the Collector's performance, I sent trace data to it. The Collector's current performance is as follows:
I found that, under the current configuration, CPU usage is relatively high while memory usage is very low.
My question is: is there any other way, or strategy, to improve the Collector's performance? I'm new to OpenTelemetry and hope to get some good advice!
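(For reference, a commonly suggested direction is to batch spans before export and to cap memory with the memory_limiter processor; below is a minimal, illustrative pipeline sketch using the standard batch and memory_limiter processors. The values are placeholders, and this is not my current config.)

```yaml
# Illustrative tuning sketch: batch spans and guard memory. Values are placeholders.
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 1500        # stay below the 2 GiB container limit
    spike_limit_mib: 512
  batch:
    send_batch_size: 8192  # export fewer, larger batches instead of many small ones
    timeout: 5s

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]   # memory_limiter should run first
      exporters: [logging]
```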
Thank you all again for your help.
@VihasMakwana
Firstly, regarding the performance-analysis part, I'll try to configure it now. Thank you.
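(Assuming the performance-analysis suggestion refers to the collector's pprof extension, this is roughly what I plan to enable; a sketch only, with the extension's conventional port:)

```yaml
# Sketch: enable the pprof extension so CPU/heap profiles can be pulled from the collector.
extensions:
  pprof:
    endpoint: 0.0.0.0:1777   # profiling endpoint; port is the extension's usual default

service:
  extensions: [pprof]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging]
```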
Secondly, I'd like to ask: for Kubernetes 1.20.11, is there a way to use a higher version of the Collector? Or can I deploy the Collector independently, without using the OpenTelemetry Operator? I mainly want to use the Kafka exporter's feature of partitioning by traceId.
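(For reference: the collector can also run without the Operator, e.g. as a plain Deployment or DaemonSet using the official otel/opentelemetry-collector-contrib image with a mounted config file. The Kafka feature I'm after would look roughly like this; a sketch only, assuming a contrib release new enough to include the kafka exporter's partition_traces_by_id option, which, as far as I can tell, is not in v0.40.0. Broker and topic names are placeholders.)

```yaml
# Sketch: kafka exporter keeping all spans of a trace in one partition.
# Broker address and topic name are placeholders.
exporters:
  kafka:
    brokers: ["kafka:9092"]
    topic: otlp_spans
    encoding: otlp_proto
    partition_traces_by_id: true   # partition messages by traceId

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [kafka]
```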