diff --git a/site/content/learn/_index.md b/site/content/learn/_index.md
index ea61c4f5b..93d0604e4 100644
--- a/site/content/learn/_index.md
+++ b/site/content/learn/_index.md
@@ -37,6 +37,14 @@ Monitoring and Observability are key concepts when dealing with application qual
description="Learn basics, tools and tips for monitoring your Kubernetes clusters."
link="/learn/kubernetes"
>}}
+{{< doc-card
+ class="three-column-card"
+ headerTag="h3"
+ title="Monitoring"
+ img="/learn/icons/monitoring.svg"
+ description="Learn the fundamentals of monitoring production services."
+ link="/learn/monitoring"
+>}}
diff --git a/site/content/learn/kubernetes/monitoring-metrics.md b/site/content/learn/kubernetes/monitoring-metrics.md
deleted file mode 100644
index 5203313aa..000000000
--- a/site/content/learn/kubernetes/monitoring-metrics.md
+++ /dev/null
@@ -1,88 +0,0 @@
----
-title: Monitoring Kubernetes Metrics - Best Practices and Key Metrics
-displayTitle: Critical metrics for monitoring Kubernetes
-navTitle: Key Metrics
-description: This article presents a comprehensive overview of the key metrics to monitor in a Kubernetes environment and the challenges associated with monitoring in such a complex infrastructure.
-date: 2024-11-08
-author: Nocnica Mellifera
-githubUser: serverless-mom
-displayDescription:
- A comprehensive overview of the key metrics to monitor in a Kubernetes environment and the challenges associated with monitoring in such a complex infrastructure.
-menu:
- learn_kubernetes
-weight: 60
----
-
-# Monitoring Kubernetes Metrics: Best Practices and Key Metrics
-
-This article presents a comprehensive overview of the key metrics to monitor in a Kubernetes environment and the challenges associated with monitoring in such a complex infrastructure.
-
-## The Importance of Monitoring in Kubernetes
-
-Kubernetes introduces a layer of complexity in application deployment and management. Unlike traditional infrastructures where applications were tightly coupled to static hosts, Kubernetes abstracts these elements by distributing containers across a dynamic pool of nodes. This shift necessitates new approaches to monitoring that go beyond traditional metrics associated with individual servers.
-
-Monitoring Kubernetes effectively requires a multifaceted approach that encompasses various layers, from the application itself to the underlying infrastructure. Here are the key layers to consider:
-
-### 1. Application Metrics
-
-At the highest level, monitoring the application metrics is critical. These metrics provide insights into how the application is performing and whether it meets the expectations set by business objectives. The most commonly monitored application metrics include:
-
-- **Response Time**: Measures the time taken to process requests. High response times may indicate issues with application performance or backend services.
-- **Error Rates**: Tracks the percentage of failed requests. A spike in error rates can be an early warning sign of underlying issues.
-- **Throughput**: Measures the number of requests processed over a specific time frame, which helps in understanding the load on the application.
-
-### 2. Service Metrics
-
-Services that support the application, such as databases and message queues, are also essential to monitor. Key service metrics include:
-
-- **Connection Count**: Monitoring active connections to databases can help ensure that they are not overwhelmed and can handle requests efficiently.
-- **Latency**: Measures the time taken for service requests to be completed, helping to identify bottlenecks in service performance.
-- **Resource Utilization**: Tracking CPU and memory usage for services ensures they are not reaching capacity limits.
-
-### 3. Kubernetes Health Metrics (The "Holy Check")
-
-Kubernetes itself requires careful monitoring to ensure that it is managing applications effectively. This involves:
-
-- **Pod Status**: Monitoring the number of running pods against desired states. If the actual count falls below expectations, it can indicate deployment issues.
-- **Deployment Health**: Ensuring that deployments are successful and that rollouts do not cause disruptions.
-- **Replica Set Monitoring**: Confirming that the configured number of replicas is running to maintain service availability.
-
-### 4. Kubernetes Internal Metrics
-
-Kubernetes components, such as the API server, controller manager, and kubelet, also need to be monitored. Key metrics to track include:
-
-- **API Server Latency**: Measures the response time of the API server to ensure it can handle requests efficiently.
-- **Scheduler Performance**: Monitoring scheduling latency can help identify delays in pod placement.
-- **Node Health**: Ensuring that all nodes in the cluster are operational and can support running pods.
-
-### 5. Host Metrics
-
-Despite Kubernetes' abstraction of hosts, monitoring the underlying nodes is still crucial. This includes:
-
-- **Node Resource Utilization**: Keeping an eye on CPU, memory, and disk usage on nodes to prevent overload.
-- **Network Traffic**: Monitoring network I/O on hosts to identify potential bottlenecks or failures in connectivity.
-- **Disk Space Availability**: Ensuring there is adequate disk space for logs, applications, and containers.
-
-## Challenges in Monitoring Kubernetes
-
-While Kubernetes offers powerful capabilities for deploying and managing applications, it also brings several challenges for monitoring:
-
-- **Dynamic Nature of Containers**: The ephemeral nature of containers means that traditional monitoring methods may not be effective. Tools need to adapt to constantly changing environments.
-- **Complex Architectures**: Microservices architectures increase the number of metrics and interactions that must be monitored, making it difficult to obtain a holistic view of application health.
-- **Visibility**: Gaining visibility inside containers can be challenging, as traditional monitoring tools may not be designed to operate within the Kubernetes environment.
-
-## Best Practices for Monitoring Kubernetes Metrics
-
-To effectively monitor Kubernetes metrics, consider the following best practices:
-
-1. **Unified Monitoring Solutions**: Leverage unified monitoring platforms that can aggregate metrics across all layers, providing a single pane of glass for visibility.
-2. **Utilize Existing Tools**: Tools like Prometheus and Grafana are popular for monitoring Kubernetes metrics due to their ability to scrape data and provide visualization.
-3. **Define Key Metrics**: Focus on a limited set of critical metrics that provide actionable insights, avoiding metric overload that can obscure important information.
-4. **Automate Alerts**: Set up automated alerting based on predefined thresholds for key metrics to ensure quick responses to potential issues.
-5. **Regularly Review and Adjust**: Periodically review monitoring setups and adjust metrics and thresholds as application workloads and architectures evolve.
-
-## Conclusion
-
-Monitoring Kubernetes metrics is essential for maintaining the health and performance of applications running in containerized environments. By focusing on key metrics across different layers, organizations can gain valuable insights into both their applications and the underlying infrastructure. Embracing best practices and leveraging appropriate tools will enable teams to enhance their monitoring capabilities, ultimately leading to better performance and reliability in Kubernetes environments.
-
-As organizations continue to evolve their cloud-native practices, understanding the nuances of Kubernetes monitoring will be crucial for operational success.
diff --git a/site/content/learn/monitoring/_index.md b/site/content/learn/monitoring/_index.md
new file mode 100644
index 000000000..76b72bf68
--- /dev/null
+++ b/site/content/learn/monitoring/_index.md
@@ -0,0 +1,10 @@
+---
+title: monitoring index
+displayTitle: Monitoring
+description: Learn Monitoring
+date: 2024-10-17
+author: Nocnica Mellifera
+githubUser: serverless-mom
+displayDescription:
+---
+This is a temporary placeholder for the Monitoring section of Guides
\ No newline at end of file
diff --git a/site/content/learn/monitoring/frontend-monitoring.md b/site/content/learn/monitoring/frontend-monitoring.md
new file mode 100644
index 000000000..22b14fca0
--- /dev/null
+++ b/site/content/learn/monitoring/frontend-monitoring.md
@@ -0,0 +1,246 @@
+---
+title: Frontend Monitoring - Benefits, Challenges, and Top Tools
+displayTitle: Frontend Monitoring - Benefits, Challenges, and Top Tools
+navTitle: Frontend Monitoring
+description: Discover the benefits, challenges, and top tools for frontend monitoring. Learn how to track performance, detect issues, and optimize user experience.
+date: 2024-12-15
+author: Nocnica Mellifera
+githubUser: serverless-mom
+displayDescription:
+ Discover the benefits, challenges, and top tools for frontend monitoring. Learn how to track performance, detect issues, and optimize user experience.
+menu:
+ learn_monitoring
+weight: 10
+---
+
+No matter what internal testing or error monitoring we do for our web services, our end users will interact with those services through a frontend. Frontend monitoring is necessary so that you’re not relying on users to report problems.
+
+Frontend monitoring ensures seamless user experiences by observing and analyzing the performance and functionality of web applications. Edge-case failures, bad third-party service interactions, and poor frontend performance are all examples of issues that only direct frontend monitoring can detect.
+
+In this article, we’ll cover the basics of frontend monitoring, explore its various types, key metrics, benefits, and challenges, and review some top tools to help you manage and optimize your applications effectively.
+
+
+## What is Frontend Monitoring?
+
+Frontend monitoring involves tracking and analyzing the performance, reliability, and user experience of web applications from the user’s perspective. Frontend monitoring can either observe users’ real interactions with the frontend (real user monitoring) or send an automated system to interact with your frontend (synthetic monitoring).
+
+It helps teams identify issues like slow page load times, JavaScript errors, or failed API calls that can negatively impact user satisfaction.
+
+
+## Types of Frontend Monitoring
+
+### Proactive Monitoring
+
+Proactive monitoring involves detecting potential issues before they affect users. Tools simulate user interactions to identify bottlenecks or vulnerabilities. For example, a proactive monitor might [simulate slow network response times](https://www.checklyhq.com/learn/playwright/intercept-requests/) to check how a frontend will render when some components are slow to load. This is comparable to a ‘chaos monkey’ testing strategy for backend services.
+
+### Reactive Monitoring
+
+Reactive monitoring focuses on capturing issues reported by users in real-time. It complements proactive efforts by highlighting problems occurring in production. Reactive monitoring may be as simple as catching JS errors that occur in a page and reporting them to an end service.
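A minimal reactive setup is a global error handler that ships a small report to a collection endpoint. The sketch below is illustrative: the `/errors` endpoint is hypothetical, and the report-shaping logic is kept in a plain function so it can run outside a browser.

```javascript
// Sketch of a reactive error reporter. The "/errors" endpoint is hypothetical.
// buildErrorReport is a plain function so the shaping logic is testable anywhere.
function buildErrorReport(message, source, lineno, colno) {
  return {
    message: String(message),
    source: source || "unknown",
    line: lineno || 0,
    column: colno || 0,
    capturedAt: new Date().toISOString(),
  };
}

// Browser wiring (commented out so the sketch runs outside a browser too):
// window.addEventListener("error", (event) => {
//   const report = buildErrorReport(event.message, event.filename, event.lineno, event.colno);
//   navigator.sendBeacon("/errors", JSON.stringify(report));
// });
```

`navigator.sendBeacon` is used here because it survives page unloads better than `fetch`; a production reporter would also deduplicate and rate-limit reports.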
+
+### Real User Monitoring (RUM)
+
+RUM tracks actual user interactions with your application, offering insights into how real users experience your site in different environments. In theory, RUM is the ideal way to find frontend problems: simply track every user’s experience, everywhere. However, this approach has several challenges:
+
+- Failures of user expectations may go undetected - for example, if a user searches for recent posts and gets posts from 7 years ago, no errors will be raised and no problem will be tracked.
+- RUM can impact browser performance for users.
+- Transmitting data for every user session is quite expensive, both for you as the service provider and for the user’s browser performance and network bandwidth. The usual solution is to sample randomly: when a user starts a session on your site, that session is randomly assigned to be tracked in detail and transmitted to your observability service, or not. This raises the issue of missing key failures: when a key client reports an error on your site but no data was captured, you’re stuck trying to replicate the issue with only a user description.
+- Difficulty finding patterns - user behavior is inherently inconsistent. It’s often very difficult to identify connected failures or trends based on multiple users’ inconsistent behavior. The situation is similar to the sampling problem: we’re left trying to guess what happened based on sketchy information.
+- Complex implementation - from loading a JavaScript package to track user experience in the browser, to endpoints that collect that information, to a system that finds patterns in stochastic user behavior, RUM is a complex technical challenge with extensive technical lift. If you’re trying to create a DIY solution for RUM, you’ll find that [CORS: Cross-Origin Request Blocked](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors) errors are the first of many challenges.
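The random sampling mentioned above is usually made deterministic per session, so every event in a session gets the same keep-or-drop decision. A minimal sketch (the hash function and the idea of hashing the session ID are illustrative, not a specific vendor's algorithm):

```javascript
// Sketch: head-based sampling for RUM sessions. A cheap deterministic hash of
// the session ID decides once, at session start, whether the session is
// recorded, so all events in one session share the same decision.
function hashString(s) {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0; // unsigned 32-bit rolling hash
  }
  return h;
}

// sampleRate is a fraction between 0 (keep nothing) and 1 (keep everything).
function shouldSampleSession(sessionId, sampleRate) {
  return (hashString(sessionId) % 10000) / 10000 < sampleRate;
}
```

Because the decision is a pure function of the session ID, a support engineer can later check whether a given user's session was ever eligible for capture.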
+
+### Synthetic Monitoring
+
+[Synthetic monitoring](https://www.checklyhq.com/blog/what-is-synthetic-monitoring/) can be as simple as sending requests to a service and making sure the response has a `200 OK` status code, but the term is generally used to describe an automated browser that can simulate user behavior. [Checkly uses the open source Playwright framework](https://www.checklyhq.com/docs/browser-checks/playwright-test/) to simulate user behavior in great detail, and make complex assertions about the results.
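The simplest form of synthetic check, a request followed by an assertion on the status, can be sketched as follows. The URL and latency budget are placeholder assumptions; the pass/fail rules live in a pure function so they are easy to test on their own.

```javascript
// Sketch of the simplest synthetic check: fetch a URL, then assert on status
// and latency. assessCheck is pure, so the rules are testable without a network.
function assessCheck(status, latencyMs, maxLatencyMs) {
  if (status !== 200) return { ok: false, reason: `expected 200, got ${status}` };
  if (latencyMs > maxLatencyMs) return { ok: false, reason: `too slow: ${latencyMs}ms` };
  return { ok: true, reason: "passed" };
}

// Runner (Node 18+ has a global fetch; the URL is a placeholder):
// const start = Date.now();
// const res = await fetch("https://example.com/health");
// console.log(assessCheck(res.status, Date.now() - start, 2000));
```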
+
+Synthetic monitoring solves many of the problems listed above with RUM:
+
+- Engineers can create frontend tests to be run that mimic user behaviors and key user paths. By scripting things like user searches, these tests can exactly describe the expected output.
+- Synthetic monitoring doesn’t impact performance for users.
+- Synthetic monitoring’s costs are controlled from the outset, as you control the cadence of synthetic test runs.
+- Patterns are readily identifiable since the behavior of a synthetic user is always the same. Further, by running tests on a cadence, the exact time that failures started is easier to find. This is especially helpful if you’re trying to connect a new failure to a particular deployment.
+- No implementation requirements - synthetic monitoring can be implemented as a 100% external service.
+
+### Application Performance Monitoring (APM)
+
+APM is a general term, usually referring to a combined solution that includes both frontend monitoring and backend performance measurement through installed software agents.
+
+---
+
+## How Frontend Monitoring Works
+
+Frontend monitoring tools capture data through a combination of synthetic tests, browser instrumentation, and real user interactions. These tools generate insights by analyzing performance metrics, logging errors, and tracking user behavior.
+
+---
+
+## Key Components of Frontend Application Monitoring
+
+- **Performance Metrics**: Monitor loading speeds, rendering times, and resource usage.
+- **Error Detection**: Identify JavaScript errors, API failures, and crashes. Crash reporting can be a complex problem but it’s great if it’s possible to report some details from user browser crashes.
+- **User Experience Analysis**: Assess user interactions, engagement, and satisfaction. This general concept is only sensible when performing Real User Monitoring, and may have significant overlap with business intelligence or business analytics. If you find yourself asking ‘what interface elements are most attractive to users?’ your use of the tool has shifted from monitoring to user analytics.
+
+---
+
+## Key Metrics in Frontend Monitoring
+
+The key metrics for frontend monitoring—*performance*, *user interactions*, *errors*, and *availability*—are crucial for understanding and optimizing the user experience. **Performance** metrics, including page load time and Core Web Vitals (e.g., Largest Contentful Paint, Interaction to Next Paint), measure how fast and smoothly content is delivered to users. **User interactions** capture events like clicks, form submissions, and navigations to gauge how users engage with the application. **Errors** track JavaScript exceptions, resource failures, and API issues, helping teams identify and resolve defects impacting functionality. **Availability** monitors uptime and service reachability to ensure the application is consistently accessible. Together, these metrics provide a comprehensive view of application health, enabling teams to improve performance, address issues proactively, and enhance the overall user experience.
+
+Key metrics in detail:
+
+- **Core Web Vitals**: [Largest Contentful Paint](https://web.dev/articles/lcp) (LCP), [Interaction to Next Paint](https://web.dev/articles/inp) (INP), and Cumulative Layout Shift (CLS). These metrics are critical for search engine optimization as well as user experience.
+- **JavaScript Error Rates**: Frequency of client-side code failures. You may need to implement filtering for common errors.
+- **API Response Times**: Speed and reliability of API calls. This may be called ‘heartbeat monitoring’ if you’re only measuring the reliability of straightforward `GET` requests.
+- **Network Request Failures**: Broken or delayed network requests.
+- **Errors and Crashes**: Stability of the application under different conditions.
+- **User Interactions and Engagement**: Clicks, scrolls, and session durations.
+
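As one concrete example, LCP can be collected in the browser with the standard `PerformanceObserver` API. The summarizing helper below is our own illustration, not part of any particular monitoring SDK:

```javascript
// Sketch: collecting Largest Contentful Paint with the Performance API.
// The browser reports successive LCP candidates; the final value is the
// last entry reported, which finalLcp extracts.
function finalLcp(entries) {
  return entries.length ? entries[entries.length - 1].startTime : null;
}

// Browser wiring (commented out; PerformanceObserver exists only in browsers):
// new PerformanceObserver((list) => {
//   console.log("LCP (ms):", finalLcp(list.getEntries()));
// }).observe({ type: "largest-contentful-paint", buffered: true });
```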
+---
+
+## Common Use Cases of Frontend Monitoring
+
+- **Monitoring Page Load Times**: Ensure optimal page rendering speed.
+- **Tracking Third-Party Services**: No amount of pre-deploy testing with stubs of third party services can find all the possible interaction problems with those services. Frontend monitoring can detect issues caused by external libraries or APIs.
+- **User Interaction Monitoring**: Analyze behavior patterns and engagement levels.
+- **Analyzing Client-Side Errors**: Identify and resolve JavaScript issues, especially those happening on single platforms or with particular browser versions.
+
+---
+
+## Benefits of Frontend Monitoring
+
+### Proactive Issue Detection
+
+Frontend monitoring empowers teams to detect potential problems before they escalate into critical failures. By observing patterns such as sudden increases in error rates or performance degradation, teams can investigate and resolve issues early, often before users are impacted. This phase of the monitoring journey focuses on "known knowns," enabling developers to answer key questions like, “What happened during a spike in errors?” By building a narrative around historical data, teams can use these insights to improve system resilience.
+
+### Precise Performance Insights
+
+Performance monitoring provides actionable data that highlights bottlenecks and inefficiencies in applications. Metrics such as Largest Contentful Paint or Interaction to Next Paint help developers understand where delays occur and prioritize optimization efforts. These insights shift the focus from merely reacting to known issues toward analyzing anomalies, such as unexpectedly fast or slow responses. This phase aligns with "known unknowns," where developers explore statistical questions to assess how normal or abnormal system behavior is.
+
+### Real-Time User Experience Analysis
+
+Frontend monitoring enables developers to track user interactions, such as clicks, form submissions, or navigation paths, providing visibility into how users engage with an application. These insights help identify friction points in real-time, allowing teams to address usability challenges proactively. By adding context-specific data, such as user or session identifiers, developers can refine their analysis and enhance the customer experience. This phase blends analysis with experimentation, helping developers test hypotheses about what improves engagement or usability.
+
+### Resource Allocation Efficiency
+
+By identifying specific areas that require attention, monitoring data enables teams to focus their efforts where they matter most. For instance, if metrics reveal that a checkout flow is a primary source of performance complaints, resources can be redirected toward optimizing that functionality. This approach not only improves the end-user experience but also ensures the efficient use of time and budget. This aligns with the later phases of the observability journey, where teams experiment with targeted solutions and evaluate the ROI of their changes.
+
+---
+
+## Challenges of Frontend Monitoring
+
+### Managing Tool Sprawl and Integration Issues
+
+The abundance of frontend monitoring tools can create silos, making it challenging to consolidate insights across systems. Teams often struggle to integrate disparate tools like performance trackers, error loggers, and user interaction analytics. This fragmentation can hinder the ability to see the full picture, leaving blind spots in observability. Addressing this challenge requires adopting systems that unify logs, metrics, and traces to provide a cohesive view of the application’s behavior.
+
+### Handling Diverse User Environments
+
+Applications are accessed through a wide array of devices, browsers, and network conditions, each introducing unique challenges. For example, a feature that works seamlessly on one browser may fail on another, and network latency can vary dramatically across regions. Monitoring tools must account for this diversity by capturing data that reflects real-world conditions. Understanding these "known unknowns" helps developers identify anomalies across different environments and adapt their solutions to improve the experience for all users.
+
+### Keeping Up with Rapid Technological Changes
+
+The fast pace of evolution in frontend technologies, including frameworks and browser standards, makes it difficult to maintain effective monitoring. What works for today’s stack might become obsolete as new features and tools emerge. Teams must stay agile, continually updating their instrumentation and observability practices to align with the latest advancements. This requires a culture of experimentation and learning, allowing developers to test and adopt new solutions without losing focus on system stability.
+
+### Limited Root Cause Analysis with Basic Tools
+
+Entry-level monitoring tools often focus on surface-level insights, such as error counts or simple performance metrics, but fail to provide the deeper diagnostics needed to identify root causes. For example, they may show that an API call failed but not explain why. This limitation makes it difficult to move from the "what" to the "why," hindering the ability to address systemic issues. Advanced observability tools that provide context-rich data and enable correlation across systems are essential for overcoming this challenge.
+
+### Ensuring Full-Stack Visibility
+
+Frontend monitoring alone cannot provide a complete view of an application’s health. Many issues arise at the intersection of the frontend and backend, requiring integrated observability across the entire stack. Without this integration, teams risk spending excessive time proving whether an issue is frontend- or backend-related. Implementing context propagation and unified tracing, such as through OpenTelemetry, helps connect frontend events to backend processes, enabling a more holistic understanding of system behavior and streamlining troubleshooting efforts. For more detail on connecting backend OpenTelemetry traces with frontend performance see how [Checkly Traces connects data from across your stack](https://www.checklyhq.com/docs/traces-open-telemetry/how-it-works/).
+
+## Top frontend monitoring tools
+
+| Tool | Features | Notes |
+| --- | --- | --- |
+| **Checkly** | Uses Playwright end-to-end tests for synthetic monitoring. Robust alerts to detect issues early. Integrates with OpenTelemetry traces. | Excellent for proactive monitoring and integration with backend systems. |
+| **Sematext** | Backend monitoring with [an open-source data collection agent](https://github.com/sematext/sematext-agent-java). | Suitable for teams needing a lightweight, open-source-friendly solution. |
+| **Pingdom** | Specializes in uptime monitoring and performance tracking. | Focused on simple up-or-down monitoring. Limited in scope compared to other tools. |
+| **Google PageSpeed Insights** | Provides performance recommendations based on real-world data. | Primarily an auditing tool, best for reactive monitoring. |
+| **New Relic Browser** | Offers frontend monitoring with APM integration. Provides Real User Monitoring (RUM) capabilities. | Comprehensive but costly; best for large-scale applications that need detailed user insights and can accept lock-in to the closed New Relic ecosystem. |
+| **Sentry** | Focused on error tracking more than traditional monitoring. | Ideal for teams prioritizing debugging JavaScript errors and crashes. |
+| **Dynatrace** | Full-stack monitoring with AI-driven insights. | Expensive, with limited innovative features. AI insights often summarize existing data. |
+| **AppDynamics** | Monitors both frontend and backend performance, another ‘all in one’ APM tool. | Costly enterprise-level tool, suited for large organizations with complex environments. If you’re using AppD, chances are your team has been using it for 5+ years! |
+
+---
+
+## How to Choose the Right Frontend Monitoring Tool
+
+Consider features like synthetic monitoring, RUM, session replay, language compatibility, and security. Evaluate pricing, ease of use, and whether an agent or agentless setup fits your needs.
+
+## Frontend monitoring and OpenTelemetry
+
+Frontend monitoring has historically lagged behind backend systems in sophistication and integration. With OpenTelemetry (OTel), frontend monitoring can now achieve deeper insights into user experiences. However, it also brings unique challenges that require careful consideration.
+
+### Challenges in Using OpenTelemetry for Frontend Monitoring
+
+#### 1. **Initial Complexity of Setup**
+
+While OpenTelemetry provides out-of-the-box instrumentation, setting it up for a browser-based application requires upfront effort. Instrumentation code must load before the application initializes to capture critical spans like document load times. Ensuring this is correctly implemented across environments can be a source of frustration.
+
+#### 2. **Performance Overhead**
+
+Instrumenting a frontend app involves adding listeners for browser events, fetching metrics, and propagating trace headers. Over-instrumentation or poorly optimized spans can degrade application performance, especially for resource-intensive pages or on devices with limited capabilities.
+
+#### 3. **Handling Browser-Specific Nuances**
+
+Browsers have unique behaviors that can make instrumentation challenging. For example:
+
+- **Redirects and Network Timing:** JavaScript doesn’t have access to certain browser-level events, like pre-redirect network timing. Combining OpenTelemetry's network instrumentation with browser APIs like `PerformanceObserver` can help, but it requires extra configuration.
+- **Clock Synchronization:** Distributed tracing relies on accurate timestamps. However, clock drift between client devices and servers can result in misaligned spans. Proxying timestamp corrections through an [OpenTelemetry Collector](https://www.checklyhq.com/learn/opentelemetry/what-is-the-otel-collector/) is often necessary.
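One common correction for clock drift, sketched here under the assumption of a symmetric round trip and a hypothetical server time endpoint, is the classic NTP-style midpoint estimate:

```javascript
// Sketch: NTP-style clock offset estimate. clientSend and clientReceive are
// local timestamps around a request to a hypothetical server time endpoint;
// serverTime is the server's timestamp from the response. We assume the server
// read its clock halfway through the round trip.
function estimateClockOffsetMs(clientSend, serverTime, clientReceive) {
  const midpoint = clientSend + (clientReceive - clientSend) / 2;
  return serverTime - midpoint;
}

// Apply the offset when emitting span timestamps:
function correctedTimestamp(localMs, offsetMs) {
  return localMs + offsetMs;
}
```

The symmetric-round-trip assumption is the weak point: asymmetric network paths bias the estimate, which is why averaging over several probes is typical.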
+
+#### 4. **Data Volume and Rate Limiting**
+
+Frontend telemetry can quickly generate a high volume of spans, especially on high-traffic applications. Without rate limiting or filtering, this data can overwhelm storage systems or increase monitoring costs. Developers must design selective instrumentation strategies to focus on high-value spans.
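A common selective strategy is a client-side rate limiter in front of the span exporter. The token-bucket sketch below is illustrative; the rates are placeholders, not recommendations:

```javascript
// Sketch: a token-bucket limiter for span export, capping bursts of frontend
// telemetry. maxPerSecond is the steady export rate; burst is the bucket size.
function makeSpanLimiter(maxPerSecond, burst) {
  let tokens = burst;
  let last = 0;
  return function allow(nowMs) {
    // Refill tokens for the time elapsed since the last decision.
    tokens = Math.min(burst, tokens + ((nowMs - last) / 1000) * maxPerSecond);
    last = nowMs;
    if (tokens >= 1) {
      tokens -= 1;
      return true; // export this span
    }
    return false; // drop (or queue) this span
  };
}
```

Dropping at the client keeps both network cost and backend ingest predictable, at the price of losing some spans during bursts.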
+
+#### 5. **Contextual Relevance of Spans**
+
+While auto-instrumentation provides useful baselines like resource fetch times and click events, it lacks application-specific context. Developers need to enhance these spans with attributes that matter to their business logic—such as user IDs, session data, or interaction details. Without this, telemetry risks becoming just noise.
+
+#### 6. **Debugging and Observability Gaps**
+
+Traditional frontend monitoring tools often report *what* is happening (e.g., a page is slow), but lack insights into *why*. OpenTelemetry addresses this by correlating frontend spans with backend traces. However, this requires propagation of trace headers (`traceparent`) between the frontend and backend, which can be technically challenging in distributed systems.
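The `traceparent` header has a fixed W3C Trace Context format (version, 32-hex-digit trace ID, 16-hex-digit span ID, flags). The helpers below sketch how it is built and parsed; in a real application an OpenTelemetry SDK would manage this header for you:

```javascript
// Sketch: building and parsing a W3C traceparent header, which is what ties
// a frontend span to the backend trace it triggered.
function buildTraceparent(traceId, spanId, sampled) {
  return `00-${traceId}-${spanId}-${sampled ? "01" : "00"}`;
}

function parseTraceparent(header) {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (!m) return null;
  return { version: m[1], traceId: m[2], spanId: m[3], sampled: m[4] === "01" };
}
```

The backend reads this header from incoming requests and records the frontend span as the parent, which is what makes frontend-to-backend correlation possible.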
+
+OpenTelemetry opens a new frontier for frontend observability, but it’s not a plug-and-play solution. Teams must balance instrumentation depth with performance and focus on collecting the data that best illuminates user experiences. By addressing these challenges head-on, developers can achieve a robust, connected view of their systems and elevate the standard of frontend monitoring.
+
+## Developing a Frontend Monitoring Strategy: A Step-by-Step Guide
+
+1. **Identify Key Metrics**: Focus on what matters most to your application.
+2. **Select Tools**: Choose tools that integrate with your tech stack.
+3. **Monitor During High-Usage Periods**: Analyze performance under peak load.
+4. **Integrate Insights**: Align findings with business goals.
+
+---
+
+## How Can Checkly Help with Frontend Monitoring?
+
+Checkly performs synthetic monitoring with the power of Playwright for robust testing of key workflows. By simulating real-world scenarios, it ensures applications remain functional and performant under varying conditions.
+
+Effective frontend monitoring ensures the seamless performance and reliability of websites and applications, delivering a strong user experience. Checkly excels in this area by combining the power of end-to-end (E2E) monitoring and modern development workflows to proactively identify and resolve issues before they impact users.
+
+Checkly enables developers to monitor critical user flows—like login, search, and checkout—in real-time. By leveraging tools like Playwright, Checkly runs automated scripts simulating real user interactions. This synthetic monitoring approach provides continuous feedback on the health of frontend applications, revealing issues that often go undetected in pre-production testing.
+
+For example, Checkly allows teams to:
+
+- Run E2E tests on a cadence to catch edge cases and unpredictable failures.
+- Test frontend flows directly in production, monitoring the behavior users actually experience.
+- Extend monitoring to include application performance metrics like load times and rendering speed.
+
+### Advantages of Checkly’s Approach
+
+1. **Integration with Existing Workflows**
+
+ Checkly fits seamlessly into CI/CD pipelines, enabling checks to trigger automatically on deployments or pull requests. This "[monitoring as code](https://www.checklyhq.com/guides/monitoring-as-code/)" (MaC) approach allows developers to maintain monitoring scripts alongside the application codebase.
+
+2. **Resource-Efficient Testing**
+
+ By utilizing headless browser automation tools, Checkly achieves higher stability and faster execution compared to traditional headful testing methods. This means frontend monitoring can run efficiently in cloud environments without consuming excessive resources.
+
+3. **Real-Time Alerting**
+
+ When checks fail, Checkly alerts teams immediately through preferred channels like Slack, PagerDuty, or email. This ensures swift action to resolve issues and minimize user impact.
+
+4. **Global Monitoring Capability**
+
+ Checkly's cloud-based infrastructure lets teams run tests from multiple geographic locations, ensuring frontend performance remains consistent for users worldwide.
+
+
+## Conclusion
+
+Frontend monitoring is essential for maintaining application reliability and delivering exceptional user experiences. By leveraging the right tools and strategies, teams can proactively address issues, optimize performance, and meet the expectations of modern web users. Consider tools like Checkly to simplify and strengthen your monitoring efforts.
\ No newline at end of file
diff --git a/site/content/learn/monitoring/web-application-monitoring.md b/site/content/learn/monitoring/web-application-monitoring.md
new file mode 100644
index 000000000..38dd31143
--- /dev/null
+++ b/site/content/learn/monitoring/web-application-monitoring.md
@@ -0,0 +1,326 @@
+---
+title: Web Application Monitoring - Types, Benefits & Top 9 Tools
+displayTitle: Web Application Monitoring - Types, Benefits & Top 9 Tools
+navTitle: Web Application Monitoring
+description: Explore web application monitoring to boost performance and reliability with real user insights, performance tracking, and top tools.
+date: 2024-12-15
+author: Nocnica Mellifera
+githubUser: serverless-mom
+displayDescription: Explore web application monitoring to boost performance and reliability with real user insights, performance tracking, and top tools.
+menu:
+ learn_monitoring
+weight: 20
+---
+
+## What is Web Application Monitoring?
+
+Web application monitoring refers to the practice of observing, tracking, and managing the performance, availability, and reliability of web applications. It ensures users have a seamless experience and minimizes disruptions by identifying issues in real time. All that’s required for something to be a ‘web application’ is that it’s more than a static site and that it takes requests via the internet. Even an API accessed mainly by other programs rather than by browsers is a web application!
+
+Web application monitoring is a highly generalized term, and distinctions between, for example, ‘metrics’ and ‘monitoring,’ or ‘error tracking’ versus ‘performance monitoring,’ are often distinctions without a difference. The question we’re trying to answer with web application monitoring is: “how well is our application performing for users?” Observations that don’t relate to this question—for example, the popularity of a single post on our social media site, or how well our live-generated site follows brand standards—are outside the scope of web application monitoring.
+
+## How Web Application Monitoring Works
+
+Monitoring tools collect data from your web application, servers, and user interactions. They process this data to generate insights about application health, performance, and user behavior. Alerts, dashboards, and reports provide actionable insights for resolving issues and optimizing performance.
+
+## Types of Web Application Monitoring
+
+Monitoring a web application requires understanding various dimensions of performance, usability, and security. Each type of monitoring addresses specific aspects of the application to ensure seamless operation. Let’s explore these types in greater detail.
+
+### Synthetic Monitoring
+
+[Synthetic monitoring](https://www.checklyhq.com/docs/) uses pre-recorded scripts to simulate user interactions with your web application. By performing these synthetic transactions, you can test key functionalities such as page load times, form submissions, or API calls without relying on real user activity.
+
+- Benefits: It is proactive, enabling teams to detect and resolve issues before users are impacted. Synthetic monitoring is particularly effective for testing uptime, availability during off-hours, and the impact of new deployments.
+- Example Use Case: Running tests on an e-commerce checkout page to ensure it processes payments correctly after a new update.
+- Challenges: It doesn’t capture real-world user behaviors, so it should complement, not replace, real user monitoring.
+
+See our complete guide to Synthetic Monitoring for a deeper dive.
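To make the check-on-a-cadence idea concrete, here is a minimal, self-contained sketch in Python. The `checkout_flow` function is a hypothetical stand-in for a real scripted transaction (in practice, a Playwright script driving a browser); the harness logic is the part to focus on: run the transaction repeatedly, treat exceptions as failures, and record pass/fail with timing.

```python
import time

def checkout_flow() -> bool:
    """Hypothetical stand-in for a scripted transaction, e.g. a
    Playwright script that adds an item to a cart and pays."""
    return True  # the flow completed without errors

def run_synthetic_check(transaction, runs=3, interval_s=0.0):
    """Run a scripted transaction on a cadence and record results."""
    results = []
    for _ in range(runs):
        start = time.monotonic()
        try:
            ok = transaction()
        except Exception:
            ok = False  # any unhandled error counts as a failed check
        results.append({"ok": ok, "duration_s": time.monotonic() - start})
        time.sleep(interval_s)
    return results

results = run_synthetic_check(checkout_flow)
failures = [r for r in results if not r["ok"]]
print(f"{len(results)} runs, {len(failures)} failures")
```

A real synthetic monitoring platform adds scheduling, geographic distribution, and alerting on top of this loop; the core contract—scripted transaction in, pass/fail and timing out—stays the same.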
+
+---
+
+### Real User Monitoring (RUM)
+
+Real User Monitoring (RUM) captures real-time data from actual users as they interact with your application. By embedding a lightweight tracking code into the application, RUM collects metrics like page load times, errors, and user interactions.
+
+- Benefits: It provides insights into actual user experiences, helping identify regional performance variations, device-specific issues, or areas of improvement in UX design.
+- Example Use Case: Tracking how mobile users from Europe experience an application compared to desktop users in North America.
+- Challenges: RUM may require substantial data processing infrastructure to analyze the vast volume of user interaction data in real time.
+
+---
+
+### Application Performance Monitoring (APM)
+
+APM focuses on monitoring and optimizing application-level metrics such as response times, throughput, memory consumption, and database query performance. APM tools provide deep visibility into application behavior, often by instrumenting code to measure key metrics.
+
+- Benefits: APM is essential for identifying performance bottlenecks, such as slow database queries, inefficient APIs, or memory leaks.
+- Example Use Case: Diagnosing why a specific API endpoint is causing latency spikes under heavy load.
+- Challenges: Implementing APM requires careful planning, as excessive instrumentation can add overhead to the application.
+
+### Error Tracking and Logging
+
+Error tracking focuses on detecting, logging, and analyzing application errors to help developers diagnose and fix issues effectively. Logs capture details about errors and events, providing valuable context for debugging.
+
+- Pitfalls of Defining Errors:
+ - What is an error? The definition varies widely. For some, an error might include user-facing error messages or unhandled exceptions. For others, it might extend to slow-loading assets or deprecated API warnings.
+ - Signaling Theory: Effective error tracking relies on understanding what each sensor (or monitoring system) is intended to capture. Without clear definitions, teams risk alert fatigue or missing critical issues.
+- Benefits: Centralized logging reduces the time to identify the root cause of an issue, while structured error tracking can prioritize issues affecting user experience.
+- Example Use Case: Monitoring uncaught exceptions in a JavaScript application and prioritizing fixes for errors impacting 10% of users.
+- Challenges: Over-logging can lead to excessive noise, making it harder to find actionable insights.
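Prioritizing by user impact, as in the 10%-of-users example above, is mostly a grouping exercise. A self-contained sketch (error fingerprints and user counts are made up for illustration):

```python
from collections import defaultdict

# Hypothetical error events: (error_fingerprint, user_id)
events = [
    ("TypeError:cart.js:42", "u1"),
    ("TypeError:cart.js:42", "u2"),
    ("TimeoutError:api/search", "u2"),
    ("TypeError:cart.js:42", "u3"),
]
total_users = 10  # active users in the same time window

# Count distinct users per error, not raw event volume.
users_by_error = defaultdict(set)
for fingerprint, user in events:
    users_by_error[fingerprint].add(user)

# Rank errors by the share of users they affect.
impact = sorted(
    ((len(users) / total_users, fp) for fp, users in users_by_error.items()),
    reverse=True,
)
for share, fp in impact:
    print(f"{share:.0%} of users hit {fp}")
```

Ranking by distinct users affected, rather than by event count, keeps one noisy retry loop from drowning out an error that quietly blocks a large slice of your audience.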
+
+### Uptime Monitoring
+
+Uptime monitoring ensures that your web application is available and responsive by periodically sending requests to check its status. Most uptime checks verify that a service responds with a `200 OK` HTTP status.
+
+- Benefits: Uptime monitoring offers a straightforward way to track availability, often serving as the first line of defense against outages.
+- Example Use Case: Monitoring whether an online banking platform’s login page is accessible to users.
+- Challenges: Simple uptime checks don’t account for partial outages or degraded performance (e.g., slow response times), or for responses that look fine to a script but not to users (e.g., the page loads but the only text reads ‘server error’).
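A naive uptime check asserts only on the status code; a slightly smarter one guards against the pitfalls just listed by also inspecting latency and the response body. A minimal sketch (the function name, thresholds, and body heuristic are illustrative, not a prescribed standard):

```python
def evaluate_check(status: int, latency_ms: float, body: str) -> str:
    """Classify a single uptime probe. Thresholds are illustrative."""
    if status != 200:
        return "fail"        # hard outage
    if "server error" in body.lower():
        return "fail"        # 200 OK, but the page itself reports an error
    if latency_ms > 2000:
        return "degraded"    # up, but too slow for users
    return "pass"

print(evaluate_check(200, 120, "<h1>Welcome back</h1>"))   # pass
print(evaluate_check(200, 120, "Server Error"))            # fail
print(evaluate_check(200, 4500, "<h1>Welcome back</h1>"))  # degraded
```

The middle case is the one plain uptime checks miss: the server answered `200 OK`, but what users actually saw was an error page.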
+
+### Server and Infrastructure Monitoring
+
+Server and infrastructure monitoring tracks the health of the hardware, virtual machines, or cloud infrastructure that supports your application. It collects data on CPU usage, memory availability, disk I/O, and network traffic.
+
+- Benefits: Essential for ensuring that the underlying infrastructure can meet application demands, especially during peak loads.
+- Example Use Case: Monitoring CPU usage to detect bottlenecks during a holiday sale on an e-commerce site.
+- Challenges: Infrastructure monitoring can produce misleading signals in scenarios like network outages that reduce user traffic but increase resource availability, falsely suggesting optimal performance.
+
+---
+
+## Key Metrics to Monitor
+
+- Response Time: Time taken by the application to respond to requests.
+- Error Rate: Frequency of errors in the application.
+- Throughput: Number of requests processed over a time frame.
+- Uptime and Availability: Percentage of time the application is operational.
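Given a batch of request records, the metrics above fall out of straightforward arithmetic. A self-contained sketch with made-up sample data (the nearest-rank p95 shown here is one common convention; monitoring tools vary in how they interpolate percentiles):

```python
# Each record: (response_time_ms, http_status)
requests = [
    (120, 200), (95, 200), (310, 200), (180, 500),
    (2200, 200), (140, 200), (99, 200), (160, 503),
]
window_seconds = 60

count = len(requests)
errors = sum(1 for _, status in requests if status >= 500)
times = sorted(t for t, _ in requests)

avg_response_ms = sum(times) / count
p95_response_ms = times[int(0.95 * (count - 1))]  # nearest-rank percentile
error_rate = errors / count
throughput_rps = count / window_seconds

print(f"avg={avg_response_ms:.0f}ms p95={p95_response_ms}ms "
      f"errors={error_rate:.1%} throughput={throughput_rps:.2f} req/s")
```

Note how the single 2200 ms outlier drags the average far above the typical request, which is why percentiles, not averages, are the usual basis for response-time alerting.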
+
+## Benefits of Web Application Monitoring
+
+### Instant Downtime Alerts
+
+By monitoring your web application directly, you have a better chance of knowing exactly when your service goes down. By alerting on metrics like response time and volume, you can get early indicators of growing problems.
+
+### Find trends before they become problems
+
+Often, poor performance doesn’t happen all at once. For every user who gets in touch complaining of slow response times, dozens or hundreds will simply have abandoned your service and written off the quality of your site. Without web application monitoring, you’ll lose users for some time before you’re even aware of a problem.
+
+## Limitations of Web Application Monitoring
+
+### Dynamic Content
+
+Monitoring tools may struggle with rapidly changing, personalized content. A robust testing framework like [Playwright](https://www.checklyhq.com/docs/browser-checks/playwright-test/) can be helpful for writing smarter assertions about how interfaces *should* look.
+
+### Cross-purposes
+
+As mentioned in the introduction, Web Application Monitoring has a specific use: finding how well a service performs for users. Once the use case expands into security monitoring or business analytics, mission creep can kill your efficiency.
+
+## Best Practices for Monitoring Web Applications
+
+Effective monitoring is essential to maintaining the performance, availability, and security of web applications. However, monitoring is more than just collecting data; it’s a strategic process of learning and acting based on insights. By following these best practices, organizations can ensure their monitoring efforts are both meaningful and actionable.
+
+---
+
+### Set Clear Objectives
+
+Before implementing monitoring, define what success looks like for your application. Monitoring without clear goals often leads to data overload without actionable insights.
+
+- Why It Matters: Objectives guide what data you collect and how you interpret it. For example, minimizing downtime might focus on uptime monitoring and incident alerts, while improving user experience might emphasize performance metrics like load time and responsiveness.
+- How to Do It: Align your objectives with business goals. For instance, if your goal is to increase conversion rates, focus on monitoring checkout processes and page performance.
+
+---
+
+### Choose the Right Monitoring Tools
+
+Selecting the right tools for your web application’s unique needs is critical. Not all monitoring tools are created equal, and mismatched tools can lead to unnecessary complexity or blind spots.
+
+- Why It Matters: A tool tailored to your application architecture provides more relevant data and reduces noise.
+- How to Do It: Assess your application stack (e.g., serverless, microservices, or monolithic), the types of metrics you need, and your team’s familiarity with specific tools. For example, use Prometheus for metrics, Loki for logs, and Jaeger for tracing in a Kubernetes-based application.
+
+---
+
+### Define Key Performance Indicators (KPIs)
+
+KPIs translate business objectives into measurable metrics, bridging the gap between technical monitoring and organizational goals.
+
+- Why It Matters: Without KPIs, monitoring efforts can lack focus, leading to wasted resources and misaligned priorities.
+- How to Do It: Identify KPIs that directly affect user experience or business outcomes, such as uptime, response time, error rates, or user engagement metrics. For example, define a goal like “99.9% uptime for key services over a month.”
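An uptime KPI like “99.9% over a month” translates directly into an error budget: the total downtime you can afford in the window. A short sketch of that arithmetic (the function name is illustrative):

```python
def allowed_downtime_minutes(slo_pct: float, days: int = 30) -> float:
    """Error budget implied by an uptime SLO over a window of days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - slo_pct / 100)

for slo in (99.0, 99.9, 99.99):
    print(f"{slo}% uptime over 30 days -> "
          f"{allowed_downtime_minutes(slo):.1f} minutes of downtime")
```

Each extra nine cuts the budget tenfold: 99.9% over 30 days leaves roughly 43 minutes, while 99.99% leaves barely four, which is worth knowing before committing to a KPI.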
+
+---
+
+### Monitor User Experience
+
+Modern monitoring goes beyond infrastructure to focus on the end-user journey. Users don’t care if your CPU is underutilized—they care if your site loads quickly and works smoothly.
+
+- Why It Matters: User experience (UX) monitoring ensures that technical performance aligns with user satisfaction and retention.
+- How to Do It: Combine Real User Monitoring (RUM) and Synthetic Monitoring to capture both actual and simulated user interactions. Focus on load times, Time to Interactive (TTI), and error rates that directly impact UX.
+
+---
+
+### Implement Continuous Monitoring
+
+Web applications operate in dynamic environments, where issues can arise at any time. Continuous monitoring ensures constant vigilance.
+
+- Why It Matters: Continuous monitoring helps teams catch problems early, reducing downtime and improving system reliability.
+- How to Do It: Automate monitoring across all layers of your stack—servers, APIs, front-end performance, and user interactions. Use tools like CI/CD integrations to monitor deployments for potential issues.
+
+---
+
+### Be Proactive with Alerting and Notifications
+
+Alert fatigue is a common problem in monitoring, where too many notifications desensitize teams. A proactive approach focuses on actionable alerts.
+
+- Why It Matters: Timely and meaningful alerts enable faster incident resolution while avoiding unnecessary noise.
+- How to Do It: Configure alerts for critical thresholds and anomalies. For instance, set alerts for unusual spikes in response time or memory usage, but suppress notifications for predictable auto-scaling events.
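One simple way to alert on “unusual spikes” rather than fixed thresholds is to compare the latest value against a rolling baseline. A minimal sketch, assuming response times are roughly stable under normal load (the window and the `k = 3` multiplier are illustrative tuning choices, not universal constants):

```python
from statistics import mean, stdev

def is_anomalous(history, latest, k=3.0):
    """Flag a value more than k standard deviations above the
    rolling baseline. Window size and k are illustrative choices."""
    baseline, spread = mean(history), stdev(history)
    return latest > baseline + k * spread

# Recent response times in ms (the rolling window).
response_times_ms = [110, 95, 120, 105, 115, 100, 98, 112]
print(is_anomalous(response_times_ms, 118))  # normal variation, no alert
print(is_anomalous(response_times_ms, 900))  # alert-worthy spike
```

Because the baseline moves with the data, predictable drift (say, traffic growing week over week) stops triggering alerts on its own, which is exactly the noise reduction proactive alerting is after.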
+
+---
+
+### Analyze and Act on Monitoring Data
+
+Data alone is not valuable unless it leads to action. Effective monitoring transforms raw data into insights that drive meaningful improvements.
+
+- Why It Matters: Many organizations collect vast amounts of monitoring data but fail to act on it, leaving potential optimizations on the table.
+- How to Do It: Establish regular review processes to analyze trends, identify recurring issues, and implement fixes. For example, a monthly review of error logs can reveal patterns like frequently failing endpoints.
+
+---
+
+### Implement Synthetic Monitoring
+
+Synthetic monitoring simulates user activity to proactively identify potential issues.
+
+- Why It Matters: This type of monitoring allows teams to test functionality and performance before users are affected.
+- How to Do It: Use scripts to mimic common user actions, such as navigating pages, submitting forms, or using APIs. Test critical user paths regularly, especially after updates or deployments.
+
+---
+
+### Leverage Real User Monitoring (RUM)
+
+RUM provides insights based on actual user interactions, capturing the diversity of real-world experiences.
+
+- Why It Matters: Real user data reflects the performance users experience, including regional differences, device-specific issues, and varying network conditions.
+- How to Do It: Deploy lightweight tracking scripts to collect metrics such as page load time, interaction speed, and error rates. Segment data by user demographics or device type for targeted improvements.
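Segmentation is where RUM data earns its keep: an overall median can look healthy while one device class suffers. A self-contained sketch with hypothetical beacon data:

```python
from collections import defaultdict
from statistics import median

# Hypothetical RUM beacons: (device_type, page_load_ms)
beacons = [
    ("mobile", 2400), ("desktop", 900), ("mobile", 3100),
    ("desktop", 1100), ("mobile", 2800), ("desktop", 850),
]

# Bucket load times by device type.
loads = defaultdict(list)
for device, ms in beacons:
    loads[device].append(ms)

# Per-segment medians surface gaps that an overall median would hide.
medians = {device: median(ms_list) for device, ms_list in loads.items()}
for device, med in sorted(medians.items()):
    print(f"{device}: median load {med} ms")
```

Here desktop users see sub-second loads while mobile users wait nearly three seconds—a finding that a single blended number would have averaged away.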
+
+---
+
+### Conduct Regular Performance Audits
+
+Periodic audits ensure that your monitoring strategy remains effective and that your application continues to meet performance expectations.
+
+- Why It Matters: Web applications evolve over time, and so do the challenges they face. Regular audits help identify outdated metrics, unnecessary alerts, and new performance bottlenecks.
+- How to Do It: Schedule audits to review KPIs, monitoring coverage, and tool configurations. For instance, ensure your monitoring setup includes new microservices added to your architecture.
+
+---
+
+### Tie Best Practices to Business Goals
+
+Effective monitoring isn’t just about data collection—it’s about using data to improve your application and achieve your organization’s goals. By integrating these best practices into your strategy, you ensure that monitoring becomes a driver of growth, user satisfaction, and operational excellence.
+
+## Top 9 Web Application Monitoring Tools
+
+### 1. Datadog
+
+Datadog provides a unified platform that integrates metrics, logs, and traces for comprehensive monitoring. It includes features like Real User Monitoring (RUM), Synthetic Monitoring, and Application Performance Monitoring (APM).
+
+Personal Experience: When I worked on a large e-commerce platform, Datadog was indispensable for tracking the performance of our microservices. However, the pricing became a pain point as our usage scaled. Integrating some of our custom Prometheus metrics required additional work, and once we were deep into the Datadog ecosystem, migration seemed daunting. It felt like we were trapped in their ecosystem because they handled *everything*, but that also meant learning to live with their constraints.
+
+---
+
+### 2. New Relic
+
+New Relic is known for its robust APM, RUM, and distributed tracing capabilities. It offers strong support for OpenTelemetry, allowing integration of custom telemetry data.
+
+Personal Experience: I worked at New Relic for a number of years, so I won’t go into what I learned behind the scenes. Speaking now from elsewhere in the observability industry, I’d say its users have very high standards for how integrated their performance data can be. Small teams and open source projects will never match the slick features of a mature dashboard and monitoring system with hundreds of engineers working to expand and maintain it.
+
+---
+
+### 3. Logz.io
+
+Logz.io combines the ELK Stack (Elasticsearch, Logstash, and Kibana) with Grafana, providing log management and visualization. It targets engineers familiar with open source tooling but who want the convenience of a managed service.
+
+Personal Experience: At a startup, we adopted Logz.io because it fit well with our existing ELK stack workflows. The integration was smooth, and their alerting system helped us catch critical API failures. However, as our log volume grew, the costs escalated, forcing us to revisit whether we should return to managing ELK ourselves. Overall the DIY solution had cheaper infrastructure costs, but way higher team overhead.
+
+---
+
+### 4. Sentry
+
+Sentry specializes in error tracking, offering detailed stack traces and insights into application crashes. It focuses on developer workflows, making it easy to identify and fix bugs.
+
+Personal Experience: We used Sentry extensively during a high-stakes product launch. It was a lifesaver for catching JavaScript errors on our front end and exceptions in our Node.js backend. One memorable instance was tracking down a browser-specific bug affecting a subset of users—Sentry pinpointed the exact issue within minutes. However, the volume of alerts sometimes led to fatigue, requiring us to fine-tune what constituted an actionable error.
+
+---
+
+### 5. Icinga
+
+Originally a fork of Nagios, Icinga offers monitoring for servers, networks, and applications. It is an open-core product, with its core technology available on GitHub.
+
+Personal Experience: This is the one tool on this list I’ve never installed or explored. It has a loyal Reddit following and clearly benefits from some knowledge transfer from Nagios.
+
+---
+
+### 6. Site24x7
+
+Site24x7 offers application, server, and website monitoring, positioning itself as an all-in-one solution for small-to-medium-sized organizations.
+
+Personal Experience: We briefly evaluated Site24x7 for a mid-sized SaaS platform. While the promise of a unified monitoring solution was appealing, the results fell short in every area—synthetic checks, uptime reports, and server monitoring all felt basic compared to more specialized tools. It seemed like Site24x7 was spread too thin trying to cover every use case, and it struggled to meet the expectations of a team with even modest experience in monitoring tools.
+
+---
+
+### 7. Raygun
+
+Raygun focuses on real user monitoring, crash reporting, and performance tracking. It’s tailored for front-end and mobile developers looking to improve user experience.
+
+Personal Experience: We relied on Raygun for monitoring a mobile app with a global user base. It excelled in surfacing crash reports and slow performance patterns tied to specific device types. One particular win was discovering a memory leak issue affecting older Android devices—it gave our team the data we needed to patch the issue quickly. However, integrating Raygun into our CI/CD pipeline required extra effort, as it wasn’t as seamless as some other tools.
+
+---
+
+### 8. AppDynamics
+
+AppDynamics, now part of Cisco, provides APM capabilities with a focus on deep-dive analytics and business transaction monitoring.
+
+Personal Experience: At an enterprise client, we implemented AppDynamics to monitor a sprawling microservices architecture. Its transaction-based insights helped us identify bottlenecks in our checkout flow, translating directly into improved conversion rates. However, navigating its interface felt cumbersome, and despite its advanced features, team adoption was slower than anticipated due to the tool’s complexity. It became a love-hate relationship—powerful but not always user-friendly.
+
+---
+
+### 9. IBM Instana
+
+Instana promises AI-driven insights and automatic monitoring for dynamic applications. Its focus is on reducing manual configuration by detecting and mapping dependencies in real time.
+
+Personal Experience: During an initiative to modernize a legacy application, Instana was pitched to us as the “next-generation AI observability solution.” While it delivered on automated dependency mapping, the AI insights often felt like little more than a repackaging of existing metrics with an added buzzword. It was helpful for identifying sudden resource spikes, but I often found myself relying on other tools for detailed root-cause analysis. The promises of “AI-driven” observability didn’t live up to the hype.
+
+## Open Source Monitoring: a key part of web application monitoring
+
+Open source monitoring plays a pivotal role in modern cloud-native web application monitoring by leveraging community-driven tools to provide robust, scalable, and accessible monitoring solutions. These tools empower organizations to manage the complexity of distributed systems without being locked into proprietary solutions.
+
+### Challenges in Cloud-Native Observability
+
+Cloud-native applications, by design, introduce new complexities:
+
+- Obfuscation: Dependencies across microservices, Kubernetes orchestration, and cloud-managed services obscure system behavior.
+- Dynamic Dependencies: The interactions among thousands of microservices, infrastructure layers, and APIs shift dynamically with scaling and updates.
+- Data Volume: High data granularity across logs, metrics, traces, and flows creates immense operational overhead to derive actionable insights.
+
+### Core Components of Open Source Monitoring
+
+1. Metrics Collection with Prometheus
+
+ Prometheus is the cornerstone of open source monitoring, providing time-series data collection and querying capabilities. With exporters like Node Exporter and Kubernetes State Metrics (KSM), it gathers metrics at both the system and application levels.
+
+2. Log Aggregation with Loki
+
+ Loki collects and queries logs efficiently, ensuring contextual insights alongside metrics. Integrated with Prometheus and Kubernetes, Loki enables rapid troubleshooting.
+
+3. Distributed Tracing with Jaeger
+
+ Jaeger offers a standardized approach to tracing requests across microservices, enabling visibility into service-to-service interactions and latency.
+
+4. Kernel-Level Observability with eBPF
+
+ The extended Berkeley Packet Filter (eBPF) collects real-time data on network flows and application behavior, bypassing traditional agents and minimizing performance overhead.
+
+
+Open source monitoring is not just a viable alternative to proprietary solutions—it is the backbone of modern observability. With tools like Prometheus, Loki, Jaeger, and eBPF, organizations can effectively monitor cloud-native applications, dynamically adjust to changing workloads, and achieve operational excellence. By embracing these technologies, teams can focus on delivering exceptional user experiences while keeping operational costs in check.
+
+## How Can Checkly Help with Web Application Monitoring
+
+[Checkly](https://www.checklyhq.com/) offers an integrated platform for synthetic monitoring and API testing, ensuring applications are functional and performant. For simulating user requests to perform synthetic monitoring, Checkly harnesses the power of the open source automation tool Playwright. With [Checkly and Playwright](https://www.checklyhq.com/docs/browser-checks/playwright-test/), there’s no user behavior that you can’t test, and no critical path you can’t monitor continually.
+
+## Conclusion
+
+Web application monitoring is essential for ensuring high performance, user satisfaction, and business success. With various tools and techniques available, businesses can proactively manage their applications and deliver exceptional user experiences.
diff --git a/site/layouts/partials/learn-breadcrumb.html b/site/layouts/partials/learn-breadcrumb.html
index d32d42ee9..12a20415c 100644
--- a/site/layouts/partials/learn-breadcrumb.html
+++ b/site/layouts/partials/learn-breadcrumb.html
@@ -18,6 +18,9 @@
{{ if eq $element "kubernetes" }}
Kubernetes Monitoring Guide
{{ end }}
+ {{ if eq $element "monitoring" }}
+ Monitoring Guide
+ {{ end }}
{{ else }}
/
{{ if eq $index 2 }}
diff --git a/site/static/learn/icons/monitoring.svg b/site/static/learn/icons/monitoring.svg
new file mode 100644
index 000000000..34556dff9
--- /dev/null
+++ b/site/static/learn/icons/monitoring.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/src/scss/_learn.scss b/src/scss/_learn.scss
index 474c23c35..bb2193d18 100644
--- a/src/scss/_learn.scss
+++ b/src/scss/_learn.scss
@@ -462,6 +462,7 @@
padding: 20px 24px;
border-radius: 6px;
border: 1px solid $gray-light;
+ margin-bottom: 20px;
.text-wrap {
width: 100%;