
Feature #65: track metrics of failed tests #82

Open
wants to merge 35 commits into master

Conversation


@lhpt2 lhpt2 commented Jun 18, 2024

Description

Fixes issue #65.

DEPENDS ON: #79 (#80), #78

Several people have asked for pytest-monitor to also monitor failing tests. This PR includes the changes necessary for this feature.

Overview of the changes:

  • Extending the TEST_METRICS database table with a TEST_PASSED column, so that logs of failing and passing tests are distinguishable in the database
  • Setting a "passed" attribute that is handed to the add_test_info() call, indicating whether the test passed (for the database entry)
  • Moving the memory profiler into its own module inside the codebase (the upstream project is unmaintained) and making the changes necessary to report the memory usage of functions that raise exceptions [see next section]
  • Adding logic for auto-migrating old databases to include the new column
  • Adding a command-line option to disable monitoring of failed tests; monitoring failed tests is the default
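The schema extension and auto-migration described above can be sketched roughly as follows. The table and column names come from this PR; the helper function itself is illustrative, not the PR's actual code:

```python
import sqlite3


def ensure_test_passed_column(con: sqlite3.Connection) -> None:
    """Add the TEST_PASSED column to TEST_METRICS if an older database
    does not have it yet. Idempotent, so it is safe to run on every
    plugin startup (sketch of the auto-migration idea, not the PR's code)."""
    cols = [row[1] for row in con.execute("PRAGMA table_info(TEST_METRICS)")]
    if "TEST_PASSED" not in cols:
        # Rows written by older versions predate the column; default them
        # to "passed", since only passing tests were recorded back then.
        con.execute(
            "ALTER TABLE TEST_METRICS ADD COLUMN TEST_PASSED BOOLEAN DEFAULT 1"
        )
        con.commit()
```

Checking `PRAGMA table_info` first keeps the migration a no-op on databases that already have the column.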

The memory profiler change

In order to get the memory usage of failing functions, the memory_profiler module had to be changed. The first approach was to open a PR on the memory_profiler project, but that project is marked as unmaintained.

My options were to

  1. use another profiler instead, or
  2. simplify the profiler logic and include it in the pytest-monitor plugin itself.

I went with the second option in order to avoid big dependencies like Xray, which would furthermore have required socket-based communication. This is why I introduced a new module, profiler.py, into the codebase, which is responsible for memory profiling (by making use of the Process class from psutil).
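The core of the simplified profiler can be sketched as follows. The PR samples RSS via psutil's Process class; this illustrative sketch uses the stdlib tracemalloc instead so it stays self-contained, but the key point is the same: catch BaseException so a measurement is still produced when the test fails, and hand the exception back to the caller instead of swallowing it inside the profiler. The function name is hypothetical, not the PR's actual API:

```python
import tracemalloc


def profile_memory(func, *args, **kwargs):
    """Run func and report peak allocated memory even when it raises.

    Returns (result, exception, peak_bytes). Catching BaseException
    (not just Exception) ensures pytest assertion failures and other
    outcomes still yield a measurement; the caller can then re-raise.
    """
    tracemalloc.start()
    result, exc = None, None
    try:
        result = func(*args, **kwargs)
    except BaseException as e:  # includes AssertionError, KeyboardInterrupt
        exc = e
    peak = tracemalloc.get_traced_memory()[1]
    tracemalloc.stop()
    return result, exc, peak
```

Returning the exception as a value lets the plugin record the metrics first and propagate the failure afterwards, which is the workaround this PR's commits describe.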

Type of change

  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes (not just the CI)
  • Any dependent changes have been merged and published in downstream modules
  • I have provided a link to the issue this PR addresses in the Description section above (if there is none yet, create one!)
  • I have updated the changelog
  • I have labeled my PR using appropriate tags (in particular using status labels like Status: Code Review Needed, Business: Test Needed or Status: In Progress if you are still working on the PR)

@js-dieu I decided to propose my changes independently of what the original plans were; maybe it is of help, maybe not. Either way, this is my take on the issue.

Lucas Haupt and others added 29 commits May 8, 2024 14:42
disabled (--no-monitor cmd flag)

changelog updated in this commit too

The issue was an exception not being raised when calling
wrapped_function while monitoring is disabled (line 216).
The raise is now handled in the following lines (218ff.).
function to always return memory usage no matter if function throws
exception or not
column TEST_PASSED (type boolean) to TEST_METRICS table to indicate
if the test being logged passed.
column TEST_PASSED (type boolean) to TEST_METRICS table to indicate
if the test being logged passed.

Testing: Add proper test to test successful database migration
…rt/pytest-monitor into feature/track-metrics-of-failed
…rt/pytest-monitor into feature/track-metrics-of-failed

Seems like an empty merge with no changes; it had to be done to be
able to push new commits upstream.
…rt/pytest-monitor into feature/track-metrics-of-failed
…onfig on Initialization of DBHandler (test if new column existent)
for Bitbucket CI. (Part of session or execution context table)
Updates the sqlite test functions to the newest versions that are
also used in the PostgresDBHandler branch.
Merge newest main branch version into this branch
Add hint in profiler.py to explain why only the recently created
MemTimer is being killed on failed tests throwing an exception.
The documentation files are updated to include the newest changes.
@lhpt2 lhpt2 changed the title Feature: track metrics of failed tests Feature #65: track metrics of failed tests Jun 18, 2024
@lhpt2 lhpt2 marked this pull request as draft June 18, 2024 15:37
@lhpt2 lhpt2 marked this pull request as ready for review June 18, 2024 15:39
Lucas Haupt added 4 commits July 3, 2024 15:49
Returning exceptions from inside wrapped_function() can lead to
assertions being ignored; therefore they need to be raised immediately,
and parent calls need to wrap the call in a try/except block.
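A minimal sketch of the pattern this commit describes (function names are illustrative, not the PR's actual API): the inner wrapper re-raises immediately so pytest still sees a failed assertion, while the parent call records the pass/fail outcome before propagating.

```python
def wrapped_function(func, *args, **kwargs):
    """Re-raise instantly instead of returning the exception as a value;
    returning it would make pytest report a failing assertion as passed."""
    try:
        return func(*args, **kwargs)
    except BaseException:
        raise  # propagate immediately; the parent records metrics


def monitored_call(func, metrics):
    """Parent call: wrap the invocation in try/except so the outcome is
    still recorded for failing tests before the exception reaches pytest."""
    try:
        result = wrapped_function(func)
        metrics["passed"] = True
        return result
    except BaseException:
        metrics["passed"] = False
        raise
```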
Merge refactored bugfix 79 into track-metrics-of-failed because
it is dependent on this bugfix.
In order to handle all exceptions of a failing test, catch
exceptions with BaseException in profiler.py.
lhpt2 pushed a commit to einhundert/pytest-monitor that referenced this pull request Jul 15, 2024
The memory profiler only reports instances of Exception when it should
report all exceptions that inherit from BaseException.

This is ultimately reworked in a PR incorporating a simplified
memory profiler module into the codebase that fixes not only this issue
but also gives the possibility of getting measurements for failing
tests.

This workaround uses return values instead. This way pytest-monitor
can check for BaseException instances and act accordingly.
As described earlier, the ultimate goal should probably be to replace
the whole memory profiler, as proposed in this PR:
CFMTech#82