The performance overhead of test coverage is insane - 5 times slower on python code - REPRO ATTACHED #1916
Thanks for the code and timings. The performance of coverage is dependent on many things, so it can be hard to isolate a single cause for slowness, and even harder to improve. I'm not sure when I'll be able to dig into this to understand if there is a way to make it faster.
Hello, thanks for getting back to me. I was trying to do a bit of debugging. I've read some of your articles on writing coverage.py; nice blog, by the way. :)
Doing a run with debug stats.
Doing a debug run with the tracer. It's apparent that a lot of the time is spent invoking coverage for the same lines of code in the loop.
Doing a run with
Thanks for digging into it. Unfortunately, we don't have a way to turn off the trace function per-line, and we don't know when an entire function has been traced, though that could be an interesting area for exploration. If you can run on 3.12, and are only doing line coverage, not branch coverage, try the sysmon core; it should be much more efficient.
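As a concrete starting point (based on my reading of the coverage.py docs, so treat the exact invocation as an assumption rather than a confirmed recipe): on Python 3.12+ with a recent coverage (7.4 or later), the sys.monitoring core can be selected with the COVERAGE_CORE environment variable, roughly like this:

```sh
# Sketch only: assumes Python 3.12+ and coverage 7.4+.
# As noted above, this helps line coverage, not branch coverage.
COVERAGE_CORE=sysmon coverage run -m pytest test_prime.py
coverage report
```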
Describe the bug
Hello, I was running some benchmarks of my Python codebase on different hardware.
It turns out there were massive unexplained differences, up to 10 times slower in some environments, while the difference between the older and newer CPUs should only account for about a 2-3x slowdown.
After many, many days of debugging, I finally found the root cause: the issue appears when running tests with code coverage.
Running some Python tests with branch coverage is 5 times slower, or 3 times slower without branch coverage.
To Reproduce
Tests are running with the CTracer. (I saw the other bug tickets where you said to check that first. :D)
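For anyone who wants to double-check which tracer is active, one option (as far as I know; the exact output wording varies between coverage versions) is coverage's debug command:

```sh
# Dumps coverage's view of the environment; look for the tracer/CTracer lines
# to confirm the C tracer (rather than the pure-Python tracer) is in use.
python -m coverage debug sys
```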
CLOSE TO LATEST
python 3.11.9
pytest 7.4.4 (latest last year; there is an 8.x now)
coverage 7.2.7 (same result on 7.6.10 latest)
OLDER PYTHON FOR COMPARISON
python 3.8.12
pytest 3.10.1
coverage 5.0.3
Source code
test_empty.py
is an empty test like def test_empty(): pass
test_multiplication.py
is a single math operation: def test_multiplication(): 9**10_000_000
test_prime.py
is a Python script (below) to compute prime numbers up to N.
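The original attached script is not reproduced here. Purely as an illustration of the kind of loop-heavy, pure-Python code involved (the real script, its algorithm, and its N may differ), a test of that shape might look like:

```python
# Hypothetical stand-in for the attached test_prime.py (not the original repro):
# a loop-heavy, pure-Python workload where the tracer fires on every line.

def primes_up_to(n):
    """Return all primes <= n by simple trial division."""
    primes = []
    for candidate in range(2, n + 1):
        is_prime = True
        for p in primes:
            if p * p > candidate:
                break
            if candidate % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(candidate)
    return primes

def test_prime():
    # N is made up here; the real attachment may use a different value.
    assert len(primes_up_to(100_000)) == 9592
```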
Actual behavior
Best of 5 runs. The command is basically
<coveragewithoptions> pytest <testfile>
Full test command with hyperfine.
Expected behavior
Obviously, coverage should add as little overhead as possible.
On the operation 9**N, the overhead is very small; that's fine.
On the Python code to compute prime numbers up to N, the overhead is insane: as much as 5 times with branch coverage enabled.
Even if you disable code coverage on the test file with --omit=tests/*, the overhead is still 3 times, which makes no sense! What could possibly explain that?
Additional context
The pure pytest run without coverage takes 5.695 seconds on Python 3.8 vs 4.821 seconds on Python 3.11.
With code coverage the runtime is massively increased, and it actually increases to roughly the same duration on both, despite these being totally different versions with totally different interpreters, which I find extremely odd!
Is it possible that there is a massive fixed overhead with code coverage?
Maybe important to know: Python 3.11+ is 10%-30% faster on most operations, and Python 3.11 is 4 times faster on tight loops and list comprehensions.
If code coverage has some fixed overhead around loops, that could have a massive effect on recent Python versions.
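As a way to sanity-check that hypothesis (a sketch of my own, not part of the original measurements; the workload and iteration count are arbitrary), the overhead on a tight loop can be measured directly with coverage's Python API:

```python
# Minimal sketch: time a tight pure-Python loop with and without coverage.
# The loop body and N are arbitrary; only the ratio of the timings matters.
import time
import coverage

def tight_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(label, func, *args):
    start = time.perf_counter()
    func(*args)
    print(f"{label}: {time.perf_counter() - start:.3f}s")

N = 5_000_000

timed("no coverage", tight_loop, N)

cov = coverage.Coverage(branch=True)  # branch=True mirrors the slower configuration
cov.start()
timed("with coverage (branch)", tight_loop, N)
cov.stop()
```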