A Coding Implementation of Advanced PyTest to Build Customized and Automated Testing with Plugins, Fixtures, and JSON Reporting

In this tutorial, we explore the advanced capabilities of PyTest, one of the most powerful testing frameworks in Python. We build a complete mini-project from scratch that demonstrates fixtures, markers, plugins, parameterization, and custom configuration. We focus on showing how PyTest can evolve from a simple test runner into a powerful, extensible system for real-world applications. By the end, we understand not just how to write tests, but how to control and customize PyTest's behavior to match any project's needs. Check out the FULL CODES here.
import sys, subprocess, os, textwrap, pathlib, json
subprocess.run([sys.executable, "-m", "pip", "install", "-q", "pytest>=8.0"], check=True)
root = pathlib.Path("pytest_advanced_tutorial").absolute()
if root.exists():
    import shutil; shutil.rmtree(root)
(root / "calc").mkdir(parents=True)
(root / "app").mkdir()
(root / "tests").mkdir()
We begin by setting up the environment, importing essential Python libraries for file handling and subprocess execution. We install the latest version of PyTest to ensure compatibility and then create a clean project structure with folders for our main code, utility modules, and tests. This gives us a solid foundation to organize everything neatly before writing any test logic. Check out the FULL CODES here.
(root / "pytest.ini").write_text(textwrap.dedent("""
[pytest]
addopts = -q -ra --maxfail=1 -m "not sluggish"
testpaths = exams
markers =
sluggish: sluggish exams (use --runslow to run)
io: exams hitting the file system
api: exams patching exterior calls
""").strip()+"n")
(root / "conftest.py").write_text(textwrap.dedent(r'''
import os, time, pytest, json
def pytest_addoption(parser):
parser.addoption("--runslow", motion="store_true", assist="run sluggish exams")
def pytest_configure(config):
config.addinivalue_line("markers", "sluggish: sluggish exams")
config._summary = {"handed":0,"failed":0,"skipped":0,"slow_ran":0}
def pytest_collection_modifyitems(config, gadgets):
if config.getoption("--runslow"):
return
skip = pytest.mark.skip(cause="want --runslow to run")
for merchandise in gadgets:
if "sluggish" in merchandise.key phrases: merchandise.add_marker(skip)
def pytest_runtest_logreport(report):
cfg = report.config._summary
if report.when=="name":
if report.handed: cfg["passed"]+=1
elif report.failed: cfg["failed"]+=1
elif report.skipped: cfg["skipped"]+=1
if "sluggish" in report.key phrases and report.handed: cfg["slow_ran"]+=1
def pytest_terminal_summary(terminalreporter, exitstatus, config):
s=config._summary
terminalreporter.write_sep("=", "SESSION SUMMARY (customized plugin)")
terminalreporter.write_line(f"Passed: {s['passed']} | Failed: {s['failed']} | Skipped: {s['skipped']}")
terminalreporter.write_line(f"Slow exams run: {s['slow_ran']}")
terminalreporter.write_line("PyTest completed efficiently
" if s["failed"]==0 else "Some exams failed
")
@pytest.fixture(scope="session")
def settings(): return {"env":"prod","max_retries":2}
@pytest.fixture(scope="operate")
def event_log(): logs=[]; yield logs; print("nEVENT LOG:", logs)
@pytest.fixture
def temp_json_file(tmp_path):
p=tmp_path/"knowledge.json"; p.write_text('{"msg":"hello"}'); return p
@pytest.fixture
def fake_clock(monkeypatch):
t={"now":1000.0}; monkeypatch.setattr(time,"time",lambda: t["now"]); return t
'''))
We now create our PyTest configuration and plugin files. In pytest.ini, we define markers, default options, and test paths to control how tests are discovered and filtered. In conftest.py, we implement a custom plugin that tracks passed, failed, and skipped tests, adds a --runslow option, and provides fixtures for reusable test resources. This lets us extend PyTest's core behavior while keeping our setup clean and modular. Check out the FULL CODES here.
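As a quick illustration of how these fixtures are consumed, any test can simply name a fixture as a parameter and PyTest injects it automatically. The following hypothetical test is not part of the generated project; it is a minimal sketch assuming the settings and fake_clock fixtures defined in conftest.py above.
# Hypothetical example only; relies on the settings and fake_clock fixtures from conftest.py.
import time

def test_retry_budget(settings, fake_clock):
    # settings is created once per session and shared across tests
    assert settings["max_retries"] >= 1
    # fake_clock replaces time.time(), so simulated waiting is deterministic
    before = time.time()
    fake_clock["now"] += 5.0
    assert time.time() - before == 5.0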
(root/"calc"/"__init__.py").write_text(textwrap.dedent('''
from .vector import Vector
def add(a,b): return a+b
def div(a,b):
if b==0: elevate ZeroDivisionError("division by zero")
return a/b
def moving_avg(xs,okay):
if okay<=0 or okay>len(xs): elevate ValueError("unhealthy window")
out=[]; s=sum(xs[:k]); out.append(s/okay)
for i in vary(okay,len(xs)):
s+=xs[i]-xs[i-k]; out.append(s/okay)
return out
'''))
(root/"calc"/"vector.py").write_text(textwrap.dedent('''
class Vector:
__slots__=("x","y","z")
def __init__(self,x=0,y=0,z=0): self.x,self.y,self.z=float(x),float(y),float(z)
def __add__(self,o): return Vector(self.x+o.x,self.y+o.y,self.z+o.z)
def __sub__(self,o): return Vector(self.x-o.x,self.y-o.y,self.z-o.z)
def __mul__(self,s): return Vector(self.x*s,self.y*s,self.z*s)
__rmul__=__mul__
def norm(self): return (self.x**2+self.y**2+self.z**2)**0.5
def __eq__(self,o): return abs(self.x-o.x)<1e-9 and abs(self.y-o.y)<1e-9 and abs(self.z-o.z)<1e-9
def __repr__(self): return f"Vector({self.x:.2f},{self.y:.2f},{self.z:.2f})"
'''))
We now build the core calculation module for our project. In the calc package, we define simple mathematical utilities, including addition, division with error handling, and a moving-average function, to demonstrate logic testing. Alongside this, we create a Vector class that supports arithmetic operations, equality checks, and norm computation, an ideal example for testing custom objects and comparisons with PyTest. Check out the FULL CODES here.
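Before moving on, here is a small illustrative test, not one of the files generated in this project, showing how the Vector class above could be checked with pytest.approx, which keeps floating-point comparisons readable for scalar results such as norm().
# Illustrative sketch only; it exercises the Vector class defined above.
import pytest
from calc.vector import Vector

def test_norm_and_scaling():
    v = Vector(3, 4, 0)
    assert v.norm() == pytest.approx(5.0)          # approx avoids brittle float equality
    assert (2 * v).norm() == pytest.approx(10.0)   # __rmul__ lets the scalar appear on the left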
(root/"app"/"io_utils.py").write_text(textwrap.dedent('''
import json, pathlib, time
def save_json(path,obj):
path=pathlib.Path(path); path.write_text(json.dumps(obj)); return path
def load_json(path): return json.hundreds(pathlib.Path(path).read_text())
def timed_operation(fn,*a,**kw):
t0=time.time(); out=fn(*a,**kw); t1=time.time(); return out,t1-t0
'''))
(root/"app"/"api.py").write_text(textwrap.dedent('''
import os, time, random
def fetch_username(uid):
if os.environ.get("API_MODE")=="offline": return f"cached_{uid}"
time.sleep(0.001); return f"user_{uid}_{random.randint(100,999)}"
'''))
(root/"exams"/"test_calc.py").write_text(textwrap.dedent('''
import pytest, math
from calc import add,div,moving_avg
from calc.vector import Vector
@pytest.mark.parametrize("a,b,exp",[(1,2,3),(0,0,0),(-1,1,0)])
def test_add(a,b,exp): assert add(a,b)==exp
@pytest.mark.parametrize("a,b,exp",[(6,3,2),(8,2,4)])
def test_div(a,b,exp): assert div(a,b)==exp
@pytest.mark.xfail(raises=ZeroDivisionError)
def test_div_zero(): div(1,0)
def test_avg(): assert moving_avg([1,2,3,4,5],3)==[2,3,4]
def test_vector_ops(): v=Vector(1,2,3)+Vector(4,5,6); assert v==Vector(5,7,9)
'''))
(root/"exams"/"test_io_api.py").write_text(textwrap.dedent('''
import pytest, os
from app.io_utils import save_json,load_json,timed_operation
from app.api import fetch_username
@pytest.mark.io
def test_io(temp_json_file,tmp_path):
d={"x":5}; p=tmp_path/"a.json"; save_json(p,d); assert load_json(p)==d
assert load_json(temp_json_file)=={"msg":"hello"}
def test_timed(capsys):
val,dt=timed_operation(lambda x:x*3,7); print("dt=",dt); out=capsys.readouterr().out
assert "dt=" in out and val==21
@pytest.mark.api
def test_api(monkeypatch):
monkeypatch.setenv("API_MODE","offline")
assert fetch_username(9)=="cached_9"
'''))
(root/"exams"/"test_slow.py").write_text(textwrap.dedent('''
import time, pytest
@pytest.mark.sluggish
def test_slow(event_log,fake_clock):
event_log.append(f"begin@{fake_clock['now']}")
fake_clock["now"]+=3.0
event_log.append(f"finish@{fake_clock['now']}")
assert len(event_log)==2
'''))
We add lightweight app utilities for JSON I/O and a mocked API to exercise real-world behaviors without external services. We write focused tests that use parametrization, xfail, markers, tmp_path, capsys, and monkeypatch to validate logic and side effects. We include a slow test wired to our event_log and fake_clock fixtures to demonstrate controlled timing and shared fixture state. Check out the FULL CODES here.
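Because every test carries a marker, subsets can be selected with a -m expression just as the full suite is run below. These extra invocations are only an illustration, not part of the tutorial's run; they assume the same root path created above and select the io-marked tests first, then everything except the api tests.
# Illustrative marker selection; assumes root, sys, and subprocess from the setup cell above.
subprocess.run([sys.executable, "-m", "pytest", str(root), "-m", "io"], text=True)
subprocess.run([sys.executable, "-m", "pytest", str(root), "-m", "not api"], text=True)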
print("
Project created at:", root)
print("n
RUN #1 (default, skips @sluggish)n")
r1=subprocess.run([sys.executable,"-m","pytest",str(root)],textual content=True)
print("n
RUN #2 (--runslow)n")
r2=subprocess.run([sys.executable,"-m","pytest",str(root),"--runslow"],textual content=True)
summary_file=root/"abstract.json"
abstract={
"total_tests":sum("test_" in str(p) for p in root.rglob("test_*.py")),
"runs": ["default","--runslow"],
"outcomes": ["success" if r1.returncode==0 else "fail",
"success" if r2.returncode==0 else "fail"],
"contains_slow_tests": True,
"example_event_log":["[email protected]","[email protected]"]
}
summary_file.write_text(json.dumps(abstract,indent=2))
print("n
FINAL SUMMARY")
print(json.dumps(abstract,indent=2))
print("n
Tutorial accomplished — all exams & abstract generated efficiently.")
We now run our test suite twice: first with the default configuration that skips slow tests, and then again with the --runslow flag to include them. After both runs, we generate a JSON summary containing the test results, the total number of test files, and a sample event log. This final summary gives us a clear snapshot of the project's testing health, confirming that every component works from start to finish.
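Since summary.json is plain JSON, a downstream CI step can consume it directly. The snippet below is a minimal sketch of such a gate; the pytest_advanced_tutorial/summary.json path matches the project created above, but the gating logic itself is an assumption rather than part of the tutorial.
# Minimal CI-gate sketch: fail the pipeline if either recorded run was unsuccessful.
import json, pathlib, sys

data = json.loads(pathlib.Path("pytest_advanced_tutorial/summary.json").read_text())
if any(result != "success" for result in data["results"]):
    sys.exit("summary.json reports a failing run")
print("All recorded runs passed:", data["results"])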
In conclusion, we see how PyTest helps us test smarter, not harder. We design a plugin that tracks results, use fixtures for state management, and control slow tests with a custom option, all while keeping the workflow clean and modular. We end with a detailed JSON summary that shows how easily PyTest can integrate with modern CI and analytics pipelines. With this foundation, we are now ready to extend PyTest further, combining coverage, benchmarking, and even parallel execution for large-scale, professional-grade testing.