A Coding Guide to Markerless 3D Human Kinematics with Pose2Sim, RTMPose, and OpenSim
In this tutorial, we build and run a complete Pose2Sim pipeline on Colab to see how markerless 3D kinematics works in practice. We begin with environment setup, configure the project for Colab’s headless runtime, and then walk through calibration, 2D pose estimation, synchronization, person association, triangulation, filtering, marker augmentation, and OpenSim-based kinematics. As we progress, we not only execute each stage of the workflow but also inspect the generated outputs, visualize trajectories and joint angles, and learn how each component contributes to turning raw multi-camera videos into meaningful biomechanical motion data.
import subprocess, sys, os

def run(cmd, desc=""):
    """Run a shell command with live output."""
    print(f"\n{'='*60}")
    print(f" {desc}" if desc else f" Running: {cmd}")
    print(f"{'='*60}")
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if result.stdout:
        lines = result.stdout.strip().split('\n')
        for line in lines[-15:]:
            print(line)
    if result.returncode != 0:
        print(f"\nWarning (exit code {result.returncode}): {result.stderr[-500:] if result.stderr else 'unknown error'}")
    return result.returncode

run("pip install -q pose2sim", "Installing Pose2Sim (includes RTMPose, filtering, etc.)")

opensim_available = False
try:
    run("pip install -q opensim", "Installing OpenSim Python bindings")
    import opensim
    opensim_available = True
    print(f"\nOpenSim {opensim.__version__} installed successfully!")
except Exception as e:
    print(f"\nOpenSim could not be installed ({e}).")
    print("  The tutorial will run all steps EXCEPT kinematics.")
    print("  For full kinematics, use a local conda environment instead.")

gpu_returncode = run("nvidia-smi --query-gpu=name,memory.total --format=csv,noheader", "Checking GPU availability")
if gpu_returncode != 0:
    print("\nNo GPU detected. Pose estimation will run on CPU (slower).")
    print("  Tip: Runtime → Change runtime type → T4 GPU")
else:
    print("\nGPU detected! Pose estimation will be accelerated.")

try:
    import torch
    print(f"\nPyTorch {torch.__version__} | CUDA available: {torch.cuda.is_available()}")
except ImportError:
    print("\nPyTorch not found (pose estimation will use ONNX Runtime directly)")

print("\nInstallation complete!")
import shutil
from pathlib import Path
import Pose2Sim

pkg_path = Path(Pose2Sim.__file__).parent
print(f"\nPose2Sim installed at: {pkg_path}")

demo_src = pkg_path / "Demo_SinglePerson"
work_dir = Path("/content/Pose2Sim_Tutorial")
if work_dir.exists():
    shutil.rmtree(work_dir)
shutil.copytree(demo_src, work_dir)
print(f"\nDemo copied to: {work_dir}")

print("\nPROJECT STRUCTURE:")
print("=" * 60)
for p in sorted(work_dir.rglob("*")):
    depth = len(p.relative_to(work_dir).parts)
    indent = "  " * depth
    if p.is_dir():
        print(f"{indent}{p.name}/")
    elif depth <= 3:
        size_kb = p.stat().st_size / 1024
        print(f"{indent}{p.name} ({size_kb:.1f} KB)")

print("""
╔══════════════════════════════════════════════════════════╗
║ KEY DIRECTORIES:                                         ║
║   calibration/ → Camera calibration files                ║
║   videos/      → Raw input videos from each camera       ║
║   pose/        → 2D pose estimation results (auto-gen)   ║
║   pose-3d/     → 3D triangulated coordinates (.trc)      ║
║   kinematics/  → OpenSim joint angles (.mot)             ║
║   Config.toml  → ALL pipeline parameters                 ║
╚══════════════════════════════════════════════════════════╝
""")
import toml

config_path = work_dir / "Config.toml"
config = toml.load(config_path)

print("\nKEY CONFIGURATION PARAMETERS:")
print("=" * 60)
sections_of_interest = ['project', 'pose', 'calibration', 'synchronization',
                        'triangulation', 'filtering', 'markerAugmentation']
for section in sections_of_interest:
    if section in config:
        print(f"\n[{section}]")
        for key, val in list(config[section].items())[:6]:
            print(f"  {key} = {val}")
        if len(config[section]) > 6:
            print(f"  ... and {len(config[section]) - 6} more parameters")

print("\n\nADAPTING CONFIG FOR GOOGLE COLAB (headless)...")
print("-" * 60)
changes = {}
if 'synchronization' in config:
    config['synchronization']['synchronization_gui'] = False
    changes['synchronization.synchronization_gui'] = False
if 'pose' in config:
    config['pose']['display_detection'] = False
    changes['pose.display_detection'] = False
    config['pose']['mode'] = 'balanced'
    changes['pose.mode'] = 'balanced'
    config['pose']['save_video'] = 'none'
    changes['pose.save_video'] = 'none'
    config['pose']['det_frequency'] = 50
    changes['pose.det_frequency'] = 50
if 'filtering' in config:
    config['filtering']['display_figures'] = False
    changes['filtering.display_figures'] = False
if 'triangulation' in config:
    config['triangulation']['undistort_points'] = False
    changes['triangulation.undistort_points'] = False
    config['triangulation']['handle_LR_swap'] = False
    changes['triangulation.handle_LR_swap'] = False
if 'kinematics' in config:
    config['kinematics']['use_simple_model'] = True
    changes['kinematics.use_simple_model'] = True

with open(config_path, 'w') as f:
    toml.dump(config, f)

for k, v in changes.items():
    print(f"  {k} → {v}")
print("\nConfig.toml updated for Colab execution!")
We set up the full Pose2Sim environment in Google Colab and make sure the required dependencies are available for the pipeline. We install Pose2Sim, optionally test OpenSim support, check GPU availability, and then copy the demo project into a working directory so we can operate on a clean example dataset. We also inspect and modify the configuration file to ensure the workflow runs smoothly in Colab’s headless environment, with settings that balance speed, stability, and usability.
import os
import time
import warnings
import matplotlib
matplotlib.use('Agg')

os.chdir(work_dir)
print(f"\nWorking directory: {os.getcwd()}")

from Pose2Sim import Pose2Sim

step_times = {}

print("\n" + "█" * 60)
print("█ STEP 1/8: CAMERA CALIBRATION")
print("█" * 60)
print("""
Calibration converts camera parameters (intrinsics + extrinsics)
into a unified Pose2Sim format.
The demo uses 'convert' mode (pre-existing Qualisys calibration).
For your own data, you can:
  - Convert from Qualisys, Vicon, OpenCap, AniPose, etc.
  - Calculate from scratch using a checkerboard
""")
t0 = time.time()
try:
    Pose2Sim.calibration()
    step_times['calibration'] = time.time() - t0
    print(f"\nCalibration complete! ({step_times['calibration']:.1f}s)")
    calib_file = work_dir / "calibration" / "Calib.toml"
    if calib_file.exists():
        calib_data = toml.load(calib_file)
        cam_names = [k for k in calib_data.keys() if k.startswith('cam')]
        print(f"\nCameras found: {len(cam_names)} → {cam_names}")
        for cam in cam_names[:2]:
            print(f"  Camera '{cam}':")
            if 'matrix' in calib_data[cam]:
                print(f"    Intrinsic matrix (3x3): ✓")
            if 'rotation' in calib_data[cam]:
                print(f"    Rotation vector: {calib_data[cam]['rotation']}")
            if 'translation' in calib_data[cam]:
                print(f"    Translation (m): {calib_data[cam]['translation']}")
except Exception as e:
    print(f"\nCalibration warning: {e}")
    step_times['calibration'] = time.time() - t0
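The intrinsic matrix, rotation, and translation printed from Calib.toml are exactly the quantities of a pinhole camera model. As a minimal sketch of what those parameters mean (all values here are synthetic, not taken from the demo calibration), projecting a 3D world point into pixel coordinates looks like:

```python
import numpy as np

# Synthetic pinhole parameters (illustrative only, not the demo's calibration)
K = np.array([[1000.0,    0.0, 960.0],   # fx, skew, cx
              [   0.0, 1000.0, 540.0],   # fy, cy
              [   0.0,    0.0,   1.0]])  # intrinsic matrix
theta = np.deg2rad(10)                   # small rotation about the y axis
R = np.array([[ np.cos(theta), 0, np.sin(theta)],
              [ 0,             1, 0            ],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.1, 0.0, 2.0])            # camera translation (m)

def project(X, K, R, t):
    """Project a 3D world point into pixels: u ~ K (R X + t)."""
    x_cam = R @ X + t        # world frame -> camera frame
    uvw = K @ x_cam          # camera frame -> homogeneous image coords
    return uvw[:2] / uvw[2]  # perspective divide

uv = project(np.array([0.0, 0.0, 1.0]), K, R, t)
print(uv)  # pixel coordinates of the point in this synthetic camera
```

Calibration (whether converted from Qualisys or computed from a checkerboard) is just the process of recovering K, R, and t for every camera; triangulation later inverts this mapping.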
print("\n" + "█" * 60)
print("█ STEP 2/8: 2D POSE ESTIMATION (RTMPose)")
print("█" * 60)
print("""
RTMPose detects 2D body keypoints in each video frame.
  - Model: RTMPose (balanced mode) with YOLOX detection
  - Body model: Body_with_feet (HALPE_26 keypoints)
  - Output: JSON files per frame, per camera
  - GPU acceleration used if available
For custom models (animal, hand, face), set `mode` in Config.toml
to a dictionary with custom model URLs (see README).
""")
t0 = time.time()
try:
    Pose2Sim.poseEstimation()
    step_times['poseEstimation'] = time.time() - t0
    print(f"\nPose estimation complete! ({step_times['poseEstimation']:.1f}s)")
    pose_dirs = list((work_dir / "pose").glob("*_json"))
    for pd in pose_dirs:
        n_files = len(list(pd.glob("*.json")))
        print(f"  {pd.name}: {n_files} JSON frames generated")
except Exception as e:
    print(f"\nPose estimation error: {e}")
    step_times['poseEstimation'] = time.time() - t0
print("\n" + "█" * 60)
print("█ STEP 3/8: CAMERA SYNCHRONIZATION")
print("█" * 60)
print("""
Synchronization aligns frames across cameras by correlating
vertical keypoint speeds. This finds the time offset between cameras.
Skip this step if your cameras are hardware-synchronized
(e.g., via timecode, trigger cable, or GoPro GPS sync).
""")
t0 = time.time()
try:
    Pose2Sim.synchronization()
    step_times['synchronization'] = time.time() - t0
    print(f"\nSynchronization complete! ({step_times['synchronization']:.1f}s)")
except Exception as e:
    print(f"\nSynchronization note: {e}")
    step_times['synchronization'] = time.time() - t0
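The correlation-based offset search described above can be illustrated with a toy example in plain NumPy (this is a sketch of the idea, not Pose2Sim's internal implementation): shift one camera's "vertical speed" signal against the other and keep the lag with the highest correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 200)
# Vertical keypoint speed as seen by camera 1 (synthetic motion + noise)
speed_cam1 = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
true_offset = 7                              # camera 2 lags camera 1 by 7 frames
speed_cam2 = np.roll(speed_cam1, true_offset)

def best_lag(a, b, max_lag=20):
    """Return the frame offset of b relative to a that maximizes correlation."""
    lags = list(range(-max_lag, max_lag + 1))
    scores = [np.dot(a, np.roll(b, -lag)) for lag in lags]
    return lags[int(np.argmax(scores))]

print(best_lag(speed_cam1, speed_cam2))  # recovers the 7-frame offset
```

Pose2Sim does this per keypoint and per camera pair on the real 2D detections, which is why a brief, clearly visible vertical movement (e.g., a jump) at the start of a recording makes synchronization more robust.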
print("\n" + "█" * 60)
print("█ STEP 4/8: PERSON ASSOCIATION")
print("█" * 60)
print("""
Associates detected persons across different camera views.
  - Single-person mode: picks the person with the lowest reprojection error
  - Multi-person mode: uses epipolar geometry to match people
For this single-person demo, it selects the best-matching person.
""")
t0 = time.time()
try:
    Pose2Sim.personAssociation()
    step_times['personAssociation'] = time.time() - t0
    print(f"\nPerson association complete! ({step_times['personAssociation']:.1f}s)")
except Exception as e:
    print(f"\nPerson association note: {e}")
    step_times['personAssociation'] = time.time() - t0
We begin running the core Pose2Sim pipeline step by step to clearly understand what happens at each stage. We start with camera calibration, then perform 2D pose estimation with RTMPose, and proceed into camera synchronization and person association to prepare the data for 3D reconstruction. As we execute these stages, we also inspect the outputs and timing information to connect the computation to the files and structures being generated.
print("\n" + "█" * 60)
print("█ STEP 5/8: 3D TRIANGULATION")
print("█" * 60)
print("""
Triangulates 2D keypoints from multiple cameras into 3D coordinates.
  - Weighted by detection confidence scores
  - Cameras removed if reprojection error exceeds threshold
  - Right/left swap correction (disabled per current recommendation)
  - Missing values interpolated (cubic, bezier, linear, etc.)
  - Output: .trc file (OpenSim-compatible 3D marker format)
""")
t0 = time.time()
try:
    Pose2Sim.triangulation()
    step_times['triangulation'] = time.time() - t0
    print(f"\nTriangulation complete! ({step_times['triangulation']:.1f}s)")
    trc_files = list((work_dir / "pose-3d").glob("*.trc"))
    for trc in trc_files:
        size_kb = trc.stat().st_size / 1024
        print(f"  {trc.name} ({size_kb:.1f} KB)")
except Exception as e:
    print(f"\nTriangulation error: {e}")
    step_times['triangulation'] = time.time() - t0
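The core of triangulation is the direct linear transform (DLT): each camera's 2D observation contributes two linear constraints on the 3D point, and the least-squares solution comes from an SVD. Here is a minimal two-camera sketch with synthetic projection matrices (Pose2Sim's actual implementation additionally weights each camera by detection confidence):

```python
import numpy as np

def triangulate_dlt(Ps, uvs):
    """DLT triangulation: stack two rows per camera, solve A X = 0 via SVD."""
    A = []
    for P, (u, v) in zip(Ps, uvs):
        A.append(u * P[2] - P[0])  # u * (row3 . X) = row1 . X
        A.append(v * P[2] - P[1])  # v * (row3 . X) = row2 . X
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                     # null-space vector = homogeneous 3D point
    return X[:3] / X[3]            # dehomogenize

def project(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    uvw = P @ np.append(X, 1.0)
    return uvw[:2] / uvw[2]

# Two synthetic cameras: one at the origin, one with a 0.5 m baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0])

X_est = triangulate_dlt([P1, P2], [project(P1, X_true), project(P2, X_true)])
print(X_est)  # recovers X_true (up to numerical precision)
```

With noisy real detections the constraints no longer intersect exactly, which is why Pose2Sim monitors reprojection error and drops cameras that exceed the configured threshold.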
print("\n" + "█" * 60)
print("█ STEP 6/8: 3D COORDINATE FILTERING")
print("█" * 60)
print("""
Smooths 3D trajectories to reduce noise from triangulation.
Available filters:
  - Butterworth (low-pass, default)
  - Kalman (predictive smoothing)
  - Butterworth on speed
  - Gaussian, LOESS, Median
Filter type and parameters are set in Config.toml under [filtering].
""")
t0 = time.time()
try:
    Pose2Sim.filtering()
    step_times['filtering'] = time.time() - t0
    print(f"\nFiltering complete! ({step_times['filtering']:.1f}s)")
except Exception as e:
    print(f"\nFiltering note: {e}")
    step_times['filtering'] = time.time() - t0
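The default Butterworth option is a standard zero-phase low-pass filter. A sketch of the same idea with SciPy, using an illustrative cutoff and order rather than Pose2Sim's own defaults (those are set under [filtering] in Config.toml):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 60.0                                  # assumed camera frame rate (Hz)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 1.0 * t)        # 1 Hz "movement" signal
noisy = clean + 0.2 * rng.standard_normal(t.size)

# 4th-order low-pass Butterworth at 6 Hz, applied forward and backward
# (filtfilt) so the smoothed trajectory has no phase lag.
b, a = butter(N=4, Wn=6.0 / (fs / 2), btype='low')
smoothed = filtfilt(b, a, noisy)

rmse_raw = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_filt = np.sqrt(np.mean((smoothed - clean) ** 2))
print(f"RMS error raw: {rmse_raw:.3f} | filtered: {rmse_filt:.3f}")
```

Zero-phase filtering matters for biomechanics: a one-pass filter would delay the trajectory and shift every computed joint-angle event in time.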
print("\n" + "█" * 60)
print("█ STEP 7/8: MARKER AUGMENTATION")
print("█" * 60)
print("""
Uses a Stanford-trained LSTM to predict 47 virtual markers from
the triangulated keypoints. These extra markers can improve
OpenSim model scaling and inverse kinematics.
NOTE: Results are NOT always better with augmentation (~50/50).
The pipeline lets you compare with and without augmentation.
""")
t0 = time.time()
try:
    Pose2Sim.markerAugmentation()
    step_times['markerAugmentation'] = time.time() - t0
    print(f"\nMarker augmentation complete! ({step_times['markerAugmentation']:.1f}s)")
    aug_files = list((work_dir / "pose-3d").glob("*augmented*.trc"))
    for af in aug_files:
        print(f"  {af.name} ({af.stat().st_size/1024:.1f} KB)")
except Exception as e:
    print(f"\nMarker augmentation note: {e}")
    step_times['markerAugmentation'] = time.time() - t0
print("\n" + "█" * 60)
print("█ STEP 8/8: OPENSIM KINEMATICS")
print("█" * 60)
print("""
Performs automated model scaling and inverse kinematics:
  1. Scale: Adjusts a generic OpenSim model to participant dimensions
     (based on segment lengths from the slowest, most stable frames)
  2. IK: Computes 3D joint angles by fitting the model to markers
Output:
  - Scaled .osim model file
  - Joint angles .mot file (can be loaded in the OpenSim GUI)
""")
t0 = time.time()
if opensim_available:
    try:
        Pose2Sim.kinematics()
        step_times['kinematics'] = time.time() - t0
        print(f"\nKinematics complete! ({step_times['kinematics']:.1f}s)")
        kin_dir = work_dir / "kinematics"
        if kin_dir.exists():
            for f in sorted(kin_dir.glob("*")):
                print(f"  {f.name} ({f.stat().st_size/1024:.1f} KB)")
    except Exception as e:
        print(f"\nKinematics error: {e}")
        step_times['kinematics'] = time.time() - t0
else:
    print("\nSkipped — OpenSim not installed in this Colab runtime.")
    print("  To run kinematics locally, install via:")
    print("    conda install -c opensim-org opensim")
    step_times['kinematics'] = 0
print("\n" + "═" * 60)
print("PIPELINE TIMING SUMMARY")
print("═" * 60)
total = 0
for step, t in step_times.items():
    bar = "█" * int(t / max(step_times.values(), default=1) * 30)
    print(f"  {step:<22s} {t:6.1f}s {bar}")
    total += t
print(f"  {'TOTAL':<22s} {total:6.1f}s")
print("═" * 60)
We continue the main workflow by converting the associated 2D detections into 3D coordinates via triangulation, then improving the resulting trajectories through filtering. We also run marker augmentation to enrich the marker set for downstream biomechanical analysis, and then attempt OpenSim-based kinematics when the environment supports it. At the end of this section, we summarize the runtime of each major step to evaluate the overall efficiency and flow of the full pipeline.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from pathlib import Path
import re

def parse_trc(trc_path):
    """Parse a .trc file and return marker names, frame data, and metadata."""
    with open(trc_path, 'r') as f:
        lines = f.readlines()
    meta_keys = lines[2].strip().split('\t')
    meta_vals = lines[3].strip().split('\t')
    metadata = dict(zip(meta_keys, meta_vals))
    header_lines = 0
    for i, line in enumerate(lines):
        if line.strip() and not line.startswith(('PathFileType', 'DataRate',
                                                 'Frame', '\t')):
            try:
                float(line.strip().split('\t')[0])
                header_lines = i
                break
            except ValueError:
                continue
    raw_markers = lines[3].strip().split('\t')
    markers = [m for m in raw_markers if m and m not in ('Frame#', 'Time')]
    marker_names = []
    for m in markers:
        if m and (not marker_names or m != marker_names[-1]):
            marker_names.append(m)
    data_lines = lines[header_lines:]
    data = []
    for line in data_lines:
        vals = line.strip().split('\t')
        if len(vals) > 2:
            try:
                row = [float(v) if v else np.nan for v in vals]
                data.append(row)
            except ValueError:
                continue
    data = np.array(data)
    return marker_names, data, metadata
trc_files = sorted((work_dir / "pose-3d").glob("*.trc"))
print(f"\nFound {len(trc_files)} TRC file(s):")
for f in trc_files:
    print(f"  {f.name}")

trc_file = None
for f in trc_files:
    if 'filt' in f.name.lower() and 'augm' not in f.name.lower():
        trc_file = f
        break
if trc_file is None and trc_files:
    trc_file = trc_files[0]

if trc_file:
    print(f"\nVisualizing: {trc_file.name}")
    marker_names, data, metadata = parse_trc(trc_file)
    print(f"  Markers: {len(marker_names)}")
    print(f"  Frames: {data.shape[0]}")
    print(f"  Marker names: {marker_names[:10]}{'...' if len(marker_names) > 10 else ''}")
    frames = data[:, 0].astype(int) if data.shape[1] > 0 else []
    times = data[:, 1] if data.shape[1] > 1 else []
    coords = data[:, 2:]
    n_markers = len(marker_names)
    mid_frame = len(data) // 2
    fig = plt.figure(figsize=(16, 6))
    ax1 = fig.add_subplot(131, projection='3d')
    xs = coords[mid_frame, 0::3][:n_markers]
    ys = coords[mid_frame, 1::3][:n_markers]
    zs = coords[mid_frame, 2::3][:n_markers]
    ax1.scatter(xs, ys, zs, c='dodgerblue', s=40, alpha=0.8, edgecolors='navy')
    for i, name in enumerate(marker_names[:len(xs)]):
        if i % 3 == 0:
            ax1.text(xs[i], ys[i], zs[i], f' {name}', fontsize=6, alpha=0.7)
    ax1.set_xlabel('X (m)')
    ax1.set_ylabel('Y (m)')
    ax1.set_zlabel('Z (m)')
    ax1.set_title(f'3D Keypoints (Frame {int(frames[mid_frame])})', fontsize=10)
    ax2 = fig.add_subplot(132)
    key_markers = ['RAnkle', 'LAnkle', 'RWrist', 'LWrist', 'Nose']
    colors_map = {'RAnkle': 'red', 'LAnkle': 'blue', 'RWrist': 'orange',
                  'LWrist': 'green', 'Nose': 'purple'}
    for mkr in key_markers:
        if mkr in marker_names:
            idx = marker_names.index(mkr)
            z_col = idx * 3 + 2
            if z_col < coords.shape[1]:
                ax2.plot(times, coords[:, z_col],
                         label=mkr, color=colors_map.get(mkr, 'gray'),
                         linewidth=1.2, alpha=0.8)
    ax2.set_xlabel('Time (s)')
    ax2.set_ylabel('Z position (m)')
    ax2.set_title('Vertical Trajectories', fontsize=10)
    ax2.legend(fontsize=8, loc='best')
    ax2.grid(True, alpha=0.3)
    ax3 = fig.add_subplot(133)
    if len(times) > 1:
        dt = np.diff(times)
        dt[dt == 0] = 1e-6
        for mkr in ['RAnkle', 'RWrist']:
            if mkr in marker_names:
                idx = marker_names.index(mkr)
                x_col, y_col, z_col = idx * 3, idx * 3 + 1, idx * 3 + 2
                if z_col < coords.shape[1]:
                    dx = np.diff(coords[:, x_col])
                    dy = np.diff(coords[:, y_col])
                    dz = np.diff(coords[:, z_col])
                    speed = np.sqrt(dx**2 + dy**2 + dz**2) / dt
                    speed = np.clip(speed, 0, np.nanpercentile(speed, 99))
                    ax3.plot(times[1:], speed, label=mkr,
                             color=colors_map.get(mkr, 'gray'),
                             linewidth=0.8, alpha=0.7)
    ax3.set_xlabel('Time (s)')
    ax3.set_ylabel('Speed (m/s)')
    ax3.set_title('Marker Speeds (Quality Check)', fontsize=10)
    ax3.legend(fontsize=8)
    ax3.grid(True, alpha=0.3)
    plt.tight_layout()
    plt.savefig(work_dir / 'trajectory_analysis.png', dpi=150, bbox_inches='tight')
    plt.show()
    print("\nTrajectory plots saved to trajectory_analysis.png")
else:
    print("\nNo TRC file found to visualize.")
We focus on interpreting the generated 3D motion data by parsing the TRC file and extracting marker trajectories in a structured way. We create visualizations of the 3D keypoints, the vertical motion of selected body parts, and marker speeds, so we can inspect both movement patterns and signal quality. This helps us move beyond simply running the pipeline and start understanding what the reconstructed motion actually looks like over time.
def parse_mot(mot_path):
    """Parse an OpenSim .mot file and return column names and data."""
    with open(mot_path, 'r') as f:
        lines = f.readlines()
    data_start = 0
    for i, line in enumerate(lines):
        if 'endheader' in line.lower():
            data_start = i + 1
            break
    col_line = lines[data_start].strip().split('\t')
    if not col_line[0].replace('.', '').replace('-', '').replace('e', '').isdigit():
        col_names = col_line
        data_start += 1
    else:
        col_names = lines[data_start - 2].strip().split('\t')
    data = []
    for line in lines[data_start:]:
        vals = line.strip().split('\t')
        try:
            row = [float(v) for v in vals]
            data.append(row)
        except ValueError:
            continue
    return col_names, np.array(data)
mot_files = sorted((work_dir / "kinematics").glob("*.mot")) if (work_dir / "kinematics").exists() else []
if mot_files:
    mot_file = mot_files[0]
    print(f"\nVisualizing joint angles: {mot_file.name}")
    col_names, jdata = parse_mot(mot_file)
    print(f"  Columns: {len(col_names)}")
    print(f"  Frames: {jdata.shape[0]}")
    print(f"  Joints: {col_names[:8]}...")
    fig, axes = plt.subplots(2, 3, figsize=(16, 8))
    fig.suptitle('OpenSim Joint Angles from Pose2Sim', fontsize=14, fontweight='bold')
    joints_to_plot = [
        ('hip_flexion_r', 'Right Hip Flexion'),
        ('hip_flexion_l', 'Left Hip Flexion'),
        ('knee_angle_r', 'Right Knee Angle'),
        ('knee_angle_l', 'Left Knee Angle'),
        ('ankle_angle_r', 'Right Ankle Angle'),
        ('ankle_angle_l', 'Left Ankle Angle'),
    ]
    time_col = jdata[:, 0] if 'time' in col_names[0].lower() else np.arange(jdata.shape[0])
    for ax, (joint_key, title) in zip(axes.flat, joints_to_plot):
        col_idx = None
        for i, cn in enumerate(col_names):
            if joint_key.lower() in cn.lower():
                col_idx = i
                break
        if col_idx is not None and col_idx < jdata.shape[1]:
            ax.plot(time_col, jdata[:, col_idx], 'b-', linewidth=1.2)
            ax.set_title(title, fontsize=10)
            ax.set_xlabel('Time (s)')
            ax.set_ylabel('Angle (°)')
            ax.grid(True, alpha=0.3)
        else:
            ax.text(0.5, 0.5, f'{title}\n(not found)', ha='center',
                    va='center', transform=ax.transAxes, fontsize=10, color='gray')
            ax.set_title(title, fontsize=10, color='gray')
    plt.tight_layout()
    plt.savefig(work_dir / 'joint_angles.png', dpi=150, bbox_inches='tight')
    plt.show()
    print("\nJoint angle plots saved to joint_angles.png")
else:
    print("\nNo .mot files found (kinematics step was skipped or OpenSim unavailable).")
    print("  Joint angle visualization will be available when running locally with OpenSim.")
print("\nDATA QUALITY ANALYSIS")
print("=" * 60)
if trc_file:
    marker_names, data, metadata = parse_trc(trc_file)
    coords = data[:, 2:]
    print("\nPer-Marker Data Completeness:")
    print("-" * 45)
    n_frames = data.shape[0]
    for i, mkr in enumerate(marker_names):
        x_col = i * 3
        if x_col < coords.shape[1]:
            valid = np.sum(~np.isnan(coords[:, x_col]))
            pct = 100 * valid / n_frames
            bar = "█" * int(pct / 5) + "░" * (20 - int(pct / 5))
            status = "✓" if pct > 90 else "~" if pct > 50 else "✗"
            print(f"  {status} {mkr:<20s} {bar} {pct:5.1f}% ({valid}/{n_frames})")
    print("\nTrajectory Smoothness (lower = better):")
    print("-" * 45)
    for mkr in marker_names[:10]:
        idx = marker_names.index(mkr)
        cols = [idx * 3, idx * 3 + 1, idx * 3 + 2]
        if cols[2] < coords.shape[1]:
            xyz = coords[:, cols]
            accel = np.diff(xyz, axis=0, n=2)
            jitter = np.nanmean(np.abs(accel))
            bar = "█" * min(int(jitter * 500), 20)
            print(f"  {mkr:<20s} jitter={jitter:.4f} {bar}")
We further analyze the kinematic results by parsing the MOT file and plotting key joint angles once the OpenSim stage completes successfully. We then evaluate overall data quality by measuring marker completeness and estimating smoothness using inter-frame jitter, which gives a better sense of how stable and reliable the reconstructed trajectories are. This section helps us assess whether the motion output is ready for meaningful biomechanical interpretation or needs further tuning.
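The jitter score used above is simply the mean absolute second difference of each trajectory, which approximates frame-to-frame acceleration. On a toy signal you can see how it separates smooth from noisy motion:

```python
import numpy as np

t = np.linspace(0, 2, 120)
smooth = np.sin(2 * np.pi * t)                        # idealized trajectory
rng = np.random.default_rng(2)
noisy = smooth + 0.01 * rng.standard_normal(t.size)   # same path + 1 cm jitter

def jitter(x):
    """Mean absolute second difference — a simple smoothness score."""
    return np.nanmean(np.abs(np.diff(x, n=2)))

print(f"smooth: {jitter(smooth):.5f}  noisy: {jitter(noisy):.5f}")
```

Because double-differencing amplifies high-frequency noise, even a millimeter-scale tremor shows up clearly in this score, making it a quick sanity check on filtering quality.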
print("\nADVANCED: PROGRAMMATIC CONFIGURATION")
print("=" * 60)
print("""
You can modify ANY Config.toml parameter programmatically.
This is useful for batch experiments or parameter sweeps.
Below are common modifications for different scenarios.
""")
import toml
config = toml.load(work_dir / "Config.toml")

print("Example 1: Change filter type to Kalman")
example_config_1 = {
    'filtering': {
        'type': 'kalman',
    }
}
print("  config['filtering']['type'] = 'kalman'")

print("\nExample 2: Tighten triangulation quality thresholds")
example_config_2 = {
    'triangulation': {
        'likelihood_threshold': 0.4,
        'reproj_error_threshold_triangulation': 15,
        'min_cameras_for_triangulation': 3,
    }
}
for k, v in example_config_2['triangulation'].items():
    print(f"  config['triangulation']['{k}'] = {v}")

print("\nExample 3: Use lightweight mode for faster processing")
print("  config['pose']['mode'] = 'lightweight'")
print("  config['pose']['mode'] = 'balanced', 'performance', or custom dict")

print("\nExample 4: Enable multi-person analysis")
print("  config['project']['multi_person'] = True")
print("  config['project']['participant_height'] = [1.72, 1.65]")
print("  config['project']['participant_mass'] = [70.0, 55.0]")

print("\nExample 5: Process only specific frames")
print("  config['project']['frame_range'] = [50, 200]")

print("\nTo apply changes, modify and save Config.toml, then re-run the pipeline:")
print("""
    import toml
    config = toml.load('Config.toml')
    config['filtering']['type'] = 'kalman'
    with open('Config.toml', 'w') as f:
        toml.dump(config, f)
    from Pose2Sim import Pose2Sim
    Pose2Sim.filtering()
""")
print("\nALTERNATIVE: ONE-LINE PIPELINE EXECUTION")
print("=" * 60)
print("""
Instead of calling each step individually, use:
    from Pose2Sim import Pose2Sim
    Pose2Sim.runAll()
Or with selective steps:
    Pose2Sim.runAll(
        do_calibration=True,
        do_poseEstimation=True,
        do_synchronization=True,
        do_personAssociation=True,
        do_triangulation=True,
        do_filtering=True,
        do_markerAugmentation=False,
        do_kinematics=True
    )
This is equivalent to calling each step in sequence, but more concise
for production workflows.
""")
print("\nALL GENERATED OUTPUT FILES")
print("=" * 60)
output_dirs = ['pose', 'pose-3d', 'kinematics']
for dirname in output_dirs:
    d = work_dir / dirname
    if d.exists():
        files = list(d.rglob("*"))
        files = [f for f in files if f.is_file()]
        total_size = sum(f.stat().st_size for f in files) / (1024 * 1024)
        print(f"\n{dirname}/ ({len(files)} files, {total_size:.1f} MB total)")
        for f in sorted(files)[:15]:
            rel = f.relative_to(d)
            size = f.stat().st_size / 1024
            print(f"  {rel} ({size:.1f} KB)")
        if len(files) > 15:
            print(f"  ... and {len(files) - 15} more files")

print("\n" + "=" * 60)
print("TO DOWNLOAD RESULTS IN GOOGLE COLAB:")
print("=" * 60)
print("""
    from google.colab import files
    files.download('/content/Pose2Sim_Tutorial/pose-3d/YOUR_FILE.trc')
    !zip -r /content/pose2sim_results.zip /content/Pose2Sim_Tutorial/pose-3d /content/Pose2Sim_Tutorial/kinematics
    files.download('/content/pose2sim_results.zip')
""")
print("""
╔══════════════════════════════════════════════════════════════════════════════╗
║ TUTORIAL COMPLETE!                                                           ║
╠══════════════════════════════════════════════════════════════════════════════╣
║                                                                              ║
║ WHAT YOU ACHIEVED:                                                           ║
║   • Installed Pose2Sim on Google Colab                                       ║
║   • Ran camera calibration (conversion mode)                                 ║
║   • Performed 2D pose estimation with RTMPose                                ║
║   • Synchronized multi-camera views                                          ║
║   • Associated persons across cameras                                        ║
║   • Triangulated 2D→3D keypoints                                             ║
║   • Filtered 3D trajectories                                                 ║
║   • Augmented markers with the Stanford LSTM                                 ║
║   • Computed joint angles with OpenSim IK (if available)                     ║
║   • Visualized trajectories, joint angles, and data quality                  ║
║                                                                              ║
║ NEXT STEPS:                                                                  ║
║   • Try with YOUR OWN multi-camera videos                                    ║
║   • Experiment with 'performance' mode for higher accuracy                   ║
║   • Try multi-person mode (Demo_MultiPerson)                                 ║
║   • Visualize in OpenSim GUI or Blender (Pose2Sim_Blender add-on)            ║
║   • Use custom DeepLabCut models for animal tracking                         ║
║   • Run batch processing over multiple trials                                ║
║                                                                              ║
║ CITATION:                                                                    ║
║   Pagnon, D., Domalain, M., & Reveret, L. (2022).                            ║
║                                                                              ║
║ GitHub:  https://github.com/perfanalytics/pose2sim                           ║
║ Discord: https://discord.com/invite/4mXUdSFjmt                               ║
║ Docs:    https://perfanalytics.github.io/pose2sim/                           ║
╚══════════════════════════════════════════════════════════════════════════════╝
""")
We explore more advanced usage patterns by demonstrating how to modify the Pose2Sim configuration programmatically for experiments and alternative processing setups. We also review how to run the entire pipeline in a single call, inspect all generated output files, and prepare the results for download directly from Colab. Finally, we wrap up the tutorial with a summary of what we achieved and highlight next steps for extending the workflow to our own datasets and research goals.
In conclusion, we gained a full hands-on understanding of how Pose2Sim transforms multi-view videos into 3D motion trajectories and biomechanical outputs within a practical Colab workflow. We saw how each stage of the pipeline connects to the next, from extracting reliable 2D keypoints to reconstructing filtered 3D coordinates and, when available, estimating joint kinematics through OpenSim. We also went beyond basic execution by visualizing results, checking data quality, and exploring programmatic configuration changes for more advanced experimentation. In the end, we have a reusable, adaptable pipeline that we can extend to our own datasets, refine for better accuracy, and use as a foundation for deeper motion analysis and markerless biomechanics research.
The post A Coding Guide to Markerless 3D Human Kinematics with Pose2Sim, RTMPose, and OpenSim appeared first on MarkTechPost.
