3. AM-Quality

This section covers the additional API endpoints and models of AM-Quality.

3.1. Inspection reports

Compared to the AM-Vision API, there is a new endpoint: inspection_report. An inspection report is the result of the quality analysis.

Each scan in the system has at most one inspection report. The scan is created first (at part entry), and the inspection report is added once the analysis is ready.
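Because the report appears only after the analysis, a client that needs it right away has to poll (or, better, use the scan.inspect webhook described in section Webhooks scan.inspect and scan.inspect.update). A minimal polling sketch, assuming the Slumber client api defined earlier in this tutorial and a known scan_uuid:

import time

# poll until the analysis has produced an inspection report for this scan
while True:
    res = api.inspection_report.get(scan=scan_uuid)
    if res["results"]:
        inspection_report = res["results"][0]
        break
    time.sleep(5)  # analysis not finished yet, retry later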

An inspection report carries the information of a quality scan, the evaluation metrics, and metrology data (when a metrology template is present; see Metrology templates for how to use the API to upload an .xvgt file and link it to a part).

Main fields of the inspection report are:

uuid:
    A unique id

metrics:
    dict, measurements used to determine the automatic PASS/FAIL verdict (see section Inspection profile, inspection criterium and metrics for a full overview)

metrology_data:
    dict, results for the measurements specified in the MetrologyTemplate

scan:
    string, the scan uuid for which this report was produced

passed:
    boolean, the automatic PASS/FAIL verdict

approved:
    boolean, the operator verdict

part:
    string, the id of the part (NOT the uuid)

pdf_url:
    string, the url to download the PDF version of this report
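For illustration, a report object returned by the API could look like the following; all values, including the URL format, are made up:

{
    'uuid': 'e3ef74e5-a38f-4c26-8b3a-53f98110aa58',
    'metrics': {'bbox_deviation': 0.12, 'points_percent>0.50mm': 3.4},
    'metrology_data': {},
    'scan': '0f8fad5b-d9cb-469f-a165-70867728950e',
    'passed': True,
    'approved': None,
    'part': 'batch_01_part_1',
    'pdf_url': '<AM-QUALITY-URL>/api/inspection_report/e3ef74e5-a38f-4c26-8b3a-53f98110aa58/render_pdf/'
}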

Here’s an example that retrieves the inspection report for a scan with the Slumber API client defined earlier in this tutorial:

# retrieve the inspection report for a given scan
res = api.inspection_report.get(scan=scan_uuid)
inspection_report = res["results"][0]
# get the PDF binary content from 'render_pdf' endpoint
content = api.inspection_report(inspection_report["uuid"]).render_pdf.get()
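The returned content is the raw PDF bytes, so it can be written straight to disk (the filename here is arbitrary):

with open("report.pdf", "wb") as f:
    f.write(content)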

3.2. Metrology templates

Metrology templates are .xvgt files that you can get from the VGStudio Max software. You can upload .xvgt files that specify what to measure on the product via the API endpoint api/metrology_template/.

The following example uses the Slumber API client to upload a metrology template and then links it to a part by its id:

template_path = "/path/to/xvgt/file"
# upload the .xvgt file and give the template an id
api.metrology_template.post(
    {"id": "my_xvg_template"},
    files={"template": open(template_path, "rb")},
)
# link the metrology template to an existing part in the database
part_id = "batch_01_part_1"
# set the metrology_template attribute with the id
api.part(part_id).patch({"metrology_template": "my_xvg_template"})
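To verify the link, you can fetch the part back; this sketch assumes the part payload exposes the metrology_template id as set above:

part = api.part(part_id).get()
assert part["metrology_template"] == "my_xvg_template"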

3.3. Inspection profile, inspection criterium and metrics

An inspection profile is a collection of criteria. These criteria define the metrics to compute and the thresholds to apply. An inspection profile can be defined for a part or for a material. AM-Quality falls back to the default inspection profile defined in the settings if there is no inspection profile for the part under evaluation or its recognized material.

An inspection criterium is defined by a metric (chosen from the list of available metrics) and its parameter thresholds.

The available metrics and their parameters are currently:

points_percent:
    distance_threshold (mm): Minimum deviation distance to consider a point as defective
    reject_threshold (%): Maximum percentage of points that can exceed the distance threshold before the part fails

bbox_deviation:
    threshold (mm): Maximum deviation of the most deviated bounding box axis between the scanned part and the reference part

bbox_deviation_percent:
    threshold (%): Maximum deviation of the most deviated bounding box axis as a percentage of the reference part

missing_area:
    threshold (mm²): Maximum size of the largest missing area on the part

missing_area_percent:
    threshold (%): Maximum size of the largest missing area as a percentage of the total part area

mean_absolute_error:
    threshold (mm): Maximum mean absolute error between the scanned and reference surfaces

median_absolute_error:
    threshold (mm): Maximum median absolute error between the scanned and reference surfaces

rmse:
    threshold (mm): Maximum root mean squared error between the scanned and reference surfaces

largest_area:
    min_deviation_mm (mm): Minimum deviation distance to identify defective patches
    max_area_mm (mm²): Maximum area for the largest defective patch

largest_area_percent:
    min_deviation_mm (mm): Minimum deviation distance to identify defective patches
    max_area_pct (%): Maximum area of the largest defective patch as a percentage of the total surface

total_area:
    min_deviation_mm (mm): Minimum deviation distance to identify defective patches
    max_area_mm (mm²): Maximum combined area of all defective patches

total_area_percent:
    min_deviation_mm (mm): Minimum deviation distance to identify defective patches
    max_area_pct (%): Maximum combined area of all defective patches as a percentage of the total surface

max_distance:
    threshold (mm): Maximum deviation distance between the scanned and reference surfaces

Here’s an example that creates a profile and some criteria for it:

# create profile
res = api.inspection_profile.post({"name": "Test"})
profile_uuid = res["uuid"]
# prepare data for the criteria
data = [
    {
        "profile": profile_uuid,
        "metric": "points_percent",
        "order": 1,
        "parameters": {"distance_threshold": 0.5, "reject_threshold": 0.5},
    },
    {
        "profile": profile_uuid,
        "metric": "bbox_deviation",
        "order": 2,
        "parameters": {"threshold": 0.5},
    },
    {
        "profile": profile_uuid,
        "metric": "missing_area",
        "order": 3,
        "parameters": {"threshold": 0.5},
    },
]
# create criteria for the profile
api.inspection_criterium.post(data)
# Assign the profile to a part
api.part(part_id).patch({"inspection_profile": profile_uuid})

3.4. Decision Trees

The automatic PASS/FAIL evaluation of an inspection report is based on the enabled criteria. If all of them pass, the inspection report has passed=True; if any of them fails, it has passed=False.
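In other words, the default verdict is a logical AND over the enabled criteria. As a conceptual sketch (not the actual backend code, and with hypothetical per-criterion results):

# hypothetical per-criterion results for one report
criteria_results = {"points_percent": True, "bbox_deviation": True, "missing_area": False}
# the report passes only if every enabled criterion passes
passed = all(criteria_results.values())  # False here: missing_area failed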

If metrics evaluation needs more flexibility, you can add a custom decision tree to an inspection profile.

Decision Tree YAML Syntax

This YAML format represents a binary decision tree used for classification or decision-making based on metric thresholds.

metric: <<metric_id>>
operator: [>,<,=,==,!=,<>,>=,<=]
threshold: <<float number>>
left: true  # true path
right: false  # false path

The keys left and right hold the literals true or false when they are terminal leaves; otherwise they define another node, recursively.

Each node in the tree contains:

metric: The feature or measurement being evaluated (e.g., bbox_deviation, missing_area)

operator: The comparison operator to apply (<, >, <=, >=, ==, =, !=, <>)

threshold: The numeric value to compare against

left: The branch to follow when the condition is true (metric compared to threshold using operator)

right: The branch to follow when the condition is false

Leaf Nodes

Branches terminate with leaf nodes that contain boolean values (true or false) representing the final classification or decision outcome.

Example Interpretation

Using the provided tree:

metric: bbox_deviation
operator: <
threshold: 0.6
left:
  metric: missing_area
  operator: <
  threshold: 600
  left:
    metric: points_percent>0.50mm
    operator: <
    threshold: 7
    left: true
    right: false
  right: false
right: false

Root node: If bbox_deviation < 0.6 → go left, else return false

Second level: If missing_area < 600 → go left, else return false

Third level: If points_percent>0.50mm < 7 → return true, else return false

In this case, the tree returns true only when all three conditions are satisfied.
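To make the evaluation semantics concrete, here is a minimal evaluator sketch for this YAML format. It is illustrative only, not the AM-Quality implementation; tree_yaml is assumed to hold the example tree above as a string:

import operator

import yaml

# map the supported YAML operators to Python comparisons
OPS = {
    ">": operator.gt, "<": operator.lt,
    ">=": operator.ge, "<=": operator.le,
    "==": operator.eq, "=": operator.eq,
    "!=": operator.ne, "<>": operator.ne,
}

def evaluate(node, metrics):
    # terminal leaves are the literal booleans true/false
    if isinstance(node, bool):
        return node
    # compare the metric value against the node threshold
    condition = OPS[node["operator"]](metrics[node["metric"]], node["threshold"])
    # follow the true path (left) or the false path (right)
    return evaluate(node["left"] if condition else node["right"], metrics)

tree = yaml.safe_load(tree_yaml)
metrics = {"bbox_deviation": 0.4, "missing_area": 120, "points_percent>0.50mm": 3}
print(evaluate(tree, metrics))  # True: all three conditions hold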

Notes

  • Metric names can contain special characters (like > in points_percent>0.50mm)

  • The tree is evaluated recursively from root to leaf

  • Each internal node represents a decision point; each leaf represents a final outcome

To use a decision tree, add it to the InspectionProfile decision_tree field and set use_decision_tree to true, via the AM-Quality API or the Django Admin.

The tree is validated unless use_decision_tree is false.

For the metric key, use the metric_id of a criterion already defined on the profile. It’s not possible to add a decision tree if there are no criteria.

To inspect the available metrics/criteria and their metric_id, use the API or go to Django Admin.

When adding new nodes to the decision tree, or when changing an InspectionCriterium in a way that changes its metric_id, the decision tree can become inconsistent.

To avoid unnecessary validation errors, first disable use_decision_tree for the profile, then apply the changes to the criteria and the decision tree, and finally re-enable the use_decision_tree flag. Re-enabling it validates the decision tree.
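A sketch of that sequence with the Slumber client; profile_uuid, criterium_uuid and new_tree_yaml are placeholders, and PATCH support on these endpoints is assumed:

# 1. disable the flag so the edits below don't trigger tree validation
api.inspection_profile(profile_uuid).patch({"use_decision_tree": False})
# 2. apply the changes to the criteria and/or the decision tree
api.inspection_criterium(criterium_uuid).patch({"parameters": {"threshold": 0.8}})
api.inspection_profile(profile_uuid).patch({"decision_tree": new_tree_yaml})
# 3. re-enabling the flag validates the updated decision tree
api.inspection_profile(profile_uuid).patch({"use_decision_tree": True})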

API examples

from slumber.exceptions import HttpClientError

# Create first a new profile
profile = {"name": "Test"}
res = api.inspection_profile.post(profile)
profile["uuid"] = res["uuid"]
# add criteria for the profile
criteria = [
    {
        "metric": "points_percent",
        "order": 1,
        "profile": profile["uuid"],
        "parameters": {"distance_threshold": 0.5, "reject_threshold": 0.5},
    },
    {
        "metric": "bbox_deviation",
        "order": 2,
        "profile": profile["uuid"],
        "parameters": {"threshold": 0.5},
    },
    {
        "metric": "missing_area",
        "order": 3,
        "profile": profile["uuid"],
        "parameters": {"threshold": 0.5},
    },
]
api.inspection_criterium.post(criteria)
# add the decision tree
decision_tree = """metric: bbox_deviation
operator: <
threshold: 0.6
left:
  metric: missing_area
  operator: <
  threshold: 600
  left:
    metric: points_percent>0.50mm
    operator: <
    threshold: 7
    left: true
    right: false
  right: false
right: false
"""
profile["decision_tree"] = decision_tree
profile["use_decision_tree"] = True
api.inspection_profile(profile["uuid"]).put(profile)
# Post an invalid decision tree
invalid_tree = """metric: bbox_deviation_invalid
operator: <
threshold: 0.6
left: true
right: false
"""
profile["decision_tree"] = invalid_tree
try:
    res = api.inspection_profile(profile["uuid"]).put(profile)
except HttpClientError as e:
    print(e.response.status_code)  # 400
    print(e.response.json())
    # {'decision_tree': [
    # 'Invalid decision tree: Invalid metric name at root: bbox_deviation_invalid'
    # ]}

3.5. Webhooks scan.inspect and scan.inspect.update

The webhooks scan.inspect and scan.inspect.update are additional events that the AM-Quality backend sends when an inspection report is ready or updated (typically when an operator approves or rejects the report from the UI). You can register and listen to these events for further processing (e.g. syncing with your systems or exporting CSV/PDF reports):

api.webhook.post({
    'event': 'scan.inspect',
    'target': 'http://yourapi.com/on_inspection_report/'
})
api.webhook.post({
    'event': 'scan.inspect.update',
    'target': 'http://yourapi.com/on_inspection_report_update/'
})

The target endpoint callback receives a payload with the uuid of the inspection report object (see the Webhooks section for more details). The inspection report can subsequently be fetched from the API. Here’s an example webhook JSON output:

{
    'hook': {
        'id': 18,
        'event': 'scan.inspect',
        'target': 'http://yourapi.com/on_inspection_report/'
    },
    'data': {
        'uuid': 'e3ef74e5-a38f-4c26-8b3a-53f98110aa58',
        'url': '<AM-QUALITY-URL>/api/inspection_report/e3ef74e5-a38f-4c26-8b3a-53f98110aa58/'
    }
}

Below is an example scan.inspect webhook receiver. It listens for the webhook, fetches the part and scan ids, and downloads a PDF version of the report:

# Flask python code for the callback view previously defined in ``target``
import logging

from flask import Flask, request

log = logging.getLogger(__name__)

def on_inspection_report():
    post = request.json
    report_uuid = post["data"]["uuid"]
    log.info("An InspectionReport was created with uuid: %s", report_uuid)
    # Get the full inspection report from the API
    inspection_report = api.inspection_report(report_uuid).get()

    # scan_uuid and part_id can be retrieved from the report
    scan_uuid = inspection_report['scan']
    part_id = inspection_report['part']

    # a PDF version of the report can be downloaded
    log.info("Downloading pdf from %s", inspection_report["pdf_url"])
    content = api.inspection_report(report_uuid).render_pdf.get()
    with open("./report.pdf", "wb") as f:
        f.write(content)
    # a Flask view must return a response; acknowledge the delivery
    return "", 204

app = Flask(__name__)
app.add_url_rule(
    rule="/on_inspection_report/",
    endpoint="on_inspection_report",
    view_func=on_inspection_report,
    methods=["POST"],
)

app.run(port=5000, debug=True)