# Predictive (Modeled) Autotuning API¶

class ModeledApp(op_knobs, target_device=None)[source]

Like approxapp.ApproxApp, but uses a model for QoS/cost measurement.

To use this class, inherit from it and implement get_models, empirical_measure_qos_cost, and approxapp.ApproxApp.name. (This class provides an implementation of approxapp.ApproxApp.measure_qos_cost.)

Parameters
• op_knobs (Dict[str, List[ApproxKnob]]) – a mapping from each operator (identified by str) to a list of applicable knobs.

• target_device (Optional[str]) – the target device that this application should be tuned on. See approxapp.ApproxApp constructor.
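For illustration, op_knobs might look like the following (operator and knob names here are hypothetical; in real use the values would be ApproxKnob instances rather than strings):

```python
# Hypothetical operator/knob names; real values would be ApproxKnob
# instances rather than plain strings.
op_knobs = {
    "conv1": ["fp16", "perf_2x2"],  # two applicable knobs
    "conv2": ["fp16"],              # one applicable knob
    "fc1": [],                      # no applicable knobs
}
```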

abstract empirical_measure_qos_cost(with_approxes, is_test)[source]

Empirically measures QoS and cost by actually running the program with approximation (as opposed to using a model).

Parameters
• with_approxes (Dict[str, str]) – The approximation configuration to measure QoS and cost for.

• is_test (bool) – If True, uses a “test” dataset/mode that is held out from the tuner during tuning.

Return type

Tuple[float, float]

abstract get_models()[source]

A list of QoS/Cost prediction models for this application.

Cost models should inherit from ICostModel while QoS models should inherit from IQoSModel.

Return type

List[Union[ICostModel, IQoSModel]]
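The subclassing pattern described above can be sketched as follows. ModeledAppSketch is a local stand-in for ModeledApp (not the real predtuner class), and ToyApp's QoS/cost numbers are made up for illustration:

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Tuple

# Local stand-in mirroring the documented ModeledApp interface;
# the real class lives in predtuner.
class ModeledAppSketch(ABC):
    def __init__(self, op_knobs: Dict[str, List[str]]):
        self.op_knobs = op_knobs

    @abstractmethod
    def empirical_measure_qos_cost(
        self, with_approxes: Dict[str, str], is_test: bool
    ) -> Tuple[float, float]: ...

    @abstractmethod
    def get_models(self) -> list: ...

class ToyApp(ModeledAppSketch):
    @property
    def name(self) -> str:
        return "toy-app"

    def empirical_measure_qos_cost(self, with_approxes, is_test):
        # Pretend to run the program: QoS drops and cost shrinks
        # with each approximated operator (made-up numbers).
        qos = 100.0 - 2.0 * len(with_approxes)
        cost = 1.0 / (1.0 + 0.1 * len(with_approxes))
        return qos, cost

    def get_models(self):
        return []  # ICostModel / IQoSModel instances would go here

app = ToyApp({"conv1": ["fp16", "perf_2x2"], "conv2": ["fp16"]})
qos, cost = app.empirical_measure_qos_cost({"conv1": "fp16"}, is_test=False)
```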

get_tuner()[source]

Sets up a tuner instance on which the user can directly call tune() with opentuner parameters.

This returns an ApproxModeledTuner, different from approxapp.ApproxApp.get_tuner which returns an ApproxTuner.

Return type

ApproxModeledTuner

measure_qos_cost(with_approxes, is_test, qos_model=None, cost_model=None)[source]

Returns the QoS and cost (time, energy, …) of a given configuration, potentially using models.

If either cost_model or qos_model is None, one empirical measurement is performed to obtain the quantity that lacks a model. Otherwise, no empirical measurement is used.

Note that when running on the test set (is_test == True), no modeling is allowed; passing a model raises a ValueError.

Parameters
• with_approxes (Dict[str, str]) – The approximation configuration to measure QoS and cost for.

• is_test (bool) – If True, uses a “test” dataset/mode that is held out from the tuner during tuning; otherwise uses the “tune” dataset.

• qos_model (Optional[str]) – The QoS model to use in this measurement, keyed by model’s name (See IQoSModel.name).

• cost_model (Optional[str]) – The Cost model to use in this measurement, keyed by model’s name (See ICostModel.name).

Return type

Tuple[float, float]
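The dispatch rules above can be sketched as follows. This is a hypothetical reconstruction, not predtuner's actual code; the dummy app and the model names "qos_fixed"/"cost_fixed" are made up:

```python
from typing import Dict, Optional, Tuple

# Made-up fixed-value models standing in for IQoSModel / ICostModel.
class _FixedQoSModel:
    def measure_qos(self, with_approxes):
        return 97.5

class _FixedCostModel:
    def measure_cost(self, with_approxes):
        return 0.7

class _DummyApp:
    models = {"qos_fixed": _FixedQoSModel(), "cost_fixed": _FixedCostModel()}

    def empirical_measure_qos_cost(self, with_approxes, is_test):
        return 95.0, 0.8  # made-up empirical measurement

def measure_qos_cost_sketch(
    app,
    with_approxes: Dict[str, str],
    is_test: bool,
    qos_model: Optional[str] = None,
    cost_model: Optional[str] = None,
) -> Tuple[float, float]:
    # Test-set measurement must be fully empirical (the docs say this
    # raises a ValueError).
    if is_test and (qos_model is not None or cost_model is not None):
        raise ValueError("models may not be used when is_test=True")
    # One empirical run only if at least one quantity lacks a model.
    if qos_model is None or cost_model is None:
        qos, cost = app.empirical_measure_qos_cost(with_approxes, is_test)
    if qos_model is not None:
        qos = app.models[qos_model].measure_qos(with_approxes)
    if cost_model is not None:
        cost = app.models[cost_model].measure_cost(with_approxes)
    return qos, cost

app = _DummyApp()
```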

class ApproxModeledTuner(app)[source]

Bases: Generic[predtuner.approxapp.T]

plot_configs(show_qos_loss=False, connect_best_points=False)[source]

Plots 1 to 3 QoS-vs-speedup scatter plots of configurations.

All kept configurations and all “best” configurations (before test-set filtering if any) are always plotted in the first subplot.

If there was a validation phase during tuning, the second subplot contains the “best” configurations plotted twice, with predicted and empirically measured QoS (on tune set) respectively.

If both validation and test-set filtering were used, the last subplot contains the “best” configurations with empirically measured tune-set and test-set QoS loss respectively.

Parameters
• show_qos_loss (bool) – If True, uses the loss of QoS (compared to the baseline) instead of the absolute QoS in the first 2 graphs. This does not apply to the third graph if it exists, which always uses QoS loss for ease of comparison.

• connect_best_points (bool) –

Return type

matplotlib.figure.Figure

tune(max_iter, qos_tuner_threshold, qos_keep_threshold=None, is_threshold_relative=False, take_best_n=None, test_configs=True, validate_configs=None, cost_model=None, qos_model=None)[source]

Runs a tuning session.

Parameters
• max_iter (int) – Number of iterations to use in tuning.

• qos_tuner_threshold (float) – The QoS threshold that the tuner should aim for. QoS is assumed to be a higher-better quantity. This should be slightly tighter than qos_keep_threshold to account for extra error when running on the test dataset.

• qos_keep_threshold (Optional[float]) – The QoS threshold above which configurations are kept. By default it is equal to qos_tuner_threshold.

• is_threshold_relative (bool) – If True, the actual thresholds are considered to be baseline_qos - given_threshold. This applies to qos_tuner_threshold and qos_keep_threshold.

• take_best_n (Optional[int]) – Take the best $$n$$ configurations after tuning. “Best” is defined as the configurations closest to the Pareto curve of the QoS-cost tradeoff space. If take_best_n is None, only the configurations strictly on the Pareto curve are taken.

• test_configs (bool) – If True, runs the configs on the test dataset, filters the taken configs by qos_keep_threshold, and fills the test_qos field of ValConfig.

• validate_configs (Optional[bool]) – If True, runs a validation step that empirically measures the QoS of configs, filters the taken configs by qos_keep_threshold, and fills the validated_qos field of ValConfig.

• cost_model (Optional[str]) – The cost model to use for this tuning session.

• qos_model (Optional[str]) – The QoS model to use for this tuning session. This and cost_model are relayed down the line to ModeledApp.measure_qos_cost.

Return type

List[ValConfig]

class ValConfig(qos, cost, knobs, test_qos=None, validated_qos=None)[source]

An approxapp.Config that also optionally stores the “validation QoS”.

Validation QoS is the empirically measured QoS in the “validation phase” at the end of tuning (see ApproxModeledTuner.tune).

Parameters
• qos – The maybe-predicted QoS of this config. (If tuning is empirical then this is empirical, not predicted, QoS.) This is in contrast to Config.qos, which is always empirically measured on tuning dataset.

• cost – The relative cost (time, energy, etc.) of this config compared to the baseline config. This is essentially $$1 / speedup$$.

• knobs – The op-knob mapping in this configuration.

• test_qos – The empirically measured QoS of this config on test mode.

• validated_qos – The empirically measured QoS of this config on tuning mode, in the validation phase. See ApproxModeledTuner.tune.
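As an illustration of the relative-threshold semantics and keep-filtering described for ApproxModeledTuner.tune (the baseline QoS, thresholds, and config list below are all made up):

```python
# With is_threshold_relative=True, the given thresholds are QoS-loss
# budgets; absolute thresholds are baseline_qos - given_threshold.
baseline_qos = 93.0          # hypothetical baseline QoS
qos_tuner_threshold = 2.0    # tuner aims within 2.0 QoS loss (tighter)
qos_keep_threshold = 3.0     # configs within 3.0 QoS loss are kept

abs_tuner_threshold = baseline_qos - qos_tuner_threshold  # 91.0
abs_keep_threshold = baseline_qos - qos_keep_threshold    # 90.0

# Keeping configs then amounts to a QoS filter (QoS is higher-better):
configs = [("cfg_a", 92.5), ("cfg_b", 90.2), ("cfg_c", 89.0)]  # (name, qos)
kept = [name for name, qos in configs if qos >= abs_keep_threshold]
```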

## Predictive Model Interface¶

class IQoSModel[source]

Abstract base class for models that provide QoS prediction.

abstract measure_qos(with_approxes)[source]

Predicts the QoS of the application under the given configuration.

Parameters

with_approxes (Dict[str, str]) – The configuration to predict QoS for.

Return type

float

abstract property name

Name of model.

class ICostModel[source]

Abstract base class for models that provide cost prediction.

abstract measure_cost(with_approxes)[source]

Predicts the cost of the application under the given configuration.

Parameters

with_approxes (Dict[str, str]) – The configuration to predict cost for.

Return type

float

abstract property name

Name of model.
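A toy QoS model following the interface above. IQoSModelSketch is a local stand-in for IQoSModel, and ConstantDropQoSModel with its numbers is hypothetical; a cost model would implement measure_cost analogously:

```python
from abc import ABC, abstractmethod
from typing import Dict

# Local stand-in mirroring the documented IQoSModel interface.
class IQoSModelSketch(ABC):
    @property
    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def measure_qos(self, with_approxes: Dict[str, str]) -> float: ...

class ConstantDropQoSModel(IQoSModelSketch):
    """Toy model: each approximated operator costs a fixed QoS drop."""

    def __init__(self, baseline_qos: float, drop_per_op: float):
        self.baseline_qos = baseline_qos
        self.drop_per_op = drop_per_op

    @property
    def name(self) -> str:
        return "qos_constant_drop"

    def measure_qos(self, with_approxes: Dict[str, str]) -> float:
        return self.baseline_qos - self.drop_per_op * len(with_approxes)
```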

## Predefined Predictive Models¶

Below is a list of cost and QoS models already defined:

class LinearCostModel(app, op_costs, knob_speedups)[source]

Weighted linear cost predictor based on cost of each operator.

This predictor computes a weighted sum over the cost of each operator and the speedup each knob provides on that operator.

Parameters
• app – The ModeledApp to predict cost for.

• op_costs – A mapping from operator name to its (baseline) cost.

• knob_speedups – A mapping from knob name to its (expected) speedup.
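One plausible reading of this weighted-sum computation (a hypothetical reconstruction, not LinearCostModel's actual code): scale each operator's baseline cost by the speedup of the knob applied to it, then divide by the baseline total to get a relative cost.

```python
from typing import Dict

def linear_cost_sketch(
    op_costs: Dict[str, float],       # operator -> baseline cost
    knob_speedups: Dict[str, float],  # knob name -> expected speedup
    config: Dict[str, str],           # operator -> chosen knob
) -> float:
    # Operators without a knob in `config` keep their baseline cost
    # (speedup 1.0); others are scaled down by the knob's speedup.
    baseline_total = sum(op_costs.values())
    approx_total = sum(
        cost / knob_speedups.get(config.get(op), 1.0)
        for op, cost in op_costs.items()
    )
    return approx_total / baseline_total  # relative cost, i.e. 1/speedup
```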

class QoSModelP1(app, tensor_output_getter, qos_metric, storage=None)[source]

QoS model P1 in ApproxTuner.

Parameters
• app – The ModeledApp to predict QoS for.

• tensor_output_getter

A function that can run the tensor-based application with a config and return a single tensor result.

Note that here we require the return value to be a PyTorch tensor.

• qos_metric – A function that computes a QoS level from the return value of tensor_output_getter.

• storage – A file in PyTorch format to store this model into if the file doesn’t exist, or to load the model from if it exists. If not given, the model will not be stored.

class QoSModelP2(app, storage=None)[source]

QoS model P2 in ApproxTuner.

Parameters
• app – The ModeledApp to predict QoS for.

• storage – A JSON file to store this model into if the file doesn’t exist, or to load the model from if it exists. If not given, the model will not be stored.