
Concept

An inquiry into the European Central Bank’s stance on algorithmic transparency reveals a foundational principle of financial stability. The core of the ECB’s position is a direct reflection of its mandate to supervise major financial institutions and maintain systemic integrity. The acceptability of any explainability technique is therefore measured by its ability to provide supervisors with a clear, unambiguous, and verifiable understanding of model-driven risk.

This perspective is rooted in the operational reality that an unexplainable model represents an unquantifiable liability. The institution’s approach is one of pragmatic caution, built upon decades of experience with statistical models in credit and market risk.

The central pillar of this approach is the distinction between inherently transparent models and complex, opaque systems often termed “black boxes.” The ECB, along with the European Banking Authority (EBA), has indicated that traditional statistical methods such as linear regression, logistic regression, and decision trees possess a high degree of transparency. Their mechanics are well understood, their outputs are directly traceable to specific inputs and weights, and their behavior can be audited with established procedures. These models form the baseline of acceptability because their explainability is intrinsic to their structure. They provide a clear and direct line of sight from input data to risk assessment, a critical requirement for any supervisory review.
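To make the idea of intrinsic traceability concrete, consider a generic logistic regression of the kind described above (a textbook formulation, not an ECB-prescribed one):

    \log\frac{p}{1-p} = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k

Each input x_i enters the risk score only through its fitted weight \beta_i, so the contribution \beta_i x_i of every variable to a given assessment can be read off, documented, and audited directly.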


The Architecture of Explainability

Explainable AI (XAI) becomes a subject of supervisory interest precisely when institutions move beyond these traditional models into more complex machine learning frameworks like deep neural networks or gradient boosted ensembles. For these systems, “acceptability” is a function of how effectively an institution can overlay them with techniques that restore the transparency lost to complexity. The goal of XAI in this context is to translate the model’s internal logic into human-understandable terms.

This translation must be robust enough to satisfy rigorous validation and audit processes. It involves demonstrating not just what the model predicted, but how it arrived at that prediction, what drivers were most influential, and how it is likely to behave under various stress scenarios.

A model’s acceptance is contingent on the institution’s ability to articulate its function and associated risks with clarity.

This requirement for clarity gives rise to two primary forms of transparency that are relevant to the ECB’s supervisory framework. The first is model transparency, which refers to the intrinsic intelligibility of the model’s inner workings. The second is process transparency, which involves the comprehensive documentation and governance surrounding the model’s entire lifecycle, from data sourcing and feature engineering to validation, deployment, and ongoing monitoring.

An acceptable framework must excel in both dimensions. A bank must be able to demonstrate not only that it has a tool like SHAP or LIME to interpret a prediction, but that it has a robust governance process for using that tool, interpreting its outputs, and acting on the insights generated.


What Is the Threshold for Model Complexity?

The threshold at which a model transitions from being inherently transparent to requiring sophisticated XAI methods is a matter of supervisory judgment. It is guided by the principle of proportionality. A model used for a non-critical internal process will face a lower scrutiny threshold than a complex AI model used for Internal Ratings-Based (IRB) approaches to credit risk, which directly impacts the calculation of a bank’s regulatory capital.

The more material the model’s output, the higher the expectation for its explainability. The ECB’s exploration of AI for its own internal processes, such as data classification and the drafting of initial analytical summaries, reflects this same principle of careful, risk-based adoption.

Ultimately, the ECB’s perspective is architecturally focused. It views a bank’s suite of models as a critical component of its operational infrastructure. Just as a physical structure must have a sound foundation and clear blueprints, a model-driven risk function must be built on a foundation of sound data, clear logic, and verifiable processes.

The specific XAI technique employed is a component within this larger architecture. Its acceptability is determined by its contribution to the overall structural integrity of the bank’s risk management framework.


Strategy

The European Central Bank’s strategy regarding artificial intelligence and machine learning in banking supervision is one of structured adoption and rigorous oversight. This strategy is built to balance the potential for innovation and efficiency gains against the significant risks posed by opaque, complex, and poorly governed models. The core tenet of this strategy is that the onus of proof lies entirely with the financial institution.

The bank must be able to demonstrate to the ECB’s supervisory arm, the Single Supervisory Mechanism (SSM), that its models are sound, stable, and, above all, understandable. This mandate for understandability is where explainability techniques become a central pillar of a bank’s AI strategy.

The ECB’s strategic approach can be understood as having two primary vectors. The first is its internal application of AI to enhance its own analytical and supervisory capabilities, referred to as SupTech (Supervisory Technology). The second, and more critical for regulated entities, is its external posture as a supervisor of banks that use AI and ML models.

For its internal use, the ECB is exploring AI for tasks like macroeconomic forecasting and data management, serving as a proving ground for the technology. For its supervisory role, the strategy is to enforce a high standard of model risk management, where explainability is a non-negotiable component for all but the simplest models.


A Risk-Based Supervisory Framework

The ECB’s supervisory strategy is explicitly risk-based. The level of scrutiny applied to a bank’s model is directly proportional to its materiality and complexity. A simple, transparent model used for a low-impact task will receive less attention than a highly complex neural network used to determine creditworthiness for a significant portion of a bank’s loan book.

This principle of proportionality means that there is no single list of “approved” XAI techniques. Instead, banks are expected to develop an internal model risk management framework that is commensurate with the sophistication of the models they deploy.

The ECB’s strategic objective is to ensure that technological innovation does not outpace institutional risk control.

This framework is expected to cover the entire model lifecycle, from inception to retirement. A key strategic consideration for banks is the recognition that simpler, more transparent models are viewed by supervisors as inherently less risky. Therefore, a primary strategic decision for any financial institution is to determine whether the performance uplift from a complex “black-box” model justifies the significant overhead of implementing and maintaining the necessary XAI frameworks and governance structures to satisfy supervisors.


Comparing Model Architectures

The strategic choice between a simple, transparent model and a complex one requiring XAI is a trade-off between performance and transparency. The following table outlines the key characteristics that define this strategic decision from a supervisory perspective.

Characteristic | Inherently Transparent Models (e.g. Logistic Regression) | Complex “Black-Box” Models (e.g. Deep Neural Networks)
Interpretability | High. Model logic is directly observable through coefficients or simple rules. | Low. Internal logic is non-linear and involves millions of parameters, requiring post-hoc explanation.
Supervisory Risk Profile | Low. Considered transparent and less risky by supervisors. | High. Subject to intense scrutiny due to opacity and potential for unpredictable behavior.
Explainability Requirement | Fulfilled by the model’s intrinsic structure and standard documentation. | Requires dedicated XAI techniques (e.g. SHAP, LIME) and extensive governance.
Validation Overhead | Standard, well-established validation procedures. | Complex validation, including testing the stability and logic of the XAI layer itself.


The Push for Harmonization

A crucial element of the evolving European strategy is the call for greater harmonization of supervisory expectations regarding AI. Supervisory authorities at both the national and EU level recognize that divergence in standards could lead to regulatory arbitrage and systemic risk. There is a clear strategic direction towards establishing shared principles for explainable AI.

For banks operating across the Eurozone, this means that an AI strategy must be robust and flexible enough to adapt to these emerging, harmonized standards. The strategy should anticipate a future where a common understanding and a common set of expectations for model risk management and explainability are enforced across the Single Supervisory Mechanism.


Execution

The execution of an AI strategy that aligns with the expectations of the European Central Bank is a matter of operational discipline and technical precision. It requires building a robust internal system for model risk management that treats explainability as a core functional requirement. For financial institutions under the ECB’s supervision, this means moving beyond theoretical discussions of AI ethics and implementing concrete, auditable procedures for every stage of a model’s life. The execution framework must be capable of demonstrating to supervisors not only that a model works, but that the institution understands precisely how and why it works.

This execution is grounded in a clear hierarchy of model acceptability. The most straightforward path to supervisory acceptance is the use of models that are inherently transparent. When an institution chooses to deploy more complex models, the burden of execution shifts to the rigorous implementation of XAI techniques and the creation of a comprehensive governance structure to manage the associated risks.


Inherently Transparent Models: The Baseline for Acceptance

The foundation of an ECB-compliant model inventory rests on traditional statistical techniques whose transparency is a structural feature. The execution here is focused on meticulous documentation and adherence to established best practices.

  • Logistic and Linear Regression. These models are considered highly interpretable because the relationship between each input variable and the output is explicitly defined by a coefficient. Execution involves documenting the rationale for variable selection, the statistical significance of coefficients, and the results of diagnostic tests (e.g. for multicollinearity). The explanation for a decision is a direct readout of the model’s equation.
  • Decision Trees. A single decision tree is transparent because its logic can be visualized as a series of simple, hierarchical if-then-else rules. Execution requires documenting the criteria for splitting nodes and the overall structure of the tree. The path from the root node to a terminal leaf provides a clear, human-readable explanation for any given prediction. A minimal sketch illustrating both model types follows this list.
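The sketch below illustrates why supervisors treat these structures as self-explanatory. It is a minimal, self-contained example on synthetic data (scikit-learn is assumed; the dataset and feature names are hypothetical stand-ins, not a real credit portfolio):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical stand-in for a small credit dataset.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# Logistic regression: each coefficient is a direct, documentable
# statement of how one input shifts the log-odds of the outcome.
logit = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, logit.coef_[0]):
    print(f"{name}: {coef:+.3f}")
print(f"intercept: {logit.intercept_[0]:+.3f}")

# Decision tree: the fitted if-then-else rules can be printed verbatim,
# so the path to any leaf is itself the explanation of the prediction.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```

The point of the exercise is that nothing post-hoc is required: the printed coefficients and rules are the model, which is why documentation of this baseline is largely a matter of standard statistical practice.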

How Are Complex Models Made Acceptable in Practice?

When the performance requirements of a use case demand a complex “black-box” model, the execution plan becomes significantly more demanding. The goal is to construct a layer of interpretation around the model that is robust enough to meet supervisory scrutiny. This involves deploying specific XAI tools and embedding them within a rigorous validation process.

Effective execution translates a model’s complex calculations into a clear and stable risk narrative.

The choice of XAI technique is secondary to the quality of its implementation. The institution must demonstrate that the explanations generated are faithful to the underlying model, stable over time, and understandable to relevant stakeholders, including auditors and supervisors.

  1. Model-Agnostic Techniques. These methods are favored for their flexibility, as they can be applied to any underlying model architecture. Their execution involves a two-pronged approach: first the technical implementation, and second the procedural framework for their use. Minimal sketches of both techniques follow this list.
    • SHAP (SHapley Additive exPlanations). SHAP provides a sophisticated, game-theory-based approach to assigning an importance value to each feature for an individual prediction. Execution involves generating SHAP values for model decisions, particularly for key segments or for outcomes that are being reviewed by human operators or auditors. The validation process must test the stability of these SHAP values and ensure that they provide a coherent picture of the model’s behavior.
    • LIME (Local Interpretable Model-agnostic Explanations). LIME works by creating a simple, interpretable model (such as a linear regression) that approximates the behavior of the complex model in the local vicinity of a single prediction. Execution requires defining the appropriate scope for these local explanations and ensuring that the simplified models are faithful approximations of the “black-box” system’s decision-making in that specific instance.
  2. The Governance and Documentation Imperative. The most critical execution component is the governance framework. No XAI tool can, on its own, make a model acceptable. The institution must build and maintain a comprehensive documentation package for each model. This is the central artifact that is presented to supervisors.
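The first sketch shows SHAP value generation with the open-source shap library for a gradient boosted classifier. It is illustrative only: the data is synthetic and the feature names are hypothetical, not a prescribed supervisory workflow.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical stand-in for a credit-decision dataset.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain a single decision

# Each value is the feature's additive contribution to this prediction,
# relative to the explainer's baseline (expected) output.
print("baseline:", explainer.expected_value)
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.4f}")
```

In a supervisory context, the one-off numbers matter less than the process around them: re-running such attributions on fixed reference cases over time is one way to evidence the stability that the validation process must test.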
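A comparable sketch for LIME, using the open-source lime package, again on synthetic data and with hypothetical class names:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# LIME perturbs the instance and fits a simple local surrogate model.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["non-default", "default"],  # hypothetical labels
    mode="classification",
    random_state=0,
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)

# Each tuple is a human-readable rule and its weight in the local model.
for rule, weight in exp.as_list():
    print(f"{rule}: {weight:+.4f}")
```

The fidelity check the text calls for amounts to verifying that this local surrogate actually tracks the black-box model’s outputs in the neighbourhood of the instance being explained.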

The Supervisory Review and Validation Process

The final stage of execution is the interaction with the supervisory authority. The ECB, through on-site inspections and ongoing monitoring, will rigorously assess the bank’s model risk management framework. The ability to execute on the principles of transparency and explainability will be tested directly. The following table outlines the core pillars of a validation framework that would be presented to supervisors.

Pillar of Validation | Key Execution Tasks
Data Governance | Document data lineage, quality controls, and any transformations. Demonstrate the absence of unintended bias in training data.
Model Development | Provide a detailed rationale for choosing a complex model over a simpler alternative. Document the model’s architecture and all hyperparameters.
Explainability Framework | Document the choice of XAI technique(s). Provide evidence that the explanations are stable, accurate, and understandable. Define the process for using XAI outputs in decision-making and review.
Ongoing Monitoring | Implement a system for monitoring model performance and drift. Establish triggers for model recalibration or review, including checks on the stability of XAI-generated explanations.
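To ground the Ongoing Monitoring pillar, the following is a minimal sketch of one widely used drift statistic, the population stability index (PSI). The thresholds in the comments are common industry rules of thumb, not ECB-prescribed values, and the data is synthetic.

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """PSI between a validation-time sample and a recent production
    sample of one model input (or of the model's output scores)."""
    # Bin edges come from the baseline distribution's quantiles.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values
    b = np.histogram(baseline, bins=edges)[0] / len(baseline)
    r = np.histogram(recent, bins=edges)[0] / len(recent)
    b, r = np.clip(b, 1e-6, None), np.clip(r, 1e-6, None)  # avoid log(0)
    return float(np.sum((r - b) * np.log(r / b)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores at validation time
recent = rng.normal(0.3, 1.1, 10_000)    # drifted production scores
# Rule of thumb: < 0.10 stable, 0.10-0.25 monitor, > 0.25 investigate.
print(f"PSI = {population_stability_index(baseline, recent):.3f}")
```

The same index can be applied to XAI outputs themselves, for example to the distribution of SHAP values for a given feature, as one way to operationalise the explanation-stability trigger in the table above.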


References

  • Moufakkir, Myriam. “Careful embrace: AI and the ECB.” ECB Blog, European Central Bank, 6 Oct. 2023.
  • European Banking Federation. “Understanding Credit Scoring: Techniques and Distinction from Artificial Intelligence.” EBF Position Paper, 2023.
  • Freier, Maximilian. “AI applications and governance at the ECB.” Online workshop “AI in Central Banking,” European Central Bank, 22 Apr. 2024.
  • De Nederlandsche Bank. “Perspectives on Explainable AI in the Financial Sector.” DNB Report, 2021.
  • Puchakayala, P. R. A., et al. “Explainable AI and Interpretable Machine Learning in Financial Industry Banking.” European Journal of Advanced Engineering and Technology, vol. 10, no. 3, 2023, pp. 82-92.
  • Lenza, M., I. Moutachaker, and J. Paredes. “Density Forecasts of Inflation: A Quantile Regression Forest Approach.” ECB Working Paper No. 2830, European Central Bank, 2023.
  • McCaul, Elizabeth. Speech at the Supervision Innovators Conference 2023. European Central Bank, 3 Oct. 2023.

Reflection


Calibrating the Architecture of Trust

The preceding sections detail a supervisory system grounded in verifiable evidence and procedural integrity. The central challenge this presents is not merely technical. It is architectural.

How does an institution construct an internal framework where innovation can proceed without compromising the structural soundness of its risk controls? The specific tools for explaining a model’s output are components, but the larger system of governance, validation, and human oversight is the essential architecture.

This prompts a deeper consideration of an organization’s internal capabilities. Does the existing model validation function possess the skillset to critically assess the outputs of a SHAP analysis, or to challenge the fidelity of a LIME approximation? Is there a clear protocol for when a model’s decision, even if optimal, must be overridden because its underlying logic is counterintuitive or unstable? Answering these questions requires a shift in perspective, viewing explainability as a fundamental component of operational resilience, integral to the system’s design from the very beginning.


Glossary

European Central Bank

Meaning: The European Central Bank functions as the central monetary authority for the Eurozone, tasked with maintaining price stability within its constituent economies.

Inherently Transparent Models

Meaning: Models, such as linear and logistic regressions or single decision trees, whose decision logic is directly observable from their structure, so that outputs are traceable to specific inputs and weights.

Decision Trees

Meaning: Decision Trees represent a non-parametric supervised learning method employed for classification and regression tasks, constructing a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.

Deep Neural Networks

Meaning: Deep Neural Networks are multi-layered computational models designed to learn complex patterns and relationships from vast datasets, enabling sophisticated function approximation and predictive analytics.

Machine Learning

Meaning: A class of computational methods that learn predictive patterns from data rather than following explicitly programmed rules, spanning both transparent statistical models and complex architectures such as gradient boosted ensembles and neural networks.

Ongoing Monitoring

Meaning: The continuous tracking of a deployed model's performance, input data, and explanation stability, with defined triggers for recalibration or review.

Model Transparency

Meaning: Model Transparency refers to the inherent capacity of an algorithmic system to reveal its internal logic, input-output relationships, and the specific rationale underpinning its generated decisions or predictions.

LIME

Meaning: LIME, or Local Interpretable Model-agnostic Explanations, refers to a technique designed to explain the predictions of any machine learning model by approximating its behavior locally around a specific instance with a simpler, interpretable model.

SHAP

Meaning: SHAP, an acronym for SHapley Additive exPlanations, quantifies the contribution of each feature to a machine learning model's individual prediction.

Internal Ratings-Based

Meaning: Internal Ratings-Based (IRB) refers to a regulatory framework, primarily under the Basel Accords, which permits financial institutions to utilize their proprietary internal credit risk models to calculate regulatory capital requirements for credit risk exposures.

Risk Management Framework

Meaning: A Risk Management Framework constitutes a structured methodology for identifying, assessing, mitigating, monitoring, and reporting risks across an organization's operational landscape, particularly concerning financial exposures and technological vulnerabilities.

Single Supervisory Mechanism

Meaning: The Single Supervisory Mechanism (SSM) represents a centralized, harmonized framework for prudential supervision of financial institutions within a designated economic area, established to ensure the safety and soundness of the banking system and to foster financial stability.

SupTech

Meaning: SupTech, or Supervisory Technology, designates the application of advanced technological solutions, including artificial intelligence, machine learning, and distributed ledger technology, to enhance the capabilities of regulatory bodies and financial institutions in their oversight and compliance functions.

Model Risk Management

Meaning: Model Risk Management involves the systematic identification, measurement, monitoring, and mitigation of risks arising from the use of quantitative models in financial decision-making.

Model Risk

Meaning: Model Risk refers to the potential for financial loss, incorrect valuations, or suboptimal business decisions arising from the use of quantitative models.

Explainable AI

Meaning: Explainable AI (XAI) refers to methodologies and techniques that render the decision-making processes and internal workings of artificial intelligence models comprehensible to human users.

Risk Management

Meaning: Risk Management is the systematic process of identifying, assessing, and mitigating potential financial exposures and operational vulnerabilities within an institutional trading framework.