The Inaccessibility of the Source: Toward a General Interface Epistemology of Science
ORCID: 0009-0002-7724-5762
24 March 2026
Original language of the article: English
Abstract
All scientific disciplines operate without direct epistemic access to their presumed sources. What is empirically given is not the source itself, but structured traces produced through interface-constrained transformations of interaction.
We show that the standard assumption of source accessibility constitutes an implicit epistemic leap, grounded in the unjustified treatment of the observational process as effectively invertible. In general, the interface mapping from interactions to registrations is non-invertible, and therefore observations determine not a unique source, but an equivalence class of admissible interactions.
As a consequence, objects, laws, invariants, and statistical constructs cannot be interpreted as properties of an underlying reality. They must be understood as operator-dependent structures defined within the space of registrations.
We demonstrate that stable knowledge arises from identifying invariant structures of traces rather than reconstructing sources. This establishes the principle of source inaccessibility as a general condition of scientific knowledge and motivates a shift toward a General Interface Epistemology of Science.
Keywords: epistemology of science; interface; observation; measurement; non-invertibility; registrations; admissible regions; invariants; state-space models; statistical ontology.
Introduction
Scientific knowledge is traditionally framed as knowledge about objects, systems, or underlying sources that generate observable phenomena. Measurement is interpreted as a process that provides access to these sources, possibly subject to noise or distortion.
This interpretation relies on a fundamental but rarely examined assumption: that the source is, at least in principle, accessible through observation.
We argue that this assumption is not satisfied.
What is empirically given is not the source itself, but the outcome of a transformation governed by an interface. Observation does not provide access to an underlying entity; it produces a registration.
In operational terms, every empirical datum arises from the joint operation of:
an interface that constrains admissible forms of observation,
an interaction that generates observable effects,
a measurement procedure that fixes and stabilizes the result.
The result of this process is not a partial view of the source, but a structured trace within a space of registrations.
Despite this, scientific reasoning systematically performs an implicit transition from registrations to sources, treating constructed representations as if they were properties of underlying reality. This transition is not derived from observation; it is an epistemic assumption embedded in the standard interpretative framework.
The central claim of this paper is therefore:
There is no direct epistemic access to the source in any scientific discipline. All empirical knowledge is derived from structured traces produced within specific interfaces of registration.
This claim is not merely philosophical. It follows from the structural properties of the observational process and leads to a systematic reinterpretation of objects, laws, invariants, and statistical constructs.
The purpose of this work is to formalize this shift and to establish the inaccessibility of the source as a general condition of scientific knowledge.
A formal counterpart to this framework is developed in [1], where admissible regions and closure are introduced as primary structures in an interval-based formulation of dynamics. The present work establishes the epistemic necessity of these constructions.
The Standard Model of Scientific Access
The implicit structure of scientific reasoning can be represented as:
\[\begin{equation} \text{Source} \rightarrow \text{Observation} \rightarrow \text{Knowledge} \end{equation}\]
This scheme is typically interpreted as a direct or progressively refined access to an underlying reality.
However, in its operational form, this chain conceals a composition of transformations: \[\begin{equation} \text{Source} \xrightarrow{\mathcal{I}} R \xrightarrow{\mathcal{A}} \mathcal{K} \end{equation}\]
where:
\(\mathcal{I}\) is an interface operator mapping interactions to registrations,
\(R\) is the space of registrations,
\(\mathcal{A}\) is an analysis operator producing knowledge representations,
\(\mathcal{K}\) denotes the space of constructed descriptions (models, laws, parameters).
In this formulation, observation is not access but transformation, and knowledge is not extraction but construction.
Implicit Assumptions
Within the standard model, the following assumptions are implicitly adopted:
the source exists as an independent and well-defined entity,
the interface \(\mathcal{I}\) is transparent or approximately invertible,
observation provides partial but reliable access to the source,
noise is an additive perturbation independent of the signal,
analysis \(\mathcal{A}\) can recover or approximate the true state of the source.
These assumptions are rarely stated explicitly, yet they determine the interpretation of all subsequent results.
Ontological Commitments
From these assumptions, a set of ontological commitments follows:
Ontologization of point values: numerical outputs are interpreted as properties of the source rather than as elements of \(R\) or \(\mathcal{K}\);
Separation of signal and noise: observations are decomposed into a true component and a perturbation, implying that part of the trace is epistemically irrelevant;
Unique causal identification: stable observations are assumed to correspond to a unique underlying source or mechanism.
Hidden Non-Invertibility
The key structural assumption is the effective invertibility of the interface: \[\begin{equation} \mathcal{I}^{-1} : R \rightarrow \text{Source} \end{equation}\]
Even when not explicitly stated, scientific reasoning proceeds as if such an inverse exists, at least approximately.
However, as shown in Section [sec:formal_preliminaries], the interface operator is, in general, non-invertible. Therefore: \[\begin{equation} \exists r \in R \quad \text{such that} \quad |\mathcal{I}^{-1}(r)| > 1 \end{equation}\]
Multiple distinct interactions correspond to the same registration.
This invalidates:
unique reconstruction of the source,
unambiguous attribution of observed structures,
the interpretation of measurement as partial revelation of reality.
Epistemic Leap
Despite the non-invertibility of \(\mathcal{I}\), the standard model performs an implicit transition:
\[\begin{equation} R \longrightarrow \text{Source} \end{equation}\]
This transition is not derived from observation. It is an epistemic leap. This leap is not a logical consequence of the data, but a structural feature of the interpretative framework. In practice:
models are fitted to registrations,
parameters are estimated,
and the resulting constructs are reinterpreted as properties of the source.
Thus, knowledge is not inferred from the source, but projected onto it.
Consequences
The combination of non-invertibility and epistemic projection leads to systematic distortions:
operator-dependent constructs are treated as intrinsic properties,
variability is misinterpreted as noise,
different generative configurations are collapsed into a single causal narrative.
Conclusion. The standard model of scientific access relies on an implicit assumption of invertibility that is not justified. As a result, it systematically transforms structured registrations into ontological claims about an inaccessible source.
We argue that all three core assumptions—point ontologization, signal/noise separation, and unique causal identification—are unwarranted.
Non-Identifiability of the Source
Given a set of observations \(O \subset R\), standard reasoning attempts to reconstruct a unique source \(S\) such that:
\[\begin{equation} S \mapsto O \end{equation}\]
This formulation assumes that observations can be traced back to a well-defined origin.
Operator Formulation
Within the interface framework, observations arise through the interface operator: \[\begin{equation} \mathcal{I} : \Omega \rightarrow R \end{equation}\]
Thus, a set of observations \(O \subset R\) corresponds to a set of interactions: \[\begin{equation} \mathcal{I}^{-1}(O) = \{ \omega \in \Omega \mid \mathcal{I}(\omega) \in O \} \end{equation}\]
In general, this set is not a singleton.
Many-to-One Mapping
For a given observation \(r \in R\), there exist multiple interactions: \[\begin{equation} \exists \omega_1 \neq \omega_2 \in \Omega \quad \text{such that} \quad \mathcal{I}(\omega_1) = \mathcal{I}(\omega_2) = r \end{equation}\]
Equivalently: \[\begin{equation} |\mathcal{I}^{-1}(r)| > 1 \end{equation}\]
Thus, the mapping from interactions to registrations is many-to-one.
Equivalence Classes of Interactions
Each registration \(r \in R\) induces an equivalence class: \[\begin{equation} [\omega]_r := \{ \tilde{\omega} \in \Omega \mid \mathcal{I}(\tilde{\omega}) = r \} \end{equation}\]
All elements of \([\omega]_r\) are indistinguishable at the level of observation.
Therefore, observation does not identify a unique interaction, but only an equivalence class.
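The many-to-one structure of \(\mathcal{I}\) can be made concrete with a minimal sketch, in which the interface is realized as quantization. The quantization step and the finite candidate grid are illustrative assumptions, not part of the formalism.

```python
# A toy interface operator, realized as quantization: distinct interactions
# map to the same registration, so I is many-to-one. The quantization step
# and the finite candidate grid are illustrative assumptions.
def interface(omega: float, step: float = 1.0) -> float:
    """I: interaction -> registration, by rounding to the nearest step."""
    return round(omega / step) * step

# two distinct interactions produce the same registration
omega1, omega2 = 0.2, 0.4
assert omega1 != omega2
assert interface(omega1) == interface(omega2) == 0.0

def equivalence_class(r: float, candidates, step: float = 1.0):
    """Approximate [omega]_r over a finite candidate set of interactions."""
    return [w for w in candidates if interface(w, step) == r]

grid = [i / 10 for i in range(-20, 21)]   # candidate interactions in [-2, 2]
cls = equivalence_class(0.0, grid)
assert len(cls) > 1   # observation fixes a class, not a unique interaction
```

The final assertion is the point: from the registration alone, every element of the class is equally admissible.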
Non-Identifiability
Proposition 1 (Non-Identifiability of the Source). Given \(r \in R\), there is in general no unique \(\omega \in \Omega\) such that: \[\begin{equation} \mathcal{I}(\omega) = r \end{equation}\]
Implication.
the source is not identifiable from observation,
reconstruction of a unique origin is not possible,
any inferred source is model-dependent.
From Equivalence Classes to Admissible Regions
The non-identifiability result shows that each registration \(r \in R\) corresponds not to a single interaction, but to an equivalence class: \[\begin{equation} [\omega]_r := \{ \tilde{\omega} \in \Omega \mid \mathcal{I}(\tilde{\omega}) = r \} \end{equation}\]
However, scientific analysis does not operate in \(\Omega\), but in the space of registrations \(R\).
We therefore introduce the corresponding structure in \(R\).
Induced Regions in Registration Space
Let \(\sim_\tau\) be the indistinguishability relation on \(R\) defined earlier.
Then each registration \(r \in R\) induces a region: \[\begin{equation} C_r := \{ \tilde{r} \in R \mid \tilde{r} \sim_\tau r \} \end{equation}\]
This region represents all registrations that are indistinguishable from \(r\) under the given interface and measurement resolution.
Mapping Between Interaction Classes and Regions
The interface operator induces a correspondence: \[\begin{equation} [\omega]_r \;\longrightarrow\; C_r \end{equation}\]
This mapping has the following interpretation:
\([\omega]_r\) captures indistinguishable interactions in \(\Omega\),
\(C_r\) captures indistinguishable registrations in \(R\),
both represent equivalence classes in their respective domains.
Thus, instead of attempting to identify elements of \(\Omega\), scientific reasoning can operate entirely within \(R\) using regions \(C_r\).
Admissible Regions
More generally, we define an admissible region: \[\begin{equation} C \subset R \end{equation}\]
as a set of registrations that:
are mutually compatible under indistinguishability,
are stable under repeated observation,
form a coherent structure under admissible transformations.
In this framework:
\(C\) replaces the notion of a point state,
\(C\) represents a class of possible realizations,
variability within \(C\) is intrinsic, not noise.
Dual Interpretation
The pair: \[\begin{equation} [\omega]_r \quad \text{and} \quad C_r \end{equation}\]
provides a dual description of non-identifiability:
in \(\Omega\): multiple interactions correspond to one registration,
in \(R\): one registration expands to a region of admissible realizations.
Thus, non-identifiability in the source domain induces regional structure in the registration domain.
Epistemic Shift
This leads to a fundamental shift:
from identifying sources to characterizing regions,
from point values to structured sets,
from reconstruction to stability analysis.
Conclusion. Scientific objects should not be identified with elements of \(\Omega\), but with admissible regions \(C \subset R\) that represent stable classes of registrations under the interface constraints.
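A minimal realization of the induced region \(C_r\) can be sketched by taking the indistinguishability relation \(\sim_\tau\) to be a finite tolerance \(\tau\) on scalar registrations. The text leaves \(\sim_\tau\) abstract; this concretization is an illustrative assumption.

```python
# A minimal realization of the region C_r, assuming the indistinguishability
# relation ~_tau is a finite tolerance tau on scalar registrations (the text
# leaves ~_tau abstract; this concretization is illustrative).
def indistinguishable(r1: float, r2: float, tau: float) -> bool:
    """r1 ~_tau r2: the interface resolution cannot separate the two."""
    return abs(r1 - r2) <= tau

def region(r: float, registrations, tau: float):
    """C_r = { r' in R | r' ~_tau r }, restricted to a finite observed sample."""
    return {x for x in registrations if indistinguishable(x, r, tau)}

R_sample = [0.0, 0.05, 0.12, 0.5, 0.9, 1.0]
C = region(0.1, R_sample, tau=0.1)

# several registrations are mutually compatible at this resolution:
# the region, not a point value, is the stable object
assert C == {0.0, 0.05, 0.12}
```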
Underdetermination of Causality
Let \(O \subset R\) be a stable set of observations.
Then: \[\begin{equation} \exists \Omega_1, \Omega_2 \subset \Omega, \quad \Omega_1 \neq \Omega_2, \quad \text{such that} \quad \mathcal{I}(\Omega_1) = \mathcal{I}(\Omega_2) = O \end{equation}\]
Thus:
multiple distinct generative configurations correspond to the same observations,
causal attribution is underdetermined,
stability of \(O\) does not imply uniqueness of its origin.
Epistemic Consequence
The standard inference: \[\begin{equation} O \longrightarrow S \end{equation}\] is not justified by the structure of observation.
Instead, observation determines only: \[\begin{equation} O \longrightarrow \mathcal{I}^{-1}(O) \end{equation}\]
that is, a class of admissible interactions.
Conclusion
Observation does not identify a source. It identifies an equivalence class of interactions compatible with the registration.
Therefore:
the source cannot be uniquely reconstructed,
causal explanations are not determined by observation alone,
scientific inference operates over equivalence classes, not individual origins.
In particular, the notion of a uniquely identifiable source is not an empirical result, but an interpretative assumption imposed on equivalence classes of interactions.
Interface-Constrained Observation
We introduce a minimal reformulation of the observational process.
\[\begin{equation} \text{Source} \;\not\rightarrow\; \text{Observation} \end{equation}\]
This expresses not a practical limitation, but a structural impossibility: there is no direct mapping from the source to observation within the empirical layer.
Instead, observation arises as the result of a mediated transformation:
\[\begin{equation} \text{Interface} + \text{Interaction} \rightarrow \text{Registration} \end{equation}\]
Formally, this is captured by the interface operator introduced earlier: \[\begin{equation} \mathcal{I} : \Omega \rightarrow R \end{equation}\]
where:
\(\Omega\) denotes the space of interactions (not directly observable),
\(R\) is the space of registrations,
\(\mathcal{I}\) defines the transformation from interaction to observable trace.
Observation as Transformation
Observation is not access but transformation: \[\begin{equation} r = \mathcal{I}(\omega), \quad r \in R, \ \omega \in \Omega \end{equation}\]
Thus:
observation does not reveal the source,
it produces a structured trace constrained by the interface,
the observed object is the result of \(\mathcal{I}\), not a pre-existing entity.
In particular, different interactions may produce identical registrations: \[\begin{equation} \mathcal{I}(\omega_1) = \mathcal{I}(\omega_2), \quad \omega_1 \neq \omega_2 \end{equation}\]
which eliminates the possibility of unique reconstruction.
Interface Dependence
The interface determines:
the admissible form of registration,
the resolution and granularity of observation,
the structure of variability within \(R\).
Changing the interface changes the observable space: \[\begin{equation} \mathcal{I}_1(\Omega) \neq \mathcal{I}_2(\Omega) \end{equation}\]
Thus, observation is not invariant under change of interface.
Measurement as Fixation
A measurement procedure is not merely a passive recording but a fixation operation that selects and stabilizes a registration within \(R\).
Let \(\mathcal{F}\) denote a fixation operator. Then: \[\begin{equation} r = \mathcal{F}(\mathcal{I}(\omega)) \end{equation}\]
The role of \(\mathcal{F}\) includes:
discretization,
thresholding,
temporal and spatial aggregation,
encoding into a representable form.
Thus, what is observed is not \(\mathcal{I}(\omega)\) directly, but a stabilized form of it.
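The pipeline \(r = \mathcal{F}(\mathcal{I}(\omega))\) can be sketched as follows. Both operators here are illustrative stand-ins (an assumed linear response for \(\mathcal{I}\); clipping and quantization for \(\mathcal{F}\)), not the text's specific constructions.

```python
# Sketch of the pipeline r = F(I(omega)): an interface transformation followed
# by a fixation step (thresholding plus discretization). Both operators are
# illustrative stand-ins, not the text's specific constructions.
def interface(omega):
    """I: interaction -> continuous trace (an assumed linear response)."""
    return [0.5 * x for x in omega]

def fixate(trace, levels: int = 4, lo: float = 0.0, hi: float = 1.0):
    """F: stabilize a trace by clipping to [lo, hi] and quantizing to `levels`."""
    out = []
    for x in trace:
        x = min(max(x, lo), hi)                             # thresholding
        out.append(round(x * (levels - 1)) / (levels - 1))  # discretization
    return out

omega = [0.1, 0.9, 2.4, -0.3]
r = fixate(interface(omega))

# what is registered is the stabilized form F(I(omega)), not I(omega) itself
assert r != interface(omega)
```

Even in this toy setting, fixation discards distinctions: any trace value above `hi` or below `lo` registers identically after clipping.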
Empirical Layer
The empirical domain is therefore fully contained in \(R\): \[\begin{equation} \text{Empirical domain} = R \end{equation}\]
All data, measurements, and observations belong to this space.
The source does not belong to the empirical layer and cannot be accessed or reconstructed uniquely from it.
Consequences
This leads to the following consequences:
observation is inherently interface-dependent,
there is no direct epistemic access to the source,
the observable object is a constructed trace,
invariants must be defined within \(R\), not in the source domain.
In particular, the notion of an observed object must be redefined as a stable class of registrations, rather than as a directly accessible entity.
Conclusion. Observation is a transformation within the space of registrations. The source is never part of the empirical layer, and any reference to it arises only through model-dependent reconstruction.
Invariants of Traces
If the source is not accessible, then scientific knowledge must rely on something else.
We propose:
Scientific invariants are not properties of sources, but stable structures of traces.
Formally, let \(R\) be the space of registrations. An invariant is a structure \(I \subset R\) such that:
it is stable under variations of conditions,
it is reproducible across interfaces,
it remains invariant under admissible transformations.
Crucially:
invariants are not point values,
they are regions, patterns, or structural constraints.
Reinterpretation of Noise
In the standard framework, observation is represented as:
\[\begin{equation} \text{Observation} = \text{Signal} + \text{Noise} \end{equation}\]
This formulation assumes:
the existence of a separable signal component,
the independence of noise from the signal,
the possibility of recovering the signal by filtering.
We show that these assumptions are not justified.
Observation as Interface Output
Let \(\mathcal{I}\) be an interface operator: \[\begin{equation} \mathcal{I} : \Omega \rightarrow R \end{equation}\]
An observation is given by: \[\begin{equation} r = \mathcal{I}(\omega) \end{equation}\]
where \(\omega \in \Omega\) is an interaction configuration.
There is no decomposition of \(r\) into independent components prior to the action of \(\mathcal{I}\).
Failure of Additive Decomposition
Assume that \(r\) can be decomposed as: \[\begin{equation} r = s + n \end{equation}\]
where \(s\) is the signal and \(n\) is the noise.
This implies the existence of projections: \[\begin{equation} \pi_s(r) = s, \quad \pi_n(r) = n \end{equation}\]
such that: \[\begin{equation} r = \pi_s(r) + \pi_n(r) \end{equation}\]
However, since \(r = \mathcal{I}(\omega)\) and \(\mathcal{I}\) is not invertible, there is no unique mapping: \[\begin{equation} r \mapsto (s, n) \end{equation}\]
Moreover, for \(\omega_1 \neq \omega_2\): \[\begin{equation} \mathcal{I}(\omega_1) = \mathcal{I}(\omega_2) \end{equation}\]
which implies that any decomposition \((s, n)\) is not uniquely defined.
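The non-uniqueness of the pair \((s, n)\) can be shown with a minimal sketch: two different "signal models" (both illustrative assumptions) reproduce the same registration exactly while disagreeing on what counts as noise.

```python
# Two equally consistent (s, n) decompositions of one registration r: the
# additive split is model-dependent, not determined by r. The two "signal
# models" below are illustrative assumptions.
r = [1.0, 3.0, 2.0]

# model A: constant signal at the mean, residual declared "noise"
s_a = [2.0, 2.0, 2.0]
n_a = [ri - si for ri, si in zip(r, s_a)]

# model B: linear-trend signal, a different residual declared "noise"
s_b = [1.5, 2.0, 2.5]
n_b = [ri - si for ri, si in zip(r, s_b)]

# both reproduce r exactly, yet disagree on what counts as noise
assert [si + ni for si, ni in zip(s_a, n_a)] == r
assert [si + ni for si, ni in zip(s_b, n_b)] == r
assert n_a != n_b
```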
Interaction-Dependent Noise
Let us consider a family of interactions \(\omega \in \Omega\).
Define a structural descriptor of variability: \[\begin{equation} \Sigma(r) = \text{structure of variations in } R \end{equation}\]
Then: \[\begin{equation} \Sigma(\mathcal{I}(\omega)) = \Phi(\omega) \end{equation}\]
for some mapping \(\Phi\).
This implies:
the structure of what is called “noise” depends on \(\omega\),
noise is not independent of the interaction,
noise is not stationary across different configurations.
Therefore, noise is not an additive perturbation: \[\begin{equation} n \neq \text{independent additive term} \end{equation}\]
but an intrinsic component of the mapping \(\mathcal{I}\).
Non-Separability Principle
Proposition 2 (Non-Separability of Signal and Noise). Let \(r = \mathcal{I}(\omega)\). In general, there do not exist functions \(s(\cdot)\) and \(n(\cdot)\) such that: \[\begin{equation} r = s(\omega) + n(\omega) \end{equation}\] with \(s\) invariant and \(n\) independent.
Implication.
any decomposition into signal and noise is model-dependent,
filtering operations modify the structure of \(r\),
removing noise removes information about the interaction.
Observation as Structured Trace
We replace the additive model with:
\[\begin{equation} \text{Observation} = \text{Structured Trace} \end{equation}\]
A structured trace is defined by:
its geometric form in \(R\),
its variability structure \(\Sigma\),
its stability under repeated registration.
In this formulation:
variability is intrinsic,
noise is not removable,
invariants are defined over structures of variation, not point values.
Filtering as Projection
Classical filtering is usually interpreted as a recovery procedure: a corrupted observation is transformed into a cleaner representation that is assumed to be closer to the true signal.
In the interface framework, filtering must be described differently.
Definition 5 (Filter as Projection). A filter is an operator \[\begin{equation} F : R \rightarrow R' \end{equation}\] that maps the original space of registrations \(R\) into a reduced representation space \(R'\).
In general, \(F\) is not injective. Therefore: \[\begin{equation} \exists r_1 \neq r_2 \in R \quad \text{such that} \quad F(r_1) = F(r_2) \end{equation}\]
This means that distinct registrations with different internal variability structures may become indistinguishable after filtering.
Proposition 3 (Loss of Structural Information Under Filtering). If \(F : R \to R'\) is non-injective, then filtering reduces the discriminative structure available in the original registration space.
Proof sketch. Since \(F\) is non-injective, there exist equivalence classes \[\begin{equation} [r]_F := \{ \tilde r \in R \mid F(\tilde r) = F(r) \} \end{equation}\] containing more than one element. Hence, the filtered representation does not preserve the distinctions between members of \([r]_F\). Therefore, part of the structural content present in \(R\) is removed in \(R'\). \(\square\)
Thus, filtering is not recovery of an original signal. It is a projection that suppresses distinctions.
Reduction of Variability Structure
Let \(\Sigma(r)\) denote the variability structure of a registration \(r \in R\). After filtering, one obtains: \[\begin{equation} \Sigma'(F(r)) \end{equation}\]
In general, there is no mapping allowing reconstruction of \(\Sigma(r)\) from \(\Sigma'(F(r))\). Instead, one has: \[\begin{equation} \Sigma'(F(r)) \preceq \Sigma(r) \end{equation}\] where \(\preceq\) denotes structural reduction: the filtered object preserves at most a compressed fragment of the original variability pattern.
This has a direct epistemic consequence:
filtering reduces apparent instability,
but it also removes features generated by the interaction,
including regime transitions, asymmetries, tails, and local structural changes.
Interval Interpretation of Filtering
In the interval framework, a registration is not interpreted as a point but as belonging to a region of admissible realizations.
Let \(C \subset R\) be a structured admissible region. Under filtering, one obtains: \[\begin{equation} F(C) \subset R' \end{equation}\]
Since \(F\) is a projection, the image \(F(C)\) is not a faithful preservation of the original structure of \(C\), but a compressed representation of it.
This implies that filtering acts as an operation of structural contraction: \[\begin{equation} C \mapsto F(C) \end{equation}\] where the resulting image may be narrower, smoother, or more regular, yet less informative.
In this sense, filtering can be interpreted as a forced closure over distinguishable structures: \[\begin{equation} \mathrm{cl}_F(C) := F(C) \end{equation}\] not because it stabilizes truth, but because it suppresses distinctions inside the admissible region.
Compression Without Justified Ontology
In the standard framework, such contraction is often interpreted as epistemic improvement: the filtered signal is treated as more representative of the underlying source.
In the present framework, this interpretation is unjustified.
The passage \[\begin{equation} C \subset R \quad \longrightarrow \quad F(C) \subset R' \end{equation}\] does not reveal the source more clearly. It only produces a lower-dimensional or less discriminative representation of the trace.
Therefore:
filtering is a compression operation,
not a recovery operation,
and any gain in regularity is paid for by loss of structural content.
Consequence for Scientific Interpretation
If variability is part of the trace, then the scientific task is not to eliminate it, but to determine which structures of variability are stable, reproducible, and invariant under admissible transformations.
Accordingly:
the relevant invariant is not the filtered point,
but the stable admissible region together with its internal structure,
and the correct object of analysis is not a cleaned signal, but a structured domain of trace realizations.
Corollary 1. A filtered observation cannot be considered epistemically superior to the original registration unless the discarded structural content is proven irrelevant to the class of invariants under study.
Example: Median Filter and Loss of Structure
Consider a one-dimensional discrete registration: \[\begin{equation} r = (x_1, x_2, \dots, x_n) \in R \end{equation}\]
Define a median filter with window size \(3\): \[\begin{equation} (F(r))_i = \mathrm{median}(x_{i-1}, x_i, x_{i+1}) \end{equation}\]
Non-Injectivity
The median operator is non-injective.
For example, consider two distinct registrations: \[\begin{align} r^{(1)} &= (0, 10, 0) \\ r^{(2)} &= (0, 0, 0) \end{align}\]
Applying the filter: \[\begin{equation} F(r^{(1)}) = (0), \quad F(r^{(2)}) = (0) \end{equation}\]
Thus: \[\begin{equation} r^{(1)} \neq r^{(2)}, \quad F(r^{(1)}) = F(r^{(2)}) \end{equation}\]
These two inputs have fundamentally different variability structures:
\(r^{(1)}\) contains a sharp spike,
\(r^{(2)}\) is a flat plateau.
After filtering, this distinction is completely lost.
Loss of Variability Structure
Let \(\Sigma(r)\) denote the local variability structure.
For the above signals: \[\begin{align} \Sigma(r^{(1)}) &\text{: high local contrast, impulse-like} \\ \Sigma(r^{(2)}) &\text{: no local contrast, flat} \end{align}\]
However: \[\begin{equation} \Sigma'(F(r^{(1)})) = \Sigma'(F(r^{(2)})) \end{equation}\]
Thus, the filter collapses distinct structural regimes into a single representation.
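The collapse can be verified numerically; a minimal sketch, assuming a window-3 median over interior positions with no boundary padding, and an illustrative spike/plateau input pair:

```python
# A minimal numerical check of non-injectivity, assuming a window-3 median
# filter over interior positions with no boundary padding. The input pair
# (spike vs. flat plateau) is an illustrative choice.
from statistics import median

def median_filter_3(r):
    """Window-3 median over interior positions of r (no padding)."""
    return [median(r[i - 1:i + 2]) for i in range(1, len(r) - 1)]

r1 = [0, 10, 0]  # impulse-like: sharp spike
r2 = [0, 0, 0]   # flat plateau: no local contrast

# distinct registrations, identical filtered trace
assert r1 != r2
assert median_filter_3(r1) == median_filter_3(r2) == [0]
```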
Interval Interpretation
Let us now interpret this in the interval framework.
Suppose the admissible region of local registrations is: \[\begin{equation} C = \{ (x_{i-1}, x_i, x_{i+1}) \} \end{equation}\]
Within this region, different configurations correspond to different interaction structures (spike vs plateau).
The median filter maps this region as: \[\begin{equation} F(C) = \{ \mathrm{median}(x_{i-1}, x_i, x_{i+1}) \} \end{equation}\]
This mapping:
reduces dimensionality,
suppresses internal distinctions,
produces a single representative value.
Thus, a structured region \(C\) is mapped to a compressed image \(F(C)\) that does not preserve its internal geometry.
Interpretation
The median filter is often justified as removing impulsive noise.
In the present framework, it performs a different operation:
it enforces a constraint on admissible local orderings,
it removes configurations that violate median dominance,
it replaces structural diversity with a single statistic.
Therefore:
the filter does not recover an underlying signal,
it selects a projection consistent with a chosen criterion,
and discards alternative realizations within the admissible region.
Conclusion. Even in this simplest case, filtering does not separate signal from noise. It reduces the space of distinguishable registrations and removes structurally meaningful variability generated by the interaction.
This demonstrates that what is commonly called “noise removal” is, in fact, a collapse of admissible structures into a reduced representation.
Example: Mean Filter and Creation of Non-Observed Values
Consider the same local registration: \[\begin{equation} r = (x_{i-1}, x_i, x_{i+1}) \in R \end{equation}\]
Define a mean (averaging) filter: \[\begin{equation} (F(r))_i = \frac{x_{i-1} + x_i + x_{i+1}}{3} \end{equation}\]
Non-Preservation of Observed Values
Let us consider a simple case: \[\begin{equation} r = (0, 10, 0) \end{equation}\]
Then: \[\begin{equation} F(r) = \frac{10}{3} \approx 3.33 \end{equation}\]
The value \(3.33\):
does not appear in the original registration,
is not observed at any position,
is not a member of the initial admissible set of values.
Thus, the filter produces a value that was never registered.
Non-Injectivity and Collapse
As in the median case, distinct inputs may map to the same output.
For example: \[\begin{align} r^{(1)} &= (0, 10, 0) \\ r^{(2)} &= (2, 6, 2) \end{align}\]
Then: \[\begin{equation} F(r^{(1)}) = F(r^{(2)}) = \frac{10}{3} \end{equation}\]
However:
\(r^{(1)}\) is impulse-like,
\(r^{(2)}\) is smooth and low-contrast.
The averaging operation collapses these distinct structures into the same value.
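Both properties, collapse of distinct inputs and creation of unregistered values, can be checked directly; a minimal sketch assuming a window-3 moving average over interior positions with no padding:

```python
# A minimal check that averaging synthesizes unregistered values, assuming a
# window-3 moving average over interior positions (no padding). Inputs are
# illustrative.
def mean_filter_3(r):
    """Window-3 moving average over interior positions of r."""
    return [(r[i - 1] + r[i] + r[i + 1]) / 3 for i in range(1, len(r) - 1)]

r1 = [0, 10, 0]  # impulse-like
r2 = [2, 6, 2]   # smooth, low-contrast

out1, out2 = mean_filter_3(r1), mean_filter_3(r2)

assert out1 == out2              # non-injective: distinct structures collapse
assert out1[0] not in r1         # 10/3 was never registered: a synthetic value
```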
Interval Interpretation
Let \(C \subset R\) be a region of admissible local configurations.
The mean filter maps: \[\begin{equation} F(C) = \left\{ \frac{x_{i-1} + x_i + x_{i+1}}{3} \right\} \end{equation}\]
This mapping has two critical properties:
it compresses the region \(C\) into a lower-dimensional representation,
it generates values that may not belong to the original set of realizations.
Thus, \(F(C)\) is not a subset of \(C\): \[\begin{equation} F(C) \not\subset C \end{equation}\]
The filter produces new elements that are artifacts of the transformation.
Interpretation
In the standard framework, averaging is interpreted as estimation of the true signal.
In the interface framework:
averaging is a synthetic operation,
it constructs values not present in the trace,
it replaces structural variability with a numerical artifact.
Conclusion. The mean filter does not reveal an underlying signal. It creates new values by collapsing admissible configurations and therefore cannot be interpreted as a recovery of the source.
Corollary 2. Averaging introduces values that do not correspond to any individual registration and therefore cannot be treated as direct observations of a physical or empirical state.
Example: Variance as an Operator-Dependent Construct
In the standard statistical framework, variance is interpreted as a property of a signal or an underlying process.
Let a local registration be given: \[\begin{equation} r = (x_1, x_2, \dots, x_n) \in R \end{equation}\]
The variance is defined as: \[\begin{equation} \mathrm{Var}(r) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 \end{equation}\] where: \[\begin{equation} \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \end{equation}\]
Dependence on Representation
The value of \(\mathrm{Var}(r)\) depends on:
the chosen window size \(n\),
the segmentation of the signal,
the preprocessing and filtering applied,
the interface resolution and sampling rate.
Thus, variance is not an intrinsic property of a source, but a functional of the registration representation.
Effect of Filtering
Let \(F : R \to R'\) be a filter.
Then, in general: \[\begin{equation} \mathrm{Var}(F(r)) \neq \mathrm{Var}(r) \end{equation}\]
In many cases: \[\begin{equation} \mathrm{Var}(F(r)) < \mathrm{Var}(r) \end{equation}\]
This reduction is typically interpreted as noise suppression.
However, from the interface perspective:
\(F\) removes structural variability,
the decrease in variance reflects loss of information,
not recovery of an underlying stable signal.
Non-Invariance Under Reparameterization
Consider a transformation: \[\begin{equation} T : R \rightarrow R \end{equation}\]
Even simple transformations (e.g., scaling, nonlinear mapping, aggregation) change variance:
\[\begin{equation} \mathrm{Var}(T(r)) \neq \mathrm{Var}(r) \end{equation}\]
Thus, variance is not invariant under admissible transformations of the interface.
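Both effects can be illustrated numerically; a minimal sketch in which the registration, the window-3 moving average, and the scaling \(T(r) = 2r\) are all illustrative assumptions:

```python
# Sketch: variance as a functional of the representation, not of a source.
# The registration r, the window-3 moving average, and the scaling T(r) = 2r
# are all illustrative assumptions.
import statistics

def var(r):
    """Population variance (1/n normalization, matching the text's definition)."""
    return statistics.pvariance(r)

r = [0, 10, 0, 10, 0, 10, 0]

# filtered representation: window-3 moving average over interior positions
f = [(r[i - 1] + r[i] + r[i + 1]) / 3 for i in range(1, len(r) - 1)]
assert var(f) < var(r)   # the "suppressed noise" is removed structure

# reparameterization T(r) = 2r rescales variance by a factor of 4
t = [2 * x for x in r]
assert abs(var(t) - 4 * var(r)) < 1e-9   # not invariant under T
```

The same registration thus yields different variances depending entirely on the processing pipeline applied before the functional is evaluated.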
Interval Interpretation
Let \(C \subset R\) be an admissible region of registrations.
Variance can be interpreted as a functional: \[\begin{equation} \mathrm{Var} : C \rightarrow \mathbb{R} \end{equation}\]
This functional:
reduces the region \(C\) to a single scalar,
discards geometric and structural properties of \(C\),
depends on the choice of representation.
Thus, variance is a compression of the admissible region: \[\begin{equation} C \longrightarrow \mathrm{Var}(C) \end{equation}\]
which cannot be inverted.
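That this compression cannot be inverted is easy to see concretely: two structurally different registrations (hypothetical values below) can share the same mean and variance.

```python
# Two registrations with different internal structure but identical
# mean and variance: Var discards the distinction, so it cannot be inverted.
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

r1 = [0.0, 0.0, 2.0, 2.0]   # step-like configuration
r2 = [0.0, 2.0, 0.0, 2.0]   # alternating configuration
assert r1 != r2
assert mean(r1) == mean(r2) == 1.0
assert variance(r1) == variance(r2) == 1.0
```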
Interpretation
Variance is often treated as a characteristic of the system.
In the interface framework:
variance is a property of the representation,
it depends on the interface and processing pipeline,
it does not uniquely characterize the underlying interaction.
Conclusion. Variance is not a fundamental property of a signal or system, but an operator-dependent scalar that compresses a structured region of registrations and removes internal distinctions.
General Consequence. Statistical descriptors such as mean and variance do not reveal properties of an underlying source. They are operator-dependent compressions of structured regions in the space of registrations.
Example: Distribution as an Aggregation Artifact
In the standard framework, a probability distribution is interpreted as a fundamental property of a signal or an underlying process.
Let a collection of registrations be given: \[\begin{equation} \mathcal{D} = \{ r_1, r_2, \dots, r_N \}, \quad r_i \in R \end{equation}\]
A distribution is constructed as a mapping: \[\begin{equation} \mathcal{P} : R \rightarrow [0,1] \end{equation}\] typically via empirical frequencies: \[\begin{equation} \mathcal{P}(x) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_{x}(r_i) \end{equation}\]
Dependence on Sampling Procedure
The constructed distribution depends on:
sampling rate,
observation window,
segmentation of the signal,
selection criteria,
interface characteristics.
Thus, \(\mathcal{P}\) is not uniquely defined by the underlying interaction, but by the procedure that generates \(\mathcal{D}\).
Non-Invariance Under Transformation
Let \(T : R \rightarrow R\) be a transformation (e.g., filtering, normalization, aggregation).
Then, in general: \[\begin{equation} \mathcal{P}_T(x) \neq \mathcal{P}(x) \end{equation}\]
where \(\mathcal{P}_T\) is the distribution constructed from \(\{T(r_i)\}\).
Thus, the distribution is not invariant under admissible transformations of the registration space.
Aggregation-Induced Structure
The distribution is obtained by collapsing a structured set of registrations into frequency counts.
Let \(C \subset R\) be a region of admissible realizations. The distribution induces: \[\begin{equation} C \longrightarrow \mathcal{P}_C \end{equation}\]
where \(\mathcal{P}_C\) is a summary of occurrences.
This mapping:
ignores ordering,
ignores temporal structure,
ignores dependencies between elements,
retains only aggregated frequencies.
Thus, distinct structural configurations of \(C\) may produce identical distributions.
Non-Uniqueness
There exist distinct datasets: \[\begin{equation} \mathcal{D}_1 \neq \mathcal{D}_2 \end{equation}\] such that: \[\begin{equation} \mathcal{P}_{\mathcal{D}_1} = \mathcal{P}_{\mathcal{D}_2} \end{equation}\]
even though:
their internal structures differ,
their temporal organization differs,
their interaction patterns differ.
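A minimal sketch of this non-uniqueness (hypothetical data): two datasets with identical empirical distributions but different temporal organization.

```python
from collections import Counter

d1 = [0, 0, 0, 1, 1, 1]   # two homogeneous blocks
d2 = [0, 1, 0, 1, 0, 1]   # strict alternation

# Identical empirical distributions...
assert Counter(d1) == Counter(d2)

# ...yet different internal structure, e.g. the number of transitions:
def transitions(d):
    return sum(a != b for a, b in zip(d, d[1:]))

assert transitions(d1) == 1
assert transitions(d2) == 5
```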
Interval Interpretation
Let \(C \subset R\) be an admissible region.
The distribution is a functional: \[\begin{equation} \mathcal{P} : C \rightarrow \mathcal{M} \end{equation}\] where \(\mathcal{M}\) is a space of measures.
This mapping:
compresses \(C\) into a measure,
removes structural geometry,
depends on aggregation rules.
Thus, \(\mathcal{P}\) is not a property of the region \(C\), but a representation derived from it.
Interpretation
In the standard view, the distribution characterizes the system.
In the interface framework:
the distribution is an artifact of aggregation,
it depends on how data are collected and processed,
it does not uniquely represent the underlying interaction.
Conclusion. A probability distribution is not a fundamental descriptor of a system. It is a projection of a structured set of registrations into a space of measures, and therefore cannot be interpreted as direct knowledge of the source.
Global Conclusion on Statistical Ontology. None of the standard statistical constructs—signal, noise, mean, variance, or distribution—constitute direct knowledge of a source. All of them are operator-dependent representations derived from structured regions in the space of registrations.
Example: Kalman Filter and Model-Dependent State Construction
The Kalman filter is commonly interpreted as a method for estimating the true hidden state of a dynamical system from noisy observations.
In its standard form: \[\begin{align} x_{t+1} &= A x_t + w_t \\ z_t &= H x_t + v_t \end{align}\]
where:
\(x_t\) is the hidden state,
\(z_t\) is the observation,
\(w_t, v_t\) are process and measurement noise.
Absence of Direct Access to the State
Within the interface framework:
only \(z_t \in R\) is observed,
\(x_t\) is not part of the registration space,
\(x_t\) is introduced as a model variable.
Thus, the state is not reconstructed from observation. It is postulated.
Model Dependence
The estimate \(\hat{x}_t\) depends on:
system matrices \(A, H\),
noise covariances \(Q, R\) (here \(R\) denotes the measurement-noise covariance, not the registration space),
initial conditions.
Different choices of these parameters produce different trajectories: \[\begin{equation} \hat{x}_t^{(1)} \neq \hat{x}_t^{(2)} \end{equation}\]
while being consistent with the same observations \(\{z_t\}\).
Non-Uniqueness
Given observations \(\{z_t\}\), there exist multiple state trajectories: \[\begin{equation} x_t^{(1)}, x_t^{(2)}, \dots \end{equation}\] such that: \[\begin{equation} H x_t^{(i)} \approx z_t \end{equation}\]
Thus, the hidden state is not uniquely determined by observations.
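A minimal sketch of this non-uniqueness, assuming a two-dimensional state observed through \(H = (1, 0)\) (values hypothetical): states differing in the unobserved coordinate are observationally identical.

```python
# With H = (1, 0), only the first state coordinate is registered;
# states differing in the second coordinate produce the same observation.
H = (1.0, 0.0)

def observe(x):
    # z = H x for a 2-dimensional state x
    return H[0] * x[0] + H[1] * x[1]

x1 = (3.0, -5.0)
x2 = (3.0, 42.0)   # differs only in the unobserved coordinate
assert x1 != x2
assert observe(x1) == observe(x2) == 3.0
```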
Interpretation
In the standard view, the Kalman filter recovers the true state.
In the interface framework:
the filter constructs a trajectory in a chosen state space,
the result is a model-consistent projection,
not a reconstruction of an underlying source.
Conclusion. The Kalman filter does not reveal hidden reality. It produces a coherent state trajectory within a specified model, consistent with observed registrations but not uniquely determined by them.
Kalman Filtering as Dynamic Smoothing and Bayesian Projection
The Kalman filter is usually presented as a method for estimating the true hidden state of a dynamical system from noisy observations. In the present framework, this interpretation must be replaced.
Consider the standard linear-Gaussian scheme: \[\begin{align} x_{t+1} &= A x_t + w_t, \\ z_t &= H x_t + v_t, \end{align}\] where \(x_t\) is called the hidden state, \(z_t\) is the observation, and \(w_t\), \(v_t\) are process and measurement noise.
State as Model Coordinate, Not Observed Source
Within the interface framework, only the registrations \(\{z_t\}\) belong to the empirical layer. The variable \(x_t\) does not belong to the registration space \(R\) and is never directly observed. It is introduced as a model coordinate used to organize admissible trajectories.
Therefore, the Kalman filter does not recover a source. It constructs a trajectory in a chosen state space compatible with the observed registrations.
Prediction and Update as Successive Contractions
The Kalman recursion consists of two steps.
Prediction: \[\begin{align} \hat{x}_{t|t-1} &= A \hat{x}_{t-1|t-1}, \\ P_{t|t-1} &= A P_{t-1|t-1} A^\top + Q. \end{align}\]
Update: \[\begin{align} K_t &= P_{t|t-1} H^\top (H P_{t|t-1} H^\top + R)^{-1}, \\ \hat{x}_{t|t} &= \hat{x}_{t|t-1} + K_t (z_t - H \hat{x}_{t|t-1}), \\ P_{t|t} &= (I - K_t H) P_{t|t-1}. \end{align}\]
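As a concreteness check, the recursion above can be sketched in a minimal scalar implementation (all numerical values are hypothetical). Running it with two different process-noise assumptions shows that the same registrations yield different estimate trajectories:

```python
def kalman(zs, A=1.0, H=1.0, Q=0.01, R=1.0, x0=0.0, P0=1.0):
    """Scalar Kalman recursion: prediction followed by update for each z_t."""
    x, P, traj = x0, P0, []
    for z in zs:
        # Prediction
        x_pred = A * x
        P_pred = A * P * A + Q
        # Update
        K = P_pred * H / (H * P_pred * H + R)
        x = x_pred + K * (z - H * x_pred)
        P = (1.0 - K * H) * P_pred
        traj.append(x)
    return traj

zs = [1.0, 1.2, 0.8, 1.1, 0.9]    # hypothetical registrations
t_slow = kalman(zs, Q=0.01)       # assumes nearly static dynamics
t_fast = kalman(zs, Q=10.0)       # assumes highly volatile dynamics
assert t_slow != t_fast           # same z_t, different trajectories
```

Both runs are equally consistent with \(\{z_t\}\); the divergence stems entirely from the covariance assumptions.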
In the standard interpretation, prediction approximates the system evolution and update corrects the estimate toward the true state.
In the present interpretation:
prediction extends the previously stabilized admissible region according to the chosen dynamic rule;
update contracts this region using compatibility with the new registration;
the covariance update is not recovery of truth, but narrowing of a model-dependent admissible domain.
Thus, the filter acts as a dynamic smoothing operator together with a recursive projection onto a model-defined class of admissible trajectories.
Bayesian Projection
In Bayesian terms, the Kalman filter combines:
a prior induced by the dynamic model,
a likelihood induced by the observation model,
a posterior interpreted as the new admissible state region.
This does not imply access to a hidden source. It only means that the registration has been incorporated into a chosen inferential structure.
Hence, the posterior is not the revealed state of reality, but a projected and contracted representation conditioned by:
the model matrices \(A\) and \(H\),
the covariance assumptions \(Q\) and \(R\),
the initialization of the recursion.
Different choices of these elements produce different posterior trajectories while remaining compatible with the same registrations.
Kalman Filtering as Closure
Let \(C_{t|t-1}\) denote the predicted admissible region and let \(\mathcal{O}(z_t)\) denote the region compatible with the registration \(z_t\).
Then the update step can be interpreted schematically as: \[\begin{equation} C_{t|t} = \mathrm{cl}\big(C_{t|t-1} \cap \mathcal{O}(z_t)\big), \end{equation}\] where \(\mathrm{cl}\) is a closure operator determined by the filtering assumptions.
In this sense, the Kalman filter is a recursive closure procedure:
it propagates an admissible region,
intersects it with observational compatibility,
stabilizes the result in a tractable form.
In the linear-Gaussian case, this tractable form is represented by the pair \((\hat{x}_{t|t}, P_{t|t})\). The point estimate \(\hat{x}_{t|t}\) is therefore not a recovered hidden state, but a trace of the chosen closure procedure.
Consequences
This reinterpretation leads to the following conclusions:
the Kalman filter is not a recovery of hidden reality;
it is a model-dependent dynamic smoothing of registrations;
it performs a Bayesian projection into an admissible state representation;
its output is a closure artifact, not a direct epistemic access to the source.
Conclusion. Kalman filtering should not be interpreted as estimation of an underlying true state. It is a recursive operation of dynamic smoothing and Bayesian projection, whose output is a stabilized representation of admissible trajectories consistent with the observed registrations.
Corollary 3. The hidden state produced by a Kalman filter is no more ontologically primary than a mean, a variance, or a distribution. It is an operator-dependent construct generated from registrations under a chosen closure rule.
Reinterpretation of State-Space Models
The preceding analysis of Kalman filtering suggests a broader conclusion.
State-space models should not be interpreted as descriptions of hidden physical states. Instead, they can be understood as formal systems for organizing and stabilizing admissible structures in the space of registrations.
In the standard interpretation:
the state represents an underlying reality,
the model approximates its evolution,
estimation recovers the true trajectory.
In the interface framework:
the state is a coordinate representation of admissible trace structures,
the dynamics define admissible transformations of these structures,
estimation is a recursive selection of a stable representation under given constraints.
Thus, a state-space model can be viewed as a calculus over admissible regions: \[\begin{equation} C_{t+1} = \mathcal{T}(C_t), \end{equation}\] where \(C_t \subset R\) represents a structured region of registrations, and \(\mathcal{T}\) is a model-dependent transformation.
Observation introduces compatibility constraints: \[\begin{equation} C_t \longrightarrow C_t \cap \mathcal{O}(z_t), \end{equation}\]
followed by stabilization: \[\begin{equation} C_t \longrightarrow \mathrm{cl}(C_t). \end{equation}\]
In this formulation:
dynamics are transformations of admissible regions,
observations restrict these regions,
closure ensures representability and stability.
Epistemic Status of the State
The state variable \(x_t\) is therefore not an ontological entity but a representational device:
it encodes a compressed description of \(C_t\),
it depends on the chosen model and parameterization,
it is not uniquely determined by observations.
Different state-space constructions may yield different state trajectories while remaining consistent with the same set of registrations.
Consequences
This reinterpretation preserves the computational usefulness of state-space models while removing their ontological overreach.
In particular:
state estimation becomes region stabilization,
filtering becomes recursive closure,
model selection becomes choice of admissible transformation rules.
Conclusion. State-space models are not descriptions of hidden reality. They are formal calculi for constructing, transforming, and stabilizing admissible structures in the space of registrations.
Dynamics of Admissible Regions
The preceding analysis replaces point-based representations with admissible regions \(C \subset R\). We now extend this formulation to dynamics.
From State Trajectories to Region Evolution
In the standard framework, system evolution is described as a trajectory of states: \[\begin{equation} x_t \rightarrow x_{t+1} \end{equation}\]
Within the interface framework, the primary object is not a state \(x_t\), but an admissible region: \[\begin{equation} C_t \subset R \end{equation}\]
Dynamics is therefore expressed as a transformation: \[\begin{equation} C_t \xrightarrow{\mathcal{T}} C_{t+1} \end{equation}\]
where \(\mathcal{T}\) is an operator acting on regions of registrations.
Structure of the Evolution Operator
The operator \(\mathcal{T}\) captures:
admissible transformations of registration structures,
evolution of variability patterns,
constraints imposed by the interface and interaction mechanisms.
Unlike state evolution:
\(\mathcal{T}\) does not act on hidden variables,
it operates entirely within the empirical domain \(R\),
it preserves or transforms regions rather than points.
Incorporation of New Observations
Let \(z_t \in R\) be a new registration.
Observation introduces a compatibility constraint on the predicted region \(\mathcal{T}(C_t)\): \[\begin{equation} C_{t+1}^{*} = \mathcal{T}(C_t) \cap C_{z_t} \end{equation}\]
where \(C_{z_t}\) is the admissible region induced by \(z_t\).
This operation:
reduces the admissible region,
enforces consistency with new data,
preserves structural variability within the compatible set.
Closure and Stabilization
In general, the intersection \(C_{t+1}^{*}\) may not belong to a tractable or stable class of regions.
We therefore introduce a closure operator: \[\begin{equation} C_{t+1} = \mathrm{cl}(C_{t+1}^{*}) \end{equation}\]
where \(\mathrm{cl}\) enforces:
representability,
stability under transformations,
compatibility with the chosen analytical framework.
Thus, a full evolution step proceeds as: \[\begin{equation} C_t \xrightarrow{\;\mathcal{T}\;} \mathcal{T}(C_t) \xrightarrow{\;\cap\, C_{z_t}\;} C_{t+1}^{*} \xrightarrow{\;\mathrm{cl}\;} C_{t+1} \end{equation}\]
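The scheme can be sketched with closed intervals as the class of admissible regions; the evolution rule, tolerance, and registration values below are hypothetical choices made only for illustration:

```python
# Regions are closed intervals (lo, hi). One evolution step:
# transform by T, intersect with the observation region, then close.
def T(c):
    lo, hi = c
    return (lo + 1.0 - 0.5, hi + 1.0 + 0.5)   # shift by 1, widen by 0.5

def obs_region(z, eps=0.4):
    return (z - eps, z + eps)                  # registrations compatible with z

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def cl(c):
    return c   # closure is the identity on nonempty closed intervals here

c = (0.0, 1.0)
for z in [1.4, 2.5, 3.6]:                      # hypothetical registrations
    c = cl(intersect(T(c), obs_region(z)))
# c is now a contracted admissible region, not a point estimate
```

The output of each step remains a region: observation narrows it, and no point-valued "true state" is ever produced.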
Comparison with State-Space Models
This formulation provides a direct reinterpretation of state-space dynamics:
state trajectories \(x_t\) are replaced by evolving regions \(C_t\),
prediction corresponds to \(\mathcal{T}\),
update corresponds to intersection with observational constraints,
estimation corresponds to closure.
Thus, classical filtering methods (including Kalman filtering) can be understood as specific implementations of this general scheme under particular choices of region representation and closure.
Interpretation
In this framework:
dynamics describes the evolution of admissible structures, not of hidden states,
observation restricts these structures rather than revealing truth,
stability arises from invariance of regions under \(\mathcal{T}\) and \(\mathrm{cl}\).
Conclusion. The evolution of a system is not a trajectory of states, but a transformation of admissible regions in the space of registrations. This provides a natural alternative to state-based models and forms the basis for a general interface epistemology.
This shift eliminates the need for hidden states as primary objects and replaces them with directly observable, structurally stable regions in the space of registrations.
Closure as the Origin of Discreteness and Measurement
The introduction of admissible regions \(C \subset R\) and their stabilization through closure suggests a deeper structural role of the closure operator.
In the present framework, closure is not merely a technical operation used to ensure representability or analytical convenience. It reflects a structural constraint on admissible registrations.
Closure and Structural Stabilization
Let \(C \subset R\) be an admissible region of registrations. The closure operator: \[\begin{equation} \mathrm{cl}(C) \end{equation}\] produces a stabilized structure that is:
invariant under admissible transformations,
compatible with the interface constraints,
representable within a given measurement scheme.
This stabilized structure may decompose into components: \[\begin{equation} \mathrm{cl}(C) = \bigsqcup_{k \in K} C_k \end{equation}\]
where each \(C_k\) is stable under repeated application of \(\mathrm{cl}\) and cannot be continuously transformed into another component without violating admissibility.
Emergence of Discreteness
In the standard view, discreteness is often attributed to intrinsic properties of the source.
Within the interface framework, discreteness arises differently.
When \(\mathrm{cl}(C)\) decomposes into a family of stable components \(\{C_k\}\), any admissible measurement procedure that selects a representative element produces values indexed by \(k\): \[\begin{equation} \hat{x}_k = \phi(C_k) \end{equation}\]
Thus:
discreteness is not a primitive property of the source,
it is induced by the structure of \(\mathrm{cl}(C)\),
numerical values arise as traces of closure and fixation.
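A toy sketch of this mechanism, assuming a closure that stabilizes registrations into unit-width components and a fixation functional \(\phi\) that selects midpoints (both choices are hypothetical simplifications):

```python
# Closure assigns each raw registration to a stable component C_k
# (here: unit bins); the fixation functional phi selects the midpoint.
def cl_component(x):
    k = int(x // 1.0)                    # index of the containing component
    return (float(k), float(k) + 1.0)

def phi(component):
    lo, hi = component
    return (lo + hi) / 2.0               # fixation: representative value

readings = [0.2, 0.9, 1.1, 1.7, 2.3]     # hypothetical raw registrations
values = sorted({phi(cl_component(x)) for x in readings})
# values == [0.5, 1.5, 2.5]: a discrete spectrum induced by closure,
# not by any intrinsic discreteness of a source
```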
Measurement as Selection within Closure
Measurement is not a process of revealing a pre-existing value.
Let \(\phi\) be a fixation functional. Then measurement takes the form: \[\begin{equation} \hat{x} = \phi(\mathrm{cl}(C)) \end{equation}\]
The result depends on:
the structure of \(\mathrm{cl}(C)\),
the choice of fixation functional \(\phi\),
the interface and measurement protocol.
Different measurement procedures may correspond to different pairs \((\mathrm{cl}, \phi)\) and therefore produce different numerical outcomes while remaining compatible with the same admissible region.
Interpretation
This leads to a reinterpretation of fundamental notions:
values are not directly observed, but selected within closed admissible structures,
discreteness reflects the internal organization of admissibility under closure,
measurement outcomes are traces of closure-dependent stabilization.
Relation to the Interval Formalism
A detailed formalization of closure as a structural operator on admissible regions, including its role in inducing discrete spectra and stabilizing measurement outcomes, is developed in the interval-based formulation of dynamics [1].
In that framework:
admissible regions are primary,
closure defines their internal structure,
numerical values emerge as fixation traces.
The present work provides the epistemic interpretation of these structures within the space of registrations \(R\).
Conclusion
Closure is not a secondary operation applied to observations. It is a structural mechanism that shapes admissible regions and induces both discreteness and measurement outcomes.
This reinforces the central claim of the paper: observable quantities are not properties of a source, but traces of structured transformations within the space of registrations.
Conclusion
We establish the following principle:
The source is epistemically inaccessible. Scientific knowledge operates exclusively on structured traces produced within interfaces of registration.
This result is not a methodological limitation, but a structural consequence of the non-invertibility of the interface operator: \[\begin{equation} \mathcal{I} : \Omega \rightarrow R \end{equation}\]
Since \(\mathcal{I}\) is, in general, non-invertible, observation determines not a unique source, but an equivalence class of admissible interactions, given by the preimage: \[\begin{equation} r \longrightarrow \mathcal{I}^{-1}(r) \subseteq \Omega \end{equation}\] where \(\mathcal{I}^{-1}(r)\) denotes the set of interactions that produce the registration \(r\).
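A toy illustration of this preimage structure, with rounding standing in for the interface \(\mathcal{I}\) (a deliberately simplified assumption):

```python
# A toy interface: rounding to one decimal place. Distinct interactions
# produce the same registration, so observation fixes only a preimage class.
def interface(omega):
    return round(omega, 1)

r = interface(0.73)                       # the registration
for omega in (0.68, 0.71, 0.749):         # members of the class I^{-1}(r)
    assert interface(omega) == r
```

No procedure applied to \(r\) alone can recover which member of the class produced it.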
Accordingly, scientific reasoning must be reformulated in terms of structures defined within the space of registrations \(R\).
Epistemic Consequences
This leads to the following shifts:
Objects: objects are not observed entities, but admissible regions \(C \subset R\) representing stable classes of registrations;
Laws: laws do not govern sources, but constrain transformations and invariants of traces within \(R\);
Invariants: invariants are not point values, but stable structures of variability preserved under admissible operations;
Measurement: measurement is not access, but transformation and fixation within an interface;
Statistical Constructs: quantities such as signal, noise, mean, variance, and distribution are operator-dependent representations, not properties of an underlying reality;
State: hidden states are not recovered, but constructed as model-dependent representations of admissible regions.
From Reconstruction to Structure
The traditional objective of science—reconstruction of an underlying source—must be replaced by a different objective:
The identification and analysis of stable structures within the space of registrations.
This entails a shift:
from sources to traces,
from points to regions,
from reconstruction to stability,
from hidden states to admissible domains.
Toward a General Interface Epistemology
The framework developed in this work suggests a generalization:
A General Interface Epistemology of Science.
In this framework:
the space of registrations \(R\) forms the empirical domain,
admissible regions \(C \subset R\) replace state variables,
dynamics is expressed as transformations of regions,
inference is a process of restriction and closure, not reconstruction.
This shift eliminates the need for hidden sources as primary objects and replaces them with directly observable, structurally stable regions.
Final Statement. Science does not access the source. It constructs stable descriptions within the constraints of its interfaces.
The formal development of this framework will be presented in subsequent work.
Cross-Disciplinary Evidence of Source Inaccessibility
The principle of source inaccessibility is not specific to physics. It appears systematically across scientific disciplines, each of which operates through its own interface and corresponding space of registrations.
In all cases, the same structural pattern is observed: \[\begin{equation} \text{Interaction} \xrightarrow{\mathcal{I}} R \xrightarrow{\mathcal{A}} \mathcal{K} \end{equation}\]
where the source remains outside the empirical domain.
Chemistry
In chemistry, the source (molecular structure and reaction mechanisms) is not directly observed.
The interface consists of:
spectroscopic measurements,
chromatographic separation,
macroscopic observables (color, temperature, phase).
The space \(R\) includes:
spectra,
concentration profiles,
reaction rates.
Chemical structure is inferred from these registrations:
multiple molecular configurations may produce similar spectra,
inverse reconstruction is model-dependent,
observed patterns correspond to equivalence classes of structures.
Thus, chemistry operates on structured traces rather than direct access to molecular reality.
Biology
In biology, the source (organismal or cellular processes) is not directly accessible.
The interface consists of:
experimental assays,
sequencing technologies,
imaging and behavioral observation.
The space \(R\) includes:
gene expression levels,
phenotypic traits,
recorded behaviors.
Biological interpretation involves identifying admissible regions: \[\begin{equation} C \subset R \end{equation}\]
Key property:
distinct underlying mechanisms may produce similar phenotypes,
identical observations do not uniquely determine biological processes.
Psychology
In psychology, the source (mental states, cognition, internal processes) is not observable.
The interface consists of:
behavioral responses,
self-reports,
physiological measurements.
The space \(R\) includes:
test results,
reaction times,
verbal and non-verbal responses.
Psychological constructs are inferred from patterns in \(R\):
multiple internal configurations may produce identical observable behavior,
constructs such as “intelligence” or “anxiety” are region-based abstractions,
no direct access to internal states is available.
Sociology
In sociology, the source (social structure or collective dynamics) is not directly observable.
The interface consists of:
surveys,
digital traces,
institutional records.
The space \(R\) includes:
response distributions,
network structures,
behavioral aggregates.
Social phenomena are constructed from these registrations:
different social configurations may yield similar observable patterns,
measurement procedures influence the structure of \(R\),
observed regularities do not uniquely determine underlying social mechanisms.
Medicine
In medicine, the presumed source (disease or pathological process) is not directly observed.
The interface consists of:
physiological responses,
laboratory measurements,
imaging modalities.
The space of registrations \(R\) includes:
symptoms,
biomarker values,
diagnostic images.
Diagnosis corresponds to identifying a region: \[\begin{equation} C \subset R \end{equation}\] of admissible clinical presentations.
Key property:
distinct underlying conditions may correspond to the same region \(C\),
identical observations do not imply a unique source.
Economics
In economics, the source (economic system) is not directly accessible.
The interface consists of:
statistical reporting,
market transactions,
data aggregation procedures.
The space \(R\) includes:
price series,
macroeconomic indicators,
aggregated indices.
These quantities are constructed:
they depend on definitions and aggregation rules,
they vary under changes of measurement protocols,
they do not uniquely determine an underlying economic state.
Thus, economic knowledge operates on structured traces rather than on direct access to the system.
History
In history, the source (past events) is not observable.
The interface consists of:
documents,
artifacts,
recorded narratives.
The space \(R\) includes:
textual records,
material evidence,
secondary reconstructions.
Historical knowledge is constructed by:
selecting compatible subsets of \(R\),
forming coherent regions \(C \subset R\),
imposing interpretative structures.
Different reconstructions may correspond to the same available traces.
Machine Learning
In machine learning, models are trained on datasets: \[\begin{equation} \mathcal{D} \subset R \end{equation}\]
The source (real-world generative process) is not available.
Learning consists of:
identifying stable patterns in \(\mathcal{D}\),
constructing functions consistent with observed data,
generalizing over admissible regions of inputs.
Generalization is therefore:
not recovery of the source,
but extension of learned structure within \(R\).
Different models may be equally consistent with the same dataset, reflecting non-identifiability.
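A minimal sketch of such non-identifiability (hypothetical dataset): two models that agree exactly on all observed points yet diverge outside them.

```python
# Two models consistent with the same finite dataset but differing
# elsewhere: the data do not determine a unique generative function.
data = [(-1.0, 1.0), (0.0, 0.0), (1.0, 1.0)]

def f1(x):
    return x * x          # quadratic hypothesis

def f2(x):
    return abs(x)         # piecewise-linear hypothesis

for x, y in data:
    assert f1(x) == y and f2(x) == y   # both fit the registrations exactly
assert f1(2.0) != f2(2.0)              # but generalize differently
```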
General Pattern
Across disciplines, the following structure is invariant:
the source is not part of the empirical domain,
observations are interface-dependent registrations,
multiple generative configurations correspond to the same observations,
knowledge is constructed over admissible regions in \(R\).
Conclusion
The inaccessibility of the source is a general structural property of scientific inquiry.
It follows that:
empirical knowledge is necessarily interface-constrained,
reconstruction of a unique source is not justified,
stability and invariance must be defined within the space of registrations.
Final Remark. The recurrence of the same structural pattern across fundamentally different disciplines indicates that source inaccessibility is not a domain-specific limitation, but a general condition of scientific knowledge.