Mode-Dependent Differentials and Stability of Limits in Partial Differential Equations
ORCID: 0009-0002-7724-5762
28 April 2026
Original language of the article: English
Abstract
Classical definitions of derivatives and partial differential equations are formulated in terms of limits that characterize asymptotic behavior but do not encode the operational structure of convergence. In applications, however, limits are realized through concrete procedures involving discretization, scaling, ordering, and regularization, which can influence the resulting objects.
We introduce convergence modes as primary mathematical objects that explicitly represent these operational procedures. Derivatives are defined as limits along modes, yielding mode-dependent differentials. We then define stability with respect to classes of modes and show that classical differentiability is equivalent to invariance of the local differential under admissible variations of such modes.
At the level of partial differential equations, this leads to a mode-dependent notion of solution: instead of a single object, a PDE generates a family of admissible limits associated with a class of modes. For nonlinear conservation laws, including the Burgers equation, we show that entropy solutions arise from restricted classes of modes, thereby reinterpreting entropy conditions as constraints on admissible convergence procedures rather than additional conditions on solutions.
This framework provides a unified and structurally explicit connection between analysis, numerical approximation, and physical observability, and suggests a reinterpretation of differential calculus and PDE theory as theories of stability over spaces of admissible convergence modes.
Keywords: convergence modes; mode-dependent calculus; partial differential equations; entropy solutions; conservation laws; weak solutions; vanishing viscosity; numerical schemes; stability of limits; mode-invariance; mode-sensitivity; operational convergence; multiscale analysis; stochastic modes; variational principles
Introduction
Classical differential calculus defines derivatives through limiting processes that are formally specified but operationally abstract. In particular, standard definitions of limits characterize asymptotic behavior without encoding the procedures by which convergence is realized. Classical PDE theory provides several frameworks for selecting physically relevant solutions, including entropy conditions [1], [2] and vanishing viscosity methods.
In practice, however, all limits are accessed through concrete approximation processes: discretization, refinement of scales, ordering of updates, and the introduction of regularization. These operational structures are not auxiliary; they determine how limits are approached and, in many cases, which limits are obtained.
This discrepancy becomes critical in the analysis of partial differential equations. For linear, well-posed problems, different approximation procedures typically converge to the same solution. In contrast, for nonlinear systems—especially conservation laws—different approximation mechanisms (e.g., numerical schemes or regularizations) can produce distinct limits. Classical theory resolves this by imposing additional admissibility conditions, such as entropy, which effectively select among competing limits.
This work makes the underlying operational structure explicit. We introduce modes of convergence as structured objects encoding discretization, scaling relations, ordering, and regularization. Limits, derivatives, and PDE solutions are then defined relative to classes of such modes.
Within this framework, classical differential objects are reinterpreted as invariants under admissible variations of convergence modes. Partial differential equations give rise not to single solutions, but to mode-dependent solution sets, and admissibility conditions correspond to restrictions on the class of modes. Numerical schemes and physical measurement procedures appear naturally as generators of such mode classes.
The resulting perspective provides a unified and structurally explicit framework connecting analysis, numerical approximation, and physical observability through the concept of convergence modes.
Implicit Mode Assumptions in Classical Definitions
Classical differential calculus defines derivatives through limit processes. A standard definition takes the form: \[\frac{\partial u}{\partial x_i}(p) = \lim_{h \to 0} \frac{u(p + h e_i) - u(p)}{h}.\]
This definition is typically interpreted as canonical and procedure-independent.
However, it implicitly assumes a specific structure of convergence:
only one coordinate varies (the others are fixed),
increments in other directions are suppressed,
the approach to the limit is effectively one-dimensional.
In other words, the classical partial derivative corresponds to a particular class of convergence modes, characterized by asymptotic suppression of all coordinates except \(x_i\).
This structure is not part of the explicit definition, but is required for its interpretation.
From the perspective of convergence modes, the classical derivative can be reinterpreted as:
a stable limit taken over a class of suppression-compatible modes.
This observation leads to two consequences:
derivatives are not intrinsically unique objects, but depend on the admissible class of convergence modes;
classical differentiability corresponds to invariance of the resulting limit under admissible variations of such modes.
Thus, the classical definition of the derivative can be seen as a special case within a broader mode-dependent framework.
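As a concrete numerical sketch (the function `u` and the point `p` below are illustrative choices, not taken from the text), the classical partial derivative is realized by exactly such a suppression mode: only the \(x\)-increment is nonzero, and the difference quotients stabilize to the analytic value.

```python
import numpy as np

# Hypothetical smooth function; the classical partial derivative in x
# is realized by a mode in which only the x-increment varies.
def u(x, y):
    return np.sin(x) * np.cos(y)

p = (0.5, 0.3)
hs = [10.0 ** (-k) for k in range(1, 8)]   # vanishing x-increments
quotients = [(u(p[0] + h, p[1]) - u(*p)) / h for h in hs]

exact = np.cos(p[0]) * np.cos(p[1])        # analytic value of d/dx sin(x)cos(y)
print(quotients[-1], exact)
```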
Modes of Convergence
Let \(A = \{x_1, \dots, x_n\}\) be a set of coordinate axes.
A mode of convergence is a sequence: \[r = \{(\Delta x_{1,n}, \dots, \Delta x_{n,n})\}_{n \in \mathbb{N}}\] such that: \[\Delta x_{i,n} \to 0 \quad \text{for all } i.\]
Additional structure may include:
scaling relations between components,
partial ordering of axes,
history dependence,
regularization parameters.
We denote by \(R\) a class of admissible modes.
Axioms of Convergence Modes
A convergence mode is not merely a sequence of increments, but a structured object encoding the operational procedure of approaching a limit.
Let \(A = \{x_1, \dots, x_n\}\) be coordinate axes.
A mode is a tuple: \[r = \{(\Delta x_{1,n}, \dots, \Delta x_{n,n}, \theta_n)\}_{n \in \mathbb{N}},\] where \(\theta_n\) denotes auxiliary parameters (regularization, scheme identifiers, history).
We impose the following axioms.
Axiom 1 (Vanishing).
\[\Delta x_{i,n} \to 0 \quad \text{for all } i.\]
Axiom 2 (Suppression).
For each axis \(x_i\), define the suppression class: \[R_i = \left\{ r : \frac{\Delta x_{j,n}}{\Delta x_{i,n}} \to 0 \quad \text{for all } j \neq i \right\}.\]
Axiom 3 (Ordering of Axes).
There exists a partial order \(\preccurlyeq\) on \(A\) such that: \[x_i \preccurlyeq x_j \;\Rightarrow\; \frac{\Delta x_{i,n}}{\Delta x_{j,n}} \to 0.\]
Axiom 4 (Ordering of Steps).
There exists a partial order \(\preceq\) on \(\mathbb{N}\) such that increments at step \(n\) depend only on steps \(k \preceq n\).
Axiom 5 (Topology on Modes).
The space of modes \(R\) carries a topology \(\tau\) generated by coordinatewise convergence of increments and auxiliary parameters.
This topology is used to define stability.
Axiom 6 (Compatibility).
The ordering of axes and the ordering of steps are compatible: along the step order, the asymptotic ordering of increments is preserved, i.e., \[n \preceq m \;\Rightarrow\; \frac{\Delta x_{i,m}}{\Delta x_{j,m}} = O\!\left( \frac{\Delta x_{i,n}}{\Delta x_{j,n}} \right).\]
These axioms turn modes into a mathematical structure rather than a heuristic construct.
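A minimal computational sketch of Axioms 1 and 2 (all names here are our own, illustrative choices): a mode is represented as a map from the step index to an increment tuple, and vanishing and suppression are checked asymptotically.

```python
# A mode sketched as a map n -> (dx_1, dx_2, theta); names illustrative.
def mode(n):
    # dx_2 vanishes faster than dx_1, so axis x_1 dominates.
    return (1.0 / (n + 1), 1.0 / (n + 1) ** 2, {"eps": 1.0 / (n + 1)})

def vanishes(r, i, N=10_000, tol=1e-3):
    """Axiom 1 (Vanishing): the i-th increment tends to 0."""
    return abs(r(N)[i]) < tol

def in_suppression_class(r, i, j, N=10_000, tol=1e-3):
    """Axiom 2 (Suppression): dx_j / dx_i -> 0 for j != i."""
    dx = r(N)
    return abs(dx[j] / dx[i]) < tol

print(vanishes(mode, 0), in_suppression_class(mode, 0, 1))
```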
Scheme-Induced Modes
Let \(\mathcal{A}\) be a numerical approximation scheme for a PDE.
Each iteration of the scheme produces increments: \[(\Delta x_{1,n}, \dots, \Delta x_{n,n}, \theta_n),\] where \(\theta_n\) includes scheme parameters (e.g., artificial viscosity, flux limiter, stencil width).
Definition. The class of modes induced by the scheme \(\mathcal{A}\) is: \[R(\mathcal{A}) = \left\{ r : r \text{ reproduces the increment and regularization structure of } \mathcal{A} \right\}.\]
The solution generated by the scheme is: \[u(\mathcal{A}) = \lim_{r \in R(\mathcal{A})} u(r).\]
Interpretation.
A numerical scheme is a generator of a class of convergence modes. Numerical approximation schemes such as Godunov, Lax–Friedrichs, and monotone finite volume methods are known to converge to entropy solutions [3], [4].
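The following sketch illustrates how a scheme generates modes (assumptions: periodic boundary, CFL number 0.45, our own function names). Each Lax–Friedrichs iteration fixes the increments \((\Delta t, \Delta x)\) that, together with the scheme identity, play the role of a mode in \(R(\mathcal{A})\); the limit it produces is the entropy solution.

```python
import numpy as np

def lax_friedrichs_burgers(u0, dx, n_steps, dt):
    """Each iteration realizes one step of a mode in R(A) for
    A = Lax-Friedrichs: (dt, dx) are the increments, the scheme
    identity plays the role of the auxiliary parameter theta_n."""
    u = u0.copy()
    for _ in range(n_steps):
        f = 0.5 * u ** 2
        u = 0.5 * (np.roll(u, 1) + np.roll(u, -1)) \
            - dt / (2 * dx) * (np.roll(f, -1) - np.roll(f, 1))
    return u

x = np.linspace(-1, 1, 201)
dx = x[1] - x[0]
dt = 0.45 * dx                      # CFL-stable for |u| <= 1
u0 = np.where(x < 0, 1.0, 0.0)
u = lax_friedrichs_burgers(u0, dx, n_steps=int(0.5 / dt), dt=dt)
# The entropy shock for this Riemann data moves with speed 1/2,
# so after t ~ 0.5 the smeared jump sits near x = 0.25.
```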
Mode-Dependent Regularization
Each mode includes auxiliary parameters: \[\theta_n = (\varepsilon_n, \delta_n, \text{scheme parameters}, \dots),\] where:
\(\varepsilon_n\) — viscosity,
\(\delta_n\) — smoothing scale,
scheme parameters — flux limiters, stencil width, etc.
Definition. Define the entropy-compatible class: \[R_{\mathrm{ent}} = \left\{ r : \varepsilon_n \sim \Delta x_n,\; \text{monotonicity conditions hold} \right\}.\]
Interpretation.
Entropy corresponds to restrictions on regularization patterns within modes.
Mode-Dependent Differentials
Let \(u: \mathbb{R}^n \to \mathbb{R}\) and \(p \in \mathbb{R}^n\).
Define: \[p_n = p + (\Delta x_{1,n}, \dots, \Delta x_{n,n}).\]
For a given mode \(r\), define the mode-dependent partial differential:
\[\partial_{x_i}^r u(p) = \lim_{n \to \infty} \frac{u(p_n) - u(p)}{\Delta x_{i,n}},\] provided the limit exists.
This definition depends explicitly on the mode \(r\).
Mode Calculus Rules
Theorem (Mode Chain Rule). Let \(u\) admit a mode-stable derivative and let \(g\) be classically differentiable. Then: \[d_R(g \circ u)(p) = g'(u(p)) \, d_R u(p).\]
Theorem (Mode Product Rule). If \(u\) and \(v\) admit mode-stable derivatives: \[d_R(uv)(p) = u(p)\,d_R v(p) + v(p)\,d_R u(p).\]
Interpretation.
Mode calculus forms a consistent differential calculus extending the classical one.
Beyond Classical Differentiability
Theorem. There exist functions that do not possess classical partial derivatives at a point but admit mode-stable derivatives for certain classes of modes.
Examples.
Lipschitz functions: directional derivatives exist almost everywhere (by Rademacher's theorem); suppression modes select stable ones.
BV functions: approximate gradients coincide with mode-stable derivatives.
Example: \[u(x,y) = |x| + y^2.\] The classical partial derivative in \(x\) fails at \(x=0\) (the two-sided limit does not exist), but one-sided suppression modes (e.g., those with \(\Delta x_{1,n} > 0\)) yield a stable derivative.
Interpretation.
Mode-stable calculus strictly extends classical differentiability.
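This example can be checked numerically (a sketch; the helper `mode_derivative` is ours and approximates the mode limit by a late term of the quotient sequence). Two \(x\)-suppression modes, approaching from the right and from the left, each yield a stable but different limit.

```python
def u(x, y):
    return abs(x) + y ** 2

def mode_derivative(f, p, increments, axis):
    """Approximate the mode-dependent partial derivative by the last
    difference quotient along a concrete sequence of increments."""
    x, y = p
    quots = [(f(x + dx, y + dy) - f(x, y)) / (dx, dy)[axis]
             for dx, dy in increments]
    return quots[-1]

# Two x-suppression modes at p = (0, 0): in both, dy/dx -> 0.
right = [(+1.0 / n, 1.0 / n ** 2) for n in range(2, 200)]
left  = [(-1.0 / n, 1.0 / n ** 2) for n in range(2, 200)]

d_right = mode_derivative(u, (0.0, 0.0), right, axis=0)
d_left  = mode_derivative(u, (0.0, 0.0), left, axis=0)
print(d_right, d_left)   # ~ +1 and ~ -1: mode-dependent, each stable
```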
Continuity in Mode Topology
Theorem. Let \(u\) admit a mode-stable derivative with respect to \(R\). If \(r_k \to r\) in the topology \(\tau\), then: \[d_{r_k} u(p) \to d_r u(p).\]
Interpretation.
Mode-stable derivatives vary continuously with respect to the mode.
Compatibility with Classical Differentiability
Let \(u:\mathbb{R}^n \to \mathbb{R}\) be classically differentiable at \(p\).
Proposition (Mode-Linear Approximation). For any mode \(r\) satisfying Axioms 1–6: \[u(p_n) = u(p) + \langle \nabla u(p), \Delta x_n \rangle + o(\|\Delta x_n\|).\]
Corollary. For any suppression class \(R_i\): \[\frac{u(p_n) - u(p)}{\Delta x_{i,n}} \;\longrightarrow\; \partial_{x_i} u(p),\] and the limit is stable with respect to perturbations of the mode in the topology \(\tau\).
Interpretation.
Classical differentiability is equivalent to invariance of the differential under admissible variations of modes.
Stability with Respect to Modes
Let \(R_i \subset R\) be the class of modes satisfying suppression: \[\frac{\Delta x_{j,n}}{\Delta x_{i,n}} \to 0 \quad (j \neq i).\]
Definition. We say that \(u\) admits a stable partial derivative with respect to \(x_i\) at \(p\) if there exists \(L\) such that:
for any \(\varepsilon > 0\), there exists a neighborhood \(U \subset R_i\) and \(N\) such that for all \(r \in U\) and \(n \geq N\): \[\left| \frac{u(p_n(r)) - u(p)}{\Delta x_{i,n}(r)} - L \right| < \varepsilon.\]
Proposition. If \(u\) is classically differentiable at \(p\), then the stable derivative exists and coincides with the classical one.
Thus, classical differentiability is equivalent to invariance under admissible perturbations of modes.
Application: Burgers Equation
Consider: \[u_t + \left(\frac{u^2}{2}\right)_x = 0,\] with transonic Riemann initial data: \[u(x,0) = \begin{cases} -1, & x < 0, \\ 1, & x > 0. \end{cases}\]
For this increasing data, weak solutions are non-unique: the stationary jump satisfies the Rankine–Hugoniot condition (its speed is \((f(1) - f(-1))/(1 - (-1)) = 0\)) but violates the entropy condition, whereas the rarefaction wave \(u(x,t) = x/t\) for \(|x| < t\) is the entropy solution.
Let modes include a regularization parameter \(\varepsilon_n\).
We consider two classes of modes.
Class \(R_1\) (Viscous modes).
\[\varepsilon_n \sim \Delta x_n.\] The limit is the rarefaction wave, i.e., the entropy solution.
Class \(R_2\) (Under-regularized modes).
\[\varepsilon_n \ll (\Delta x_n)^2.\] Here the regularization is insufficient to enforce entropy selection, and the limit may be the non-entropy stationary shock (as produced, for example, by non-dissipative schemes without an entropy fix).
Conclusion.
\[S(R_1) \neq S(R_2).\]
This demonstrates mode-sensitivity.
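A standard concrete illustration of this dichotomy (a sketch with our own helper names and grid choices; it uses the transonic Riemann data \(u_l = -1\), \(u_r = 1\), for which the stationary jump is a weak but non-entropy solution): a Murman–Roe-type upwind flux without an entropy fix preserves the non-entropy shock exactly, while the same scheme with explicit viscosity \(\varepsilon = \Delta x\) opens the entropy rarefaction.

```python
import numpy as np

def step(u, dx, dt, eps):
    """One step of a Murman-Roe-type upwind scheme for Burgers (periodic),
    plus explicit viscosity eps * u_xx as the mode's regularization."""
    f = 0.5 * u ** 2
    a = 0.5 * (u + np.roll(u, -1))             # Roe speed at i+1/2
    F = np.where(a >= 0, f, np.roll(f, -1))    # upwind by Roe speed, no fix
    u_new = u - dt / dx * (F - np.roll(F, 1))
    u_new += dt * eps / dx ** 2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
    return u_new

x = np.linspace(-1, 1, 401)
dx = x[1] - x[0]
dt = 0.2 * dx
u0 = np.where(x < 0, -1.0, 1.0)                # transonic Riemann data

u_visc, u_dry = u0.copy(), u0.copy()
for _ in range(int(0.4 / dt)):
    u_visc = step(u_visc, dx, dt, eps=dx)      # mode class R1: eps ~ dx
    u_dry  = step(u_dry,  dx, dt, eps=0.0)     # mode class R2: no viscosity

# R1 opens the rarefaction (u near 0 at x = 0);
# R2 preserves the entropy-violating stationary jump exactly.
```

The unregularized run never moves because the Roe flux equals \(f(\pm 1) = 1/2\) at every interface, so the non-entropy shock is an exact fixed point of the scheme; this is the classical motivation for entropy fixes.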
Interpretation
The derivative becomes an operational object:
dependent on convergence mode,
stabilized only under admissible variations,
linked to physical observability.
Similarly, PDE solutions are not uniquely determined by equations alone, but by admissible classes of convergence modes. This perspective is related to convergence results for numerical schemes, such as the Barles–Souganidis framework [5], where consistency and stability determine the limiting solution.
Mode-Structured Partial Differential Equations
PDE as Mode-Dependent Objects
The classical form of a partial differential equation is written as: \[F(u, \nabla u, \nabla^2 u, \dots) = 0.\]
This formulation implicitly assumes that:
derivatives are uniquely defined,
limits are independent of the convergence procedure,
local approximations are invariant.
Within the framework of mode-dependent analysis, these assumptions do not hold in general.
We replace the classical structure with: \[F_r(u) = 0,\] where \(F_r\) is an operator constructed from mode-dependent differentials: \[d_r u, \quad \nabla_r u, \quad \operatorname{div}_r u, \dots\] and may additionally depend on mode parameters such as regularization, scaling relations, and ordering of axes.
Solutions as Mode-Dependent Limits
For a fixed mode \(r\), we define a solution as a limit of approximations: \[u(r) = \lim_{n \to \infty} u_n(r),\] where \(u_n(r)\) is a discrete or regularized approximation scheme compatible with the mode \(r\).
Given a class of modes \(R\), we define the set of admissible solutions as: \[S(R) = \bigcup_{r \in R} S(r).\]
This set constitutes a solution cloud, determined not solely by the equation, but by the class of admissible modes.
Classical Weak Solutions as Maximal Clouds
For a sufficiently broad class of modes \(R_{\mathrm{all}}\), we obtain: \[S(R_{\mathrm{all}}) = \{\text{all weak solutions}\}.\]
This formalizes the key observation:
Partial differential equations, by themselves, do not determine dynamics; they define only a set of compatible weak solutions.
Entropy Solutions as Stable Mode Classes
For nonlinear conservation laws, such as the Burgers equation: \[u_t + f(u)_x = 0,\] classical entropy solutions arise as: \[S_{\mathrm{ent}} = S(R_{\mathrm{ent}}),\] where \(R_{\mathrm{ent}}\) is a class of modes satisfying:
coherent suppression of non-dominant directions,
compatible regularization,
stability with respect to perturbations in \(R\).
Thus, entropy solutions can be interpreted as arising from restricted classes of modes: \[R_{\mathrm{ent}} \subset R.\]
This leads to a central statement:
Entropy is not an additional condition imposed on solutions, but a restriction on admissible modes of convergence.
Linear PDE as Mode-Invariant Systems
For linear PDE (e.g., transport, diffusion, wave equations), we typically have: \[S(R_{\mathrm{all}}) = S(R_{\mathrm{ent}}) = \{u_{\mathrm{classical}}\}.\]
That is:
the solution cloud collapses to a single object,
different modes do not produce distinct limits,
stability with respect to \(R\) is automatically satisfied.
This explains why linear PDE are well-posed and insensitive to the choice of approximation procedure: they are invariant under admissible mode variations.
Nonlinear PDE as Mode-Sensitive Systems
For nonlinear PDE, the situation changes fundamentally:
different modes produce different limits,
different schemes yield different solutions,
different regularizations induce different dynamics.
This formalizes the observation:
Nonlinearity manifests as sensitivity to the structure of the convergence process.
Mode-Weak Formulation
Definition. A function \(u\) is a mode-weak solution of: \[u_t + f(u)_x = 0\] if for all test functions \(\varphi\): \[\int u\,\partial_t^R \varphi + f(u)\,\partial_x^R \varphi \,dx\,dt = 0.\]
Interpretation.
Weak formulation becomes dependent on the class of modes.
Mode Entropy Condition
Let \(\eta\) be convex and \(q\) its entropy flux.
Definition. A mode-weak solution satisfies the entropy condition if: \[\partial_t^R \eta(u) + \partial_x^R q(u) \le 0\] in the sense of distributions.
Interpretation.
Entropy corresponds to monotonicity under mode-stable differentiation.
Mode Completeness
Theorem. Let \(u\) be a weak solution of a PDE admitting a consistent discrete approximation.
Then there exists a mode \(r \in R_{\mathrm{all}}\) such that: \[u = \lim_{n \to \infty} u_n(r).\]
Interpretation.
\[S(R_{\mathrm{all}}) = \{\text{all weak solutions}\}.\]
Physical Observability as Stability
A physically observable solution is defined as a stable limit over admissible modes: \[u_{\mathrm{phys}} = \lim_{r \in R_{\mathrm{adm}}} u(r),\] stable under admissible perturbations of \(r\).
Thus, the notion of physical relevance is tied not to the equation alone, but to the stability of the induced solution under perturbations of the convergence structure.
This extends the earlier principle:
The derivative becomes an operational object linked to physical observability.
to the level of partial differential equations.
Mode-Dependent Differential Operators
Mode-Stable Gradient
Let \(u:\mathbb{R}^n \to \mathbb{R}\) and let \(R\) be a class of admissible modes.
Assume that for each coordinate axis \(x_i\), the stable partial derivative \[\partial_{x_i} u(p; R)\] exists in the sense of stability with respect to modes suppressing all other directions.
Definition. The mode-stable gradient of \(u\) at \(p\) is defined as: \[\nabla_R u(p) = \left( \partial_{x_1} u(p; R), \dots, \partial_{x_n} u(p; R) \right).\]
This object is:
invariant with respect to admissible variations within \(R\),
dependent on the class of modes,
equal to the classical gradient \(\nabla u\) when \(u\) is classically differentiable and \(R\) is sufficiently rich.
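A numerical sketch of the last point (the helper `mode_gradient` is ours): assembling the gradient from one suppression mode per axis, with the other increments vanishing quadratically, recovers the classical gradient for a smooth function.

```python
import numpy as np

def mode_gradient(f, p, n=4000):
    """Assemble a gradient from one suppression mode per axis:
    along axis i the increment is h, all others are O(h^2)."""
    p = np.asarray(p, dtype=float)
    d = len(p)
    h = 1.0 / (n + 1)
    grad = np.empty(d)
    for i in range(d):
        dx = np.full(d, h ** 2)   # suppressed directions
        dx[i] = h                 # dominant direction
        grad[i] = (f(p + dx) - f(p)) / dx[i]
    return grad

f = lambda z: z[0] ** 2 + 3.0 * z[1]
g = mode_gradient(f, [1.0, 2.0])
print(g)   # close to the classical gradient (2, 3)
```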
Mode-Stable Divergence
Let \(v:\mathbb{R}^n \to \mathbb{R}^n\), with components \(v = (v_1, \dots, v_n)\).
Definition. The mode-stable divergence is defined as: \[\operatorname{div}_R v(p) = \sum_{i=1}^n \partial_{x_i} v_i(p; R).\]
Each component is differentiated using stable derivatives consistent with the class \(R\).
When classical differentiability holds and \(R\) is sufficiently broad, we recover the standard divergence.
Mode-Stable Curl (in \(\mathbb{R}^3\))
Let \(v:\mathbb{R}^3 \to \mathbb{R}^3\), with \(v = (v_1, v_2, v_3)\).
Definition. The mode-stable curl is defined as: \[\operatorname{curl}_R v = \left( \partial_y v_3 - \partial_z v_2,\; \partial_z v_1 - \partial_x v_3,\; \partial_x v_2 - \partial_y v_1 \right),\] where all partial derivatives are understood in the mode-stable sense.
Reformulation of PDE Using Mode-Dependent Operators
Linear Transport Equation
Classically: \[u_t + c\,u_x = 0.\]
In the mode-dependent formulation: \[\partial_t u(t,x; R) + c\,\partial_x u(t,x; R) = 0.\]
If the equation is mode-invariant, then for a sufficiently broad class \(R\):
stable derivatives exist,
all admissible modes produce the same solution.
Nonlinear Conservation Law
\[u_t + f(u)_x = 0.\]
In mode-dependent form: \[\partial_t u(t,x; R) + \partial_x f(u(t,x; R); R) = 0.\]
For broad \(R\): \[S(R) = \{\text{all weak solutions}\}.\]
For restricted \(R_{\mathrm{ent}}\): \[S(R_{\mathrm{ent}}) = \{\text{entropy solutions}\}.\]
Elliptic Equation
\[-\Delta u = f \quad \Longrightarrow \quad -\operatorname{div}_R(\nabla_R u) = f.\]
For regular functions and linear operators: \[\nabla_R u = \nabla u, \quad \operatorname{div}_R = \operatorname{div},\] and the solution cloud collapses to the classical solution.
Mode Classes and Solution Clouds
| PDE | Mode Class \(R\) | Solution Set \(S(R)\) | Status |
|---|---|---|---|
| Linear transport | broad \(R\) | single classical solution | mode-invariant |
| Linear transport | \(R' \subset R\) | same solution | stable |
| Burgers | broad \(R\) | all weak solutions | non-unique |
| Burgers | \(R_{\mathrm{ent}}\) | entropy solution | selection by \(R\) |
| Conservation laws | broad \(R\) | weak solutions | cloud |
| Conservation laws | \(R_{\mathrm{phys}}\) | physical solution | physics = constraint on \(R\) |
| Elliptic (Poisson) | broad \(R\) | unique classical solution | mode-invariant |
| Wave equation | broad \(R\) | unique classical solution | mode-invariant |
Interpretation
The table illustrates the central principle:
linear PDE are invariant under admissible mode variations,
nonlinear PDE are sensitive to the structure of modes,
physical selection of solutions corresponds to restricting the class \(R\),
stability with respect to \(R\) determines observability.
Relation to Directional, Gâteaux, and Fréchet Derivatives
Classical differential calculus provides several generalizations of partial derivatives, including directional, Gâteaux, and Fréchet derivatives.
Directional derivative.
The directional derivative of \(u\) at point \(p\) in direction \(v\) is defined as: \[D_v u(p) = \lim_{h \to 0} \frac{u(p + h v) - u(p)}{h}.\]
This construction fixes a single direction and considers convergence along a one-dimensional path.
Gâteaux derivative.
The Gâteaux derivative generalizes this idea to arbitrary directions: \[D_v u(p) = \lim_{h \to 0} \frac{u(p + h v) - u(p)}{h},\] whenever the limit exists for all \(v\).
Fréchet derivative.
The Fréchet derivative requires uniform linear approximation: \[u(p + h) = u(p) + L(h) + o(\|h\|),\] where \(L\) is a linear operator.
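The uniformity in this definition can be probed numerically (a sketch; `frechet_residual` is our helper): for a smooth function the normalized remainder is uniformly small over randomly sampled directions, i.e., the local differential does not depend on the approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def frechet_residual(u, grad_u, p, h):
    """Normalized remainder |u(p+h) - u(p) - <grad u(p), h>| / ||h||.
    Frechet differentiability means this vanishes uniformly in direction."""
    return abs(u(p + h) - u(p) - grad_u(p) @ h) / np.linalg.norm(h)

u = lambda z: np.sin(z[0]) * z[1]
grad_u = lambda z: np.array([np.cos(z[0]) * z[1], np.sin(z[0])])
p = np.array([0.7, -1.3])

# Sample many small-increment "modes" in random directions.
worst = 0.0
for _ in range(200):
    v = rng.normal(size=2)
    h = 1e-5 * v / np.linalg.norm(v)
    worst = max(worst, frechet_residual(u, grad_u, p, h))
print(worst)   # uniformly small: the differential is direction-invariant here
```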
Mode-theoretic interpretation.
All these notions correspond to specific classes of convergence modes:
directional derivative: modes concentrated along a fixed direction;
Gâteaux derivative: families of directional modes;
Fréchet derivative: invariance under a broad class of modes.
In particular:
The Fréchet derivative exists if and only if the local differential is invariant under admissible variations of convergence modes.
Generalization.
The mode-dependent differential \(d_R u(p)\) extends these constructions:
it does not restrict convergence to a single direction;
it allows suppression, ordering, and multiscale structure;
it incorporates regularization and history dependence;
it defines a differential as a stable limit over a class of modes.
Thus, classical derivatives appear as special cases corresponding to restricted mode classes.
Interpretation.
This perspective suggests that:
directional derivatives capture one-dimensional mode restrictions;
Gâteaux derivatives capture families of such restrictions;
Fréchet differentiability corresponds to full mode-invariance;
mode-dependent calculus provides a unified framework encompassing all of them.
Relation to Weak Derivatives and Distributions
Classical analysis extends differentiation beyond pointwise definitions through the notion of weak derivatives and distributions.
Weak derivative.
A function \(u \in L^1_{\mathrm{loc}}(\Omega)\) admits a weak derivative \(\partial_{x_i} u\) if: \[\int_\Omega u \, \partial_{x_i} \varphi \, dx = - \int_\Omega (\partial_{x_i} u) \, \varphi \, dx\] for all test functions \(\varphi \in C_c^\infty(\Omega)\).
This definition does not rely on pointwise limits, but instead uses integration against smooth test functions.
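The defining identity can be verified numerically for \(u(x) = |x|\), whose weak derivative is \(\operatorname{sgn}(x)\); the test function below, a modulated bump on \((-1,1)\), is our own choice.

```python
import numpy as np

# u(x) = |x| has weak derivative sign(x). Check the identity
#   ∫ u phi' dx = -∫ sign(x) phi dx
# against a smooth, compactly supported test function on (-1, 1).
x = np.linspace(-1.0, 1.0, 200001)[1:-1]       # open interval: avoid 1/(1-x^2) blowup
dx = x[1] - x[0]
bump = np.exp(-1.0 / (1.0 - x ** 2))           # standard C_c^infinity bump
dbump = bump * (-2.0 * x) / (1.0 - x ** 2) ** 2
phi = np.sin(3.0 * x) * bump                   # modulated bump, still C_c^infinity
dphi = 3.0 * np.cos(3.0 * x) * bump + np.sin(3.0 * x) * dbump

lhs = np.sum(np.abs(x) * dphi) * dx            # ∫ u phi'
rhs = -np.sum(np.sign(x) * phi) * dx           # -∫ (weak derivative) phi
print(lhs, rhs)                                # agree up to quadrature error
```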
Mode-theoretic interpretation.
From the perspective of convergence modes, weak derivatives can be interpreted as limits taken over broad classes of admissible modes.
More precisely, let \(R_{\mathrm{weak}}\) denote a class of modes satisfying:
admissible regularization (e.g., mollification),
convergence in \(L^1_{\mathrm{loc}}\),
stability with respect to the mode topology.
Then the weak derivative corresponds to a mode-stable differential: \[\partial_{x_i} u = d_{R_{\mathrm{weak}}} u.\]
Distributions as mode-stable limits.
A distribution can be interpreted as the limit of regularized functions: \[u_\varepsilon \to u,\] where the regularization parameter \(\varepsilon \to 0\).
In the mode framework, such regularizations are encoded as auxiliary parameters within the mode: \[r = \{(\Delta x_n, \varepsilon_n, \theta_n)\}.\]
Thus, distributions arise as limits over classes of modes that include smoothing operations.
Interpretation.
This perspective yields the following correspondence:
classical derivative: limit along highly restricted modes (suppression);
weak derivative: limit along broad classes of regularizing modes;
distribution: equivalence class of mode-stable limits.
Conceptual consequence.
Weak differentiation does not eliminate dependence on the limiting procedure; it enlarges the admissible class of modes.
Thus:
Weak derivatives correspond to invariance of the differential under a broad class of regularizing convergence modes.
Connection to PDE.
For nonlinear PDE, weak solutions are precisely those functions that are stable under such broad mode classes.
This explains why weak solutions form large solution clouds: \[S(R_{\mathrm{weak}}) = \{\text{all weak solutions}\}.\]
Additional conditions (entropy, viscosity, monotonicity) restrict the mode class and reduce the cloud to physically relevant solutions.
Relation to Sobolev, BV, and Measure-Valued Solutions
Classical analysis further refines weak differentiability through the introduction of functional spaces such as Sobolev spaces, functions of bounded variation (BV), and measure-valued solutions.
Sobolev spaces.
A function \(u \in W^{1,p}(\Omega)\) admits weak derivatives in \(L^p(\Omega)\). This imposes integrability constraints on both the function and its derivatives.
Mode-theoretic interpretation.
Sobolev regularity can be interpreted as a restriction on admissible convergence modes.
Let \(R_{W^{1,p}}\) denote the class of modes satisfying:
convergence in \(L^p\),
boundedness of mode-dependent gradients: \[\sup_{r \in R_{W^{1,p}}} \|d_r u\|_{L^p} < \infty,\]
stability with respect to the mode topology.
Then: \[u \in W^{1,p} \quad \Longleftrightarrow \quad u \text{ is mode-stable over } R_{W^{1,p}}.\]
BV functions.
Functions of bounded variation satisfy \(TV(u) < \infty\); approximating sequences in this class satisfy: \[\sup_n TV(u_n) < \infty.\]
In the mode framework, this corresponds to a class \(R_{BV}\) of modes with uniformly bounded variation:
\[\sup_{r \in R_{BV}} TV(u(r)) < \infty.\]
Thus:
BV regularity corresponds to mode-stability under variation-bounded convergence modes.
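A small sketch of this bound (the tanh profiles below stand in for mollifications of a step function; the names are ours): the approximations sharpen as \(\delta_n \to 0\) while their total variation stays uniformly bounded by that of the limiting step, which is the defining property of the class \(R_{BV}\).

```python
import numpy as np

def total_variation(u):
    """Discrete total variation of a sampled profile."""
    return np.sum(np.abs(np.diff(u)))

x = np.linspace(-1, 1, 4001)

# Approximations u_n of a unit step, smoothed at scales delta_n -> 0.
# Mollification never increases TV, so sup_n TV(u_n) stays bounded.
tvs = []
for delta in [0.2, 0.1, 0.05, 0.01]:
    u_n = 0.5 * (1.0 + np.tanh(x / delta))   # smooth surrogate mollification
    tvs.append(total_variation(u_n))
print(tvs)   # all bounded by the TV of the step, which is 1
```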
Measure-valued solutions.
In nonlinear PDE, oscillations and concentrations may prevent strong convergence. This leads to measure-valued solutions, represented by Young measures.
In the mode framework, such behavior arises naturally when the mode class allows:
oscillatory increments,
lack of strong compactness,
multiple limiting branches.
Thus, measure-valued solutions correspond to:
\[S(R_{\mathrm{osc}}),\] where \(R_{\mathrm{osc}}\) is a class of highly oscillatory modes.
Hierarchy of mode classes.
These correspondences can be summarized as:
| Solution concept | Mode class |
|---|---|
| Classical differentiability | narrow, suppression-based modes |
| Sobolev spaces | integrability-controlled modes |
| BV functions | variation-controlled modes |
| Weak solutions | broad admissible modes |
| Measure-valued solutions | oscillatory mode classes |
Conceptual consequence.
Functional spaces can be reinterpreted as classifications of convergence modes.
Regularity is not only a property of functions, but a property of admissible classes of convergence modes.
Connection to PDE.
Different functional settings correspond to different mode classes:
strong solutions: narrow, stable modes,
weak solutions: broad mode classes,
entropy solutions: restricted mode classes,
measure-valued solutions: highly non-compact mode classes.
Thus, the hierarchy of solution concepts in PDE theory can be understood as a hierarchy of admissible convergence modes.
Mode-Invariance of Partial Differential Equations
Basic Definition
Let \(\mathcal{R}\) be a family of classes of modes (i.e., subsets of the space of all admissible modes).
For each \(R \in \mathcal{R}\), let \(S(R)\) denote the corresponding solution cloud.
Definition. A PDE is said to be mode-invariant with respect to \(\mathcal{R}\) if: \[\forall R_1, R_2 \in \mathcal{R}: \quad S(R_1) = S(R_2).\]
That is, the choice of admissible mode class does not affect the set of solutions.
Strong Mode-Invariance
Let \(R_{\mathrm{all}}\) be a sufficiently broad class of admissible modes.
Definition. A PDE is said to be strongly mode-invariant with respect to \(R_{\mathrm{all}}\) if: \[\forall R \subseteq R_{\mathrm{all}}: \quad S(R) = S(R_{\mathrm{all}}).\]
In this case, any restriction of admissible modes leaves the solution set unchanged.
Mode-Invariance of Linear PDE
Let \(L\) be a linear differential operator with smooth coefficients.
Consider: \[L u = f.\]
Let \(R_{\mathrm{all}}\) be a sufficiently broad class of admissible modes.
Theorem (Mode-Invariance of Linear PDE). Assume:
\(L\) is linear,
the PDE \(Lu = f\) admits a unique weak solution,
the operator is continuous under mode-stable differentiation.
Then: \[\forall R \subseteq R_{\mathrm{all}}: \quad S(R) = S(R_{\mathrm{all}}).\]
Interpretation.
Linear PDE are strongly mode-invariant: the solution cloud collapses to a single object independently of the mode class.
Mode-Sensitive PDE
Definition. A PDE is said to be mode-sensitive if there exist mode classes \(R_1, R_2\) such that: \[S(R_1) \neq S(R_2).\]
In this case, the equation alone does not determine the solution; the choice of admissible modes becomes part of the problem definition.
Interpretation
The distinction between mode-invariant and mode-sensitive PDE leads to the following classification:
Mode-invariant PDE: the solution cloud does not depend on the class of modes. The equation rigidly determines the solution.
Mode-sensitive PDE: the solution cloud depends on the admissible class of modes. The equation alone is underdetermined without specifying \(R\).
Examples
Linear PDE (e.g., transport, diffusion, wave equations) are natural candidates for mode-invariant systems.
Elliptic PDE (e.g., Poisson equation under standard conditions) are typically strongly mode-invariant.
Nonlinear conservation laws are mode-sensitive:
broad \(R\) yields all weak solutions,
restricted \(R\) (e.g., entropy-compatible modes) yields physically relevant solutions.
It is well known that nonlinear conservation laws admit multiple weak solutions, and additional admissibility criteria are required [1], [2], [6].
Conceptual Consequence
This classification formalizes the following principle:
Linearity corresponds to invariance with respect to admissible convergence modes, whereas nonlinearity manifests as sensitivity to the structure of the convergence process.
Mode Geometry
Mode Tangent Structure
Define the mode tangent cone at \(p\): \[T_p^R = \left\{ \lim_{n \to \infty} \frac{\Delta x_n}{\|\Delta x_n\|} : r \in R,\ \text{the limit exists} \right\}.\]
This generalizes the classical tangent space.
Properties
For broad \(R\): \[T_p^R = S^{n-1}.\]
For the suppression class \(R_i\): \[T_p^{R_i} = \{\pm e_i\}.\]
For ordered modes: the tangent structure reflects the hierarchy of axes.
Mode Differential as Functional
The mode differential defines: \[d_R u(p) : T_p^R \to \mathbb{R}.\]
The classical differential is recovered when \(T_p^R = S^{n-1}\), i.e., when every direction is admissible.
Soundness and Completeness
Theorem (Soundness). If \(u\) is a classical solution, then:
\(u\) is stable for any broad \(R\),
mode-dependent operators coincide with classical ones.
Theorem (Completeness). If \(u\) is mode-stable for a broad class \(R\), then \(u\) is a weak solution.
Interpretation.
The mode framework is both sound and complete with respect to classical PDE theory.
Mode Interpretation of Kruzhkov Entropy Inequalities
Consider the scalar conservation law: \[u_t + f(u)_x = 0.\]
The Kruzhkov entropy condition [1] requires that for all constants \(k \in \mathbb{R}\): \[\partial_t |u - k| + \partial_x \big( \operatorname{sgn}(u - k)(f(u) - f(k)) \big) \le 0\] in the sense of distributions.
Classical Interpretation.
This inequality is imposed as an additional admissibility condition selecting the physically relevant solution among weak solutions.
Mode Interpretation.
In the mode-dependent framework, the entropy inequality is not a primary condition on solutions but a consequence of restricting the class of admissible convergence modes: what is classically imposed as an additional constraint on solutions reappears here as a restriction on the procedures along which limits are taken.
Let \(R_{\mathrm{ent}}\) be a class of modes satisfying:
viscosity-consistent scaling: \(\varepsilon_n \sim \Delta x_n\),
monotonicity or dissipation constraints,
stability with respect to the mode topology \(\tau\).
Then any limit \(u(r)\) with \(r \in R_{\mathrm{ent}}\) satisfies the Kruzhkov entropy inequalities.
Statement.
For all \(r \in R_{\mathrm{ent}}\), the corresponding limit \(u(r)\) is an entropy solution.
Conversely, any entropy solution can be realized as a limit along a sequence of modes in \(R_{\mathrm{ent}}\).
Interpretation.
The entropy condition does not impose an additional constraint on solutions. It characterizes the class of convergence modes that produce physically admissible limits.
Thus: \[\{\text{entropy solutions}\} \;=\; S(R_{\mathrm{ent}}).\]
Statement 1. Entropy inequalities are not additional constraints on weak solutions. They are invariants of limits generated by entropy-compatible convergence modes.
Entropy Conditions Revisited: A Mode-Theoretic Interpretation
The classical theory of nonlinear conservation laws recognizes that weak solutions are not unique and must be supplemented by admissibility criteria.
As emphasized in [2], entropy conditions are introduced to select physically relevant solutions among all weak solutions.
Traditionally, entropy inequalities are formulated as additional constraints imposed on admissible solutions. In this interpretation, the partial differential equation alone does not determine the solution; the entropy condition plays the role of an external selection principle.
In the present framework, we propose a different interpretation.
Entropy conditions are not additional constraints on solutions, but restrictions on the admissible class of convergence modes.
More precisely, let \(R_{\mathrm{all}}\) denote a broad class of convergence modes generating all weak solutions: \[S(R_{\mathrm{all}}) = \{\text{all weak solutions}\}.\]
Let \(R_{\mathrm{ent}} \subset R_{\mathrm{all}}\) denote a restricted class of modes satisfying:
compatible regularization (e.g., vanishing viscosity),
monotonicity or entropy stability conditions,
stability with respect to the mode topology.
Then the entropy solution is characterized by: \[S(R_{\mathrm{ent}}) = \{u_{\mathrm{ent}}\}.\]
In this formulation, the role of entropy is structural rather than axiomatic: it restricts the operational procedures used to realize limits.
This perspective is consistent with classical results:
vanishing viscosity selects entropy solutions,
monotone schemes converge to entropy solutions,
entropy inequalities ensure \(L^1\)-contraction and uniqueness.
However, instead of interpreting these as independent mechanisms, we view them as different realizations of a single principle: selection of admissible convergence modes.
Thus, the classical statement
“entropy selects the physically relevant solution”
can be reformulated as:
“the physically relevant solution is the stable limit generated by an entropy-compatible class of convergence modes.”
Mode Interpretation of the Kruzhkov \(L^1\)-Contraction Property
Consider the scalar conservation law: \[u_t + f(u)_x = 0.\]
In the classical theory of scalar conservation laws, entropy solutions satisfy the Kruzhkov \(L^1\)-contraction property. If \(u\) and \(v\) are entropy solutions with initial data \(u_0\) and \(v_0\), then: \[\|u(\cdot,t)-v(\cdot,t)\|_{L^1} \le \|u_0-v_0\|_{L^1} \quad \text{for all } t \ge 0.\]
This property is central because it implies uniqueness and continuous dependence on initial data.
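The discrete counterpart of this property can be checked directly. By the Crandall–Tartar lemma, conservative monotone schemes are \(L^1\)-contractive step by step; the following Python sketch (our illustration, with Rusanov flux and parameters chosen as assumptions) evolves two nearby initial data for the Burgers equation and records the discrete \(L^1\) distance.

```python
import numpy as np

# Discrete L1-contraction for a monotone (entropy-compatible) mode:
# Rusanov scheme for Burgers' equation, two nearby initial data.
f = lambda u: 0.5 * u**2

def step(u, dt, dx):
    ul, ur = u[:-1], u[1:]
    a = np.maximum(np.abs(ul), np.abs(ur))
    F = 0.5 * (f(ul) + f(ur)) - 0.5 * a * (ur - ul)
    un = u.copy()
    un[1:-1] -= dt / dx * (F[1:] - F[:-1])
    return un

N = 400
dx = 2.0 / N
x = -1.0 + dx * (np.arange(N) + 0.5)
u = np.where(x < 0.0, 1.0, 0.0)             # moving shock
v = u.copy()
v[(x > -0.5) & (x < -0.3)] = 0.6            # localized perturbation
dt = 0.4 * dx                               # CFL with max |u| <= 1

l1 = [dx * np.abs(u - v).sum()]
for _ in range(200):
    u, v = step(u, dt, dx), step(v, dt, dx)
    l1.append(dx * np.abs(u - v).sum())     # ||u - v||_{L1}: nonincreasing
```

The recorded distances are nonincreasing in time, the discrete expression of the contraction estimate above.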
Mode Interpretation.
Let \(R_{\mathrm{ent}}\) be an entropy-compatible class of convergence modes. For each \(r \in R_{\mathrm{ent}}\), let: \[u^{(r)}(t)\] denote the limit generated by the mode \(r\).
We say that \(R_{\mathrm{ent}}\) is \(L^1\)-contractive if for any two mode-generated limits \(u^{(r)}\) and \(v^{(s)}\), with \(r,s \in R_{\mathrm{ent}}\), one has: \[\|u^{(r)}(\cdot,t)-v^{(s)}(\cdot,t)\|_{L^1} \le \|u_0-v_0\|_{L^1} \quad \text{for all } t \ge 0.\]
Statement.
Entropy-compatible mode classes are \(L^1\)-contractive: \[R_{\mathrm{ent}} \;\Rightarrow\; \|u^{(r)}(\cdot,t)-v^{(s)}(\cdot,t)\|_{L^1} \le \|u_0-v_0\|_{L^1}.\]
Consequently: \[S(R_{\mathrm{ent}})=\{u_{\mathrm{ent}}\}\] for fixed initial data.
Interpretation.
The \(L^1\)-contraction property is not merely a property of selected solutions. In the mode-dependent framework, it expresses stability of the entropy-compatible class of convergence modes.
Thus: \[\text{entropy compatibility} \;\Rightarrow\; L^1\text{-contraction} \;\Rightarrow\; \text{uniqueness and stability}.\]
Equivalently, Kruzhkov uniqueness can be reformulated as:
The entropy solution is the unique limit generated by the \(L^1\)-contractive class of admissible convergence modes.
Conceptual Consequence.
The classical role of the \(L^1\)-contraction property is to prove uniqueness of entropy solutions. In the mode framework, the same property characterizes a class of modes whose induced solution cloud collapses to a single element: \[S(R_{\mathrm{ent}}) = \{u_{\mathrm{ent}}\}.\]
Therefore, \(L^1\)-contraction is a mode-stability principle: it states that admissible perturbations of initial data and admissible variations of entropy-compatible modes cannot generate divergent solution branches.
Mode-Structured Evolution and Well-Posedness
Mode-Evolution Operator
Let a PDE be written abstractly as: \[u_t = F(u), \quad u(0) = u_0.\]
For a fixed mode \(r\), let \(u^{(r)}(t)\) denote the solution obtained as the limit of a discrete or regularized approximation compatible with \(r\).
Definition. For a class of modes \(R\), define the mode-evolution operator: \[E_R(t)u_0 = \lim_{r \in R} u^{(r)}(t),\] whenever the limit exists and is stable with respect to the topology \(\tau\) on \(R\).
Interpretation.
Evolution is defined as a mode-stable limit of operational procedures.
Mode-Stability of Evolution
Theorem. Let \(R_1, R_2\) be two classes of modes. If \[S(R_1) = S(R_2),\] then for all \(t \ge 0\): \[E_{R_1}(t) = E_{R_2}(t).\]
Proof Sketch.
If the solution clouds coincide, then for any initial data \(u_0\), admissible limits over \(R_1\) and \(R_2\) yield the same solution at each time. Stability of mode-dependent differentials ensures consistency in time.
Interpretation.
Mode-invariance of the PDE implies mode-invariance of the dynamics.
Mode-Sensitive Evolution
Theorem. If a PDE is mode-sensitive, i.e., \[S(R_1) \neq S(R_2),\] then there exists \(t > 0\) such that: \[E_{R_1}(t) \neq E_{R_2}(t).\]
Interpretation.
Different classes of modes induce different dynamical evolutions.
Mode-Semigroup Property
For a fixed class \(R\), define: \[E_R(t) : u_0 \mapsto u(t).\]
Definition. The evolution is a mode-semigroup if: \[E_R(t+s) = E_R(t) \circ E_R(s) \quad \text{for all } t,s \ge 0.\]
Theorem. If a PDE is strongly mode-invariant, then \(E_R(t)\) forms a semigroup.
If a PDE is mode-sensitive, the semigroup property may fail for broad classes \(R\), and is typically restored only for restricted classes (e.g., \(R_{\mathrm{ent}}\)).
Interpretation.
Entropy restores the semigroup property by restricting the admissible class of modes.
Mode-Dependent Well-Posedness
Classical well-posedness requires existence, uniqueness, and stability.
In the mode framework, these become properties of a class \(R\).
Mode-Existence.
A PDE admits mode-existence for class \(R\) if: \[E_R(t)u_0 \text{ exists for all } u_0.\]
Mode-Uniqueness.
A PDE admits mode-uniqueness for class \(R\) if: \[E_R(t)u_0 \text{ is independent of the choice of } r \in R.\]
Mode-Stability.
A PDE admits mode-stability if: \[u_0^{(k)} \to u_0 \;\Rightarrow\; E_R(t)u_0^{(k)} \to E_R(t)u_0.\]
Interpretation.
Well-posedness is a property of the mode class, not of the PDE alone.
Examples
Linear Transport.
Mode-invariant \(\Rightarrow\) semigroup \(\Rightarrow\) classical well-posedness.
Burgers Equation.
For broad \(R\):
non-uniqueness,
failure of semigroup structure,
instability.
For restricted \(R_{\mathrm{ent}}\):
uniqueness,
semigroup restored,
stability restored.
Interpretation.
Entropy is the minimal restriction on modes that restores well-posedness.
Interpretation
This section establishes the following structural correspondence:
modes define operational procedures,
stable limits define differential operators,
mode classes define solution clouds,
mode-invariance corresponds to linearity,
mode-sensitivity corresponds to nonlinearity,
entropy selects mode classes restoring dynamical well-posedness.
Final Statement.
PDE dynamics is not determined by the equation alone, but by the class of admissible convergence modes governing its evolution.
Mode-Dependent Compactness and Convergence
Compactness arguments play a central role in the analysis of nonlinear PDE [7].
Motivation
Classical compactness results (e.g., Arzelà–Ascoli, BV compactness, compensated compactness) rely on uniform bounds and continuity properties that are typically formulated along single sequences.
In the mode framework, convergence must be stable with respect to variations of the mode. This requires reformulating compactness in terms of classes of modes.
Mode-Bounded Families
Let \(\{u_n(r)\}\) be a family indexed by \(r \in R\).
Definition. The family is mode-bounded if: \[\sup_{r \in R} \sup_n \|u_n(r)\|_{L^\infty} < \infty,\] with bounds uniform with respect to the mode topology \(\tau\).
Mode-Equicontinuity
Definition. The family is mode-equicontinuous if for every \(\varepsilon > 0\) there exists \(\delta > 0\) such that: \[|x-y| < \delta \;\Rightarrow\; |u_n(r)(x) - u_n(r)(y)| < \varepsilon\] for all \(r \in R\) and all sufficiently large \(n\), with \(\delta\) stable under perturbations of \(r\).
Mode-Compactness
Theorem (Mode-Compactness). Let \(\{u_n(r)\}\) be mode-bounded and mode-equicontinuous.
Then for each \(r \in R\) there exists a subsequence converging to a limit \(u(r)\) uniformly on compact sets, and the limits depend continuously on the mode: \[r_k \to r \;\Rightarrow\; u(r_k) \to u(r).\]
Interpretation.
Compactness becomes a property of the pair \((\text{family}, R)\).
BV Compactness
Theorem. If: \[\sup_{r \in R} \sup_n \operatorname{TV}(u_n(r)) < \infty,\] then: \[u_n(r) \to u(r) \quad \text{in } L^1_{\text{loc}},\] with stability under mode perturbations.
Compensated Compactness (Mode Form)
Statement. If a family satisfies:
uniform \(L^\infty\) bounds,
entropy inequalities stable with respect to \(R\),
then convergence holds in \(L^p_{\mathrm{loc}}\) for all \(p<\infty\).
Interpretation.
Entropy conditions enforce compactness through restrictions on admissible modes.
Interpretation
Mode-dependent compactness provides:
existence mechanisms for mode-weak solutions,
stability of limits under perturbations of procedures,
a bridge between numerical approximation and analytical convergence.
Mode-Dependent Numerical Analysis
Schemes as Mode Generators
Each numerical scheme \(\mathcal{A}\) induces a class of modes \(R(\mathcal{A})\).
Interpretation.
Numerical schemes are operational realizations of convergence modes.
Mode-Consistency
Definition. A scheme is mode-consistent if: \[\lim_{r \in R(\mathcal{A})} d_r u = F(u),\] i.e., the discrete operator converges to the PDE operator in the mode sense.
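Mode-consistency can be verified numerically. The sketch below (an illustration under our own choices: Rusanov flux, smooth periodic data) computes the discrete flux divergence of a smooth profile along a mode \(\Delta x_n \to 0\) and compares it with the exact operator \(f(u)_x\); the error decays at first order in \(\Delta x\), as expected for this flux.

```python
import numpy as np

# Consistency of the Rusanov discretization with the operator f(u)_x,
# checked along a mode dx_n -> 0 for smooth periodic data u = sin(x).
f = lambda u: 0.5 * u**2

def discrete_divergence_error(N):
    dx = 2 * np.pi / N
    x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    U = np.sin(x)
    ul, ur = U, np.roll(U, -1)                        # periodic interfaces
    a = np.maximum(np.abs(ul), np.abs(ur))
    F = 0.5 * (f(ul) + f(ur)) - 0.5 * a * (ur - ul)   # F[i] = F_{i+1/2}
    div = (F - np.roll(F, 1)) / dx                    # (F_{i+1/2} - F_{i-1/2}) / dx
    exact = np.sin(x) * np.cos(x)                     # d/dx f(sin x)
    return np.abs(div - exact).max()

e1 = discrete_divergence_error(200)
e2 = discrete_divergence_error(400)   # halving dx roughly halves the error
```

Here the numerical dissipation contributes the leading \(O(\Delta x)\) term, so refining the mode halves the consistency error.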
Mode-Stability
Definition. A scheme is mode-stable if: \[r_k \to r \;\Rightarrow\; u_n(r_k) \to u_n(r) \quad \text{for all } n.\]
Mode Convergence Principle
Theorem. For a consistent scheme: \[\text{mode-stability} \;\Longleftrightarrow\; \text{mode convergence}.\]
Interpretation.
Stability and convergence are equivalent when stability is defined at the level of modes.
Entropy-Compatible Schemes
Monotone schemes (e.g., Godunov, Lax–Friedrichs) satisfy: \[R(\mathcal{A}) \subset R_{\mathrm{ent}},\] and therefore converge to entropy solutions.
High-Order Schemes
High-order schemes (ENO, WENO, DG) generate mode classes with:
nonlinear adaptive regularization,
history-dependent increments,
non-monotone fluxes.
Interpretation.
Higher-order accuracy increases sensitivity to the structure of modes.
Mode-Dependent CFL
Classical CFL conditions are replaced by:
\[(\Delta t_n, \Delta x_n, \theta_n) \in R(\mathcal{A})\]
subject to compatibility with:
suppression constraints,
ordering constraints,
regularization constraints.
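The coupling of \(\Delta t_n\) and \(\Delta x_n\) inside a mode is exactly what a CFL condition expresses. A minimal Python sketch (our illustration; upwind advection with standard parameters) shows that the limit exists only along modes with \(\Delta t_n / \Delta x_n \le 1\): a CFL-compatible mode stays bounded, a CFL-violating mode blows up.

```python
import numpy as np

# For upwind advection u_t + u_x = 0, the mode must couple dt_n and dx_n:
# limits exist only along modes with lam = dt/dx <= 1 (the CFL constraint).
def upwind(lam, N=200, steps=400):
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    u = np.sin(2 * np.pi * x)
    for _ in range(steps):
        u = u - lam * (u - np.roll(u, 1))   # first-order upwind, periodic
    return np.abs(u).max()

amp_stable = upwind(0.9)    # convex combination: amplitude stays <= 1
amp_blowup = upwind(1.1)    # high-frequency roundoff grows like 1.2^steps
```

In mode language, the admissible class \(R(\mathcal{A})\) of the upwind scheme contains only sequences with \(\Delta t_n \le \Delta x_n\); outside this class no stable limit exists.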
Interpretation
Mode-dependent numerical analysis provides:
a unified language for schemes and limits,
a structural explanation of entropy selection,
classification of schemes via induced mode classes,
a direct link between computation and physical observability.
Mode-Dependent Multiscale Systems
Motivation
Many physical, biological, and engineered systems exhibit behavior across multiple interacting scales:
spatial scales,
temporal scales,
regularization scales,
discretization scales,
stochastic or noise scales.
Classical multiscale analysis usually assumes a fixed hierarchy of scales. In the mode framework, this hierarchy is treated as part of the convergence mode itself.
Multiscale Modes
A multiscale mode is a mode whose increments include several scale parameters: \[r = \{(\Delta x_{i,n}, \Delta t_n, \varepsilon_n, \delta_n, \lambda_n, \theta_n)\}_{n\in\mathbb{N}},\] where \(\varepsilon_n\) may represent viscosity, \(\delta_n\) a smoothing scale, \(\lambda_n\) a homogenization or averaging scale, and \(\theta_n\) auxiliary parameters.
Definition. A class \(R_{\mathrm{ms}}\) is a multiscale mode class if it imposes asymptotic relations such as: \[\varepsilon_n \ll \delta_n \ll \Delta x_n, \qquad \Delta t_n \sim \lambda_n^2, \qquad \lambda_n \to 0.\]
Interpretation.
Multiscale structure is encoded directly in the mode.
Multiscale Suppression
Suppression extends from coordinate increments to scale parameters: \[\frac{\Delta x_{j,n}}{\Delta x_{i,n}} \to 0, \qquad \frac{\varepsilon_n}{\Delta x_{i,n}} \to 0, \qquad \frac{\lambda_n}{\varepsilon_n} \to 0,\] depending on the hierarchy imposed by \(R_{\mathrm{ms}}\).
Interpretation.
Modes encode which scales dominate and which scales vanish.
Mode-Dependent Multiscale Limits
Consider a PDE with several small parameters: \[u_t + f(u)_x = \varepsilon u_{xx} + \delta u_{xxx} + \lambda g(u).\]
A multiscale mode determines which terms survive in the limiting equation.
Principle (Mode-Dependent Multiscale Limit). Different asymptotic relations inside \(R_{\mathrm{ms}}\) induce different limiting PDE: \[\varepsilon_n \gg \delta_n,\lambda_n \quad \Rightarrow \quad \text{viscous limit},\] \[\delta_n \gg \varepsilon_n,\lambda_n \quad \Rightarrow \quad \text{dispersive limit},\] \[\lambda_n \gg \varepsilon_n,\delta_n \quad \Rightarrow \quad \text{forcing-dominated limit}.\]
Interpretation.
The limiting PDE depends on the multiscale structure of the mode.
Homogenization as Mode Selection
Classical homogenization assumes a separation between microscopic and macroscopic scales. In the mode framework this becomes: \[R_{\mathrm{hom}} = \{r : \lambda_n \ll \Delta x_n\}.\]
Then: \[u(r) \to u_{\mathrm{hom}}.\]
Interpretation.
Homogenization is a restriction on the multiscale mode class.
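A classical one-dimensional computation illustrates this mode selection (our sketch; the oscillating coefficient and parameters are assumptions). For \(-(a(x/\lambda)u')' = 1\) with \(a(y) = 2 + \sin(2\pi y)\), limits along modes with \(\lambda_n \to 0\) are governed by the harmonic mean \(a_{\mathrm{eff}} = \sqrt{3}\), not the arithmetic mean \(2\): the homogenized limit depends on the structure of the mode class, not only on the pointwise values of \(a\).

```python
import numpy as np

# Solve -(a(x/lam) u')' = 1 on (0,1), u(0) = u(1) = 0, by a flux-form
# finite-difference scheme resolving the oscillation scale lam.
lam, N = 0.02, 2000
h = 1.0 / N
a = lambda y: 2.0 + np.sin(2 * np.pi * y)

xm = (np.arange(N) + 0.5) * h            # interface midpoints x_{i+1/2}
am = a(xm / lam)                         # oscillating coefficients a(x/lam)

main = (am[:-1] + am[1:]) / h**2         # rows for interior nodes x_1..x_{N-1}
off = -am[1:-1] / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
u = np.linalg.solve(A, np.ones(N - 1))

u_mid = u[N // 2 - 1]                    # numerical u(0.5)
u_harm = 0.25 / (2 * np.sqrt(3.0))       # homogenized: a_eff = harmonic mean
u_arith = 0.25 / (2 * 2.0)               # naive arithmetic-mean prediction
```

The computed midpoint value matches the harmonic-mean (homogenized) prediction and excludes the arithmetic-mean one, consistent with \(u(r) \to u_{\mathrm{hom}}\) along \(R_{\mathrm{hom}}\).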
Multiscale Mode-Weak Solutions
A multiscale mode-weak solution satisfies a weak formulation in which all derivatives are understood relative to \(R_{\mathrm{ms}}\); integrating by parts (once in the viscous term, twice in the dispersive term) gives: \[\int u\,\partial_t^{R_{\mathrm{ms}}}\varphi + f(u)\,\partial_x^{R_{\mathrm{ms}}}\varphi - \varepsilon\,\partial_x^{R_{\mathrm{ms}}}u\,\partial_x^{R_{\mathrm{ms}}}\varphi + \delta\,\partial_x^{R_{\mathrm{ms}}}u\,\partial_{xx}^{R_{\mathrm{ms}}}\varphi + \lambda\,g(u)\varphi \,dx\,dt = 0.\]
Interpretation.
The weak formulation becomes scale-aware.
Interpretation
Mode-dependent multiscale systems:
unify homogenization, averaging, and asymptotic analysis,
encode scale hierarchies directly in convergence modes,
explain why different asymptotic regimes yield different macroscopic equations,
connect multiscale numerical methods with analytical limits.
Stochastic Modes and Randomized Convergence
Motivation
Many physical and computational processes involve randomness:
stochastic forcing,
Monte Carlo discretization,
randomized time stepping,
noisy measurements,
stochastic regularization.
Classical stochastic PDE theory often treats randomness as an external forcing term. In the mode framework, randomness may also be part of the convergence procedure itself.
Stochastic Modes
A stochastic mode is a mode whose increments include random variables: \[r = \{(\Delta x_{i,n}, \Delta t_n, \varepsilon_n, \xi_n, \theta_n)\}_{n\in\mathbb{N}},\] where \(\xi_n\) denotes random perturbations, noise, sampling error, or stochastic forcing.
Definition. A stochastic mode class \(R_{\mathrm{stoch}}\) consists of modes satisfying probabilistic constraints such as: \[\xi_n \in L^p(\Omega), \qquad \mathbb{E}[\xi_n]=0, \qquad \operatorname{Var}(\xi_n)\to 0.\]
Interpretation.
Randomness is encoded in the operational procedure of convergence.
Stochastic Mode-Dependent Differentials
A stochastic mode-dependent differential is defined by: \[d_r u(p) = \lim_{n\to\infty} \frac{u(p_n)-u(p)}{\Delta x_{i,n}},\] where the points \(p_n\) are generated by increments that may include the random perturbations \(\xi_n\).
Statement (Stochastic Mode-Stability). If random perturbations vanish sufficiently fast, for example: \[\sum_n \mathbb{E}|\xi_n| < \infty,\] so that \(\sum_n |\xi_n| < \infty\) and hence \(\xi_n \to 0\) almost surely, then stochastic perturbations do not destroy mode-stable differentiability, and the corresponding differential exists almost surely whenever the deterministic mode-stable limit exists.
Interpretation.
Small random perturbations preserve mode-stable derivatives.
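A concrete stochastic mode (our illustration; the function, noise law, and rates are assumptions chosen to satisfy the summability hypothesis) perturbs the increments \(\Delta x_n = 1/n\) by noise \(\xi_n\) with \(|\xi_n| \le n^{-3}\), so \(\sum_n \mathbb{E}|\xi_n| < \infty\); the perturbed quotients still recover the classical derivative.

```python
import numpy as np

# Stochastic mode for the derivative of u(x) = x^2 at p = 1: increments
# dx_n = 1/n carry random perturbations xi_n with |xi_n| <= n^{-3},
# so sum E|xi_n| < infinity and the quotients converge to u'(1) = 2.
rng = np.random.default_rng(0)
u = lambda x: x**2

n = np.arange(1, 10_001, dtype=float)
dx = 1.0 / n
xi = rng.uniform(-1.0, 1.0, size=n.size) * n**-3.0   # summable noise

q = (u(1.0 + dx + xi) - u(1.0)) / dx                 # stochastic quotients
```

The noise contributes \(2\xi_n/\Delta x_n \le 2n^{-2}\) to the quotient, so its effect vanishes faster than the deterministic truncation error.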
Stochastic Mode-Weak Solutions
A stochastic mode-weak solution satisfies: \[\mathbb{E} \left[ \int u\,\partial_t^{R_{\mathrm{stoch}}}\varphi + f(u)\,\partial_x^{R_{\mathrm{stoch}}}\varphi \,dx\,dt \right] = 0.\]
Interpretation.
The weak formulation becomes probabilistic and mode-dependent.
Randomized Numerical Schemes
Randomized schemes, such as Monte Carlo flux approximations or stochastic time stepping, induce stochastic mode classes: \[R(\mathcal{A}_{\mathrm{rand}}) = R_{\mathrm{stoch}}.\]
Statement (Stochastic Mode Convergence). If a randomized scheme is mode-consistent and stochastically stable, then: \[u(\mathcal{A}_{\mathrm{rand}}) \to u(R_{\mathrm{stoch}})\] in probability.
Interpretation.
Randomized numerical schemes converge to stochastic mode-weak solutions.
Stochastic Entropy Conditions
For an entropy pair \((\eta, q)\) with \(\eta\) convex and \(q' = \eta' f'\), the entropy inequalities become probabilistic: \[\mathbb{E} \left[ \partial_t^{R_{\mathrm{stoch}}}\eta(u) + \partial_x^{R_{\mathrm{stoch}}}q(u) \right] \le 0.\]
Interpretation.
Entropy becomes a probabilistic monotonicity condition over stochastic modes.
Interpretation
Stochastic modes provide:
a unified view of deterministic and stochastic convergence,
a framework for randomized numerical methods,
a probabilistic extension of mode-stability,
a natural interpretation of noisy measurements and stochastic regularization.
Unification Principle
The preceding development suggests a unifying principle underlying classical analysis, functional analysis, and the theory of partial differential equations.
Core statement.
Limits, derivatives, and solutions of partial differential equations are not primary objects, but arise as stable invariants of admissible classes of convergence modes.
Modes as primary structure.
A convergence mode encodes:
discretization scales,
suppression relations between variables,
ordering of limiting processes,
regularization mechanisms,
possible stochastic or multiscale effects.
Thus, convergence is not a single operation, but a structured operational procedure.
Differentiation as stability.
Differential objects arise as stable limits:
classical derivatives correspond to highly restricted modes,
directional and Gâteaux derivatives correspond to constrained mode families,
Fréchet differentiability corresponds to invariance under admissible mode perturbations,
weak derivatives correspond to stability under broad regularizing modes.
PDE as mode-dependent systems.
A partial differential equation does not determine a unique solution in isolation.
Instead, it defines a mapping \[R \;\mapsto\; S(R) \in \mathcal{P}(U),\] assigning to each admissible class of convergence modes \(R\) the corresponding solution cloud \(S(R) \subset U\).
Different classes of modes produce different solution sets.
Classification of PDE.
This leads to a structural classification:
Mode-invariant PDE: the solution set is independent of the choice of admissible modes. These include linear, well-posed equations.
Mode-sensitive PDE: the solution set depends on the class of modes. These include nonlinear conservation laws.
Entropy and admissibility.
Entropy conditions, vanishing viscosity, and monotone numerical schemes are reinterpreted as restrictions on admissible mode classes:
\[S(R_{\mathrm{ent}}) = \{u_{\mathrm{ent}}\}.\]
Thus, admissibility is not imposed on solutions, but on the operational procedures used to obtain them.
Functional spaces as mode classes.
Sobolev, BV, and measure-valued frameworks correspond to different classes of modes:
integrability-controlled modes (Sobolev),
variation-controlled modes (BV),
oscillatory modes (measure-valued solutions).
Regularity becomes a property of admissible convergence modes rather than of functions alone.
Numerical and physical interpretation.
Numerical schemes act as generators of mode classes. Physically observable solutions correspond to limits that are stable under admissible variations of modes.
Thus, computation and physical measurement are naturally incorporated into the same framework.
Final perspective.
Classical analysis describes the outcomes of convergence.
The present framework describes the structure of convergence itself.
This shift from outcomes to operational structure provides a unified foundation linking analysis, computation, and physical modeling.
Conclusion
We have introduced a framework in which limits, derivatives, and solutions of partial differential equations are parameterized by classes of convergence modes.
In this framework, a limit is not treated as an abstract, procedure-independent object, but as the outcome of a structured operational process. Convergence modes make explicit the dependence of limiting behavior on discretization, scaling, ordering, regularization, and, more generally, on the admissible structure of approximation procedures.
Classical differential objects are recovered as stable invariants under admissible variations of modes. In particular, classical differentiability is characterized by invariance of the local differential with respect to perturbations within suppression-compatible mode classes. This provides a structural reinterpretation of differentiability as a stability property rather than a purely pointwise condition.
At the level of partial differential equations, solutions naturally organize into mode-dependent clouds \(S(R)\). Within this structure:
admissibility conditions such as entropy correspond to restrictions on classes of modes,
linear, well-posed PDE are strongly mode-invariant and generate a single solution independent of the mode class,
nonlinear conservation laws are intrinsically mode-sensitive, with different mode classes producing different solution clouds.
This yields a structural distinction between equations whose behavior is invariant under admissible convergence procedures and those for which the choice of mode is an essential part of the problem specification.
The framework extends naturally to dynamical systems. Evolution is described by mode-dependent evolution operators, and classical notions such as semigroup structure and well-posedness are recovered as properties of restricted mode classes. In particular, entropy conditions can be interpreted as minimal constraints restoring mode-stability, semigroup structure, and uniqueness.
Beyond deterministic settings, the framework accommodates multiscale and stochastic structures by incorporating scale hierarchies and randomness directly into convergence modes. This provides a unified language for homogenization, asymptotic limits, and stochastic perturbations, all treated as variations within the space of admissible operational procedures.
A central consequence is the reinterpretation of numerical methods: a numerical scheme is not merely an approximation, but a generator of a class of convergence modes. Convergence, stability, and entropy selection become properties of these induced mode classes, establishing a direct structural link between computation, analysis, and physical observability.
Conceptually, this shifts the role of limits from primary objects to derived invariants, with convergence modes forming the underlying structure. Differential calculus and PDE theory thus emerge as theories of stability over spaces of admissible operational procedures.
This perspective suggests several directions for further development:
refinement of the topology of mode spaces and associated compactness theory,
classification of PDE via mode-invariance and mode-sensitivity,
systematic study of multiscale and stochastic mode classes,
development of mode-based numerical analysis and adaptive schemes.
Overall, the proposed framework provides a unified and structurally explicit foundation linking analysis, numerical computation, and physical modeling through the concept of convergence modes.
Extensions.
The mode-based framework extends naturally beyond PDE theory.
In particular:
variational and optimization problems can be reformulated in terms of mode-admissible variations,
geometric structures can be generalized using mode-dependent tangent cones and differentials,
categorical formulations arise by viewing mode classes as objects and solution clouds as functors.
These directions are developed in subsequent work. Related ideas appear in variational convergence theory, in particular \(\Gamma\)-convergence [8], where the limiting behavior depends on the notion of convergence.
Axiomatic Formulation of Convergence Modes
Basic Objects
Let:
\(X = \mathbb{R}^n\) be the underlying space,
\(p \in X\) a point,
\(A = \{x_1, \dots, x_n\}\) the coordinate axes,
\(u : X \to \mathbb{R}\) a function.
Convergence Modes
Definition (Mode).
A convergence mode is a sequence: \[\rho = \{(\Delta x_{1,n}, \dots, \Delta x_{n,n}, \theta_n)\}_{n \in \mathbb{N}},\] where:
\(\Delta x_{i,n} \in \mathbb{R}\),
\(\theta_n \in \Theta\) encodes auxiliary parameters (regularization, scheme parameters, etc.).
Admissible class of modes.
Let \(R\) be a class of admissible modes.
Axiom A1 (Vanishing).
\[\Delta x_{i,n} \to 0 \quad \text{for all } i.\]
Axiom A2 (Suppression).
For each axis \(x_i\), define a subclass \(R_i \subset R\) whose modes satisfy: \[\frac{|\Delta x_{j,n}|}{|\Delta x_{i,n}|} \to 0 \quad (j \neq i).\]
Axiom A3 (Ordering of axes).
There exists a partial order \(\preceq\) on \(A\) such that: \[x_i \prec x_j \;\Rightarrow\; \frac{|\Delta x_{i,n}|}{|\Delta x_{j,n}|} \to 0.\]
Axiom A4 (Ordering of steps).
There exists a partial order on \(\mathbb{N}\) such that: \[(\Delta x_{i,n}, \theta_n) \text{ depends only on } (\Delta x_{i,k}, \theta_k), \; k \le n.\]
Axiom A5 (Topology on modes).
The space \(R\) is endowed with a topology \(\tau\) generated by coordinatewise convergence of \(\Delta x_{i,n}\) and \(\theta_n\).
Axiom A6 (Compatibility).
If \(n \le m\), then: \[\frac{\Delta x_{i,m}}{\Delta x_{j,m}} = O\!\left(\frac{\Delta x_{i,n}}{\Delta x_{j,n}}\right).\]
Mode-Dependent Differentials
Define: \[p_n(\rho) = p + (\Delta x_{1,n}, \dots, \Delta x_{n,n}).\]
Mode derivative along a mode.
\[\partial_i^\rho u(p) = \lim_{n \to \infty} \frac{u(p_n(\rho)) - u(p)}{\Delta x_{i,n}},\] when the limit exists.
Stable mode derivative.
We say that \(\partial_i^R u(p)\) exists if there exists \(L\) such that:
for any \(\varepsilon > 0\), there exists a neighborhood \(U \subset R_i\) and \(N\) such that: \[\left| \frac{u(p_n(\rho)) - u(p)}{\Delta x_{i,n}} - L \right| < \varepsilon\] for all \(\rho \in U\), \(n \ge N\).
Mode-stable gradient.
\[\nabla_R u(p) = (\partial_1^R u(p), \dots, \partial_n^R u(p)).\]
Relation to Classical Differentiability
If \(u\) is classically differentiable at \(p\), then: \[u(p_n) = u(p) + \langle \nabla u(p), \Delta x_n \rangle + o(\|\Delta x_n\|).\]
Consequently: \[\partial_i^R u(p) = \frac{\partial u}{\partial x_i}(p),\] for any suppression class \(R_i\).
Interpretation. Classical differentiability corresponds to stability of the differential under admissible convergence modes.
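This stability can be observed numerically (our illustration; the test function, point, and randomized mode parameters are assumptions): for a smooth \(u\), difference quotients along many randomly generated suppression modes in \(R_1\) all converge to the same classical partial derivative.

```python
import numpy as np

# For classically differentiable u(x,y) = sin(x)cos(y), quotients along
# any suppression mode in R_1 (dy_n / dx_n -> 0) recover du/dx at p.
rng = np.random.default_rng(1)
u = lambda x, y: np.sin(x) * np.cos(y)
p = (0.3, 0.7)
exact = np.cos(0.3) * np.cos(0.7)        # du/dx at p

limits = []
for _ in range(20):                      # 20 randomly drawn suppression modes
    c = rng.uniform(0.1, 10.0)           # mode-dependent scale factor
    alpha = rng.uniform(1.5, 3.0)        # suppression rate: dy ~ dx^alpha
    dx = c * 2.0 ** -np.arange(5, 30, dtype=float)
    dy = dx ** alpha                     # dy/dx -> 0: the mode lies in R_1
    q = (u(p[0] + dx, p[1] + dy) - u(*p)) / dx
    limits.append(q[-1])

spread = max(limits) - min(limits)       # all modes yield the same limit
```

The spread across modes is negligible, illustrating that classical differentiability makes the mode derivative invariant over the suppression class.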
PDE as Mode-Dependent Objects
A PDE: \[F(u, \nabla u, D^2 u, \dots) = 0\] is replaced by a family: \[F_\rho(u) = 0,\] where operators are defined via mode-dependent differentials.
Solution along a mode.
\[u(\rho) = \lim_{n \to \infty} u_n^\rho.\]
Solution cloud.
\[S(R) = \{u(\rho) : \rho \in R\}.\]
Mode Invariance and Entropy
Mode-invariance.
A PDE is mode-invariant if, for every admissible class \(R\): \[S(R) = \{u^*\}.\]
Mode-sensitivity.
A PDE is mode-sensitive if there exist admissible classes \(R_1, R_2\) with: \[S(R_1) \ne S(R_2).\]
Entropy interpretation.
For conservation laws: \[u_t + f(u)_x = 0,\]
there exists a broad class \(R_{\mathrm{all}}\): \[S(R_{\mathrm{all}}) = \{\text{all weak solutions}\}.\]
Restricting to \(R_{\mathrm{ent}}\): \[S(R_{\mathrm{ent}}) = \{u_{\mathrm{ent}}\}.\]
Interpretation. Entropy is a restriction on admissible convergence modes, not an additional condition on solutions.