The paper considers an approach to removing noise from a large array of sparse data under weak dependence. The approach is based on the method of controlling the false discovery rate (FDR). For this approach, the rate of convergence of the mean-square risk estimator to the normal law is obtained.
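For orientation, the thresholding step behind such FDR-based denoising can be sketched as follows; this is a minimal illustration of the Benjamini–Hochberg rule applied to noisy sparse coefficients with a known noise level, not the estimator analyzed in the paper.

import numpy as np
from scipy.stats import norm

def fdr_threshold(y, sigma=1.0, q=0.05):
    # Two-sided p-values under the pure-noise hypothesis N(0, sigma^2).
    n = len(y)
    p = 2.0 * norm.sf(np.abs(y) / sigma)
    order = np.argsort(p)
    # Benjamini-Hochberg: reject all hypotheses up to the last index
    # where the sorted p-value lies below the line i*q/n.
    below = np.nonzero(p[order] <= q * np.arange(1, n + 1) / n)[0]
    keep = np.zeros(n, dtype=bool)
    if below.size:
        keep[order[:below[-1] + 1]] = True
    # Hard thresholding: coefficients not flagged as discoveries are zeroed.
    return np.where(keep, y, 0.0)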
The particle method is a numerical method for modeling large systems based on their Lagrangian description.
The discontinuous particle method is of the “particle–particle” type and consists of two main stages: predictor and corrector. At the predictor stage, the particles are shifted. At the corrector stage, a partner for interaction is selected among the neighbors of the particle, namely the neighbor that most influences the local dynamics of the system. The “discontinuity” of the method lies in the fact that the density is corrected for only one of the interacting particles; as a result, the distribution density is restored within a minimal region defined by just the two selected particles, so the front is smeared over only one particle.
The novelty of the method presented in this article is that the density of the particles is put in the foreground, not their shape. The criterion for restructuring is the preservation of the projection of the mass onto the plane passing through the centers of mass of the interacting particles. The neighbor for density correction is selected using the “impact parameter”. The density is reconstructed using the two selected interacting particles, which makes it possible to reduce a two-dimensional problem to a one-dimensional one.
The effectiveness of the method is demonstrated using the Crowley test as an example. It is shown that applying the Runge–Kutta method at the predictor stage significantly increases the accuracy of the numerical solution.
Our Lagrangian approach to constructing the particle method contrasts with another frequently used particle–particle method, smoothed particle hydrodynamics (SPH).
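For the predictor stage, a classical fourth-order Runge–Kutta shift of a particle through a given velocity field might look like the sketch below; this is a generic illustration of the scheme, with a placeholder velocity field u(x, t), not the authors' code.

def rk4_shift(x, t, dt, u):
    # One classical RK4 step advecting a particle position x
    # through the velocity field u(x, t) over the time step dt.
    k1 = u(x, t)
    k2 = u(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = u(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = u(x + dt * k3, t + dt)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0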
The inverse Sturm–Liouville problem consists in determining the coefficient (potential) in the stationary Schrödinger equation on a segment based on a set of eigenvalues. The paper considers a numerical solution to the inverse problem based on a finite set of the first eigenvalues of two Sturm–Liouville problems. The remaining eigenvalues are set according to the classical asymptotics.
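For instance, for the Dirichlet problem −y″ + q(x)y = λy on [0, π], the classical asymptotics read

λ_n = n² + (1/π) ∫₀^π q(x) dx + o(1),  n → ∞,

so the tail of the spectrum can be completed from these leading terms; the normalization of the interval and the boundary conditions used in the paper may differ.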
The method of solving the inverse spectral problem is based on a one-to-one correspondence between the inverse spectral problem and the nonstationary inverse problem for a telegraph equation with a variable coefficient (potential). The reduction to the nonstationary problem is carried out analytically by inverting the Laplace transform according to the Mellin formula. An explicit formula for the reaction function in the inverse scattering problem is obtained.
The inverse scattering problem for the telegraph equation is to determine an unknown coefficient from the reaction function. This problem is solved numerically by inverting a difference scheme. The paper presents the results of solving a series of inverse Sturm–Liouville problems. In conclusion, it is noted that the number of given frequencies corresponds to the number of harmonics in the expansion of the desired potential.
Keywords:
Schrödinger and telegraph equations, reaction function, inverse scattering problem, inversion of a difference scheme
The author considers an inverse coefficient problem for a model of sorption dynamics. The inverse problem is reduced to a nonlinear operator equation for an unknown coefficient. The differentiability of the nonlinear operator is proved. The Newton–Kantorovich method and the modified Newton–Kantorovich method are constructed for the numerical solution of the inverse problem. The results of numerical calculations are presented.
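Schematically, after discretization the two iterations take the following form; F and dF are placeholders for the discretized operator and its derivative, not the paper's concrete sorption model.

import numpy as np

def newton_kantorovich(F, dF, u0, tol=1e-10, max_iter=50, modified=False):
    # Solve F(u) = 0 by u_{k+1} = u_k - dF(.)^{-1} F(u_k); the modified
    # variant freezes the derivative at the initial guess u0.
    u = u0.copy()
    J0 = dF(u0) if modified else None
    for _ in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        J = J0 if modified else dF(u)
        u = u - np.linalg.solve(J, r)
    return u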
Keywords:
mathematical model of sorption dynamics, inverse problem, nonlinear operator equation, operator derivative, Newton–Kantorovich method
When describing the group behavior of high-frequency traders, a boundary value problem arises based on the concept of mean field games. The system consists of two coupled partial differential equations: the Hamilton–Jacobi–Bellman equation, which describes the evolution of the average payoff function in backward time, and the Kolmogorov–Fokker–Planck equation, which describes the evolution of the probability density distribution of traders in forward time. The system is ill-conditioned due to the turnpike effect. Under certain assumptions, it is possible to reduce the system to a set of Riccati equations; however, the question of the well-posedness of the reduced problem remains open. This work investigates this question, specifically the conditions for the existence and uniqueness of the solution to the boundary value problem depending on the model parameters.
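As a hedged numerical illustration of such two-point problems (a toy scalar Riccati equation with placeholder coefficients a, b, c and horizon T, not the model's actual reduced system), one can shoot on the unknown initial value and match the prescribed terminal condition:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy Riccati equation p' = a*p**2 + b*p + c on [0, T] with p(T) prescribed.
a, b, c, T, p_T = 1.0, -2.0, 1.0, 1.0, 0.5

def terminal_mismatch(p0):
    # Integrate forward from the trial initial value p0 and report
    # the defect in the terminal condition p(T) = p_T.
    sol = solve_ivp(lambda t, p: a * p**2 + b * p + c,
                    (0.0, T), [p0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - p_T

# Shooting: find the initial value for which the terminal condition holds.
p0_star = brentq(terminal_mismatch, -5.0, 0.9)

On long horizons, ill-conditioning of the kind caused by the turnpike effect manifests itself in such shooting schemes as extreme sensitivity of the mismatch to the trial value p0.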
Keywords:
mean field games, system of Riccati equations, boundary value problem for a system of ODEs
Statistical inference often assumes that the distribution being sampled is normal. As has been observed, blindly following the normality assumption may affect the accuracy of inference and estimation procedures. For this reason, many tests for normality have been proposed in the literature. This paper deals with the problem of testing normality when the data consist of a number of small independent samples such that within each small sample the observations are independent and identically distributed, while from sample to sample they have different parameters but the same type of distribution (we call this multi-sample data). In this case it is necessary to use test statistics that do not depend on the parameters. A natural way to exclude the nuisance location parameter is to replace the observations within each small group by differences. We obtain some estimates of the stability of such a decomposition and study and compare the power of eight selected normality tests in the case of multi-sample data. The following tests are considered: the Pearson chi-square, Kolmogorov–Smirnov, Cramér–von Mises, Anderson–Darling, Shapiro–Wilk, Shapiro–Francia, Jarque–Bera, and adjusted Jarque–Bera tests. Power comparisons of these eight tests were obtained via Monte Carlo simulation of sample data generated from several alternative distributions.
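A minimal Monte Carlo sketch of the setup (assuming SciPy's standard implementations; the paper's exact simulation design is not reproduced here): within each small sample the nuisance location is removed by differencing, the pooled differences are tested, and the rejection rate over replications estimates the power.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power_shapiro(n_rep=2000, n_samples=20, m=5, alpha=0.05):
    # Rejection rate of the Shapiro-Wilk test applied to pooled
    # within-sample differences when the data are actually Laplace
    # (an alternative to normality) with a random location per sample.
    rejections = 0
    for _ in range(n_rep):
        diffs = []
        for _ in range(n_samples):
            x = rng.laplace(loc=rng.normal(), size=m)
            diffs.append(np.diff(x))  # differencing removes the location
        pooled = np.concatenate(diffs)
        if stats.shapiro(pooled).pvalue < alpha:
            rejections += 1
    return rejections / n_rep

The other tests can be swapped in the same way (e.g. stats.jarque_bera), keeping the differencing step fixed.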
Keywords:
normal distribution; test for normality; multi-sample data; decomposition stability; the Lévy–Cramér theorem; Monte Carlo simulation
This article presents a new algorithm for calculating the singularity (boundary) type of the ribbon surface of a generalized pseudo-Anosov homeomorphism using the surface’s combinatorial description provided by the so-called configuration. As an additional output, the relators of the fundamental group of the ribbon surface are calculated for its presentation associated with a given ribbon partition. In contrast to a known algorithm, the one presented in this article involves neither auxiliary sets nor recursive functions.
Keywords:
pseudo-Anosov homeomorphism, ribbon surface, singularity type, adjacency matrix, fundamental group
The creation of cryptographic systems based on lattice theory is a promising direction in the field of post-quantum cryptography. The aim of this work is to obtain new properties of lattices through related objects, namely dense packings of equal spheres. The article proposes a method for constructing lattice packings of equal spheres that attain the packing density of the “Lambda” series in dimensions 1–24, using a series of coefficients applied to the height of a fundamental parallelepiped of dimension (n−1): 1/2, 1/3, 1/2, 0, 1/2, 1/3, 1/2, √−1, 1/2, 1/3, 1/2, 0, 1/2, 1/3, 1/2, √−1, 1/2, 1/3, 1/2, 0, 1/2, 1/3, 1/2. The construction of lattice packings of equal spheres by this method was carried out up to dimension 11 inclusive.
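For checking a constructed lattice, the attained packing density can be computed from the shortest nonzero vector and the fundamental volume; the brute-force enumeration below is a hedged sketch adequate only in small dimensions.

import itertools
import math
import numpy as np

def packing_density(B, search=2):
    # Density of the lattice sphere packing with basis rows B:
    # vol(ball of radius lambda/2) / |det B|, where lambda is the length
    # of a shortest nonzero lattice vector, found here by enumerating
    # small integer coefficient vectors.
    n = B.shape[0]
    shortest = min(np.linalg.norm(np.array(c) @ B)
                   for c in itertools.product(range(-search, search + 1), repeat=n)
                   if any(c))
    r = shortest / 2.0
    ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n
    return ball / abs(np.linalg.det(B))

# Example: the hexagonal lattice attains the planar optimum pi/sqrt(12).
A2 = np.array([[1.0, 0.0], [0.5, math.sqrt(3) / 2]])
print(packing_density(A2))  # ~0.9069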
Keywords:
post-quantum cryptography, geometry of numbers, lattice theory, arithmetic minima of positive quadratic forms, lattice packings of equal spheres, Hermite constant
This paper solves the construction problem of embedding complete rooted binary and ternary trees with k levels, k = 1, 2, …, into rectangular lattices (RL) of minimum length and near-minimum height. It is assumed that distinct vertices of the tree are mapped to distinct (main) vertices of the RL, with the leaves of the tree mapped to vertices of the RL located on its horizontal sides. It is also assumed that the edges of the tree are mapped to simple (transit) chains of the RL, which connect the images of their end vertices and do not pass through other main vertices, with no more than 1 (respectively, 2) transit chains passing through the same edge (the same vertex) of the RL.
Keywords:
tree embedding, rectangular lattices, minimum length
A time optimal control problem with a state constraint is investigated in this article. The behavior of the object is described by a system of second-order linear differential equations. The coefficient matrix of the state variables has distinct positive eigenvalues. The state constraint is linear. An admissible control is a piecewise continuous function that takes values from a given compact set. Sets of controllability to the origin are constructed for time intervals of various lengths. The dependence of the solution of the problem on the parameter determining the state constraint is studied.
Keywords:
time optimal control, state constraint, linear system, controllability set
Using real data, the paper models and estimates the infection rate of a population of ixodid ticks with tick-borne encephalitis virus and Borrelia burgdorferi sensu lato by the maximum likelihood and method-of-moments approaches, and gives a comparative analysis of the two. A review of methods for solving direct and inverse problems of binary object propagation based on individual and group observations is also given.
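For equal-size pools the estimators from group observations are explicit; the sketch below uses the standard pooled-testing formula and is not necessarily the exact model of the paper. A pool of m individuals tests negative with probability (1 − p)^m, so with X positive pools out of N the maximum likelihood estimate is:

def infection_rate_mle(X, N, m):
    # MLE of the per-individual infection rate p from grouped Bernoulli
    # observations: solve 1 - (1 - p)^m = X/N for p.
    return 1.0 - (1.0 - X / N) ** (1.0 / m)

print(infection_rate_mle(14, 100, 10))  # ~0.015

For equal pool sizes this coincides with the moment estimate obtained by matching the observed proportion of positive pools; the two estimators differ, and their comparison becomes meaningful, when pool sizes vary.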
Keywords:
infection rate model, grouped observations, Bernoulli trials, maximum likelihood method, method of moments, ixodid ticks
The paper considers the algebraic properties of the Hadamard product (Schur product, componentwise product) of error-correcting linear codes. We discuss the complexity of constructing a basis of the Hadamard product from known bases of the factors. We also introduce the concepts of quotients, quasi-quotients, and maximal (with respect to inclusion) quasi-quotients for the Hadamard division of one linear code by another. We establish an explicit form of the maximal quasi-quotients of the Hadamard division. We prove a criterion for the existence, for a given code, of an inverse code in the semiring formed by linear codes of length n with the operations of sum and Hadamard product of codes. We also describe the explicit form of the codes that have an inverse in this semiring.
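As a hedged illustration of the basis-construction question, over GF(2) a basis of the Hadamard (Schur) product can be obtained by reducing all pairwise componentwise products of the factors' basis vectors:

import numpy as np

def schur_product_basis(A, B):
    # Basis (reduced row echelon form over GF(2)) of the Hadamard/Schur
    # product of two binary linear codes with basis matrices A and B:
    # the product code is spanned by the componentwise products a * b,
    # so up to dim(A)*dim(B) generators are reduced by elimination mod 2.
    gens = np.array([a * b for a in A for b in B], dtype=np.int64) % 2
    rows, col, n = 0, 0, gens.shape[1]
    while rows < gens.shape[0] and col < n:
        pivot = np.nonzero(gens[rows:, col])[0]
        if pivot.size == 0:
            col += 1
            continue
        gens[[rows, rows + pivot[0]]] = gens[[rows + pivot[0], rows]]
        for i in range(gens.shape[0]):
            if i != rows and gens[i, col]:
                gens[i] = (gens[i] + gens[rows]) % 2
        rows += 1
        col += 1
    return gens[:rows]

A = np.array([[1, 0, 1, 1], [0, 1, 0, 1]])
print(schur_product_basis(A, A))  # the square of this code has dimension 3

The quadratic number of generators before reduction is exactly what makes the complexity of this construction worth discussing.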
Keywords:
Hadamard product, Schur product, componentwise product, McEliece public-key cryptosystem, algorithm, Hadamard quotient, Hadamard quasi-quotient, maximal Hadamard quasi-quotient
The problem of maximizing the horizontal coordinate of a point mass moving in a vertical plane under the influence of gravity, viscous friction, the reaction force of the curve, and thrust is considered. It is assumed that state constraints are imposed on the inclination angle of the trajectory. The system of equations belongs to a class that allows the optimal control problem with constraints on the phase variable to be reduced to an optimal control problem with control constraints. The sequence and number of arcs on which the phase constraint is active along the optimal trajectory are determined, and the synthesis of the optimal control is constructed.
Keywords:
brachistochrone, phase constraints, Pontryagin’s maximum principle, boundary value problem, optimal trajectory
The article deals with the problem of testing drugs for bioequivalence. Bioequivalence studies underlie the reproduction of drugs whose efficacy and safety have been confirmed. The main method for testing the bioequivalence hypothesis is Schuirmann's procedure of two one-sided tests. Due to inaccurate data, as well as data omitted at the drug trial stage, the Schuirmann criterion can commit errors at a rate exceeding the prescribed probability of a type I error. Such situations are dangerous for patients, who may receive a drug that is not equivalent to the original one. The authors present a new criterion that is more sensitive to differences in the characteristics affecting the bioavailability of drugs, which reduces patient risk. Note that the new criterion generalizes the classical Schuirmann criterion while preserving its useful properties.
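For orientation, the classical Schuirmann procedure on log-transformed bioavailability data can be sketched as follows (a simplified unpaired two-sample version with the conventional 80–125% margin; the paper's new criterion is not reproduced here).

import numpy as np
from scipy import stats

def tost(test, ref, log_margin=np.log(1.25), alpha=0.05):
    # Schuirmann's two one-sided tests for average bioequivalence on the
    # log scale: H0 is |mu_T - mu_R| >= log_margin, and bioequivalence is
    # declared only if BOTH one-sided t-tests reject at level alpha.
    t_, r_ = np.log(test), np.log(ref)
    n1, n2 = len(t_), len(r_)
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * t_.var(ddof=1) + (n2 - 1) * r_.var(ddof=1)) / df
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    diff = t_.mean() - r_.mean()
    p_lower = stats.t.sf((diff + log_margin) / se, df)   # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - log_margin) / se, df)  # H0: diff >= +margin
    return max(p_lower, p_upper) < alpha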