Editor(s)
Dr. Luigi Giacomo Rodino
Professor, Department of Mathematics, University of Turin, Italy.

ISBN 978-93-5547-721-7 (Print)
ISBN 978-93-5547-722-4 (eBook)
DOI: 10.9734/bpi/nramcs/v3

Profile Link: http://www.matematica.unito.it/persone/luigi.rodino


This book covers key areas of Mathematical and Computer Science. The contributions by the authors include stochastic differential equations, Malliavin calculus, the Euler scheme for delay SDEs, integration by parts, densities of distributions, Euler approximation, numerical stability, weak consistency, quintic equations, radical solutions, Gauss's fundamental theorem of algebra, Abel's theory, Galois's theory, solvable groups, Lagrange's resolvents, clustering, k-means, generating clusters on the run, the MapReduce framework, Bernstein's inequality, the Erdős–Lax inequality, Turán's inequality, polynomials, the saddle point problem, the augmented Lagrangian algorithm, linear algebra, condition numbers, nonlinear processes, nonlinear differential equations, the finite difference method, Taylor series, Hoeffding's lemma, Hoeffding's tail bounds, Chernoff's bound, information theory, machine learning, signal processing, Bailey's transform, basic hypergeometric functions, Mellin transforms, Fourier transforms, Laplace transforms, the electromagnetic field, vortices, Maxwell's laws, biquadratic equations, number theory, gravitation, flat space-time, the arising of matter, conservation of total energy, and non-expanding space. This book contains various materials suitable for students, researchers and academicians in the field of Mathematical and Computer Science.

 

Chapters


The objective of this study is the integration by parts formula established in this work, which is needed to extend all the formulas of Bally and Talay (in [1]) to cover delay SDEs as well as ordinary SDEs. This means that this work is very useful for finding the rate of convergence of the density of the distribution of the solution process of delay SDEs as well as ordinary SDEs. We have established an integration by parts formula involving Malliavin derivatives of solutions to the delay (functional) SDEs; see equation (1.1). The formula we have established is in fact an extension of the classical integration by parts formula to delay SDEs as well as ordinary SDEs, and it can therefore be used to extend the formulas in the work of Bally and Talay accordingly.

This work is a continuation of the work on precise estimates in [1] and the work on approximation theorems in [2]. Here we prove that the Euler approximation of the SFDE considered in [2] and [1] is in fact numerically stable and weakly consistent. Note that we use the same introduction, notations and definitions as in [2] and [1]. In [2], [1], [3] and [4] we calculated the uniform error bound of the difference between the actual solution process and its Euler approximation and found an upper bound for this difference. Here we also discuss the dependence of the uniform error on the initial data. We calculate the error when the initial data of the Euler approximation equals the initial data of the solution process, and also the error when the original initial data is replaced by different initial data, and we compare the solution processes obtained from different initial data.
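
To make the Euler approximation concrete, here is a minimal sketch of an Euler-Maruyama scheme for a scalar delay SDE; the drift f, the diffusion g, the constant delay tau, the initial segment, and the linear example at the end are hypothetical placeholders chosen for illustration, not the equation (1.1) or the SFDE studied in these chapters.

import numpy as np

def euler_delay_sde(f, g, history, tau, T, dt, seed=0):
    # Euler approximation of dX(t) = f(X(t), X(t - tau)) dt + g(X(t), X(t - tau)) dW(t)
    # on [0, T]; 'history' gives the initial segment X(t) for t in [-tau, 0].
    rng = np.random.default_rng(seed)
    n_lag, n = int(round(tau / dt)), int(round(T / dt))
    x = np.empty(n_lag + n + 1)
    x[:n_lag + 1] = [history(-tau + k * dt) for k in range(n_lag + 1)]
    for k in range(n_lag, n_lag + n):
        dw = rng.normal(0.0, np.sqrt(dt))                  # Brownian increment
        x[k + 1] = x[k] + f(x[k], x[k - n_lag]) * dt + g(x[k], x[k - n_lag]) * dw
    return x[n_lag:]                                       # values on the grid of [0, T]

# Hypothetical linear example: dX = (-X(t) + 0.5 X(t-1)) dt + 0.2 X(t) dW(t), X = 1 on [-1, 0].
path = euler_delay_sde(lambda x, y: -x + 0.5 * y, lambda x, y: 0.2 * x,
                       history=lambda t: 1.0, tau=1.0, T=5.0, dt=0.01)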

   

Abel’s and Galois’s Proofs on Quintic Equations having no Radical Solutions are Invalid

Mei Xiaochun

Novel Research Aspects in Mathematical and Computer Science Vol. 3, 14 May 2022, Page 21-53
https://doi.org/10.9734/bpi/nramcs/v3/6047F

The proofs of Abel and Galois that quintic equations have no radical solutions are shown to be erroneous in this study. For around two hundred years, following the work of Abel and Galois, it has been widely accepted that general quintic equations have no radical solutions. Tang Jianer and colleagues recently demonstrated that radical solutions exist for some quintic equations with special forms. Abel's and Galois's ideas are unable to explain these findings. Gauss and his colleagues, on the other hand, proved the fundamental theorem of algebra, according to which an equation of order n has n solutions, including radical and non-radical ones. The fundamental theorem of algebra thus contradicts the conclusions of Abel and Galois. For these reasons, the proofs of Abel and Galois should be re-examined and re-evaluated. The author meticulously examined Abel's original manuscript and discovered several severe errors. In order to prove that the general solution of algebraic equations he proposed was effective for the cubic equation, Abel used the known solution of the cubic equation as a premise to compute the parameters of his equation. An expansion with 14 terms was written with only 7 terms; the other 7 terms were missing. Based on the fact that the permutation group in question had no true normal subgroup, Galois concluded that quintic equations had no radical solutions, but these two problems actually have no necessary logical connection. In Galois's theory, several algebraic relations among the roots of equations were employed in place of the roots themselves in order to illustrate the efficiency of the radical extension group of automorphism mappings for the cubic and quartic equations. This went against the original notion of an automorphism mapping group, resulting in conceptual ambiguity and arbitrariness. It is concluded that for algebraic equations of order n there is only the symmetry of the full permutation group; the symmetry of Galois's solvable group does not exist. Mathematicians should free themselves from the constraints of Abel's and Galois's theories and keep looking for radical solutions of higher-order equations.
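
For context, a textbook example (chosen here purely for illustration and not taken from the works cited above) of a special-form quintic that does admit a radical solution is

\[
x^{5} - 2 = 0 \quad\Longrightarrow\quad x_{k} = \sqrt[5]{2}\, e^{2\pi i k / 5}, \qquad k = 0, 1, 2, 3, 4,
\]

so all five roots are expressed through the single real radical \(\sqrt[5]{2}\) and the fifth roots of unity.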

   

An Efficient K-means Algorithm: Generating Clusters Dynamically in MapReduce Framework

Anupama Chadha

Novel Research Aspects in Mathematical and Computer Science Vol. 3, 14 May 2022, Page 54-66
https://doi.org/10.9734/bpi/nramcs/v3/15830D

Background: K-Means is a widely used partition-based clustering algorithm which organizes an input dataset into a predefined number of clusters. Simplicity and speed in clustering massive data are two features which have made K-Means a very popular algorithm. The generation of huge amounts of electronic data has led to modifications of data clustering algorithms so that they can process such data. The performance of K-Means can be further enhanced by using a distributed computing environment to deal with big data. The MapReduce paradigm can be used with K-Means to give it a distributed computing environment and make it more efficient in terms of time. K-Means has a major limitation -- the number of clusters, 'K', needs to be pre-specified as an input to the algorithm. In the absence of thorough domain knowledge, or for a new and unknown dataset, this advance estimation and specification of the cluster number typically leads to "forced" clustering of the data, and proper clustering does not emerge.

Method: In this paper, we introduce a new algorithm based on K-Means that takes only a numerical dataset as input and generates an appropriate number of clusters on the run using the MapReduce programming style.

Findings: The new algorithm not only overcomes the limitation of having to provide the value of K initially but also reduces the computation time by using the MapReduce framework.
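
For illustration, here is a minimal sketch of how a single K-Means iteration can be expressed in MapReduce style; the functions, the toy data and the fixed pair of initial centroids are assumptions made for this sketch, and it does not reproduce the proposed algorithm, which in addition determines the number of clusters on the run.

import numpy as np

def kmeans_map(points, centroids):
    # Map step: emit (index of nearest centroid, (point, 1)) pairs.
    return [(int(np.argmin([np.linalg.norm(p - c) for c in centroids])), (p, 1))
            for p in points]

def kmeans_reduce(mapped):
    # Reduce step: average the points assigned to each centroid index.
    sums, counts = {}, {}
    for idx, (p, n) in mapped:
        sums[idx] = sums.get(idx, 0) + p
        counts[idx] = counts.get(idx, 0) + n
    return {idx: sums[idx] / counts[idx] for idx in sums}

# One iteration on toy data with two fixed initial centroids.
data = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [7.9, 8.3]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
new_centroids = kmeans_reduce(kmeans_map(data, centroids))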

   

On Erdos – Lax and Turan Type Inequalities of a Polynomial

Kshetrimayum Krishnadas, Barchand Chanam

Novel Research Aspects in Mathematical and Computer Science Vol. 3, 14 May 2022, Page 67-75
https://doi.org/10.9734/bpi/nramcs/v3/6092F

Let p(z) be a polynomial of degree n. If p(z) has no zero in the open unit disk, then

\(\max _{|z|=1}\left|p^{\prime}(z)\right| \leq \frac{n}{2} \max _{|z|=1}|p(z)|.\)

But if p(z) has all its zeros in the closed unit disk, then

\(\max _{|z|=1}\left|p^{\prime}(z)\right| \geq \frac{n}{2} \max _{|z|=1}|p(z)|.\)

The above inequalities are, respectively, the well-known Erdős–Lax inequality and Turán's inequality. A natural question that follows is to investigate the extension of these inequalities to open or closed disks of radius K, K > 0. In the literature, we find extensions of the Erdős–Lax inequality for a polynomial p(z) of degree n having no zero in the open disk of radius K, K \(\geq\) 1. For K < 1, a similar extension does not seem to exist in general. In this paper, we discuss in brief why such an extension seems unattainable in general for K < 1. Further, we also give a brief account of the existence of extensions of Turán's inequality for a polynomial p(z) of degree n having all its zeros in the closed disk of radius K, and this for every value of K > 0, completing the picture.
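
As a quick numerical illustration (my own check, not part of the chapter), the following sketch evaluates both inequalities on the unit circle for two sample polynomials satisfying the respective zero conditions.

import numpy as np

def max_on_unit_circle(coeffs, m=20000):
    # Maximum modulus on |z| = 1 of the polynomial with the given coefficients
    # (highest degree first), estimated on a fine grid of the circle.
    z = np.exp(2j * np.pi * np.arange(m) / m)
    return np.max(np.abs(np.polyval(coeffs, z)))

# p(z) = z^2 + 4 has no zero in the open unit disk (zeros at +/- 2i):
p, n = [1, 0, 4], 2
print(max_on_unit_circle(np.polyder(p)) <= n / 2 * max_on_unit_circle(p))   # Erdos-Lax: True

# q(z) = z^2 - 1/4 has all its zeros in the closed unit disk (zeros at +/- 1/2):
q = [1, 0, -0.25]
print(max_on_unit_circle(np.polyder(q)) >= n / 2 * max_on_unit_circle(q))   # Turan: True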

 

 

Applying the GSVD to the Analysis of the Augmented Lagrangian Method for Symmetric Saddle Point Problem

Felicja Okulicka-Dluzewska

Novel Research Aspects in Mathematical and Computer Science Vol. 3, 14 May 2022, Page 76-90
https://doi.org/10.9734/bpi/nramcs/v3/2072A

The solution of the SPP requires proper calculation in the case of the direct method, and proper approximations of the inverse of the (1,1) block and of the Schur complement when an iterative method is used. An Augmented Lagrangian technique was proposed [7] to improve the numerical properties of the (1,1) block. The analysis of the Augmented Lagrangian method for the Symmetric Saddle Point Problem (SPP) takes the condition numbers of the (1,1) block and of the Schur complement into consideration.

For the theoretical analysis, the Generalized Singular Value Decomposition (GSVD) is used.
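
As a toy numpy illustration (my own example with a hand-picked singular (1,1) block, constraint block B, identity weight matrix W and augmentation parameter gamma; it does not reproduce the chapter's analysis or its GSVD machinery), the standard Augmented Lagrangian modification changes the (1,1) block of the saddle point matrix without changing the solution.

import numpy as np

rng = np.random.default_rng(1)
n, m, gamma = 6, 2, 10.0
A = np.diag([0.0, 0.0, 1.0, 1.0, 1.0, 1.0])        # singular symmetric (1,1) block
B = np.array([[1.0, 0.0, 1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]])     # full-rank constraint block
W = np.eye(m)                                      # weight matrix (identity here)
f, g = rng.standard_normal(n), rng.standard_normal(m)

# Augmentation: add gamma * B^T W^{-1} B to the (1,1) block and the matching
# term to the right-hand side; the solution (x, y) of the system is unchanged.
A_aug = A + gamma * B.T @ np.linalg.solve(W, B)
f_aug = f + gamma * B.T @ np.linalg.solve(W, g)

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
K_aug = np.block([[A_aug, B.T], [B, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([f, g]))
sol_aug = np.linalg.solve(K_aug, np.concatenate([f_aug, g]))

print(np.allclose(sol, sol_aug))                                 # True: same solution
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A_aug))    # 4 vs 6: block regularized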

   

Using computational experiments on a computer, this article investigates how the method chosen for calculating the values of the thermal conductivity coefficient at the nodes of the difference grid affects the numerical solutions of a one-dimensional nonlinear heat conduction problem obtained with explicit and implicit conservative difference schemes. The thermal conductivity is a power-law function of temperature. It is shown that the accuracy of the numerical solutions obtained by both the explicit and the implicit difference scheme depends on the method for calculating the values of the thermal conductivity coefficient at the nodes of the difference grid. A method is indicated that makes it possible to obtain a numerical solution that is sufficiently close to the analytical solution, including in the zone of the temperature front.
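
As a minimal sketch (my own illustration under simple assumptions, not the article's code), an explicit conservative finite-difference step for u_t = (k(u) u_x)_x with a power-law conductivity k(u) = k0 * u^sigma might look as follows; the interface conductivities are computed here by one particular averaging rule (the arithmetic mean of nodal values), which is exactly the kind of choice whose effect the article studies.

import numpy as np

def explicit_step(u, dx, dt, k0=1.0, sigma=2.0):
    k = k0 * u**sigma                        # conductivity at the grid nodes
    k_half = 0.5 * (k[1:] + k[:-1])          # one possible choice of interface values
    flux = k_half * (u[1:] - u[:-1]) / dx    # fluxes at the cell interfaces
    u_new = u.copy()
    u_new[1:-1] += dt / dx * (flux[1:] - flux[:-1])
    return u_new                             # boundary values kept fixed

# Toy run: a hot region spreading into a cold background.
x = np.linspace(0.0, 1.0, 101)
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 1e-3)
for _ in range(2000):
    u = explicit_step(u, dx=x[1] - x[0], dt=1e-5)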

   

Improved Hoeffding’s Lemma and Hoeffding’s Tail Bounds: A Recent Study

David Hertz

Novel Research Aspects in Mathematical and Computer Science Vol. 3, 14 May 2022, Page 99-104
https://doi.org/10.9734/bpi/nramcs/v3/2427B

The goal of this chapter is to improve Hoeffding's lemma and consequently Hoeffding's tail bounds. Our starting point is to present Hoeffding's lemma with a proof somewhat different from the original one, and then to present the improved Hoeffding's lemma and prove it. The improvement pertains to left-skewed zero-mean random variables \(X \in[a, b]\), where \(a<0\) and \(-a>b\). The proof of the improved Hoeffding's lemma uses Taylor's expansion, the convexity of \(\exp (s x), s \in \mathbb{R}\), and an observation unnoticed since Hoeffding's publication in 1963: for \(-a > b\) the maximum of the intermediate function \(\tau(1-\tau)\) appearing in Hoeffding's proof is attained at an endpoint rather than at \(\tau = 0.5\), as happens in the case \(b > -a\). Using the improved Hoeffding's lemma we obtain one-sided and two-sided tail bounds for \(\mathbb{P}\left(S_{n} \geq t\right)\) and \(\mathbb{P}\left(\left|S_{n}\right| \geq t\right)\), respectively, where \(S_{n}=\sum_{i=1}^{n} X_{i}\) and the \(X_{i}\in\left[a_{i}, b_{i}\right], i=1, \ldots, n\), are independent zero-mean random variables (not necessarily identically distributed). We can also improve Hoeffding's two-sided bound for all \(\left\{X_{i}:-a_{i} \neq b_{i}, i=1, \ldots, n\right\}\), because the one-sided bound must be augmented by \(\mathbb{P}\left(-S_{n} \geq t\right)\), and negation causes left-skewed intervals to become right-skewed and vice versa.
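
For reference, the classical (unimproved) Hoeffding tail bounds for independent zero-mean variables can be evaluated as follows; this sketch states only the standard 1963 bounds and does not implement the improved lemma or the improved bounds of this chapter.

import numpy as np

def hoeffding_one_sided(t, a, b):
    # P(S_n >= t) <= exp(-2 t^2 / sum_i (b_i - a_i)^2) for zero-mean X_i in [a_i, b_i].
    return float(np.exp(-2.0 * t**2 / np.sum((np.asarray(b) - np.asarray(a))**2)))

def hoeffding_two_sided(t, a, b):
    # P(|S_n| >= t) <= 2 exp(-2 t^2 / sum_i (b_i - a_i)^2), capped at 1.
    return min(1.0, 2.0 * hoeffding_one_sided(t, a, b))

# Example: 100 variables on [-1, 1] (the symmetric case -a_i = b_i).
a, b = [-1.0] * 100, [1.0] * 100
print(hoeffding_one_sided(10.0, a, b))   # exp(-0.5) ~ 0.607
print(hoeffding_two_sided(10.0, a, b))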

   

Study about Transformations Formulae and Certain Transformations: A Mathematical Approach

Harwinder Kaur, A. K. Shrivastav

Novel Research Aspects in Mathematical and Computer Science Vol. 3, 14 May 2022, Page 105-112
https://doi.org/10.9734/bpi/nramcs/v3/2572B

In this section, an attempt has been made to generate certain transformation formulae involving various transforms. Abstractions of several transforms, such as the Mellin, Fourier and Laplace transforms, are used in Bailey's lemma to establish some new theorems that will be helpful for further analysis. The following results have been derived (a small symbolic example follows the list below):

  • A Mellin sine and cosine transformation identity involving infinite series has been developed.
  • An infinite summation identity involving the Fourier transformation has been developed.
  • An infinite summation identity involving the Laplace transformation has been developed.
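
As a small symbolic illustration (my own example, unrelated to the derived theorems), the individual transforms that enter Bailey's lemma here can be computed with SymPy.

import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Mellin transform of exp(-t) is Gamma(s).
print(sp.mellin_transform(sp.exp(-t), t, s))

# Laplace transform of sin(t) is 1/(s**2 + 1).
print(sp.laplace_transform(sp.sin(t), t, s))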
   

Determination of Free Energy as Described by the New Axioms and Laws

Valentina Markova

Novel Research Aspects in Mathematical and Computer Science Vol. 3, 14 May 2022, Page 113-130
https://doi.org/10.9734/bpi/nramcs/v3/2836C

The present study uses the Expanded Field Theory, which extends the Classic Field Theory to a much more general theory consisting of two new axioms and eight laws. The study decided to follow the advice of the great Einstein and to try changing the way of thinking. The article describes a brand new field type through new axioms and laws, as developed in previous works of the same author. In this report only one (the first) axiom and six laws are used. It is known that Maxwell's laws (1864) are based on a single axiom [1], which states that movement in a closed loop leads to even movement (with constant speed) of a vector E: div rot E = 0. The author replaces this axiom with a new one, according to which movement in an open loop or vortex leads to uneven movement (with variable speed) of a vector E: div rot \(E \neq 0\), or div Vor \(E \neq 0\) for a vortex [2]. The even movement is replaced with uneven movement, which can be decelerating or accelerating; in one case there is a cross vortex and in the other a longitudinal vortex; the cross vortex is transformed into a longitudinal vortex through a transformation \(\Delta 1\); the longitudinal vortex is transformed back into a cross vortex through another special transformation \(\Delta 2\); a decelerating vortex emits free cross vortices to the environment, which are called "free energy"; an accelerating vortex sucks in the same free cross vortices, and so on. External observers can see cross vortices because they reflect the sun's rays, while longitudinal vortices are invisible because they diffract (do not reflect) the sun's rays. The vector E is not a simple vector; it turns out to be a complex vector: \(E = A + iV\), \(E = V + iA\) or \(E = -A - iV\), \(E = -V - iA\). It can have either the amplitude A or the velocity V as its real part. Cross vortices can form two kinds of vortices: a vortex generated by the amplitude A and a vortex generated by the velocity V. Each of these may be accelerating or decelerating, and both of them are generators. They are prototypes of material particles. The temperature drops when an accelerating vortex sucks in cross vortices, while the temperature rises when a decelerating field emits cross vortices. Inside a conductor the velocity of the electromagnetic field is constant (v_max = c). On the periphery it decelerates because of resistance at the wall of the conductor, so an increase in voltage leads only to an increase in current, not to an increase in the velocity. This report offers a new type of field, the accelerating field, which sucks in free cross vortices, the so-called "free energy", from the environment. The mechanism of positive feedback turns the acceleration process into a generation process. There is a significant difference between the states of a bound electron and a free electron. For example, scientists measure the mass of a free electron with a decelerating cross vortex (E2D-) inward, but cannot measure the mass of a connected electron with an accelerating cross vortex (E2D+) inward.

   

Biquadratic Equation with Four Unknowns

Shreemathi Adiga

Novel Research Aspects in Mathematical and Computer Science Vol. 3, 14 May 2022, Page 131-140
https://doi.org/10.9734/bpi/nramcs/v3/2366B

Number theory relies heavily on Diophantine equations, which come in a wide variety of forms. There are Diophantine equations with no solution, with only trivial solutions, with a finite number of solutions, and with an infinite number of solutions. Among higher-degree Diophantine equations there are mainly two types: when the degree is four, these are the homogeneous and non-homogeneous biquadratic equations. One may ask for the integral solutions of such an equation in its most general form. Since ancient times, both homogeneous and non-homogeneous biquadratic equations have piqued the curiosity of many mathematicians. This paper concerns the problem of determining non-trivial integral solutions of the non-homogeneous biquadratic equation with four unknowns given by \(7xy + 3z^2 = 3w^4\). Infinitely many non-zero integer solutions of the equation are found by introducing the linear transformations x = u + v, y = u - v, z = v.
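
As a quick check (my own brute-force search, not the paper's method of generating infinitely many solutions), small non-trivial integer solutions of the equation can be located numerically using the stated substitution.

# Search for small non-trivial integer solutions of 7*x*y + 3*z**2 == 3*w**4
# via x = u + v, y = u - v, z = v, discarding solutions with a zero coordinate.
solutions = []
for w in range(1, 6):
    for u in range(-30, 31):
        for v in range(-30, 31):
            x, y, z = u + v, u - v, v
            if x and y and z and 7 * x * y + 3 * z**2 == 3 * w**4:
                solutions.append((x, y, z, w))

print(solutions[:5])   # e.g. (18, -2, 10, 2) is among the solutions found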

   

No Big Bang in the Non-Expanding Universe

Walter Petry

Novel Research Aspects in Mathematical and Computer Science Vol. 3, 14 May 2022, Page 141-150
https://doi.org/10.9734/bpi/nramcs/v3/2447B

A theory of gravitation in flat space-time is briefly summarized and applied to cosmological models. These models start with a uniformly distributed gravitational field without matter. Gravitational energy is converted to matter, and the total energy of matter and gravitation is conserved. The arising universe has no singularity (no big bang) and is not expanding. The redshift is a gravitational effect: it follows from the conversion of gravitational energy to matter, which changes the gravitational field. This is a theory of gravitation different from general relativity, and for weak gravitational fields it gives the same results as general relativity to measurable accuracy. The application of gravitation in flat space-time to cosmology yields no singularity, i.e., unlike general relativity with its singularity, called the big bang.

   

Probability Problems and Estimation Algorithms Associated with Symmetric Functions

David Hertz

Novel Research Aspects in Mathematical and Computer Science Vol. 3, 14 May 2022, Page 151-171
https://doi.org/10.9734/bpi/nramcs/v3/2428B

In this chapter, a simple and powerful methodology is presented in which we replace the independent variables \(\lambda_{1}, \ldots, \lambda_{n}\) in multiple symmetric functions as well as in Vieta's formulas by the indicator functions of the events \(A_{i}, i=1, \ldots, n\), i.e., \(\lambda_{i}=1\left(A_{i}\right), i=1, \ldots, n\). Both the random variable K that counts the number of events that actually occurred and the proposed identity \(\prod_{i=1}^{n}\left(z-1\left(A_{i}\right)\right) \equiv (z-1)^{K} z^{n-K}\), which depends solely on K, play a major role in this chapter. Just by selecting multiple values for z (real, complex, and random) and taking expectations of the different functions, we provide simple proofs of known findings as well as new results. The estimation algorithms for computing the expected elementary symmetric functions via least squares based on the IFFT in the complex domain \((z \in \mathbb{C})\), and via least squares or linear programming in the real domain \((z \in \mathbb{R})\), are noteworthy. We use Newton's identities and some popular inequalities to obtain new results and inequalities. We present an algorithm that exactly computes the distribution of K (i.e., \(q_{k}:=\mathbb{P}(K=k), k=0,1, \ldots, n\)) for finite sample spaces. Finally, we provide a conclusion and areas for future research.
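
As a toy numerical illustration (my own example; in particular, the independence of the events assumed below is an extra assumption that the chapter does not require), the identity and the distribution of K can be checked as follows.

import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.2, 0.5, 0.7, 0.9])           # hypothetical probabilities P(A_i)
n = len(p)

# Check the identity prod_i (z - 1(A_i)) == (z - 1)**K * z**(n - K) on one draw.
ind = (rng.random(n) < p).astype(int)        # indicator variables 1(A_i)
K = ind.sum()
z = 0.3 + 0.7j
print(np.isclose(np.prod(z - ind), (z - 1)**K * z**(n - K)))   # True

# Distribution of K for independent events, via polynomial convolution:
# q[k] = P(K = k) are the coefficients of prod_i ((1 - p_i) + p_i * t).
q = np.array([1.0])
for pi in p:
    q = np.convolve(q, [1 - pi, pi])
print(q, q.sum())                            # q_0, ..., q_n and their sum (= 1)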