Editor(s)

Dr. Xingting Wang
Assistant Professor,
Department of Mathematics, Howard University, Washington, USA.

ISBN 978-93-91215-75-0 (Print)
ISBN 978-93-91215-83-5 (eBook)
DOI: 10.9734/bpi/ctmcs/v2

This book covers key areas of mathematics and computer science. The contributions by the authors include the Collatz algorithm, variables, Emiliano theorem, Diophantine equations, integral representation, super-singular kernels, inverse formula, three-dimensional integral equations, Dirichlet-type boundary value problem, three-body problem, libration points, linear stability, zero velocity curve, handwriting recognition, database scheme, database interface, internet router, self-similarity, Markovian arrival process, queueing theory, matrix analytic methods, partial buffer sharing, packet loss probability, zeta function, convergent series, Abel-Ruffini theorem, Enfer Diez equation, congruence method, polynomials, dimension reduction, multidimensional information visualization, Euclidean distance, embedding algorithms, pattern classification, infinite algorithms, hierarchical scale space, multiscale, feature extraction, segmentation, fuzzy inference system, Landsat-8, fuzzy knowledge rules, object-oriented image analysis, Google Earth Engine, machine learning, algorithms, supervised learning methods, predictive analytics, data analytics, climate change, support vector machine, linear discriminant analysis, feature selection, operational data center, MPLS computer networks, VDI, model-view-controller, customized algorithm, spatio-temporal databases, region of interest, geospatial analysis, parallel programming, concurrency, autoregression, contaminated errors, distribution mixes. This book contains various materials suitable for students, researchers and academicians in the field of mathematics and computer science.

 



Chapters


In this study, we focus on one class of three-dimensional integral equations over tube domains whose kernels, in the power basis and on the lateral surface, may have super-singularity. Depending on the roots of the characteristic equations (2), (3), an integral representation of the solution manifold is obtained in explicit form. In the case when the parameters present in the kernels are such that the general solution of the integral equation contains arbitrary functions, an inverse formula is found. On the basis of the obtained integral representation and its inverse formula, in the case when the general solution of the integral equation contains arbitrary functions, the correct statement of a Dirichlet-type boundary value problem is determined and its solution is found.

In this work, we analyzed the motion of an infinitesimal mass in the Sun-Earth elliptic restricted three-body problem with radiation effects. Semi-analytical expressions for the locations of the collinear points were obtained using a perturbation technique, and the influence of radiation on the collinear points was analyzed. Critical mass expressions, which depend on the radiating primary, are used to investigate the stability of the triangular points. The Jacobi constant 'C' was also determined, along with the zero velocity curves.
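To illustrate the relation between the Jacobi constant and the zero velocity curves, here is a minimal sketch for the classical *circular* planar restricted three-body problem in the rotating frame (a simplification of the chapter's elliptic, radiating model; the function names and the sample point are illustrative assumptions):

```python
import math

def jacobi_constant(x, y, vx, vy, mu):
    """Jacobi constant C in the planar circular restricted three-body problem,
    rotating frame with primaries at (-mu, 0) and (1-mu, 0), unit separation:
    C = x^2 + y^2 + 2(1-mu)/r1 + 2*mu/r2 - (vx^2 + vy^2)."""
    r1 = math.hypot(x + mu, y)          # distance to the larger primary
    r2 = math.hypot(x - (1 - mu), y)    # distance to the smaller primary
    return x * x + y * y + 2 * (1 - mu) / r1 + 2 * mu / r2 - (vx * vx + vy * vy)

def on_zero_velocity_curve(x, y, mu, C, tol=1e-9):
    """A point lies on the zero velocity curve for a given C when the Jacobi
    relation holds with the velocity set to zero."""
    return abs(jacobi_constant(x, y, 0.0, 0.0, mu) - C) < tol
```

Since any nonzero velocity only subtracts from the expression, regions where the zero-velocity value falls below C are inaccessible to the infinitesimal mass, which is exactly how the zero velocity curves bound the motion.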

An Approach of Hidden Markov Model for Offline \(Yor\grave{u}b\acute{a}\) Handwritten Word Recognition

Jumoke F. Ajao, Stephen O. Olabiyisi, Elijah O. Omidiora, Oladayo O. Okediran

Current Topics on Mathematics and Computer Science Vol. 2, 12 June 2021, Page 39-59
https://doi.org/10.9734/bpi/ctmcs/v2/1659E

This paper presents a recognition system for \(Yor\grave{u}b\acute{a}\) handwritten words using a Hidden Markov Model (HMM). The work is divided into four stages: data acquisition, preprocessing, feature extraction and classification. Data were collected from adult indigenous writers, and the scanned images were subjected to preprocessing steps such as greyscale conversion, binarization, noise removal and normalization to enhance their quality. Features were then extracted from each normalized word: a set of new features for handwritten \(Yor\grave{u}b\acute{a}\) words was obtained using a discrete cosine transform (DCT) approach, and zigzag scanning was applied to extract the character shape, underdot and diacritic sign from the spatial frequencies of the word image. A ten (10)-state left-to-right HMM was used to model the \(Yor\grave{u}b\acute{a}\) words. The initial probabilities of the HMM were randomly generated based on the model created for the \(Yor\grave{u}b\acute{a}\) alphabet, and one HMM was constructed for each class of image feature. The Baum-Welch re-estimation algorithm was applied to train each HMM class on the DCT feature vectors of the handwritten word images. The Viterbi algorithm was used to classify the handwritten words, giving the state sequences that best describe the model. Our experiments reported a highest test accuracy of 92\% and a recognition rate of 95.6\%, indicating that the recognition system is very accurate.
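The classification step rests on Viterbi decoding. As a minimal sketch of that algorithm (the two-state toy parameters below are hypothetical, not the chapter's trained ten-state models), the most likely state sequence can be recovered as follows:

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for a discrete-observation HMM.
    obs: list of observation indices; start_p: (S,); trans_p: (S, S); emit_p: (S, O)."""
    S, T = len(start_p), len(obs)
    # work in log space to avoid underflow on long sequences
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(S):
            scores = delta[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            delta[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    # backtrack from the best final state
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

For a left-to-right topology like the chapter's, the transition matrix would simply be constrained so that states cannot move backwards.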

This paper approximates a Markovian model to study a router's loss behaviour under the PBS mechanism with self-similar, variable-length input traffic, taking voids into account in the queueing system. The broadband integrated services digital network (B-ISDN) is expected to support various kinds of services such as voice, data, video, and combinations of these. Because of this integration and demand, congestion may arise in the network. Congestion can be dealt with by priority-handling queueing mechanisms. One such mechanism is buffer access control (BAC), also called space priority, and one strategy for implementing it is the partial buffer sharing (PBS) scheme. In this scheme, a limit (or threshold) is imposed on the buffer: all arriving packets share the part of the buffer at or below the threshold, while above the threshold arriving low-priority packets are rejected. High-priority packets are lost only when the buffer is full. Determining an appropriate threshold is the significant design issue for a space-priority queueing system: if the threshold is relatively high, high-priority packets will be lost more than expected; if it is relatively low, low-priority packets will be lost excessively. Either way, quality of service (QoS) requirements are not guaranteed.

Hence the threshold setting is a trade-off between queue utilization and guaranteed QoS. On the other hand, when the packet length is variable, voids occur in the router buffer, and the performance of the router degrades as voids incur excess load. We assume that void lengths follow a uniform distribution and packet lengths follow an exponential distribution. In this paper the input traffic is self-similar and modelled as a Markovian arrival process (MAP). If packet lengths follow an exponential distribution and voids follow a uniform distribution, their sum need not be exponential, which presents difficulties in the computation of performance measures. Hence, we assume that the sum of the packet length and the void length is exponentially distributed, but with a modified parameter involving the parameters of both the exponential and the uniform distribution. Using matrix analytic methods, we compute performance measures such as the packet loss probability against the threshold, buffer capacity, traffic intensity, and the optimal threshold. One could use the analysis of the mean lengths of the critical and non-critical periods to initialize related call admission control schemes in the router to improve performance further.
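The PBS admission rule can be illustrated with a toy event-driven simulation of a single queue (this is a didactic sketch, not the chapter's MAP/matrix-analytic model; the `simulate_pbs` helper, the Poisson-style rates and the chosen parameters are all assumptions for illustration):

```python
import random

def simulate_pbs(capacity, threshold, rate_hi, rate_lo, service_rate,
                 n_events=200_000, seed=1):
    """Toy PBS simulation: low-priority packets are rejected once buffer
    occupancy reaches the threshold; high-priority packets are rejected
    only when the buffer is full. Returns per-class loss probabilities."""
    rng = random.Random(seed)
    occupancy = 0
    arrivals = {"hi": 0, "lo": 0}
    losses = {"hi": 0, "lo": 0}
    total_rate = rate_hi + rate_lo + service_rate
    for _ in range(n_events):
        u = rng.random() * total_rate   # uniformized event selection
        if u < service_rate:
            if occupancy > 0:           # departure (self-loop if queue empty)
                occupancy -= 1
        elif u < service_rate + rate_hi:
            arrivals["hi"] += 1
            if occupancy < capacity:    # high priority: reject only when full
                occupancy += 1
            else:
                losses["hi"] += 1
        else:
            arrivals["lo"] += 1
            if occupancy < threshold:   # low priority: reject at the threshold
                occupancy += 1
            else:
                losses["lo"] += 1
    return {c: losses[c] / max(arrivals[c], 1) for c in ("hi", "lo")}
```

Sweeping the threshold in such a sketch reproduces the trade-off described above: raising it lowers the low-priority loss at the expense of the high-priority guarantee.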

Convergent Series for the Zeta Function

Andri Lopez

Current Topics on Mathematics and Computer Science Vol. 2, 12 June 2021, Page 75-82
https://doi.org/10.9734/bpi/ctmcs/v2/9640D

In this article new convergent series are presented and applied to the zeta function. With them we address the relevant points, such as: calculating and defining the absolute zero points on the straight line for every \(\frac{1}{n}\) with \(n \ge 2\); and a more than acceptable approximation to the absolute value of \(\pi(x)\), since the greater the increase \(\Delta x\), the closer the relation between the new equation for \(\pi(x)\) and the absolute value of \(\pi(x)\) comes to \(\frac{\pi(\Delta x)}{\pi(\Delta x)} \cong 1\).

In this paper I present two methods to solve polynomials of degree greater than or equal to five, in such a way that \(G_n\) is \(S_n\) with \(n \geq 5\). With the first method we know whether a polynomial of degree greater than or equal to five contains an elliptic curve (if this cannot be seen directly). The second method is applied whenever the value of \(x\) defined by the equation of Enfer Diez is not the real value of the polynomial; this value tells us whether the value of \(x\) in the polynomial is greater or smaller. The solution is obtained with the congruence method. It remains proven that the solution of the polynomial is obtained from its coefficients.

 

A Novel Approach to Visualization of High-Dimensional Data by Pairwise Fusion Matrices Using t-SNE

Mujtaba Husnain, Malik Muhammad Saad Missen, Shahzad Mumtaz, Muhammad Muzzamil Luqman, Mickael Coustaty, Jean-Marc Ogier

Current Topics on Mathematics and Computer Science Vol. 2, 12 June 2021, Page 89-108
https://doi.org/10.9734/bpi/ctmcs/v2/2338F

We applied t-distributed stochastic neighbor embedding (t-SNE) to visualize Urdu handwritten numerals (or digits). The data set used consists of 28 images of handwritten Urdu numerals, created by inviting authors from different categories of native Urdu speakers. One of the challenging and critical issues for the correct visualization of Urdu numerals is the shape similarity between some of the digits. This issue was resolved using t-SNE by exploiting local and global structures of the large data set at different scales: the global structure consists of geometrical features, while the local structure is the pixel-based information for each class of Urdu digits. We introduce a novel approach that fuses these two independent spaces using pairwise Euclidean distances in a highly organized and principled way. The fusion matrix embedded in t-SNE helps to locate each data point in a two- (or three-) dimensional map in a very different way. Furthermore, our proposed approach focuses on preserving the local structure of the high-dimensional data while mapping to a low-dimensional plane. The novelty of our approach lies in the fact that we embed Euclidean distances in standard t-SNE in order to successfully visualize high-dimensional data represented by multiple independent observations. The visualizations produced by t-SNE outperformed classical techniques such as principal component analysis (PCA) and auto-encoders (AE) on our handwritten Urdu numeral dataset.
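A generic sketch of fusing two independent feature spaces into a single pairwise-distance matrix might look as follows (this is an illustrative assumption, not the chapter's exact fusion rule: the convex combination with weight `alpha` and the max-normalization are choices made here for the example). The resulting matrix could then be handed to any t-SNE implementation that accepts precomputed distances:

```python
import numpy as np

def fusion_matrix(global_feats, local_feats, alpha=0.5):
    """Fuse two feature spaces describing the same samples (row i of each
    matrix is sample i) into one pairwise-distance matrix: a convex
    combination of the normalized Euclidean distance matrices."""
    def pairwise(X):
        # squared-norm expansion: ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        sq = np.sum(X ** 2, axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
        d = np.sqrt(d2)
        return d / (d.max() or 1.0)   # scale to [0, 1] so the spaces are comparable
    return alpha * pairwise(global_feats) + (1 - alpha) * pairwise(local_feats)
```

Normalizing each distance matrix before mixing keeps one space (e.g. raw pixels) from dominating the other (e.g. a handful of geometric features) purely because of scale.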

An Application of the Algorithm (3a+1) in the Problem of Computing O\((2^n)\)

Andri Lopez

Current Topics on Mathematics and Computer Science Vol. 2, 12 June 2021, Page 109-112
https://doi.org/10.9734/bpi/ctmcs/v2/9637D

This article has two objectives. The first is to present two new algorithms for application in computers, with which we can determine each and every value of \(2^n\) in polynomial time, not only for individual values of \(n\) but for all values in an interval \((1; 2; 3; 4; 5; 6; \ldots; K \rightarrow \infty)\). The second is to demonstrate that there are infinitely many algorithms of the form \(Xa+1\) with the cycle (4, 2, 1) under the statement of the Collatz conjecture.
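The \(Xa+1\) iteration referred to above can be sketched directly (a minimal illustration of the standard Collatz-style map, not the chapter's two new algorithms; the `reaches_cycle` helper and step cap are assumptions for the example):

```python
def reaches_cycle(n, x=3, max_steps=10_000):
    """Iterate the generalized Collatz map: n -> n/2 if n is even,
    n -> x*n + 1 if n is odd. Once n hits 1 it enters the (4, 2, 1) cycle;
    return the number of steps to reach 1, or None if the cap is exceeded."""
    for step in range(max_steps):
        if n == 1:
            return step
        n = n // 2 if n % 2 == 0 else x * n + 1
    return None
```

With `x=3` this is the classical conjecture; other odd multipliers show that the same (4, 2, 1) cycle structure can appear for maps of the same form.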

 

Hierarchical Scale-space Representational Measure for Estimating Land Cover

C. Rajabhushanam

Current Topics on Mathematics and Computer Science Vol. 2, 12 June 2021, Page 113-133
https://doi.org/10.9734/bpi/ctmcs/v2/3615D

The Minimum Mapping Unit (MMU) for an object-oriented image analysis operation consists of space-filling curves such as planar lines and rectilinear segments. The space-filling curves do not change the feature object representation when the scale is varied, thus representing spatial and aspatial features with finer or coarser granularity. The increased collinearity can be explained by the arrangement of topological objects in the aggregated feature space (neighbors/objects), producing image areal objects. Rather than performing one individual operation on the imagery objects by scanline rows, we can apply custom-built algorithms to the distinguishing objects. This operation outputs super-objects that are classified by texture and mathematical relationships, thereby leveraging multi-scale object-oriented analysis procedures. For information retrieval, a continuous hierarchical scale-space filtering operation is adapted for segmentation purposes. In fact, MMU variations will produce instances of image objects that preserve the spatial scale at a particular optimizing parameter. This article lays emphasis on object-oriented analysis and the accompanying fuzzy inference analysis of the imagery scene. By basing image analysis procedures on image objects at the characteristic scale, one can delineate imagery semantics at the low- and high-level spatial context.

Such a method becomes feasible with object-oriented scale-space hierarchical theory, with varying intra- and inter-scale parameters. While using image objects to calculate multivariate statistics (entropy measure, heterogeneity measure, local mean vs. local variance, and mean vs. covariance), fuzzy modeling of mixed pixels is used to extract reliability without incorporating edges. Homogeneous areas of mixed pixels are resolved by the region labeling operator using within-class variance measures. Between-class variances can be used to measure the distance of scale intervals that can be resolved by the scale object. This produces a hierarchical network that further delineates the final object features. The scale operator (SO) is defined as the varying optimizer selection in the region growing and region merging procedures. During the region abstraction process, individual objects having similar sub-class variance and sub-class texture characteristics are fused to create a segmented super-object. With the resulting increase in heterogeneity, the scale operator diffuses the super-objects, and so more objects are merged and created within the class intervals.

Recently there has been an explosion of research articles published in scientific journals regarding machine learning and specialized algorithms for feature identification, feature selection, and feature extraction. This article distinguishes the application of proof-of-concept work from domains such as computer vision, remote sensing, image processing, and geospatial database technology. Using Landsat-8 satellite imagery for rendering in multimedia and in scalable vector processing modes, the article lends credence to the fundamentals and principles of digital image analysis techniques. With the application of user-defined algorithms and a scientific approach, the article details with rigour the RSVM and DAFE scientific methods on a cloud computing platform. It is proposed to bring out the nuances of data analytics in distributed computing and parallel databases.

Design of Spatio-temporal Data Archival for Big-data Satellite Imagery: Use Cases

C. Rajabhushanam

Current Topics on Mathematics and Computer Science Vol. 2, 12 June 2021, Page 141-145
https://doi.org/10.9734/bpi/ctmcs/v2/3617D

This research article emphasizes the importance of production data systems in parallel and distributed spatial databases. It proposes a Multi Protocol Label Switching (MPLS) computer network design based on virtual desktop infrastructure (VDI) over a local area network, for performing complex workflows with spatio-temporal datasets such as Worldview-3, Sentinel-3, and Landsat-8 satellite imagery. The main intent is to facilitate a workflow processing procedure that automatically devises a sequential, ordered flow of machine instructions in a pipelined approach with robust data redundancy and a feature retrieval procedure. In addition, the article suggests that a spatial design pattern such as model-view-controller (MVC) can be designed and implemented using Geography Markup Language (GML) and OGC-based web services. Finally, a proprietary customized algorithm will be designed to perform geospatial analysis and archival of big data.

The research article details the analysis and design of parallel programming techniques on dual-core multiprocessor architectures with multiple users, in a staged environment. It discusses the design principles of parallel programming techniques on multi-core computers, and further elaborates the asynchronous mode of operating constructs in Flynn's taxonomy of computing.

Mixtures of Distributions and Volatility: A Theoretical Explanation

Juan Carlos Abril, María de las Mercedes Abril, Carlos Ismael Martínez

Current Topics on Mathematics and Computer Science Vol. 2, 12 June 2021, Page 157-169
https://doi.org/10.9734/bpi/ctmcs/v2/2333F

We generate time series with the following characteristics using Monte Carlo methods: a) series whose distributions are a mixture of two normal distributions with different variances, b) series that satisfy volatility models, c) series that satisfy an AR(1) model but with contaminated errors which follow the same distribution as the mixtures given in a), and d) series that follow the same distribution as the mixtures given in a) but with conditional heteroscedasticity. The analysis shows that identifying the actual generating mechanism of a series in practice is difficult. In fact, the processes resulting from distribution mixtures are very similar to those that satisfy the volatility scheme. We use the usual tools of the identification phase of any time series analysis, such as series plots, histograms, the corresponding sampling distributions, correlograms, and partial correlograms, along with the corresponding theoretical considerations.
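Cases a) and c) above can be sketched with a few lines of Monte Carlo code (an illustrative sketch only; the mixture weights, variances, AR coefficient and helper names are assumptions chosen for the example, not the chapter's experimental settings):

```python
import random

def mixture_sample(rng, p=0.9, sigma1=1.0, sigma2=3.0):
    """Draw from a mixture of two zero-mean normals with different variances:
    with probability p use sigma1, otherwise the 'contaminating' sigma2."""
    return rng.gauss(0.0, sigma1 if rng.random() < p else sigma2)

def ar1_contaminated(n, phi=0.6, seed=7):
    """Simulate an AR(1) series y_t = phi * y_{t-1} + e_t whose errors e_t
    follow the normal mixture above (case c))."""
    rng = random.Random(seed)
    y, series = 0.0, []
    for _ in range(n):
        y = phi * y + mixture_sample(rng)
        series.append(y)
    return series
```

Plotting such a series next to one from a volatility model makes the identification problem described above concrete: the occasional large mixture draws mimic volatility clustering to the eye and in the correlogram.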