In the context of rapid urbanization, modeling the processes of sustainable urban development has attracted considerable attention from researchers. This paper presents a study of fuzzy cognitive maps (FCMs) as an interdisciplinary model for simulating urban development processes. It highlights the versatility of FCMs in integrating expertise and quantifying the impact of the indicators that shape urban space, from infrastructure and housing to environmental sustainability and community well-being. The study synthesizes an extensive literature review with expert opinions to create and refine a cognitive map tailored to municipal development. The methodology outlined formulates a systematic approach to selecting concepts, assigning weights, and validating the model. Through collaboration with cross-disciplinary experts, the study confirms the value of FCMs for identifying cascading effects in decision-making when shaping urban development strategies. Recognizing the limitations of expert methods and the fuzzy nature of the data, the article argues that FCMs are effective not only for identifying but also for addressing emerging urbanization problems. Ultimately, the article contributes a nuanced perspective to strategic planning discourse by advocating the use of FCMs as a management decision support tool that can assist policymakers in achieving a sustainable and equitable urban future.
Keywords: fuzzy cognitive maps, urban development, urban planning, sustainable urbanization, expert systems, social well-being
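The core FCM mechanic the abstract relies on, iterating concept activations through a signed weight matrix until the map settles, can be sketched in a few lines. The three-concept map, its weights, and the sigmoid squashing function below are purely illustrative assumptions, not the authors' model:

```python
import math

def fcm_step(state, weights):
    """One FCM update: each concept's next value is a squashed
    weighted sum of the current concept values."""
    n = len(state)
    nxt = []
    for j in range(n):
        s = sum(weights[i][j] * state[i] for i in range(n))
        nxt.append(1.0 / (1.0 + math.exp(-s)))  # sigmoid squashing
    return nxt

def fcm_run(state, weights, steps=200, tol=1e-6):
    """Iterate until the map converges to a fixed point (or steps run out)."""
    for _ in range(steps):
        nxt = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state

# Hypothetical 3-concept map: infrastructure -> housing -> well-being,
# with a weak feedback from well-being to infrastructure
W = [[0.0, 0.6, 0.0],
     [0.0, 0.0, 0.8],
     [0.2, 0.0, 0.0]]
final = fcm_run([0.5, 0.5, 0.5], W)
```

Tracing which concepts settle high or low after a simulated intervention is what lets planners spot the cascading effects the abstract describes.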
Stepper motors are often used in automated laser cutting systems. The control circuit of a stepper motor requires a special electronic device, a driver, which receives logic signals as input and changes the current in the motor windings to provide the required motion parameters. This study evaluated three stepper motor drivers (PLDS880, OSM-42RA, OSM-88RA) to determine the feasibility of their use. To control the system, software was written and connected to the controller via a link board. For each driver, in different modes, optimal parameters were selected (initial speed, final speed, and acceleration), i.e., those that moved the carriage without stalling for ten passes in the minimum travel time. The results of the experiments are presented as tables.
Keywords: laser, laser cutting, automation, technological process, stepper motor, performance, driver, controller, control circuit, optimal parameters
In this paper, we review and analyze various time series forecasting models using data collected from mobile IoT devices. The focus is on models describing the behavior of traffic in telecommunication systems. The forecasting methods covered include exponential smoothing, linear regression, the autoregressive integrated moving average (ARIMA), and N-BEATS, which uses fully connected neural network layers to forecast univariate time series. The article briefly describes the features of each model, examines their training, and compares the quality of the fitted models. Based on the data analysis, it was found that ARIMA fits best for UDP traffic, linear regression for TCP traffic, and ARIMA for HTTPS traffic.
Keywords: telecommunication systems, traffic analysis, forecasting models, QoS, artificial intelligence, linear regression, ARIMA, Theta, N-BEATS
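One of the reviewed methods, simple exponential smoothing, is compact enough to show concretely. The per-minute traffic figures below are invented for illustration; the paper's IoT measurements are not reproduced here:

```python
def exp_smooth_forecast(series, alpha=0.3):
    """Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1};
    the one-step-ahead forecast is the final level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical per-minute traffic volumes (packets)
traffic = [120, 130, 128, 140, 150, 149, 160, 158]
forecast = exp_smooth_forecast(traffic, alpha=0.5)
```

Because the forecast is a convex combination of past values, it always lies inside the range of the observed series; the smoothing constant alpha trades responsiveness against noise suppression.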
This article explores the introduction and implementation of neural network models in agriculture, with an emphasis on their use in smart greenhouses. Smart greenhouses are innovative systems for controlling the microclimate and other factors affecting plant growth. Using neural networks trained on soil moisture, temperature, illumination, and other parameters, future indicators can be predicted with high accuracy. The article discusses the stages of data collection and preparation, the training of the neural networks, and the practical implementation of this approach. The results of the study highlight the prospects for introducing neural networks into the agricultural sector and their important role in optimizing plant growth and increasing the productivity of agricultural enterprises.
Keywords: neural network, predicting indicators, smart greenhouse, artificial intelligence, data modeling, microclimate
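As a minimal stand-in for the kind of predictor described, here is a single linear neuron trained by batch gradient descent on made-up greenhouse readings. The features, the target rule (t = 0.1 + 0.5*moisture + 0.3*illumination), and the learning rate are assumptions for illustration, not the article's actual network:

```python
def train_linear(X, y, lr=0.5, iters=5000):
    """Batch gradient descent on mean squared error for one linear neuron."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(iters):
        # residuals for the current weights
        errs = [sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
                for xi, yi in zip(X, y)]
        for j in range(d):
            w[j] -= lr * sum(e * xi[j] for e, xi in zip(errs, X)) / n
        b -= lr * sum(errs) / n
    return w, b

# Hypothetical rows: (soil moisture, illumination) -> next-hour temperature,
# all scaled to [0, 1]; targets generated from t = 0.1 + 0.5*m + 0.3*l
X = [(0.2, 0.9), (0.4, 0.4), (0.7, 0.8), (0.9, 0.2)]
y = [0.47, 0.42, 0.69, 0.61]
w, b = train_linear(X, y)
preds = [sum(wj * xj for wj, xj in zip(w, xi)) + b for xi in X]
```

A real smart-greenhouse model would of course use more features, nonlinear layers, and held-out validation data, but the training loop has the same shape.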
The article is dedicated to analyzing methods for processing experimental data in the field of improving manufacturing technologies, with a focus on the processing of titanium alloys. In the modern context of natural science experiments, characterized by large volumes of information, there is a need to apply mathematical methods and computational systems for effective data analysis. Research conducted at the Department of "High-Performance Processing Technologies" at MSTU "Stankin" aims to identify the optimal type of wear-resistant coating for cutting tools when processing titanium plates. Titanium, with its low thermal conductivity and high chemical reactivity, presents certain challenges during processing, necessitating a thorough investigation of the relationships between cutting conditions and process parameters. The article discusses existing problems related to the insufficient study of these relationships and proposes a solution for data storage and processing through the creation of a unified database. This will enhance the visualization and analysis of experimental results, thereby improving the efficiency of research and the quality of manufactured products. The results of this work may be beneficial for both scientific research and practical applications in industry.
Keywords: technological process research, cutting tool, cutting mode characteristics, systems analysis
The study is devoted to the development of models, algorithms, and software for computer training complexes (CTCs) used to train developers of automated information systems (AIS). The automated control of students' knowledge and skills with a CTC while studying the mathematical support of AIS (using fuzzy modeling as an example) is formalized with IDEF0 diagrams, as is the assessment of exercise performance, one of the control components. The advantage of a CTC is that the teacher does not need to develop individual exercise variants: the CTC configures the structure and complexity of the exercise and then automatically generates a unique variant for each student being tested on the topic under study. The student's performance is checked automatically by comparing the mathematical model of the student's solution with the reference solution generated by the CTC from the problem statement. Algorithms for assessing task performance in fuzzy modeling exercises have been developed. A prototype CTC has been created as a web system with personal accounts for the teacher and the student. The developed concept and knowledge control algorithms for fuzzy modeling can be adapted to various disciplines covering the mathematical, software, information, and other types of support of AIS.
Keywords: automated information systems, mathematical support, fuzzy modeling, computer training complex, e-learning, distance learning
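The automatic check described, comparing the student's mathematical model with a reference, could in the fuzzy modeling case reduce to a similarity measure between discrete fuzzy sets. The Jaccard-style measure and the linguistic terms below are one plausible sketch, an assumption rather than the CTC's actual algorithm:

```python
def fuzzy_similarity(a, b):
    """Jaccard-style similarity of two discrete fuzzy sets given as
    {element: membership} dicts: sum of min memberships / sum of max."""
    keys = set(a) | set(b)
    inter = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    union = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    return inter / union if union else 1.0

# Hypothetical reference solution vs. a student's answer
reference = {"low": 0.2, "medium": 0.8, "high": 0.4}
student = {"low": 0.1, "medium": 0.9, "high": 0.4}
score = fuzzy_similarity(reference, student)
```

A grading rule could then map the similarity score to a mark, e.g. full credit above 0.95 and partial credit on a sliding scale below.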
The paper presents a refined regression model of water level dynamics in the Siberian river Iya, whose right side includes six natural factors (the number of days with precipitation in the Sayan Mountains, average day and night temperatures for the month, the amount of precipitation, snow depth, and average atmospheric pressure for the month), taking lag into account, as well as a specially generated seasonal variable. The high adequacy of the model is indicated by the values of the coefficient of multiple determination, Fisher's criterion, and the average relative approximation error. The constructed model can be used effectively for a wide range of forecasting problems.
Keywords: regression model, river water level, lag time, seasonal variable, forecast
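A minimal version of a lagged regression like the one described fits in a few lines. The precipitation and level figures are fabricated for illustration, and this sketch has a single lagged predictor, unlike the paper's six-factor model:

```python
def simple_ols(x, y):
    """Closed-form simple linear regression: y ~ b0 + b1 * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    return my - b1 * mx, b1

# Hypothetical data: monthly precipitation (mm) and river level (cm),
# with the level responding one month after the rain (lag = 1)
precip = [30, 80, 45, 90, 60, 100]
level = [210, 215, 265, 230, 275, 245, 285]
lag = 1
x = precip            # predictor observed `lag` months earlier
y = level[lag:]       # response aligned with the lagged predictor
b0, b1 = simple_ols(x, y)
```

Shifting the predictor series against the response before fitting is exactly how a lag enters an otherwise ordinary regression; with several factors the same alignment is done per factor before a multiple regression fit.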
This paper examines and compares two neural networks, U-Net-Attention and SegGPT, which use different attention mechanisms to find relationships between different parts of the input and output data. The U-Net-Attention architecture is a dual-layer attention U-Net, an efficient neural network for image segmentation. It has an encoder and a decoder with skip connections between layers, which convey information about the local properties of feature maps. To improve segmentation quality, an attention layer is added to the original U-Net architecture, which strengthens the search for the image features of interest. The SegGPT model is based on the Vision Transformer architecture and also uses an attention mechanism. Both models focus attention on the important aspects of a problem and can be effective across a variety of tasks. In this work, we compared their performance on segmenting cracks in road surface images in order to classify the overall condition of the road surface. The paper also analyzes and draws conclusions about the applicability of transformer architectures to a wide range of problems.
Keywords: machine learning, Transformer neural networks, U-Net-Attention, SegGPT, roadway condition analysis, computer vision
The article is devoted to managing the implementation of multi-scenario, multi-stage projects under uncertainty. The proposed approach is based on representing the project model as a scenario network. The developed fuzzy linguistic model of a project stage is a set of linguistic variables corresponding to the stage indicators and the external factors influencing the subsequent implementation of the project. The decision rules for choosing the transition arc to the next stage are constructed as fuzzy production rules whose antecedents are fuzzy statements about the preference among the possible options. The constructed decision support procedure is based on the Mamdani fuzzy inference algorithm, which has high interpretability. The proposed approach enables multi-scenario planning and adaptive management of the implementation of multi-stage projects.
Keywords: multi-scenario multi-stage projects, adaptive project management, scenario network, decision support, linguistic variable, fuzzy inference
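The Mamdani inference step the abstract mentions can be sketched with two hypothetical rules for a "move to the next stage" preference. The linguistic variables, triangular membership functions, and rule base below are illustrative assumptions, not the paper's actual model:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani(progress, risk):
    """Two-rule Mamdani sketch (inputs and output on a [0, 1] scale).
    Rule 1: IF progress is high AND risk is low  THEN preference is high.
    Rule 2: IF progress is low  OR  risk is high THEN preference is low."""
    prog_hi = tri(progress, 0.4, 1.0, 1.6)
    prog_lo = tri(progress, -0.6, 0.0, 0.6)
    risk_lo = tri(risk, -0.6, 0.0, 0.6)
    risk_hi = tri(risk, 0.4, 1.0, 1.6)
    w1 = min(prog_hi, risk_lo)   # AND -> min
    w2 = max(prog_lo, risk_hi)   # OR  -> max
    # Centroid defuzzification over a coarse output grid
    num = den = 0.0
    for i in range(101):
        z = i / 100.0
        mu = max(min(w1, tri(z, 0.4, 1.0, 1.6)),   # clipped 'high'
                 min(w2, tri(z, -0.6, 0.0, 0.6)))  # clipped 'low'
        num += mu * z
        den += mu
    return num / den if den else 0.5
```

The defuzzified preference for each outgoing arc of the current stage can then be compared to pick the transition, which is what makes the inference chain easy to explain to a project manager.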
The article presents an analytical study of the development of small and medium-sized business structures engaged in engineering work: surveys, preparation for the production of building materials and semi-finished products, project sections, and participation in the commissioning of facilities, for 2012-2022. During this period, the number of small and medium enterprises in the Russian Federation increased by 224 thousand. In the Central Federal District (which includes the Tula Region), the increase was 31.8%, while their growth in construction amounted to 6.39%. However, the trend reversed from 2019 to 2022, when the number of entrepreneurs decreased significantly, by 457 thousand. In this regard, the authors analyze the state, dynamics, and content of the activities of small and medium-sized business structures in construction and develop proposals to improve development efficiency. The main attention is paid to specialization, the reasons restraining the growth of business services, and the economic results of their work.
Keywords: business planning, specialization, planning, project management, building complex
The paper presents an overview of automatic text summarization. The problem of automatic text summarization is formulated, and algorithms are classified by the type of summary produced and by the approach to solving the problem. Some open problems in the field and the disadvantages of certain classes of algorithms are described. The concepts of summary quality and information completeness are defined. The most popular approaches to assessing the information completeness of a summary are considered and classified according to the methodology used. The ROUGE family of metrics is considered in relation to automatic text summarization. Special attention is paid to evaluating the information completeness of a summary using such information proximity metrics as the Kullback-Leibler divergence, the Jensen-Shannon divergence, and the cosine distance (similarity). These metrics can be applied to vector representations of the initial text and the summary, which can be produced by methods such as frequency vectorization, TF-IDF, static vectorizers, and so on.
Keywords: automatic summarization, summary, information completeness, ROUGE, vectorization, TF-IDF, static vectorizer, Kullback-Leibler divergence, Jensen-Shannon divergence, cosine distance
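The divergence and cosine measures discussed can be applied directly to term frequency vectors of a text and its summary. The toy texts below are illustrative; a real pipeline would use TF-IDF or embedding vectors, as the abstract notes:

```python
import math
from collections import Counter

def tf_vector(text, vocab):
    """Normalized term-frequency vector over a fixed vocabulary."""
    c = Counter(text.lower().split())
    total = sum(c[w] for w in vocab) or 1
    return [c[w] / total for w in vocab]

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q), smoothed to avoid log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: symmetrized, bounded variant of KL."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cosine(p, q):
    """Cosine similarity of two vectors."""
    dot = sum(pi * qi for pi, qi in zip(p, q))
    na = math.sqrt(sum(pi * pi for pi in p))
    nb = math.sqrt(sum(qi * qi for qi in q))
    return dot / (na * nb) if na and nb else 0.0

source = "the model forecasts traffic and the model is trained on traffic"
summary = "the model forecasts traffic"
vocab = sorted(set(source.split()) | set(summary.split()))
p, q = tf_vector(source, vocab), tf_vector(summary, vocab)
```

A smaller JS divergence (or larger cosine similarity) between the two distributions indicates that the summary preserves more of the source's term statistics, which is exactly the information-completeness reading of these metrics.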
The paper considers using the screen of an aircraft's collimator system as a means of providing the pilot with information about the vertical profile of the flight path in poor visibility conditions at low and extremely low piloting altitudes.
Keywords: low flight altitude, extremely low flight altitude, threat of collision, collimator, virtual elevation map, virtual reality, augmented reality, artificial intelligence, data fusion, pilot assistance system
The historical aspects of the noise-resistant image coding problem are considered using the example of delivering photographs of the surface of Mars to Earth. Using the generalization of orthogonal matrices to quasi-orthogonal ones as an example, it is shown how the set of matrices available for image conversion for transmission over noisy communication channels can be expanded.
Keywords: Hadamard matrices, Hadamard coding, Reed-Solomon codes, orthogonal matrices, quasi-orthogonal matrices, noise-resistant image encoding
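For context, the orthogonality property that makes Hadamard matrices useful for noise-resistant coding is easy to demonstrate with the classical Sylvester construction, shown here as a standard background example (the abstract's quasi-orthogonal generalization goes beyond these power-of-two orders):

```python
def sylvester(n):
    """Sylvester construction: H_{2k} = [[H, H], [H, -H]], n a power of two."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H]
             + [row + [-v for v in row] for row in H])
    return H

def is_orthogonal(H):
    """Rows of a Hadamard matrix are pairwise orthogonal: H * H^T = n * I."""
    n = len(H)
    for i in range(n):
        for j in range(n):
            dot = sum(H[i][k] * H[j][k] for k in range(n))
            if dot != (n if i == j else 0):
                return False
    return True

H4 = sylvester(4)
```

Row orthogonality is what spreads each image symbol across many chips so that channel noise averages out on decoding, the property the Mars transmissions exploited.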
This article reviews and analyzes modern methods and technologies used in anti-plagiarism systems, with an emphasis on the Russian market, with the aim of choosing a suitable anti-plagiarism system for integration. The article presents the most popular Russian services for detecting borrowings, their business models, their algorithms of operation, and a general description of the principles and mechanisms underlying these algorithms. It was determined that the most universal and effective system for detecting borrowings is the Antiplagiat.ru service, since it supports integration via an API and offers 34 additional modules that allow the system's functionality to be adapted to individual needs.
Keywords: antiplagiarism, text analysis, text processing algorithms, semantic analysis, stylistic analysis
This article discusses the basic principles and design patterns of an application for collecting data from third-party sources. Various methods of obtaining data were studied, including web scraping, the use of APIs, and file parsing. Various approaches to extracting information from structured and unstructured sources are also described.
Keywords: internet sources, API, parsing, web, headless browser, scraping, etag, data collection
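As a small, self-contained example of the extraction step in such an application, here is a link extractor built on Python's standard library HTML parser. The page string is a made-up stand-in; real scraping code would add fetching, rate limiting, and error handling:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical fetched page
page = ('<html><body><a href="/report.csv">data</a>'
        '<a href="https://example.org/api">api</a></body></html>')
parser = LinkExtractor()
parser.feed(page)
```

The same event-driven pattern (override a handler, feed markup, accumulate results) extends naturally to extracting tables or metadata from structured sources.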
Road surface quality assessment is one of the most pressing tasks worldwide. Many systems address it, mostly by working with images of the roadway. They are based both on traditional methods (without machine learning) and on machine learning algorithms. Traditional approaches include, for example, methods for edge detection in images, which are the object of this study. Each algorithm has its own characteristics; some, for instance, produce a processed version of the original photo faster. The following methods were selected for analysis: the Canny algorithm, the Kirsch operator, the Laplace operator, the Marr-Hildreth algorithm, the Prewitt operator, and the Sobel operator. The main performance indicator in the study is the average time to obtain the processed photo. The experimental material is 10 different images of the road surface in 5 sizes (1000x1000, 894x894, 775x775, 632x632, 447x447) in bmp, jpg, and png formats. The study found that the Kirsch, Laplace, Prewitt, and Sobel operators exhibit linear time complexity, O(n), while the Canny and Marr-Hildreth algorithms are quadratic, O(n²). The best results are demonstrated by the Prewitt and Sobel operators.
Keywords: comparison, effectiveness, method, edge detection, image, photo, road surface, dependence, size, format
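For context, here is what one of the compared methods, the Sobel operator, computes: a per-pixel gradient magnitude obtained by convolving two 3x3 kernels. The tiny step-edge image is illustrative; a production version would use a vectorized library rather than nested loops:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude with the Sobel operator.
    `img` is a 2D list of grayscale values; border pixels are left at zero."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: left half dark, right half bright
img = [[0, 0, 255, 255] for _ in range(4)]
edges = sobel_magnitude(img)
```

One fixed-size convolution per pixel is why Sobel (like Prewitt and Kirsch) scales linearly in the number of pixels, the O(n) behavior the study measures, whereas Canny and Marr-Hildreth add smoothing and multi-stage processing on top.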
Unintentional errors occur in all data transmission channels. The standard way to deal with them is to use noise-resistant codecs based on algebraic error correction codes. In some transmission channels a special type of error occurs: erasures, i.e., errors whose location is known but whose value is not. Coding theory states that error-control methods can be applied to protect data from erasures; however, these statements are usually not accompanied by details. This work fills that gap. Algorithms for correcting erasures using arbitrary decoders for error correcting codes are constructed, lemmas about the correctness of the constructed algorithms are formulated, and some estimates of the probability of successful decoding are obtained.
Keywords: channels with erasures, noise-resistant code, algebraic code, error correction code decoder, erasure correction algorithm
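The defining feature of an erasure, that its location is known even though its value is lost, is what makes recovery tractable. A minimal illustration with a single XOR parity block (far simpler than the algebraic decoders the paper builds on) shows the principle:

```python
def xor_parity(blocks):
    """Parity block: bytewise XOR of all data blocks (equal length assumed)."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover_erasure(blocks, parity, erased_index):
    """Recover a single erased block: XOR the parity with every surviving
    block. This works precisely because the erased position is known."""
    rec = bytearray(parity)
    for idx, block in enumerate(blocks):
        if idx == erased_index:
            continue
        for i, b in enumerate(block):
            rec[i] ^= b
    return bytes(rec)

data = [b"ab", b"cd", b"ef"]
p = xor_parity(data)
restored = recover_erasure(data, p, erased_index=1)
```

A code that corrects t unknown-position errors can correct 2t known-position erasures, which is the kind of trade-off the constructed algorithms formalize for arbitrary decoders.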
The problem of vulnerabilities in the Robot Operating System (ROS) when implementing a multi-agent system based on the Turtlebot3 robot is considered. ROS provides powerful tools for communication and data exchange between the various components of a system. However, when data is exchanged between Turtlebot3 robots, vulnerabilities may arise that attackers can use for unauthorized access or attacks on the system. One possible vulnerability is the interception and substitution of data between robots: an attacker can intercept the data, change it, and resend it, which can lead to unpredictable consequences. Another is unauthorized access to the commands and control of the Turtlebot3 robots, which can lead to loss of control over the system. To address these vulnerabilities, methods of protection against the security threats arising during the operation of such systems have been developed and presented.
Keywords: Robotic operating system (ROS), multi-agent system, system packages, encryption, SSL, TLS, authentication and authorization system, communication channel, access restriction, threat analysis, Turtlebot3
The article discusses the author's methodology for designing and developing a test data generation tool called "QA Data Source", which can later be used in software testing. The paper describes the basic requirements, application functionality, data model, and usage examples. When describing the application, methods of system analysis and modeling of information processes were used. As a result of the application of the proposed model for the implementation of information processes, it is possible to significantly reduce the time and resources for generating test data and subsequent product testing.
Keywords: quality assurance, software testing, test data, information technology, data generation, databases, application development
The work is devoted to the problem of providing electrical energy to remote production enterprises in the absence of a centralized power supply. The purpose of the work is to develop decision support tools for choosing autonomous power generation projects from a large number of possible alternatives. To achieve this purpose, a hierarchy of criteria was constructed and a comparative analysis of existing technical and economic solutions in the field of small-scale autonomous energy was carried out. It is shown that when choosing a power generation project for a particular enterprise, there is a fairly large number of alternatives, which makes the use of commonly used decision support procedures based on the hierarchy analysis method/analytical network method (in the classical version) ineffective. An iterative procedure with dynamic changes in feedback between criteria and alternatives is proposed, which makes it possible to reduce the dimension of the supermatrix during the calculation process and, thereby, reduce the time complexity of the algorithms. The effectiveness of the proposed modification of the analytical network method is confirmed by calculations. The constructed procedure for selecting an autonomous power generation project makes it possible to increase the level of scientific validity of technical and economic decisions when expanding the production activities of small enterprises in remote and sparsely populated areas.
Keywords: autonomous power system, decision support, analytical network method
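For context, the priority vector at the heart of the hierarchy/network analysis methods mentioned can be computed by power iteration on a pairwise comparison matrix. The alternatives and judgment values below are invented for illustration and are not the paper's data:

```python
def priority_vector(M, iters=100):
    """Principal-eigenvector priorities of a pairwise comparison matrix,
    computed by power iteration and normalized to sum to 1."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]
    return v

# Hypothetical comparison of three power-generation alternatives
# on a single criterion (Saaty-style 1-9 judgments, reciprocal matrix)
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
w = priority_vector(M)
```

In the analytic network method these local priorities are assembled into a supermatrix over all criteria and alternatives; the paper's contribution is a procedure that shrinks that supermatrix iteratively when the number of alternatives is large.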
Among the wide range of tasks facing modern video surveillance systems, a dominant position is occupied by tracking objects in a video stream, one of the fundamental problems of video analytics. Numerous studies have shown that, despite rapid progress in information technology and the introduction of various tools and methods, object tracking remains relevant: previously developed algorithms need further improvement to eliminate their inherent disadvantages, existing techniques need systematization, and new systems and approaches need to be developed. This article describes the step-by-step development of an algorithm for tracking human movements in a video stream based on the analysis of color groups. Its key stages are: selecting frames from the video stream; selecting the object under study, which is then digitally processed to obtain information about its color groups, their average values, and the percentage of the object each group occupies; and using this information to search for, detect, and recognize the selected object, with an additional function that predicts the direction of movement across video frames, yielding a complete picture of the tracked person's movement. The materials presented may be of interest to specialists whose research concerns the automated extraction of data from images and video.
Keywords: surveillance cameras, U2-Net neural network, rembg library, pattern recognition, clothing recognition, delta E, tracing, direction prediction, object detection, tracking, mathematical statistics, predicted area, RGB pixels
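The delta E measure listed in the keywords is, in its CIE76 form, simply the Euclidean distance between CIELAB color triples. The sketch below, with hypothetical clothing color groups, shows how such a distance can assign a pixel to the nearest reference group; it is an illustration, not the article's algorithm:

```python
def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between CIELAB triples."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

def nearest_group(lab, groups):
    """Assign a Lab value to the closest reference colour group."""
    return min(groups, key=lambda name: delta_e_cie76(lab, groups[name]))

# Hypothetical reference groups for a tracked person's clothing (L, a, b)
groups = {"red jacket": (53.2, 80.1, 67.2),
          "blue jeans": (32.3, 79.2, -107.9)}
pixel = (50.0, 75.0, 60.0)
label = nearest_group(pixel, groups)
```

Working in Lab rather than raw RGB makes the distance roughly perceptually uniform, which is why delta E is the usual choice when matching color signatures between frames.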
Currently, patent documents contain graphic images of device drawings, graphs, and chemical and mathematical formulas, and the formulas often need to be recognized and brought to a unified standard. In this work, graphic images extracted from the descriptions of patents of the FIPS of Rospatent are analyzed. Thematic filtering of the mathematical and chemical formulas contained in patent documents and their recognition are provided. The theoretical value lies in the developed algorithms for parsing patents in the Yandex.Patents system; recognizing chemical and mathematical formulas among graphic patent images; translating graphic images of chemical formulas into SMILES format; and converting graphic images of mathematical formulas into LaTeX format. The practical significance of the work lies in the developed software module for analyzing graphic images from patent documents. The field of application of the developed system is the study of patents and the reduction of graphic images to a unified standard for solving patent search problems.
Keywords: patent, image, mathematical formula, chemical formula, LaTeX, SMILES
The article discusses the use of a recurrent neural network for forecasting air pollutants based on actual data in the form of a time series. A description of the network architecture, the training method used, and the method for generating the training and testing data is provided. Training used a data set consisting of 126 measurements of various components. Finally, the quality of the resulting model's predictions was assessed and averaged values of the MSE metric were calculated.
Keywords: air pollution, forecasting, neural networks, machine learning, recurrent network, time series analysis
The paper analyzes various approaches to detecting and recognizing license plates in intelligent transport networks. A deep learning model is proposed for localizing and recognizing license plates in natural images that achieves satisfactory results in recognition accuracy and speed compared to traditional methods. Evaluations of the effectiveness of the deep learning model are provided.
Keywords: VANET, intelligent transport networks, YOLO, city traffic management system, steganography, deep learning, information security, convolutional neural network, CNN
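A standard ingredient in evaluating the localization accuracy of a detector such as the one described is intersection over union (IoU) between predicted and ground-truth bounding boxes. This is a generic metric shown for context, not the paper's specific evaluation protocol:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted plate is typically counted as a correct localization when its IoU with the ground-truth box exceeds a threshold such as 0.5, which is how accuracy figures for YOLO-style detectors are usually reported.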
The article presents aspects of the development of a device for wirelessly picking up a vibration acceleration signal from the surface of a ball mill drum. The results of measuring vibration acceleration on a ball mill model at various levels of loading with crushed material are presented. According to these results, as the load of crushed material increases relative to the ball load, the vibration level decreases. The work also presents pie charts of the distribution of the vibration load across the mill drum, from which its current operating mode can be judged.
Keywords: ball mill, wireless signal, vibration acceleration, mill loading control