


No 3 (2024)
COMPUTER GRAPHICS AND VISUALIZATION
Specifics of the Development of an On-Board Visualization System for Civil Aircraft
Abstract
The instrument panels of modern aircraft are created using the “glass cockpit” concept. This interface philosophy improves the perception of important flight information by displaying it on a single multi-function display. The paper considers the problems that arise when developing a certified pilot-display visualization system designed for operation on civil aircraft under the Russian real-time operating system JetOS. Several algorithmic solutions that achieve acceptable visualization speed are presented. In particular, a solution to the problem of rigid scheduling of operating-system partitions, which overcomes the degradation of rendering speed, is described in detail. Directions for further work are outlined.



Automatic Image Style Transfer Using an Augmented Style Set
Abstract
Image style transfer is the applied task of automatically rendering an original image (the content) in the style of another image (which specifies the target style). Traditional image stylization methods provide only a single stylization result; if the user is not satisfied with it because of stylization artifacts, they have to choose a different style. This work proposes a modified stylization algorithm that yields a variety of stylization results and achieves improved stylization quality by using additional style information drawn from similar styles.



Influence of Non-Equilateral Apertures of the "Truncated Pyramid" and "Double Pyramid" Laplacian Digital Filters on the Accuracy of Television Measuring Systems
Abstract
In the modern world, digital image processing demands ever faster processing methods and algorithms. One way to improve performance is to transform spatial filters into a recursively separable form of implementation. The recursion property implies the use of previous output values of a function to form the current sample; separability is understood as splitting the processing into column and row passes over the matrix of digital image values. The transformation of spatial filters consists of changing the aperture of the masks into a non-orthogonal (non-equilateral) form, which reduces the number of computational operations and speeds up processing while maintaining its efficiency. The paper describes the non-equilateral apertures of the previously developed “truncated pyramid” and “double pyramid” Laplacian digital filters. For these non-equilateral apertures, results on their use in television measuring systems were obtained for the first time. They show that the “truncated pyramid” Laplacian filter with non-equilateral processing apertures is recommended for use in such systems, since it increases the efficiency of measuring the range to objects of interest while reducing processing time. Processing with the modified filters produced a set of processed images for each of the 10 original images. For each set, the peak signal-to-noise ratio and standard deviation were measured and the optimal central coefficient of the filter mask was selected, in order to subsequently assess the effectiveness of processing with the modified filters.
The influence of recursively separable “truncated pyramid” and “double pyramid” Laplacian filters with non-equilateral aperture masks on the accuracy of television measuring systems was assessed by examining their effect on measuring the distance from the camera to the object of interest in the image while controlling processing time. Based on the evaluation results, we can conclude that pre-processing images with the modified digital filters improves the accuracy of measuring the distance from the camera to the measurement object while reducing processing time.
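As an illustration of the separability idea described above (a generic sketch, not the authors' "truncated pyramid" or "double pyramid" filters), the code below shows that convolving with a 2-D mask that factors into an outer product of two 1-D kernels can be computed as a row pass followed by a column pass, reducing the per-pixel cost from k² to 2k multiplications. The symmetric kernel here is hypothetical, chosen only for the demonstration.

```python
import numpy as np

def conv2d_full(img, kernel):
    """Direct 2-D filtering with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def conv_separable(img, col, row):
    """Row pass then column pass with 1-D kernels.
    For symmetric kernels this equals filtering with np.outer(col, row)."""
    tmp = np.apply_along_axis(
        lambda r: np.convolve(r, row, mode="same"), 1, img.astype(float))
    return np.apply_along_axis(
        lambda c: np.convolve(c, col, mode="same"), 0, tmp)
```

For a k×k mask this replaces k² multiply-adds per pixel with 2k, which is the main source of the speed-up that separable implementations exploit.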



Conversion of Point Cloud Data to 3D Models Using PointNet++ and Transformer
Abstract
This work presents an approach to reconstructing 3D models from point cloud data based on modern neural network architectures, with PointNet++ and a Transformer at its core. PointNet++ plays the central role, providing efficient feature extraction and encoding of complex 3D scene geometry; this is achieved by recursively applying PointNet++ to nested partitions of the input point set in metric space. Convex decomposition, an important step in the approach, transforms complex three-dimensional objects into a set of simpler convex shapes, which simplifies data processing and makes the reconstruction process more manageable. The Transformer is then trained on these features, allowing the generation of high-quality reconstructions; notably, the Transformer is used exclusively to determine the positions of walls and object boundaries. This combination of technologies achieves high accuracy in the reconstruction of 3D models. The main idea of the method is to segment the point cloud into small fragments, which are then restored as polygonal meshes. To restore missing points in the point cloud data, a method based on the L1-median algorithm and local point cloud features is used; this approach can adapt to various geometric structures and correct topological connection errors. The proposed method was compared with several modern approaches and showed its potential in various fields, including architecture, engineering, digitization of cultural heritage, and augmented and mixed reality systems, which underscores its wide applicability and significant potential for further development.



The Method to Order Point Clouds for Visualization on the Ray Tracing Pipeline
Abstract
Currently, the digitization of environment objects (vegetation, terrain, architectural structures, etc.) in the form of point clouds is actively developing. Integrating such digitized objects into virtual environment systems improves the quality of the modeled environment, but requires efficient methods and algorithms for real-time visualization of large point volumes. This paper investigates solving this task on modern multi-core GPUs with support for hardware-accelerated ray tracing. A modified method is proposed in which the original unordered point cloud is split into point groups whose visualization is efficiently parallelized on ray tracing cores. The paper describes an algorithm for constructing such groups using swapping arrays of point indices, which works faster than alternative solutions based on linked lists and has lower memory overhead. The proposed method and algorithm were implemented in a point cloud visualization software complex and tested on a number of digitized environment objects. The results confirmed the efficiency of the proposed solutions and their applicability to virtual environment systems, video simulators, geoinformation systems, virtual laboratories, etc.
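The abstract does not detail the grouping algorithm itself. As a rough illustration of the general idea (splitting an unordered cloud into bounded-size, spatially coherent groups by reordering a single index array, rather than maintaining per-point linked lists), the sketch below uses a hypothetical recursive median split; it is not the authors' algorithm:

```python
import numpy as np

def build_groups(points, max_group_size=8):
    """Split a point cloud into spatially coherent groups by recursively
    halving an index array along the longest bounding-box axis.
    Only the index array is reordered; point data stays in place."""
    idx = np.arange(len(points))
    groups = []

    def split(lo, hi):
        if hi - lo <= max_group_size:
            groups.append(idx[lo:hi].copy())
            return
        sub = points[idx[lo:hi]]
        axis = np.argmax(sub.max(axis=0) - sub.min(axis=0))
        order = np.argsort(sub[:, axis], kind="stable")
        idx[lo:hi] = idx[lo:hi][order]   # reorder indices, not points
        mid = (lo + hi) // 2
        split(lo, mid)
        split(mid, hi)

    split(0, len(points))
    return groups
```

Working on one flat index array keeps memory overhead low and cache behavior predictable, which is consistent with the advantages over linked-list schemes claimed in the abstract.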



Dual Representation of Geometry for Ray Tracing Acceleration in Optical Systems with Freeform Surfaces
Abstract
This paper explores the use of a dual geometry representation to improve the speed of ray tracing and ensure the robustness of light-propagation simulations in complex optical systems containing freeform surfaces defined by high-order polynomials (up to order 34) or Jacobi polynomials. Traditional methods of representing such geometry, both as a triangular mesh and as an analytical expression, were analyzed. The analysis demonstrated the disadvantages of both approaches: insufficient accuracy when computing the coordinates of the intersection point of a ray with a triangular mesh, and instability of existing calculation methods when searching for the hit point of tangent rays on an analytically defined surface. As a result, a dual representation of the geometry was proposed: a rough approximation of the surface by a triangular mesh, which is subsequently used as an initial approximation for finding the point where the ray hits the surface specified by the analytical expression. This solution significantly speeds up the convergence of the analytical methods and increases their stability. Moreover, using the Intel® Embree library to quickly find the intersection of a ray with the coarse triangular mesh, and a vector calculation model to refine the coordinates of the intersection of the ray with the analytically represented geometry, allowed the authors to develop and implement a ray tracing algorithm for an optical system containing surfaces with a dual geometry representation. Experiments with the implemented algorithm show significant acceleration of ray tracing while maintaining computational accuracy and high stability of the results. The results were demonstrated by calculating the point spread function and flare spread function for two lenses with freeform surfaces defined by Jacobi polynomials. In addition, for these two lenses, the image formed from an RGB-D object simulating a real scene was calculated.
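The core idea, using the coarse-mesh hit as a starting point for an analytic intersection, can be sketched as a one-dimensional Newton iteration along the ray. The paraboloid sag function and the hard-coded initial guess below are illustrative stand-ins, not the paper's polynomial surfaces, its vector calculation model, or the Embree traversal:

```python
import numpy as np

def refine_hit(origin, direction, sag, t0, iters=20, tol=1e-12):
    """Newton refinement of a ray-surface hit.
    The surface is z = sag(x, y); t0 is the ray parameter of the hit on a
    coarse triangle mesh of the same surface, used as the initial guess.
    We solve g(t) = z(t) - sag(x(t), y(t)) = 0 along the ray."""
    t = t0
    h = 1e-7                                  # step for the numerical derivative
    for _ in range(iters):
        p = origin + t * direction
        g = p[2] - sag(p[0], p[1])
        p2 = origin + (t + h) * direction
        dg = ((p2[2] - sag(p2[0], p2[1])) - g) / h
        if abs(dg) < 1e-14:                   # tangent ray: derivative vanishes
            break
        t_new = t - g / dg
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```

Because the mesh hit already lies close to the true surface, the Newton iteration typically converges in a handful of steps, which is exactly the acceleration the dual representation is designed to deliver.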



DATA ANALYSIS
Neural Network Method for Detecting Blur in Histological Images
Abstract
In this paper we consider the problem of detecting blurred regions in high-resolution whole-slide histological images. The proposed method is based on a Fourier neural operator trained on the results of two simultaneously used approaches: blur detection via multiscale analysis of discrete cosine transform coefficients, and estimation of the sharpness of object edges in the image. The efficiency of the algorithm is confirmed on images from the PATH-DT-MSU [1] and FocusPath [2] datasets.
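To give a flavor of the DCT-based component (a generic single-scale illustration, not the authors' multiscale analysis), the sketch below scores an image block by the share of its DCT energy lying outside a low-frequency corner; blurring removes high-frequency energy, so blurred blocks score lower:

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of a square block via an explicit cosine basis matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def high_freq_ratio(block, cutoff=2):
    """Share of DCT coefficient magnitude outside the low-frequency corner,
    ignoring the DC term. Values near 0 suggest a blurred block."""
    c = np.abs(dct2(block.astype(float)))
    total = c.sum() - c[0, 0]
    low = c[:cutoff, :cutoff].sum() - c[0, 0]
    return 0.0 if total == 0 else (total - low) / total
```

A practical detector would evaluate such a score over blocks at several scales and feed the resulting maps to the learned model, as the abstract describes.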



Joint Super-Resolution and Tissue Patch Classification for Whole-Slide Histological Images
Abstract
Segmentation of whole-slide histological images through the classification of tissue types in small fragments is a highly relevant task in digital pathology, necessary for developing methods for automatic analysis of whole-slide histological images. The extremely large resolution of such images also makes the task of increasing image resolution relevant: it allows storing images at a reduced resolution and upscaling them when necessary. Annotating whole-slide images by histologists is complex and time-consuming, so it is important to make the most efficient use of the available data, both labeled and unlabeled. In this paper we propose a novel neural network method that simultaneously solves the problems of super-resolving histological images from 20× optical magnification to 40× and classifying image fragments into tissue types at 20× magnification. The use of a single encoder, together with the proposed neural network training scheme, achieves better results on both tasks compared to existing approaches. The PATH-DT-MSU WSS2v2 dataset, presented for the first time in this paper, was used for training and testing the method. On the test sample, an accuracy of 0.971 and a balanced accuracy of 0.916 were achieved in the classification task over 5 tissue types; for the super-resolution task, PSNR = 32.26 and SSIM = 0.89 were achieved. The source code of the proposed method is available at: https://github.com/Kukty/WSI_SR_CL.
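For reference, the PSNR figure quoted above follows the standard textbook definition, shown here as a minimal sketch:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher values mean smaller reconstruction error; a PSNR around 32 dB, as reported above, corresponds to a per-pixel RMSE of roughly 6 gray levels on an 8-bit scale.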


