Article

Radar Cross Section Near-Field to Far-Field Prediction for Isotropic-Point Scattering Target Based on Regression Estimation

1 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
2 Department of Computing, The Hong Kong Polytechnic University, 11 Yuk Choi Rd, Hung Hom, Hong Kong 999077, China
3 China North Industries Corp., Beijing 100053, China
4 Faculty of Electrical Engineering, Delft University of Technology, 2628 CN Delft, The Netherlands
* Author to whom correspondence should be addressed.
Sensors 2020, 20(21), 6023; https://doi.org/10.3390/s20216023
Submission received: 10 September 2020 / Revised: 16 October 2020 / Accepted: 21 October 2020 / Published: 23 October 2020
(This article belongs to the Special Issue Advances in Microwave and Millimeter Wave Radar Sensors)

Abstract

Radar cross section (RCS) near-field to far-field transformation (NFFFT) is a well-established methodology. Owing to testing-range constraints, measured data are mostly near-field. Existing methods employ electromagnetic theory to transform near-field data into the far-field radar cross section, which is time-consuming in data processing. This paper proposes a flexible framework, named Neural Networks Near-Field to Far-Field Transformation (NN-NFFFT). Unlike the conventional fixed-parameter models, the near-field to far-field RCS transformation is viewed as a nonlinear regression problem that can be solved by a fast and flexible neural network. The framework comprises three stages: near-field and far-field dataset generation, regression-estimator training, and far-field data prediction. In our framework, RCS prior information is incorporated into the near-field and far-field dataset, which is generated by a group of point-scattering targets. A lightweight neural network is then used as a regression estimator to predict the far-field RCS from the near-field RCS observation. For targets with a small RCS, the proposed method also requires less data-acquisition time. Numerical examples and extensive experiments demonstrate that the proposed method achieves comparable accuracy in less processing time. Moreover, the framework can employ prior information about the real measurement scenario to further improve performance.

1. Introduction

The radar cross section (RCS) of an object is a fictitious area that describes the intensity of the wave reflected back toward the radar [1]. The RCS definition requires that the target be located at an infinite distance, which ensures that a plane wave illuminates the target. In practice, the target and the measurement sensors are always a finite distance apart, so the incident wave is spherical, as shown in Figure 1a. To measure RCS with an acceptable error of 1 dB or less [1], the target must be at a distance greater than $2d^2/\lambda$, where $d$ is the linear size of the target under test (TUT) and $\lambda$ is the wavelength [2]. This is the far-field criterion. It implies that, as the frequency increases or the target grows, the far-field range becomes too large to permit direct RCS measurements.
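As a quick numerical illustration of the far-field criterion (a minimal sketch; the 0.6 m target size and the 10 GHz frequency are taken from the experiment in Section 4 and used here only as example values):

```python
# Far-field (Fraunhofer) criterion 2*d^2/lambda for an example target.
c = 3e8                 # speed of light (m/s)
f = 10e9                # frequency: 10 GHz, wavelength 3 cm
lam = c / f
d = 0.6                 # linear size of the target under test (m)

r_far = 2 * d ** 2 / lam
print(f"wavelength = {lam * 100:.1f} cm, far-field range = {r_far:.1f} m")
# -> 24.0 m, far beyond the 1 m indoor range used later in the paper.
```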
As shown in Figure 1c,d, if the measurement distance $r$ does not meet the far-field criterion in either the horizontal or the vertical direction, i.e., $r < 2w^2/\lambda$ and $r < 2h^2/\lambda$, the incident wave is spherical. If the measurement distance $r$ meets the far-field criterion in the vertical direction but not in the horizontal direction, i.e., $r > 2h^2/\lambda$ and $r < 2w^2/\lambda$, the phase displacement between point 3 and point 0 is small enough (less than $\pi/8$) [1]. Under this condition, the wave can be treated as a cylindrical wave.
If a TUT is measured in a range that does not meet the far-field criterion, the scattered field measured by the sensors is called the near-field. The RCS calculated from near-field data has an unacceptable error and needs to be transformed into the far-field RCS. This process is called the near-field to far-field transformation (NFFFT).
There are many common approaches to implementing NFFFT based on electromagnetic theory. Most of them are called "image-based" techniques because they require broadband measurements of the target. These "image-based" methods fall into two categories. The first kind evaluates inverse synthetic aperture radar (ISAR) images in the near-field and calculates the RCS directly, as in [3,4,5,6]. The accuracy of this kind depends on the imaging process, and measurement errors lead to corresponding errors in the reconstructed far-field RCS [3]. The second kind, based on the Hankel function, calculates the RCS without the image-reconstruction process but still needs a broadband measurement, as in [7,8,9,10]. However, this method introduces fluctuations in the angle domain [9]. While spatial filtering techniques may suppress the fluctuations, they introduce a significant loss of angular resolution [11]. All of the "image-based" techniques require a broadband measurement over a wide angle. Besides the "image-based" approach, NFFFT is also implemented by plane-wave expansion in [12,13]. In this kind of method, the exact shape of the object is needed, and the accuracy of the algorithm depends on the sampling positions and the discretization. The methods mentioned above reveal the relationship between the near-field and the far-field, but they all require a long time for data acquisition and signal processing.
In this paper, firstly, point-scattering targets based on the Swerling Case I Model are used to simulate near-field and far-field data. The training samples can be easily acquired by measurement or simulation. Secondly, a new framework, called Neural Networks Near-Field to Far-Field Transformation (NN-NFFFT), is proposed to predict the far-field RCS of isotropic point-scattering targets. The proposed method views NFFFT as a regression problem, and the electromagnetic scattering characteristics are integrated into the dataset as prior knowledge. In contrast, the conventional methods are fixed-parameter models.
Specifically, NN-NFFFT introduces a lightweight neural network to predict the far-field RCS from single-frequency near-field RCS data. Compared with the traditional approach, the proposed NN-NFFFT runs more than ten times faster than the "image-based" NFFFT while maintaining comparable accuracy. The well-trained estimator can be used as a real-time mapping function between the near-field RCS and the far-field RCS. This process is shown in the simulation section.
The results show that the far-field RCS can be predicted efficiently by a neural network with prior information. The complex calculations are confined to the training process, which also determines the accuracy of the framework. By using a large amount of data, the framework can be trained to meet the error requirement.
Furthermore, the antennas' radiation patterns and bistatic measurement configurations can be fully incorporated into the training process with no additional complication. The sampling limitation introduced in [10] can also be ignored, owing to the choice of training samples. The experimental results show that, for a target with a small RCS, the method maintains good accuracy with less operation time.

2. Theory

2.1. The Relationship between Near-Field and Far-Field

RCS is defined as
$$\sigma = 4\pi \lim_{R \to \infty} R^2 \frac{|E_s|^2}{|E_i|^2} \quad (1)$$
where $E_s$ is a component of the scattered electric field at an observation point, and $E_i$ is a component of the incident electric field at the target position.
To implement NFFFT, the target must satisfy the scalar SAR "reflectivity density" model, i.e., the reflectivity distribution $\rho(\mathbf{r}')$ does not change with the look angle [6].
When a spherical wave illuminates a target, the whole measurement is a 3-D scene, shown in Figure 1b by the blue and red lines. If the target is flat enough, as mentioned above, it is approximately illuminated by a cylindrical wave. This situation is common in RCS measurement because both aircraft and vehicles are flat. In this condition, the measurement sensor's trace is shown in Figure 1b by the red line, and the target model can be described in the x-y plane, as shown in Figure 2.
As shown in Figure 2a, the target is illuminated by a plane wave under the far-field condition. The monostatic scattered field at the receiver sensor is given by [5]
$$S_{FF}(\hat{r}, k) = C_{FF} \int_V \rho(\mathbf{r}')\, e^{-j2k\left(|\mathbf{r}| - \hat{r} \cdot \mathbf{r}'\right)}\, d\mathbf{r}' \quad (2)$$
where $\rho(\mathbf{r}')$ is the TUT's reflectivity distribution, $\hat{r}$ is a unit vector representing the look angle, $k$ is the wavenumber, equal to $2\pi f/c$, $\mathbf{r}$ is the measurement position, and $\mathbf{r}'$ indicates the location of a point on the target. $C_{FF}$ is a constant that depends on the measurement system's parameters and can be removed by calibration.
In the two-dimensional near-field scene shown in Figure 2b, the scalar Green's function in free space can be expressed as $G_0(\mathbf{r}, \mathbf{r}') = e^{-jk|\mathbf{r}-\mathbf{r}'|}/(4\pi|\mathbf{r}-\mathbf{r}'|)$. Every scattering point on the target can be regarded as a reradiating source; thus, the two-way Green's function is $G(\mathbf{r}, \mathbf{r}') = e^{-j2k|\mathbf{r}-\mathbf{r}'|}/(4\pi|\mathbf{r}-\mathbf{r}'|)^2$ [8]. In this condition, the monostatic scattered field at the receiver point is given by
$$S_{NF}(r, k) = C_{NF} \int_V \rho(\mathbf{r}')\, \frac{e^{-j2k|\mathbf{r}-\mathbf{r}'|}}{|\mathbf{r}-\mathbf{r}'|^2}\, d\mathbf{r}' \quad (3)$$
where $C_{NF}$ is a constant that depends on the parameters of the system and can be removed by calibration, $\mathbf{r}$ is the measurement position, and $\mathbf{r}'$ indicates the location of a point on the target. Equation (3) can be rearranged into a form that includes the Green's function of 2-D free space [8]:
$$S_{NF}(r, k) = \int_V \frac{\sqrt{2k}\,\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^{3/2}}\, \frac{e^{-j2k|\mathbf{r}-\mathbf{r}'|}}{\sqrt{2k|\mathbf{r}-\mathbf{r}'|}}\, d\mathbf{r}' \quad (4)$$
As shown in Figure 2b, the 2-D condition is considered. According to the Hankel addition theorem, the Green's function of 2-D free space can be expressed as
$$\frac{e^{-j2k|\mathbf{r}-\mathbf{r}'|}}{\sqrt{2k|\mathbf{r}-\mathbf{r}'|}} = H_0^{(2)}\left(2k|\mathbf{r}-\mathbf{r}'|\right) = \sum_{n=-\infty}^{\infty} H_n^{(2)}(2kr)\, J_n(2kr')\, e^{jn(\varphi_0 - \varphi')} \quad (5)$$
where $H_n^{(2)}$ is the Hankel function of the second kind and $J_n$ is the Bessel function. $\varphi$ denotes the far-field look angle, $\varphi_0$ the near-field look angle, and $\varphi'$ the angle of a point on the TUT. Thus, Equation (3) becomes
$$S_{NF}(r, k) = \int_V \frac{\sqrt{2k}\,\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^{3/2}} \sum_{n=-\infty}^{\infty} H_n^{(2)}(2kr)\, J_n(2kr')\, e^{jn(\varphi_0 - \varphi')}\, d\mathbf{r}' \quad (6)$$
Exchanging the order of integration and summation, Equation (6) becomes
$$S_{NF}(r, k) = \sum_{n=-\infty}^{\infty} \left\{ H_n^{(2)}(2kr)\, e^{jn\varphi_0} \left[ \int_V \frac{\sqrt{2k}\,\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^{3/2}}\, J_n(2kr')\, e^{-jn\varphi'}\, d\mathbf{r}' \right] \right\} \quad (7)$$
In the far-field condition, the measurement distance $r$ in Equation (7) goes to infinity, i.e., $r \to \infty$. Then Equation (7) turns into the far-field monostatic scattered field, given by
$$S_{FF}(r, k) = \lim_{r\to\infty} S_{NF}(r, k) = \lim_{r\to\infty} \sum_{n=-\infty}^{\infty} \left\{ H_n^{(2)}(2kr)\, e^{jn\varphi} \left[ \int_V \frac{\sqrt{2k}\,\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^{3/2}}\, J_n(2kr')\, e^{-jn\varphi'}\, d\mathbf{r}' \right] \right\} \quad (8)$$
The relationship between the TUT and the measurement sensor changes from Figure 2b to Figure 2a, and the representation of the look angle changes from $\varphi_0$ to $\varphi$. Because $|\mathbf{r}| \gg |\mathbf{r}'_{\max}|$ in the far-field condition ($\mathbf{r}'_{\max}$ represents the maximum extent of the TUT), $|\mathbf{r}-\mathbf{r}'|$ can be approximated as $|\mathbf{r}|$, or simply $r$, with negligible error. The following can be derived:
$$S_{FF}(r, k) = \frac{\sqrt{2k}}{r^{3/2}} \sum_{n=-\infty}^{\infty} \left\{ H_n^{(2)}(2kr)\, e^{jn\varphi} \left[ \int_V \rho(\mathbf{r}')\, J_n(2kr')\, e^{-jn\varphi'}\, d\mathbf{r}' \right] \right\} \quad (9)$$
Note that $\int_V \rho(\mathbf{r}')\, J_n(2kr')\, e^{-jn\varphi'}\, d\mathbf{r}' = S_n^{2k}$ is the generalized Fourier series of the target image, an inherent scattering characteristic of the target that does not depend on the measurement distance.
Equation (9) can then be written as
$$S_{FF}(r, k) = \frac{\sqrt{2k}}{r^{3/2}} \sum_{n=-\infty}^{\infty} S_n^{2k}\, H_n^{(2)}(2kr)\, e^{jn\varphi} \quad (10)$$
In the far-field range, i.e., $2kr \to \infty$, using the large-argument approximation of the Hankel function [14], Equation (10) becomes
$$S_{FF}(r, k) \approx \frac{1}{r^2} \sqrt{\frac{2}{\pi}}\, e^{-j\left(2kr - \frac{\pi}{4}\right)} \sum_{n=-\infty}^{\infty} S_n^{2k}\, e^{jn\left(\varphi + \frac{\pi}{2}\right)} \quad (11)$$
As mentioned before, the generalized Fourier series of the target image, $S_n^{2k}$, does not depend on the measurement distance, so it can also be obtained from near-field data.
In [7], the weighted near-field data $U_{NF}$ are created by a Fourier transform:
$$U_{NF} = \mathrm{FFT}\left( |\mathbf{r}-\mathbf{r}'|^{3/2}\, \mathrm{IFFT}(S_{NF}) \right) = \int_V \sqrt{2k}\, \rho(\mathbf{r}')\, \frac{e^{-j2k|\mathbf{r}-\mathbf{r}'|}}{\sqrt{2k|\mathbf{r}-\mathbf{r}'|}}\, d\mathbf{r}' \quad (12)$$
Combining Equations (5) and (12) with the definition of $S_n^{2k}$, $U_{NF}$ can be written as
$$U_{NF}(r, k) = \sqrt{2k} \sum_{n=-\infty}^{\infty} S_n^{2k}\, H_n^{(2)}(2kr)\, e^{jn\varphi_0} \quad (13)$$
$S_n^{2k}$ can be acquired by applying the Fourier transform to the near-field data $U_{NF}(r,k)$, given by
$$S_n^{2k} = \frac{1}{\sqrt{2k}}\, \frac{\int U_{NF}(r, k)\, e^{-jn\varphi_0}\, d\varphi_0}{H_n^{(2)}(2kr)} \quad (14)$$
Ignoring the constant coefficients, which can be determined by calibration, and combining Equations (11) and (14), the relationship between the near-field data $U_{NF}(r,k)$ and the far-field data $S_{FF}(r,k)$ can be derived as
$$S_{FF}(r, k) = \int U_{NF}(r, k) \sum_{n=-\infty}^{\infty} \frac{j^n\, e^{jn(\varphi - \varphi_0)}}{H_n^{(2)}(2kr)}\, d\varphi_0 = U_{NF}(r, k) \ast \omega(\varphi_0) \quad (15)$$
where $\omega(\varphi_0) = \sum_{n=-N_0}^{N_0} j^n\, e^{jn\varphi_0} / H_n^{(2)}(2kr)$ represents the NFFFT process.
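The truncated weighting function can be evaluated numerically as sketched below (Python with SciPy; the function name and arguments are illustrative, calibration constants are omitted as noted above, and the truncation rule follows the one quoted later in Section 3.2):

```python
import numpy as np
from scipy.special import hankel2

def nffft_kernel(phi0, k, r, radius):
    """Truncated weighting function of Eq. (15):
    w(phi0) = sum_{|n| <= n_max} j^n exp(j*n*phi0) / H_n^(2)(2*k*r)."""
    n_max = int(np.ceil(2 * k * radius)) + 10        # n_max > k*D + 10, with D = 2*radius
    n = np.arange(-n_max, n_max + 1)
    # For integer orders, H_{-n}^(2)(x) = (-1)^n H_n^(2)(x), so only |n| is evaluated.
    h = hankel2(np.abs(n), 2 * k * r) * np.where(n < 0, (-1.0) ** np.abs(n), 1.0)
    terms = (1j ** n)[None, :] * np.exp(1j * np.outer(phi0, n)) / h[None, :]
    return terms.sum(axis=1)
```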
According to [6], the RCS of the TUT is defined as
$$\sigma(r, k) = C\, \left| S_{FF}(r, k) \right|^2 \quad (16)$$
where C is a constant determined by calibration. Thus, the near-field RCS is defined as
$$\sigma_{NF}(r, k) = C\, \left| S_{NF}(r, k) \right|^2 \quad (17)$$
Our framework focuses on predicting the far-field RCS from the near-field RCS. Unlike the conventional approach, this process is nonlinear.

2.2. Regression Estimation

Equations (12) and (15) reveal the theoretical relationship between near-field and far-field measurement data. This process is linear if multiple reflections are ignored [12]. However, combining the NFFFT process with the RCS definitions in Equations (16) and (17), the radar cross section near-field to far-field transformation becomes nonlinear.
In our framework, the NFFFT problem is defined as a nonlinear regression problem. In the Bayesian inference framework [15], the maximum a posteriori (MAP) estimate finds the optimal RCS $\sigma^{\ast}$:
$$\sigma^{\ast} = \arg\max_{\sigma}\; p(\sigma \mid \sigma_{NF}, \xi)\, p(\sigma) \quad (18)$$
where $\sigma$ is the true far-field RCS and $\sigma^{\ast}$ is the optimal estimate. $\xi$ denotes the framework's parameters to be optimized, and $p(\sigma)$ is the prior distribution of $\sigma$. Equation (18) can be rewritten in logarithmic form to find the most suitable parameters:
$$\sigma^{\ast} = \arg\max_{\sigma}\; \log p(\sigma \mid \sigma_{NF}, \xi) + \log p(\sigma) \quad (19)$$
Equation (19) can be further transformed to represent the loss function:
$$\sigma^{\ast} = \arg\min_{\sigma}\; \frac{1}{2} \left\| \sigma_{NF}\, \xi - \sigma \right\|^2 + \lambda\, \phi(\sigma) \quad (20)$$
where $\sigma^{\ast}$ can be obtained by minimizing $\frac{1}{2}\left\|\sigma_{NF}\,\xi - \sigma\right\|^2$. The regularization term $\phi(\sigma)$ restricts the model's capacity on the dataset so that the trained model generalizes better to new data; the larger the value of the parameter $\lambda$, the greater the penalty on the model.
Based on Equation (20), a regression estimator can be used to approximate the mapping between the near-field RCS and the far-field RCS. Approximating the whole process requires a large number of data pairs; this step is the dataset generation. The data pairs are then used to train a specific estimator model with a nonlinear regression approach. If the trained estimator meets the error requirements, it can be used as a prediction system for the NFFFT. This process is shown in Figure 3, and a minimal sketch of the three stages is given below.
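The sketch below mimics the three-stage flow with scikit-learn's MLPRegressor standing in for the regression estimator. The random placeholder arrays and variable names are illustrative only; the actual dataset and network used in this paper are described in Sections 2.3.2 and 2.3.3.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stage 1 (placeholder): near-field/far-field RCS curve pairs over 121 angles.
# In the real framework these come from the point-scattering simulator.
rng = np.random.default_rng(0)
X_nf_train = rng.random((366, 121))     # near-field RCS curves
y_ff_train = rng.random((366, 121))     # matching far-field RCS curves

# Stage 2: train the regression estimator (sigmoid activations, L2 penalty).
estimator = MLPRegressor(hidden_layer_sizes=(256, 512, 256),
                         activation='logistic', alpha=0.05, max_iter=5000)
estimator.fit(X_nf_train, y_ff_train)

# Stage 3: predict the far-field RCS for a new near-field observation.
y_ff_pred = estimator.predict(rng.random((1, 121)))
```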

2.3. Neural Network

In this paper, the regression estimator is implemented by a supervised neural network (NN).

2.3.1. Fundamental Theory

A feed-forward artificial neural network with a sigmoid or Gaussian activation function is a universal estimator [16]. It has already been proposed as an efficient tool for antenna radiation pattern NFFFT in [17].
In our framework, the L2 parameter norm penalty is used in Equation (20). It is defined as
$$\phi(\sigma) = \frac{1}{2} \left\| \xi \right\|_2^2 \quad (21)$$
L2 regularization causes the NN to “perceive” the input data as having higher variance, which makes it shrink the weights on features whose covariance with the output target is low compared to this added variance [18].
A set of random isotropic scattering-center distributions is generated to calculate the near-field RCS over the measurement angles and the far-field RCS at the same angles. Before being fed to the neural network, the near-field data and the corresponding far-field data are both normalized by the maximum value of the input near-field RCS to form a data pair. These pairs are then used to train the neural network. During training, according to Equation (20), the main aim is to optimize the NN model's weight parameters to obtain an appropriate transformation network. This process is shown in Figure 4a.
During implementation, the input near-field data are first normalized and then passed through the NN to predict the far-field RCS. As a last step before output, the same value is used to denormalize the predicted RCS data. The structure of the NN-NFFFT framework is shown in Figure 4b and sketched in code below.
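A minimal sketch of this normalize-predict-denormalize wrapper, assuming a trained estimator object with a predict method (as in the earlier sketch); the function name is an illustration, not part of the paper:

```python
import numpy as np

def predict_far_field(sigma_nf, estimator):
    """Apply the trained estimator with the normalization of Figure 4b.
    sigma_nf: near-field RCS samples over the test angles (linear units, m^2)."""
    scale = np.max(sigma_nf)                 # normalization constant
    x = (sigma_nf / scale)[None, :]          # normalize and add a batch dimension
    sigma_ff_norm = estimator.predict(x)[0]  # NN regression estimate
    return sigma_ff_norm * scale             # denormalize the prediction
```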
Besides a plain neural network, convolutional neural networks [19,20], deep residual networks [21,22], generative adversarial networks [23], or other deep learning models can also be used as the regression estimator.

2.3.2. Neural Network Structure

The proposed NN has five layers. Three hidden layers are used: two layers of 256 neurons, connected to the input and output layers respectively, and one middle layer of 512 neurons. The sigmoid function is chosen as the activation function, and the regularization parameter is λ = 0.05. A sketch of this structure is given below.
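A plain-NumPy sketch of the structure follows. The 121-sample input and output size matches the angle grid used in Section 3.1 (an assumption at this point), the weights are random placeholders, and the snippet only illustrates the layer widths, the sigmoid activations, and the L2 penalty of Equation (21), not the trained network itself.

```python
import numpy as np

sizes = [121, 256, 512, 256, 121]         # input, three hidden layers, output
rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * 0.01 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Forward pass: sigmoid activations on the hidden layers, linear output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = sigmoid(x @ W + b)
    return x @ weights[-1] + biases[-1]

# L2 penalty of Eq. (21), scaled by lambda = 0.05.
l2_penalty = 0.05 * 0.5 * sum(np.sum(W ** 2) for W in weights)
```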

2.3.3. Priori Information

A large number of prior data pairs are needed to train the NN model. The data must have a probability distribution similar to that of the TUT.
As mentioned in [24], multiple-scattering-center targets allow the monostatic scattered field to be defined directly at any point. Once the scattering-center distribution is known, the near-field and far-field data can be simulated as prior data pairs. Moreover, the training dataset can also use measured data, which are easily accessible in an RCS test field.
It is of vital importance to find an appropriate modeling method. The most closely related modeling work is in fluctuating-target detection [25]. Radar signal fluctuation is mainly caused by changes in look angle. Marcum and Swerling presented several fluctuation models to investigate the signal fluctuation distribution, which are summarized in [25]. Those models cover most radar detection targets.
Modeling in this paper is inspired by the Swerling Case I Model [26], which is commonly used to represent small jet aircraft in the front view. The Swerling Case I Model is defined as a target that can be represented as several independently fluctuating reflectors of approximately equal echoing area, even if the number of reflectors is as small as four or five.
A group of centrosymmetric point-scattering targets is used as prior information. Each target consists of three to five equal scattering centers evenly located on the x-axis, with a maximum radius R. The monostatic test antenna is located at a distance L from the origin. Figure 5a shows an example of our model conforming to the Swerling Case I Model; a sketch of the corresponding data-pair simulation is given below.
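The following sketch shows how one near-field/far-field data pair could be simulated for this model (calibration constants are dropped; the angle grid and distances follow the settings given later in Sections 3.1 and 4.1, and the function name is illustrative):

```python
import numpy as np

def rcs_pair(radius, n_points, L, k, angles):
    """Simulate one (near-field, far-field) RCS pair for a target of n_points
    equal isotropic scatterers evenly spaced on the x-axis (Figure 5a)."""
    x = np.linspace(-radius, radius, n_points)          # scatterer positions
    sigma_nf, sigma_ff = [], []
    for a in angles:
        ant = L * np.array([np.cos(a), np.sin(a)])      # monostatic antenna position
        d = np.hypot(ant[0] - x, ant[1])                # antenna-to-scatterer ranges
        s_nf = np.sum(np.exp(-2j * k * d) / d ** 2)     # spherical-wave sum, cf. Eq. (3)
        s_ff = np.sum(np.exp(2j * k * x * np.cos(a)))   # plane-wave sum, cf. Eq. (2)
        sigma_nf.append(abs(s_nf) ** 2)
        sigma_ff.append(abs(s_ff) ** 2)
    return np.array(sigma_nf), np.array(sigma_ff)

# Example: 5-point target, R = 5 wavelengths, antenna at L = 33 wavelengths.
lam = 1.0
k = 2 * np.pi / lam
nf, ff = rcs_pair(5 * lam, 5, 33 * lam, k, np.deg2rad(np.linspace(-6, 6, 121)))
```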
Furthermore, the experiments demonstrate that, based on this model, the neural network is a promising method to achieve NFFFT. For a specific scatterer, the far-field RCS can be predicted efficiently by a neural network with prior information.

2.3.4. Evaluation

In order to assess the error in the results, the root-mean-square error (RMSE) is used, as in [12]. The RMSE is defined as
$$\mathrm{RMSE} = \sqrt{ \frac{1}{N_\theta} \sum_{n=1}^{N_\theta} \left| \sigma_{meas}(n) - \sigma_{Extr.}(n) \right|^2 } \quad (22)$$
where $\sigma_{meas}(n)$ (in m$^2$) is the RCS output by the neural network, $\sigma_{Extr.}(n)$ (in m$^2$) is the far-field RCS defined by the target, and $n$ indexes the sampling points in the angle domain. The unit of the RMSE is m$^2$, and it can be converted into dBsm. The RMSE represents the absolute error over the whole test angle range.
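A one-line implementation of Equation (22), converted to dBsm (a sketch; inputs are linear RCS values in m^2):

```python
import numpy as np

def rmse_dbsm(sigma_meas, sigma_extr):
    """RMSE of Eq. (22) in m^2, expressed in dBsm."""
    rmse = np.sqrt(np.mean(np.abs(sigma_meas - sigma_extr) ** 2))
    return 10 * np.log10(rmse)
```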

3. Numerical Analysis

3.1. Setting

In the implementation of the framework, the test angle θ ranges from −6° to 6° with 121 sampling points. This takes into account the cost of data acquisition and the limitations of "image-based" NFFFT on the test angle [10]. The radius R of the target is limited to between 3λ and 8λ. In this condition, the far-field range is 72λ to 512λ, while the measurement distance L is 33λ.
Considering the balance between accuracy and training speed, the training samples are selected as follows.
Firstly, we create three different datasets with 3 × 66, 3 × 122, and 3 × 244 samples, respectively. The more samples we use, the closer the test error is to the training error.
Then, 45 additional samples are randomly selected within the range of R to test the trained neural network framework. They are independent of the training samples and do not participate in the training process. There are 15 samples each for three, four, and five scattering centers. The RCS of each point is 0 dBsm (1 m²). They are used to verify the error on new data and are sufficient to demonstrate the framework's applicability.
The training runs for 5000 iterations. The error vs. iteration curves of the models are shown in Figure 6a–c. After 5000 iterations, the training-set error is lower than −16 dB, and the difference in training error between datasets is not significant. The testing error is shown in Figure 6d–f. The training times of the three datasets are 180, 275 and 380 s. Balancing accuracy against training speed, we selected the 366-sample training set. The samples are randomly picked under the radius limitation and are composed of three equal parts: 122 samples each for three, four, and five scattering centers.

3.2. Comparison

In order to compare with the "image-based" NFFFT in [10], a three-point scattering target that meets the requirements of Figure 5a is used to define the wideband monostatic scattered field at the same angles. The center frequency is the same as in the NN-NFFFT framework, and the relative bandwidth is 40%. The maximum radius R is 5.2λ.
As mentioned in [10], the accuracy of the "image-based" NFFFT depends on the number of sample points in the frequency domain and the truncation number in Equation (15). The truncation number $n_{max}$ is related to the diameter of the target D (D = 2R) and is restricted to be greater than $kD + 10$ [9], where $k$ is the wavenumber. If the measurement distance and the test angle are fixed, the error and runtime of the NFFFT depend only on the number of frequency sampling points.
In this paper, eight sets of data with different numbers of frequency sampling points (401, 201, 101, 81, 51, 21, 11, 5) are generated. The wideband near-field data are used in Equation (15) to implement the "image-based" NFFFT for comparison with the NN-NFFFT framework.

3.3. Performance

Four of the NFFFT results are shown in Figure 7. Three of them are output by the NN-NFFFT, while the last one, Figure 7d, is the result given by the "image-based" NFFFT with 401 frequency points. As can be seen, the NN successfully predicts the peaks and troughs of the far-field data, and the error does not increase for TUTs with different numbers of scattering centers.
The RMSE of the RCS output by the NN-NFFFT is shown in Figure 8a. The RMSE of the test samples is lower than −7 dBsm, compared with the 0 dBsm targets.
The RMSE of the above-mentioned "image-based" NFFFT is shown in Figure 8b, where the x-axis denotes the number of frequency points. The RMSE increases as the number of frequency points is reduced; only when more than 201 frequency points are used is the RMSE lower than −7 dBsm.

3.4. Run Time

As can be seen, when fewer than 201 frequency points are used, the RMSE of the "image-based" NFFFT and that of the NN-NFFFT are at the same level. In this condition, the operation time and RMSE of each framework are shown in Table 1. The simulation platform is an Intel(R) Core(TM) i7-6700 CPU @ 3.4 GHz (Intel, Santa Clara, CA, USA) with 16 GB RAM.
The RMSE-Operation Time relationship of different NFFFT results based on the same target is shown in Figure 9. It shows that the “image-based” NFFFT needs more time to achieve the same RMSE as NN-NFFFT.
The result shows that a well-trained NN allows for real-time operation while maintaining good accuracy. Compared with the traditional “image-based” method, the framework is an alternative way to predict far-field RCS from near-field data.

3.5. Flexibility

The NN-NFFFT framework has been shown to be an alternative approach for predicting far-field RCS from near-field data. It has some limitations in generalization because the training dataset is simple, being driven by a simple scattering model. However, the proposed framework is flexible in multiple situations: by adjusting the training samples, it can be applied to more scenarios.
The first consideration is that the scattering characteristics of the target and the test scenario should be taken into account as much as possible before training. If a new factor is encountered in the test scenario, it can be analyzed and incorporated into the training process to improve the result. This aspect is discussed in Section 4.2.
The second is that the framework can be applied to more scattering configurations as long as the training samples are sufficient. We further apply the proposed framework to asymmetric point-scattering targets, using two-point scatterers to show the result. For targets with more scattering centers, the steps are the same.
The model is shown in Figure 5b. The maximum radius R of the target region is 10λ. Two scattering points are randomly distributed in this region; the distance between them is greater than 3λ to meet the near-field test conditions, and there are no other restrictions. The other test conditions remain unchanged. In this condition, the TUT is asymmetric and flexibly distributed.
Based on the accuracy and the training speed, 2500 samples are chosen as the training dataset. The framework has the same structure and the same number of iterations. The training time increases to 1350 s due to the increase in samples.
Two of the test samples and their results, randomly selected, are given in Figure 10. The results show that, with a reasonable dataset, the framework can adapt to any two-point scatterer in the region that does not meet the far-field conditions.
Due to the above two factors, the framework has great flexibility, even if its generalization is limited due to the current training dataset.

4. Real Scene Experiment

4.1. Experiment Setup

In order to verify the framework on the actual scene, the experiment is carried out in an anechoic chamber using a vector network analyzer (VNA) Ceyear-3655L (China Electronics Technology Instruments Co., Qingdao, China). The experiment setup is shown in Figure 11. The measurement is carried out at 10 GHz, which has a wavelength of 3 cm. The measurement antennas are located at a distance of 1 m, which is 33 λ away from the turntable center.
The aperture of the test antenna is about 0.1 m. According to antenna radiation theory [27], the antenna's far-field zone begins at about 0.7 m; in this zone, the radiated power distribution follows the antenna pattern. The 3 dB beamwidth of the antenna is 20°. Thus, the illuminated aperture at 1 m from the probe is 0.72 m, while the maximum length of the target at 10 GHz is 0.6 m. This means that the electromagnetic wave covers the target completely. The setup is the same as in the simulation.
Two different metal spheres are used as targets in the experiment. According to the Mie series [1], the RCS values of the metal spheres are −26 dBsm and −31 dBsm, as shown in Table 2. The distance between the spheres is more than 300 mm. In this condition, each metal sphere can be treated as a point-scattering target, and the multiple reflections between them are much lower than the receiver's noise floor, so their effect is negligible.

4.2. Result and Discussion

Six groups of experiment samples are randomly selected, and each group uses the two different metal spheres in Table 2. Two of the measurement results and the RCS output predicted by the framework are shown in Figure 12 and Figure 13. With a 50 Hz IF bandwidth, the absolute measurement error is lower than −38 dBsm. The NN predicts the peaks and troughs of the far-field. The measurement RMSE and prediction RMSE of all groups are shown in Figure 14. The RMSE of the predicted far-field data remains lower than −34 dBsm, and drops below −36 dBsm as the target's RCS becomes larger.
As shown in Figure 12 and Figure 13, the RCS predicted by the NN jitters compared with the reference result. This is because of the noise introduced by the test equipment. The signal-to-noise ratio (SNR) can be approximately calculated by
$$\mathrm{SNR} = \frac{\text{signal power}}{\text{noise power}} = 10 \log_{10} \frac{S_0^2}{(S - S_0)^2} \quad (23)$$
where $S_0$ is the absolute value of the signal (obtained by simulation) and $S$ is the actual value measured by the sensors. The SNR of the measurement result is 14.5 dB for the 25 mm sphere and 17 dB for the 45 mm sphere. The SNR of the NN output is 10.5 dB for the 25 mm sphere and 13.1 dB for the 45 mm sphere. The SNR deteriorates by about 4 dB, and the added noise mainly comes from the framework.
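Equation (23) applied to a measured curve can be sketched as follows (summing the ratio over the angle samples is an assumption of this sketch; the paper does not state how the ratio is aggregated):

```python
import numpy as np

def snr_db(s_measured, s_reference):
    """SNR of Eq. (23): reference (noise-free) power over residual power, in dB."""
    noise = s_measured - s_reference
    return 10 * np.log10(np.sum(s_reference ** 2) / np.sum(noise ** 2))
```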
In order to evaluate the impact of noise, the VNA's noise needs to be analyzed. The received power can be calculated by the radar equation
$$P_r = \frac{P_t\, G_t\, \sigma\, G_r\, \lambda^2}{(4\pi)^3\, r^4\, L_t\, L_r} \quad (24)$$
where $P_t$ is the transmitted power, equal to −30 dBm, and $G_t$ and $G_r$ are the antenna gains, around 13 dBi. The other parameters are given in the previous sections. The losses $L_t$ and $L_r$ are unknown. Thus, the received power is about −130 to −140 dBm.
The receiver sensitivity is about −150 dBm. It can be calculated as $kTBF$, where $k$ is the Boltzmann constant, $T$ is the temperature in Kelvin (300 K), $B$ is the IF bandwidth (50 Hz), and $F$ is the receiver noise figure (6 dB). The resulting SNR of the measurement is around 10 to 20 dB, consistent with the SNR calculated by Equation (23). A minimal sketch of this sensitivity estimate follows.
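A sketch of the kTBF sensitivity estimate with the values quoted above:

```python
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant (J/K)
T = 300.0                 # temperature (K)
B = 50.0                  # IF bandwidth (Hz)
F_dB = 6.0                # receiver noise figure (dB)

noise_floor_dBm = 10 * np.log10(k_B * T * B / 1e-3)   # kTB in dBm
sensitivity_dBm = noise_floor_dBm + F_dB
print(f"kTB = {noise_floor_dBm:.1f} dBm, sensitivity = {sensitivity_dBm:.1f} dBm")
# -> roughly -157 dBm + 6 dB = -151 dBm, consistent with the -150 dBm quoted above.
```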
This is the reason why fluctuations exist in the results: the noise was not considered in the training process, so the trained NN treated noise as useful signal when processing the measurement data.
The noise needs to be considered in the training process to reduce its impact on the output results. The NN is a flexible denoising framework in signal processing [28], as long as the noise is represented in the training dataset. The noise can be suppressed simply by adding noisy signals to the training dataset: Gaussian white noise is added to the near-field data, and the noisy near-field data together with the noise-free far-field data constitute new data pairs, which are added to the training dataset. The SNR of the near-field data is 15 dB. The number of training samples is increased to 1344, and the training time grows to 566 s. A sketch of this augmentation step is given below.
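A sketch of the augmentation step (whether the noise is added to linear or logarithmic RCS values is not stated in the paper, so adding it to the linear near-field curve is an assumption here):

```python
import numpy as np

def add_noise(sigma_nf, snr_db=15.0, rng=None):
    """Add white Gaussian noise to a near-field RCS curve at the given SNR (dB).
    The noisy curve is paired with the original noise-free far-field curve to
    form an additional training pair."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(sigma_nf ** 2)
    noise_power = signal_power / 10 ** (snr_db / 10)
    return sigma_nf + rng.normal(0.0, np.sqrt(noise_power), sigma_nf.shape)
```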
With this modified training dataset, the framework is applied to the same measured near-field data. The result is shown in Figure 15.
The improved framework successfully predicts the far-field peaks and troughs, and the RMSE of the output data is reduced to the same magnitude as that of the measurement data. The SNR of the result is 14.42 dB for the 25 mm sphere and 17.4 dB for the 45 mm sphere, showing that the SNR improves and becomes equal to that of the input.
The result proves that the NN-NFFFT framework is highly adaptable. The accuracy of the framework mainly depends on the selection of the training dataset. If the various situations encountered in practical applications are considered during training, the framework will also be powerful in practice.
Furthermore, the measured data are single-frequency scattering data, so the "image-based" NFFFT cannot be applied; a broadband measurement would take a long time for data acquisition. In the above experiment, the RCS of the TUT is −26 dBsm and −31 dBsm, which is very small and requires a relatively small IF bandwidth in the VNA. Thus, the IF bandwidth is 50 Hz, the number of test points in the angle domain is 121, and the speed of the turntable is 1 degree per second. The data used in the above framework can be acquired in about 12 s (12° ÷ 1°/s). Correspondingly, a broadband measurement with 101 points in the frequency domain would take around 244.5 s (101 points ÷ 50 Hz × 121 angles). A single-frequency measurement is therefore much more efficient in data acquisition. The experiment shows that the framework has potential for RCS NFFFT; the timing arithmetic is sketched below.
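The timing arithmetic quoted above, as a minimal sketch:

```python
# Data-acquisition time for the two measurement strategies discussed above.
angle_points = 121
turntable_speed = 1.0        # degrees per second
angle_span = 12.0            # -6 deg to +6 deg

single_freq_time = angle_span / turntable_speed               # ~12 s
freq_points, if_bandwidth = 101, 50.0                         # broadband case
broadband_time = freq_points / if_bandwidth * angle_points    # ~244 s
print(single_freq_time, broadband_time)
```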

5. Conclusions

In this paper, a flexible Neural Networks Near-Field to Far-Field Transformation (NN-NFFFT) framework has been proposed to predict far-field RCS from near-field RCS. The framework contains three major steps. The first is dataset generation: targets in the dataset are chosen according to the actual demand, and the measured scatterer should follow the same distribution. In our framework, point-scattering targets conforming to the Swerling Case I Model are used to simulate the dataset. The second is training the regression estimator; the accuracy of the framework depends on this process, and the time consumption is mainly concentrated here. The last is the prediction of the far-field RCS. The results prove that NN-NFFFT is a promising method for NFFFT: for a specific TUT, the far-field RCS can be predicted efficiently by a neural network with appropriate prior information. Moreover, the framework increases operation speed while maintaining acceptable accuracy compared with the widely used "image-based" NFFFT method. Measured near-field data have also been used in the framework; the results show that the framework is scalable and can further improve performance by employing prior information about the measurement scenario. Furthermore, during actual data acquisition, the method maintains good accuracy with less time.

Author Contributions

All authors contributed substantially to this work. Y.L., W.H., W.Z. and L.L. conceived and designed the framework in this work. W.H. administrated the project and funded it. Y.L., J.S. and B.X. performed the experiments. J.S. and B.X. processed the data. Y.L. wrote the original draft. W.Z. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 61527805, 61731001, and 41775030, and the 111 Project of China, grant number B14010.

Acknowledgments

The authors would like to thank the China Electronics Technology Group Corporation No. 41 Institute for providing the testing field.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Knott, E.F. Radar Cross Section Measurements; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  2. Ruck, G.T.; Barrick, D.E.; Stuart, W.D.; Krichbaum, C.K. Radar Cross Section Handbook; Plenum Press: New York, NY, USA, 1970; Volume 1–2. [Google Scholar]
  3. Vaupel, T.; Eibert, T.F. Comparison and Application of Near-Field ISAR Imaging Techniques for Far-Field Radar Cross Section Determination. IEEE Trans. Antennas Propag. 2006, 54, 144–151. [Google Scholar] [CrossRef]
  4. Osipov, A.; Kobayashi, H.; Suzuki, H. An Improved Image-Based Circular Near-Field-to-Far-Field Transformation. IEEE Trans. Antennas Propag. 2013, 61, 989–993. [Google Scholar] [CrossRef]
  5. Nicholson, K.J.; Wang, C.H. Improved Near-Field Radar Cross-Section Measurement Technique. IEEE Antennas Wirel. Propag. Lett. 2009, 8, 1103–1106. [Google Scholar] [CrossRef]
  6. Broquetas, A.; Palau, J.; Jofre, L.; Cardama, A. Spherical wave near-field imaging and radar cross-section measurement. IEEE Trans. Antennas Propag. 1998, 46, 730–735. [Google Scholar] [CrossRef] [Green Version]
  7. LaHaie, I.J. An improved version of the circular near field-to-far field transformation (CNFFFT). In Proceedings of the 27th Annual Meeting of the Antenna Measurement Techniques Association (AMTA ‘05), Newport, RI, USA, 30 October–4 November 2005; pp. 196–201. [Google Scholar]
  8. Li, N.; Hu, C.; Li, Y.; Zhang, L. A new algorithm about transforming from near-field to far-field of radar target scattering. In Proceedings of the 2008 8th International Symposium on Antennas, Propagation and EM Theory, Kunming, China, 2–5 November 2008; pp. 661–663. [Google Scholar]
  9. Lahaie, I.J. Overview of an Image-Based Technique for Predicting Far-Field Radar Cross Section from Near-Field Measurements. IEEE Antennas Propag. Mag. 2003, 45, 159–169. [Google Scholar] [CrossRef]
  10. Rice, S.A.; LaHaie, I.J. A partial rotation formulation of the circular near-field-to-far-field transformation (CNFFFT). IEEE Antennas Propag. Mag. 2007, 49, 209–214. [Google Scholar] [CrossRef]
  11. Wei, G.; Yuan, W.; Gao, F.; Cui, X. An accurate monostatic RCS measurement method based on the extrapolation technique. In Proceedings of the 2017 IEEE 9th International Conference on Communication Software and Networks (ICCSN), Guangzhou, China, 6–8 May 2017; pp. 784–788. [Google Scholar]
  12. Schnattinger, G.; Mauermayer, R.A.M.; Eibert, T.F. Monostatic Radar Cross Section Near-Field Far-Field Transformations by Multilevel Plane-Wave Decomposition. IEEE Trans. Antennas Propag. 2014, 62, 4259–4268. [Google Scholar] [CrossRef]
  13. Omi, S.; Uno, T.; Arima, T.; Fujii, T.; Kushiyama, Y. Near-Field Far-Field Transformation Utilizing 2-D Plane-Wave Expansion for Monostatic RCS Extrapolation. IEEE Antennas Wirel. Propag. Lett. 2016, 15, 1971–1974. [Google Scholar] [CrossRef]
  14. Jin, J.-M. Theory and Computation of Electromagnetic Fields; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  15. Seber, G.A.W.; Wild, C.J. Nonlinear Regression; Wiley-Interscience: Hoboken, NJ, USA, 2003; Volume 62, p. 63. [Google Scholar]
  16. Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1994. [Google Scholar]
  17. Ayestarán, R.G.; Las-Heras, F. Near Field to Far Field Transformation Using Neural Networks and Source Reconstruction. J. Electromagn. Waves Appl. 2012, 20, 2201–2213. [Google Scholar] [CrossRef]
  18. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  19. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a Deep Convolutional Network for Image Super-Resolution. In Proceedings of the 13th European Conference on Computer Vision, ECCV 2014, Zurich, Switzerland, 6–12 September 2014. [Google Scholar]
  20. Weidong, H.; Wenlong, Z.; Shi, C.; Xin, L.; Dawei, A.; Leo, L. A Deconvolution Technology of Microwave Radiometer Data Using Convolutional Neural Networks. Remote Sens. 2018, 10, 275. [Google Scholar]
  21. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  22. Hu, W.; Li, Y.; Zhang, W.; Chen, S.; Lv, X.; Ligthart, L. Spatial Resolution Enhancement of Satellite Microwave Radiometer Data with Deep Residual Convolutional Neural Network. Remote Sens. 2019, 11, 771. [Google Scholar] [CrossRef] [Green Version]
  23. Ledig, C.; Theis, L.; Huszar, F.; Caballero, J.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; Shi, W. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. arXiv 2016, arXiv:1609.04802. [Google Scholar]
  24. Bhalla, R.; Ling, H. Near-field signature prediction using far-field scattering centers extracted from the shooting and bouncing ray technique. IEEE Trans. Antennas Propag. 2000, 48, 337–338. [Google Scholar] [CrossRef]
  25. Swerling, P. Radar probability of detection for some additional fluctuating target cases. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 698–709. [Google Scholar] [CrossRef]
  26. Swerling, P. Probability of detection for fluctuating targets. IRE Trans. Inf. Theory 1960, 6, 269–308. [Google Scholar] [CrossRef]
  27. Lo, Y.T.; Lee, S.W. Antenna Handbook; Springer: New York, NY, USA, 1988. [Google Scholar]
  28. Vieira, D.A.G.; Travassos, L.; Saldanha, R.R.; Palade, V. Signal denoising in engineering problems through the minimum gradient method. Neurocomputing 2009, 72, 2270–2275. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram for radar cross section measurement: (a) Incident spherical wave in the target under test (TUT); (b) Near-field scattering data collection geometries; (c) The geometry of the TUT; (d) Incident wave analysis of the TUT.
Figure 2. Two-dimensional far-field and near-field condition: (a) Far-field condition; (b) Near-field condition.
Figure 3. Near-field to far-field transformation framework flow chart.
Figure 4. Neural network flow chart (a) NN-NFFFT training scheme (b) the structure of NN-NFFFT framework.
Figure 5. Point scattering model and near-field measurement setting (a) an example of 5-points model (b) an example of a randomly placed 2-points model.
Figure 6. The training and testing result of three different datasets (a) training error for 3 × 66 samples (b) training error for 3 × 122 samples (c) training error for 3 × 244 samples (d) testing RMSE for 3 × 66 samples (e) testing RMSE for 3 × 122 samples (f) testing RMSE for 3 × 244 samples.
Figure 7. Simulation result for different test samples: (a) 5-point target R = 5.47λ NN-NFFFT result (b) 4-point target R = 6.8λ NN-NFFFT result (c) 3-point target R = 5.2λ NN-NFFFT result (d) 3-point target R = 5.2λ “image-based” NFFFT result with 401 frequency points.
Figure 8. Test error (a) the RMSE of different sample output from neural network. (b) The RMSE of “image-based” NFFFT result (only the center frequency) respect to different frequency points.
Figure 9. The RMSE vs. operation time of “Image-Based” NFFFT and NN-NFFFT.
Figure 10. The shape and results of test targets: (a) asymmetric sample with little fluctuation; (b) asymmetric sample with fluctuation.
Figure 11. The experiment setup: (a) the experiment analyzes and design; (b) experiment environment.
Figure 12. Sphere diameter = 25 mm (0.83λ), R = 0.21 m (7λ) result: (a) near-field measurement result and absolute error; (b) NN output result and absolute error.
Figure 13. Sphere diameter = 45 mm (1.5λ), R = 0.125 m (4.16λ) result: (a) near-field measurement result and absolute error; (b) NN output result and absolute error.
Figure 14. Measurement and neural network error (NN without optimization): (a) RMSE of each sample for 25 mm (0.83λ) diameter sphere; (b) RMSE of each sample for 45 mm (1.5λ) diameter sphere.
Figure 15. Result and RMSE output by the improved NN: (a) Sphere diameter = 25 mm (0.83λ), R = 0.21 m (7λ) NN output result; (b) Sphere diameter = 45 mm (1.5λ), R = 0.125 m (4.16λ) NN output result; (c) RMSE of each sample for the 25 mm (0.83λ) diameter sphere; (d) RMSE of each sample for the 45 mm (1.5λ) diameter sphere.
Table 1. Temporal comparison.

Framework | Pre-Preparation Time 1 | Operation Time | RMSE | Experiment Data Acquisition Time 2
NN-NFFFT | 275 s | 0.289 s | −10.28 dBsm | 12 s
"image-based" NFFFT | none | 5.568 s | −6.748 dBsm | 244.5 s
1 The pre-preparation time means the time consumed before the framework's implementation; it mainly refers to the training time of the neural network. 2 The experiment data acquisition time is explained in the Real Scene Experiment section. It is estimated under the following conditions: the VNA IF bandwidth is 50 Hz, the number of testing points in the angle domain is 121, and the speed of the turntable is 1 degree per second.
Table 2. Measurement target.

Sample Number | Metal Sphere Diameter | Projected Area of the Sphere | NRCS 1 at 10 GHz | RCS
1 | 45 mm | −27.987 dBsm | 1.427 dB | −26.557 dBsm
2 | 25 mm | −33.092 dBsm | 1.433 dB | −31.665 dBsm
1 NRCS stands for the normalized radar cross section of a perfectly conducting sphere with respect to its projected area: NRCS = σ/(πa²), where σ is the RCS of the sphere and a is its radius.
