Method for pseudorecurrent processing of data using a feedforward neural network architecture
Patent Registration Data
Publication Number
US10152673
Application Number
US14/900177
Application Date
21 June 2013
Publication Date
11 December 2018
Current Assignee
ASELSAN ELEKTRONIK SANAYI VE TICARET ANONIM SIRKETI
Original Assignee (Applicant)
ASELSAN ELEKTRONIK SANAYI VE TICARET ANONIM SIRKETI
International Classification
G06K9/62, G06N3/04, G06N3/063
Cooperative Classification
G06N3/0454, G06N3/0445, G06N3/063, G06K9/6223, G06K9/6218
Inventor
YILMAZ, OZGUR; OZKAN, HUSEYIN
Abstract
Recurrent neural networks are powerful tools for handling incomplete data problems in machine learning, thanks to their significant generative capabilities. However, the computational demand of these algorithms in real-time applications requires specialized hardware and software solutions. We disclose a method for adding recurrent processing capabilities into a feedforward network without sacrificing much computational efficiency. We assume a mixture model and generate samples of the last hidden layer according to the class decisions of the output layer, modify the hidden layer activity using the samples, and propagate to lower layers. For an incomplete data problem, the iterative procedure emulates the feedforward-feedback loop, filling in the missing hidden layer activity with meaningful representations.
Claims
1. A computer implemented method for recurrent data processing, comprising the steps of:
computing activity of multiple layers of hidden layer nodes in a feedforward neural network, given an input data instance,
forming memories of hidden layer activities, utilizing clustering and filtering methods, as a training phase in a recurrent processing,
finding memories that are closest to a presented test data instance according to a class decision of the feedforward network, and
imputing the test data hidden layer activity with computed closest memories in an iterative fashion,
wherein the step of forming memories of hidden layer activities, utilizing clustering and filtering methods, as a training phase in a recurrent processing further comprises the substeps of:
computing hidden layer activities of every training data instance, then low-pass filtering and stacking the hidden layer activities in a data structure;
keeping a first and second hidden layer activity memory, indexed by class label;
forming both class-specific and class-independent cluster centers as quantized memories of the training data's second hidden layer activity, via k-means clustering, using each class's data separately or using all the data together depending on a choice of class specificity;
keeping quantized second hidden layer memories, indexed by class labels or non-indexed, depending on the class specificity choice;
training a cascade of classifiers for enabling multiple hypotheses generation of a network, via utilizing a subset of the input data as the training data; and
keeping a classifier memory, indexed with the set of data used during training;
wherein the step of finding memories that are closest to the presented test data instance according to the class decision of the feedforward network, and imputing the test data hidden layer activity with computed closest memories in an iterative fashion further comprises the substeps of:
determining first, second and third class label choices of the neural network as multiple hypotheses, via a cascaded procedure utilizing a sequence of classifier decisions;
computing a set of candidate samples for the second layer that are the closest (Euclidean distance) hidden layer memories to the test data's second hidden layer activity, using the multiple-hypotheses class decisions of the network and a corresponding memory database, then assigning the second hidden layer sample as one of the candidate hidden layer memories, via max or averaging operations depending on a choice of multi-hypotheses competition;
merging the second hidden layer sample with the test data's second hidden layer activity via a weighted averaging operation, creating an updated second hidden layer activity;
using the updated second hidden layer activity to compute the closest (Euclidean distance) first hidden layer memory and assigning it as the first hidden layer sample, merging the first hidden layer sample with the test data first hidden layer activity via a weighted averaging operation, creating an updated first hidden layer activity;
computing the feedforward second hidden layer activity from the updated first hidden layer activity, and merging this feedforward second hidden layer activity with the updated second hidden layer activity, via a weighted averaging operation; and
repeating these steps for multiple iterations, starting from the step of determining the first, second and third class label choices of the neural network as multiple hypotheses via a cascaded procedure utilizing a sequence of classifier decisions, and using the output of the step of computing the feedforward second hidden layer activity from the updated first hidden layer activity and merging this feedforward second hidden layer activity with the updated second hidden layer activity via a weighted averaging operation at the beginning of the next iteration.
2. A computer implemented method according to claim 1 for enabling a feedforward network to mimic a recurrent neural network via making a class decision at the output layer of the feedforward neural network, selecting an appropriate memory to estimate hidden layer activities, and then inserting the selected memory into the hidden layer activity as if the selected memory were feedback from a higher layer network in classical recurrent networks.
Description
FIELD OF THE INVENTION
The present invention relates to a method for implementing a recurrent neural network algorithm.
BACKGROUND OF THE INVENTION
Classification of incomplete data is an important problem in machine learning that has been previously tackled from both the biological and computational perspectives. The proposed solutions to this problem are closely tied to the literature on inference with incomplete data, generative models for classification problems and recurrent neural networks.
Recurrent Neural Networks (RNNs) are connectionist computational models that utilize distributed representation and nonlinear dynamics of their units. Information in RNNs is propagated and processed in time through the states of their hidden units, which makes them appropriate tools for sequential information processing. There are two broad types of RNNs: stochastic energy-based RNNs with symmetric connections, and deterministic ones with directed connections.
RNNs are known to be Turing-complete computational models and universal approximators of dynamical systems. They are especially powerful tools in dealing with long-range statistical relationships in a wide variety of applications, ranging from natural language processing to financial data analysis. Additionally, RNNs are shown to be very successful generative models for data completion tasks.
Despite their immense potential as universal computers, difficulties in training RNNs arise due to the inherent difficulty of learning long-term dependencies and due to convergence issues. However, recent advances suggest promising approaches to overcoming these issues, such as using better nonlinear optimizers or utilizing a reservoir of coupled oscillators. Nevertheless, RNNs remain computationally expensive in both the training and test phases. The idea of the method disclosed in this patent is to imitate recurrent processing in a network and exploit its power, while avoiding the expensive energy minimization in training and the computationally heavy sampling in test. Generative models are used to randomly generate observable data, using the learnt probabilistic structure encoded in their hidden variables. In contrast to discriminative models, generative models specify a joint probability distribution over the observed data and the corresponding class labels. For example, Restricted Boltzmann Machines are generative RNN models. Mixture models are perhaps the most widely used generative tools, and Expectation Maximization has become the standard technique for estimating the corresponding statistical parameters, i.e., the parameters of a mixture of subpopulations in the training data. Given the parameters of the subpopulation distributions, new data can be generated through sampling methods.
Classification under incomplete data conditions is a well-studied problem. Imputation is commonly used as a preprocessing tool before the standard classification algorithms are applied. The Mixture of Factor Analyzers approach assumes multiple clusters in the data, estimates the statistical parameters of these clusters and uses them for filling in the missing feature dimensions. Thus, in the imputation stage, the missing feature values are filled in with values sampled from a precomputed distribution. Here, multiple imputation assumes that the data come from a mixture of distributions, and is capable of capturing variation in the data. Sampling from a mixture of factor analyzers and filling in the data is effectively very similar to feedback information insertion in a neural network, from a higher layer of neurons onto a lower layer of neurons.
Previously, both feedforward and recurrent neural network methods were proposed for denoising of images, i.e. recovering original images from corrupted versions. Multilayer perceptrons were trained using backpropagation for denoising tasks, as an alternative to energy-based undirected recurrent neural networks (Hopfield models). Recurrent neural networks were trained for denoising of images by forming continuous attractors. A convolutional neural network was employed which takes an image as input and outputs the denoised image; the weights of the convolutional layers were learned through backpropagation of the reconstruction error. Denoising was also used as a means to design a better autoencoder recurrent neural network. Pseudolikelihood and dependency network approaches solve the data completion problem by learning conditional distributions that predict a data component using the rest of the components. These two approaches show similarities to the method disclosed in this patent, due to the maximum likelihood estimation of the missing data components, i.e. k-means clustering and imputation of the cluster center. However, none of the prior art proposes an iterative procedure that originates from a high-level class decision at the back end of a neural network and propagates this information back into the network for choosing the optimum sample, in a maximum likelihood sense, from a mixture of distributions. One prior patent discloses a method for imputing unknown data components in a data structure using statistical methods. In another, a tensor factorization method is used to perform multiple imputation in retail sales data. In a third, a neural network method is disclosed to denoise images that are compressed and decompressed.
OBJECTS OF THE INVENTION
The object of the invention is a method for implementing recurrent data processing. The disclosed method, based on mixture models and multiple imputation, is applied to a specific neural network's hidden layer activities. The disclosed multiple imputation approach can be considered as pseudo-recurrent processing in the network, as if it were governed by the dynamical equations of the corresponding hidden layer activities. This framework provides a shortcut into recurrent neural network computation, which is suitable for real-time operation. The disclosed method successfully performs classification for incomplete data in which the missing data components are completely unknown or some of the data components are largely distorted.
BRIEF DESCRIPTION OF THE DRAWING
FIG. 1 schematically shows the connection diagram of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Recent work on feedforward networks has proven the importance of dense sampling and of the number of hidden layer units. The question is how a successful feedforward network can be transformed into a computationally much lighter pseudo-recurrent one, with occlusion/incomplete data handling capability. In the disclosed method, Coates et al.'s network is adopted and modified to fill in incomplete (occluded) visual representations (hidden layer activities). The nonlinear dynamical equations that construct the attractors in the high-dimensional space are replaced with linear distance comparators, and costly sampling operations such as MCMC are replaced with averaging and binary decision operations. In Hopfield networks and Boltzmann machines, "hidden memories" are interpretations of the sensory input, and they are formed by iterative energy minimization procedures. In our algorithm, hidden memories are formed using K-means clustering and linear filtering.
In a recurrent network, the hidden layer activity at time t is given as a function (parameterized over θ) of the hidden layer activity at t−1 and the current input as
h^{t} = F_{θ}(h^{t−1}, x^{t}).
In the leaky integration approach, the activity at t−1 is added in for smoother changes:
h^{t} = γ·h^{t−1} + (1−γ)·F_{θ}(h^{t−1}, x^{t}).
In our framework, for computational efficiency, F_{θ} is replaced with H^{t}, i.e.,
h^{t} = γ·h^{t−1} + (1−γ)·H^{t},
where H^{t} is the cluster center with the minimum distance to the previous hidden layer activity h^{t−1}:
H^{t} = argmin_{k} (h^{t−1} − H̄^{k})².
Here, H̄ is the set of cluster centers, with K2 clusters for each class. The computation of the closest cluster center H^{t} is based on the previous decision on the class label, made using support vector machines (SVM). The network therefore uses its class decision to narrow down the set of candidate probability distributions for sampling the hidden layer activity; in other words, high-level information is used to sample a hidden layer activity that is merged with the current hidden layer activity. Repeating this procedure in a loop emulates the behavior of a dynamical system, i.e. an RNN.
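A minimal sketch of this update, assuming NumPy arrays for the previous hidden activity and for the set of cluster centers (the function and variable names below are illustrative, not taken from the patent):

```python
import numpy as np

def pseudo_recurrent_update(h_prev, cluster_centers, gamma=0.5):
    """One pseudo-recurrent step: the RNN transition F_theta is replaced by
    the cluster center closest to the previous hidden layer activity."""
    # H^t = argmin_k (h^{t-1} - Hbar^k)^2: nearest quantized memory
    distances = np.sum((cluster_centers - h_prev) ** 2, axis=1)
    H_t = cluster_centers[np.argmin(distances)]
    # h^t = gamma * h^{t-1} + (1 - gamma) * H^t: leaky integration
    return gamma * h_prev + (1.0 - gamma) * H_t
```

Iterating this update, with the cluster set restricted by the SVM class decision, is what emulates the feedforward-feedback loop of an RNN in the remainder of the method.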
The disclosed method (100) comprises a feedforward neural network (101), a training stage (200) for forming memories in hidden layers, and a test stage (300) for exploiting the memories in data completion tasks. In 101, the network architecture is as follows:
 it has one hidden layer (104), which performs Analysis (103), i.e. dimensionality expansion, on the presented multidimensional data (102);
 a subsequent hidden layer (106) is computed by Pooling (105) the first hidden layer's activities in separate spatial regions (e.g. quadrants in images);
 an SVM mimics the output layer of the network, performs multiclass Classification (107) on the Layer 2 activity, and outputs the class label (108).
For details of the feedforward network method, see [9]. A minimal sketch of this forward pass is given below.
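As an illustration only, the forward pass can be sketched as follows, assuming the Layer 1 activity is a spatial feature map of shape (rows, columns, features) and that `analysis_fn` and `svm` stand in for the feature extraction and linear classifier of [9] (both names are assumptions of this sketch):

```python
import numpy as np

def forward_pass(x, analysis_fn, svm):
    """Compute Layer 1 (104), Layer 2 (106) and the class label (108)."""
    h1 = analysis_fn(x)                      # Analysis (103): dimensionality expansion
    r, c = h1.shape[0] // 2, h1.shape[1] // 2
    quadrants = [h1[:r, :c], h1[:r, c:], h1[r:, :c], h1[r:, c:]]
    # Pooling (105): sum Layer 1 activity within each spatial quadrant
    h2 = np.concatenate([q.sum(axis=(0, 1)) for q in quadrants])
    y = svm.predict(h2[None, :])[0]          # Classification (107) -> class label (108)
    return h1, h2, y
```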
During the training phase, a set of data (102) with known labels is used for training classifiers (205). Three stages are introduced for pseudo-recurrent processing (a condensed code sketch follows the list):
 1. Filter and Store (201): The first and second hidden layer activities of every training input data instance are low-pass filtered and stored in data structures called hidden layer memories (202):
Ḣ_{1} = {h_{1}^{1}, h_{1}^{2}, h_{1}^{3}, …, h_{1}^{N}}, for N training examples and Layer 1.
Ḣ_{2} = {h_{2}^{1}, h_{2}^{2}, h_{2}^{3}, …, h_{2}^{N}}, for N training examples and Layer 2.
 2. K-Means Clustering (203): Memory formation in an RNN through costly energy minimization is replaced with clustering. The second hidden layer activities (106) are vectorized and clustered using K-means, with K2 clusters per class, or with K2*(# of classes) clusters for non-class-specific processing (cf. section 3.1.3). Therefore, the hidden layer activities of each class are quantized into K2 bins, or the hidden layer activities of the whole data set are quantized into K2*(# of classes) bins. Hidden Layer 2 memory (204):
H̄_{2}^{y} = {h_{2}^{1}, h_{2}^{2}, h_{2}^{3}, …, h_{2}^{K2}}, K2 cluster centers for each class y, or
H̄_{2} = {h_{2}^{1}, h_{2}^{2}, h_{2}^{3}, …, h_{2}^{C×K2}}, K2*(# of classes) cluster centers for the whole data set.
 3. Multi-Hypotheses SVM Training (205): In an RNN, multiple hypotheses can form and compete with each other to explain sensory data. A cascaded, multi-hypotheses classification framework is constructed to imitate this feature. The training is repeated for subsets of the data in order to allow multiple hypotheses of the network. This is achieved by excluding a specific single class (e.g. Class 1) or a pair of classes (e.g. Class 1 and Class 2), and training an SVM on the rest of the data. In the case of single-class exclusion, the trained SVM can be used for supplying a second hypothesis. For example, if Class 1 is the first choice of the network, as decided by the "full SVM classifier", the classifier trained by leaving out the Class 1 data is used to give a second hypothesis. In the case of a pair of class exclusions, for example when both Class 1 and Class 2 data are left out, the trained SVM gives a third hypothesis, where the first choice is Class 1 and the second choice is Class 2. This collection of classifiers is used during the test phase to decide which cluster centers of hidden Layer 2 activities will be used for feedback insertion. The classifier memory (206) consists of:
 S, the SVM classifier for the first choice of the network;
 S^{p}, the SVM classifier for the second choice, when the first choice was class p;
 S^{pq}, the SVM classifier for the third choice, when the first choice was class p and the second was class q.
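A condensed sketch of these three training stages, assuming the (already low-pass filtered) Layer 1 and Layer 2 activities are stacked row-wise in NumPy arrays and that scikit-learn supplies the K-means and SVM implementations; identifiers and the default K2=8 are illustrative assumptions, not values from the patent:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def form_memories(H1, H2, labels, K2=8):
    """Stages 201-206: store hidden layer memories, quantize Layer 2 with
    K-means, and train the cascaded classifier memory."""
    classes = np.unique(labels)
    # (202) Layer 1 / Layer 2 activity memories (the patent additionally
    # indexes these stacks by class label)
    mem1, mem2 = H1.copy(), H2.copy()
    # (204) class-specific quantized Layer 2 memory: K2 cluster centers per class
    qmem2 = {c: KMeans(n_clusters=K2, n_init=10).fit(H2[labels == c]).cluster_centers_
             for c in classes}
    # class-independent alternative: K2 * (number of classes) centers over all data
    qmem2_all = KMeans(n_clusters=K2 * len(classes), n_init=10).fit(H2).cluster_centers_
    # (206) classifier memory: full SVM, plus SVMs trained with one class or a
    # pair of classes left out, keyed by the excluded classes
    clf = {(): LinearSVC().fit(H2, labels)}
    for p in classes:
        keep_p = labels != p
        clf[(p,)] = LinearSVC().fit(H2[keep_p], labels[keep_p])
        for q in classes:
            if q == p:
                continue
            keep_pq = keep_p & (labels != q)
            clf[(p, q)] = LinearSVC().fit(H2[keep_pq], labels[keep_pq])
    return mem1, mem2, qmem2, qmem2_all, clf
```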
During the test phase, a test data instance (102) with unknown label, possibly incomplete (i.e. occluded, subsampled, etc.), is presented. The test phase has the following iterative stages for recurrent processing:
 1. Pooling (105): The test phase starts with the algorithm provided by Coates et al. [9] and computes the hidden Layer 2 activity (106) via pooling the Layer 1 activity (104). For test data instance i, at time t:
h_{2}^{i,t} = P(h_{1}^{i,t}),
 where P is the pooling operation (105) over hidden Layer 1 (104) activity.
 2. Multi-Hypotheses SVM Testing (301): The first, second and third class label choices of the network are extracted using the corresponding SVMs in the classifier memory (206). The multiple hypotheses (302) of the system are:
y^{1} = S(h_{2}^{i,t}),
 where S( ) is the classification operation (107) and y^{1} is the class label of the first choice, and
y^{2} = S^{y^1}(h_{2}^{i,t}) and y^{3} = S^{y^1 y^2}(h_{2}^{i,t}).
 3. Cluster Selection (303): For each class hypothesis, the cluster center in the hidden Layer 2 memory (204) that is closest (in Euclidean distance) to the test data hidden Layer 2 activity (106) is computed. These are the hidden layer hypotheses of the network. The 3 cluster centers (one per class hypothesis) closest to the test data instance's Layer 2 activity (106) are computed as follows:
h̃_{2,1}^{i,t} = argmin_{k} (h_{2}^{i,t} − H̄_{2}^{y^1,k})², the first class hypothesis cluster center,
h̃_{2,2}^{i,t} = argmin_{k} (h_{2}^{i,t} − H̄_{2}^{y^2,k})², the second class hypothesis cluster center,
h̃_{2,3}^{i,t} = argmin_{k} (h_{2}^{i,t} − H̄_{2}^{y^3,k})², the third class hypothesis cluster center.
 In the "winner-takes-all" configuration, the closest of the clusters computed above (minimum distance to the test hidden layer activity) is chosen as the Layer 2 hidden activity sample (304); in the "average" configuration, the average of the three cluster centers is assigned as the sample (304):
h̃_{2,A}^{i,t} = argmin_{m} (h̃_{2,m}^{i,t} − h_{2}^{i,t})²,
 the assigned Layer 2 sample for "winner-takes-all", or
h̃_{2,A}^{i,t} = (1/3)·(h̃_{2,1}^{i,t} + h̃_{2,2}^{i,t} + h̃_{2,3}^{i,t}),
 the assigned Layer 2 sample for the "average" scheme.
 For the non-class-specific configuration, instead of computing the closest center for each of the 3 class hypotheses, the 3 closest cluster centers are computed regardless of class hypothesis. The other hidden Layer 2 memory (see training phase, stage 203) is used:
h̃_{2,A}^{i,t} = argmin_{k} (h_{2}^{i,t} − H̄_{2}^{k})².
 4. Feedback (305, Layer 2): The Layer 2 sample is merged (with feedback magnitude α) with the test data instance Layer 2 activity (106) to generate the hidden layer activity at time t+1:
h_{2}^{i,t+1} = (h_{2}^{i,t} + α·h̃_{2,A}^{i,t}) / (1 + α). (109)
 5. Layer 1 Sampling (306): The modified hidden Layer 2 activity (109) is used to compute the most similar training set data instance, using the Euclidean distance:
L^{i,t} = argmin_{k} (h_{2}^{i,t+1} − Ḣ_{2}^{k})²,
 is the index of the most similar training data. The hidden Layer 1 activity of the most similar training data is fetched from the Layer 1 memory (202), as the Layer 1 sample (307) of the network:
h̃_{1,A}^{i,t} = Ḣ_{1}^{L^{i,t}}.
 6. Feedback (308, Layer 1): The Layer 1 sample (307) is merged (with feedback magnitude β) with the test data instance Layer 1 activity (104) to generate the hidden layer activity at time t+1:
h_{1}^{i,t+1} = (h_{1}^{i,t} + β·h̃_{1,A}^{i,t}) / (1 + β). (110)
 7. Pooling (105, second run): The modified Layer 1 activity (110) is pooled (105) to compute the most recent Layer 2 activity (111) in the iterative loop. Then, this activity is averaged (with feedback ratio τ) with the previously computed Layer 2 activity (109) coming from the Layer 2 feedback (305). The updated Layer 2 activity (112) with feedback is:
h_{2}^{i,t+1} := [h_{2}^{i,t+1} + τ·P(h_{1}^{i,t+1})] / (1 + τ). (309)
 The update rule (309) for the Layer 2 activity (112) can be rewritten, using the reference numerals in FIG. 1 and the text, as:
(112) := [(109) + τ·(111)] / (1 + τ).
This procedure is repeated for multiple iterations, starting from the second stage (301) and using the output Layer 2 activity (112) at the beginning of the next iteration. The feedback magnitude is halved at each iteration for simulated annealing purposes. A condensed sketch of this test-time loop is given below.
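As an illustration, the test-time loop in the class-specific, winner-takes-all configuration can be sketched as follows, reusing the memories and classifier cascade from the training sketch above; `pool` stands for the pooling operation P, Layer 1 activities are treated as flat vectors for simplicity, and all names are assumptions of this sketch rather than the patent's notation:

```python
import numpy as np

def pseudo_recurrent_test(h1, h2, pool, mem1, mem2, qmem2, clf,
                          iters=5, alpha=1.0, beta=1.0, tau=1.0):
    """Stages 301-309 for one test instance (class-specific, winner-takes-all)."""
    for _ in range(iters):
        # (301-302) first, second and third class hypotheses via the cascade
        y1 = clf[()].predict(h2[None, :])[0]
        y2 = clf[(y1,)].predict(h2[None, :])[0]
        y3 = clf[(y1, y2)].predict(h2[None, :])[0]
        # (303) closest class-specific cluster center for each hypothesis
        candidates = []
        for y in (y1, y2, y3):
            centers = qmem2[y]
            candidates.append(centers[np.argmin(np.sum((centers - h2) ** 2, axis=1))])
        # (304) winner-takes-all competition between the hypotheses
        h2_sample = min(candidates, key=lambda c: np.sum((c - h2) ** 2))
        # (305 / 109) Layer 2 feedback with magnitude alpha
        h2 = (h2 + alpha * h2_sample) / (1.0 + alpha)
        # (306-307) Layer 1 sample: Layer 1 memory of the most similar training instance
        idx = np.argmin(np.sum((mem2 - h2) ** 2, axis=1))
        h1_sample = mem1[idx]
        # (308 / 110) Layer 1 feedback with magnitude beta
        h1 = (h1 + beta * h1_sample) / (1.0 + beta)
        # (309 / 112) merge the re-pooled Layer 2 activity with the fed-back one
        h2 = (h2 + tau * pool(h1)) / (1.0 + tau)
        # the feedback magnitude is halved each iteration (simulated annealing)
        alpha, beta = alpha / 2.0, beta / 2.0
    return h1, h2
```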
The perspective adopted in the disclosed method binds three distinct approaches to data generation: RNNs, mixture models and incomplete data classification. An intuitive and real-time operable method is disclosed. Imputation and the mixture of factor analyzers are used in the disclosed method as part of the pseudo-recurrent processing. In the method disclosed in this patent, a feedforward neural network makes a class decision at its output layer and selects an appropriate cluster to estimate the selected model's hidden layer activities. After this sampling stage, the algorithm inserts the cluster center as if it were feedback from a higher layer. As opposed to the case in the classical imputation technique, in our network the incomplete hidden layer activities cannot be isolated due to spatial pooling, thus it is assumed that the missing dimensions are not known a priori. Since the missing data dimensions are unknown, the sample and the test data hidden layer activities are merged in all dimensions. This procedure is repeated multiple times to emulate the feedforward-feedback iterations in an RNN. Other related concepts such as multi-hypotheses feedback and winner-takes-all are also applied. We suggest this method as a shortcut into feedback processing and a baseline for the performance of RNNs in data completion tasks.