Los Alamos National Laboratory
 
 


National Security Education Center

Contacts

  • Institute Director
    Charles Farrar
    (505) 663-5330
  • UCSD EI Director
    Michael Todd
    (858) 534-5951
  • Institute Office Manager
    Jutta Kayser
    (505) 663-5649

The Statistical Pattern Recognition Paradigm

Our approach is to address the SHM problem in the context of a statistical pattern recognition paradigm. In this paradigm, the process can be broken down into four parts: (1) Operational Evaluation, (2) Data Acquisition, (3) Feature Extraction, and (4) Statistical Model Development for Feature Discrimination.  When one attempts to apply this paradigm to data from “real-world” structures, it quickly becomes apparent that data cleansing, normalization, fusion and compression, which can be implemented with either hardware or software, are inherent in Parts 2-4 of this paradigm.  The authors believe that all approaches to SHM, as well as all traditional non-destructive evaluation procedures (e.g., ultrasonic inspection, acoustic emission, active thermography), can be cast in the context of this statistical pattern recognition paradigm.
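As a minimal sketch, the four parts above can be composed into a monitoring loop. Every function name, signal, and threshold below is illustrative only, not part of any LANL implementation:

```python
import math
import random

# Hypothetical sketch of the four-part SHM paradigm; all names,
# signals, and thresholds are illustrative.

def operational_evaluation():
    # Part 1: define damage and the monitoring constraints.
    return {"damage_definition": "loss of stiffness", "sampling_rate_hz": 100}

def acquire_data(config):
    # Part 2: stand-in for sensor measurements (synthetic noisy sine).
    rng = random.Random(0)
    fs = config["sampling_rate_hz"]
    return [math.sin(2 * math.pi * 5 * t / fs) + 0.1 * rng.gauss(0, 1)
            for t in range(256)]

def extract_features(signal):
    # Part 3: condense the raw record into a low-dimensional feature
    # (here simply the sample variance).
    mean = sum(signal) / len(signal)
    return sum((x - mean) ** 2 for x in signal) / len(signal)

def classify(feature, baseline, threshold=0.05):
    # Part 4: statistical discrimination -- flag damage when the feature
    # deviates from the undamaged baseline by more than a threshold.
    return abs(feature - baseline) > threshold

config = operational_evaluation()
baseline = extract_features(acquire_data(config))
current = extract_features(acquire_data(config))
print(classify(current, baseline))  # identical conditions -> False
```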

It should be noted that the statistical modeling portion of the structural health monitoring process has received the least attention in the technical literature.  The algorithms used in statistical model development usually fall into three categories: group classification, regression analysis, and outlier detection.  The ability to use a particular statistical procedure from one of these categories will depend on the availability of data from both an undamaged and a damaged structure.  The sections below discuss each portion of the SHM statistical pattern recognition paradigm.

Operational Evaluation

The first step in the development of an SHM system is referred to as operational evaluation.  Operational evaluation attempts to answer four questions regarding the implementation of a damage detection capability:

  • What are the life-safety and/or economic justifications for performing the SHM?
  • How is damage defined for the system being investigated and, for multiple damage possibilities, which cases are of the most concern?
  • What are the conditions, both operational and environmental, under which the system to be monitored functions?
  • What are the limitations on acquiring data in the operational environment?

Operational evaluation begins to set the limitations on what will be monitored and how the monitoring will be accomplished.  This evaluation starts to tailor the damage detection process to features that are unique to the system being monitored and tries to take advantage of unique features of the damage that is to be detected.


Data Acquisition

The data acquisition portion of the structural health monitoring process involves selecting the types of sensors to be used (including required bandwidth and resolution), the locations where the sensors should be placed, the number of sensors to be used, and the data acquisition/storage/transmittal hardware. The trade-offs between an optimal sensing system and a redundant sensing system must also be considered. For real-world applications the ruggedness and long-term stability of the data acquisition system will also be a concern. Another important issue is deciding whether the application requires a passive or an active sensing approach. The LANL group is focusing much of its current work on active sensing, where a known input can be applied to the structure locally to enhance the damage detection process. Finally, we believe that it is important to integrate the sensor with a local data processing capability to provide an estimate of the structure’s condition in near real time while minimizing the power consumption of the sensor/processor/telemetry system. Often this process will require a numerical simulation of the structure and postulated damage in order to define the required sensing system properties.  The actual implementation of this portion of the SHM process will be application specific.  Economic considerations will play a major role in making decisions about the type and extent of the data acquisition system that can be deployed.

A fundamental premise regarding data acquisition and sensing is that these systems do not measure damage.  Rather, they measure the response of a system to its operational and environmental loading.  Depending on the sensing technology deployed and the type of damage to be identified, the sensor reading may be more or less directly correlated to the presence and location of damage.  However, data interrogation procedures are the necessary components of an SHM system that convert the sensor data to information about the structural condition.  Furthermore, to achieve successful SHM the data acquisition system will have to be developed in conjunction with these data interrogation procedures. 


Data Normalization

As it applies to SHM, data normalization is the process of separating changes in sensor reading caused by damage from those caused by varying operational and environmental conditions.  Because data can be measured under varying conditions, the ability to normalize the data becomes very important to the damage detection process.  One of the most common procedures is to normalize the measured responses by the measured inputs.  When environmental or operational variability is an issue, the need can arise to normalize the data in some temporal fashion to facilitate the comparison of data measured at similar times of an environmental or operational cycle.  This normalization may require additional types of measurements (e.g. temperature) to be made.  Sources of variability in the data acquisition process and with the system being monitored need to be identified and minimized to the extent possible. In general, not all sources of variability can be eliminated.  Therefore, it is necessary to make the appropriate measurements such that these sources can be statistically quantified.  Variability can arise from changing environmental and test conditions, changes in the data reduction process, and unit-to-unit inconsistencies.
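Normalizing measured responses by measured inputs can be illustrated with a frequency response function estimate. The signals and the gain of 2 below are synthetic; a real application would use averaged cross- and auto-spectra (e.g., an H1 estimator) to reduce noise:

```python
import numpy as np

# Sketch of normalizing a measured response by the measured input:
# estimate a frequency response function H(f) = Y(f) / X(f), so the
# feature reflects the structure rather than the excitation level.
rng = np.random.default_rng(0)
fs = 256                      # sampling rate, Hz (illustrative)
x = rng.standard_normal(fs)   # measured input (broadband excitation)
y = 2.0 * x                   # measured response: pure gain of 2 here

X = np.fft.rfft(x)
Y = np.fft.rfft(y)
H = Y / X                     # input-normalized response

print(np.allclose(np.abs(H), 2.0))  # gain recovered regardless of input level
```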


Data Cleansing

Data cleansing is the process of selectively choosing data to pass on to or reject from the feature selection process.  The data cleansing process is usually based on knowledge gained by individuals directly involved with the data acquisition.  As an example, an inspection of the test setup may reveal that a sensor was loosely mounted and, hence, based on the judgment of the individuals performing the measurement, this set of data or the data from that particular sensor may be selectively deleted from the feature selection process.  Signal processing techniques such as filtering and re-sampling can also be thought of as data cleansing procedures.
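Filtering and re-sampling as cleansing steps can be sketched with a moving-average low-pass filter on a synthetic record; the window length and decimation factor are arbitrary choices for illustration:

```python
import numpy as np

# Illustrative data cleansing: low-pass filter a noisy record with a
# moving average, then re-sample (decimate) it. Both steps discard
# content judged irrelevant before feature selection.
rng = np.random.default_rng(1)
fs = 1000                                     # original sampling rate, Hz
t = np.arange(2 * fs) / fs
clean = np.sin(2 * np.pi * 2 * t)             # underlying 2 Hz response
raw = clean + 0.5 * rng.standard_normal(t.size)

window = 25                                   # 25 ms moving-average window
smoothed = np.convolve(raw, np.ones(window) / window, mode="same")
cleansed = smoothed[::10]                     # re-sample to 100 Hz

print(raw.size, cleansed.size)                # 2000 200
print(np.std(smoothed - clean) < np.std(raw - clean))  # noise reduced: True
```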


Feature Extraction

A damage-sensitive feature is some quantity extracted from the measured system response data that indicates the presence of damage in a structure.  Identifying features that can accurately distinguish a damaged structure from an undamaged one is the focus of most SHM technical literature (Doebling et al., 1996; Sohn et al., 2003).  Fundamentally, the feature extraction process is based on fitting some model, either physics-based or data-based, to the measured system response data.  The parameters of these models or the predictive errors associated with these models then become the damage-sensitive features.  An alternate approach is to identify features that directly compare the sensor waveforms or spectra of these waveforms.  Many of the features identified for impedance-based and wave propagation-based SHM studies fall into this category.

One of the most common methods of feature extraction comes from correlating observations of measured quantities with the first-hand observations of the degrading system.  Another method of developing features for damage detection is to apply engineered flaws, similar to ones expected in actual operating conditions, to systems and develop an initial understanding of the parameters that are sensitive to the expected damage.  The flawed system can also be used to validate that the diagnostic measurements are sensitive enough to distinguish between features identified from the undamaged and damaged system.  The use of analytical tools such as experimentally-validated finite element models can be a great asset in this process.  In many cases the analytical tools are used to perform numerical experiments where the flaws are introduced through computer simulation.  Damage accumulation testing, during which significant structural components of the system under study are subjected to a realistic degradation, can also be used to identify appropriate features.  This process may involve induced-damage testing, fatigue testing, corrosion growth, temperature cycling, etc. to accumulate certain types of damage in an accelerated fashion. Insight into the appropriate features can be gained from several sources and is usually the result of information obtained from some combination of these sources.
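One data-based feature of the kind described above is the residual of an autoregressive (AR) model fit. This sketch uses synthetic signals; the model order, frequencies, and amplitudes are arbitrary, and the "damage" is simulated as an extra response component:

```python
import numpy as np

# Damage-sensitive feature: fit an AR model to the response and use the
# residual error as the feature. Larger residuals suggest the model of
# the undamaged condition no longer explains the data.
def ar_residual_std(signal, p=2):
    # Least-squares fit of x[n] = a1*x[n-1] + ... + ap*x[n-p] + e[n].
    X = np.column_stack(
        [signal[p - k - 1 : len(signal) - k - 1] for k in range(p)])
    y = signal[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.std(y - X @ coeffs)

rng = np.random.default_rng(2)
n = np.arange(1024)
undamaged = np.sin(0.2 * n) + 0.05 * rng.standard_normal(n.size)
# Simulated damage: a new high-frequency response component appears.
damaged = undamaged + 0.3 * np.sin(2.5 * n)

e_u = ar_residual_std(undamaged)
e_d = ar_residual_std(damaged)
print(e_d > e_u)   # unmodeled component inflates the residual: True
```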


Data Fusion

Data fusion is the process of combining information from multiple sensors in an effort to enhance the fidelity of the damage detection process.  Inherent in many feature selection processes is the fusing of data from multiple sensors and condensation of these data.  Common examples of data fusion include the extraction of mode shapes from sensor arrays and the averaging of spectral quantities to remove noise from the measurements.  Additional data fusion procedures focus on establishing other types of correlations (or quantifying loss of correlation) between different sensors in an effort to identify the presence of damage and locate the sources of damage.
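Averaging spectral quantities across a sensor array, one of the fusion examples named above, can be sketched as follows; the sensor count, noise level, and frequencies are synthetic:

```python
import numpy as np

# Data fusion sketch: average power spectra from several sensors to
# suppress uncorrelated measurement noise while reinforcing the common
# structural response.
rng = np.random.default_rng(3)
fs = 256
t = np.arange(fs) / fs
response = np.sin(2 * np.pi * 10 * t)      # common 10 Hz structural response

n_sensors = 16
spectra = []
for _ in range(n_sensors):
    record = response + 0.5 * rng.standard_normal(fs)  # independent noise
    spectra.append(np.abs(np.fft.rfft(record)) ** 2)

fused = np.mean(spectra, axis=0)           # fused (averaged) power spectrum

# With a 1 s record, bin index equals frequency in Hz.
print(int(np.argmax(fused)))               # 10
```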


Data Compression

The operational implementation and diagnostic measurement technologies needed to perform SHM produce more data than traditional uses of dynamic response information.  A condensation of the data is advantageous and necessary when comparisons of many feature sets obtained over the lifetime of the structure are envisioned.  Also, because data will be acquired from a structure over an extended period of time and in an operational environment, robust data reduction techniques must be developed to retain feature sensitivity to the structural changes of interest in the presence of environmental and operational variability. To further aid in the extraction and recording of quality data needed to perform SHM, the statistical significance of the features should be characterized and used in the compression process.
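One common condensation approach consistent with the text is projection of feature vectors onto their leading principal components via the SVD, retaining the directions with the most statistical variability. The dimensions and latent structure below are synthetic:

```python
import numpy as np

# Data compression sketch: PCA via the SVD. Feature vectors that vary
# along only a few latent directions compress with little loss.
rng = np.random.default_rng(4)
n_records, n_features = 200, 50

latent = rng.standard_normal((n_records, 3))      # 3 true directions
mixing = rng.standard_normal((3, n_features))
features = latent @ mixing + 0.01 * rng.standard_normal((n_records, n_features))

centered = features - features.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

k = 3
compressed = centered @ Vt[:k].T                  # 50 numbers -> 3 per record
explained = np.sum(s[:k] ** 2) / np.sum(s ** 2)   # variance retained
print(compressed.shape, explained > 0.99)         # (200, 3) True
```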


Statistical Model Development

The portion of the SHM process that has received the least attention in the technical literature is the development of statistical models for discrimination between features from the undamaged and damaged structures. Statistical model development is concerned with the implementation of the algorithms that operate on the extracted features to quantify the damage state of the structure. The algorithms used in statistical model development usually fall into three categories.  When data are available from both the undamaged and damaged structure, the statistical pattern recognition algorithms fall into the general classification referred to as supervised learning.  Group classification and regression analysis are categories of supervised learning algorithms.  Unsupervised learning refers to algorithms that are applied to data not containing examples from the damaged structure.  Outlier or novelty detection is the primary class of algorithms applied in unsupervised learning applications.  All of the algorithms analyze statistical distributions of the measured or derived features to enhance the damage detection process.
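A supervised group-classification sketch, under the assumption that labeled features exist from both conditions: a nearest-class-mean rule stands in here for more capable algorithms, and all feature values are synthetic:

```python
import numpy as np

# Supervised learning sketch: with labeled feature vectors from the
# undamaged and damaged structure, assign new observations to the class
# whose mean feature vector is closest.
rng = np.random.default_rng(6)
undamaged = rng.normal(loc=0.0, scale=0.3, size=(100, 2))
damaged = rng.normal(loc=2.0, scale=0.3, size=(100, 2))

means = {"undamaged": undamaged.mean(axis=0),
         "damaged": damaged.mean(axis=0)}

def classify(x):
    # Nearest-class-mean decision rule.
    return min(means, key=lambda label: np.linalg.norm(x - means[label]))

print(classify(np.array([0.1, -0.2])))   # undamaged
print(classify(np.array([1.9, 2.1])))    # damaged
```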

The damage state of a system can be described as a five-step process that answers the following questions:

  • Is there damage in the system (existence)?;
  • Where is the damage in the system (location)?;
  • What kind of damage is present (type)?;
  • How severe is the damage (extent)?; and 
  • How much useful life remains (prognosis)? 

Answers to these questions in the order presented represent increasing knowledge of the damage state. When applied in an unsupervised learning mode, statistical models are typically used to answer questions regarding the existence and location of damage.  When applied in a supervised learning mode and coupled with analytical models, the statistical procedures can be used to better determine the type and extent of damage.  The statistical models are also used to minimize false indications of damage.  False indications of damage fall into two categories: (1) false-positive damage indication (indication of damage when none is present), and (2) false-negative damage indication (no indication of damage when damage is present). Errors of the first type are undesirable, as they will cause unnecessary downtime and consequent loss of revenue as well as loss of confidence in the monitoring system. More importantly, there are clear safety issues if misclassifications of the second type occur. Many pattern recognition algorithms allow one to weight one type of error above the other; this weighting may be one of the factors decided at the operational evaluation stage.
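An unsupervised (novelty detection) sketch along these lines: baseline features come from the undamaged condition only, and a Mahalanobis-distance threshold sets the false-positive rate. All values are synthetic and the threshold is illustrative:

```python
import numpy as np

# Novelty detection sketch: learn the baseline feature distribution from
# the undamaged structure only, then flag feature vectors whose squared
# Mahalanobis distance exceeds a threshold. Raising the threshold trades
# false positives for false negatives.
rng = np.random.default_rng(5)
baseline = rng.standard_normal((500, 2))       # undamaged-condition features
mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def mahalanobis2(x):
    d = x - mu
    return float(d @ cov_inv @ d)

# ~99.9% quantile of chi-square(2): ~0.1% false-positive rate under a
# Gaussian baseline (an assumption of this sketch).
threshold = 13.8

normal_case = np.array([0.5, -0.3])            # inside the baseline cloud
damaged_case = np.array([6.0, 6.0])            # far outside it

print(mahalanobis2(normal_case) > threshold)   # False
print(mahalanobis2(damaged_case) > threshold)  # True
```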


Operated by Los Alamos National Security, LLC for the U.S. Department of Energy's NNSA
© Copyright 2008-09 Los Alamos National Security, LLC. All rights reserved.