National Science Foundation Workshop on

SIGNAL PROCESSING FOR MANUFACTURING AND MACHINE MONITORING

Draft 1.2 (11/15/96) of Final Report

Chairs: Les Atlas, University of Washington and Douglas Jones, University of Illinois

Alexandria, VA, March 14-15, 1996

Sponsored by the NSF Microelectronics and Information Processing Systems Division (Computer and Information Sciences and Engineering Directorate)


[Note: This is only an in-progress draft of a final report. The final report will be ready on or about April 3, 1997. Any opinions, findings, conclusions, and recommendations expressed in this report are those of the workshop Co-Chairs and do not necessarily reflect the views of the National Science Foundation.]

1.0 Executive Summary

1.1 The Meeting

This report presents the findings of a workshop called to discuss potential advanced signal processing applications in the areas of manufacturing and machine monitoring. The current design of manufacturing and monitoring processes does not necessarily exploit the state of the art in digital signal processing. In fact, it can be argued that little research has actually been done within the signal processing community on the needs and nature of these types of problems. The workshop was called to explore the potential of, and strategic directions for supporting, this type of signal processing research. It must be emphasized that this was an initial meeting, and the participants feel that more work is needed in individual areas to further refine research goals.

The meeting brought together 37 researchers from industry and academe to address these questions.

1.2 Findings and Recommendations

1.2.1 Short term (3 years or less) market/industry-driven research thrusts.

Findings: The forms of signal processing and classification commonly used by industry are based on older technology that is often far from the state-of-the-art. Also, many academic studies have shown success only on the data available to them, which are typically simplified versions of actual industrial data. Thus, industry often sees too large a leap between its current and future needs and the latest signal processing research results.

Recommendations: Highest priority for projects which can provide the academic and the industrial research community with open and easy access to non-proprietary manufacturing or machine monitoring problems or data bases. High priority for projects which identify near-term market needs and can, as verified by an industrial partner (but without any need to share proprietary information), show a company's improved competitiveness via substitution of more modern signal processing techniques.

1.2.2 Long-term (5-10 years), high-risk, potential high payoff research directions.

Findings: The most advanced current signal processing approaches seem mostly devoid of the physics and mechanics of manufacturing processes and machinery function. Also, the state-of-the-art in mechanical models is often devoid of the principles and capabilities of advanced digital signal processing techniques.

Recommendations: Cross-disciplinary studies between digital signal processing and mechanical and/or physical research, where basic questions of physical modeling are linked to fundamental assumptions, models and algorithms in digital signal processing. Also, cross-disciplinary studies between the above areas and the theory and practice of user interfaces, where issues involving the enhancement of skills of machine operators are well integrated into proposed solutions.

1.2.3 Efficacious methods for better industry/university interaction.

Findings: As mentioned in the above section 1.2.1, the distance between a typical presentation of a research result and the needs (current and future) of industry is too large. Quite often, the cost of testing new research ideas is too great for even the largest companies to justify.

Recommendations: Research support should be prioritized for projects which produce easy-to-use software implementing their proposed new methods. This software could be made easily accessible via the world-wide web, so that industry applications engineers could quickly and easily test out new schemes on their problems. We also note that companies which are OEM suppliers to end users of manufacturing or machine monitoring systems may be more willing to invest in development costs. Also see section 1.2.1 above and sections 1.2.4 and 1.2.5 below.

1.2.4 Education changes, whether for higher education students or for outreach to or from industry.

Findings: While there is a problem with industry's less-than-total understanding of advances in signal processing, there is, we believe, an even greater problem of signal processing researchers' lack of awareness of significant problems in industry. The basic fundamentals of signal processing are already covered well in most universities, but the connection of these concepts to examples in manufacturing and machine monitoring is inadequate. Unfortunately, industry's need to keep its technology and related problems proprietary presents a roadblock to the release of specific information.

Recommendations: Programs where students (as, say, interns) or faculty (as, perhaps, summer visitors) are employed by industry should be encouraged, as should projects which integrate these programs with directions for theoretical or applied research. These programs should be consistent with the requirements of non-disclosure agreements. Also see recommendation 1.2.3 above.

1.2.5 Techniques to ease and encourage technology transfer.

Findings: As stated in sections 1.2.1 and 1.2.3 above, the usual presentation of research is done at a level of abstraction which is too far from potential industry application. This makes it difficult for industry to envision the possible application and get excited about the latest theoretical result. The typical research presentation also does not indicate the possible generality of new developments, thus putting industry's ability to amortize development costs at further risk. Lastly, for related reasons, initial contacts between researchers and end users have been less fruitful than those with OEM suppliers.

Recommendations: Priority should be given to research projects which are either closely coupled with industry from the start or will, as a final result, provide software systems which are accessible to and understandable by industry application engineers, especially those that beneficially link with OEM suppliers. Priority should also be given to projects which will demonstrably solve general classes of problems, as opposed to single problems.

1.2.6 Beneficial cross-disciplinary collaborations. Also, what are the roadblocks which have prevented past cross-disciplinary interaction and how can they be removed?

Findings: Multi-disciplinary teams are essential to the integration of novel signal processing techniques into effective long-term practical advances in mechanical, physical, and material systems. However, the usual specialized peer research communities predominantly look for depth in only one of the disciplines, while not considering impact upon other disciplines.

Recommendations: Cross-disciplinary projects should receive higher priority, with a minimal requirement of at least some representation of disciplines other than signal processing. Workshops and conferences which are multi-disciplinary, with equal stature for all disciplines, should be encouraged.

1.2.7 Ways to improve the research infrastructure.

Findings: While advances in technology for typical mass-market products are often highly visible and well publicized, within-industry processes receive far less public attention. Thus, research in these process- or maintenance-oriented areas is often seen as less attractive than research on mass-market products.

Recommendations: Encourage research into process-related areas by publicizing and supporting human-interface, computation, and other conventionally product-related applications within large-scale process applications. Consider the education and advancement of machine operators to be an important adjunct of research advances.

1.2.8 Enhancements to technical exchange, whether through workshops, special publications, and/or web sites. For example, should a dedicated Signal Processing for Manufacturing and Machine Monitoring home page be established and who should (or will) do it?

Findings: The Internet is particularly appropriate for sharing data bases, algorithms, and actual programs. However, the best advances will require researcher interaction with the problem, not with just data representing the problem.

Recommendations: The use of the Internet has already begun with the MIMOSA open systems standardization effort (see Appendix B). Priority should be given to projects which will publish their results, data bases, algorithms, and actual programs (say, via the Java programming language) on the Internet. A follow-on workshop should be funded which involves researchers working with actual machine monitoring systems or with manufacturing systems, with an appropriate interdisciplinary balance. This follow-on workshop should involve more than just data collection--there should be interaction and a potential for better understanding of the physical system(s).

2.0 Introduction

Rapid advances in certain signal processing techniques, along with cost-effective digital technologies for their implementation, are poised to address important manufacturing and machine monitoring issues for which no solution currently exists. These advances include both improvements on existing methods such as spectral analysis and cyclostationary signal analysis, and emerging new techniques. Among these new technologies are advances in wavelet and time-frequency signal analysis. By virtue of their ability to characterize both transient phenomena and persistent harmonic structure, they appear well-matched to the signals associated with rotating machinery. Other recent developments, such as higher-order spectral theory, could also possibly contribute in these applications. Also, higher-level techniques such as neural networks and statistical pattern recognition and classification provide means for combining lower-level processing into detection and categorization of faults. In fact, preliminary research by several groups in applying the techniques mentioned above to a variety of related problems has demonstrated improvements over traditional approaches. These methods, with appropriately directed research, may offer solutions for the critical technology needs in manufacturing and machine monitoring.

Unfortunately, the academic signal processing work to date is predominantly oriented toward other applications with traditionally stronger ties to the signal processing community, or toward theoretical issues which are mostly not germane to the problems of manufacturing and machine monitoring. In fact, most signal processing researchers are entirely unaware of the importance of manufacturing and machine monitoring and of the potential relevance of their work to technologies regarded as critical to national competitiveness.

The purpose of this workshop was to bring the academic researchers and the industrial users together to identify the pertinent signal processing technologies and the critical industrial needs, and to disseminate this information to the entire community. The 37 workshop participants had been selected as industry and academic leaders in these areas.

Our goal in the workshop and this report was to begin the exploration of these issues, identifying areas of need and potential research questions. From the outset it was clear that several workshops would be required to carefully explore the problem. This report will outline our initial questions, the participants, the evolution of our thinking during the workshop, our conclusions and recommendations for future research and future workshops.

2.1 Objective of the Meeting

We have outlined some of the general questions that the meeting was meant to address. Here we present the questions that were raised in the planning stage of the meeting in order to give the readers our starting point and to indicate what came out of the meeting that was not conceived in the planning stage.

2.2 The Participants

A call was sent out to a number of organizations and individuals to participate in the workshop. One of the guidelines was to have significant industry expertise represented. A second guideline was to try to ensure balance between participants from industry and from academe. The names and affiliations of the 37 participants are given in Appendix A.

2.3 Planned structure of the Meeting

The workshop was held at the Morrison House, Alexandria, VA, on March 14-15, 1996. The participants were asked to come with overheads that introduced themselves and their application and/or research expertise.


The schedule for the workshop was:

Thursday, March 14

08:45AM - 09:00AM

Welcome introductions by Bernard Chern, NSF MIPS Division Director.

09:00AM - 10:15AM

Participant introductions. Two-minute self-introductions by each invited panel member with a brief listing of background and interests.

10:30AM - 11:45AM

Issues in manufacturing and machine monitoring. Goal: Identify important applications, and the key problems in each.

01:15PM - 02:30PM

Current methods. Goal: Identify techniques currently used in industry, their strengths and weaknesses, and open or unsolved problems.

02:45PM - 04:00PM

Sensor and data types. Goal: List the types of sensors and data currently available, advantages and limitations, and the potential for exploiting new types of sensors and data.

04:15PM - 05:30PM

Modeling. Goal: Discuss existing models for machine fault, tool degradation, and/or other operational characteristics and their strengths and weaknesses, and evaluate the need for more or better modeling.


Friday, March 15

09:00AM - 10:30AM: Small group discussions (On both topics below)

10:45AM - 11:45AM: Full group discussion (On both topics below)

Advanced signal processing techniques. Goal: Review the body of new (or old) methods in light of previous discussions, and speculate on their short-term and long-term potential and pitfalls for these applications.

Higher-level data processing. Goal: Discuss current techniques and application needs for sensor data interpretation, fusion, and pattern classification and analysis, and machine control and user interface issues. Also discuss their short-term and long-term potential and pitfalls.

01:15PM - 02:00PM: Small group discussions (On all topics below)

02:00PM - 03:00PM: Full Group Discussion (On all topics below)

Panel recommendations. Summarize with priorities and recommendations for researchers and funding agencies on these subjects:

  • Short term (3 years or less) market/industry-driven research thrusts.
  • Long-term (5-10 years), high-risk, potential high payoff research directions.
  • Efficacious methods for better Industry/University interaction.
  • Education changes, whether for higher education students or for outreach to or from industry.
  • Techniques to ease and encourage technology transfer.
  • Beneficial cross-disciplinary collaborations. Also, what are the roadblocks which have prevented past cross-disciplinary interaction and how can they be removed?
  • Ways to improve the research infrastructure.
  • Enhancements to technical exchange, whether through workshops, special publications, and/or web sites. For example, should a dedicated Signal Processing for Manufacturing and Machine Monitoring home page be established and who should (or will) do it?

2.4 Actual structure of the meeting

Although we did not stick to the exact timetable listed, the operation of the workshop is captured in this agenda. The presentations of initial viewpoints took much longer than scheduled and generated much discussion. Based on this discussion, we divided the topics of concern into five small discussion groups. Based upon participants' suggestions, the composition of these groups was changed between the Thursday and the Friday discussions.

    3.0 Industrial and Government Needs

    Manufacturing and the operation and maintenance of machinery by both the commercial sector and government represent a huge proportion of the gross domestic product (GDP) of the United States. Technical advancements which enhance the productivity, efficiency, or quality of these operations can potentially provide enormous cost savings and enhance industrial competitiveness. This chapter describes some applications in manufacturing and in machine monitoring, lists some sensors and signal processing analyses currently in use, identifies many technical needs which are not adequately met with current technologies, and lists some practical and cultural constraints arising in applications which will impact the ultimate industrial success of any new technology. The surveys here are by no means intended to be exhaustive or even representative, but are rather to give an introduction to some of the needs of industry and government as well as an indication of the vast range of issues which need to be addressed. Researchers are urged not to consider this report as a comprehensive shopping-list of industrial and government needs, but as an invitation to seriously explore these important application areas further, preferably in collaboration with industrial partners.

    3.1 Applications in manufacturing

    Taken as a whole, the manufacturing industry represents a large fraction of the GDP of the United States. An incredible variety of materials and products are manufactured, from chemicals, cloth, paper, metal, silicon wafers and other materials through extraordinarily complex products such as airplanes, automobiles, and computers. At a detailed level, the range of manufacturing methods and specific issues probably approaches the variety of the manufactured products themselves, precluding an exhaustive survey of either the industry or its needs; here we describe several applications within the purview of the workshop attendees.

    In many major manufacturing sectors, such as automobile and airplane manufacturing, metal removal is a major component of the manufacturing process, and sensors and signal processing for the enhancement of metal removal processes have been an important subject of research for some time. Many procedures are used for metal removal, including drilling, reaming, grinding, milling, and polishing. Most of the machine tools for the various metal removing processes involve a rotating spindle which applies the tool face to the workpiece; however, the process of removal itself generally consists of a large number of transient, often random chipping or flaking events which each detach a small fragment of metal from the workpiece. Spectral, statistical, and transient analyses have all been investigated for metal machining assessment.

    Manufacturing failures can take several forms, from slightly exceeding the tolerances specified for the product (such as from using an excessively worn drill bit) to catastrophic failure of a tool which destroys an extremely expensive workpiece or causes an unscheduled shut-down of an entire assembly line. Depending on the type of failure, the product, and the factory configuration, the cost of a failure may be trivial or huge. The value and needs in process monitoring vary accordingly. However, several trends common to most applications can be identified. First, the ultimate concern in the manufacturing process is the quality of the product; the condition of the machine tool itself is of interest primarily insofar as it relates to the final product. For example, the use of a worn drill-bit is often perfectly acceptable if the holes drilled with it meet manufacturing tolerances.

    The modern machine tool is often a completely programmable, extremely flexible machine able to apply many different tools at many different rates, orientations, and pressures to many different materials. Many significant manufacturing industries, such as airplane manufacture, involve small lots, frequent changes in operating conditions, and great adaptability in the machine tools. Similarly, useful process monitoring methods require broad applicability, adaptability, and little or no training or adjustment for new conditions. Other industries produce much greater lots, with each manufacturing step completely specialized and characterized; considerably less adaptability and flexibility is required in process monitoring methods, and a more substantial initial characterization, training, and optimization of the methods may be acceptable. The trend in manufacturing, however, is toward "agile" manufacturing and smaller lots, so the demands on process monitoring are generally becoming greater.

    Other manufacturing processes involving metal are stamping processes, in which the dynamic measurement of pressures at many locations has been studied, and the manufacture of continuous sheets of metal, with on-line measurement of these qualities and closed-loop control to maintain them within desired tolerances. The manufacture of paper also involves the continuous, rapid formation of large sheets of material of uniform quality and thickness. Characterization of both the materials and the bonds in products assembled from them is a general requirement for most classes of materials and products.

    Semiconductor manufacture represents a major industry with considerably different manufacturing processes from those mentioned above. While many of the processes are chemical rather than mechanical, measurement plays an important role, and signal processing can enhance such measurements.

Quality evaluation and control is an almost universal concern across all manufacturing sectors. Statistical methods play an important role, as do both automated and human inspection. Many sensing modalities, including electrical, physical, optical, X-ray, acoustical, ultrasound, and chemical measurements, are applied in inspection procedures; many of these measurements already contain, or could be enhanced by, signal conditioning and processing.

    3.2 Applications in machine monitoring

    The repair and maintenance of machinery represents an annual cost of many billions of dollars to U.S. consumers, industry, and government. While for inexpensive and non-critical machines (e.g., lawnmowers) monitoring is not cost-effective, for many critical applications machinery condition assessment has the potential to save billions of dollars while dramatically increasing safety and reliability. Examples include power generation turbines and critical equipment in nuclear reactors or on large oil rigs, where unscheduled failure can result in lost revenue approaching a million dollars per day. Failure during operations of aircraft engines or power train components in helicopters often results in loss of life along with the equipment. For example, between 1985 and 1992, the U.S. Navy lost 67 airframes and 84 lives due to material-related failures in helicopters. Failure during operations of critical machinery on a Navy capital ship during wartime could endanger the security of the nation. Methods for machinery condition assessment which provide warning in time to cease operations or schedule maintenance offer immense value in such applications, and monitoring is already routinely used or eagerly sought.

    A brief overview of Navy maintenance concerns, while representing only a small fraction of the total demand and applications, will serve to highlight some of the key issues and give an indication of the magnitude of the need. The U.S. Navy alone spends billions of dollars per year on maintenance. Studies have shown that 21% of the total life-cycle cost of Navy capital equipment is for maintenance. Corrosion is the single largest maintenance problem, representing about 30% of the total. Methods for corrosion testing include physical measurement, chemical and particulate analysis, and ultrasound. Rotating or reciprocating machinery such as engines, motors, and pumps composes much of the remainder and together represents about another 30% of the total maintenance costs.

The maintenance of turbine engines alone costs the Navy hundreds of millions of dollars per year. The cost and security risk of unscheduled failure is enormous, so prophylactic maintenance is routinely practiced. At great expense in both dollars and down-time, critical components are routinely replaced long before their mean time to failure to reduce the risk of failure during operations. Unfortunately, it is suspected that the majority of failures are due to problems introduced by faulty maintenance; that is, the routine maintenance itself is the dominant cause of failure! For both cost reduction and improved reliability, the Navy would prefer to adopt an "if it ain't broke, don't fix it" philosophy. But this can be done without endangering Navy operations only if reliable machinery condition assessment and early detection of precursors to equipment failure are available.

    Potential commercial applications of machinery condition assessment are even more widespread and varied. For example, one-third to one-half of the life cycle cost of an elevator is for service; prognostic condition assessment which allowed most maintenance to be performed during regularly scheduled service rather than on an emergency basis after failure would greatly reduce the total cost. The needs in the electric power generation industry are similar to those for monitoring the main engines on Navy ships.

    Ever more stringent demands on automobile performance, gas mileage and emissions control have already required the introduction of various sensors and microprocessor-based control systems in most cars, and more sophisticated sensors, signal processing, and control will be essential to achieving future improvements.

    Most of the applications discussed here involve either rotating or reciprocating machinery. It thus appears quite possible that a promising method could potentially solve a wide variety of machine monitoring problems. Workshop participants pointed out, however, that the requirements, signals, and data rates can be very different for rotating and reciprocating machinery, and that different methods may be required.

    3.3 Current technologies

    By far the most prominent method for manufacturing and machine monitoring is spectral, or "FFT" (fast Fourier transform) analysis. Cepstral variants are often employed to increase robustness or to reduce the variability of the estimates. Typically, a number of spectral lines associated with harmonics of the various rotating frequencies of the machinery are identified and their levels are compared to preselected thresholds. Spectral analysis has the advantages of a natural and direct association with the characteristics of rotating machinery, relatively simple interpretation, a certain robustness to noise, propagation path, and other sources of distortion, backed by a large body of theory and experience.
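To make this approach concrete, the following sketch (in Python with numpy and scipy, using entirely hypothetical sample-rate, shaft-rate, and threshold values) estimates an averaged spectrum and compares the levels of the first few shaft harmonics against preselected alarm thresholds. It is only an illustration of the basic thresholding scheme described above, not a recommended implementation.

    # Minimal sketch of FFT-based monitoring: estimate the spectrum, read the
    # levels at the shaft-rate harmonics, and compare them to fixed thresholds.
    # The signal, shaft rate, and thresholds below are hypothetical placeholders.
    import numpy as np
    from scipy.signal import welch

    fs = 25600.0                       # sample rate in Hz (assumed)
    shaft_hz = 29.5                    # nominal shaft rotation rate in Hz (assumed)
    x = np.random.randn(int(10 * fs))  # stand-in for a measured vibration record

    # Averaged periodogram (Welch) to reduce the variance of the estimate
    f, pxx = welch(x, fs=fs, nperseg=8192)

    def harmonic_level(order):
        """Spectral level near the given multiple of the shaft rate."""
        idx = int(np.argmin(np.abs(f - order * shaft_hz)))
        lo, hi = max(idx - 3, 0), idx + 4   # small band around the nominal line
        return pxx[lo:hi].max()

    thresholds = {1: 1e-2, 2: 5e-3, 3: 5e-3}   # preselected alarm levels (assumed)
    for order, limit in thresholds.items():
        level = harmonic_level(order)
        if level > limit:
            print(f"harmonic {order}: level {level:.3g} exceeds threshold {limit:.3g}")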

    Many other signal analysis methods are applied in selected manufacturing and machine monitoring applications. These include auto-regressive (AR) models, time-domain signal envelope analysis, and other time-domain methods such as kurtosis or root-mean-square (RMS) signal power. While these methods are occasionally used, they are usually considered to be less robust and less reliable than FFT analysis, which in large part accounts for their less frequent use. Generally, thresholds are set via experience and experimentation on the various measurements and a fault or warning is declared when a threshold is exceeded. Multiple measurements, thresholds, or parameters are often combined or "fused" using linear discriminants or an expert system, and neural networks and hidden Markov models (HMMs) have recently been introduced. "Trending," in which the evolution of parameters over time is tracked, is also commonly used; for example, the rate of increase of the magnitude of a spectral line may be estimated or even used to predict the time to failure.
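The simpler time-domain indicators mentioned above are easy to state precisely. The sketch below (Python, with placeholder data) computes the RMS level and kurtosis of each record and fits a linear trend to the RMS history, a crude version of the "trending" referred to here; all values are illustrative.

    # RMS and kurtosis per vibration record, plus a linear trend over successive
    # records as a simple form of "trending". Data are random placeholders.
    import numpy as np
    from scipy.stats import kurtosis

    def indicators(record):
        rms = np.sqrt(np.mean(record ** 2))
        k = kurtosis(record, fisher=False)   # equals 3.0 for a Gaussian signal
        return rms, k

    # pretend we have one vibration record per hour for the last 24 hours
    records = [np.random.randn(8192) for _ in range(24)]
    rms_history = np.array([indicators(r)[0] for r in records])

    hours = np.arange(len(rms_history))
    slope, intercept = np.polyfit(hours, rms_history, 1)
    print(f"RMS trend: {slope:+.4f} per hour")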

In many manufacturing centers, particularly in the semiconductor industry and in quality control circles, statistical process control is the dominant methodology. Statistical process control is based on measuring parameters, such as the physical dimensions, of manufactured products or test devices, and performing statistical analyses of these parameters to identify problems and to minimize the deviation from the desired values. The adoption of statistical process control has led to revolutionary improvements in production quality in certain industries and will remain a mainstay in many manufacturing industries for the foreseeable future. However, the open-loop, after-the-fact nature of statistical process control precludes immediate response to incipient problems. Furthermore, the opportunity cost of making test devices may be substantial. For example, the value of an 8 inch wafer from a modern semiconductor line is generally between ten and twenty thousand dollars, so each test wafer for statistical process control represents a substantial financial loss. Alternate methodologies which reduce this cost or enhance the effectiveness of statistical process control could be of great value.
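A minimal example of the statistical process control idea, under the assumption of a single normally distributed measured dimension, is sketched below in Python; the reference data and control limits are hypothetical.

    # Shewhart-style control chart: estimate the mean and standard deviation of a
    # measured dimension from an in-control reference period, then flag later
    # samples falling outside the three-sigma control limits. Values are made up.
    import numpy as np

    reference = np.random.normal(loc=10.00, scale=0.02, size=200)  # in-control data
    mu, sigma = reference.mean(), reference.std(ddof=1)
    ucl, lcl = mu + 3 * sigma, mu - 3 * sigma                      # control limits

    new_measurements = np.array([10.01, 9.99, 10.07, 10.02])
    for i, m in enumerate(new_measurements):
        if not (lcl <= m <= ucl):
            print(f"sample {i}: {m:.3f} outside control limits ({lcl:.3f}, {ucl:.3f})")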

Many types of sensors which measure a great variety of physical phenomena are used for both manufacturing and machine monitoring. Mechanical characteristics such as vibration, torque, displacement, shaft velocity, strain, and pressure are measured by many different types of sensors, ranging from accelerometers to strain gauges to non-contact displacement pickups using eddy currents. Electrical characteristics such as motor current, capacitance, and RF emissions are often used. Acoustic emissions (AE) play an increasingly important role in manufacturing applications and are under investigation for certain machine monitoring tasks. Visual, infrared, ultrasonic, and X-ray inspection for non-destructive evaluation (NDE) play major roles in certain applications. In spite of this vast array of sensor technologies, there appears to be a constant need for new, more, and better sensors. Many types of sensors have significant limitations, such as restricted bandwidth.

New signal processing technologies may overcome some of these drawbacks and address some of the needs discussed in the following section. However, spectral analysis is unlikely to be displaced as a key method in many or even most applications; new methods, if successful, will enhance, complement, or extend existing methods and provide additional information to improve performance. Some workshop participants believe that perhaps the most important task is to identify the applications in which new methods can offer improvements over existing methods and to focus research in these areas.

    3.4 Technical needs

    There are many aspects of monitoring technology in which significant improvements are needed. The discussion which follows presents some of these areas which new signal processing technology should be able to address.

    3.4.1 Prognostics

Workshop participants from industry made it clear that the value of monitoring lies primarily in fault prediction. An after-the-fact detection of a serious failure is generally of little use; the value of monitoring comes in predicting failure in time to prevent it and in reliable estimation of the remaining time to failure. For example, in the automotive manufacturing industry, it is a common practice to change all of the tool faces in all of the machines at the end of a shift. The only monitoring question of real interest in this context is whether a tool will fail before the end of the shift and thus cause an extremely expensive unscheduled shut-down; the exact amount of wear on a drill bit is of little interest unless it presages a catastrophic failure. Research efforts should thus be more focussed on prognostics and on early detection of fault precursors. Reliable estimation of time-to-failure is one of the greatest unmet needs in manufacturing and machine monitoring, and one of the weakest points of existing methods.
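As a very rough illustration of what time-to-failure estimation involves, the sketch below fits a linear trend to a degradation indicator and extrapolates it to an assumed failure threshold. Real degradation is rarely this well behaved, and the indicator, threshold, and linear model are all hypothetical.

    # Illustrative time-to-failure estimate: fit a linear trend to a degradation
    # indicator (e.g. the level of a fault-related spectral line) and extrapolate
    # to an assumed failure threshold.
    import numpy as np

    hours = np.arange(0.0, 40.0, 2.0)
    indicator = 1.0 + 0.05 * hours + np.random.normal(scale=0.1, size=hours.size)
    failure_level = 4.0                      # assumed level at which failure occurs

    slope, intercept = np.polyfit(hours, indicator, 1)
    if slope > 0:
        t_fail = (failure_level - intercept) / slope
        remaining = t_fail - hours[-1]
        print(f"estimated time to failure: {remaining:.1f} hours from now")
    else:
        print("no upward trend detected; no time-to-failure estimate made")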

    Severity estimation of faults represents another significant need closely related to time-to-failure prediction. Most faults of interest are believed to begin with small precursor events and to stem from a progressive degradation of the tool or machine component. Thus the tracking of this degradation along with ongoing prediction of the time-to-failure is of great importance. As the signal characteristics from many types of degradations are not monotonic, continuous monitoring which tracks the history of the developing fault is often essential.

    3.4.2 Robust and adaptive methods

    A perennial need in manufacturing and machine monitoring is for more robust methods; i.e., methods which tolerate deviations from assumed or nominal signal characteristics. In general, the signal and noise environment in such applications is highly complex, non-Gaussian, and exhibits large variability or non-stationarity. The operating conditions may vary dramatically. Low false alarm rates are an absolute necessity for user acceptance of monitoring methods, which places an additional burden on the robustness of the methods.

    Adaptive methods which automatically adjust to changing conditions are a pressing need in most applications. Agile manufacturing, in which each machine produces small lots under different operating conditions, virtually precludes extensive training or optimization of monitoring methods for each individual product. This is a major weakness of current methods, and new ideas are needed to successfully apply monitoring in the ever-increasing proportion of manufacturing applications in which variable operation is becoming the norm.

    3.4.3 Analysis of transients

    Many events of import in monitoring produce partly or entirely transient signals, particularly in the early phases of the failure cycle. For example, an initial chip in a ball bearing will produce a transient signal whenever the damaged region rolls across a load-bearing surface. In manufacturing, the process of metal removal actually consists of a large number of individual transient events in which a single chip of metal is removed. More and better methods for transient analysis are thus needed. Needs include both detection and classification of both isolated and overlapping transients. As an example of a complex problem of interest, in certain metal removal operations, there are three main processes by which individual chips of metal are removed, two of which are acceptable and one of which is undesirable. A method which could determine the relative percentages of the three classes of transients associated with chip removal would be of significant value.
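A minimal transient detector of the kind such work might start from is sketched below (Python, synthetic data): a short-time amplitude envelope is compared against a slowly varying background level, and samples well above it are flagged. Classifying the detected transients, for example into the acceptable and undesirable chip-removal events mentioned above, would require a further feature-extraction and classification stage.

    # Envelope-based transient detection on a synthetic record. The threshold
    # factor and median-filter length are arbitrary assumptions.
    import numpy as np
    from scipy.signal import hilbert, medfilt

    fs = 50000.0
    x = np.random.randn(int(fs))                 # stand-in for an acoustic emission record
    x[20000:20040] += 8.0 * np.random.randn(40)  # inject a short synthetic transient

    envelope = np.abs(hilbert(x))                # amplitude envelope via the analytic signal
    background = medfilt(envelope, kernel_size=501)   # slowly varying background level
    threshold = 4.0 * np.median(background)      # detection threshold (assumed factor)

    events = np.flatnonzero(envelope > threshold)
    print(f"{events.size} samples flagged as transient activity")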

    3.4.4 Sensors

While many types of sensors already exist and new ones are constantly introduced, more and better sensors remain a continual need. Improvements in existing sensors, such as increased linearity, spectral flatness, and bandwidth in accelerometers, are greatly desired. New types of sensors are also needed; laser vibrometers, fiber optic sensors, micro-electro-mechanical systems (MEMS) sensors, and multiple integrated sensor chips were mentioned as examples of promising new sensor technologies. Given the tremendous demand for more, lower-cost sensors, it was suggested that MEMS may be a major component of future sensor technology. The ability to inexpensively incorporate powerful signal processing on the same chip with the sensor to create "intelligent sensors" was also cited.

    Along with a proliferation of sensors comes a greatly increased need to combine, or "fuse," data from multiple sensors for more effective monitoring. Needs range from the processing of multiple data streams from a simple array of identical sensors to data from sensors based on entirely different physical phenomena operating asynchronously at vastly different rates. These needs can be expected to grow as reduced cost allows the use of ever more and varied sensors in the future.

    Sensor placement greatly impacts the quality of the data and the performance of a monitoring system. As the options proliferate and suites of sensors become available, intuitive methods of placing sensors or experimental optimization may become inadequate. CAD or signal processing tools which help optimize sensor placement are needed. For example, a software package which would simulate the effects of sensor placement and aid the monitor designer in making the best trade-offs would be of great value.

    There are tremendous needs for signal conditioning, processing, and compensation of raw sensor data as well. The quality of sensor data can often be greatly improved by post-processing. For example, inverse filters could flatten the spectral response and extend the bandwidth of accelerometers, and certain types of nonlinearities could be corrected. Data from one or more existing sensors could be processed to create a "virtual sensor" output similar to that from a desired sensor too expensive or inconvenient to actually install. Self-calibration and self-testing of sensors would be extremely valuable in many applications, particularly because in some applications the sensors are less reliable than the process being monitored! Many of these problems appear well within the capability of existing signal processing methods and should yield significant benefits with only a modest effort.
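The inverse-filtering idea mentioned above can be sketched very simply. In the example below (Python), the recorded signal is divided by a regularized model of the sensor's frequency response in order to approximately flatten it; the sensor response model and regularization constant are hypothetical.

    # Regularized frequency-domain inverse filter to compensate an assumed sensor
    # roll-off. The sensor response model and regularization value are made up.
    import numpy as np

    fs = 25600.0
    x = np.random.randn(16384)                       # recorded (sensor-coloured) signal

    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    # assumed sensor magnitude response: flat, then rolling off above 5 kHz
    H = 1.0 / np.sqrt(1.0 + (freqs / 5000.0) ** 4)

    eps = 0.05                                       # regularization constant (assumed)
    X = np.fft.rfft(x)
    X_comp = X * H / (H ** 2 + eps)                  # regularized inverse (H is real here)
    x_comp = np.fft.irfft(X_comp, n=x.size)          # compensated time-domain signal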

    3.4.5 Modeling

    Workshop participants considered modeling to be one of the most important areas of technical needs. No participant felt that fully complete and accurate models (such as a detailed finite element analysis of the entire machine) are required. The goal of modeling should be to obtain better insight and intuitive understanding of the process being measured to guide, refine, and validate the development of better methods. To this end, the model should be of minimum useful complexity; that is, the simplest model which illuminates the key process features is desired. However, panelists noted that fault conditions generally require more sophisticated models than those describing normal operation. For example, under normal conditions the signal from a motor may be described by a simple aggregate model, but when a single winding fails, the model may require terms related to individual windings. While models for normal operation are often available, models adequate to characterize fault conditions are much less common.

    The panelists identified three major classes of models, all of which were considered important for manufacturing and machine monitoring. Physical models, or mathematical descriptions of a system derived from its physics, represent the first class of models. As mentioned above, the most useful physical models do not capture every detail of the system, but capture the essential features with minimum complexity. Phenomenological models identify certain key features of the data, such as spectral lines or modulations, with which to characterize the system, but with a much looser or even only a qualitative coupling between the actual physics of the process and the model features. Finally, "empirical," or data-driven models, are those based predominantly on features extracted from training data by mathematical or statistical methods without direct reference to the physical system. Hidden Markov models (HMMs), neural networks, Karhunen-Loeve expansions, and various methods from statistical pattern recognition are examples from this class of models.

    Perhaps the greatest differences of opinion among workshop participants centered on the topic of physical versus empirical modeling. Some participants felt that only models well grounded in physics could lead to significant progress. Proponents of empirical modeling argued that, while empirical modeling might not lead to the best possible solution, it can offer substantial improvements, it can be applied immediately in situations for which adequate physical models do not currently exist or are too expensive or complicated to obtain, and substantial success has been demonstrated in real applications. Perhaps grudgingly, almost all workshop participants ultimately agreed that both physical and empirical models have an important role to play, and that significant research is needed in both of these directions.

    The issue of model validation must be addressed whenever models are used. As models never fully capture all details of a system, the accuracy and adequacy of a model must be confirmed by validating it on real test data. Empirical models require test data for both model construction (training) and validation. One drawback of empirical models is that they require much more extensive and careful validation than physical models. Adequate training and testing of empirical models requires that both the training and validation data contain sufficient examples spanning the full range of machines, faults, or situations that could be encountered in practice; this makes it much more challenging to develop a robust empirical model. Physical models tend to involve a much smaller, more restricted set of parameters, and the validity of the model can be ascertained with a much smaller test set. Furthermore, the intrinsic confidence in a physical model is much higher since it is based on known principles of physics rather than obscure characteristics automatically identified in the data. Empirical models in general require much more rigorous, extensive, and expensive training and validation than physical models; however, there are situations in which the necessary quantity and quality of training and validation data is available or can be collected more easily than an adequate physical model can be developed.

    The need for both robust and adaptive models was highlighted. Robust models are relatively insensitive to deviations of the real system or the data from the model; clearly this is a requirement for success given the complexity, variability, and non-ideal nature of the systems involved. Adaptive models adjust their parameters to match the particular characteristics of each machine or operating condition, or track these changes over time. In particular, even easily measured and obviously influential parameters, such as rotation rate or temperature, are generally not incorporated in the current models. With modern numerically controlled machine tools, for example, substantial information on the system state is already available in the controller and should be exploited by the monitoring system. As machine tools become more powerful and flexible and industry moves towards smaller lots, the need for adaptive models increases.

    3.4.6 Feature extraction and reduction

    Both traditional methods such as FFT analysis and new methods such as time-frequency or wavelet analysis generate large numbers of data features. There is a pressing need to reduce the dimensionality of this data by extracting a limited number of features which best preserve the useful information. While researchers have applied certain methods such as linear discriminant analysis and neural networks to a limited extent, it is clear that much remains to be done in this arena. There are many reasons why feature reduction is essential, including reduction of computational cost, reduced requirements in terms of training time and data, noise reduction, increased robustness, and more rapid training of classifiers.
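One common starting point for such feature reduction is a principal-component (Karhunen-Loeve) projection. The sketch below (Python, with random placeholder features) keeps only the leading components of a large feature matrix and reports how much variance they retain; the number of retained components is an arbitrary choice here.

    # Reduce a large feature matrix (one row per record) to its leading principal
    # components. The feature matrix is a random placeholder.
    import numpy as np

    n_records, n_features = 200, 1024
    F = np.random.randn(n_records, n_features)       # placeholder feature matrix

    F_centered = F - F.mean(axis=0)
    U, s, Vt = np.linalg.svd(F_centered, full_matrices=False)   # economy-size SVD

    k = 10                                           # retained components (assumed)
    F_reduced = F_centered @ Vt[:k].T                # reduced features, shape (200, 10)
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    print(f"{k} components retain {100 * explained:.1f}% of the feature variance")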

    Adaptivity in feature identification and extraction is essential for a single method to tolerate changing conditions or multiple tasks. Modern flexible manufacturing operations involving small lots demand fast, adaptable feature detectors which work well in noisy environments. Rapid training of classifiers, or even real-time, on-line training and classification, is required in such applications.

    3.4.7 Specific improvements in methods

    Increasing numbers of researchers are investigating time-frequency and wavelet methods for manufacturing and machine monitoring applications. A need remains for high-resolution, positive time-frequency representations with a clear relationship between visualized time-frequency structure and the underlying physics of the machine process, and for faster algorithms for computing these representations.
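The spectrogram is the most elementary positive time-frequency representation and illustrates the kind of picture these methods produce. The sketch below (Python, synthetic signal) combines a persistent harmonic with a short transient, the mixture typical of rotating machinery; window parameters are illustrative only.

    # Spectrogram of a synthetic signal containing a steady harmonic plus a short
    # transient. Sample rate, frequencies, and window lengths are arbitrary.
    import numpy as np
    from scipy.signal import spectrogram

    fs = 25600.0
    t = np.arange(0, 1.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 120.0 * t)                # persistent harmonic component
    x[12000:12050] += 5.0 * np.random.randn(50)      # short transient event

    f, tt, Sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=448)
    # Sxx[i, j] is the energy density near frequency f[i] at time tt[j]
    i, j = np.unravel_index(int(np.argmax(Sxx)), Sxx.shape)
    print(f"peak energy near {f[i]:.0f} Hz at t = {tt[j]:.3f} s")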

Even though spectral analysis has been extensively studied and used in manufacturing and machine monitoring for many years, certain questions are not yet fully answered. In particular, most spectral estimation methods implicitly assume either a completely stochastic or a fully deterministic model of the signal; the question of whether spectral lines represent deterministic sinusoidal components or narrow stochastic spectral resonances has generally not been addressed, and issues related to spectral estimation of mixed deterministic and stochastic components have not received sufficient attention.

    Synchronization of processing to the rotational period of the machine has proven very beneficial in many applications. Often synchronization information is not available and must be inferred from the data. Better methods for accurate period estimation and alignment in periodic processes are needed. Samples are conventionally taken with a fixed temporal spacing; however, sampling with respect to fixed angular spacing (e.g., a fixed number of samples per rotation of a motor shaft) is often more appropriate and robust for machines with varying rotational periods. Methods for angular rate estimation and temporal-to-angular interpolation could be of great value yet have not been carefully studied.
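The angular-resampling idea is straightforward to state in code: given the times of once-per-revolution tachometer pulses, the shaft angle can be interpolated at every time sample and the signal re-interpolated onto a uniform grid in angle. The sketch below (Python) uses synthetic pulse times and data; a real implementation would need care with varying speed, pulse jitter, and anti-alias filtering.

    # Resample a time-sampled vibration signal onto a uniform shaft-angle grid
    # using once-per-revolution tachometer pulses. All signals are synthetic.
    import numpy as np

    fs = 25600.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    x = np.random.randn(t.size)                      # time-sampled vibration signal

    tach_times = np.cumsum(np.full(58, 1.0 / 29.5))  # pulse per rev at ~29.5 Hz shaft
    angle_at_pulse = 2 * np.pi * np.arange(tach_times.size)

    # keep only the samples bracketed by tachometer pulses
    mask = (t >= tach_times[0]) & (t <= tach_times[-1])
    theta = np.interp(t[mask], tach_times, angle_at_pulse)   # shaft angle vs. time

    samples_per_rev = 256                            # assumed angular sampling density
    theta_uniform = np.arange(theta[0], theta[-1], 2 * np.pi / samples_per_rev)
    x_angle = np.interp(theta_uniform, theta, x[mask])       # signal on an angle grid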

    Small signals in high background noise are characteristic of many manufacturing processes. Improved methods for extracting and processing signals in low signal-to-noise ratio situations are widely needed. While there exist theoretical bounds which cannot be breached, current methods are unlikely to have reached these limits in all manufacturing and machine monitoring problems.

    Current detection or classification methods generally provide only the most probable condition class assignment. Certainty assessment, or provision of an estimate of the reliability of that classification, would be of great additional benefit in many situations. In fact, in some situations an estimate of the certainty of a classification may even be more valuable than the classification itself. There appears to be relatively little work on certainty estimation in contexts meaningful to monitoring problems, yet there is a pressing need for it.
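As a toy illustration of certainty assessment, the sketch below (Python) implements a two-class Gaussian classifier that reports the posterior probability of its decision as well as the decision itself, so that low-confidence classifications can be flagged for review. The class statistics, priors, and confidence threshold are all hypothetical.

    # Two-class Gaussian classifier that returns a decision together with its
    # posterior probability. All statistics below are invented for illustration.
    import numpy as np

    mean_ok, mean_fault = np.array([1.0, 0.2]), np.array([2.5, 1.1])
    var = np.array([0.3, 0.2])                      # shared diagonal variances
    log_priors = np.log([0.95, 0.05])               # assumed prior probabilities

    def log_likelihood(x, mean):
        return -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var))

    x = np.array([2.1, 0.9])                        # feature vector from a new record
    log_post = np.array([log_likelihood(x, mean_ok),
                         log_likelihood(x, mean_fault)]) + log_priors
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    label = ["normal", "fault"][int(np.argmax(post))]
    print(f"decision: {label}, confidence {post.max():.2f}")
    if post.max() < 0.9:                            # assumed confidence threshold
        print("low-confidence classification; flag for further inspection")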

    3.4.8 Integration of monitoring and production

    The ultimate concern in manufacturing is the quality of the product, not of the machine. Monitoring methods which relate more directly to the production result are needed. Closer integration of the manufacturing process control and monitoring offers one means of achieving this goal. This could take the form of tightly coupling the monitoring in a closed-loop control system, or of using monitoring to assess the current machine or system state for the benefit of the controller or vice versa. Other potential benefits include the ability to adapt the machining process to work around a problem; as an example, the rate of drilling could be adjusted to the current state of the drill bit, thus using the maximum acceptable speed throughout the life of the bit or preventing a failure at the nominal rate. While relating monitoring more directly to the product quality appears to be a challenging problem, successful solutions would provide tremendous value.

    3.4.9 Technology assessment and technology transfer

    Standard spectral analysis has proven itself useful in many applications, and it will probably remain the single most important method in the foreseeable future. One of the greatest needs is determining in which monitoring problems new methods are likely to add value. This requires sufficient understanding of both the manufacturing process and the signal processing tools to make appropriate judgements. It must also be recognized that the problems themselves are moving targets and that reassessment of signal processing, sensor, and machine technologies should accompany new developments. For example, the next generation of high-speed manufacturing and machining tools are likely to present both new challenges and new opportunities for advanced monitoring methods, and research should be addressing these issues now.

    With the continual proliferation of manufacturing tools, machines, sensors, and signal processing methods, the problem of matching the tool to the problem becomes ever more challenging. A better high-level understanding of manufacturing and machine monitoring problems is needed to identify a few underlying classes of problems common to many specific applications. The abstraction of a few general classes of problems would allow signal processing research to be focussed much more effectively on promising methods which could be leveraged to solve a large body of problems.

    One of the greatest difficulties in technology transfer is the tremendous effort usually required by a manufacturing or monitoring engineer to try out new signal processing methods for their application. They must generally develop sufficient expertise in the new method in question to develop from scratch software implementing the method, or they must struggle to modify and use poorly documented research code from the algorithm developers. Given the magnitude of the task and the high risk that it will not succeed, monitoring engineers are understandably reluctant to make the effort. However, the internet may offer a new means of reducing these difficulties if innovative new ways of developing, distributing, and testing software become widespread. The panelists envisioned a future in which well-documented, easy-to-use software implementing any new methods published in journals would be easily accessible via the world-wide web, and that applications engineers could quickly and easily test out new schemes on their problems. While clearly a utopian vision, to a limited extent this is already happening, and researchers can greatly increase the speed and likelihood of technology transfer by developing and distributing well-documented, carefully designed software.

    3.5 Practical and cultural constraints

    A number of practical and cultural issues arise in industrial and government applications which may appear unrelated to the technical issues which researchers expect to address, but yet may have a greater impact on the success of new technologies in industry than the performance itself. Several workshop participants from industry argued strongly that these constraints represent more significant barriers to industrial success than the purely technological issues researchers are accustomed to addressing. They regard technology transfer, not technology development, as the most difficult hurdle. Researchers should be very aware of these issues and carefully plan their research in light of these constraints. The signal processing researcher should not underestimate the significance of these concerns to industry, and should recognize that the industrial success of any new method, however successful in the laboratory, will depend in large part on working within or circumventing these constraints.

    A thorough understanding of the application, the user's needs and goals, and the larger context in which the monitoring must operate is essential for maximizing the chance of successful transition to industry. The primary "customer" for the new monitoring methods is not the end user, but an original equipment manufacturer (OEM) in the business of making monitoring equipment. Many manufacturers will not consider using a monitoring technology, however promising, unless it can be purchased from a major OEM well established in the monitoring business. For this reason, most workshop participants from industry strongly advocated industry/OEM/academic teams as the most likely to achieve success, and recommended that funding agencies favor such teams.

    Sensors and monitoring generally receive a lower priority than machine functionality, factory layout, convenience, and ease of installation. A sensor or sensor wiring which limits tool motion or interferes with the operator's vision or movement, for example, is generally unacceptable. Operational constraints often force sensors to be placed in less-than-optimal locations or the use of less desirable but less intrusive modalities; thus, low signal-to-noise ratios and complicated propagation paths between source and sensor may be the rule rather than the exception. Successful new technologies will tolerate such limitations. Reliability and maintenance are extremely important; in many situations, sensors fail more frequently than the machines being monitored.

    The operational environment, either for manufacturing or machine condition assessment, is rarely well-controlled or consistent. Factories, machines, or operational usage are rarely designed with monitoring in mind; the environment may be extremely noisy or variable, and both operating conditions and individual machines may vary dramatically. The need for robust methods which provide acceptable performance in the face of this variability is overwhelming. Researchers must be extremely careful in their algorithm development and test to avoid creating "hot-house" methods which appear promising in initial tests but which fail in the industrial setting. Even test-stand data from real machines is often not adequate to train or test monitoring methods, as the environment is carefully controlled and consistent. Useful training and test data sets must contain sufficient examples spanning the entire range of operating conditions as well as data from many different machines, and these data sets must be used very carefully to obtain statistically valid results and to avoid overgeneralization.

    Cost is very much a constraint in most applications, although it is considerably more severe in some applications than others. The added value of monitoring is always weighed against the cost and inconvenience; in many situations, the incremental value of monitoring is too low to warrant any additional cost. For example, in many situations it is cheaper to discard inexpensive drill bits well before failure than to assess their condition. However, if the tool is very expensive, the cost of the down-time for frequent tool changes is high, or the loss due to damage to the workpiece is large, monitoring of tool condition can add significant value.

    Researchers should keep in mind that the primary concern of manufacturing is the quality of the finished product; monitoring only adds value insofar as it contributes to better, faster, or cheaper production. More emphasis should be placed on relating the monitoring activity to the product outcome, rather than to the tool status. Similarly in machine condition assessment, the primary interest is whether the machine can function properly until completion of its mission or until the next scheduled maintenance; prognostics and accurate prediction of time until failure are the key issues, not merely detection of the existence of faults.

    The immediate user of a monitoring technology is very often a person, rather than a machine or an automatic alarm system. In manufacturing applications, this person is very often a highly skilled machinist whose expertise in manufacturing and machining may equal or exceed that of the researcher in their theoretical specialty. It is important that researchers and new technology developers recognize this expertise, respect it, and seek to utilize and complement it, not to replace it. Skilled machinists, using their senses, knowledge, and expertise, perform much better than any existing monitoring systems in many applications. Monitoring systems are very often used not for automatic condition assessment but as a sensory extension or enhancement for the benefit of the machinist. The decision to adopt or reject a new monitoring technology often falls to the machinist, so successful research will give their needs and desires high priority.

To exemplify this point and to illustrate a successful case, we quote from The Economist [March 4, 1995, page 63]:

The company's legendary Toyota Production System (TPS) was the basis of the lean-production revolution (everything from just-in-time delivery, through total quality management to continuous improvement). The TPS was dubbed "the machine that changed the world." Now, under pressure not only from rivals who have copied its techniques, but also from the high yen (which has risen by 64% against the dollar since its low point in 1990), Toyota is pushing ahead again towards a system which, rather than replacing workers with machines, tries more clearly than ever to restrict the machines to doing only those things that make life easier for the workers.

    The RAV4 workers at Toyota City still work around a conventional assembly chain, with car bodies coming along on an overhead conveyor. But the line is sub-divided into five parts with buffer zones in between to make the work less stressful; three or four cars at a time enter a given sub-section of the line. Further, cars can come in only when the workers there are ready for them. Workers stand on their own little rubber conveyor belts that follow the car they are working on as it moves through their area. The only other signs of automation are simple rolling devices to move engines and gearboxes into position, so that they can be fitted in without stopping the line.

    No one is sweating, yet 428 finished cars roll off the line every day. Toyota does not give out precise productivity figures, but the man-hours per vehicle could be as low as ten--over twice as efficient as conventional American assembly plants. The production rate of 9,000 cars a month is more than four times the break-even point, which is so low because investment has been minimal. Rather than being a greenfield site, the plant is a revamped 36-year-old factory. Compared with a normal assembly line, automation has been cut back by 66%.

    To switch from machines to people looks (like) a strange strategy, particularly given Japan's astronomical labour costs. Barely two years ago, Toyota, alongside other Japanese car makers, was opening huge new car plants on Japan's southern island, Kyushu, that were the last word in automation. But the promised economies proved to be false ones. Toyota, for instance, found that, although automation reduced the number of line workers at its new factory, the number of maintenance personnel rose dramatically. Since they alone really understood the robots, there was little scope for "kaizen," or continuous improvement. On the new, less automated RAV4 line the number of defects has fallen to barely 12% of its previous level, and productivity has risen by one-fifth.

    The existence of "a man in the loop" presents both challenges and opportunities to the developer of new technologies. To aid the machinist in exploiting the additional information provided by new sensors or signal processing, the design of the user interface becomes critical. The quality and design of the user interface can be more important than the quality of the signal processing technology. On the opportunity side, the developer need not supply a detection algorithm or higher-level condition assessment; the superior talents of the human brain at information processing, pattern recognition, adaptability, and data fusion can be exploited! In other applications, however, the user may be very inexperienced (for example, a young sailor recently assigned the maintenance of a large number of different machines on a Navy ship), and a successful monitoring technology must provide considerably more guidance.

    Low false alarm rates are absolutely essential for industrial acceptance in most manufacturing and machine monitoring applications. Many studies, both in manufacturing and machine condition assessment, have found that monitoring devices with false alarm rates exceeding very modest levels are usually disabled or ignored by their users. Obtaining low false alarm rates in light of the many difficulties in monitoring applications presents a major challenge, but it is necessary for practical utility.

    From the industrial perspective, monitoring is a necessary evil to be minimized as much as possible. While methods which allow industry to avoid monitoring may appear less exciting than techniques which reach the factory floor, from the industry perspective a signal processing tool which allows them to identify the source of a problem and redesign the manufacturing process to prevent it without further monitoring is ideal!

    Certain cultural considerations may negatively impact the prospect of introducing new monitoring technology for manufacturing and machine condition assessment. Monitoring is often an unpleasant afterthought; manufacturing procedures or machines are designed without regard to the needs of monitoring. The development of a monitoring plan may be given a low priority and assigned to junior staff with little experience, support, or visibility. Within the signal processing community, relatively few researchers are even aware that applications in manufacturing and machine monitoring exist, let alone of their importance. The workshop participants believe that the commercial value of monitoring is under-appreciated, both by industrial management and signal processing researchers, and that greater visibility is needed. Quoting from a panelist's published view [John S. Mitchell, "Leadership or Management?-Progress or Status Quo?," Sound and Vibration, September 1995, page 5]:

    The condition assessment field is not led but managed. Peter Drucker stated that "leaders do the right things, managers do things right." [...] Should we expect vision, drive and success comparable to Microsoft?

    Condition assessment must move quickly toward broad, comprehensive, integrated information systems, able to readily exchange data within an enterprise wide information network. All data defining current and forecast condition including performance, maintenance and operating history, and a constantly updated life cycle cost must be accessible at a single workstation in one standard format. Headroom for growth is equally important. No one can anticipate what will be possible tomorrow except that it will be vastly better than today. If condition assessment systems don't make substantial progress, the entire technology risks being preempted by the process control industry! Their story is compelling. More important, the addition of machine condition monitoring to a process management system will be inexpensive and thereby irresistible to senior management with little or no appreciation of the requirements for effective condition assessment or even maintenance.

    A significant reason for underutilization of monitoring technologies is the lack of a common standard for data exchange, interface, and communication. Currently, each monitoring system is essentially a stand-alone, independent, proprietary device. A very important initiative, the Machinery Information Management Open Systems Alliance (MIMOSA), is attempting to define an open standard for data communication and interoperability for monitoring systems. Standards have proven vital to rapid growth and acceptance in industries ranging from foodstuffs to personal computers, and success of the MIMOSA endeavor should yield greatly increased growth, visibility, and new technology adoption in the monitoring arena. MIMOSA deserves the fervent support of everyone interested in this field.

    4.0 Potential Contributions from Signal Processing Research

    Section 3.4 of this report describes a large number of current technical needs in manufacturing and machine monitoring, most of which are largely signal processing issues. In this chapter, we review a number of signal processing technologies which appear particularly relevant for addressing some of these needs. Successful solution of manufacturing and machine monitoring problems will generally require considerably more research and extension of these tools, but the following represent some of the more promising directions for this application area.

    4.1 Spectral analysis

    Spectral analysis has long been the key signal processing technique in manufacturing and machine monitoring. FFT-based approaches are generally used; however, intensive signal processing research on spectral analysis has created a large body of alternative methods which might offer some advantages. While researchers have applied many of these approaches to monitoring applications, further research might yield useful results. In particular, issues related to the mixture of deterministic components, stochastic resonances, and noise which may be typical in monitoring applications have not yet been fully addressed.

    The speech recognition community struggles with challenges similar to those faced by the monitoring community in terms of the nonstationarity, non-Gaussianity, and high variability of signals and environments. The results of their intensive efforts to develop highly robust spectral estimates (usually based on cepstral approaches) for speech recognition might provide more robust spectral analysis for monitoring applications as well.
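
    For illustration only, the following sketch (in Python, using the NumPy and SciPy libraries) contrasts a raw FFT periodogram with Welch's averaged periodogram and computes a real-cepstrum feature of the kind used in speech recognition; the simulated gear-mesh signal, sampling rate, and sideband frequency are hypothetical stand-ins for real monitoring data.

        import numpy as np
        from scipy.signal import welch

        fs = 10000.0                                  # hypothetical sampling rate (Hz)
        t = np.arange(0, 1.0, 1.0 / fs)
        # Hypothetical machine signal: a gear-mesh tone, a small fault sideband, and noise
        x = np.sin(2 * np.pi * 1200 * t) + 0.1 * np.sin(2 * np.pi * 1260 * t)
        x += 0.5 * np.random.randn(t.size)

        # Classical single-record FFT periodogram (high variance)
        X = np.fft.rfft(x)
        periodogram = (np.abs(X) ** 2) / (fs * x.size)
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

        # Welch's method: averaged, windowed periodograms give a lower-variance estimate
        f_welch, Pxx = welch(x, fs=fs, nperseg=1024)

        # Real cepstrum, the basis of many robust spectral features in speech recognition
        log_spectrum = np.log(np.abs(np.fft.rfft(x)) + 1e-12)
        cepstrum = np.fft.irfft(log_spectrum)

    The averaged estimate trades frequency resolution for lower variance, which is often the more important property when spectra must be compared across noisy, variable operating conditions.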

    4.2 Time-frequency and wavelet analysis

    Most signals from manufacturing and machine monitoring applications are believed to consist of (at a minimum) periodic components, transient events, and stochastic elements, all of which may be worth monitoring. Traditional spectral analysis assumes stationarity of the signal, so it effectively ignores transient events or nonstationary signal characteristics. Signal processing tools which exploit all of these signal components could yield great improvements over traditional spectral analysis. Time-frequency analysis offers the possibility of doing so, and its application to monitoring problems has become an active research area.

    The retention of frequency structure allows time-frequency analysis to represent periodic elements, while time axis preservation supports the description of time-limited events such as transients and nonstationary evolution of the frequency structure. Research in the statistical characteristics of time-frequency representations provides the means for stochastic aspects of signals to be well characterized. On-going research in time-frequency analysis has yielded methods offering improved resolution, positivity, adaptivity, and statistically optimized performance, all of which may prove beneficial in monitoring.

    Even for machines which mostly operate in a steady-state mode, significant information may appear in the transient "ramp-up" or "ramp-down" intervals. For example, transient or nonstationary events occurring at the initial break-in or break-out of a drill-bit may be very informative. Unlike traditional spectral analysis, time-frequency methods can analyze signals during these times.
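
    As a minimal sketch of this capability (again in Python with NumPy/SciPy, and with an entirely hypothetical run-up signal), a short-time Fourier transform follows the changing frequency content through a ramp-up and localizes a brief transient in time, both of which a single long-term spectrum would smear together:

        import numpy as np
        from scipy.signal import chirp, spectrogram

        fs = 8000.0                                   # hypothetical sampling rate (Hz)
        t = np.arange(0, 2.0, 1.0 / fs)
        # Hypothetical "ramp-up": a shaft-related tone sweeping from 50 Hz to 400 Hz
        x = chirp(t, f0=50.0, t1=2.0, f1=400.0) + 0.2 * np.random.randn(t.size)
        # Add a short transient burst (e.g., an impact) at t = 1.0 s
        x[int(1.0 * fs):int(1.0 * fs) + 40] += 3.0

        # Spectrogram: a time-frequency energy distribution
        f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
        # Sxx[i, j] is the signal energy near frequency f[i] Hz at time tt[j] seconds;
        # the sweeping tone appears as a rising ridge and the burst as a vertical stripe.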

    Wavelet methods are a class of time-frequency decompositions based on a proportional-bandwidth (as opposed to a fixed-bandwidth) analysis. Adaptive time-frequency/wavelet methods, such as wavelet packets, offer an even greater variety of frequency/bandwidth trade-offs and the ability to adapt them to the signal. The promise of wavelet methods in monitoring applications stems from the same issues motivating time-frequency analysis. Insufficient research exists to date to speculate on the trade-offs among various time-frequency/wavelet methods, and the preferred method probably varies with the application. However, this general class of tools shows substantial promise for improvements over traditional spectral analysis for monitoring applications.
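
    The sketch below, which assumes the third-party PyWavelets (pywt) package is available, illustrates a proportional-bandwidth wavelet decomposition and a simple per-scale energy feature; the wavelet family, number of levels, and test signal are arbitrary choices made only for illustration.

        import numpy as np
        import pywt                                    # PyWavelets; assumed to be installed

        fs = 8000.0
        t = np.arange(0, 2.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 120 * t) + 0.2 * np.random.randn(t.size)
        x[8000:8040] += 3.0                            # a short, hypothetical transient event

        # Multi-level discrete wavelet decomposition (proportional-bandwidth analysis)
        coeffs = pywt.wavedec(x, 'db4', level=5)
        # coeffs[0] is the coarse approximation; coeffs[1:] are detail coefficients
        # from the coarsest to the finest scale.  Energy per scale is one simple,
        # commonly proposed condition-monitoring feature:
        scale_energy = [float(np.sum(c ** 2)) for c in coeffs]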

    4.3 Cyclostationary processing

    Most machines and many manufacturing processes involve either rotating or reciprocating components. In steady-state operation, these machines produce signals with periodically repeating, or cyclostationary, statistics. Cyclostationary signal processing could potentially play a major role in most monitoring applications.

    Cyclostationary signal analysis has been an increasingly active research area in signal processing for a number of years. A substantial body of theory and many signal processing methods already exist, and many research groups are actively developing new methods. The panel felt that cyclostationary processing represents one of the most promising new signal processing technologies for monitoring applications.
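
    As a rough, hedged illustration of the basic quantity involved (not a specific published method), the following sketch estimates the cyclic autocorrelation of a hypothetical amplitude-modulated vibration signal; a rotating-machinery signal modulated at the shaft rate exhibits a nonzero cyclic autocorrelation at that cycle frequency, whereas stationary background noise does not.

        import numpy as np

        def cyclic_autocorr(x, alpha, max_lag):
            """Estimate the cyclic autocorrelation R(alpha, tau) for lags 0..max_lag,
            with the cycle frequency alpha expressed in cycles per sample."""
            n = np.arange(x.size)
            rot = np.exp(-2j * np.pi * alpha * n)
            r = np.empty(max_lag + 1, dtype=complex)
            for tau in range(max_lag + 1):
                r[tau] = np.mean(x[tau:] * np.conj(x[:x.size - tau]) * rot[:x.size - tau])
            return r

        # Hypothetical amplitude-modulated vibration: a carrier modulated at the shaft
        # rate, which makes the signal cyclostationary at alpha = f_shaft / fs.
        fs, f_shaft, f_carrier = 10000.0, 25.0, 1000.0
        t = np.arange(0, 2.0, 1.0 / fs)
        x = (1 + 0.5 * np.cos(2 * np.pi * f_shaft * t)) * np.cos(2 * np.pi * f_carrier * t)
        x = x + 0.3 * np.random.randn(t.size)

        r_cyclic = cyclic_autocorr(x, alpha=f_shaft / fs, max_lag=200)   # nonzero for this signal
        r_ordinary = cyclic_autocorr(x, alpha=0.0, max_lag=200)          # the usual autocorrelation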

    4.4 Transients

    In many monitoring applications, the early precursors of failure are transient events. A metal removal process consists of a large number of individual transient metal chipping events. For these reasons, the workshop participants believe that methods for transient signal analysis could lead to breakthroughs in many applications. Transient analysis is a relatively undeveloped research area in signal processing; however, several methods exist, and it is a topic attracting increasing attention. Issues of particular relevance to manufacturing and machine monitoring include transient detection, classification, identification, estimation, and characterization.

    Transients pose several challenges beyond those faced by traditional methods. First, transient signals tend to be both short and small, so the signal-to-noise ratio (SNR) tends to be low. Second, as they are isolated, single events, signal averaging generally cannot be applied to increase the SNR. Third, transient events must be detected and located before further analysis is possible. Finally, in many situations many transient events occur simultaneously, and the isolation of single events may be difficult or impossible. In spite of these challenges, panelists believe that transient analysis represents a promising research direction, and the signal processing research community may be well positioned to begin addressing it.
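
    One very simple detection strategy, sketched below only to make the problem concrete, is a short-time energy detector with a noise-adaptive threshold; practical transient detection in low-SNR, multi-event environments would require considerably more sophistication, and all signal parameters here are hypothetical.

        import numpy as np

        def energy_detector(x, win=64, k=5.0):
            """Flag samples whose short-time energy exceeds k times the median
            short-time energy (the median serving as a crude noise-level estimate)."""
            energy = np.convolve(x ** 2, np.ones(win) / win, mode='same')
            threshold = k * np.median(energy)
            return energy > threshold, energy, threshold

        fs = 20000.0
        t = np.arange(0, 1.0, 1.0 / fs)
        x = 0.2 * np.random.randn(t.size)              # hypothetical background noise
        x[5000:5050] += 2.0 * np.hanning(50)           # a short, small transient event

        detections, energy, thr = energy_detector(x)
        transient_samples = np.flatnonzero(detections)  # sample indices flagged as transient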

    4.5 Inverse models and data preconditioning

    Sensor characteristics and placement often leave much to be desired. Signal processing methods for compensation of sensor or channel characteristics or data preconditioning could greatly enhance data quality. Although their application to monitoring problems has been little studied, many powerful methods for system identification, inverse modeling, equalization and data preconditioning have been developed for other applications. Research extending existing methods to sensors and machinery characteristics most relevant to monitoring applications might yield significant advances with relatively modest effort.

    Many linear modeling and preconditioning methods exist. Examples include many system identification methods, linear equalizers, and linear channel models. Various tools also exist for nonlinear processes, including Volterra series, polynomial network models, and neural networks. These models could also aid in sensor placement and optimization. Successful application to monitoring clearly requires a thorough understanding of the specific sensors and machines.
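
    As one small illustration of the linear case, the sketch below identifies an FIR model of a hypothetical sensor channel by least squares and then designs an FIR inverse (equalizer) in the same way; the channel coefficients, filter orders, and delay are arbitrary, and real sensors would demand attention to noise amplification and model-order selection.

        import numpy as np

        rng = np.random.default_rng(0)
        h_true = np.array([1.0, 0.5, -0.2, 0.1])       # hypothetical sensor-path impulse response

        # Excitation and the measured (noisy) sensor output
        u = rng.standard_normal(5000)
        y = np.convolve(u, h_true, mode='full')[:u.size] + 0.01 * rng.standard_normal(u.size)

        def delay_matrix(v, order):
            """Columns are v delayed by 0, 1, ..., order-1 samples (zero-padded)."""
            return np.column_stack(
                [np.concatenate([np.zeros(k), v[:v.size - k]]) for k in range(order)])

        # 1) System identification: fit an FIR model y ~ U @ h by least squares
        U = delay_matrix(u, 8)
        h_est, *_ = np.linalg.lstsq(U, y, rcond=None)

        # 2) Inverse modeling: fit an FIR equalizer g so that g applied to y
        #    reproduces a delayed copy of the original excitation u
        delay = 4
        Y = delay_matrix(y, 16)
        target = np.concatenate([np.zeros(delay), u[:u.size - delay]])
        g_est, *_ = np.linalg.lstsq(Y, target, rcond=None)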

    Various forms of data conversion could also enhance the usefulness of the data. For example, analog-to-digital converters typically sample at fixed time intervals; however, the processing of data from machines with variable rotational rates often performs much better when the samples are taken at fixed angular intervals. Interpolation from fixed temporal to even angular intervals would be of great benefit in some situations. The theory of interpolation is considered rather mature in the signal processing community, yet this particular application may not yet be fully addressed. Similarly, the synchronization of data processing to the rotational rate of a machine offers many known benefits, yet the extensive body of techniques for pitch estimation in speech and audio applications appears not to have been fully exploited for monitoring purposes.
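
    A minimal sketch of the time-to-angle resampling idea follows; it assumes the instantaneous shaft speed is known (in practice it would come from a tachometer or be estimated), and the speed profile, sampling rate, and order content are hypothetical.

        import numpy as np

        fs = 10000.0                                    # hypothetical time-sampling rate (Hz)
        t = np.arange(0, 2.0, 1.0 / fs)
        shaft_hz = 20.0 + 5.0 * t                       # shaft speed ramping from 20 to 30 Hz
        angle = np.cumsum(shaft_hz) / fs                # shaft angle in revolutions

        # Hypothetical vibration locked to the 7th shaft order, plus noise
        x = np.sin(2 * np.pi * 7 * angle) + 0.3 * np.random.randn(t.size)

        # Resample from even time steps to even angular increments
        samples_per_rev = 64
        angle_grid = np.arange(angle[0], angle[-1], 1.0 / samples_per_rev)
        x_angular = np.interp(angle_grid, angle, x)

        # An FFT of the angle-domain signal concentrates energy at integer shaft orders,
        # even though the time-domain frequency was varying throughout the ramp
        orders = np.fft.rfftfreq(x_angular.size, d=1.0 / samples_per_rev)
        order_spectrum = np.abs(np.fft.rfft(x_angular))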

    4.6 Statistical signal processing

    A very extensive body of theory and techniques exists in the broad area of statistical signal processing. Major branches of probable relevance to monitoring problems include detection, classification, and estimation theory, non-Gaussian signal processing, robust methods, and statistical pattern recognition. A large body of very powerful and general, yet very abstract and mathematically sophisticated, theory exists. It seems almost certain that this body of theory would yield new insight and significant improvements in monitoring applications; however, there is a substantial disconnect between the rather abstract formulation of this theory and the specific problems in a given situation. Teaming between statistical signal processors and monitoring experts would help bridge this gap and would be likely to produce significant new developments.
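
    To make the connection concrete in one toy case, the following sketch applies the classical likelihood-ratio (matched-filter) detector for a known fault signature in white Gaussian noise and estimates its false-alarm and detection probabilities by Monte Carlo simulation; the signature, noise level, and threshold are invented for illustration, and real fault signatures are rarely known this precisely.

        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical known fault signature (a short windowed tone) and noise level
        s = np.hanning(64) * np.sin(2 * np.pi * 0.2 * np.arange(64))
        sigma = 1.0

        def lr_statistic(x, s):
            """Matched-filter statistic; for a known signal in white Gaussian noise,
            thresholding this inner product is the Neyman-Pearson optimal detector."""
            return x.dot(s)

        threshold = 3.0 * sigma * np.linalg.norm(s)     # an arbitrary operating point

        # Monte Carlo estimate of the false-alarm and detection probabilities
        noise_only = sigma * rng.standard_normal((2000, 64))
        with_fault = noise_only + s
        pfa = np.mean(lr_statistic(noise_only, s) > threshold)
        pd = np.mean(lr_statistic(with_fault, s) > threshold)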

    4.7 Models and classifiers

    A number of powerful modeling tools and classification methods used extensively in the signal processing community might provide similar value in monitoring applications. For example, hidden Markov models (HMMs) are the primary tool for speech and speaker recognition, which are essentially classification problems. HMMs are very powerful, general tools which might be similarly useful for many problems in monitoring. Neural networks have been intensively studied for various nonlinear modeling and classification tasks and look promising in certain fields; again, they might prove useful for monitoring.
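
    As one hedged sketch of how an HMM might be applied (assuming the third-party hmmlearn package; the three-state "tool wear" scenario and the features are entirely hypothetical), a Gaussian HMM can be trained on feature sequences and then used to decode a most-likely state path or to score new sequences:

        import numpy as np
        from hmmlearn import hmm                        # third-party package; assumed available

        rng = np.random.default_rng(2)
        # Hypothetical training data: one 4-dimensional feature vector per spindle
        # revolution, drifting as a tool wears through three loosely defined states
        seq = np.concatenate([
            rng.normal(0.0, 1.0, (300, 4)),             # sharp tool
            rng.normal(1.5, 1.0, (300, 4)),             # worn tool
            rng.normal(3.0, 1.0, (300, 4)),             # near failure
        ])

        model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        model.fit(seq)                                  # unsupervised training on one long sequence

        states = model.predict(seq)                     # most likely hidden "wear state" path
        loglik = model.score(seq)                       # log-likelihood, usable for comparing models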

    Feature extraction and reduction has long been a subject of study in statistical pattern recognition. Many techniques have proved useful in other applications and might be fruitfully applied to monitoring.
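
    A small sketch of one classical reduction technique, principal component analysis computed via the singular value decomposition, is given below; the feature matrix is random stand-in data, and the number of retained components is arbitrary.

        import numpy as np

        rng = np.random.default_rng(3)
        features = rng.standard_normal((500, 20))       # 500 examples of 20 raw features (stand-in data)

        # Principal component analysis: project onto the directions of largest variance
        mean = features.mean(axis=0)
        centered = features - mean
        _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
        n_keep = 5                                      # arbitrary number of retained components
        reduced = centered @ vt[:n_keep].T              # 500 x 5 reduced feature set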

    4.8 Adaptive processing

    Agile manufacturing and highly variable operating conditions for machinery demand adaptive methods for manufacturing and machine monitoring, yet current methods are mostly fixed. A large number and variety of adaptive signal processing methods are extensively used in many applications, including adaptive equalizers in communications, speaker-adaptive speech recognizers, and adaptive speech and image coders. While these existing methods may not necessarily apply directly to manufacturing or machine monitoring, there is a large body of knowledge, experience, and techniques in signal processing which could be used to advantage in approaching monitoring needs.
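
    The sketch below shows one of the simplest such methods, an LMS adaptive noise canceller that adapts an FIR filter so a reference-sensor signal tracks the interference in a primary sensor; the signal, interference, filter order, and step size are hypothetical, and step-size selection in particular requires care in practice.

        import numpy as np

        def lms_cancel(primary, reference, order=16, mu=0.01):
            """LMS adaptive noise canceller: adapt an FIR filter so that the filtered
            reference signal tracks the interference present in the primary channel."""
            w = np.zeros(order)
            cleaned = np.zeros(primary.size)
            for n in range(order, primary.size):
                r = reference[n - order:n][::-1]        # most recent reference samples first
                y = w @ r                               # current interference estimate
                e = primary[n] - y                      # error = interference-cancelled output
                w = w + 2 * mu * e * r                  # LMS weight update
                cleaned[n] = e
            return cleaned, w

        rng = np.random.default_rng(4)
        fs = 4000.0
        t = np.arange(0, 2.0, 1.0 / fs)
        signal = np.sin(2 * np.pi * 37 * t)             # hypothetical signal of interest
        interference = np.sin(2 * np.pi * 60 * t + 0.5) # e.g., power-line interference
        primary = signal + interference + 0.05 * rng.standard_normal(t.size)
        reference = np.sin(2 * np.pi * 60 * t)          # reference sensor sees only the interference

        cleaned, weights = lms_cancel(primary, reference)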

    4.9 Array processing and sensor fusion

    The cost of sensors and signal processing continues to decline, thereby encouraging a trend toward the use of multiple sensors in manufacturing and machine monitoring. Sensor array processing has been one of the most active research areas in the signal processing community for many years, and many techniques exist which could be applied to arrays of sensors in monitoring applications. The simultaneous exploitation or fusion of data from multiple sources has also been a topic of research in various other applications; again, with suitable specialization, these methods might enhance the performance of monitoring systems equipped with multiple sensors.
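
    As a final hedged sketch, the code below implements a nearest-sample delay-and-sum beamformer for a hypothetical uniform line array of acoustic sensors; the array geometry, propagation speed, and source direction are invented, and practical systems would use fractional-delay filtering or frequency-domain steering.

        import numpy as np

        rng = np.random.default_rng(5)
        fs = 50000.0                                    # hypothetical sampling rate (Hz)
        c = 343.0                                       # propagation speed (m/s), here airborne sound
        n_sensors, spacing = 8, 0.05                    # uniform line array, 5 cm element spacing
        t = np.arange(0, 0.1, 1.0 / fs)

        # Hypothetical 2 kHz source arriving from 30 degrees off broadside
        theta = np.deg2rad(30.0)
        delays = np.arange(n_sensors) * spacing * np.sin(theta) / c
        x = np.array([np.sin(2 * np.pi * 2000 * (t - d)) for d in delays])
        x = x + 0.5 * rng.standard_normal(x.shape)

        def delay_and_sum(x, steer_deg):
            """Steer the array by shifting each channel (nearest-sample delays) and
            summing; signals from the look direction add coherently."""
            steer = np.deg2rad(steer_deg)
            out = np.zeros(x.shape[1])
            for m in range(x.shape[0]):
                shift = int(round(m * spacing * np.sin(steer) / c * fs))
                out += np.roll(x[m], -shift)
            return out / x.shape[0]

        on_target = delay_and_sum(x, 30.0)              # steered at the source: high output power
        off_target = delay_and_sum(x, -60.0)            # steered away: lower output power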

    5.0 Recommendations

    5.1 Teaming

    Manufacturing and machine monitoring problems are complex and multi-faceted, and workshop participants strongly felt that significant success was most likely to result from research teams of diverse industrial, academic, and multidisciplinary composition.

    5.1.1 Industrial and academic teaming

    The development of practically useful methods is likely to require a thorough understanding of industrial needs. Academic signal processing researchers need to learn about the real problems and constraints faced by industry. Conversely, industry often needs to adopt a more fundamental perspective more typical of academic approaches. Thus various methods for fostering closer contact between industry and academic researchers are essential. Industrial participants at the workshop strongly recommended academic sabbaticals in industry and internships for students as excellent means for academic researchers to become versed in industrial needs. Short courses are one mechanism through which industry can be made aware of significant academic developments.

    Joint research teams with both academic and industrial participants were considered the most likely to make significant contributions. As many manufacturing concerns prefer to purchase monitoring solutions from trusted OEM suppliers, some felt that the ideal team would consist of researchers from a university, an OEM, and a manufacturing firm. Such teams should focus on a specific, carefully chosen problem for which a significant need exists yet success is likely, and aim for a series of realistic goals which would provide visible progress and engender ongoing support for the project. Frequent, close interaction (such as one or more meetings per week) between the participants, working as peers, was considered essential for obtaining full benefit from an academic/industrial team.

    Several cultural and practical difficulties often serve as barriers to industrial/academic teaming. Differing goals and measures of success create sources of conflict between academic and industrial partners; for example, industry tends not to fully appreciate the academic need to publish results, while academicians may not respect industrial needs for non-disclosure of proprietary information. We recommend greater sensitivity and flexibility on the part of industry, government, and academe to overcome these barriers. Funding expectations and restrictions also limit the ability to form industrial and academic teams. Many governmental funding agencies restrict grants solely to academic institutions and expect industry to "pay their own way;" however, initial research may be too high-risk or long-term for industry to support, so very promising research may go undone simply because no mechanism exists to support it. To foster industry/academic teaming, we recommend that funding agencies favor such partnerships in funding decisions. Furthermore, we recommend that they develop more flexible funding mechanisms which allow them to support unconventional teams. The National Science Foundation's GOALI program, for example, is one step in this direction. Finally, it should be noted that the formation of teams from different organizations, the development of effective working relationships, and the creation and demonstration of solutions to challenging, complex problems require a substantial, long-term commitment. Funding of multidisciplinary teams should be committed for a minimum of three years, and preferably five years.

    5.1.2 Multi-disciplinary teams

    Problems in manufacturing and machine monitoring tend to involve many different fields, and multidisciplinary teams are most likely to be successful in addressing these issues. For example, models of physical processes are needed as the basis for new signal processing methods; mechanical engineers and physicists should work together with signal processors to jointly develop models and the algorithms which use them. Statisticians could contribute statistical tools and much-needed insight for the many problems involving noise, uncertainty, training, and validation. The coupling of monitoring with machine control and product quality in manufacturing could yield enormous benefits, but would require collaboration of control engineers with monitoring experts. These are just a few examples of the many multidisciplinary technical needs best addressed by multidisciplinary teams.

    The panel strongly felt that multidisciplinary teams are much more likely to yield significant results. We recommend that such teaming be encouraged by favoring multidisciplinary teams in funding decisions.

    5.2 Increased visibility

    In terms of total financial value, significance to the economy, and improvement of military operations, manufacturing and machine monitoring is potentially one of the most significant applications of signal processing. However, this application area is virtually unknown in the traditional signal processing community. Increased visibility within this community should lead to increased research effort in these applications and faster transfer of signal processing technology. Greater awareness within the signal processing community can be fostered by workshops and special sessions at traditional signal processing conferences. More extensive recommendations along these lines are listed above. Finally, significant funding of signal processing research related to manufacturing and machine monitoring will inevitably lead through the normal course of research to sessions at conferences, workshops, and greater interest and visibility in the signal processing community.

    It should be noted that the manufacturing and machine monitoring community is much more aware of developments in signal processing than the converse. Sophisticated signal processing methods are regularly developed or applied by researchers in manufacturing and machine monitoring. Serious effort from the traditional signal processing community will be required to make significant contributions to this application area; increased visibility is only a necessary first step.

    Manufacturing and machine monitoring also suffers from less visibility in commercial circles than warranted by its importance. One major reason is the lack of standards for monitoring equipment and data exchange. The development of standards would allow dramatic improvements in the system-wide utility of monitoring, reduced cost, and rapid innovation and growth of monitoring technology. The adoption of a standard such as MIMOSA would greatly stimulate commercial development, innovation, and deployment. We strongly recommend the support of the MIMOSA standardization effort as a most important means of increasing the visibility, market, and opportunity for innovation in monitoring technology. The second reason is that innovation is inherently more difficult for "process" than it is for "product." Namely, manufacturing processes and monitoring techniques are generally not as visible to outsiders or even to company managers as a company's products are. Thus, there is less awareness of the scale, nature, and efficacy of these processes and techniques. Researchers and funding agencies should be alert to these less visible yet large-scale opportunities for the application of research.

    5.3 A problem-focused workshop

    As a follow-on to this workshop, we would like to propose another workshop for the near future. This workshop would have a smaller number of participants (8-16) but would be focused on a class of applications which would be studied in depth for at least two weeks. These problems would be represented by actual machines, as opposed to having just databases. As has been experienced in the speech recognition research community, having "competitions" with standard databases has allowed for significant progress at the unfortunate expense of over-tuning algorithms for these databases. Namely, practical speech recognition systems often work much less accurately than would be predicted by standard database studies.

    The next workshop should be planned with a set of signal processing challenges, which participants would be given far in advance. They could then prepare prototype algorithms, which could be tested early during the workshop. Most importantly, as the workshop progresses, the facility should allow for participants' close interaction with the problem--not just with the data representing the problem.

    This workshop should also allow for the participation of experienced machine operators and should provide adequate computation facilities on-site. Aside from the experience the interaction would give the research community, large, well-understood databases for the community at large would be a lasting result.


    Appendix A

    Workshop Attendees

    Jeff Allen, NRaD

    Les Atlas, U. of Washington

    Richard Baraniuk, Rice University

    Gary Bernard, Boeing

    Charles Bouman, Purdue Univ.

    Kevin Buckley, U. of Minnesota

    Leon Cohen, Hunter College

    John Cozzens, NSF

    David Dornfeld, UC Berkeley

    Martin Dowling, Liberty Technologies

    Ben Friedlander, UC - Davis

    Richard Furness, Ford Motor Co.

    G. Giannakis, U. of Virginia

    David Hall, Penn State U.

    Larry Heck, SRI International

    Jimmy Hosch, SEMATECH

    Zhaohui Huang, United Technologies Res. Ctr.

    Harry Hurd, Harry L. Hurd Assoc.

    Douglas Jones, U. of Illinois

    Mos Kaveh, U. of Minnesota

    Hamid Krim, MIT

    Douglas Lake, ONR

    Kam C. Lau, Automated Precision

    Mark Libby, DLI Engineering

    Jose Lopez, Alpha Tech

    Pat Loughlin, U. of Pittsburgh

    George Mellman, MathSoft

    John Mitchell

    Bill Nickerson, Penn State U.

    Gene Parker, Barron Assoc.

    Dennis Polla, U. of Minnesota

    Giorgio Rizzoni, Ohio State U.

    Bob Rohrbaugh, NSWC

    Akbar Sayeed, U. of Illinois

    Peter Sherman, Iowa State

    Ahmed Tewfik, U. of Minnesota

    Randy Young, Penn State U.


    Appendix B

    Pertinent Web Links

    [Please pass more link ideas on to Les Atlas via e-mail <atlas@ee.washington.edu>.]

