Do finite element studies have limited usefulness, as they are not validated by experimental data?

Viewpoint: Yes, finite element studies have limited usefulness because they produce idealized data that may be useful for theoretical analysis but could produce serious errors in real-world machines or structures.

Viewpoint: No, most finite element studies are useful and are being validated in ever-increasing numbers of computer-aided design (CAD) applications.

In solving any problem, even one that arises in ordinary life, a common belief holds that the best way to attack it is to reduce it to smaller parts. This is the essence of the finite element study, a technique in engineering based on the idea of dividing structures into smaller parts, or elements, for purposes of study and analysis. Finite element studies rest on the finite element method (FEM), the set of mathematical principles underlying the entire process, and on finite element analysis (FEA), which is simply the application of those principles. The concepts of FEM and FEA are so closely linked that they are often used interchangeably in discussions of finite element studies.

Though the idea of reducing a problem to its constituent parts is an old one, FEA is a relatively new concept. It can be traced to the work of mathematicians and engineers of the mid-twentieth century, most notably German-born American mathematician Richard Courant (1888-1972) and American civil engineer Ray William Clough (1920-). Courant was the first to apply what would become known as the FEM when, in 1943, he suggested solving a torsion problem (one involving the twisting of a body about an axis, as when turning a wrench) by dividing the region into a series of smaller triangular elements. Clough, who coauthored a 1956 paper formulating the principles of the FEM, actually coined the term "finite element method" in 1960.

The occasion of the 1956 paper was a study of stiffness in wing designs for Boeing aircraft, and in the years that followed, the aircraft and aerospace industries would serve as incubators for the new technique known as FEM. The basics of FEM draw on techniques from a whole range of mathematical disciplines, particularly calculus. These include vector analysis, concerned with quantities that have both direction and magnitude; matrix theory, which involves the solution of systems of equations by applying specific arrangements of numbers in columns and rows; and differential equations, which relate quantities to the rates at which they change with respect to one another.

In performing finite element analysis, engineers build models using principles borrowed from graph theory and combinatorics, areas of mathematics pioneered by Swiss mathematician Leonhard Euler (1707-1783) in solving the "Königsberg bridge problem." Since Euler's method of resolving this brain-teaser illustrates how engineers develop models based on FEA, it is worth considering very briefly how he did so.

The town of Königsberg, Prussia (now Kaliningrad, Russia), astride the Pregel River, included two islands. Four bridges connected the mainland with the first island, and an additional two bridges connected the second island with the shore, while a seventh bridge joined the two islands. The puzzle was this: could a person start from a particular point and walk along a route that would allow him or her to cross each bridge once and only once? In solving the problem, Euler drew a graph, though not a "graph" in the sense that the word is typically used. Euler's graph was more like a schematic drawing, which represented the four land masses as points or nodes, with segments between them representing paths. Using the nodes and segments, he was able to prove that it is impossible to cross each bridge without retracing one's steps at some point.
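Euler's counting argument is simple enough to reproduce directly. Below is a minimal sketch in Python (not part of the original account; the node labels are hypothetical ones chosen for illustration) that represents the four land masses as nodes and the seven bridges as edges, then counts how many nodes touch an odd number of bridges. A walk that crosses every bridge exactly once can exist only if zero or two nodes have odd degree; in Königsberg all four land masses have odd degree, so no such walk exists.

```python
# Sketch of Euler's Koenigsberg argument: land masses as nodes, bridges as edges.
# Hypothetical labels: N = north bank, S = south bank, A and B = the two islands.

from collections import Counter

bridges = [
    ("N", "A"), ("N", "A"),   # two bridges from the north bank to island A
    ("S", "A"), ("S", "A"),   # two bridges from the south bank to island A
    ("N", "B"), ("S", "B"),   # one bridge from each bank to island B
    ("A", "B"),               # the bridge joining the two islands
]

# Count how many bridge ends touch each land mass (the node's degree).
degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_nodes = [node for node, d in degree.items() if d % 2 == 1]
print("degrees:", dict(degree))
print("odd-degree nodes:", len(odd_nodes))
print("walk crossing each bridge exactly once exists:", len(odd_nodes) in (0, 2))
```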

Models using FEA also employ nodes, which in this case refer to the connections between elements, the units of the model. A group of nodes forms a mesh, and the degree to which the nodes are packed into a given space is referred to as the mesh density. Engineers then take data derived from studies of the mesh and apply computer-aided analysis to study structural properties, as the sketch below illustrates. This approach contrasts with what are called analytical techniques, or means that involve direct study of the structure in question.
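As a rough illustration of these terms, the sketch below (not drawn from the article; the two-triangle patch, its coordinates, and the density measure are illustrative assumptions) stores a tiny mesh the way many FEA programs do, as node coordinates plus element connectivity, and reports a simple nodes-per-area measure of mesh density.

```python
# A toy mesh: node coordinates plus the list of nodes each element connects.

# Node coordinates (x, y) in metres for a small rectangular patch, 1 m by 0.5 m.
nodes = {
    0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 0.5), 3: (0.0, 0.5),
}

# Each triangular element is recorded as the trio of node numbers it joins.
elements = [
    (0, 1, 2),   # lower-right triangle
    (0, 2, 3),   # upper-left triangle
]

area = 1.0 * 0.5
print(f"{len(nodes)} nodes, {len(elements)} elements")
print(f"mesh density: {len(nodes) / area:.1f} nodes per square metre")
```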

FEA is only one of several techniques for mathematical model-building applied by engineers, but it is among the most popular. From its origins in the aircraft industry, its uses have expanded to include studies of shipbuilding, automobile manufacture (including crash tests), and various problems in civil and architectural engineering—for example, stress capacity of bridges. Thanks to the development of computer software that does the calculations for the user, application of FEA has also spread to other sciences—even, as noted below, paleontology.

Despite its wide application, FEA is not without controversy. Detractors maintain that FEA applies an undue level of abstraction to problems that are far from abstract—problems that, if miscalculated, could result in errors that would cost human lives. Furthermore, opponents of FEA hold that the underlying principle is invalid: whereas on paper it is possible to break a large entity into constituent parts, in the real world those parts are inseparable, and their properties impinge on one another in such a way as to affect—sometimes drastically—the operation of the whole.

Supporters of FEA, on the other hand, point out that there are many situations for which analytical techniques are simply inadequate. If a bridge can only be sufficiently tested when it is built, for instance, then that may well be too late, and the expenses and dangers involved far outweigh the possible risks that may result from the approximations necessary for FEA calculations. It is true that many engineering designs can be tested in model form by using a wind tunnel—another by-product of developments in the aerospace industry—but no method of pre-testing can be considered absolutely fail-safe.

The benefits of FEA to design are demonstrable, with a number of examples, cited below, of successful implementation in the development of aircraft, automobiles, and other products. Certainly there are risks, but these (according to supporters of FEA) reside more in the engineer's ability to apply the method than in the method itself. The mathematics involved is so complex that it is essential for the practitioner to understand what he or she is doing, regardless of whether that person possesses computer technology that can aid in calculations.

—JUDSON KNIGHT

Viewpoint: Yes, finite element studies have limited usefulness because they produce idealized data that may be useful for theoretical analysis but could produce serious errors in real-world machines or structures.

In solving engineering problems, a designer has at his or her disposal two basic choices: analytical techniques, which involve direct observation of the structure in question, and one or another form of mathematical model-building. Among the most widely used forms of the latter is the finite element method, also known as finite element analysis (FEA). The finite element method is a "divide and conquer" strategy: it breaks larger problems down into smaller ones, for instance by dividing a physical area to be studied into smaller subregions. At the same time, it calls for the development of approximations that further simplify the problem or problems in question.

If the finite element method sounds like a time-saver, that is because it is: it makes it possible for engineers to reduce insurmountable problems to a size that is conquerable. On paper, in fact, the finite element method is extremely workable, but "on paper" may be the method's natural habitat, since there are legitimate questions regarding its value in real-world situations. The phrase "close enough for government work," with its implication that minor errors will eventually become invisible if camouflaged in a large enough bureaucracy, might be applied to the finite element method. The latter, after all, is based in part on the idea that small irregularities in surfaces and measurements can and should be eliminated on paper, so as to make calculations easier.

This approach raises a very reasonable concern—namely, is a method that makes use of approximation good enough for design and analysis of real-world structures? On the surface of it, the answer is clearly no. It is clear enough intuitively that, given a certain number of irregularities that have been eliminated from calculations, the results may be in error by a degree significant enough to challenge the integrity of the larger structure. Assuming the structure in question is one with the potential to either protect or take human life—a bridge or an airplane, for instance—miscalculation can have serious results.

The Finite Element Method in Action

The finite element method has its roots in engineering developments from around the turn of the twentieth century. During that era, it first became common for bridge-builders and other engineers to apply the theory of structures, or the idea that a larger structure is created by fitting together a number of smaller structural elements. Today, of course, this concept seems self-evident, as simple as assembling interlocking Lego blocks to form a model skyscraper, but at the time it was revolutionary. Of particular significance was the realization that, if the structural characteristics of each element could be known, it was theoretically possible—by algebraic combination of all these factors—to understand the underlying equations that governed the whole.

This matter of algebraic combination, however, proved far easier said than done where extremely complex structures were involved. Engineering analysis of large structures hit a roadblock; then, in the mid-twentieth century, a new and promising method gained application in the aircraft industry. This new method was finite element analysis, itself a product of several developments in mathematics and computer science, including matrix algebra and the digital computer, which made possible the solution of extremely large and detailed problems.

The "finite elements" in the finite method are the simple regions into which a larger, more complex geometric region is divided. These regions meet at points called nodes, which are the unknown quantities in problems of structural analysis involving stress or displacement. Within the realm of finite element analysis, a number of methods exist for solving problems, depending on the nature of the factors involved: for example, in solid mechanics, functions of the various elements are used as representations for displacements within the element, a technique known as the displacement method.

An application of the displacement method could be a two-dimensional computer-aided design (CAD) model of an automobile before and after collision with another object. Finite element analysis would make it possible to evaluate the crumpling that would occur in each door panel and in other discrete areas of the car body. Moreover, with the use of proper techniques, it is possible to plug in values for factors such as the car's velocity, the velocity of the other object, and the direction of impact, so as to obtain an accurate model of how each part of the car responds to the crash. There are four essential stages to this process: generation of a model, verification of the input data, analysis through the generation and solution of equations, and postprocessing of the output into a form efficient for use.
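A full crash simulation is far beyond a short example, but the heart of the displacement method can be shown on a one-dimensional problem. The sketch below is only an illustrative sketch, not the procedure described in the article: it divides a bar under an axial tip load into two-node linear elements, assembles each element's stiffness contribution into a global matrix, and solves K u = f for the unknown nodal displacements. The material properties, load, and element count are assumed values.

```python
# One-dimensional displacement-method sketch: axial bar, fixed at one end,
# point load at the other, meshed with equal two-node linear elements.

import numpy as np

E = 200e9        # Young's modulus in Pa (steel, assumed)
A = 1e-4         # cross-sectional area in m^2 (assumed)
L = 2.0          # bar length in m
P = 10e3         # axial tip load in N
n_elem = 4       # number of finite elements
n_node = n_elem + 1
le = L / n_elem  # element length

# Stiffness of a single two-node linear bar element.
ke = (E * A / le) * np.array([[1.0, -1.0],
                              [-1.0, 1.0]])

# Assemble the global stiffness matrix by adding each element's block.
K = np.zeros((n_node, n_node))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += ke

# Load vector: a single point load at the free end.
f = np.zeros(n_node)
f[-1] = P

# Boundary condition: node 0 is fixed, so solve the reduced system.
u = np.zeros(n_node)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print("nodal displacements (m):", u)
print("exact tip displacement PL/EA =", P * L / (E * A))
```

For this simple case the computed tip displacement agrees with the closed-form result PL/EA, which is one way a small model like this can be checked against known theory.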

Though pioneered by the aircraft industry, the finite element method is applied today in shipbuilding, automobile manufacture, and both civil and architectural engineering. Engineers use it to predict levels of thermal stress, effects of vibration, buckling loads, and the amount of movement a building will experience due to wind and other factors. Paleontologist Emily Rayfield even used it to study the jaw of Allosaurus fragilis, a dinosaur of the Jurassic period. "I'm trying to see how this apparently carefully evolved bone structure translated into biting strength to see if it can further our understanding of how Allosaurus hunted and fed," she told Mechanical Engineering.

What's Wrong with This Picture?

Like many users of the finite element method, Rayfield availed herself of software—in this case a program called Cosmos, produced by Structural Research and Analysis of Los Angeles, California—that performs the tedious and time-consuming computations involved. Such software, supporters of the method maintain, has made finite element analysis easier for the non-mathematician to apply. Wrote Ulises Gonzalez in Machine Design, "Admittedly, early FEA [software] versions were difficult to use. Engineers needed significant expertise to build and analyze models." However, "The required expertise level has since diminished so rapidly that most engineers can now produce accurate results. Expert analysts still have their place solving the most difficult problems and mentoring their juniors. But for the rest of the user audience, modern FEA packages automatically and intelligently deal with geometry, material, and control settings so users are free to be more creative."

As rosy as this picture may seem, the ease of operation associated with modern FEA software does not come without a price. In a Design News primer on such software for engineers, Bob Williams noted that "there's some tradeoff of speed for accuracy." Rayfield, in her calculations involving the Allosaurus skull, had to operate as though all the skull bones were fixed in place, when in fact she had reason to believe that some of the bones moved in relation to one another. This assumption was necessary in order to apply the finite element method. "It's faster than calculating stress by hand," observed Williams, discussing a finite element program applied for motion simulation, "but it uses a rigid-body motion program, so the engineer must accept certain assumptions. For instance, parts like gaskets are flexible, but a rigid-body program can't calculate that."

In using the finite element method, then, some accuracy is lost for the sake of speed and convenience. When the design is for a machine or structure that has the potential to protect or take human life, the consequences of that lost accuracy can be grave indeed. According to Paul Kurowski in Machine Design, "Idealizing … a 3-D model [using the finite element method] eliminates small and unimportant details. Sometimes the process replaces thin walls with surfaces, or drops a dimension to work with a 2-D representation of the part…. The process eventually forms a mathematical description of reality that we call a mathematical model." But, as Kurowski noted, finite-element analysis "hides plenty of traps for uninitiated users. Errors that come from idealization … can be bad enough to render results either misleading or dangerous, depending on the importance of the analysis."

Kurowski explained the popularity of the finite element method thus: "To analyze a structure, we solve its equations. Solving complex equations 'by hand' is usually out of the question because of complexity. So we resort to one of many approximate numerical methods," of which finite element analysis is the most widely used. Variables in a finite element model are represented by polynomial functions, which, if they described the entire model, would have to be extraordinarily complex. "To get around that difficult task," Kurowski went on, "the model (a domain) is split into simply shaped elements (subdomains)…. Notice that a continuous mathematical model has an infinite number of degrees of freedom while the discretized (meshed) model has a finite number of degrees of freedom."

In other words, the finite element model, by breaking the problem into smaller pieces (a process known as discretizing or meshing), creates artificial boundaries between parts that are not separated in the real world. Two ballast compartments in a ship's hull, for instance, may be treated in a finite element model as though they were entirely separated from one another. However, if in reality there is a flow-through system such that overflow from one compartment enters the other compartment, then this can have a serious impact on issues such as the ship's center of gravity. Noting that "The allowed complexity of an element's shape depends on the built-in complexity of its polynomials," Kurowski observed that "Errors from restrictive assumptions imposed by meshing … can have serious consequences. For example, modeling a beam in bending [using just] one layer of first-order elements is a recipe for disaster."
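The effect Kurowski describes can be reproduced on even a trivial problem. The sketch below is a simplified analogue, not his beam-in-bending example: a bar under a uniformly distributed axial load is meshed with first-order elements, whose computed stress is constant within each element, so a coarse mesh badly understates the peak stress at the support. All numerical values are illustrative assumptions.

```python
# Discretization error from first-order elements: peak stress at the support
# of a uniformly loaded bar, computed on progressively finer meshes.

import numpy as np

E, A, L, q = 200e9, 1e-4, 2.0, 5e3   # modulus, area, length, load per metre (assumed)
exact_peak_stress = q * L / A        # exact axial stress at the fixed end

for n_elem in (1, 2, 4, 8, 16):
    le = L / n_elem
    n_node = n_elem + 1

    # Assemble the stiffness matrix and the consistent nodal load vector.
    K = np.zeros((n_node, n_node))
    f = np.zeros(n_node)
    ke = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_elem):
        K[e:e + 2, e:e + 2] += ke
        f[e:e + 2] += q * le / 2.0   # distributed load shared between the two nodes

    # Fix the node at x = 0 and solve for the remaining displacements.
    u = np.zeros(n_node)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

    # Stress in the first element is constant because the elements are linear.
    fem_peak_stress = E * (u[1] - u[0]) / le
    err = 100.0 * (exact_peak_stress - fem_peak_stress) / exact_peak_stress
    print(f"{n_elem:2d} elements: peak stress underestimated by {err:5.1f} %")
```

With a single element the peak stress is understated by half; each doubling of the mesh roughly halves the error. The danger is that nothing in the coarse model announces this, and a user who never refines the mesh or checks against theory may never notice.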

Alternatives

M. G. Rabbani and James W. Warner, writing in the SIAM Journal on Applied Mathematics, noted the shortcomings of finite element techniques for modeling the transport of contaminants through subsurface water systems. "The main disadvantage with the finite element method," they maintained, "is that its mathematical formulations are relatively complicated and follow set rules step by step. In a recent [early 1990s] investigation, it was found that such prescribed rules are not applicable everywhere." Similarly, Shyamal Roy, discussing what goes on in the initial stages of project planning, wrote in Machine Design that "when ideas are in flux and design goals are moving targets, CAD and FEA are not the most appropriate tools."

Roy went on to note that "Predesign work often prompts questions such as, when will we know how sensitive belt tension is to tolerances in pulley diameter and belt length? How much work should it take to optimize an extrusion profile so the material extrudes straight? How long will it take to determine section properties for a complex beam? Or, why is it necessary to wait until finishing a design to check for interferences in the mechanism? The common thread through all these questions is that design decisions are waiting on answers CAD programs"—and, by extension, finite element analysis—"cannot deliver." Roy went on to argue for modifications to premodeling design-analysis (PMDA) software, particularly by allowing opportunities for graphical solutions (i.e., drawings) that may not require the use of equations.

It should also be noted that the finite element method is not the only mathematical technique that can be used in engineering analysis. Alternatives include the boundary element, discrete element, finite difference, finite volume, spectral, and volume element methods. With each mathematical method, however, there is still the potential for pitfalls that result from relying too heavily on mathematical models. Not only are those models no substitute for real-world testing of materials, but when a model is based on approximations rather than exact figures, there is a danger that the resulting design may only be "close enough for government work"—which in many cases is unacceptable or even downright dangerous.

—JUDSON KNIGHT

Viewpoint: No, most finite element studies are useful and are being validated in ever-increasing numbers of computer-aided design (CAD) applications.

Finite element studies, identified more commonly as finite element analysis (FEA) and the finite element method (FEM), have become a mainstay of computer-aided industrial engineering design and analysis. The difference between FEA and FEM is so subtle that the terms are frequently used interchangeably: FEM provides the mathematical principles used in finite element analysis. FEA software has become robust and user-friendly, and engineers use it to help determine how well structural designs survive actual conditions such as loads, stress, vibration, heat, electromagnetic fields, and other applied forces.

Most finite element analyses are used by design engineers to confirm a design or to choose between alternative systems or components. According to Steven J. Owen, Senior Member of Technical Staff at Sandia National Laboratories, increasingly large and complex designs are being simulated using the finite element method. FEA was first applied in the aerospace and nuclear industries, and the National Aeronautics and Space Administration (NASA) developed a powerful general-purpose FEA program for use in computer-aided engineering. It would be more accurate to say that finite element studies have nearly limitless usefulness, for they are being validated in ever-increasing numbers of computer-aided design (CAD) applications.

Background

Finite element studies are based on the idea that any structure, no matter how complex, can be divided into a collection of discrete parts, i.e., elements. This is not a new idea: ancient mathematicians used it to estimate the value of pi (π) from a polygon inscribed in a circle. What is relatively new is its application to structural analysis, fluid and thermal flows, electromagnetics, geomagnetics, and biomechanics.
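That ancient estimate is itself a tidy picture of approximating a continuous shape with finite pieces. The short sketch below (an illustration, not part of the article) uses the classical side-doubling recurrence: starting from a hexagon inscribed in a circle of radius 1, the polygon's half-perimeter approaches pi as the number of straight segments grows.

```python
# Estimate pi from inscribed polygons: double the number of sides repeatedly,
# updating the side length with the classical recurrence, and report the
# polygon's half-perimeter (the circle has radius 1, so half-perimeter -> pi).

import math

n, side = 6, 1.0                      # inscribed hexagon in a circle of radius 1
for _ in range(6):
    print(f"{n:4d} sides: pi is approximately {n * side / 2:.6f}")
    side = math.sqrt(2.0 - math.sqrt(4.0 - side * side))   # side length after doubling n
    n *= 2
print(f"      math.pi = {math.pi:.6f}")
```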

German-born American mathematician Richard Courant is credited with having suggested in 1943 that a torsion problem could be solved by using triangular elements and applying approximations following the Rayleigh-Ritz method. Although he did not use the term finite element, Courant's work is often referred to as the beginning of FEM because he used the key features of the finite element method in his studies. The Rayleigh-Ritz method of approximation had been developed earlier, and independently, by the physicists Lord Rayleigh (1842-1919) and Walther Ritz (1878-1909).

Some say FEM started with a classic paper titled "Stiffness and Deflection Analysis of Complex Structures," published in September 1956 by M. J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp in the Journal of the Aeronautical Sciences. The paper was based on the combined research of Professor Ray Clough of the University of California at Berkeley, Professor Harold C. Martin of the University of Washington, and their coauthors at Boeing Airplane Company, who in 1953 analyzed the stiffness of a specific wing design by dividing the wing structure into triangular segments.

By 1960 Professor Clough had come up with the official name "finite element method," and by that time there were computers powerful enough to meet the computational challenge. Soon afterward, NASA began implementing the finite element method on computers at the Goddard Space Flight Center. One of the NASA projects ultimately produced the powerful proprietary program for FEM called NASTRAN (NASA STRuctural ANalysis).

How Does FEA Work?

FEA is a mathematically based, computerized engineering tool. The mathematics of finite element analysis is based on vector analysis, matrix theory, and differential equations. Fundamental to FEA is the concept that a continuous function can be approximated using a discrete model; that is, FEA is based on the idea of building complicated objects from small, manageable pieces. Element is the term for the basic subdivision unit of the model, and node is the term for a point of connection between elements. A complex system of points (nodes) makes up a grid called a mesh, and how closely packed the nodes are is described as the mesh density. Data from the mesh are then used in computer-aided analysis of the material and structural properties under study.
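A minimal sketch of that continuous-to-discrete idea follows; it is an illustration rather than anything from the article, and the smooth profile it approximates is an arbitrary assumed function. The profile is replaced by straight segments between nodes, and the worst-case error shrinks as the mesh density increases.

```python
# Approximate a continuous function with a discrete, piecewise-linear model
# and watch the maximum error fall as the number of nodes grows.

import numpy as np

def profile(x):
    return np.sin(np.pi * x) * np.exp(-x)   # some smooth field on [0, 1] (assumed)

x_fine = np.linspace(0.0, 1.0, 2001)         # dense sampling used as the reference
for n_nodes in (3, 5, 9, 17, 33):
    nodes = np.linspace(0.0, 1.0, n_nodes)            # the mesh nodes
    approx = np.interp(x_fine, nodes, profile(nodes)) # piecewise-linear model
    err = np.max(np.abs(approx - profile(x_fine)))
    print(f"{n_nodes:3d} nodes: max error = {err:.5f}")
```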

Using FEA, virtually any structure, no matter how complex, can be divided into small elements that simulate the structure's physical properties. The model created by FEA is then subjected to rigorous mathematical examination. FEA significantly reduces the time and costs of prototyping and physical testing. Computer-aided engineering has been extensively used in the transportation industry. It has contributed significantly to the building of stronger and lighter aircraft, spacecraft, ships, and automobiles.

Industry generally uses 2-D modeling and 3-D modeling, with 2-D modeling better suited to small computers. Today's desktop PC can perform many complex 3D analyses that required a mainframe computer a few years ago. A great deal of powerful software has been developed for finite element analysis applications. Among the new developments are mesh generation algorithms to tackle the challenges of smoothing surface domains. Where early FEA utilized only tens or hundreds of elements, users of FEA have pushed technology to mesh complex domains with thousands of elements with no more interaction than the push of a button.

Supercomputer vs. PCs for FEA

In-depth studies of material synthesis, processing, and performance are essential to the automobile industry. Questions on how well lighter steels absorb energy compared to heavier materials during a crash are the types of problems that are being studied on supercomputers at Oak Ridge National Laboratory (ORNL). Researchers at ORNL have shown that complex models can be developed and modified to run rapidly on massively parallel computers.

While actual vehicle collision tests provide dramatic design verification, a clear understanding of material behavior gained from vehicle impact simulation provides valuable information at far less expense. Computational simulations of single- and two-car impacts have been run numerous times to identify deficiencies of existing vehicle models and to provide data to improve their performance. The large, detailed finite element models involved are not feasible on single processors: according to an ORNL report on material-performance applications, it is not unusual for the best vehicle models, those able to capture complex deformation during impact, to contain 50,000 or more finite elements.

Finite element analysis of aerospace structures is among the research projects being carried out at the Pittsburgh Supercomputing Center, a joint effort of Carnegie Mellon University and the University of Pittsburgh together with Westinghouse Electric Company. The center was established in 1986 and is supported by several federal agencies, the Commonwealth of Pennsylvania, and private industry.

Charbel Farhat, Professor and Director of the Center for Aerospace Structures at the University of Colorado at Boulder, has worked to develop sophisticated computational methods for predicting the "flutter envelope" of high-performance aircraft and to encourage wider use of these methods in industry. Flutter is a vibration that amplifies itself, and its potential to cause a crash challenges designers of high-performance aircraft. Wing flutter—partly the result of faulty structural maintenance—caused an F-117A Nighthawk, an operational "stealth" plane, to lose most of one wing and crash at an air show in September 1997.

The flutter envelope is a curve of speed plotted against altitude; it indicates the speed not to be exceeded, even if the engines can reach it, because beyond that point any perturbation will be amplified, according to Farhat. Flutter involves the interaction between flow and structure, so predicting it requires solving the equations of motion for the structure simultaneously with those for the fluid flow. As computational technology improves, it becomes possible to use sophisticated modeling techniques, including detailed finite element structural modeling of aircraft. In 2001 Professor Farhat performed flutter predictions for the F-16 using the Pittsburgh Supercomputing Center's new Terascale Computing System to validate the innovative simulation tool he had developed.

The question of which is best—supercomputer or desktop PC—depends entirely on the application of FEA. If the job requires dismantling one model of a sport utility vehicle to do an FEA of each part and simulate crash tests, ORNL researchers with their massively parallel computers are needed to do the job. However, when the study involves flaw evaluation and stress analysis for materials and welds, a desktop PC can perform complex 3D analyses using available commercial software packages such as ABAQUS. Many jobs that would have required a mainframe or high-end workstation a few years ago can be done on a PC today. Even NASA's NASTRAN software has been adapted for use on a PC.

The NASA Connection

The background for the development of NASTRAN, FEA software for computerized engineering, is one of NASA's Spinoff stories, which describe developments that started as part of NASA's aeronautic and space research and have since become useful to other industries. In the 1960s the MacNeal-Schwendler Corporation (MSC) of Los Angeles, California, was commissioned to develop software for aerospace research projects, identified as the NASA Structural Analysis program, NASTRAN. In 1982 MSC procured the rights to market versions of NASTRAN. NASA views the transfer of technology to non-aerospace companies as important to U.S. competitiveness in a global market, and several types of licenses are available for using NASTRAN.

NASTRAN is described as a powerful general-purpose finite element analysis program for computer-aided engineering. It has been regularly improved and has remained state-of-the-art for structural analysis. NASTRAN can be used for almost any kind of structure and construction. Applications include static response to concentrated and distributed loads, thermal expansion, and deformation response to transient and steady-state loads. Structural and modeling elements include the most common types of structural building blocks, such as rods, beams, shear panels, plates, and shells of revolution, plus a "general" element capability. Different sections of a structure can be modeled separately and then combined into a joint model, and users can expand the program's capabilities using NASTRAN's Direct Matrix Abstraction Program (DMAP) language.

NASA continues to be involved. The Finite Element Modeling Continuous Improvement (FEMCI) group was formed at NASA to provide a resource for FEA analytical techniques necessary for a complete and accurate structural analysis of spacecraft and spacecraft components.

Some Applications of FEA

PC software adapted from NASTRAN by Noran Engineering (NE) of Los Alamitos, California, is available for such applications as the analysis of the Next Generation Small Loader, which will replace an older aircraft loader used by the Royal Air Force (RAF) of the UK and the Royal Norwegian Air Force. As described by NE, the loader will have to withstand both arctic and desert conditions as well as support large weights, withstand dock impacts, and endure various types of point loading. The FEA was completed in less than an hour on an Intel Pentium III 500 MHz computer.

FEA software identified as LS-DYNA was developed at Lawrence Livermore National Laboratory, where work on applications for stress analysis of structures subjected to a variety of impact loading began in 1976. The software was first used with supercomputers, which at that time were slower than today's PCs, and the laboratory continued to improve it. By the end of 1988 the Livermore Software Technology Corporation was formed to continue development of LS-DYNA. While research is still improving it, the software is already available for biomedical applications: serious head and neck injuries sustained by auto crash victims are simulated in LS-DYNA to aid in the design of airbags, head supports, and body restraint systems. Other applications include simulations of airbag deployment, heat transfer, metal forming, and drop testing, and the software is also used in seismic studies as well as for military and automotive applications.

As with any computer program, the results of even sophisticated applications are only as good as the input. FEA is described as a powerful engineering design tool, but when using FEA software it is exceedingly important for the user to have some understanding of the finite element method and of the physical phenomena under investigation. Finite element studies are becoming part of engineering programs in many universities, and research continues to make both FEM and FEA more useful.

—M. C. NAGEL

Further Reading

Buchanan, George R. Schaum's Outline of Finite Element Analysis. New York: McGraw-Hill, 1994.

Gonzalez, Ulises. "Smarter FEA Software Unburdens Users." Machine Design 73, no. 5: 120-26.

Hughes, Thomas J. R. The Finite Element Method: Linear Static and Dynamic Finite Element Analysis. Mineola, NY: Dover Publications, Inc., 2000.

Kurowski, Paul. "More Errors That Mar FEA Results." Machine Design 74, no. 6: 51-56.

Kwon, Young W., and Hyochoong Bang. The Finite Element Method Using MATLAB. Boca Raton, FL: CRC Press LLC, 2000.

Lander, Jeff. "Pump Up the Volume: 3D Objects That Don't Deflate." Game Developer 7, no. 12: 19.

Lee, Jennifer. "Behind the Leaky Faucet: Dissecting the Complex Drip." New York Times (6 March 2001): F-4.

Rabbani, M. G., and James W. Warner. "Shortcomings of Existing Finite Element Formulations for Subsurface Water Pollution Modeling and Its Rectification: One-Dimensional Case." SIAM Journal on Applied Mathematics 54, no. 3: 660-73.

Roy, Shyamal. "Don't Touch CAD Until You've Firmed Up the Concept." Machine Design 74, no. 4: 70-72.

Thilmany, Jean. "Digital Analysis of a Dinosaur's Head." Mechanical Engineering 123, no. 11 (November 2001): 18.

Williams, Bob. "Six Things All Engineers Should Know Before Using FEA." Design News 55, no. 24: 85-87.

KEY TERMS

COMPUTER-AIDED DESIGN (CAD):

The use of complex engineering or architectural software in design work, particularly for the purpose of providing two- and three-dimensional models of the structure to be built.

DIFFERENTIAL EQUATION:

An equation, used in calculus, that expresses a relationship between a function and its derivatives. A function (f) is an association between variables, such as would be expressed by f(x) = y; a derivative is the limiting value of the rate of change of a function with respect to a variable, as expressed by dy/dx.

FINITE ELEMENT METHOD/FINITE ELEMENT ANALYSIS (FEA):

A mathematical technique for solving complex engineering problems by dividing a physical area to be studied into smaller subregions, with variables represented by polynomial functions, and by developing approximations of some values that further simplify the problems in question.

MATRIX THEORY:

The algebraic study of matrices and their use in evaluating linear processes. A matrix is a rectangular array of numbers or other scalar quantities.

NODE:

In the context of finite element analysis, a node is the point at which subregions meet. In problems of structural analysis involving stress or displacement, nodes represent unknown quantities.

SCALAR:

A quantity which has magnitude but no direction in space.

VECTOR:

A quantity that has both magnitude and direction in space.
