Moment magnitude scale facts for kids
The moment magnitude scale (MMS; denoted explicitly with M_{w} or Mw, and generally implied with use of a single M for magnitude) is a way to measure the power of earthquakes. The higher the number, the bigger the earthquake. It is based on the earthquake's seismic moment, a measure of the total size of the rupture at its source. Like the similar and older Richter scale, it is logarithmic, with a base of ten.
Moment magnitude (M_{w}) is considered the authoritative magnitude scale for ranking earthquakes by size. It is more directly related to the energy of an earthquake than other scales, and does not saturate—that is, it does not underestimate magnitudes as other scales do in certain conditions. It has become the standard scale used by seismological authorities like the U.S. Geological Survey for reporting large earthquakes (typically M > 4), replacing the local magnitude (M_{L}) and surface wave magnitude (M_{s}) scales. Subtypes of the moment magnitude scale (M_{ww}, etc.) reflect different ways of estimating the seismic moment.
Scale number   Earthquake effect

Less than 3.5   This would be a very weak earthquake. People would not feel it, but it would be recorded by seismographs.
3.5–5.4   Generally felt by people, but it rarely causes damage.
5.4–6.0   Will not cause damage to well-designed buildings, but can cause damage to or destroy small or poorly designed ones.
6.1–6.9   Can be destructive in areas up to about 100 kilometers across where people live.
7.0–7.9   Considered a "major earthquake" that causes a lot of damage.
8 or greater   Large and destructive earthquake that can destroy large cities.
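Because the scale is logarithmic with a base of ten, each whole-number step corresponds to a tenfold increase in recorded wave amplitude and a roughly 32-fold increase in released energy. A small Python sketch of these ratios (the function names are illustrative, not from any standard library):

```python
def amplitude_ratio(m1, m2):
    # Each whole-number step in magnitude means a 10x larger wave amplitude.
    return 10 ** (m2 - m1)

def energy_ratio(m1, m2):
    # Released energy grows by about 10**1.5 (~31.6x) per magnitude unit.
    return 10 ** (1.5 * (m2 - m1))

print(amplitude_ratio(5.0, 7.0))         # a magnitude 7 wave is 100x taller than a magnitude 5 wave
print(round(energy_ratio(5.0, 6.0), 1))  # one full step releases ~31.6x more energy
```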
History
Richter scale: the original measure of earthquake magnitude
At the beginning of the twentieth century, very little was known about how earthquakes happen, how seismic waves are generated and propagate through the earth's crust, and what information they carry about the earthquake rupture process; the first magnitude scales were therefore empirical. The initial step in determining earthquake magnitudes empirically came in 1931 when the Japanese seismologist Kiyoo Wadati showed that the maximum amplitude of an earthquake's seismic waves diminished with distance at a certain rate. Charles F. Richter then worked out how to adjust for epicentral distance (and some other factors) so that the logarithm of the amplitude of the seismograph trace could be used as a measure of "magnitude" that was internally consistent and corresponded roughly with estimates of an earthquake's energy. He established a reference point and the now familiar tenfold (exponential) scaling of each degree of magnitude, and in 1935 published what he called the "magnitude scale", now called the local magnitude scale, labeled M_{L}. (This scale is also known as the Richter scale, but news media sometimes use that term indiscriminately to refer to other similar scales.)
The local magnitude scale was developed on the basis of shallow (~15 km (9 mi) deep), moderate-sized earthquakes at a distance of approximately 100 to 600 km (62 to 373 mi), conditions where the surface waves are predominant. At greater depths, distances, or magnitudes the surface waves are greatly reduced, and the local magnitude scale underestimates the magnitude, a problem called saturation. Additional scales were developed – a surface-wave magnitude scale (M_{s}) by Beno Gutenberg in 1945, a body-wave magnitude scale (mB) by Gutenberg and Richter in 1956, and a number of variants – to overcome the deficiencies of the M_{L} scale, but all are subject to saturation. A particular problem was that the M_{s} scale (which in the 1970s was the preferred magnitude scale) saturates around M_{s} 8.0 and therefore underestimates the energy release of "great" earthquakes such as the 1960 Chilean and 1964 Alaskan earthquakes. These had M_{s} magnitudes of 8.5 and 8.4 respectively but were notably more powerful than other M 8 earthquakes; their moment magnitudes were closer to 9.6 and 9.3.
Single couple or double couple
The study of earthquakes is challenging as the source events cannot be observed directly, and it took many years to develop the mathematics for understanding what the seismic waves from an earthquake can tell us about the source event. An early step was to determine how different systems of forces might generate seismic waves equivalent to those observed from earthquakes.
The simplest force system is a single force acting on an object. If it has sufficient strength to overcome any resistance it will cause the object to move ("translate"). A pair of forces, acting on the same "line of action" but in opposite directions, will cancel; if they cancel (balance) exactly there will be no net translation, though the object will experience stress, either tension or compression. If the pair of forces are offset, acting along parallel but separate lines of action, the object experiences a rotational force, or torque. In mechanics (the branch of physics concerned with the interactions of forces) this model is called a couple, also simple couple or single couple. If a second couple of equal and opposite magnitude is applied their torques cancel; this is called a double couple. A double couple can be viewed as "equivalent to a pressure and tension acting simultaneously at right angles".
The single couple and double couple models are important in seismology because each can be used to derive how the seismic waves generated by an earthquake event should appear in the "far field" (that is, at distance). Once that relation is understood it can be inverted to use the earthquake's observed seismic waves to determine its other characteristics, including fault geometry and seismic moment.
In 1923 Hiroshi Nakano showed that certain aspects of seismic waves could be explained in terms of a double couple model. This led to a three-decade-long controversy over the best way to model the seismic source: as a single couple, or a double couple? While Japanese seismologists favored the double couple, most seismologists favored the single couple. Although the single couple model had some shortcomings, it seemed more intuitive, and there was a belief – mistaken, as it turned out – that the elastic rebound theory for explaining why earthquakes happen required a single couple model. In principle these models could be distinguished by differences in the radiation patterns of their S-waves, but the quality of the observational data was inadequate for that.
The debate ended when Maruyama (1963), Haskell (1964), and Burridge & Knopoff (1964) showed that if earthquake ruptures are modeled as dislocations the pattern of seismic radiation can always be matched with an equivalent pattern derived from a double couple, but not from a single couple. This was confirmed as better and more plentiful data from the World-Wide Standard Seismograph Network (WWSSN) permitted closer analysis of seismic waves. Notably, in 1966 Keiiti Aki showed that the seismic moment of the 1964 Niigata earthquake as calculated from the seismic waves on the basis of a double couple was in reasonable agreement with the seismic moment calculated from the observed physical dislocation.
Dislocation theory
A double couple model suffices to explain an earthquake's far-field pattern of seismic radiation, but tells us very little about the nature of an earthquake's source mechanism or its physical features. While slippage along a fault was theorized as the cause of earthquakes (other theories included movement of magma, or sudden changes of volume due to phase changes), observing this at depth was not possible, and interpreting what the seismic waves reveal requires a theoretical model of the source mechanism itself.
Modeling the physical process by which an earthquake generates seismic waves required much theoretical development of dislocation theory, first formulated by the Italian Vito Volterra in 1907, with further developments by E. H. Love in 1927. More generally applied to problems of stress in materials, an extension by F. Nabarro in 1951 was recognized by the Russian geophysicist A. V. Vvedenskaya as applicable to earthquake faulting. In a series of papers starting in 1956 she and other colleagues used dislocation theory to determine part of an earthquake's focal mechanism, and to show that a dislocation – a rupture accompanied by slipping – was indeed equivalent to a double couple.
In a pair of papers in 1958, J. A. Steketee worked out how to relate dislocation theory to geophysical features. Numerous other researchers worked out other details, culminating in a general solution in 1964 by Burridge and Knopoff, which established the relationship between double couples and the theory of elastic rebound, and provided the basis for relating an earthquake's physical features to seismic moment.
Seismic moment
Seismic moment – symbol M_{0} – is a measure of the fault slip and area involved in the earthquake. Its value is the torque of each of the two force couples that form the earthquake's equivalent double-couple. (More precisely, it is the scalar magnitude of the second-order moment tensor that describes the force components of the double-couple.) Seismic moment is measured in units of newton-meters (N·m) or joules, or (in the older CGS system) dyne-centimeters (dyn·cm).
The first calculation of an earthquake's seismic moment from its seismic waves was by Keiiti Aki for the 1964 Niigata earthquake. He did this in two ways. First, he used data from distant stations of the WWSSN to analyze long-period (200 second) seismic waves (wavelength of about 1,000 kilometers) to determine the magnitude of the earthquake's equivalent double couple. Second, he drew upon the work of Burridge and Knopoff on dislocation to determine the amount of slip, the energy released, and the stress drop (essentially how much of the potential energy was released). In particular, he derived a now famous equation that relates an earthquake's seismic moment to its physical parameters:

 M_{0} = μūS
with μ being the rigidity (or resistance to shearing) of the rock along the fault, S the surface area of the fault that slipped, and ū the average dislocation (slip distance). (Modern formulations replace ūS with the equivalent D̄A, known as the "geometric moment" or "potency".) By this equation the moment determined from the double couple of the seismic waves can be related to the moment calculated from knowledge of the surface area of fault slippage and the amount of slip. In the case of the Niigata earthquake the dislocation estimated from the seismic moment reasonably approximated the observed dislocation.
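The equation M_{0} = μūS can be evaluated directly. The numbers below are hypothetical, chosen only to illustrate the units involved (a rigidity of about 30 GPa is typical of crustal rock):

```python
# Illustrative fault parameters (hypothetical, for demonstration only):
mu = 3.0e10          # rigidity of crustal rock, in pascals (~30 GPa is typical)
slip = 2.0           # average dislocation u-bar, in meters
area = 50e3 * 20e3   # rupture surface S: 50 km long x 20 km deep, in m^2

M0 = mu * slip * area  # seismic moment M0 = mu * u-bar * S, in newton-meters
print(f"M0 = {M0:.2e} N·m")
```

A moment of this size (6×10^19 N·m) corresponds to roughly a magnitude-7 earthquake.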
Seismic moment is a measure of the work (more precisely, the torque) that results in inelastic (permanent) displacement or distortion of the earth's crust. It is related to the total energy released by an earthquake. However, the power or potential destructiveness of an earthquake depends (among other factors) on how much of the total energy is converted into seismic waves. This is typically 10% or less of the total energy, the rest being expended in fracturing rock or overcoming friction (generating heat).
Nonetheless, seismic moment is regarded as the fundamental measure of earthquake size, representing more directly than other parameters the physical size of an earthquake. As early as 1975 it was considered "one of the most reliably determined instrumental earthquake source parameters".
Introduction of an energymotivated magnitude M_{w}
Most earthquake magnitude scales suffered from the fact that they only provided a comparison of the amplitude of waves produced at a standard distance and frequency band; it was difficult to relate these magnitudes to a physical property of the earthquake. Gutenberg and Richter suggested that radiated energy E_{s} could be estimated as

 log_{10} E_{s} ≈ 4.8 + 1.5 M_{s}
(in joules). Unfortunately, the duration of many very large earthquakes was longer than 20 seconds, the period of the surface waves used in the measurement of M_{s}. This meant that giant earthquakes such as the 1960 Chilean earthquake (M 9.5) were only assigned an M_{s} 8.2. Caltech seismologist Hiroo Kanamori recognized this deficiency and took the simple but important step of defining a magnitude based on estimates of radiated energy, M_{w}, where the "w" stood for work (energy):

 M_{w} = (log_{10} E_{s} − 4.8) / 1.5
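Kanamori's energy-based magnitude inverts the standard Gutenberg–Richter energy relation, log_{10} E_{s} ≈ 4.8 + 1.5 M. A minimal Python sketch under that assumption (the 2×10^18 J test value is illustrative, roughly the radiated energy of a magnitude-9 event):

```python
import math

def mw_from_energy(radiated_energy_joules):
    # Invert log10(Es) = 4.8 + 1.5*M to get a magnitude from radiated energy.
    return (math.log10(radiated_energy_joules) - 4.8) / 1.5

# About 2e18 joules of radiated seismic energy corresponds to magnitude ~9.
print(round(mw_from_energy(2.0e18), 1))  # 9.0
```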
Kanamori recognized that measurement of radiated energy is technically difficult since it involves the integration of wave energy over the entire frequency band. To simplify this calculation, he noted that the lowest frequency parts of the spectrum can often be used to estimate the rest of the spectrum. The lowest frequency asymptote of a seismic spectrum is characterized by the seismic moment, M_{0}. Using an approximate relation between radiated energy and seismic moment (which assumes stress drop is complete and ignores fracture energy),

 E_{s} ≈ M_{0} / (2 × 10^{4})
(where E_{s} is in joules and M_{0} is in N·m), Kanamori approximated M_{w} by

 M_{w} = (log_{10} M_{0} − 9.1) / 1.5
Moment magnitude scale
The formula above made it much easier to estimate the energy-based magnitude M_{w}, but it changed the fundamental nature of the scale into a moment magnitude scale. USGS seismologist Thomas C. Hanks noted that Kanamori's M_{w} scale was very similar to a relationship between M_{L} and M_{0} that was reported by Thatcher & Hanks (1973):

 M_{L} ≈ (log_{10} M_{0}) / 1.5 − 10.5

(with M_{0} in dyne-centimeters).
Hanks & Kanamori (1979) combined their work to define a new magnitude scale based on estimates of seismic moment:

 M_{w} = (2/3) log_{10} M_{0} − 6.07

where M_{0} is defined in newton-meters (N·m).
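With M_{0} in newton-meters, the Hanks & Kanamori definition M_{w} = (2/3) log_{10} M_{0} − 6.07 can be sketched in a few lines of Python (the seismic moment used below, ~2.5×10^23 N·m for the 1960 Chilean earthquake, is an approximate literature value):

```python
import math

def moment_magnitude(seismic_moment_nm):
    # Hanks & Kanamori (1979), with seismic moment M0 in newton-meters.
    return (2.0 / 3.0) * math.log10(seismic_moment_nm) - 6.07

# Approximate seismic moment of the 1960 Chilean earthquake.
print(round(moment_magnitude(2.5e23), 1))  # 9.5
```

This reproduces the M 9.5 mentioned above for the Chilean event.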
Current use
Moment magnitude is now the most common measure of earthquake size for medium to large earthquakes, but in practice, seismic moment (M_{0}), the seismological parameter it is based on, is not measured routinely for smaller quakes. For example, the United States Geological Survey does not use this scale for earthquakes with a magnitude of less than 3.5, which includes the great majority of quakes.
Popular press reports most often deal with significant earthquakes larger than M ~ 4. For these events, the preferred magnitude is the moment magnitude M_{w}, not Richter's local magnitude M_{L}.