3-D Seismic Volume Visualization in Color: Part 1

During the last decade, the increase in the size of three-dimensional seismic data volumes, as well as the proliferation of attributes generated from them, has added to the woes of seismic interpreters, who are expected to churn through such large quantities of data in short periods of time. Though machine learning techniques promise the capability to sift through “big” data, interactive visualization of the generated attributes that makes effective use of color continues to be lacking. To visualize seismic data and attributes effectively in color, we first need to understand how color is perceived by the human eye and then how such colors are rendered on workstation monitors.

The Visible Spectrum

Light consists of waves that form a small region of the electromagnetic spectrum, detected by the human eye and interpreted by the brain. Wavelengths of light that we can see range from just under 400 nanometers (violet) to a little above 700 nanometers (red); different wavelengths appear as different colors. Any particular color can be picked out of the visible spectrum or obtained by mixing lights of two or more colors. As an example, yellow (580 nanometers) can be picked out of the visible spectrum or formed by mixing red and green lights. Because we have only three relatively broad-band cones in our eyes, most people cannot tell the difference between these two yellows, and experiments mixing lights of different colors confirm this. If we were to plot intensity as a function of wavelength for the light that makes up the color of an object reaching our eyes, we would see a distribution like the one shown at the top of figure 1a. Such a graphical representation is referred to as a spectral power distribution (SPD). If the wavelength of any specific color dominates, the shape of the SPD reflects it. Only when the individual wavelengths are present equally in the light does the human brain perceive it as white; when one wavelength dominates, the brain sees that color. The absence of light is perceived by the brain as black.

Cones and Rods

The human eye has a curved array of light-sensing cells, or photoreceptors, along the retina that are shaped like rods and cones. While rods detect the presence or absence of light, cones respond differently to light of different wavelengths, which we perceive as color. There are three types of cones in the human eye: those sensitive to the long-wavelength (L) or red, the medium-wavelength (M) or green, and the short-wavelength (S) or blue parts of the spectrum. Interestingly, no cone is sensitive to a single wavelength in a rigid sense; the sensitivities overlap. For example, yellow light will excite both the red and green cones, and to a much lesser extent the blue cone, and this pattern of excitation is interpreted by the brain as yellow. Similarly, if light composed of red and green wavelengths falls on the eye, it will excite both the red and green cones and will be perceived as yellow, even though no yellow-colored light falls on the eye. In the same way, when blue light falls on the eye, the blue cone is excited and, to a negligible extent, so are the red and green cones; the brain perceives the color as blue. When white light falls on the retina, all three cones (red, green and blue) are excited, and the brain perceives the light as white. Interestingly, magenta is perceived as a color by the human brain but is not present as a unique wavelength in the color spectrum; rather, it is a mixture of violet and red light. We use this observation to construct cyclical color bars, placing magenta between violet and red (figure 1b). Thus, human color perception depends on the spectral makeup and strength of the incoming signal, that is, on its SPD, which determines both the color and the brightness we perceive.

As our eyes are sensitive to only three main parts of the spectrum, they are unable to distinguish certain SPDs from others. For example, light from a filament lamp may appear to us as white, and sunlight will also appear white, even though the SPDs of the two sources are different: while the SPD of the filament source may consist of spikes in the R, G and B regions of the spectrum, the SPD of sunlight may appear flat, as shown in figure 1. This phenomenon, in which different SPDs are perceived as the same color, is referred to as “metamerism.” Thus, our eyes may not capture all the possible colors of the spectrum, but those generated from the R, G and B regions are good enough. This is the basis of the “tristimulus” theory of human vision. When it comes to reproducing color on TV or computer screens, the different color spaces are also based on the tristimulus theory. Combining lights from the R, G and B areas of the spectrum is a sufficient basis, as a more elaborate basis generating many more colors would produce differences the human eye could not perceive.
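
To make the idea concrete, here is a small sketch in which a flat spectrum and a three-spike spectrum excite three hypothetical cones identically, forming a metameric pair. The Gaussian sensitivity curves and spike wavelengths are illustrative assumptions, not the actual cone responses or CIE color-matching functions.

```python
import numpy as np

# Illustrative sketch of metamerism: two different spectral power
# distributions (SPDs) that excite three hypothetical cones identically.
wl = np.arange(400, 701)  # visible wavelengths, in nanometers

def sensitivity(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Rough Gaussian stand-ins for S (blue), M (green) and L (red) cones
cones = np.vstack([sensitivity(445, 25),
                   sensitivity(545, 40),
                   sensitivity(575, 45)])

# Source 1: a flat SPD, perceived as white
flat_spd = np.ones(wl.size)
flat_response = cones @ flat_spd  # each cone integrates the SPD

# Source 2: three narrow spikes whose amplitudes are solved for so that
# the cone responses match the flat spectrum exactly
spike_idx = [np.searchsorted(wl, s) for s in (450, 540, 610)]
amplitudes = np.linalg.solve(cones[:, spike_idx], flat_response)

spiky_spd = np.zeros(wl.size)
spiky_spd[spike_idx] = amplitudes

# Identical cone responses from very different SPDs: a metameric pair
print(flat_response, cones @ spiky_spd)
```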

Additive versus Subtractive Color

As the different receptors are sensitive to different levels of stimulation, the human eye can distinguish millions of colors. The different color combinations as perceived by the three types of cones can be summarized in the matrix shown in figure 2. The combination of red, green and blue active light sources to reproduce other colors is commonly referred to as the RGB model and is the working model for computer and television monitors. Red, green and blue are referred to as primary colors, as other colors can be obtained by mixing them but the reverse is not true. Such a model is referred to as an “additive color” model.

Another color system is the “subtractive color” model used in painting and printing, wherein the color reflected by an object results from the absorption of the opposite color. For instance, cyan, magenta and yellow (CMY) are the opposites of red, green and blue on the color wheel (figure 1b) and are used as the primary colors for printing. Color printers use cartridges of the CMY colors in addition to black, and together the four colors are denoted CMYK. Cyan is opposite red and halfway between green and blue; magenta is opposite green and halfway between red and blue; and yellow is opposite blue and halfway between red and green.
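
As an illustration, the idealized textbook relationship between the two models is C = 1 - R, M = 1 - G and Y = 1 - B for components normalized between 0 and 1. The sketch below implements this naive conversion, together with the common rule of pulling the shared gray component into the black (K) channel; real printer pipelines use device-specific color profiles rather than this simple arithmetic.

```python
# Minimal sketch of the idealized RGB-to-CMY relationship, assuming
# components normalized to [0, 1].

def rgb_to_cmy(r, g, b):
    # Each subtractive primary absorbs its additive opposite
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_cmyk(c, m, y):
    # Pull the shared gray component out into the black (K) channel
    k = min(c, m, y)
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0  # pure black
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmy(1.0, 0.0, 0.0))                # red -> (0, 1, 1)
print(cmy_to_cmyk(*rgb_to_cmy(0.2, 0.2, 0.2)))  # gray -> mostly K ink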

RGB Color Space

As most seismic interpretation work is carried out on workstation monitors, and as the additive RGB model both drives those monitors and generally covers a greater range of human color perception than the subtractive CMY model, we discuss it in more detail in what follows.

Based on the metamerism principle, the different colors of the spectrum can be matched by a combination of R, G and B lights. Early work on color perception (prior to the 1930s) by pioneers in the field established the spectral response of each of the three cone types, receptive to long (L), medium (M) and short (S) wavelengths, corresponding roughly to red, green and blue. There is a significant overlap in the sensitivity of the L and M cones (figure 3a), and it is not possible to match some of the wavelengths in the blue-to-green part of the spectrum with positive amounts of the three primaries; such colors can only be matched by adding red light to the target color instead (figure 3a), which is confusing and inconvenient. In 1931, the International Commission on Illumination (CIE), based in Vienna, Austria, created a standard color space from the color-matching experiments conducted at that time, mapping the full range of human visual perception. This color standard is recognized internationally, ensuring consistency across different manufacturers of visualization equipment, from airplane control dials to traffic lights. The CIE defined three hypothetical lights (X, Y and Z), related to the original primaries by a linear transformation, such that any wavelength can be matched perceptually by combining positive fractions of X, Y and Z (figure 3b).
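
As a concrete modern instance of such a linear transformation, the sketch below applies the standard 3 x 3 matrix that maps linear sRGB components to CIE XYZ (D65 white point). The sRGB primaries differ from those of the original 1931 experiments, but the principle of a simple linear map from three primaries to X, Y and Z is the same.

```python
import numpy as np

# Standard matrix taking linear sRGB components to CIE XYZ (D65)
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb):
    """Map a linear-sRGB triple (components in [0, 1]) to XYZ."""
    return RGB_TO_XYZ @ np.asarray(rgb, dtype=float)

print(rgb_to_xyz([1.0, 1.0, 1.0]))  # white -> roughly (0.95, 1.00, 1.09)
```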

Defining 3-D Color Space

As stated earlier, the cones in the human eye respond differently because of the three light-sensitive pigments they contain; each type of cone absorbs a different fraction of the incoming light as a function of its wavelength and so has a different spectral response. The existence of three types of cones suggests that color is a 3-D quantity and needs to be defined in a 3-D color space model. The term “color space” arises from the fact that the three primary colors, R, G and B, can be used as the basis of a vector space, described below, in which each color vector is defined by three components, or values of R, G and B. Any color may be represented as a vector in RGB color space, as shown in figure 4. It may also be mentioned here that the distribution of cones in the eye varies from person to person. This implies that the color combinations perceived may differ slightly from one person to another, but all of them can still be represented graphically by the same 3-D color space. Indeed, because the genes for the red- and green-sensitive pigments lie on the X chromosome, a woman who inherits different versions of those genes from her two parents can carry four distinct cone types; such tetrachromatic women have a much richer perception of color, although most do not know they do.

The 3-D color space can be defined in an abstract mathematical way by three vectors along the axes corresponding to the R, G and B colors, as shown in figure 4a. If each primary has unit magnitude, then the origin (0, 0, 0) is black, red is (1, 0, 0), green is (0, 1, 0) and blue is (0, 0, 1). The combination of R and G is yellow, indicated by the dashed-line vector with components (1, 1, 0). Within the space defined by the three vectors, other in-between colors can be created. The range of possible colors represented by a set of primaries is often referred to as a “gamut.” As we come across different types of color combinations, we speak of different gamuts: the gamut of human vision represents all the colors the human eye can perceive; the gamut of a TV represents the colors its display system can show on the screen; and a CMY gamut represents the color combinations available to hard-copy printers.
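
A minimal sketch of this vector view of color: mixing two lights corresponds to adding their RGB vectors, clipped to the unit cube that bounds the gamut.

```python
import numpy as np

# Colors as vectors in RGB space: mixing lights is vector addition,
# clipped to the unit cube that bounds the displayable gamut.
red = np.array([1.0, 0.0, 0.0])
green = np.array([0.0, 1.0, 0.0])

yellow = np.clip(red + green, 0.0, 1.0)
print(yellow)  # [1. 1. 0.], the dashed-line vector of figure 4a
```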

In the context of a 3-D color vector space, such a representation is good in principle but somewhat awkward in practice, as the gamut of all perceptible colors is a complicated shape. If it were a two-dimensional plane, it would be more convenient. Such a 3-D to 2-D mapping can be carried out by forming an equilateral triangle and projecting all color combinations onto it. In figure 4b, the vertices R, G and B have been joined to form an equilateral triangle; anywhere on this plane the sum of the three values is 1, giving it the name unit plane. Within this plane, only two coordinates are required to specify a location. We can now map the tristimulus values for the colors of the spectrum onto it. As we do so, we notice that a vector closer to the blue axis exhibits a bluish color, a vector closer to the green axis a greenish color and a vector closer to the red axis a reddish one. When all the vectors corresponding to the colors of the spectrum are mapped, they trace out a line called the “spectral locus,” a horseshoe of color shown by the dashed black line in figure 4b.

The equilateral triangle shown in figure 4b can be redrawn in two dimensions, as shown in figure 5. It shows the complete gamut of human vision, with the wavelengths indicated along the boundary. As two values are required to define any color on the diagram, it is sometimes also called the xy diagram, or chromaticity diagram.
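
The projection behind the diagram amounts to normalizing the tristimulus values by their sum, so that every color lands on the unit plane and only two coordinates remain independent, as the short sketch below illustrates.

```python
def chromaticity(X, Y, Z):
    # Project a tristimulus vector onto the unit plane X + Y + Z = 1;
    # only two of the resulting coordinates are independent.
    total = X + Y + Z
    return X / total, Y / total

# An equal-energy white plots at the center of the diagram
print(chromaticity(1.0, 1.0, 1.0))  # (0.333..., 0.333...)
```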

Such a two-dimensional representation of the 3-D color space provides insight: the boundary encloses the complete range of colors obtainable from the three primaries, and the diagram can conveniently be used to compare two or more color spaces. Plotted this way, the gamuts of different TV or computer monitors are seen to be subsets of the gamut of human vision (the CIE 1931 color space). Color gamuts are important specifications, as they have a bearing on the image quality produced, though they are not the only factor. Over the years, many new color spaces have been proposed that improve on the 1931 color space of the CIE, but the latter remains the most commonly used.

How Can Such a 3-D Color Space Be Stored Digitally in a Computer?

A “bit” represents the way information is stored in a computer. A single bit has just two values, 0 and 1, which for a pixel would represent black and white. Two bits give four possible combinations (00, 01, 10 and 11), and three bits give eight (000, 001, 010, 011, 100, 101, 110 and 111). In general, the number of combinations is 2 raised to the power of the number of bits. Thus, an 8-bit pixel (one byte) can take 2^8 = 256 possible values, represented as integers between 0 and 255.
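
The arithmetic is easy to verify with a few lines of Python:

```python
# Number of distinct values representable at a given bit depth
for bits in (1, 2, 3, 8, 24):
    print(f"{bits:2d} bits -> {2 ** bits:,} values")
```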

The next question that comes to mind is: are more bits per pixel useful, and how? If we have a 2-bit pixel on a black-to-white scale, it offers four values, representing black, dark gray, light gray and white. Such a scale or color bar looks rather coarse or lumpy in a display or a photograph. More bits allow more gray values on the scale, adding a smoother gradient (or greater color depth) to the image or photograph being displayed.
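
The sketch below makes the effect of bit depth explicit by snapping a normalized amplitude to the nearest of 2^bits gray levels: 2 bits leave only four levels, while 8 bits give a 256-level gradient that appears smooth to the eye.

```python
def quantize(value, bits):
    """Snap a value in [0, 1] to the nearest of 2**bits gray levels."""
    levels = 2 ** bits
    return round(value * (levels - 1)) / (levels - 1)

# A 2-bit scale keeps only black, dark gray, light gray and white
print(sorted({quantize(v / 100, 2) for v in range(101)}))
# An 8-bit scale has 256 levels: visually a smooth gradient
print(quantize(0.5, 8))
```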

Now let us turn to a color image. As stated above, an image in color is typically composed of red, green and blue components, each of which the workstation monitor handles as a “channel.” An 8-bit color scheme allows only 2^8 = 256 colors. However, if we use 8 bits for red, 8 bits for green and 8 bits for blue, or 24 bits per pixel, we end up with 2^24 = 16,777,216 possible color combinations. More color combinations yield smoother shade gradations, resulting in more realistic images. Much of today’s interpretation software was written before the adoption of the Open Graphics Library (OpenGL) by computer hardware and monitor manufacturers. Using OpenGL does not significantly impact performance but, because graphical display forms the basis of interpretation software, its adoption might require a costly rewrite.

Over the years, the colors available on workstations have increased from 1-bit displays with two colors (recall the green phosphor on a black screen), to 8-bit (256) colors, to high-end systems providing 24-bit (256 x 256 x 256) colors.

Sometimes the RGB model is extended by bringing in transparency through an “alpha channel,” stored as the first 8 bits, with the RGB values in the next 24 bits, making up the full 32 bits.
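
A minimal sketch of such 32-bit packing, assuming the alpha byte occupies the top 8 bits followed by red, green and blue (actual byte orders vary across platforms and file formats):

```python
def pack_argb(a, r, g, b):
    # Each channel takes 8 of the 32 bits: alpha in the top byte,
    # then red, green and blue in the lower 24 bits
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_argb(pixel):
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

opaque_yellow = pack_argb(255, 255, 255, 0)
print(hex(opaque_yellow))          # 0xffffff00
print(unpack_argb(opaque_yellow))  # (255, 255, 255, 0)
```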

How Do We See Images in Color on a TV Screen?

In older TVs employing picture-tube, or cathode-ray-tube, technology, an electron gun projects a stream of electrons through a vacuum onto a display screen coated with phosphors. Each pixel on the screen has red, green and blue phosphors. Midway along the tube, magnetic deflection coils steer the electron beam in different directions per the instructions received from the display controller, which in turn acts according to the signal received from the cable carrying the transmitted information. As the electrons strike the phosphors, the phosphors light up, and the red, green and blue emissions combine at each pixel to form the image on the screen.

TVs with picture tubes were bulky and heavy and gave way to slimmer LCD (liquid crystal display) screens. These screens generate images in a different way, employing the electronic switching of liquid crystals, which in turn rotate polarized light. Liquid crystals are distinct from both solids and liquids: they combine the molecular ordering of solids with the fluidity of liquids. In one of the phases of liquid crystals, the molecules may point in different directions, but when a voltage is applied they all tend to align in the same direction.

Visible light is electromagnetic radiation that propagates through space as waves of electrical and magnetic energy vibrating in different directions. If these light waves are made to pass through a grid that has openings in one direction, say vertical, the emerging light vibrates in just one direction; it is plane-polarized, and thus dimmer. This is how polarizing glasses work. If the vertically plane-polarized light emerging from one pair of glasses is made to pass through a second pair rotated perpendicular to the first, no light emerges.
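
Quantitatively, this behavior follows Malus's law, which gives the intensity transmitted through a polarizer at angle θ to the light's plane of polarization as I = I₀ cos²θ; crossed filters (θ = 90 degrees) transmit essentially nothing. A tiny sketch:

```python
import math

def transmitted_intensity(i0, theta_degrees):
    # Malus's law: I = I0 * cos^2(theta), where theta is the angle
    # between the light's polarization plane and the filter axis
    return i0 * math.cos(math.radians(theta_degrees)) ** 2

print(transmitted_intensity(1.0, 0))   # aligned filters: full intensity
print(transmitted_intensity(1.0, 90))  # crossed filters: essentially zero
```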

A flat TV screen has millions of pixels, and each pixel consists of subpixels colored red, green and blue. Behind the screen is a bright light source that projects light toward it. Each pixel has a polarizing filter behind it and another in front of it, with a liquid crystal between the two that can be switched on and off electronically. When switched on, the crystal rotates the light passing through it so that it can also pass through the front polarizing filter; light thus passes through both filters. When switched off, the light passing through the first polarizing filter is blocked by the second, and no light gets through. The pixels are individually addressed electronically and can be switched on and off many times per second. As the pixels light up, the red, green and blue subpixels impart to each pixel its color.
