How does diffraction affect visual acuity?

From Mills S, Massey SC. AII amacrine cells limit scotopic acuity in central macaque retina: a confocal analysis of calretinin labeling. J Comp Neurol.

As noted above, a similar process may occur in the photopic system (Green), where photopic resolution beyond an eccentricity of two degrees falls below that predicted by cone density.

The work of Green and of Mills and Massey provides evidence that post-receptoral processing is another factor that may limit visual acuity.

During steady fixation, the eyes are in constant motion. Under these conditions, retinal images traverse a distance of about 3 minutes of arc in one second.

Contrast Sensitivity

Contrast is an important parameter in assessing vision. Visual acuity measurements in the clinic use high contrast, that is, black letters on a white background. In reality, objects and their surroundings are of varying contrast.

Therefore, the relationship between visual acuity and contrast allows a more detailed understanding of our visual perception.

Grating patterns are used as a means of measuring the resolving power of the eye because the gratings can be adjusted to any size. The contrast of a grating is its differential intensity threshold, defined as the (Michelson) ratio C = (Lmax − Lmin) / (Lmax + Lmin), where Lmax and Lmin are the maximum and minimum luminances of the grating. The luminance of contrast gratings varies in a sinusoidal manner (figure). This allows the contrast of the grating to be altered without changing the average luminance of the screen displaying the gratings.
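As a minimal sketch of these definitions (the mean luminance, contrast, and spatial frequency values below are arbitrary assumptions, not from the text), the following generates a sinusoidal luminance profile and recovers its Michelson contrast; note that the mean luminance is unaffected by the contrast setting:

```python
import numpy as np

# Sketch: a sinusoidal grating L(x) = Lmean * (1 + C * sin(2*pi*f*x)),
# whose Michelson contrast is C = (Lmax - Lmin) / (Lmax + Lmin).

def grating_luminance(x_deg, freq_cpd, mean_lum=50.0, contrast=0.5):
    """Luminance at visual angle x_deg for a grating of freq_cpd cycles/degree."""
    return mean_lum * (1.0 + contrast * np.sin(2 * np.pi * freq_cpd * x_deg))

x = np.linspace(0.0, 2.0, 1000)        # 2 degrees of visual angle
L = grating_luminance(x, freq_cpd=4)   # 4 cycles per degree

michelson = (L.max() - L.min()) / (L.max() + L.min())
print(f"Michelson contrast: {michelson:.2f}")  # ~0.50
print(f"Mean luminance:     {L.mean():.1f}")   # ~50.0, independent of contrast
```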

Luminance profile of sinusoidal gratings with a contrast ratio of 1. For a contrast value of 1, the grating has the maximum and minimum available luminance. The size of the bars of the grating can be expressed in terms of the number of cycles (one cycle consists of one light bar plus one dark bar of the grating) per degree subtended at the eye.

This is called the spatial frequency of the grating and can be thought of as a measure of the fineness or coarseness of the grating. The units are cycles per degree (figure); spatial frequency is a measure of the number of cycles subtended at the eye per degree.

We can determine the sensitivity of the visual system as a function of grating size (spatial frequency). The contrast of the grating patterns is adjusted to determine the threshold for a given spatial frequency. That is, for a given spatial frequency, the contrast can be lowered until detection of the grating becomes impossible (the contrast threshold).

The reciprocal of this contrast threshold is called contrast sensitivity. A plot of contrast sensitivity versus spatial frequency is called the spatial contrast sensitivity function (SCSF), usually abbreviated to CSF.
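As a small illustration of this reciprocal relationship (the threshold values below are made-up placeholders, not measured data, though they are chosen to peak in the mid-frequency range described next):

```python
# Hypothetical contrast thresholds at several spatial frequencies (cpd).
thresholds = {0.5: 0.02, 2: 0.006, 5: 0.004, 10: 0.008, 30: 0.2}

# Contrast sensitivity is simply the reciprocal of each threshold.
csf = {f: 1.0 / c for f, c in thresholds.items()}
for f, s in sorted(csf.items()):
    print(f"{f:>4} cpd -> sensitivity {s:.0f}")
```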

Under photopic conditions, contrast sensitivity measurements reveal a band-pass function when using sinusoidal gratings (figure). The peak of the CSF is in the mid-spatial-frequency range, and only under high contrast conditions is resolution at its maximal level.

Photopic contrast sensitivity function. The shape and critical parameters of the CSF depend on a number of factors, including the mean luminance of the grating, whether the luminance profiles of the gratings are sinusoidal or square waveforms, the level of defocus, and the clarity of the optics of the eye. As shown in figure 24, at photopic light levels the peak contrast sensitivity lies at approximately 5 to 10 cycles per degree, and as mean luminance falls the peak shifts toward lower spatial frequencies (van Nes and Bouman). Figure 24 shows contrast sensitivity functions changing in shape from low-pass at low luminances to band-pass at high luminances.

The contrast sensitivity function provides a more thorough representation of the visual system. For example, the pivotal visual development study of Harwerth et al. showed that the loss of sensitivity in the mid to high spatial frequencies was profound during abnormal visual development, with increased deprivation leading to further contrast losses.

For example, patients with multiple sclerosis will show contrast sensitivity losses at mid to low spatial frequencies (figure 25, curve B), while patients with cataracts will have an overall reduction in contrast sensitivity (figure 25, curve C). Mild refractive error or mild amblyopia will lead to a CSF similar to curve D in figure 25, with more severe refractive errors or severe amblyopia resulting in a CSF similar to curve C.

Examples of how the CSF is altered by refractive error or disease.

Spatial summation is a further factor: within a small retinal area, threshold is reached when the product of luminance L and stimulus area A equals or exceeds a constant value (L × A = constant). The area over which this spatial summation operates is called the critical diameter. In other words, when luminance is halved, a doubling in stimulus area is required to reach threshold.

When luminance is doubled, the stimulus area can be halved and still reach threshold. The critical area varies with eccentricity. Spatial summation occurs due to the convergence of photoreceptors onto ganglion cells. This convergence of photoreceptors forms a receptive field; thus, stimulating different photoreceptors within this receptive field results in one signal. Receptive field sizes vary with eccentricity (figure 26), which helps explain why the critical area varies with eccentricity (Shapley and Enroth-Cugell). Clearly, the size of spatial summation (the functional receptive field) will limit resolution capabilities, as outlined earlier.
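A minimal numeric sketch of the L × A = constant trade-off described above (the constant and units are arbitrary assumptions):

```python
# Ricco's-law-style spatial summation: within the critical area,
# threshold is reached when L * A equals a constant k.
k = 100.0  # assumed threshold constant, arbitrary units

def threshold_luminance(area):
    """Luminance needed to reach threshold for a stimulus of the given area."""
    return k / area

print(threshold_luminance(4.0))  # 25.0
print(threshold_luminance(8.0))  # 12.5 -> doubling area halves the required luminance
```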

Schematic illustration of the size of receptive fields in (a) the parafoveal region (7° eccentricity) and (b) the peripheral retina (35° eccentricity). A schematic spatial summation graph is shown in the figure. A simple logarithmic transform of L × A = constant gives log L = log(constant) − log A, so within the critical area a plot of log L versus log A is a straight line of slope −1. Outside the critical area, such a plot has a slope of 0, indicating that the size of the target does not affect threshold. Spatial summation data are plotted on a logarithmic scale as log L versus log A.

Figure 28 shows data on spatial summation in which spots of light are presented against different background luminances. Note that the critical area is larger for low luminance and smaller for high luminance. Such a change reflects the functional alteration of receptive field size with changes in adaptation level (Shapley and Enroth-Cugell).

Under optimum conditions, two point images are just resolved when the distance between the centres of their retinal spots is about 2 µm, which corresponds to an angular resolution of just less than half a minute of arc. If two point sources are positioned about 10 m from the eye, about 1 mm apart, they subtend approximately this angle at the eye and are just resolved by the optical system. If the separation between two adjacent retinal spots is to be perceived, there must be a receptor present in the intervening non-stimulated area.

See the diagram. There must also be adequate illumination to ensure stimulation of the relevant receptors and the mosaic of retinal receptors must have a sufficiently fine structure to record the resolved images of the optical system.

So, a minimum perceivable separation corresponds to a spacing of at least two receptor diameters. At the fovea, the cone diameter averages about 1.5 µm. Since only cones are present here, a separation of about 3 µm can be perceived, corresponding to a visual angle of just over half a minute. Can you be quantitative? Just what is the limit?
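Before turning to diffraction, a quick numeric check of the geometry above (the 17 mm posterior nodal distance of a reduced eye is an assumed textbook value, not from the text):

```python
import math

def angle_arcmin(separation_m, distance_m):
    """Visual angle in minutes of arc (small-angle approximation)."""
    return math.degrees(separation_m / distance_m) * 60.0

# Two points 1 mm apart viewed from 10 m:
print(f"{angle_arcmin(1e-3, 10.0):.2f} arcmin")   # ~0.34', just under half a minute

# A 3 micrometre separation on the retina, nodal distance ~17 mm:
print(f"{angle_arcmin(3e-6, 17e-3):.2f} arcmin")  # ~0.61', just over half a minute
```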

To answer that question, consider the diffraction pattern for a circular aperture, which has a central maximum that is wider and brighter than the maxima surrounding it, similar to a slit (see Figure 2a). The accepted criterion for determining the diffraction limit to resolution based on this pattern was developed by Lord Rayleigh in the 19th century.

The Rayleigh criterion for the diffraction limit to resolution states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other (see Figure 2b). Figure 2: (a) the diffraction pattern of a circular aperture; note that, similar to a single slit, the central maximum is wider and brighter than those to the sides. (b) The Rayleigh criterion for being just resolvable: the central maximum of one pattern lies on the first minimum of the other.
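For a circular aperture, the Rayleigh limit is θ = 1.22 λ/D, with θ in radians. As a rough sketch for the eye (the 550 nm wavelength and the pupil diameters are assumed, typical values):

```python
import math

def rayleigh_limit_rad(wavelength_m, aperture_m):
    """Rayleigh diffraction limit for a circular aperture, in radians."""
    return 1.22 * wavelength_m / aperture_m

# The diffraction limit shrinks as the pupil widens:
for pupil_mm in (2.0, 3.0, 6.0):
    theta = rayleigh_limit_rad(550e-9, pupil_mm * 1e-3)
    print(f"{pupil_mm} mm pupil: {math.degrees(theta) * 60:.2f} arcmin")
# 2 mm -> ~1.15', 3 mm -> ~0.77', 6 mm -> ~0.38'
```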

All attempts to observe the size and shape of objects are limited by the wavelength of the probe. Even the small wavelength of light prohibits exact precision. When extremely small wavelength probes are used, as with an electron microscope, the system is disturbed, still limiting our knowledge, much as making an electrical measurement alters a circuit. The primary mirror of the orbiting Hubble Space Telescope has a diameter of 2.4 m.

Being in orbit, this telescope avoids the degrading effects of atmospheric distortion on its resolution. Once the diffraction-limited angle is found, the distance between stars can be calculated, since we are given how far away they are. As noted, diffraction effects are most noticeable when light interacts with objects having sizes on the order of the wavelength of light.

However, the effect is still there, and there is a diffraction limit to what is observable. The actual resolution of the Hubble Telescope is not quite as good as that found here. As with all instruments, there are other effects, such as non-uniformities in mirrors or aberrations in lenses, that further limit resolution. Figure 3. These two photographs of the M82 galaxy give an idea of the observable detail using the Hubble Space Telescope compared with that using a ground-based telescope.

The answer in Part 2 indicates that two stars separated by about half a light year can be resolved. The average distance between stars in a galaxy is on the order of 5 light years in the outer parts and about 1 light year near the galactic center. Therefore, the Hubble can resolve most of the individual stars in the Andromeda galaxy, even though it lies at such a huge distance that its light takes 2 million years to reach us.
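The Hubble numbers can be reproduced with the same criterion (2.4 m mirror; 550 nm is an assumed representative visible wavelength):

```python
# Rayleigh limit of the 2.4 m mirror, then the separation that angle
# subtends at the ~2 million light year distance of Andromeda.
theta = 1.22 * 550e-9 / 2.4  # radians
distance_ly = 2.0e6

print(f"theta ~ {theta:.2e} rad")                               # ~2.8e-7 rad
print(f"resolvable separation ~ {theta * distance_ly:.2f} ly")  # ~0.56 light years
```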

Figure 4 shows another mirror used to observe radio waves from outer space.

Figure 4. A 305-m-diameter natural bowl at Arecibo in Puerto Rico is lined with reflective material, making it into a radio telescope. It is the largest curved focusing dish in the world.

Arecibo is still very useful, because important information is carried by radio waves that is not carried by visible light. Diffraction is not only a problem for optical instruments but also for the electromagnetic radiation itself.

This spreading is impossible to observe for a flashlight, because its beam is not very parallel to start with. As light enters the eye, it interacts with the iris, whose opening forms the pupil. When viewing a distant point source, if the optics of the eye were perfect, the diffraction pattern caused by the pupil would be seen.

Thus, diffraction ultimately limits our acuity because it forces a point of light to have a finite size on the retina. Generally, the smaller the area that the wave is forced through, the larger the diffraction pattern will be. Slide 3 illustrates diffraction.

Scatter is another diffraction effect in which light interacts with a series of small particles. The particles absorb the light and re-radiate it in all directions. The size of, and spacing between, the particles determine the degree of scatter. Furthermore, shorter wavelengths are more prone to scattering than longer wavelengths. Smoke, fog, corneal edema, and cataracts all cause scatter. As smoke scatters light, it takes on a blue tinge due to the higher degree of scatter at the blue end of the spectrum.

Since infrared light is less prone to scatter, it is used to penetrate the retina and image structures at the level of the choroid. Scatter can be a source of glare for patients.

Light from peripheral sources can be scattered by scattering bodies in the cornea and lens and end up on the fovea. This scattered light is superimposed onto the image of the object on which the person is fixating.

The scattered light causes a reduction in contrast in the image, thus making it more difficult to see. Slide 4 illustrates scatter.

Molecules can absorb photons of light and move into an excited state. At a later time, they can emit a photon to return to a lower energy state. This process is the foundation for fluorescence. Typically, a photon of a given wavelength is absorbed by the molecule, causing a change in the molecular state.

As the molecule returns to a resting state, some energy is lost due to vibrational and rotational effects, while the remaining energy is emitted as a photon. The new photon has a longer wavelength than the original photon.
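A sketch of the energy bookkeeping (the 490 nm excitation / 520 nm emission pair is an assumed, typical pairing for a fluorescein-like dye, not a value from the text):

```python
H = 6.626e-34  # Planck constant, J*s
C = 2.998e8    # speed of light, m/s

def photon_energy_J(wavelength_m):
    """Photon energy E = h*c / lambda."""
    return H * C / wavelength_m

absorbed = photon_energy_J(490e-9)  # blue excitation (assumed wavelength)
emitted = photon_energy_J(520e-9)   # longer-wavelength green emission (assumed)
print(f"energy lost to vibration/rotation: {absorbed - emitted:.2e} J")  # ~2.3e-20 J
```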

Fluorescein dye is routinely used to evaluate corneal integrity and the fit of contact lenses. The dye is illuminated with a blue wavelength of light and it emits, or fluoresces, in the green.

Polarization deals with the orientation of the electric field in a propagating electromagnetic wave. Unpolarized light has its electric field in random orientations. Conversely, polarized light has its electric field oriented in a single plane.

Polaroid material is designed to transmit only light that has its electric field oriented in one direction. If the pass-axis of the Polaroid is lined up with the electric field of the incident light, the light is transmitted. Light from the sun is unpolarized; sunlight incident on Polaroid sunglasses therefore has only half of its light transmitted. Sunlight reflecting off a shiny object becomes partially polarized in the horizontal direction. By orienting the pass-axis of the Polaroid in the vertical direction, the reflected light is dramatically reduced, as the sketch below illustrates.
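A minimal numeric sketch of these two claims, assuming an ideal polarizer (Malus's law, I = I0 cos²θ, is the standard transmission rule; it is not stated explicitly above):

```python
import math

def transmitted(intensity, angle_deg=None):
    """Intensity passed by an ideal polarizer.

    angle_deg is the angle between the light's polarization and the
    pass-axis; None means the input is unpolarized (half is passed).
    """
    if angle_deg is None:
        return 0.5 * intensity
    return intensity * math.cos(math.radians(angle_deg)) ** 2

print(transmitted(100.0))        # 50.0 -> unpolarized sunlight, half transmitted
print(transmitted(100.0, 90.0))  # ~0   -> horizontal glare vs. vertical pass-axis
```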

Thus, Polaroid sunglasses can dramatically reduce the effects of glare from reflections.

As light is incident upon the interface between two media, it can be transmitted, reflected, or absorbed. Generally, all three happen, and only the relative amounts of each vary with the material. For visible wavelengths, glass provides high transmission, little absorption, and a small amount of reflection. When you look through a store window, you can clearly see a reflection of yourself.

Asphalt absorbs a large portion of the incident light, reflects a small portion (since we see it as a dark shade of gray), and transmits nothing. The amount of light transmitted and reflected at the interface between materials is governed by the angle of incidence of the light and the indices of refraction of the two materials.

Generally, significant differences in the indices of refraction cause more light to be reflected. The Purkinje images are an example of light reflected from the interfaces of differing materials. The first Purkinje image is the reflection from the anterior corneal surface.

The second Purkinje image is the reflection from the cornea-aqueous interface. The third Purkinje image is the reflection from the aqueous-lens interface, and the fourth is the reflection from the lens-vitreous interface. The differences in the indices of refraction between the various internal media of the eye are small; consequently, the second, third, and fourth Purkinje images are dim. The difference between the index of refraction of air and that of the cornea is large, leading to a bright reflection in the first Purkinje image.
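The brightness difference can be sketched with the normal-incidence Fresnel reflectance, R = ((n1 − n2)/(n1 + n2))²; the refractive indices below are approximate schematic-eye values, assumed rather than taken from the text:

```python
def reflectance(n1, n2):
    """Fraction of light reflected at normal incidence (Fresnel equation)."""
    return ((n1 - n2) / (n1 + n2)) ** 2

print(f"air-cornea (P1):    {reflectance(1.000, 1.376):.4f}")  # ~0.025, bright
print(f"aqueous-lens (P3):  {reflectance(1.336, 1.406):.6f}")  # ~0.00065, dim
print(f"lens-vitreous (P4): {reflectance(1.406, 1.336):.6f}")  # ~0.00065, dim
```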

Waves incident on the boundary between two media change direction as a result of interacting with the boundary. The laws of reflection and refraction govern these direction changes. The law of reflection states that the angle of incidence, θi, is equal to the angle of reflection, θr.

The angles are measured relative to a line perpendicular to the interface (the normal). The law of reflection is analogous to a billiard ball reflecting off a bumper.
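A closing sketch of both laws (Snell's law, n1 sin θi = n2 sin θt, is the standard law of refraction; the air-to-cornea indices are assumed values):

```python
import math

def reflect(theta_i_deg):
    """Law of reflection: the reflected angle equals the incident angle."""
    return theta_i_deg

def refract(theta_i_deg, n1=1.000, n2=1.376):
    """Snell's law: n1*sin(theta_i) = n2*sin(theta_t). Angles from the normal."""
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    return math.degrees(math.asin(s))

print(reflect(30.0))           # 30.0 degrees
print(f"{refract(30.0):.1f}")  # ~21.3 degrees, bent toward the normal
```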


