Solution:
When a wave encounters an obstacle or aperture, such as a slit, whose size is comparable to its wavelength, diffraction occurs. This phenomenon can produce a pattern of alternating bright and dark regions, called fringes, on the far side of the slit. In single-slit diffraction, both the width and the intensity of the fringes vary across the pattern.
The central maximum is the brightest and the widest, spanning roughly twice the angular width of any secondary maximum. Moving away from the centre, the intensity of the fringes falls off sharply, and the secondary maxima are each about half as wide as the central one. Thus neither the width nor the intensity of the fringes is constant; both depend on the angle measured from the central maximum.
The diffraction pattern for a single-slit can be explained using the Huygens-Fresnel principle, where each point on a wavefront within the slit is considered as a source of secondary spherical wavelets, and the wavelets interfere with each other to produce the diffraction pattern.
The positions of the fringes follow from the condition for the minima: d sin θ = mλ, where d is the width of the slit, θ is the diffraction angle, m is the order of the minimum (m = ±1, ±2, ±3, ...), and λ is the wavelength of the light.
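The minima condition above can be evaluated numerically. The sketch below uses hypothetical values (500 nm light through a 2 µm slit) purely for illustration:

```python
import math

def minima_angles(slit_width, wavelength, orders):
    """Angles (in degrees) of single-slit diffraction minima,
    from the condition d * sin(theta) = m * lambda."""
    angles = []
    for m in orders:
        s = m * wavelength / slit_width  # sin(theta) for order m
        if abs(s) <= 1:                  # only physically realisable angles
            angles.append(math.degrees(math.asin(s)))
    return angles

# Hypothetical example: d = 2 micrometres, lambda = 500 nm.
print(minima_angles(2e-6, 500e-9, [1, 2, 3]))
```

Note that sin θ grows in equal steps of λ/d with each order, but the angle θ itself grows faster than linearly, so the minima are not evenly spaced in angle.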
As m increases, sin θ for the minima increases linearly (in steps of λ/d), but the angle θ itself does not, so the angular spacing between fringes changes across the pattern. The intensity distribution is governed by the single-slit intensity function, I(θ) = I₀ (sin α / α)², with α = (π d / λ) sin θ, which shows that the intensity of the fringes falls off rapidly with the angle θ.
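A minimal sketch of the intensity function stated above makes the fall-off concrete. Near the m-th secondary maximum, α ≈ (m + ½)π, which gives relative intensities of roughly 4.5%, 1.6%, and 0.8% of the central maximum for m = 1, 2, 3:

```python
import math

def relative_intensity(alpha):
    """Single-slit intensity envelope I/I0 = (sin(alpha)/alpha)^2,
    where alpha = (pi * d / wavelength) * sin(theta)."""
    if alpha == 0:
        return 1.0  # central maximum
    return (math.sin(alpha) / alpha) ** 2

# Approximate secondary-maximum positions: alpha ~ (m + 0.5) * pi.
for m in (1, 2, 3):
    alpha = (m + 0.5) * math.pi
    print(m, round(relative_intensity(alpha), 4))
```

This confirms the claim in the answer: the side fringes are drastically dimmer than the central maximum, so the fringes have unequal intensity as well as unequal width.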
The correct answer is: Option D - unequal width and unequal intensity.
© examsnet.com