Camera Design. Film and Digital Cameras

When people hear the words "digital photography," most imagine a compact digital point-and-shoot and the pictures from it viewed on a monitor screen. But what exactly is digital photography?

Over the past ten years the photography industry has grown sharply, driven by the development of digital photography and the worldwide decline in digital camera prices. Let's dip briefly into the history of digital photography. It began in the early 1980s: at a conference in Tokyo on August 25, 1981, Sony presented a prototype camera, the Mavica (Magnetic Video Camera). It recorded images on a two-inch floppy disk that Sony called the "Mavipak," which held 50 color images at a resolution of 570x490 pixels. At the time this matched the maximum resolution of the television sets on which the resulting photographs were viewed. Strictly speaking, the Mavica was not a digital camera but a video camera capable of freezing individual frames. The device had a single shutter speed of 1/60 of a second and a sensitivity of ISO 200.

The revolution came in 1990, when the first consumer camera, the Dycam Model 1 (also sold as the Logitech FotoMan), went on sale. It had a CCD sensor with a resolution of 376x240 pixels and produced black-and-white images with 256 shades of gray. The device was equipped with 1 megabyte of built-in memory, enough to store up to 32 images and transfer them to a personal computer. But the camera had one very serious drawback: if its batteries ran down, all the stored pictures were lost.

A year later, Kodak introduced the professional DCS-100 camera, built on the basis of the Nikon F3. Inside was a sensor with a resolution of 1.3 megapixels (today even mobile phones carry sensors several times that resolution). Images were stored on an external hard disk with a capacity of 200 MB. The entire kit weighed almost 25 kg and cost about $30,000.

Now let us consider how traditional photography differs from digital. The fundamental difference lies in how the image is recorded and stored. In classical photography the image is captured in analog form: light passing through the lens exposes a film coated with layers of silver-halide emulsion. To obtain the final result of shooting, a printed image, the film undergoes chemical processing: developing, fixing, washing, and drying. In traditional photography, then, film is an intermediate storage medium. After development, the image on the film becomes visible, but it is negative (white becomes black, and vice versa) and mirror-reversed. The negative is projected onto photosensitive paper with an enlarger or a contact printer; the exposed paper is in turn developed, fixed, washed, and dried, yielding the finished photograph.

In digital photography, light rays passing through the lens fall on an image sensor (the camera "matrix"), which consists of several million photosensitive pixels, each sensitive to green, red, or blue. A full-color image is created by interpolating the values of neighboring pixels, giving the photograph its thousands of shades. The signal from the sensor is then processed by the camera's processor and written to a memory card or to the camera's built-in flash memory.
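The interpolation step described above (demosaicing) can be illustrated with a toy sketch. Each sensor pixel records only one color channel, laid out here in a hypothetical RGGB Bayer pattern; the missing channels of each pixel are borrowed from its 2x2 tile. Real cameras use far more elaborate algorithms, so treat this as a schematic illustration only.

```python
# Toy illustration of how a camera reconstructs full color from a Bayer
# mosaic in which each sensor pixel records only one of R, G, B.
# Minimal sketch (per-tile interpolation); real demosaicing is much
# more sophisticated.

def demosaic_rggb(mosaic):
    """mosaic: 2D list of raw values laid out as 2x2 RGGB tiles.
    Returns a 2D list of (r, g, b) tuples of the same size."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Snap to the top-left corner of this pixel's 2x2 RGGB tile.
            y0, x0 = y - y % 2, x - x % 2
            r = mosaic[y0][x0]                                 # R at (0, 0)
            g = (mosaic[y0][x0 + 1] + mosaic[y0 + 1][x0]) / 2  # two G sites
            b = mosaic[y0 + 1][x0 + 1]                         # B at (1, 1)
            out[y][x] = (r, g, b)
    return out

raw = [[10, 20, 10, 20],
       [20, 30, 20, 30],
       [10, 20, 10, 20],
       [20, 30, 20, 30]]
print(demosaic_rggb(raw)[0][0])  # (10, 20.0, 30)
```

Every output pixel thus receives an interpolated full-color value even though the sensor measured only one channel at that site.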

There are several recording formats for captured images:
- JPEG (Joint Photographic Experts Group) - created in 1990 by a joint group of experts in the field of photography, and by far the most popular image compression format. It owes its popularity to its favorable size-to-quality ratio: a 15-megabyte file can be compressed to 1.2 megabytes with virtually no visible loss, i.e., the difference can be noticed only by a trained eye, and then only at 100% magnification. The final entropy-coding stage of JPEG compression uses the Huffman algorithm.
- TIFF (Tagged Image File Format) - released in 1986 by the Aldus Corporation as a standard format for storing images created by layout software and scanners. Its extensibility, which allows recording bitmap images of any color depth, makes the format well suited to storing and processing graphic information and has won it wide use in the printing industry. TIFF supports several compression options:
- do not compress the image;
- use the simple PackBits scheme;
- use CCITT Group 3 and Group 4 compression (algorithms also used in facsimile transmission);
- use some additional methods, including LZW and JPEG.
- RAW (from the English word for "unprocessed") - an image format containing the data received directly from the camera sensor, without processing. RAW data uses 12 or 14 bits per pixel (versus 8 bits for JPEG) and contains much more complete information about the image. The format is often called a "digital negative": like film in analog photography, it requires special software to "develop" the raw data into a JPEG that most users can work with.
RAW format extensions for some cameras:
- .bay - Casio
- .arw, .srf, .sr2 - Sony
- .crw, .cr2 - Canon
- .dcr, .kdc - Kodak
- .erf - Epson
- .mrw - Minolta
- .nef - Nikon
- .raf - Fujifilm
- .orf - Olympus
- .ptx, .pef - Pentax
- .x3f - Sigma.
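For illustration, the extension list above can be turned into a small lookup table that guesses the camera maker from a RAW file's name (the table simply restates the list; the function name is invented for this sketch):

```python
# The RAW extension list above as a lookup table: given a file name,
# report which manufacturer's RAW format it likely is.
RAW_MAKERS = {
    ".bay": "Casio", ".arw": "Sony", ".srf": "Sony", ".sr2": "Sony",
    ".crw": "Canon", ".cr2": "Canon", ".dcr": "Kodak", ".kdc": "Kodak",
    ".erf": "Epson", ".mrw": "Minolta", ".nef": "Nikon", ".raf": "Fujifilm",
    ".orf": "Olympus", ".ptx": "Pentax", ".pef": "Pentax", ".x3f": "Sigma",
}

def raw_maker(filename):
    """Return the likely camera maker for a RAW file, or 'unknown'."""
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    return RAW_MAKERS.get(ext, "unknown")

print(raw_maker("DSC_0042.NEF"))  # Nikon
print(raw_maker("IMG_1234.CR2"))  # Canon
```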

DNG (Digital Negative) deserves separate mention: an image format, developed by Adobe and announced in 2004, that aims to standardize the digital negative. Adobe provides the DNG specification free of charge, so any manufacturer of digital photographic equipment can add support for the format. Currently Leica, Pentax, Hasselblad, Ricoh, and Sinar include DNG support in their new cameras alongside their own RAW formats. DNG also requires "development" and converts cleanly to other formats using, for example, the Adobe DNG Converter.

With the advent of digital photography, producing a finished print became much easier. There is no longer any need to "conjure" in a darkroom under the red glow of a safelight with trays of chemicals; it is enough to connect the camera to a photo printer and press "Print" on the picture you like. The cost of consumables has also fallen: a 36-frame roll of film costs about 100 rubles, while a 4 GB SD card costs about 400 rubles yet holds about 1,500 pictures from a 5-megapixel camera. Considering that the card can be reused for many years, the savings are obvious. How much film would you have to pack for a vacation? With a digital camera, even if the memory card fills up, you can delete the less interesting frames on the spot and keep shooting; with film, you see the results only after returning home and developing the roll. This instant feedback lets inexperienced photographers experiment more and progress faster. These and many other conveniences brought by digital photography have fueled a mass enthusiasm for photography among today's youth and made life much easier for professionals.

Today digital photography has practically supplanted its film predecessor, and its development continues. Every month brings announcements of new digital cameras; the resolution of some has already crossed the 20-megapixel mark, and the realism of the resulting picture now rivals the best film SLRs. For some, digital photography is a way to capture the joyful moments in the lives of family and friends; for others, it is a means of self-expression and a chance to translate their most improbable ideas into the world of ones and zeros.

Anatoly Shishkin ©

Digital photography entered everyday life gradually, step by step. NASA began using digital signals in the 1960s in connection with the lunar missions (for example, to map the lunar surface): as is well known, analog signals degrade in transmission, while digital data is far less error-prone. The first high-precision image processing was developed in this period, as NASA applied the full power of its computing technology to processing and enhancing images from space. The Cold War, with its wide variety of spy satellites and secret image-processing systems, also accelerated the development of digital photography.

The first filmless electronic camera was patented by Texas Instruments in 1972. The main disadvantage of that system was that photographs could only be viewed on a television. A similar approach was taken with Sony's Mavica, announced in August 1981 as the first commercial electronic camera; the Mavica could already be connected to a color printer. Even so, it was not a true digital camera but rather a video camera that could capture and display individual frames. The Mavica (Magnetic Video Camera) recorded up to fifty images on two-inch floppy disks using a 570x490-pixel CCD sensor rated at ISO 200. Available lenses included a 25 mm wide-angle, a 50 mm normal, and a 16-65 mm varifocal zoom. Such a system may seem primitive today, but remember that the Mavica was developed almost 25 years ago!

In 1991, Kodak announced the first professional digital camera, the DCS 100, based on the Nikon F3. The DCS 100 had an integrated 1.3-megapixel CCD image sensor and a portable hard drive that stored 156 captured images. That drive weighed about 5 kg, the camera cost $25,000, and the resulting images were suitable only for newspaper printing. Such photographic equipment therefore made sense only when getting the image quickly mattered more than its quality.

The outlook for digital photography became clearer with the introduction in 1994 of two new digital cameras. Apple Computer released the Apple QuickTake 100, an oddly sandwich-shaped camera capable of capturing 8 images at 640 x 480 pixels. It was the first digital still camera retailing for $749. The images it produced were also of poor quality, too low for proper printing, and since the Internet was then in its infancy, the camera never found widespread use.

The second camera, released the same year by Kodak together with the Associated Press, was aimed at photojournalists. Its NC2000 and NC200E models combined the look and feel of a film camera with the instant image access and capture convenience of a digital one. The NC2000 was widely adopted in newsrooms, prompting the switch from film to digital technology.

Since the mid-1990s, digital cameras have grown more capable, computers faster and cheaper, and software more sophisticated. Digital cameras have evolved from exotic devices dear only to their creators into universal, easy-to-use photographic equipment, built even into ubiquitous cell phones, with technical characteristics approaching those of the latest full-frame (35 mm) digital cameras. In the quality of the images obtained, such equipment now surpasses film cameras.

The changes constantly taking place in digital camera technology are remarkable.

1. Purpose of work

To study analog and digital image-recording technologies; the basic operating principles, construction, controls, and settings of modern cameras; the classification and structure of black-and-white and color negative photographic films, the main characteristics of photographic films, and the method of choosing photographic materials for specific photographic tasks; and analog and digital photographic technique. To gain practical skills in operating the devices under study.

2. Theoretical background: the design of a film (analog) camera

A modern autofocus camera is reasonably compared to the human eye. Fig. 1 (left) shows a human eye schematically. When the eyelid opens, the image-forming light flux passes through the pupil, whose diameter is regulated by the iris depending on the light intensity (limiting the amount of light); it then passes through the crystalline lens, is refracted, and is focused on the retina, which converts the image into electrical signals and passes them along the optic nerve to the brain.

Fig. 1. Comparison of the human eye with the design of the camera

Fig. 1 (right) shows the structure of the camera schematically. When a photograph is taken, the shutter opens (controlling the exposure time), and the image-forming light flux passes through an opening whose diameter is regulated by the diaphragm (controlling the amount of light); it then passes through the lens, is refracted, and is focused on the photographic material, which records the image.

A film (analog) camera is an optical-mechanical device used for taking photographs. It contains interconnected mechanical, optical, electrical, and electronic components (Fig. 2). A general-purpose camera consists of the following main parts and controls:

- a body forming a light-tight chamber;

- lens;

- diaphragm;

- photographic shutter;

- shutter button - initiates the shooting of the frame;

- viewfinder;

- focusing device;

- photographic film (roll);

- cassette (or other device for placing photographic film)

- film transporting device;

- exposure meter;

- built-in photo flash;

- camera batteries.

Depending on the purpose and design, photographic devices have various additional devices to simplify, clarify and automate the process of photographing.

Fig. 2. The design of a film (analog) camera

Body - the structural basis of the camera, uniting its units and parts into an optical-mechanical system. The walls of the body form a light-tight chamber, with the lens at the front and the photographic film at the back.

Lens (from the Latin objectus, "object") - an optical system enclosed in a mount, facing the subject and forming its optical image. The photographic lens is designed to form a light image of the subject on the light-sensitive material; the character and quality of the photographic image depend largely on its properties. Lenses are either permanently built into the camera body or interchangeable. By the ratio of focal length to the frame diagonal, lenses are conventionally divided into normal, wide-angle, and telephoto.

Varifocal (zoom) lenses allow images to be captured at different scales from a fixed shooting distance. The ratio of the longest to the shortest focal length is called the zoom ratio: a lens with a focal length variable from 35 to 105 mm is called a 3x zoom.
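The zoom-ratio arithmetic is trivial but worth making explicit; this small sketch (the function name is ours) reproduces the 35-105 mm example:

```python
# Zoom ratio as defined above: longest focal length divided by shortest.
def zoom_ratio(f_min_mm, f_max_mm):
    return f_max_mm / f_min_mm

print(zoom_ratio(35, 105))  # 3.0 -> a "3x zoom"
print(zoom_ratio(16, 65))   # 4.0625 -> roughly a 4x zoom (the Mavica varifocal)
```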

Diaphragm (from the Greek diaphragma) - a device that restricts the beam of rays passing through the lens, reducing the illumination of the photographic material at the moment of exposure and changing the depth of field. It is usually realized as an iris diaphragm consisting of several blades whose movement continuously varies the diameter of the opening (Fig. 3). The aperture value can be set manually or automatically by special devices; in the lenses of modern cameras, the aperture is set from the electronic control panel on the camera body.

Fig. 3. The iris diaphragm mechanism consists of a series of overlapping blades

Photographic shutter - a device that admits light rays to the photographic material for a set time, called the shutter speed (exposure time). The shutter is opened at the photographer's command by pressing the shutter button, or by a programmed mechanism such as the self-timer. Shutter speeds timed by the shutter itself are called automatic. There is a standard series of shutter speeds, measured in seconds:

30, 15, 8, 4, 2, 1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000, 1/2000, 1/4000

Adjacent values in this series differ by a factor of two: moving from one shutter speed (say, 1/125) to its neighbor either doubles (1/60) or halves (1/250) the exposure time of the photographic material.
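This doubling relationship is usually expressed in "stops." A minimal sketch using base-2 logarithms reports how many stops separate two shutter speeds (the nominal series values such as 1/125 are rounded, so real ratios are only approximately 2):

```python
import math

# Each step of the shutter-speed series doubles or halves the light.
# stops_between reports the change in stops when moving from shutter
# time t1 to t2 (positive = more light).
def stops_between(t1, t2):
    return math.log2(t2 / t1)

print(round(stops_between(1/125, 1/60)))   # 1: one stop more light
print(round(stops_between(1/125, 1/250)))  # -1: one stop less light
```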

By design, shutters are divided into central (leaf) shutters and focal-plane shutters.

A central shutter has light-blocking blades, several thin metal leaves arranged concentrically right next to the optical unit of the lens or between its elements, driven by a system of springs and levers (Fig. 4). A simple clockwork mechanism most often serves as the timer in central shutters; at short shutter speeds the opening time is regulated by the spring tension. Modern central shutters have an electronic timing unit, with the blades held open by an electromagnet. Central shutters operate automatically at shutter speeds from 1 to 1/500 of a second.

Shutter-diaphragm - a central shutter in which the maximum opening of the blades is adjustable, so that the shutter also acts as a diaphragm.

In a central shutter, when the release button is pressed the blades diverge, opening the light passage of the lens from the center to the periphery like an iris diaphragm and forming an opening centered on the optical axis. The light image therefore appears over the entire frame area at once: illumination rises as the blades open and falls as they close. The shutter is re-cocked before the next frame is taken.

Fig. 4. Some types of central shutters: left - with single-acting blades; center - with double-acting blades; right - with blades that act as both shutter and diaphragm

The operating principle of the central shutter ensures highly uniform illumination of the resulting image, and it allows flash to be used over virtually the entire range of shutter speeds. Its disadvantage is the limited ability to achieve very short exposures, owing to the large mechanical loads on the blades as their speed of movement increases.

A focal-plane (curtain-slit) shutter has light-blocking elements in the form of curtains (corrugated brass ribbon) or a set of movably linked slats (Fig. 5) made of light alloys or carbon fiber, located in the immediate vicinity of the photographic material (in the focal plane). The shutter is built into the camera body and driven by a spring system; in modern cameras electromagnets replace the springs of the classic design, giving higher exposure accuracy. When the shutter is cocked, the photographic material is covered by the first curtain. On release, the first curtain moves under spring tension, opening the way to the light flux; at the end of the set exposure time the flux is blocked by the second curtain. At shorter exposures the two curtains travel together a fixed interval apart, and the photographic material is exposed through the slit formed between the trailing edge of the first curtain and the leading edge of the second; the exposure time is regulated by the width of that slit. The shutter returns to its initial position before the next frame is taken.

Fig. 5. Focal-plane (curtain-slit) shutter (curtain travel across the frame window)

The focal-plane shutter permits the use of interchangeable lenses, since it is not mechanically coupled to the lens, and it provides shutter speeds up to 1/12000 s. However, it does not always expose the frame window uniformly, in which respect it is inferior to central shutters. Flash units can be used with a focal-plane shutter only at those shutter speeds (the sync speed) at which the slit is wide enough to open the frame window fully; in most cameras these are 1/30, 1/60, 1/90, 1/125, or 1/250 s.

Self-timer - a timer that releases the shutter automatically after an adjustable delay once the shutter button is pressed. Most modern cameras include a self-timer as an additional component of the shutter design.

Exposure meter - an electronic device for determining the exposure parameters (shutter speed and f-number) for a given subject brightness and a given film speed. In automatic systems the search for such a combination is called program processing. Once the nominal exposure has been determined, the shooting parameters (f-number and shutter speed) are set on the corresponding scales of the lens and shutter. In cameras with varying degrees of automation, both exposure parameters, or only one of them, are set automatically. To improve metering accuracy, especially when shooting with interchangeable lenses, attachments, and accessories that significantly affect the lens aperture, the photocells of the metering system are placed behind the lens. Such a light-measurement system is called TTL (Through The Lens). One variant of this system is shown in the diagram of the reflex viewfinder (Fig. 6): the metering sensor, a receiver of light energy, is illuminated by light that has passed through the optical system of the lens mounted on the camera, including any filters and attachments fitted to it at the moment.

Viewfinder - an optical system designed to accurately determine the boundaries of the space included in the image field (frame).

Frame (from the French cadre) - a single photographic image of the subject. The frame boundaries are set by cropping at the shooting, processing, and printing stages.

Cropping in photography, film, and video - the purposeful choice of shooting position, angle, direction, and lens angle of view to achieve the desired placement of objects in the viewfinder's field of view and in the final image.

Cropping when printing or editing an image - the choice of the boundaries and aspect ratio of the photographic image. It allows everything insignificant or accidental that interferes with perception to be left outside the frame, and creates visual emphasis on the important part of the composition.

Optical viewfinders contain only optical and mechanical elements and no electronic ones.

Parallax viewfinders use an optical system separate from the taking lens. Parallax arises because the optical axis of the viewfinder does not coincide with that of the lens; its effect depends on the angles of view of the lens and the viewfinder. The longer the focal length of the lens (and hence the smaller its angle of view), the larger the parallax error. In the simplest cameras the viewfinder and lens axes are made parallel, limiting linear parallax, whose effect is smallest with focus set to infinity. In more sophisticated cameras the focusing mechanism carries a parallax-compensation mechanism: the optical axis of the viewfinder is tilted toward that of the lens, and the discrepancy is smallest at the focusing distance. The advantage of a parallax viewfinder is its independence from the taking lens, which allows a brighter, slightly reduced image with clearly defined frame boundaries.

Telescopic viewfinder (Fig. 6). Used in compact and rangefinder cameras; it exists in several variants:

Galilean viewfinder - an inverted Galilean telescope, consisting of a short-focus negative lens and a long-focus positive eyepiece;

Albada viewfinder - a development of the Galilean viewfinder. The photographer observes a frame line located near the eyepiece and reflected from the concave surface of the viewfinder's front lens. The position of the frame line and the curvature of the surfaces are chosen so that its image appears at infinity, solving the problem of showing sharp frame boundaries. This is the most common viewfinder type on compact cameras;

Parallax-free viewfinders.

Reflex (mirror) viewfinder - consists of a lens, a deflecting mirror, a focusing screen, a pentaprism, and an eyepiece (Fig. 6). The pentaprism renders the image upright and laterally correct, as our eyes are accustomed to seeing it. During framing and focusing the deflecting mirror reflects almost 100% of the light entering through the lens onto the frosted glass of the focusing screen (in cameras with autofocus and metering, part of the light flux is diverted to the corresponding sensors).

Beam splitter. With a beam splitter (a semi-transparent mirror or prism), 50-90% of the light passes through a mirror tilted at 45° to the photographic material, while 10-50% is reflected at 90° onto frosted glass, where it is viewed through an eyepiece as in a reflex camera. The drawback of this viewfinder is its poor performance in low light.

Focusing consists of positioning the lens relative to the surface of the photographic material (the focal plane) at the distance at which the image on that plane is sharp. Sharpness is governed by the relationship between the distance from the first principal point of the lens to the subject and the distance from the second principal point to the focal plane. Fig. 7 shows five different positions of the subject and the corresponding positions of the image:

Fig. 6. Schematics of telescopic and reflex viewfinders

Fig. 7. Relationship between the distance from the principal point O of the lens to the object K and the distance from the principal point O to the image K' of the object

The space to the left of the lens (in front of the lens) is called object space, and the space to the right of the lens (behind the lens) is called image space.

1. If the object is at infinity, its image is formed behind the lens in the principal focal plane, i.e., at a distance equal to the focal length f.

2. As the object approaches the lens, its image moves farther and farther back toward the point F'2 at twice the focal length.

3. When the object reaches the point F2, i.e., a distance equal to twice the focal length, its image lies at the point F'2; whereas until now the object was larger than its image, the two are now equal in size.

4. As the object moves still closer, from F2 toward F1, its image recedes beyond F'2 and becomes larger than the object.

5. When the object is at the point F1, the rays emerging behind the lens form a parallel beam and no image is produced.

In close-up (macro) photography the subject is placed at a short distance (sometimes less than 2f), and various devices are used to extend the lens farther from the film than its focusing mount allows.

Thus, to obtain a sharp image of the subject, the lens must be set at the proper distance from the focal plane before shooting, that is, focused. In cameras, focusing is performed by moving a group of lens elements along the optical axis with a focusing mechanism. Focusing is usually controlled by rotating a ring on the lens barrel (this ring may be absent on cameras with a fixed-focus lens set to the hyperfocal distance, or on cameras that offer only autofocus).
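The five cases above all follow from the thin-lens equation 1/f = 1/d + 1/d', where d is the object distance and d' the lens-to-film distance. A small sketch (simplified: it ignores the separation of the principal points):

```python
# Thin-lens relation: 1/f = 1/d + 1/d'. Given the focal length f and
# the object distance d (both in mm), compute the lens-to-film
# distance d' needed for sharp focus.
def image_distance(f_mm, object_mm):
    if object_mm <= f_mm:
        # Object at or inside the focal point: rays emerge parallel
        # or diverging, and no real image is formed (case 5).
        raise ValueError("no real image for objects at or inside f")
    return 1 / (1 / f_mm - 1 / object_mm)

print(image_distance(50, 10_000_000))  # ~50: distant subject, d' ≈ f (case 1)
print(image_distance(50, 100))         # ~100: subject at 2f images at 2f (case 3)
```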

Focusing directly on the surface of the photographic material is impossible, so various focusing devices are used for visual control of sharpness.

Distance-scale focusing, using a scale on the lens barrel, gives good results with lenses of large depth of field (wide-angle lenses). This method is used in the broad class of scale-focusing film cameras.

Rangefinder focusing offers high accuracy and is used with fast lenses of relatively shallow depth of field. A schematic of a rangefinder combined with a viewfinder is shown in Fig. 8. When a subject is observed through a rangefinder viewfinder, two images are visible in the central part of the field of view, one formed by the rangefinder's optical channel and the other by the viewfinder channel. Moving the lens along the optical axis acts through the levers (7) to rotate the deflecting prism (6), shifting the transmitted image horizontally. When the two images coincide in the viewfinder's field of view, the lens is in focus.

Fig. 8. Schematic of a rangefinder device for focusing the lens: a: 1 - viewfinder eyepiece; 2 - cube with a semi-transparent mirror layer; 3 - diaphragm; 4 - camera lens; 5 - rangefinder lens; 6 - deflecting prism; 7 - levers connecting the lens barrel to the deflecting prism; b - the lens is focused by bringing the two images in the viewfinder's field of view into coincidence (two images - the lens is not set accurately; one image - the lens is set accurately)

Focusing in an SLR camera. A schematic of a reflex camera is shown in Fig. 6. Light rays passing through the lens strike the mirror and are reflected onto the matte surface of the focusing screen, forming a light image there. This image is erected by the pentaprism and viewed through the eyepiece. The distance from the rear principal point of the lens to the frosted surface of the focusing screen equals the distance from that point to the focal plane (the surface of the film). The lens is focused by rotating the ring on the lens barrel while continuously inspecting the image on the focusing screen, seeking the position at which the image is sharpest.

Autofocus systems.

Lens autofocusing proceeds in several stages:

Measurement of a focus-sensitive parameter of the image in the focal plane (distance to the subject, maximum image contrast, phase shift between components of a split beam, delay of a reflected pulse, etc.) and of its rate of change (to choose the direction of the correction and to predict the focusing distance at the next instant when the subject is moving);

Generation of a reference signal equivalent to the measured parameter, and determination of the error signal of the automatic focusing control system;

Sending the control signal to the focusing actuator.

These processes take place almost simultaneously.

The optical system is driven to sharp focus by an electric motor. The time needed to measure the chosen parameter, plus the time the lens mechanics take to work off the error signal, determines the speed of the autofocus system.

Autofocus system operation can be based on various principles:

Active autofocus systems: ultrasonic; infrared.

Passive autofocus systems: phase-detection (used in film and digital SLR cameras); contrast-detection (camcorders, mirrorless digital cameras).

Ultrasonic and infrared systems calculate the distance to the subject from the round-trip time of ultrasonic or infrared pulses emitted by the camera. A transparent barrier between the subject and the camera causes such systems to focus erroneously on the barrier rather than on the subject.

Phase-detection autofocus. Special sensors in the camera body receive fragments of the light flux from different points of the frame via a system of mirrors. Inside each sensor, two separator lenses project a double image of the subject onto two rows of photosensitive elements, which generate electrical signals whose character depends on the amount of light falling on them. When the subject is in exact focus, the two light fluxes lie a certain distance apart, fixed by the sensor design and the equivalent reference signal. When the plane of focus K (Fig. 9) lies in front of the subject, the two signals move closer together; when it lies behind the subject, they move farther apart. The sensor measures this separation, generates an equivalent electrical signal, and a dedicated microprocessor compares it with the reference signal, determines the mismatch, and commands the focusing actuator. The lens focusing motors execute the commands, refining focus until the sensor signals coincide with the reference. The speed of such a system is very high and depends mainly on the speed of the lens's focusing drive.

Contrast-detection autofocus. The microprocessor continuously analyzes the degree of contrast in the image and issues commands to move the lens until the image of the subject is sharp. Contrast detection is slow because the microprocessor has no initial information about the current focus state of the lens (the image is assumed blurred to begin with): it must command a lens movement away from the initial position and then analyze how the contrast changed. If the contrast has not increased, the processor reverses the command to the autofocus drive, and the motor moves the lens group in the opposite direction until maximum contrast is recorded. When the maximum is reached, autofocusing stops.
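The move-measure-reverse loop described above is a hill-climbing search, which can be sketched as follows (a minimal model: `contrast_at` stands in for the real contrast measurement, and the step size and limits are arbitrary):

```python
def contrast_autofocus(contrast_at, position=0, step=1, max_steps=100):
    """Hill-climb toward maximum image contrast.

    `contrast_at(pos)` models the contrast measured with the focusing
    group at `pos`.  The loop mirrors the text: move, re-measure, and
    reverse direction once if contrast stops improving.
    """
    current = contrast_at(position)
    direction = 1
    for _ in range(max_steps):
        trial = position + direction * step
        measured = contrast_at(trial)
        if measured > current:
            position, current = trial, measured  # contrast grew: keep going
        elif direction == 1:
            direction = -1                       # reverse once, try the other way
        else:
            break                                # maximum passed in both directions
    return position

# A hypothetical lens whose contrast peaks at position 7:
print(contrast_autofocus(lambda p: -(p - 7) ** 2))  # 7
```

The blind first move and the re-measurement after every step are exactly why contrast autofocus is slower than phase detection, which computes the required lens movement in one measurement.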

The delay between pressing the shutter button and the moment the frame is taken is explained by the operation of passive contrast autofocus and by the fact that in mirrorless cameras the processor must read the entire frame from the sensor (CCD) in order to analyze only the focus areas for contrast.

Photo flash. Electronic flash units are used as the main or an additional light source and come in several types: built-in camera flash, external self-powered flash, and studio flash. While a built-in flash has become standard on almost all cameras, the high power of stand-alone flash units gives more flexible aperture control and a wider range of shooting techniques.

Fig. 9. Scheme of phase-detection autofocus

The main components of the flash unit:

A pulsed light source - a gas-discharge lamp filled with an inert gas - xenon;

Lamp ignition device - step-up transformer and auxiliary elements;

Electric energy storage - high-capacity capacitor;

Power supply device (batteries of galvanic cells or accumulators, current converter).

The units are combined into a single structure, consisting of a body with a reflector, or arranged in two or more blocks.

Pulsed gas-discharge lamps are powerful light sources whose spectral characteristic is close to natural daylight. Lamps used in photography (Fig. 10) are glass or quartz tubes filled with an inert gas (xenon) at a pressure of 0.1–1.0 atm, with molybdenum or tungsten electrodes installed at their ends.

The gas inside the lamp does not conduct electricity. To ignite the lamp there is a third, trigger electrode in the form of a transparent layer of tin dioxide. When a voltage not lower than the ignition voltage is applied to the electrodes and a high-voltage (>10,000 V) trigger pulse is applied between the cathode and the trigger electrode, the lamp ignites. The high-voltage pulse ionizes the gas in the bulb along the outer electrode, creating an ionized cloud connecting the positive and negative electrodes of the lamp and allowing the gas between the two main electrodes to ionize in turn. Because the resistance of the ionized gas is only 0.2–5 Ohm, the electrical energy stored in the capacitor is converted into light in a very short time. The pulse duration (the time during which the pulse intensity falls to 50% of its maximum value) is 1/400 to 1/20,000 s or shorter. Quartz flash-lamp envelopes transmit light with wavelengths from 155 to 4500 nm, glass ones from 290 to 3000 nm. Flash-lamp emission begins in the ultraviolet part of the spectrum, so a special coating on the bulb is required; it not only cuts off the ultraviolet region, acting as a UV filter, but also corrects the color temperature of the flash to the photographic standard of 5500 K.

Fig. 10. The construction of a pulsed gas-discharge lamp

The energy of a flash lamp is measured in joules (watt-seconds) using the formula:

E max = C (U ign² − U ext²) / 2,

where C is the capacitance of the capacitor (farads), U ign is the ignition voltage (volts), U ext is the extinction voltage (volts), and E max is the maximum energy (W·s).

The flash energy depends on the capacity and voltage of the storage capacitor.

Three ways to control flash energy.

1. Parallel connection of several capacitors (C = C1 + C2 + C3 + … + Cn), switching groups of them on and off to control the radiation power. The color temperature remains stable with this method, but the power can only be set in discrete steps.

2. Changing the initial voltage on the storage capacitor allows the energy to be regulated in the range of 100–30%; at lower voltages the lamp will not ignite. A further refinement of this technology introduces an additional small capacitor into the lamp trigger circuit, which is charged to a voltage sufficient to start the lamp while the remaining capacitors are charged to a lower value; this makes it possible to obtain any intermediate power value in the range from 1:1 to 1:32 (100–3%). The discharge in this mode approaches a glow discharge in its characteristics, which lengthens the glow time of the lamp, and the overall color temperature of the radiation approaches the standard 5500 K.

3. Interrupting the pulse when the required power is reached. If, at the moment the gas in the bulb is ionized, the electrical circuit from the capacitor to the lamp is broken, ionization stops and the lamp goes out. This method requires special electronic control circuits that monitor a given voltage drop across the capacitor or measure the luminous flux returned from the subject.

Guide number - the power of a flash unit expressed in conventional units, equal to the product of the distance from the flash to the subject and the f-number. The guide number depends on the flash energy, the light-scattering angle, and the reflector design. Typically, the guide number is quoted for photographic material with a sensitivity of ISO 100.

Knowing the guide number and the distance from the flash to the subject, you can determine the aperture required for correct exposure by the formula:

f-number = guide number / distance (m).

For example, with a guide number of 32 we get: f/8 at a distance of 4 m (32/4), f/5.6 at 5.7 m (32/5.7), or f/4 at 8 m (32/8).
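The guide-number relation can be checked numerically. A minimal sketch (the function names are illustrative, not from any camera API):

```python
def aperture_for(guide_number, distance_m):
    """f-number giving correct flash exposure at ISO 100: N = GN / d."""
    return guide_number / distance_m

def max_distance(guide_number, f_number):
    """Greatest subject distance the flash covers at a given f-number."""
    return guide_number / f_number

# Reproducing the text's example for a guide number of 32:
print(aperture_for(32, 4))    # 8.0  -> f/8 at 4 m
print(max_distance(32, 4))    # 8.0  -> f/4 reaches 8 m
```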

The amount of light reaching the subject is inversely proportional to the square of the distance from the light source (the first law of illumination); therefore, to double the effective distance of the flash at a fixed aperture, the sensitivity of the photographic material must be increased 4 times (Fig. 11).

Fig. 11. The first law of illumination

For example, with a guide number of 10 and an aperture of 4, we get:

At ISO100 - effective distance = 10/4 = 2.5 (m)

At ISO400 - effective distance = 5 (m)
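Because of the inverse-square law, the effective flash distance grows with the square root of the sensitivity ratio. A small sketch reproducing the example above (the function name is illustrative):

```python
import math

def effective_distance(guide_number, f_number, iso):
    """Flash range at a given ISO; the guide number is quoted for ISO 100.

    Doubling the distance requires 4x the light (inverse-square law),
    so the range scales with the square root of the sensitivity ratio.
    """
    return (guide_number / f_number) * math.sqrt(iso / 100)

print(effective_distance(10, 4, 100))  # 2.5 m, as in the text
print(effective_distance(10, 4, 400))  # 5.0 m
```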

Flash automatic modes

A modern photographic flash, using the film-speed and aperture settings on the camera, can dose the amount of light by cutting off the lamp discharge at the command of the automation. The amount of light can only be adjusted downward: either a full discharge, or a smaller part of it if the subject is close enough and maximum energy is not required. The automation of such units measures the light reflected from the subject, assuming it faces a medium-gray object with a reflectance of 18%; this can lead to exposure errors when the reflectivity of the subject differs significantly from that value. To solve this problem, flash units provide an exposure-compensation mode that adjusts the flash energy according to the lightness of the subject, either increasing (+) or decreasing (-) it relative to the level calculated by the automation. The mechanism of flash exposure compensation is similar to the one discussed earlier.

It is very important to know at what shutter speeds a manual or automatic flash can be used, since the flash duration is very short (thousandths of a second). The flash must fire when the shutter is fully open; otherwise the shutter curtain may block part of the image in the frame. The fastest such shutter speed is called the sync speed; it ranges from 1/30 to 1/250 s on different cameras. If you choose a shutter speed slower than the sync speed, you can also choose the moment at which the flash fires.

Synchronization on the first (opening) curtain fires the pulse immediately after the frame window is fully open; the moving subject is then illuminated by the constant ambient light, leaving a blurred trail in the frame. In this case the trail appears in front of the moving subject.

Synchronization on the second (closing) curtain fires the pulse just before the camera shutter begins to close the frame window. As a result, the trail from a moving subject is exposed behind the subject, emphasizing the dynamics of its movement.

The most advanced flash models have a mode that divides the energy into equal parts and emits them in succession over a certain time interval at a certain frequency. This mode is called stroboscopic, and the frequency is specified in hertz (Hz). If the subject moves across the frame, stroboscopic mode captures individual phases of the movement, "freezing" them with light: in a single frame you can see all the phases of the subject's motion.

Red-eye effect. When people are photographed with flash, their pupils may appear red in the picture. Red-eye is caused by flash light reflecting off the retina at the back of the eye and returning directly into the lens. The effect is typical of built-in flash because it sits close to the optical axis of the lens (Fig. 12).

Ways to reduce the red-eye effect

With a compact camera you can only reduce the likelihood of red-eye, not eliminate it. The problem is also partly individual: in some people the red-eye effect can appear even when shooting without flash.

Fig. 12. The scheme of formation of the red-eye effect

To reduce the likelihood of red-eye, a number of methods exploit the property of the human eye to contract the pupil as illumination increases: the eyes are illuminated either by a preliminary lower-power flash before the main pulse or by a bright lamp at which the subject must look.

The only reliable way to combat this effect is to use an external, self-powered flash unit on an extension cord, positioning its optical axis roughly 60 cm away from the optical axis of the lens.

Film transport. Modern film cameras are equipped with a built-in motor drive for transporting the film inside the camera. After each shot, the film is automatically advanced to the next frame and the shutter is simultaneously cocked.

There are two film-transport modes: single-frame and continuous. In single-frame mode, one shot is taken each time the shutter button is pressed; in continuous mode, a series of frames is shot while the shutter button is held down. Rewinding of the exposed film is performed by the camera automatically.

The film transport mechanism consists of the following elements:

Film cassette;

Take-up spool, onto which the exposed film is wound;

Sprocket roller, which engages the perforations and advances the film in the frame window by one frame. More advanced transport systems use smooth rollers instead of a sprocket, and one row of film perforations is read by a sensor system to position the film precisely on the next frame;

Locks for opening and closing the back cover of the camera when changing the film cassette.

Cassette - a light-proof metal case in which the film is stored; it is installed in the camera before shooting and removed after shooting is finished. The cassette of a 35 mm camera is cylindrical, consists of a spool, a body, and a cap, and holds up to 165 cm of film (36 frames).

Photographic film - a photosensitive material on a flexible transparent base (polyester, cellulose nitrate, or cellulose acetate) coated with a photographic emulsion containing silver halide grains, which determine the photosensitivity, contrast, and optical resolution of the film. After exposure to light (or other electromagnetic radiation, such as X-rays), a latent image forms on the film; a visible image is obtained by subsequent chemical processing. The most common format is 35 mm perforated film with 12, 24, or 36 frames (frame size 24 × 36 mm).

Films are subdivided into professional and amateur.

Professional films are designed for more precise exposure and post-processing, they are produced with tighter tolerances in basic characteristics and, as a rule, require storage at a lower temperature. Amateur films are less demanding on storage conditions.

Film can be black and white or color:

Black-and-white film is intended for recording black-and-white negative or positive images with a camera. In black-and-white film there is a single layer of silver salts. Upon exposure to light and subsequent chemical processing, the silver salts are converted into metallic silver. The structure of black-and-white photographic film is shown in Fig. 13.

Fig. 13. The structure of black-and-white negative photographic film

Color film is intended for recording color negative or positive images with a camera and uses at least three layers. Sensitizing dyes adsorbed onto the silver-salt crystals make the crystals sensitive to different parts of the spectrum; this method of changing spectral sensitivity is called sensitization. The layer sensitive only to blue, usually unsensitized, lies on top. Since all the other layers are sensitive to blue in addition to their "own" spectral ranges, they are separated from it by a yellow filter layer; then come the green- and red-sensitive layers. During exposure, clusters of metallic silver atoms form in the silver halide crystals, just as in black-and-white film. This metallic silver then serves to develop the color dyes (in proportion to the amount of silver), after which it is converted back into salts and washed out during bleaching and fixing, so the image in color film is formed by the color dyes. The structure of color photographic film is shown in Fig. 14.

Fig. 14. Structure of color negative film

There is also a special monochrome film that is processed in the standard color process but produces a black-and-white image.

Color photography has become widespread thanks to the advent of a variety of cameras, modern negative materials and, of course, the development of a wide network of mini-photo laboratories that allow you to quickly and efficiently print images of various formats.

Film is divided into two large groups:

Negative. On this type of film the image is inverted: the lightest areas of the scene correspond to the darkest areas of the negative, and on color film the colors are inverted as well. Only when printed on photographic paper does the image become positive (real) (Fig. 15).

Reversal (slide) films are so named because on the processed film the colors correspond to the real ones, giving a positive image. Reversal film, often referred to as slide film, is used primarily by professionals and achieves excellent results in color saturation and clarity of detail. The developed reversal film is already the final product, a transparency (each frame is unique).

The term "slide" means a transparency mounted in a 50 × 50 mm frame (Fig. 15). Slides are used mainly for projection onto a screen with a slide projector and for digital scanning for printing.

Selecting the photosensitivity of the photographic film

Light sensitivity of a photographic material is its ability to form an image under the influence of electromagnetic radiation, in particular light. It characterizes the exposure needed to render the subject normally in the picture and is expressed numerically in ISO units (from the International Organization for Standardization), the universal standard for calculating and designating the photosensitivity of all photographic films and digital camera sensors. The ISO scale is arithmetic: doubling the value corresponds to doubling the photosensitivity of the material. ISO 200 is twice as sensitive as ISO 100 and half as sensitive as ISO 400. For example, if at ISO 100 a given scene meters at 1/30 s and f/2.0, then at ISO 200 you can shorten the shutter speed to 1/60 s, and at ISO 400 to 1/125 s.
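The arithmetic of the ISO scale can be sketched in a couple of lines (the function name is illustrative; the text's 1/125 figure is the conventional rounding of the exact 1/120):

```python
def equivalent_shutter(base_shutter_s, base_iso, new_iso):
    """Shutter time giving the same exposure when ISO changes (aperture fixed).

    The ISO scale is arithmetic: doubling the ISO halves the required
    exposure time.
    """
    return base_shutter_s * base_iso / new_iso

# The text's example: 1/30 s at ISO 100, same scene and aperture.
print(equivalent_shutter(1/30, 100, 200))  # 1/60 s
print(equivalent_shutter(1/30, 100, 400))  # 1/120 s (marked 1/125 on cameras)
```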

Among general-purpose color negative films, ISO 100, ISO 200, and ISO 400 are the most common; the most sensitive general-purpose film is ISO 800.

With the simplest cameras, the available range of exposure parameters (shutter speed, aperture) may be insufficient for specific shooting conditions. Table 1 will help you choose the film speed for the planned shooting.

Fig. 15. The analog photographic process

Fig. 16. Analog photography technology

Table 1

Evaluation of the possibility of shooting on photographic material of different photosensitivity

Light sensitivity (ISO) | Sun | Cloudiness | Movement, sports | Flash photography

The lower the ISO of a film, the finer the grain, especially at high magnifications. Always use the lowest ISO that the shooting conditions allow.

The film grain parameter describes how visibly the image is made up of individual grains (clumps) of dye rather than being continuous. Film grain is expressed in relative units of grain size (RMS in the English-language literature). This parameter is fairly subjective, since it is determined by visual comparison of test samples under a microscope.

Color distortion. Color distortions related to film quality show up as reduced color differences between details in highlights and shadows (gradation distortion), reduced color saturation (color-separation distortion), and reduced color differences between fine image details (distortion of visual perception). Most color films are universal, balanced for daylight shooting at a color temperature of 5500 K (the kelvin is the unit of the color temperature of a light source) or for electronic flash (also 5500 K). A mismatch between the color temperature of the light source and the film causes color distortions (unnatural tints) to appear in the print; this happens, for example, when shooting on daylight film under artificial light from fluorescent lamps (2800–7500 K) or incandescent lamps (2500–2950 K).

Let's look at a few typical examples of shooting on universal daylight-balanced film:

- Shooting in clear sunny weather. The color rendition in the picture is correct (true to life).

- Indoor shooting under fluorescent lamps. The color rendition is shifted toward a prevalence of green.

- Indoor shooting under incandescent lamps. The color rendition is shifted toward a prevalence of a yellow-orange tint.

Such color distortions require color correction during shooting (correction filters) or during printing, so that the prints are perceived as close to reality.

Modern photographic films are packed in metal cassettes whose surface carries a code containing information about the film.

DX coding - a method of designating the type of photographic film and its parameters and characteristics for automatic reading and processing of this data by the control system of an automatic camera or by an automatic minilab during printing.

DX coding uses bar and checkerboard codes. The barcode (for minilabs) is a series of parallel dark stripes of different widths with light gaps, applied in a set order to the surface of the cassette and directly onto the film. It contains the data needed for automatic development and printing: the type of film, its color balance, and the number of frames.

The checkerboard DX code is intended for automatic cameras and consists of 12 light and dark rectangles alternating in a set order on the surface of the cassette (Fig. 17). Conductive (metallic) areas of the code correspond to "1" and insulated (black) areas to "0" of a binary code. For cameras, the film speed, the number of frames, and the photographic latitude are encoded. Zones 1 and 7 are always conductive and correspond to "1" (common contacts); zones 2–6 encode the film speed; zones 8–10 the number of frames; zones 11–12 the photographic latitude of the film, i.e. the maximum permitted deviation of exposure from nominal (EV).
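Reading the 12 contact zones amounts to splitting a binary pattern into the fields named above. A sketch (the zone pattern in the example is arbitrary, not a real film's code, and the standardized tables mapping each field to an ISO value, frame count, or latitude are not reproduced here):

```python
def split_dx_fields(zones):
    """Split the 12 DX contact zones into the fields named in the text.

    `zones` is a list of 12 ints (1 = conductive, 0 = insulated),
    zone 1 first.  Zones 1 and 7 are common contacts and must read 1.
    """
    assert len(zones) == 12 and zones[0] == 1 and zones[6] == 1
    return {
        "speed_bits":    zones[1:6],    # zones 2-6: film speed
        "frames_bits":   zones[7:10],   # zones 8-10: number of frames
        "latitude_bits": zones[10:12],  # zones 11-12: exposure latitude
    }

# A hypothetical cassette pattern, zones 1 and 7 conductive as required:
fields = split_dx_fields([1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1])
print(fields["speed_bits"])  # [1, 0, 1, 1, 0]
```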


Fig. 17. DX coding with the checkerboard code

Dynamic range - one of the main characteristics of photographic materials (film, digital photo and video camera sensors) in photography, television, and cinema; it determines the maximum range of subject brightness that the material can reproduce reliably at nominal exposure. Reliable reproduction means that equal differences in the brightness of the subject's elements are rendered as equal differences in brightness in the image.

Dynamic range is the ratio of the maximum permissible value of the measured quantity (brightness) to its minimum value (the noise level). It is measured as the ratio of the maximum and minimum exposures of the linear portion of the characteristic curve. Dynamic range is usually measured in exposure units (EV) or aperture stops and expressed as a base-2 logarithm (EV), or, less often (in analog photography), as a decimal logarithm (denoted by the letter D); 1 EV = 0.3 D.

L = lg H max − lg H min,

where L is the photographic latitude and H is the exposure (Fig. 18).

To characterize the dynamic range of photographic films, the concept of photographic latitude is usually used: the range of brightness that the film can reproduce without distortion, at uniform contrast (the brightness range of the linear part of the film's characteristic curve).

The characteristic curve of silver-halide photographic materials (photographic film, etc.) is nonlinear (Fig. 18). Its lower part is the fog region; D0 is the optical density of the fog (for photographic film, the density of unexposed material). Between points D1 and D2 lies a region of almost linear growth of blackening with increasing exposure, corresponding to the photographic latitude. At high exposures the blackening of the material passes through the maximum Dmax (for photographic film, the density of the highlights).

In practice, the term "useful photographic latitude" of a photographic material, Lmax, is used; it corresponds to a longer section of "moderate nonlinearity" of the characteristic curve, from the threshold of least blackening D0 + 0.1 to a point near the maximum optical density of the photographic layer, Dmax - 0.1.

Photosensitive elements that work on the photoelectric principle have a physical limit called the "charge quantization limit". The electric charge in one photosensitive element (matrix pixel) consists of electrons (up to 30,000 in a saturated element; for digital devices this is the "maximum" pixel value, which limits the photographic latitude from above), while the intrinsic thermal noise of the element is at least 1–2 electrons. Since the number of electrons roughly corresponds to the number of photons absorbed by the element, this sets the maximum theoretically achievable photographic latitude of the element at about 15 EV (the binary logarithm of 30,000).
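The 15 EV figure follows directly from the full-well capacity quoted in the text:

```python
import math

# Full-well capacity of ~30,000 electrons against a noise floor of a
# couple of electrons bounds the per-pixel dynamic range (values taken
# from the text above):
full_well_electrons = 30_000
print(math.log2(full_well_electrons))  # ~14.87, i.e. about 15 EV
```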

Fig. 18. Characteristic curve of photographic film

For digital devices, the lower limit (Fig. 19) manifests itself as an increase in "digital noise", whose causes add up from the thermal noise of the sensor, charge-transfer noise, and analog-to-digital conversion (ADC) error, also called "sampling noise" or "quantization noise".

Fig. 19. Characteristic curve of a digital camera sensor

For ADCs of different bit depths (numbers of bits) used to quantize the signal into a binary code (Fig. 20), the greater the number of quantization bits, the smaller the quantization step and the higher the conversion accuracy. During quantization, the number of the nearest quantization level is taken as the reported value.

Quantization noise means that a continuous change in brightness is transmitted as a discrete, stepped signal, so different brightness levels of the subject are not always rendered as different levels of the output signal. With a three-bit ADC, any change of brightness in the range from 0 to 1 exposure stop is converted to the value 0 or 1, so all image details in that exposure range are lost. With a four-bit ADC, detail in the 0-to-1 range can be transmitted, which in practice means extending the photographic latitude by 1 stop (EV). Hence the photographic latitude of a digital camera (expressed in EV) cannot exceed the bit depth of its analog-to-digital conversion.
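The loss of fine detail at low bit depths can be demonstrated with a toy quantizer (a sketch, not a model of any particular camera's ADC):

```python
def quantize(signal, bits):
    """Quantize an analog level in [0, 1] to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return round(signal * levels) / levels

# Two nearby brightness levels survive only if the ADC has enough steps:
a, b = 0.50, 0.56
print(quantize(a, 3) == quantize(b, 3))  # True: the 3-bit ADC merges them
print(quantize(a, 8) == quantize(b, 8))  # False: the 8-bit ADC keeps them apart
```

Every extra bit doubles the number of steps, which is why each added bit of ADC depth corresponds to roughly one extra stop of recoverable latitude.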

Fig. 20. Analog-to-digital conversion of a continuous brightness variation

The term photographic latitude is also understood as the permissible deviation of exposure from nominal, for a given photographic material and given shooting conditions, at which detail is still preserved in the light and dark areas of the scene.

For example, the photographic latitude of KODAK GOLD film is 4 stops (-1 EV to +3 EV). This means that at a nominal exposure of f/8, 1/60 s for a given scene, you would still get details of acceptable quality that would otherwise require shutter speeds from 1/125 s to 1/8 s at a fixed aperture.

When using FUJICHROME PROVIA slide film with a photographic latitude of 1 stop (-0.5 EV to +0.5 EV), the exposure must be determined as accurately as possible: at the same nominal exposure of f/8, 1/60 s with a fixed aperture, acceptable detail covers only what would require shutter speeds from 1/90 s to 1/45 s.
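The shutter-speed spans quoted in these two examples follow from the latitude in stops (a sketch; camera markings round the exact values, e.g. 1/120 is marked 1/125):

```python
def shutter_range(nominal_s, under_ev, over_ev):
    """Shutter-speed span covered by a film's latitude at fixed aperture.

    under_ev / over_ev are the tolerated exposure deviations in stops,
    e.g. -1 and +3 for a film with 4 stops of latitude.
    """
    return nominal_s * 2 ** under_ev, nominal_s * 2 ** over_ev

# 4-stop negative film, nominal 1/60 s:
fast, slow = shutter_range(1/60, -1, 3)
print(1/fast, 1/slow)  # 120.0 and 7.5 -> roughly 1/125 s to 1/8 s

# 1-stop slide film, nominal 1/60 s:
fast, slow = shutter_range(1/60, -0.5, 0.5)
print(round(1/fast), round(1/slow))  # about 1/85 and 1/42, close to the quoted 1/90-1/45
```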

Insufficient photographic latitude of the photographic process leads to the loss of image details in the light and dark areas of the scene (Fig. 21).

The dynamic range of the human eye is ≈15 EV, that of typical subjects about 11 EV, and that of night scenes with artificial lighting and deep shadows up to 20 EV. It follows that the dynamic range of modern photographic materials is insufficient to reproduce every subject in the surrounding world.

Typical dynamic-range (useful photographic latitude) figures for modern photographic materials:

- color negative films: 9–10 EV;

- color reversal (slide) films: 5–6 EV;

- digital camera sensors:

compact cameras: 7–8 EV;

SLR cameras: 10–14 EV;

- photographic print (by reflection): 4–6.5 EV.

Fig. 21. Influence of the dynamic range of the photographic material on the shooting result

Camera Batteries

Chemical power sources are devices in which the energy of the chemical reactions taking place in them is converted into electricity.

The first chemical current source was invented by the Italian scientist Alessandro Volta in 1800. The voltaic cell is a vessel of salt water with zinc and copper plates lowered into it and connected by a wire. Volta later assembled a battery of these cells, which came to be called the voltaic pile (Fig. 22).

Fig. 22. Voltaic pile

Chemical current sources are based on two electrodes in contact with an electrolyte: a cathode containing an oxidizing agent and an anode containing a reducing agent. A potential difference is established between the electrodes, an electromotive force corresponding to the free energy of the redox reaction. The operation of chemical current sources is based on spatially separated processes in a closed external circuit: the reducing agent is oxidized at the anode, and the freed electrons travel through the external circuit, creating an electric current, to the cathode, where they take part in the reduction of the oxidizer.

Modern chemical power sources use:

- as a reducing agent (at the anode): lead - Pb, cadmium - Cd, zinc - Zn and other metals;

- as an oxidizing agent (at the cathode): lead oxide PbO2, nickel hydroxide NiOOH, manganese dioxide MnO2, etc.;

- as an electrolyte: solutions of alkalis, acids or salts.

By reusability, chemical power sources are divided into:

galvanic cells, which, because the chemical reactions in them are irreversible, cannot be used repeatedly (recharged);

electric accumulators - rechargeable galvanic cells that can be recharged and reused with an external current source (charger).

Galvanic cell - a chemical source of electric current, named after Luigi Galvani. Its operation is based on the interaction of two metals through an electrolyte, which produces an electric current in a closed circuit. The EMF of a galvanic cell depends on the electrode materials and the composition of the electrolyte. The following electrochemical cells are now widely used:

The most common salt and alkaline cells of the following standard sizes:

ISO designation | IEC designation

As the chemical energy is exhausted, the voltage and current fall and the cell stops working. Different galvanic cells discharge differently: salt cells lose voltage gradually, while lithium cells hold their voltage over almost the entire service life.

Electric accumulator - a reusable chemical current source. Accumulators are used for energy storage and for autonomous power supply of various consumers. Several accumulators combined in one electrical circuit are called a storage battery. Battery capacity is usually measured in ampere-hours. The electrical and performance characteristics of a battery depend on the electrode material and the electrolyte composition. The following battery types are now the most common:

The operating principle of an accumulator is based on the reversibility of its chemical reaction. As the chemical energy is depleted, the voltage and current fall: the battery discharges. Its performance can be restored by charging it with a special device that passes current in the direction opposite to the discharge current.

Modern digital cameras have much in common with older film cameras, which is not surprising: digital photography essentially grew out of film photography, borrowing many of its components and assemblies. The similarity is especially close between a digital SLR and a film camera: both use a lens with which the camera focuses on the subject, and the process is the same for the photographer, who simply presses the shutter button and ultimately gets a photograph.

Nevertheless, despite the similarity of the shooting process, a digital camera is much more complex in design than a film camera, and this complexity gives digital cameras significant advantages: instant results, convenience, and extensive control over photography and image processing. To understand how a digital camera is built, we must first answer a few questions: How is a photographic image created? Which components did the digital camera borrow from the film camera? And what has been added with the development of digital technology?

How film and digital cameras work

The principle of operation of a conventional film camera is as follows. Light reflected from the subject or scene passes through the lens aperture and is focused onto flexible polymer film coated with a light-sensitive emulsion layer based on silver halide. The tiny grains of these chemicals change their transparency and color under the influence of light; as a result of chemical reactions, the film "memorizes" the image.

As is well known, any shade existing in nature can be formed from a combination of three primary colors: red, green and blue. All other colors and shades are obtained by mixing them and varying their saturation. Each microgranule on the surface of the film is responsible for its own color in the image and changes its properties exactly to the degree that light rays strike it.
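As a toy illustration of this additive model, here is a minimal Python sketch. The function and the channel values are purely illustrative, not part of any real imaging pipeline:

```python
# Additive color: every shade is a triple of red, green and blue intensities.
def mix(red, green, blue):
    """Clamp each channel to the 0-255 range and return the resulting shade."""
    clamp = lambda v: max(0, min(255, v))
    return (clamp(red), clamp(green), clamp(blue))

white = mix(255, 255, 255)   # all three primaries at full strength
black = mix(0, 0, 0)         # no light at all
yellow = mix(255, 255, 0)    # red + green with no blue
```

Varying the three intensities independently is enough to describe any of the roughly 16.7 million shades an 8-bit-per-channel image can hold.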

Since light differs in color temperature and intensity, the chemical reaction on the film produces an almost complete duplicate of the scene being shot. A particular style of photograph is formed depending on the characteristics of the optics, the illumination, the exposure time of the scene on the film, the aperture setting, and other factors.

A digital camera also uses an optical system. Light rays pass through the lens, refracting in a particular way. They then reach the diaphragm, a variable opening that regulates the amount of light. From there, the light rays no longer fall on the emulsion layer of photographic film but on the light-sensitive cells of a semiconductor sensor, or matrix. The sensor reacts to photons of light, captures the photographic image and passes it to an analog-to-digital converter (ADC).

The ADC analyzes the analog electrical impulses and converts them, using special algorithms, into digital form. The recoded image is stored digitally on built-in or external electronic media. The finished image can then be viewed on the camera's LCD screen or displayed on a computer monitor.
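The essence of what an ADC does can be sketched in a few lines. This is a deliberately simplified model: real camera ADCs are 12- to 14-bit hardware circuits, and the voltage scale here is illustrative only:

```python
# A minimal sketch of analog-to-digital conversion: sample a voltage and
# quantize it into one of 2**bits discrete codes.
def quantize(voltage, v_max=1.0, bits=8):
    """Map an analog voltage in [0, v_max] to an integer code 0..2**bits - 1."""
    levels = 2 ** bits - 1
    v = max(0.0, min(v_max, voltage))   # clip out-of-range input
    return round(v / v_max * levels)
```

The continuous voltage from each photocell becomes a discrete number; the stream of such numbers is the digital image.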

Throughout this multi-step process of capturing an image, the camera's electronics continually poll the system so that it responds immediately to the photographer's actions. The photographer, via numerous buttons, controls and settings, can influence the quality and style of the resulting digital image. And this whole complex process inside a digital camera takes place in a matter of seconds.

Basic elements of a digital camera

Even visually, the body of a digital camera resembles that of a film camera, except that a digital camera has no film reel or film channel. In film cameras, the film was attached to a spool, and once the roll was finished the photographer had to rewind it manually. In the film channel, the film was advanced to the next frame for shooting.

In digital cameras all this is a thing of the past, and by getting rid of the film channel and the space for a roll of film, the camera body could be made significantly thinner. Still, some features of film cameras have carried over smoothly into digital photography. To see this, let us consider the main elements of a modern digital camera:

- Lens


In both film and digital cameras, light rays pass through the lens to produce an image. A lens is an optical device consisting of a set of lens elements and used to project an image onto a plane. The lenses of digital SLR cameras are virtually indistinguishable from those used in film cameras. Moreover, many modern SLRs are compatible with lenses designed for film models. For example, older F-mount lenses can be used with all Nikon DSLRs.

- Aperture and shutter

The aperture is a round opening through which the amount of light falling on the photosensitive matrix or film is adjusted. This variable opening, usually located inside the lens, is formed by several crescent-shaped blades that converge or diverge during shooting. Naturally, a diaphragm is present in both film and digital cameras.


The same can be said about the shutter, which is installed between the matrix (or film) and the lens. In film cameras, however, a mechanical shutter is used: a kind of curtain that limits the time light acts on the film. Modern digital devices are equipped with an electronic equivalent, which switches the sensor on and off to receive the incoming light flux. The electronic shutter provides precise control over how long the camera's matrix receives light.

Some digital cameras, however, also retain a traditional mechanical shutter, which prevents light rays from reaching the matrix after the exposure time has expired. This avoids blurring of the picture or the appearance of a halo effect. It is worth noting that since a digital camera may need some time to process and save an image, there is a delay between the moment the photographer presses the shutter button and the moment the camera captures the image. This delay is called shutter lag.

- Viewfinder

Both film and digital cameras have a sighting device, that is, a device for previewing the frame. An optical viewfinder, consisting of mirrors and a pentaprism, shows the photographer the image exactly as it exists in nature. Many modern digital cameras, however, are equipped with an electronic viewfinder: it takes the image from the light sensor and shows the photographer the scene the way the camera sees it, taking into account the current settings and the effects in use.

In inexpensive compact digital cameras, a viewfinder as such may be absent altogether; its functions are performed by the built-in LCD screen with a Live View function. LCD screens are now built into DSLRs as well, because such a screen lets the photographer see the results of the shooting immediately. If a picture did not come out well, it can be deleted on the spot and a new frame shot with different settings or from a different angle.

- Matrix and analog-to-digital converter (ADC)

Having examined how film and digital cameras work, we can now see their main difference: in a digital camera, a photosensitive matrix, or sensor, has replaced the photographic film. The matrix is a semiconductor wafer on which a huge number of photocells are placed.

Its dimensions do not exceed the size of a film frame. Each sensitive element of the matrix, when light strikes it, creates a minimal image element: a pixel, a one-color square or rectangle. The sensor elements react to light and create an electrical charge. In this way, the matrix of a digital camera captures the light flux.

The matrix of a digital camera is characterized by such parameters as physical dimensions, resolution and sensitivity, that is, the ability of the matrix to accurately capture the flow of light falling on it. All of these parameters have an impact on the quality of the photo image.

The information received from the sensor in the form of electrical impulses is then fed to the analog-to-digital converter (ADC) for processing. The function of the latter is to convert these analog pulses into a digital data stream, that is, to digitize the image.

- Microprocessor

A microprocessor was present in some late models of film cameras, but in the digital camera it became one of the key elements. The microprocessor is responsible for the operation of the shutter, viewfinder, matrix, autofocus, image stabilization system and optics, as well as for recording photo and video material to the media and for selecting settings and program shooting modes. It is a kind of brain center of the camera, controlling all the electronics and individual components.


The performance of the microprocessor largely determines how quickly a digital camera can shoot continuously. In this regard, in some advanced models of digital cameras, two microprocessors are used at once, which can perform separate operations in parallel. This ensures maximum burst shooting speed.

- Information carrier

While an analog (film) camera fixes the image directly on film, in a digital camera the electronics record the image in digital format on an external or internal storage medium. In most cases, removable memory cards are used for this purpose. Some cameras also have a small built-in memory, enough to hold a few captured frames.


Also, digital cameras must be equipped with appropriate connectors to be able to connect them to a personal or tablet computer, TV and other devices. Thanks to this, the photographer is able to post the finished image on the Internet, send it by e-mail or print it just a few minutes after shooting.

- Battery

Many film cameras use a battery to power the electronics, which, in particular, control focusing and automatic exposure. This work does not require significant power, so a film camera can run for several weeks on a single charge.

Digital photography is another matter. Here, the life of a camera battery is measured in hours. Therefore, in order to maintain the operation of the camera in the absence of a source of electricity, the photographer sometimes has to stock up on additional batteries.

Although digital photography has borrowed many components from film photography, it has a number of significant advantages. First of all, there is the ability to check the shooting results instantly and make the necessary adjustments. Thanks to the peculiarities of its design, a digital camera gives any photographer more flexibility in the shooting process through a wide range of control over image quality. Digital technology provides instant access to any frame and high-speed shooting. The combination of flexibility, wide functionality and shooting efficiency allows the owner of a digital camera to obtain excellent-quality photos in almost any conditions.

The possibilities of digital photographic equipment are far from exhausted today. As digital cameras become more and more sophisticated, they will incorporate new technologies that increase the functionality of the devices and deliver even higher image quality.

It is quite difficult to learn how to photograph well if you do not know the basics and main terms and concepts in photography. Therefore, the purpose of this article is to give a general understanding of what photography is, how a camera works, and to get acquainted with basic photographic terms.

Since film photography has today become largely history, we will talk about digital photography from here on, although 90% of the terminology is the same and the principles of obtaining a photograph are identical.

How is the photograph made

The term photography means "painting with light". In essence, the camera captures the light entering through the lens onto the matrix, and an image is formed on the basis of that light. The mechanism by which an image is obtained from light is rather complicated, and many scientific papers have been written on the topic, but detailed knowledge of this process is not strictly necessary.

How does the image formation take place?

Passing through the lens, light hits the photosensitive element, which records it. In a digital camera this element is the matrix. The matrix is initially shielded from light by the shutter, which, when the shutter button is pressed, retracts for a certain time (the shutter speed), allowing light to act on the matrix during that time.

The result, that is, the photo itself, directly depends on the amount of light hitting the matrix.

Photography is the fixation of light on the camera's matrix

Types of digital cameras

By and large, there are 2 main types of cameras.

Mirrored (DSLR) and mirrorless. The main difference between them is that in an SLR camera, a mirror installed in the body lets you see the image in the viewfinder directly through the lens.
That is, "what I see, I take pictures."

Modern mirrorless cameras use two methods for this.

  • An optical viewfinder located away from the lens. When shooting, you need to make a small correction for the offset of the viewfinder relative to the lens. Usually found on point-and-shoot cameras.
  • An electronic viewfinder. The simplest example is transferring the image directly to the camera's display. Usually used on point-and-shoot cameras, but in DSLRs this mode is often available alongside the optical viewfinder and is called Live View.

How the camera works

Consider the work of a DSLR camera as the most popular option for those who really want to achieve something in photography.

A DSLR camera consists of a body (usually - "carcass", "body" - from the English body) and a lens ("glass", "lens").

Inside the body of the digital camera there is a matrix that captures the image.

Pay attention to the diagram above. When you look through the viewfinder, light passes through the lens, is reflected off the mirror, then refracted in the prism and into the viewfinder. This way you see through the lens what you will be shooting. The moment you press the shutter, the mirror rises, the shutter opens, the light enters the matrix and is fixed. Thus, a photograph is obtained.

Now let's move on to the basic terms.

Pixel and megapixel

Let's start with a term from the new digital era. It belongs more to the computing field than to photography, but it is important nonetheless.

Any digital image is created from small dots called pixels. In digital photography, the number of pixels in a picture is equal to the number of pixels on the camera's matrix. The matrix itself consists of pixels.

If you enlarge any digital image many times, you will notice that the image consists of small squares - these are the pixels.

A megapixel is 1 million pixels. Accordingly, the more megapixels in the camera's matrix, the more pixels the image consists of.
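In code, the relationship is just width times height. A quick sketch (the 4000x3000 grid below is an example resolution, not any particular camera):

```python
# Megapixels are simply width x height of the pixel grid, in millions.
def megapixels(width, height):
    """Sensor resolution in megapixels, rounded to one decimal place."""
    return round(width * height / 1_000_000, 1)

print(megapixels(4000, 3000))   # a 4000x3000 grid is a 12-megapixel sensor
```

By the same arithmetic, the 570x490 frame of the early Mavica mentioned above amounts to only about 0.3 megapixels.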

If you enlarge the photo, you can see the pixels

What does a large number of pixels give you? It's simple. Imagine that you are drawing a picture not with strokes but with dots. Could you draw a circle with only 10 dots? It might be possible, but most likely the circle will look "angular". The more dots, the more detailed and accurate the image will be.

But there are two pitfalls successfully exploited by marketers. First, megapixels alone are not enough for high-quality pictures; you also need a high-quality lens. Second, a large number of megapixels matters for printing photos in large formats, for example a full-wall poster. When viewing a picture on a monitor, especially one reduced to fit the screen, you will not see the difference between 3 and 10 megapixels, for a simple reason.

The monitor screen usually holds far fewer pixels than your picture contains. On screen, when a photo is compressed to the screen size or smaller, you lose most of your "megapixels", and a 10-megapixel image effectively turns into a 1-megapixel one.
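This loss is easy to estimate. A rough sketch, with illustrative image and monitor dimensions (the 3872x2592 frame is just an example of a ~10-megapixel photo):

```python
# How many of a photo's pixels survive when it is scaled down to fit a screen.
def pixels_on_screen(img_w, img_h, scr_w, scr_h):
    """Pixel count after shrinking the image (never enlarging) to fit the screen."""
    scale = min(scr_w / img_w, scr_h / img_h, 1.0)
    return round(img_w * scale) * round(img_h * scale)

# A ~10-megapixel photo shown whole on a 1280x1024 monitor keeps only
# about 1.1 million of its original ~10 million pixels.
shown = pixels_on_screen(3872, 2592, 1280, 1024)
```

Roughly nine out of every ten pixels are simply discarded by the downscaling.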

Shutter and shutter speed

The shutter is what blocks the light from the camera until you press the shutter button.

Shutter speed is the time for which the shutter opens and the mirror rises. The shorter the shutter speed, the less light hits the matrix; the longer it is, the more light.

On a bright sunny day, a very fast shutter speed, for example just 1/1000 of a second, is enough to let the right amount of light onto the sensor. At night, it can take several seconds or even minutes to gather enough light.

Shutter speed is measured in seconds or fractions of a second, for example 1/60 sec.

Diaphragm

The diaphragm is a multi-blade partition located inside the lens. It can be fully open, or closed so far that only a small hole remains for the light.

The aperture also serves to limit the amount of light that ultimately reaches the matrix. That is, shutter speed and aperture perform the same task: regulating the flow of light onto the matrix. Why use two elements for one task?

Strictly speaking, the diaphragm is optional; in cheap point-and-shoot cameras and mobile devices it is absent altogether. But the aperture is extremely important for achieving certain effects related to depth of field, which will be discussed later.

The aperture is denoted by the letter f followed by the aperture number, for example f/2.8. The lower the number, the more open the blades and the wider the opening.
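The f-number works on an inverse-square scale: the light admitted is proportional to 1/N², which is why each standard full stop (a factor of about 1.4 in N) halves the light. A minimal sketch:

```python
# Light admitted through the aperture is proportional to 1/N**2, where N is
# the f-number. Each standard full stop (x~1.4 in N) halves the light.
def relative_light(n_wide, n_narrow):
    """How many times more light the wider aperture (smaller N) admits."""
    return (n_narrow / n_wide) ** 2
```

For instance, f/2 admits four times as much light as f/4, and f/2.8 admits roughly twice as much as f/4.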

ISO sensitivity

Roughly speaking, this is the sensitivity of the matrix to light. The higher the ISO, the more sensitive the sensor. For example, to get a good shot at ISO 100, a certain amount of light is needed. But if there is little light, you can set ISO 1600: the matrix becomes more sensitive, and you will need several times less light for a good result.

What is the problem, then? Why have different ISO values at all when you could simply use the maximum? There are several reasons. First, there may be plenty of light: in winter, on a bright sunny day with nothing but snow around, the task is to limit a colossal amount of light, and a high ISO would only get in the way. Second, and this is the main reason, there is "digital noise".

Noise is the scourge of the digital matrix, which manifests itself in the appearance of "grain" in the photo. The higher the ISO, the more noise, the poorer the photo quality.

Therefore, the amount of noise at high ISO is one of the critical indicators of matrix quality and a subject of continuous improvement.

In principle, the high ISO noise performance of modern DSLRs, especially the top-end ones, is at a fairly good level, but still far from ideal.

For technological reasons, the amount of noise depends on the real, physical dimensions of the matrix and of its pixels: the smaller the matrix and the more megapixels, the higher the noise.

That is why the small "cropped" matrices of mobile devices and compact point-and-shoot cameras will always be far noisier than those of professional DSLRs.

Exposure and the exposure pair

Having got acquainted with the concepts - shutter speed, aperture and sensitivity, let's move on to the most important thing.

Exposure is a key concept in photography. Without understanding what exposure is, you are unlikely to learn how to photograph well.

Formally, exposure is the amount of light received by the light-sensitive sensor; roughly speaking, the amount of light hitting the matrix.

Your snapshot will depend on this:

  • If it turns out too light, the image is overexposed: too much light hit the matrix and you "blew out" the frame.
  • If it is too dark, the image is underexposed: more light was needed on the matrix.
  • Not too light and not too dark means the exposure is correct.

Left to right - overexposed, underexposed and correctly exposed

Exposure is formed by choosing a combination of shutter speed and aperture, also called the "exposure pair". The photographer's task is to choose a combination that provides the required amount of light for creating the image on the matrix.

In this case, the sensitivity of the matrix must be taken into account - the higher the ISO, the lower the exposure should be.
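The exposure pair can be captured in one formula. A sketch using the standard exposure-value definition EV = log2(N²/t) at a fixed ISO; the specific shutter/aperture pairs below are illustrative:

```python
import math

# Exposure value EV = log2(N^2 / t), where N is the f-number and t the
# shutter time in seconds. Pairs with equal EV admit the same light.
def exposure_value(f_number, shutter_seconds):
    return math.log2(f_number ** 2 / shutter_seconds)

# f/8 at 1/125 s and f/5.6 at 1/250 s are (up to nominal rounding) equivalent:
ev_a = exposure_value(8.0, 1 / 125)   # ~EV 13
ev_b = exposure_value(5.6, 1 / 250)   # ~EV 13
```

Opening the aperture by one stop while halving the shutter time leaves the total light, and hence the exposure, unchanged; that freedom of trade-off is exactly why two controls exist.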

Focus point

The focus point, or simply focus, is the point on which you have "sharpened" the image. To focus the lens on an object means to adjust the focus so that that object is as sharp as possible.

Modern cameras usually use autofocus, a complex system that automatically focuses on the selected point. But how well autofocus works depends on many parameters, such as the lighting. In poor light, autofocus may miss or fail entirely; then you have to switch to manual focus and rely on your own eye.

Focusing on the eye

The point at which the autofocus will focus is visible in the viewfinder. This is usually a small red dot. It is initially centered, but on DSLRs you can choose a different point for better framing.

Focal length

Focal length is one of the characteristics of a lens. Formally, this characteristic shows the distance from the optical center of the lens to the matrix, where a sharp image of the object is formed. Focal length is measured in millimeters.

More important than the physical definition of focal length is its practical effect, and here everything is simple: the longer the focal length, the more the lens "brings the object closer", and the narrower the lens's angle of view.

  • Lenses with a short focal length are called wide-angle ("wides"): they do not "bring anything closer" but capture a wide angle of view.
  • Lenses with a long focal length are called long-focus or telephoto lenses ("teles").
  • Lenses with a fixed focal length are called "primes" ("fixes"). If the focal length can be changed, it is a "zoom lens", or simply a zoom.

Zooming is the process of changing the focal length of the lens.

Depth of field (DOF)

Another important concept in photography is depth of field (DOF): the area behind and in front of the focus point in which objects in the frame appear sharp.

With a shallow depth of field, objects become blurred just a few centimeters or even millimeters from the focus point. With a large depth of field, objects can remain sharp tens or hundreds of meters away from it.

Depth of field depends on aperture value, focal length and distance to focus point.
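These dependencies can be sketched with the commonly used hyperfocal-distance approximation. This is a simplified model, and the circle-of-confusion value c is an assumption (0.03 mm is a figure often quoted for full-frame sensors):

```python
# A rough DOF estimate via the hyperfocal distance. All lengths in millimetres.
def depth_of_field(focal, n, subject, coc=0.03):
    h = focal ** 2 / (n * coc) + focal                       # hyperfocal distance
    near = subject * (h - focal) / (h + subject - 2 * focal)  # near sharp limit
    far = subject * (h - focal) / (h - subject) if subject < h else float("inf")
    return far - near

# A 50 mm lens focused at 3 m: stopping down from f/2 to f/8 widens the sharp zone.
dof_f2 = depth_of_field(50, 2.0, 3000)   # shallow: well under a metre
dof_f8 = depth_of_field(50, 8.0, 3000)   # much deeper
```

The sketch reproduces the rules of thumb from the text: opening the aperture, lengthening the focal length, or moving closer to the subject all shrink the depth of field.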

More details on what determines depth of field are covered in a separate article.

Aperture ratio

The aperture ratio is the light-gathering capacity of a lens: in other words, the maximum amount of light the lens is able to pass through to the matrix. The higher the aperture ratio, the better, and the more expensive, the lens.

The aperture ratio depends on three things: the maximum aperture opening, the focal length, and the quality of the optics and the optical design of the lens. The last two are what mostly affect the price.

Without going into the physics, we can say that the lens's aperture ratio is expressed as the ratio of the maximum open aperture to the focal length. It is this ratio that manufacturers mark on lenses as 1:1.2, 1:1.4, 1:1.8, 1:2.8, 1:5.6, and so on.

The higher this ratio, the faster the lens. In the series above, the fastest lens is the 1:1.2.
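The marking follows directly from the definition: the f-number is the focal length divided by the diameter of the maximum aperture opening. A sketch with illustrative numbers (not the specs of any real lens):

```python
# f-number = focal length / diameter of the maximum aperture opening.
def f_number(focal_mm, pupil_diameter_mm):
    return round(focal_mm / pupil_diameter_mm, 1)

# A hypothetical 50 mm lens whose wide-open pupil is 36 mm across
# would be marked roughly 1:1.4.
print(f_number(50, 36))
```

This is also why fast lenses are big and expensive: a large maximum opening demands physically large, well-corrected glass.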

Carl Zeiss Planar 50mm f/0.7 - one of the fastest lenses in the world

The choice of a fast lens should be approached sensibly. Since depth of field depends on the aperture, a fast lens at its maximum aperture has a very shallow depth of field. There is therefore a chance you will never actually use f/1.2, simply because you will not be able to focus accurately.

Dynamic range

The concept of dynamic range is also very important, although it is not mentioned aloud very often. Dynamic range is the ability of the matrix to convey both the bright and the dark areas of an image without loss.

You have probably noticed that if you try to photograph a window from inside a room, you will get one of two results:

  • The wall with the window comes out well, but the window itself is just a white spot
  • The view from the window is clearly visible, but the wall around it turns into a black spot

This is due to the very large dynamic range of such a scene: the difference in brightness between the room and the outside of the window is too great for a digital camera to capture in full.
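Dynamic range is conveniently counted in stops, each stop being a doubling of brightness. A sketch with illustrative luminance figures:

```python
import math

# Dynamic range in stops: log2 of the ratio between the brightest and
# darkest luminances in the scene.
def dynamic_range_stops(brightest, darkest):
    return math.log2(brightest / darkest)

# A window 4096x brighter than the shaded wall spans 12 stops - more than
# many sensors can record in a single frame.
scene_stops = dynamic_range_stops(4096, 1)
```

When the scene's range in stops exceeds what the sensor can record, one end of the range clips: the white window or the black wall from the example above.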

Another example of a high-dynamic-range scene is a landscape. If the sky is bright and the ground dark enough, then either the sky will come out white or the ground black.

Typical example of a scene with high dynamic range

We ourselves see such scenes normally because the dynamic range perceived by the human eye is much wider than that of camera sensors.

Bracketing and Exposure Compensation

There is another concept associated with exposure - bracketing. Bracketing is the sequential shooting of several frames with different exposures.

So-called automatic bracketing is commonly used: you tell the camera the number of frames and the exposure offset in stops (EV).

Three frames are most commonly used. Say we want three frames with an offset of 0.3 stops (EV). The camera will first take one frame at the set exposure, then one shifted by -0.3 stops and one shifted by +0.3 stops.

You end up with three frames - underexposed, overexposed, and normally exposed.
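The series the camera builds can be sketched as a small function. This is an illustration of the idea, not any camera's actual firmware logic:

```python
# Automatic bracketing sketch: from a base exposure value and a step in
# stops, build the EV targets for the series (base frame first).
def bracket(base_ev, step, frames=3):
    offsets = [0.0]
    k = 1
    while len(offsets) < frames:
        offsets += [-step * k, step * k][: len(range(frames)) - len(offsets)]
        k += 1
    return [round(base_ev + o, 2) for o in offsets]

print(bracket(12.0, 0.3))   # base EV 12: frames at 12.0, 11.7 and 12.3
```

A five-frame series simply continues the pattern outward: base, then ±1 step, then ±2 steps.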

Bracketing can be used to fine-tune exposure. For example, if you are not sure you have chosen the correct exposure, you shoot a bracketed series, look at the results and see in which direction the exposure needs to change, up or down.

Sample shot with exposure compensation at -2 EV and +2 EV

After that you can use exposure compensation: you simply tell the camera to take a frame with, say, +0.3 stops of compensation and press the shutter release.

The camera takes the current exposure value, adds 0.3 stop to it and takes a frame.

Exposure compensation is very convenient for quick adjustments, when there is no time to think about which parameter to change - shutter speed, aperture or sensitivity - to get the correct exposure and make the picture lighter or darker.

Crop factor and full frame sensor

This concept came to life with digital photography.

A matrix whose physical size equals that of a 35mm film frame is considered full-frame. For the sake of compactness and lower manufacturing costs, "cropped" matrices, reduced in size relative to full frame, are installed in mobile devices, point-and-shoot cameras and non-professional DSLRs.

Accordingly, a full-frame sensor has a crop factor of 1. The larger the crop factor, the smaller the matrix area relative to full frame; with a crop factor of 2, for example, the matrix is half the size.

A lens designed for a full frame, on a cropped matrix will capture only part of the image

What are the drawbacks of a cropped matrix? First, the smaller the matrix, the higher the noise. Second, 90% of the lenses produced over the decades of photography's existence are designed for the full-frame size. The lens therefore "transmits" an image sized for a full frame, but the small cropped matrix picks up only part of it.
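A practical consequence of the crop factor is the "equivalent focal length": a lens on a cropped sensor frames the scene the way a longer lens would on full frame. A sketch with typical crop-factor values:

```python
# Equivalent focal length = real focal length x crop factor.
def equivalent_focal(focal_mm, crop_factor):
    return focal_mm * crop_factor

# A 50 mm lens on a typical APS-C body (crop factor ~1.5) frames like 75 mm.
print(equivalent_focal(50, 1.5))
```

This is why a 50 mm "normal" lens behaves like a short telephoto on most non-professional DSLRs.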

White balance

Another characteristic that emerged with the advent of digital photography. White balance is the adjustment of the colors in an image to produce natural-looking tones, with pure white as the reference point.

With the correct white balance, white in the photo (such as paper) looks truly white, not bluish or yellowish.

White balance depends on the type of light source: it is one thing for the sun, another for cloudy weather, and a third for electric lighting.
Beginners usually shoot with automatic white balance. This is convenient, since the camera selects the required value itself.

Unfortunately, automation isn't always that smart. Therefore, pros often set the white balance manually, using a sheet of white paper or other object that is white or as close to it as possible.

Another method is to correct the white balance on a computer after the picture has been taken. But for this it is highly desirable to shoot in RAW.

RAW and JPEG

A digital photograph is a computer file with a set of data from which an image is formed. The most common file format for displaying digital photographs is JPEG.

The problem is that JPEG is a so-called lossy compression format.

Let's say we have a beautiful sunset sky, in which there are a thousand semitones of various colors. If we try to preserve all the variety of shades, the file size will be huge.

Therefore, when saving, JPEG throws out the "extra" shades. Roughly speaking, if the frame contains blue, slightly more blue and slightly less blue, JPEG keeps only one of them. The more compressed the JPEG, the smaller its size, but the fewer colors and details of the image it conveys.
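A toy model makes the "shade dropping" concrete. Real JPEG compression is far more sophisticated (DCT blocks, chroma subsampling); this sketch only illustrates why nearby shades merge into one:

```python
# Collapse each 0-255 channel value onto a small set of levels - a crude
# stand-in for lossy quantization of shades.
def quantize_channel(value, levels=4):
    step = 255 / (levels - 1)
    return round(round(value / step) * step)

# Three close shades of blue all collapse to the same stored value.
stored = {quantize_channel(v) for v in [200, 205, 210]}
```

After such quantization the three original shades are indistinguishable; the subtle gradient they formed, like the semitones of a sunset sky, is gone for good.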

RAW is a "raw" dataset captured by the camera's sensor. Formally, this data is not yet an image. This is the raw material for creating the image. Due to the fact that RAW stores a complete set of data, the photographer has much more opportunity to process this image, especially if some kind of "error correction" made at the shooting stage is required.

In fact, when shooting in JPEG, the following happens: the camera passes the "raw data" to its microprocessor, which processes it according to built-in algorithms "to make it look nice", throws out everything it considers superfluous and saves the data as a JPEG, which you can view on a computer as the final image.

Everything would be good, but if you want to change something, it may turn out that the processor has already thrown out the data you need as unnecessary. This is where RAW comes in. When you shoot in RAW, the camera simply gives you a set of data, and then do whatever you want with it.

Beginners often stumble here after reading that RAW gives the best quality. RAW does not give better quality by itself; it gives far more opportunities to achieve the best quality while processing the photo.

RAW is the raw material - JPEG the finished result

For example, you can load the RAW file into Lightroom and craft your image by hand.

A popular practice is to shoot RAW+JPEG simultaneously, with the camera saving both. The JPEG can be used for a quick review of the material, and if something goes wrong and needs serious correction, you still have the original data in RAW.

Conclusion

I hope this article helps those who want to take up photography at a more serious level. Some of the terms and concepts may seem too complicated at first, but do not be afraid: in practice, everything is quite simple.

If you have any wishes or additions to the article, write them in the comments.