SOKOINE UNIVERSITY OF AGRICULTURE
DIRECTORATE OF COMPUTER CENTER
DIT 0108: COMPUTER GRAPHICS
SUBJECT LECTURE NOTES (2013/2014)
Instructor: AYUBU,S.
Department of informatics – SUA
LESSON 1 : SURVEY OF COMPUTER GRAPHICS APPLICATIONS
Introduction
History of Computer Graphics
Advantages of computer graphics
Applications of Computer Graphics
Classification of computer graphics
Basic concepts/terminologies
Introduction
Computer graphics is the discipline of producing pictures or images using a computer. It includes the modeling, creation, manipulation and storage of geometric objects, rendering (converting a scene to an image), transformations, rasterization, shading, illumination, animation of the image, and so on. Computer graphics is widely used in graphics presentation, paint systems, computer-aided design (CAD), image processing, simulation, etc.
In computer graphics, pictures or graphics objects are presented as a collection of discrete picture elements called pixels. The pixel is the smallest addressable screen element; it is the smallest piece of the display screen which we can control. The control is achieved by setting the intensity and color of the pixels which compose the screen.
Each pixel on the graphics display does not represent a mathematical point. Rather, it represents a region which theoretically can contain an infinite number of points. For example, if we want to display point P₁ whose co-ordinates are (4.4, 3.8) and point P₂ whose co-ordinates are (4.7, 3.3), then P₁ and P₂ are both represented by the single pixel (4, 3), as shown in the figure below.
Special procedures are needed to determine which pixels will provide the best approximation to the desired picture or graphics object. The process of determining the appropriate pixels for representing a picture or graphics object is known as rasterization, and the process of representing a continuous picture or graphics object as a collection of discrete pixels is called scan conversion.
Computer graphics allows rotation, translation, scaling and various projections to be performed on the picture before displaying it. It also allows effects such as hidden surface removal, shading or transparency to be added to the picture before the final representation. It gives the user control to modify the contents, structure, and appearance of pictures or graphics objects using input devices such as a keyboard, mouse, or touch-sensitive panel on the screen. There is a close relationship between input devices and display devices. Therefore, graphics devices include both input devices and display devices.
History of Computer Graphics
In 1950 the first computer-driven display, attached to MIT's computer, was used to generate simple pictures. This display used a cathode-ray tube (CRT). Interactive computer graphics made steady progress, and the term "computer graphics" was first used in 1960.
Advantages of computer graphics
1. A high quality graphics display on a personal computer provides one of the most natural means of communicating with a computer.
2. It has the ability to show moving pictures, and thus it is possible to produce animations with computer graphics.
3. With computer graphics the user can also control the animation by adjusting the speed, the portion of the total scene in view, the geometric relationship of the objects in the scene to one another, the amount of detail shown, and so on.
4. Computer graphics also provides a facility called update dynamics. With update dynamics it is possible to change the shape, color or other properties of the objects being viewed.
5. With the recent development of digital signal processing (DSP) and audio synthesis chips, interactive graphics can now provide audio feedback along with the graphical feedback to make the simulated environment even more realistic.
Applications of computer graphics
1. User interfaces: It is now a well-established fact that graphical interfaces provide an attractive and easy interaction between users and computers. The built-in graphics provided with user interfaces use visual control items such as buttons, menus, icons, scroll bars, etc., which allow the user to interact with the computer only by mouse-click. Typing is necessary only to input text to be stored and manipulated.
2. Plotting of graphs and charts: In industry, business, government, and educational organizations, computer graphics is most commonly used to create 2D and 3D
graphs of mathematical, physical and economic functions in the form of histograms, bar charts and pie charts. These graphs and charts are very useful for decision making.
3. Computer-aided drafting and design: Computer-aided drafting uses graphics to design components and systems of electrical, mechanical, electromechanical and electronic devices such as automobile bodies, structures of buildings, airplanes, ships, very large scale integrated (VLSI) chips, optical systems and computer networks.
4. Simulation and animation: The use of graphics in simulation makes mathematical models and mechanical systems more realistic and easy to study. Interactive graphics supported by animation software has proved its use in the production of animated movies and cartoon films.
5. Art and commerce: There has been a lot of development in the tools provided by computer graphics. This allows the user to create artistic pictures which express a message and attract attention. Such pictures are very useful in advertising.
6. Process control: By the use of computers it is now possible to control various processes in industry from a remote control room. In such cases, process systems and processing parameters are shown on the computer with graphic symbols and identifications. This makes it easy for the operator to monitor and control various processing parameters at a time.
7. Cartography: Computer graphics is also used to represent geographic maps, weather maps, oceanographic charts, contour maps, population density maps and so on.
8. Education and training: Computer graphics can be used to generate models of physical aids. Models of physical systems, physiological systems, population trends, or equipment, such as color-coded diagrams, can help trainees to understand the operation of the system.
9. Image processing: In computer graphics, a computer is used to create pictures. Image processing, on the other hand, applies techniques to modify or interpret existing pictures such as photographs and scanned images. Image processing and computer graphics are typically combined in many applications, for example to model and study physical functions, to design artificial limbs, and to plan and practice surgery. Image processing techniques are most commonly used for picture enhancement, to analyze satellite photos, X-ray photographs and so on.
Classification of computer graphics
In the last section we have seen various uses of computer graphics. These uses can be classified as shown in the figure below. As shown in the figure, the use of computer graphics can be classified according to the dimensionality of the object to be drawn: 2D or 3D. It can also be classified according to the kind of picture: symbolic or realistic. Many computer graphics applications are classified by the type of interaction. The type of interaction determines the user's degree of control over the object and its image. In controllable interaction the user can change the attributes of the images. The role of the picture gives another classification: computer graphics is either used for representation, or it can be an end product such as drawings. Pictorial representation gives the final classification of the use of computer graphics; it classifies the use of computer graphics to represent pictures such as line drawings, black and white, color and so on.
Basic Concepts and Principles
Image
In common usage, an image or picture is an artifact, usually two-dimensional, that has a similar appearance to some subject, usually a physical object or a person. Images may be two-dimensional, such as a photograph or screen display, as well as three-dimensional, such as a statue. They may be captured by optical devices such as cameras, mirrors, lenses, telescopes, microscopes, etc., and by natural objects and phenomena, such as the human eye or water surfaces.
Digital Image
A digital image is a representation of a two-dimensional image using ones and zeros (binary). Depending on whether or not the image resolution is fixed, it may be of vector or raster type. Without qualification, the term "digital image" usually refers to raster images.
Pixel
In digital imaging, a pixel is the smallest piece of information in an image. Pixels are normally arranged in a regular 2-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image, where more samples typically provide a more accurate representation of the original. The intensity of each pixel is variable; in color systems, each pixel typically has three or four components, such as red, green, and blue, or cyan, magenta, yellow, and black.
Raster
Raster images have a finite set of digital values, called picture elements or pixels. The digital image contains a fixed number of rows and columns of pixels. Pixels are the smallest individual element in an image, holding quantized values that represent the brightness of a given color at any specific point.
Typically, the pixels are stored in computer memory as a raster image or raster map, a two-dimensional array of small integers. These values are often transmitted or stored in a compressed form.
2D Computer Graphics
2D computer graphics are the computer-based generation of digital images, mostly from two-dimensional models such as 2D geometric models, text, and digital images, and by techniques specific to them. The word may stand for the branch of computer science that comprises such techniques, or for the models themselves. 2D computer graphics started in the 1950s.
2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies, such as typography, cartography, technical drawing, advertising, etc. In those applications, the two-dimensional image is not just a representation of a real-world object, but an independent artifact with added semantic value; two-dimensional models are therefore preferred, because they give more direct control of the image than 3D computer graphics, whose approach is more akin to photography than to typography.
3D Computer Graphics
3D computer graphics, in contrast to 2D computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing. 3D computer graphics are often referred to as 3D models. However, there are differences: the 3D model is the representation of the object stored in the computer, while the rendered 2D image is what is actually displayed.
Compute r Animation
Computer animation is the art of creating moving images via the use of computers. It is a subfield of computer graphics and animation. Increasingly it is created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic, low-bandwidth, and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film. It is also referred to as CGI (computer-generated imagery or computer-generated imaging), especially when used in films.
To create the illusion of movement, an image is displayed on the computer screen and then quickly replaced by a new image that is similar to the previous image, but shifted slightly. This technique is identical to the way the illusion of movement is achieved in television and motion pictures.
Lesson review questions
1. Define the terms below as applied to computer graphics
i). computer graphics
ii). pixel
iii). image processing
2. Differentiate between the following
i). scan conversion and rasterization
ii). image and digital image
iii). 2D graphics and 3D graphics
3. Discuss the following :
i). Computer aided design
ii). Computer aided manufacturing
iii). application of computer graphics in entertainment
iv). application of computer graphics in visualization
LESSON 2: GRAPHICS SYSTEM
Computer display devices
Raster scan
Vector scan
Graphic processing units
Graphic input/output devices
2.1 Parts of graphics systems
A computer graphics system extends the elements that are found in a normal computer system. The computer system, in terms of hardware, basically has input devices, output devices, processing devices and storage devices.
The computer graphics system consists of all the parts of a general computer system, but additionally it adds more specialized processing and storage devices that are responsible only for graphics processing. Thus, a computer graphics system is a normal computer with the addition of specialized devices for graphics processing.
The graphics computer system consists of the following parts:
1. Input devices
2. Central Processing Unit
3. Graphics Processing Unit
4. Memory
5. Frame buffer
6. Output devices
The following diagram shows the basic organization of the computer graphic system components
Figure 1: Graphic System
Based on interactivity, that is the ability of a human being to interact with a graphics system, the computer graphics system can be interactive or passive.
Interactive graphics involves two-way communication between the computer and the user. The computer, upon receiving signals from the input device, can modify the displayed picture appropriately. Thus in these systems a user controls the contents, structure, and appearance of objects and their displayed images via rapid visual feedback. A good example of this is a computer-based graphics system.
In contrast to interactive graphics, where a user can send a command to the computer and the computer responds with an appropriate answer, passive graphics refers to computer graphics operations that run automatically and without operator intervention. Non-interactive computer graphics involves one-way communication between the computer and the user; that is, a picture is produced on the monitor and the user does not have any control over the produced picture. A good example of this system is TV-based graphics.
Most graphics systems nowadays are interactive; this is due to the advantages interactive graphics offers over non-interactive graphics, which include:
Higher quality of the images produced.
More reliable results.
Lower cost.
Greater productivity.
Lower analysis and design cost.
This course will deal only with interactive graphics; thus the graphics system components described above are for interactive graphics.
2.2 Pixel and Frame buffer
A digital memory, or frame buffer, is a memory in which the displayed image is stored as a matrix of intensity values. In a simple system the frame buffer may be anywhere in main memory, but other systems have separate memory for graphics applications. Thus, a frame buffer is a large, contiguous piece of computer memory.
Inside the frame buffer the image is stored as a pattern of binary digital numbers, which represent a rectangular array of picture elements, or pixels.
The pixel is the smallest addressable screen element. In the simplest case, where we wish to store only black and white images, we can represent black pixels by 0's in the frame buffer and white pixels by 1's. As a minimum there is one memory bit for each location, or pixel, in the raster. This amount of memory is called a bit plane.
A 512 x 512 element square raster requires 2^18 memory bits in a single bit plane. The picture is built up in the frame buffer one bit at a time.
Since a memory bit has only two states (0 or 1), a single bit plane yields a black and white display. Color or gray levels can be incorporated into a frame buffer raster graphics device by using additional bit planes.
On a black and white system with one bit per pixel, the frame buffer is called a bitmap. For systems with multiple bits per pixel, the frame buffer is often referred to as a pixmap.
The capacity of the frame buffer depends on the number of bits representing each pixel, on the number of pixels per scan line and on the number of scan lines.
The depth, or precision, of the frame buffer, defined as the number of bits that are used for each pixel, determines properties such as how many colors can be represented on a given system. For example, a 1-bit-deep frame buffer allows only two colors, whereas an 8-bit-deep frame buffer allows 2^8 = 256 colors. In full-color systems, there are 24 (or more) bits per pixel. Such systems can display sufficient colors to represent most images realistically. They are also called true-color systems, or RGB-color systems, because individual groups of bits in each pixel are assigned to each of the three primary colors (red, green, and blue) used in most displays.
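To make the storage ideas above concrete, the short Python sketch below models a frame buffer as a two-dimensional array of pixel values. It is an illustrative addition to these notes, not part of the original text, and the class and method names are invented; a real frame buffer lives in dedicated video memory rather than a Python list.

# Minimal frame-buffer sketch (illustrative only).
class FrameBuffer:
    def __init__(self, width, height, bits_per_pixel=8):
        self.width = width
        self.height = height
        self.bits_per_pixel = bits_per_pixel
        # One intensity/color code per pixel, initialised to 0 (black).
        self.pixels = [[0] * width for _ in range(height)]

    def set_pixel(self, x, y, value):
        # Ignore writes that fall outside the raster.
        if 0 <= x < self.width and 0 <= y < self.height:
            self.pixels[y][x] = value

    def colors(self):
        # Number of representable colors is 2 raised to the depth.
        return 2 ** self.bits_per_pixel

    def memory_bits(self):
        # Total storage needed for the whole raster.
        return self.width * self.height * self.bits_per_pixel

fb = FrameBuffer(512, 512, bits_per_pixel=1)   # a single bit plane
print(fb.memory_bits())   # 262144 = 2^18 bits, matching the figure quoted above
print(fb.colors())        # 2 colors: black and white

With 8 or 24 bits per pixel the same sketch gives 256 or over 16 million colors, which is the depth idea described in the paragraph above.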
2.3 Output Devices
These devices are used to give out images from the graphics system so that the user can view them on a display or obtain a hard copy that can be stored for future use.
2.3.1 Video Display Devices ( CRT)
The display devices are the primary output devices. The most commonly used output device in a graphics system is the video monitor. The operation of most video monitors is based on the standard cathode-ray tube (CRT) design.
A CRT is an evacuated glass tube. An electron gun at the rear of the tube produces a beam of electrons which is directed towards the front of the tube (the screen). The inner side of the screen is coated with a phosphor substance which gives off light when it is struck by electrons. It is possible to control the point at which the electron beam strikes the screen, and therefore the position of the dot upon the screen, by deflecting the electron beam. The figure below shows the electrostatic deflection of the electron beam in a CRT.
The deflection system of the cathode-ray tube consists of two pairs of parallel plates, referred to as the vertical and horizontal deflection plates. The voltage applied to the vertical plates controls the vertical deflection of the electron beam, and the voltage applied to the horizontal deflection plates controls the horizontal deflection of the electron beam.
Figure 2: Cathode Ray tube
The three most common types of CRT display technologies are:
i). Direct view storage tube display
ii). Calligraphic refresh display
iii). Raster scan refresh display
Storage Tube Graphics Displays
The storage tube display, also called a bistable storage tube, can be considered a CRT with a long-persistence phosphor. A line or character will remain visible for up to an hour until erased. To draw a line on the display, the intensity of the electron beam is increased sufficiently to cause the phosphor to assume its permanent bright "storage" state. The display is erased by flooding the entire tube with a specific voltage which causes the phosphor to assume its dark state.
Features of storage tube graphics displays
1- The storage tube display is flicker free.
2- It is capable of displaying an unlimited number of vectors.
3- Resolution is typically 1024 x 1024 addressable points on an 8 x 8 inch square, or 4096 x 4096 on either a 14 x 14 or an 18 x 18 inch square.
4- Display of dynamic motion or animation is not possible.
5- A storage tube display is a line drawing or random scan display. This means that a line (vector) can be drawn directly from any addressable point to any other addressable point. This device plots continuous lines and curves rather than separate pixels.
6- A storage tube display is easier to program than a calligraphic or raster scan refresh display.
7- The level of interactivity is lower than with either a refresh or raster scan display.
The Calligraphic Display
Calligraphic monitors draw a picture one line at a time and for this reason are also referred to as vector displays (line drawing, vector, random, or stroke-writing displays). The component lines of a picture can be drawn (figure below) and refreshed by a random-scan system in any specified order.
The refresh rate on a random-scan system depends on the number of lines to be displayed. Picture definition is stored as a set of line-drawing commands in an area of memory referred to as the refresh display file. Sometimes the refresh display file is called the display list, display program, or simply the refresh buffer. To display a specified picture, the system cycles through the set of commands in the display file, drawing each component line in turn. The part of the system whose function is to repeatedly cycle through the display file is called the display controller. After all line-drawing commands have been processed, the system cycles back to the first line command in the list. Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times each second. These CRT displays need to be refreshed many times each second because they use phosphors with short persistence.
Figure 3: Vector display
Two factors which limit the complexity (number of vectors displayed) of the picture are the size of the display buffer and the speed of the display controller. A further limitation is the speed at which picture information can be processed.
Features of calligraphic refresh displays
1- It is a vector graphics display.
2- Resolution is the same as for the storage tube display.
3- It employs the concept of picture segmentation, which supports interactive graphics programs.
Raster Displays
In a raster-scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots. Picture definition is stored in a memory area called the refresh buffer or frame buffer. This memory area holds the set of intensity values for all the
screen points. Stored intensity values are then retrieved from the refresh buffer and "painted" on the screen one row (scan line) at a time (figure below). Each screen point is referred to as a pixel or pel (shortened forms of picture element).
Refreshing on raster-scan displays is carried out at the rate of 60 to 80 frames per second, although some systems are designed for higher refresh rates. Sometimes, refresh rates are described in units of cycles per second, or Hertz (Hz), where a cycle corresponds to one frame. At the end of each scan line, the electron beam returns to the left side of the screen to begin displaying the next scan line. The return to the left of the screen, after refreshing each scan line, is called the horizontal retrace of the electron beam. At the end of each frame (displayed in 1/80th to 1/60th of a second), the electron beam returns (vertical retrace) to the top left corner of the screen to begin the next frame.
On some raster-scan systems (and in TV sets), each frame is displayed in two passes using an interlaced refresh procedure. In the first pass, the beam sweeps across every other scan line from top to bottom. Then, after the vertical retrace, the beam sweeps out the remaining scan lines (figure below). Interlacing the scan lines in this way allows us to see the entire screen displayed in one-half the time it would have taken to sweep across all the lines at once from top to bottom.
Raster CRT graphics devices can be considered a matrix of discrete cells, each of which can be made bright. Thus a raster display is a point plotting device.
It is not possible, except in special cases, to directly draw a straight line from one addressable point, or pixel, in the matrix to another addressable point, or pixel. The line can only be approximated by a series of dots (pixels) close to the path of the line.
Figure 4: Raster Scan Display
Differences between vector scan display and raster scan display
1. Vector scan: the beam is moved between the end points of the graphics primitives. Raster scan: the beam is moved all over the screen one scan line at a time, from top to bottom and then back to the top.
2. Vector scan: the display flickers when the number of primitives in the buffer becomes too large. Raster scan: the refresh process is independent of the complexity of the image.
3. Vector scan: scan conversion is not required. Raster scan: graphics primitives are specified in terms of their endpoints and must be scan converted into their corresponding pixels in the frame buffer.
4. Vector scan: scan conversion hardware is not required. Raster scan: because each primitive must be scan converted, real-time dynamics is far more computational and requires separate scan conversion hardware.
5. Vector scan: draws continuous and smooth lines. Raster scan: can display mathematically smooth lines, polygons, and boundaries of curved primitives only by approximating them with pixels on the raster grid.
6. Vector scan: cost is higher. Raster scan: cost is lower.
7. Vector scan: only draws lines and characters. Raster scan: has the ability to display areas filled with solid colors or patterns.
Continual refresh display vs. Storage displays
Based on the operation of the CRT we can group CRT technology under either continual refresh displays or storage displays.
Refresh displays require that the image be re-drawn after a certain time interval; for example, both raster and random (vector) displays fall under this category.
In contrast to refresh displays, a storage display does not need constant refreshing of the image.
Important Characteristics of Video Display Devices
1. Persistence: The major difference between phosphors is their persistence, which decides how long they continue to emit light after the electron beam is removed. Persistence is defined as the time it takes the emitted light from the screen to decay to one-tenth of its original intensity. Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without flicker, but they are useful for displaying animations. On the other hand, higher-persistence phosphors are useful for displaying static and highly complex pictures.
2. Resolution: Resolution indicates the maximum number of points that can be displayed without overlap on the CRT. It is defined as the number of points per centimeter that can be plotted horizontally and vertically. Resolution depends on the type of phosphor, the intensity to be displayed, and the focusing and deflection systems used in the CRT.
3. Aspect ratio: It is the ratio of vertical points to horizontal points needed to produce equal-length lines in both directions on the screen. An aspect ratio of 4/5 means that a vertical line plotted with four points has the same length as a horizontal line plotted with five points.
4. Luminance: measured in candelas per square metre (cd/m²).
5. Size: measured diagonally. For a CRT the viewable size is one inch (25 mm) smaller than the tube itself.
6. Dot pitch: describes the distance between pixels of the same color in millimetres. In general, the lower the dot pitch (e.g. 0.24 mm, which is also 240 micrometres), the sharper the picture will appear.
7. Response time: the amount of time a pixel in an LCD monitor takes to go from active (black) to inactive (white) and back to active (black) again. It is measured in milliseconds (ms). Lower numbers mean faster transitions and therefore fewer visible image artifacts.
8. Refresh rate: the number of times in a second that the display is illuminated.
9. Power consumption: measured in watts (W).
Color CRT Monitors
A color CRT monitor displays color pictures by using a combination of phosphors that emit different colored light. By combining the emitted light, a range of colors can be generated. Two basic methods for producing color displays are:
Beam penetration method
Shadow mask method
Beam Penetration Method
Random scan monitors use the beam penetration method for displaying color pictures. In this method, the inside of the CRT screen is coated with two layers of phosphor, namely red and green. A beam of slow electrons excites only the outer red layer, while a beam of fast electrons penetrates the red layer and excites the inner green layer. At intermediate beam speeds, a combination of red and green light is emitted to show two additional colors, orange and yellow.
Advantages
Less expensive
Disadvantages
The quality of images is not as good as with other methods.
Only four colors are possible.
Shadow Mask Method
Raster scan systems use the shadow mask method to produce a much wider range of colors than the beam penetration method. In this method, the CRT has three phosphor color dots at each pixel position. One phosphor dot emits red light, the second emits green light and the third emits blue light. This type of CRT has three electron guns and a shadow mask grid, as shown in the figure below.
In this figure, the three electron beams are deflected and focused as a group onto the shadow mask, which contains a series of holes. When the three beams pass through a hole in the shadow mask they activate a dot triangle, as shown in the figure below.
The colors we see depend on the amount of excitation of the red, green and blue phosphors. A white area is the result of all three dots excited with equal intensity, while yellow is produced with the green and red dots, and so on.
Advantages
Produces realistic images.
Produces a range of different colors and shaded scenes.
Disadvantages
Low resolution
Expensive
The electron beam is directed at the whole screen.
Color Look-Up Tables (LUT)
In a raster system the number of color choices available depends on the amount of storage provided per pixel in the frame buffer. Color information can be stored in two ways: in the first way we store the color codes directly in the frame buffer, and in the second way the color codes are stored in a separate table and the pixel value is used as an index into this table.
When the color codes are stored in the frame buffer, the number of colors that can be produced depends on the number of bits in the frame buffer that represent each pixel. The minimum number of bits for each pixel is 3, i.e. one bit for red, another for green and another for blue, and this produces a maximum of 8 colors (2^3). Now suppose we use 6 bits per pixel; the number of colors that can be presented by the system is 2^6 = 64. Therefore, the more bits we allocate for each pixel, the more colors can be generated. But for this approach to work effectively we need a larger frame buffer. To avoid the need for a large frame buffer in display devices, the second approach of using a table, known as a look-up table, was devised.
In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256 levels for each color. Here it is necessary to read 24 bits for each pixel from the frame buffer, which is very time consuming. To avoid this, the video controller uses a look-up table (LUT) to store many entries of pixel values in RGB format. With this facility, it is now necessary only to read the index into the look-up table from the frame buffer for each pixel. This index specifies one of the entries in the look-up table. The specified entry in the look-up table is then used to control the intensity or color of the CRT.
Usually, a look-up table has 256 entries. Therefore, the index into the look-up table has 8 bits, and hence for each pixel the frame buffer has to store only 8 bits instead of 24 bits. For example, for color code 3156, a combination green-blue color is displayed for pixel location (x, y), as shown in the figure below.
Advantages of look-up tables
They provide a "reasonable" number of simultaneous colors without requiring large frame buffers.
Table entries can be changed at any time, allowing a user to experiment easily with different color combinations.
In visualization and image-processing applications, color tables are a convenient means for setting color thresholds so that all pixel values above or below a specified threshold can be set to the same color.
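Following on from the look-up-table description above, here is a minimal illustrative Python sketch of the indexing step. The table contents and the index value 196 are invented example values, not taken from these notes.

# Illustrative color look-up table (LUT) sketch.
lut = [(0, 0, 0)] * 256            # 256 entries, each an (R, G, B) triple
lut[196] = (0, 180, 200)           # e.g. a green-blue entry set up by the program

frame_buffer_value = 196           # 8-bit index stored in the frame buffer for pixel (x, y)
r, g, b = lut[frame_buffer_value]  # the index selects the full 24-bit color to display
print(r, g, b)

The frame buffer stores only the 8-bit index per pixel, while the table supplies the 24-bit color, which is exactly the saving described above.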
2.3.2 Hard-Copy Devices
Hard copy devices accept data from a computer and convert it into a form which is suitable for use by the user. Printers or plotters are used to produce hard-copy output on 35-mm slides, overhead transparencies, or plain paper. The quality of the pictures depends on dot size and the number of dots per inch (DPI).
Types of printers: line printers, LaserJet, ink-jet, dot-matrix.
LaserJet printers use a laser beam to create a charge distribution on a rotating drum coated with a photoelectric material. Toner is applied to the drum and then transferred to the paper. To produce color output, the three color pigments (cyan, magenta, and yellow) are deposited on separate passes.
Inkjet printers produce output by squirting ink in horizontal rows across a roll of paper wrapped on a drum. To produce color output, the three color pigments are shot simultaneously on a single pass along each print line on the paper.
Pen plotters are used to generate drafting layouts and other drawings of normally larger sizes. A pen plotter has one or more pens of different colors and widths mounted on a carriage which spans a sheet of paper.
2.4 Input Devices
The input devices provide a means by which a user can interact with the system by providing an appropriate stimulus to the graphics system.
The input devices are categorized into two types:
Physical input devices and
Logical input devices
2.4.1 Physical input devices
Common devices: keyboard, mouse, trackball and joystick
Specialized devices are:
1. Data gloves are electronic gloves for detecting fingers' movement. In some applications, a sensor is also attached to the glove to detect the hand movement as a whole in 3D space.
2. A graphics tablet contains a stylus and a drawing surface and is mainly used for the input of drawings. A tablet is usually more accurate than a mouse, and is commonly used for large drawings.
3. Scanners are used to convert drawings or pictures in hardcopy format into digital signals for computer processing.
4. Touch panels allow displayed objects or screen positions to be selected with the touch of a finger. In these devices a touch-sensing mechanism is fitted over the video monitor screen. Touch input can be recorded using optical, electrical, or acoustical methods.
2.5 CPU and GPU
In a simple system, there may be only one processor, the central processing unit (CPU) of the system, which must do both the normal processing and the graphical processing. The main graphical function of the processor is to take specifications of graphical primitives (such as lines, circles, and polygons) generated by application programs and to assign values to the pixels in the frame buffer that best represent these entities. For example, a triangle is specified by its three vertices, but to display its outline by the three line segments connecting the vertices, the graphics system must generate a set of pixels that appear as line segments to the viewer. The conversion of geometric entities to pixel colors and locations in the frame buffer is known as rasterization, or scan conversion.
In early graphics systems, the frame buffer was part of the standard memory that could be directly addressed by the CPU. Today, virtually all graphics systems are characterized by special-purpose graphics processing units (GPUs), custom-tailored to carry out specific graphics functions. The GPU can be either on the motherboard of the system or on a graphics card. The frame buffer is accessed through the graphics processing unit and usually is on the same circuit board as the GPU.
One way to set up the organization of a raster system is to provide it with a separate display processor, sometimes referred to as a graphics controller or a display coprocessor. The purpose of the display processor is to free the CPU from the graphics chores. In addition to the system memory, a separate display-processor memory area can also be provided. A major task of the display processor is digitizing a picture definition given in an application program into a set of pixel-intensity values for storage in the frame buffer.
Display processors are also designed to perform a number of additional operations. These functions include generating various line styles, displaying color areas, and performing certain transformations and manipulations on displayed objects. Also, display processors are typically designed to interface with interactive input devices, such as a mouse.
In random scan displays, the display processor has its own instruction set and instruction address register. Hence it is also called the Display Processing Unit (DPU) or Graphics Controller. It performs the instruction fetch, decode and execute cycles found in any computer. To provide a flicker-free display, the display processor has to execute its program 30 to 60 times per second. The program executed by the display processor and the graphics package reside in the main memory. The main memory is shared by the general CPU and the display processor.
LESSON 3: OUTPUT PRIMITIVES
Lesson Objectives
Meaning of output primitives
Points
Lines and line drawing algorithms
Attributes of output primitives
Introduction
Usually, larger images are formed by combining smaller images. To understand this, let us take the example of drawing a rectangle. The rectangle can be thought of as a combination of four lines; in this case the lines are the basic shapes that create the rectangle. In other words, if we are able to draw a line then we are automatically also able to draw a rectangle.
Output primitives are the basic geometric structures (points, straight line segments, circles and other conic sections, quadric surfaces, spline curves and surfaces, polygon color areas, and character strings). In this course we will basically look at the point and line primitives and some algorithms used to generate them.
In order to draw the primitive objects, one first has to scan convert the object.
Scan convert: refers to the operation of finding the locations of the pixels to be intensified and then setting the values of the corresponding bits, in the graphics memory, to the desired intensity code.
Points
A point is the fundamental element of picture representation. It is nothing but a position in a plane, defined as either a pair or a triplet of numbers depending on whether the data are two or three dimensional. Thus, (x, y) or (x, y, z) would represent a point in two or three dimensional space respectively.
Suppose you have a mathematical point (x, y); it needs to be scan converted to a pixel at location (x', y') so that it can be displayed on the screen.
To get the values of x' and y' we convert x and y to whole numbers, since pixel coordinates must be integers. Taking the integer part, all the points which satisfy the relations
x' ≤ x < x' + 1 and y' ≤ y < y' + 1
are mapped to pixel (x', y'). See the figure for clarification.
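A small illustrative Python sketch of this point-to-pixel mapping (assuming non-negative screen coordinates; not part of the original notes):

def point_to_pixel(x, y):
    # Take the integer part of each coordinate; int() truncates toward zero,
    # which equals the floor for non-negative values.
    return int(x), int(y)

print(point_to_pixel(4.4, 3.8))   # (4, 3)
print(point_to_pixel(4.7, 3.3))   # (4, 3) -- the same pixel, as in the earlier example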
Lines
Two points would represent a line or edge, and a collection of three or more points a polygon.
The representation of curved lines is usually accomplished by approximating them by short
straight line segments.
If the two points used to specify a line are (x1, y1) and (x2, y2), then an equation for the line is given as
y = m·x + b
where m is the slope of the line and b is its y-intercept.
We can obtain the values of m and b respectively by:
m = (y2 - y1) / (x2 - x1)
and
b = y1 - m·x1
The above equation is called the slope-intercept form of the line. The slope m is the change in height divided by the change in width for two points on the line. The intercept b is the height at which the line crosses the y-axis.
For lines with slope magnitude |m| < 1, Δx can be set proportional to a small horizontal deflection voltage, and the corresponding vertical deflection is then set proportional to Δy = m·Δx.
For lines with slope magnitude |m| > 1, Δy can be set proportional to a small vertical deflection voltage, with the corresponding horizontal deflection voltage set proportional to Δx = Δy / m.
For lines with |m| = 1, Δx = Δy and the horizontal and vertical deflection voltages are equal.
Line Drawing Algorithms
The process of 'turning on' the pixels for a line segment is called vector generation or line generation, and the algorithms for it are known as vector generation algorithms or line drawing algorithms.
Before discussing specific line drawing algorithms it is useful to note the general requirements for such algorithms. These requirements specify the desired characteristics of a line:
The line should appear as a straight line and it should start and end accurately.
The line should be displayed with constant brightness along its length, independent of its length and orientation.
The line should be drawn rapidly.
Let us see the different lines drawn in the figure below.
As shown in the figure, horizontal and vertical lines are straight and have the same width. The 45° line is straight, but its width is not constant. On the other hand, a line with any other orientation is neither straight nor of constant width. Such cases are due to the finite resolution of the display, and we have to accept approximate pixels in such situations. The brightness of the line is dependent on its orientation. We can observe that the effective spacing between pixels for the 45° line is greater than for the vertical and horizontal lines. This will make the vertical and horizontal lines appear brighter than the 45° line. Complex calculations are required to provide equal brightness along lines of varying length and orientation.
Simple/direct line drawing algorithm
This algorithm uses the line equation to generate the points composing the line. Basically, the following equations are needed to implement this algorithm:
y = m·x + b, with m = (y2 - y1) / (x2 - x1) and b = y1 - m·x1
The following diagram shows the value of b when a line is drawn.
To generate the successive points between the starting point and the last point, we step x by one unit at a time and compute the corresponding y from the line equation:
x(k+1) = x(k) + 1, y(k+1) = m·x(k+1) + b
One major disadvantage of this algorithm is that it involves many floating point operations, as a result of which drawing becomes time consuming. Another disadvantage is that there is a lot of multiplication in the algorithm, which results in the program being slow and less efficient.
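A sketch of this direct, equation-based approach is given below (Python, illustrative only; set_pixel stands in for whatever routine writes a pixel to the frame buffer and is an assumed helper, not defined in the notes). Note that this simple version steps along x, so it leaves gaps for slopes greater than 1; that limitation, together with the repeated floating-point multiplication, is what the DDA algorithm in the next section addresses.

# Direct line-drawing sketch using y = m*x + b (end points are integer pixel coordinates).
def draw_line_direct(x1, y1, x2, y2, set_pixel):
    if x1 == x2:                         # vertical line: slope is undefined
        for y in range(min(y1, y2), max(y1, y2) + 1):
            set_pixel(x1, y)
        return
    m = (y2 - y1) / (x2 - x1)            # slope
    b = y1 - m * x1                      # y-intercept
    if x2 < x1:
        x1, x2 = x2, x1                  # always step from left to right
    for x in range(x1, x2 + 1):          # one floating-point multiply per pixel
        y = m * x + b
        set_pixel(x, round(y))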
26 | P a g e
Vector Generation / Digital Differential Analyzer (DDA) Line Algorithm
The Digital Differential Analyzer (DDA) is a scan-conversion line algorithm based on calculating either Δy or Δx. We sample the line at unit intervals in one coordinate and determine the corresponding integer values nearest the line path for the other coordinate.
Consider first a line with positive slope, as shown in the figure above. If the slope is less than or equal to 1, we sample at unit x intervals (Δx = 1) and compute each successive y value as:
y(k+1) = y(k) + m
The subscript k takes integer values starting from 1, for the first point, and increases by 1 until the final end point is reached.
For lines with a positive slope greater than 1, we reverse the roles of x and y. That is, we sample at unit y intervals (Δy = 1) and calculate each succeeding x value as:
x(k+1) = x(k) + 1/m
The above equations are based on the assumption that lines are to be processed from the left end point to the right end point. If this processing is reversed, so that the starting end point is at the right, the sign of the increment is changed: we set Δx = -1 and obtain the y positions from
y(k+1) = y(k) - m
Similarly, when the absolute value of a negative slope is greater than 1, we use Δy = -1 and
x(k+1) = x(k) - 1/m
These equations can also be used to calculate pixel positions along a line with negative slope.
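The incremental idea above can be sketched as follows (Python, illustrative only; set_pixel is again an assumed pixel-writing helper). Sampling along whichever coordinate changes fastest handles all slopes and directions with one piece of code:

# DDA line-drawing sketch (end points are integer pixel coordinates).
def draw_line_dda(x1, y1, x2, y2, set_pixel):
    dx = x2 - x1
    dy = y2 - y1
    steps = max(abs(dx), abs(dy))        # sample along the fastest-changing coordinate
    if steps == 0:
        set_pixel(round(x1), round(y1))  # degenerate case: a single point
        return
    x_inc = dx / steps                   # either +/-1 or +/-(1/m)
    y_inc = dy / steps                   # either +/-m or +/-1
    x, y = x1, y1
    for _ in range(steps + 1):
        set_pixel(round(x), round(y))    # round to the nearest pixel
        x += x_inc                       # incremental update: additions only,
        y += y_inc                       # no multiplication inside the loop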
Advantages of the DDA algorithm
It is the simplest algorithm and it does not require special skills to implement.
It is a faster method for calculating pixel positions than direct use of the equation y = m·x + b. It eliminates the multiplication in the equation by making use of raster characteristics, so that appropriate increments are applied in the x or y direction to find the pixel positions along the line path.
Disadvantages of the DDA algorithm
Floating point arithmetic in the DDA algorithm is still time-consuming.
The algorithm is orientation dependent, so the end point accuracy is poor.
Attributes of output primitives
The attributes give additional information about a given primitive. They are parameters that affect the way a certain primitive will be displayed.
The line primitive has the following attributes:
1. Type, e.g. solid, dotted, dashed, etc.
2. Width - the thickness of the line
3. Line caps and joins - miter, round, bevel, etc.
4. Color - blue, yellow, etc.
5. Line brush
LESSON 4: INTRODUCTION TO 2D VIEWING AND CLIPPING
PART I: VIEWING
Introduction
For a computer graphics object to be displayed on the screen or any other output device, it passes through different steps. In each step the object is represented in a given space (plane), resulting in different coordinate systems.
Objects in a 2D or 3D scene, and the scene itself, are sequentially converted, or transformed, through several spaces when proceeding through the 3D pipeline.
Coordinate Representations (Spaces) in Graphics
General graphics packages are designed to be used with Cartesian coordinate representations. Usually several different Cartesian reference frames are used to construct and display a scene:
Modeling coordinates are used to construct individual object shapes. Each model is in its own coordinate system, whose origin is some point on the model, such as the right foot of a soccer player model. Also, the model will typically have a control point or "handle" from which a reference is made when moving or rotating the model.
World coordinates are computed for specifying the placement of individual objects in appropriate positions. Using these coordinates the models are placed in the actual 3D world, in a unified world coordinate system, i.e. all model coordinates are transformed to this coordinate system. Sometimes this coordinate system is called a universal coordinate system. The OpenGL API doesn't really have a world space.
Normalized coordinates are converted from world coordinates, such that x, y values range from 0 to 1. Normalized coordinates make it easy for an object to be mapped to any device with different sizes.
Device coordinates are the final locations on the output device. For example, the printer has its own dimensions and hence its own coordinates, and so do plotters and other devices.
Viewing
When we define an image in some world coordinate system, to display that image we must map the image to the physical output device. This is a two-stage process. For 3-dimensional images we must first determine the 3D camera viewpoint, called the View Reference Point (VRP), and orientation. Then we project from 3D to 2D, since our display device is 2-dimensional. Next, we must map the 2D representation to the physical device. We will first discuss the concept of a window on the world (WDC), then a viewport (in NDC), and finally the mapping from WDC to NDC to PDC.
Window and Viewport
(World) Window: the rectangle defining the part of the world we wish to display. In other words, the window defines what is to be displayed.
Viewport: the rectangle within the screen window defining where the image will appear. That is, the viewport defines where it is to be displayed.
To describe these concepts, see the figure below.
The following figure shows how the window and the viewport work. The first (left) figure shows the window applied to a given shape, and the second (right) shows a viewport displaying only the part of the image defined by the window.
Window-Viewport Transformation
This is the process of mapping from a window in world coordinates to a viewport in screen coordinates.
The window-to-viewport transformation maintains the relative position of a point in the window as well as in the viewport. A point at position (xw, yw) in the window is mapped into position (xv, yv) in the associated viewport.
Consider the following figures, in which the point (xw, yw) from the window is mapped to the point (xv, yv) on the viewport.
In order to maintain the relative position of the point in the window and the viewport, we use the following relations:
(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin) ......... (1)
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin) ......... (2)
From equation (1):
xv = xvmin + (xw - xwmin) · sx ......... (3)
and from equation (2):
yv = yvmin + (yw - ywmin) · sy ......... (4)
where sx and sy are scaling factors, given as shown below:
sx = (xvmax - xvmin) / (xwmax - xwmin) ......... (5)
sy = (yvmax - yvmin) / (ywmax - ywmin) ......... (6)
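Equations (3) to (6) translate directly into a small mapping function. The sketch below is an illustrative Python addition to the notes, with the window and viewport each given as (min x, min y, max x, max y) tuples:

# Window-to-viewport mapping sketch (illustrative only).
def window_to_viewport(xw, yw, window, viewport):
    xw_min, yw_min, xw_max, yw_max = window
    xv_min, yv_min, xv_max, yv_max = viewport
    sx = (xv_max - xv_min) / (xw_max - xw_min)   # equation (5)
    sy = (yv_max - yv_min) / (yw_max - yw_min)   # equation (6)
    xv = xv_min + (xw - xw_min) * sx             # equation (3)
    yv = yv_min + (yw - yw_min) * sy             # equation (4)
    return xv, yv

# Example: map the window (0, 0)-(100, 100) onto the viewport (50, 50)-(250, 150).
print(window_to_viewport(40, 80, (0, 0, 100, 100), (50, 50, 250, 150)))  # (130.0, 130.0)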
Applications of the window-to-viewport transformation
1. Panning: moving the window about the world coordinate system. This repositions an object at a different location in the displayed view.
2. Zooming: reducing or increasing the window size. As the window increases in size, the image in the viewport decreases in size, and vice versa.
PART II: CLIPPING
Many graphics application programs give the user the impression of looking through a window at a very large picture. The program makes use of scaling and translation techniques to generate a variety of different views of a single representation of a plan.
To display an enlarged portion of a picture, we must not only apply the appropriate scaling and translation but also identify the visible parts of the picture for inclusion in the displayed image. Certain lines may lie partly inside the visible portion of the picture and partly outside.
The correct way to select visible information for display is to use clipping, a process which divides each element of the picture into its visible and invisible portions, allowing the invisible portions to be discarded.
There are different types of clipping; these include point clipping, line clipping, polygon clipping, etc.
Point Clipping
Point clipping determines whether a point (X, Y) is visible or not by a simple pair of inequalities:
Xleft ≤ X ≤ Xright
Ybottom ≤ Y ≤ Ytop
where Xleft, Xright, Ybottom and Ytop are the positions of the edges of the clipping window.
These inequalities provide us with a very simple method of clipping pictures on a point-by-point basis.
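A minimal illustrative Python sketch of point clipping against a window (not part of the original notes):

# A point is kept only if it lies inside the clipping window.
def clip_point(x, y, x_left, x_right, y_bottom, y_top):
    return x_left <= x <= x_right and y_bottom <= y <= y_top

print(clip_point(5, 5, 0, 10, 0, 10))    # True  -- inside, so it is drawn
print(clip_point(15, 5, 0, 10, 0, 10))   # False -- outside, so it is discarded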
Line clipping
It would be quite inappropriate to clip pictures by converting all picture elements into points and using point clipping; the clipping process would take far too long. We must instead attempt to clip larger elements of the picture.
The following figure shows a number of different lines with respect to the screen.
Notice that those lines which are partly invisible are divided by the screen boundary into one or more invisible portions but into only one visible segment.
This means that the visible segment of a straight line can be determined simply by computing its two end points.
We divide the line clipping process into two phases:
1- Identify those lines which intersect the window and so need to be clipped.
2- Perform the clipping.
All line segments fall into one of the following clipping categories:
1- Visible: both end points of the line segment lie within the window.
2- Not visible: the line segment definitely lies outside the window. This will occur if the line segment from (X1, Y1) to (X2, Y2) satisfies any one of the following four inequalities:
X1, X2 > Xmax;   Y1, Y2 > Ymax;
X1, X2 < Xmin;   Y1, Y2 < Ymin
3- Clipping candidate: the line is in neither category 1 nor category 2.
Consider the following figure:
Line segment AB is in category 1 (visible).
Line segments CD and EF are in category 2 (invisible).
Line segments GH, IJ and KL are in category 3 (clipping candidates).
There are many algorithms used in line clipping; these include Cohen-Sutherland, Liang-Barsky, etc.
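As an illustrative sketch of how the category test can be carried out, the Python region codes below follow the Cohen-Sutherland idea of classifying each endpoint against the four window edges; only the trivial accept/reject tests described above are shown, not the full clipping of category-3 lines.

# Region-code (outcode) sketch in the Cohen-Sutherland style (illustrative only).
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, x_min, x_max, y_min, y_max):
    code = 0
    if x < x_min:
        code |= LEFT
    elif x > x_max:
        code |= RIGHT
    if y < y_min:
        code |= BOTTOM
    elif y > y_max:
        code |= TOP
    return code

def classify_segment(x1, y1, x2, y2, x_min, x_max, y_min, y_max):
    c1 = outcode(x1, y1, x_min, x_max, y_min, y_max)
    c2 = outcode(x2, y2, x_min, x_max, y_min, y_max)
    if c1 == 0 and c2 == 0:
        return "visible"              # category 1: both endpoints inside
    if c1 & c2 != 0:
        return "not visible"          # category 2: both outside the same edge
    return "clipping candidate"       # category 3: intersections must be computed

print(classify_segment(2, 2, 8, 8, 0, 10, 0, 10))     # visible
print(classify_segment(12, 2, 15, 8, 0, 10, 0, 10))   # not visible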
Polygon clipping
Line clipping is acceptable when the output can be a set of disconnected line segments. There are, however, situations in which a polygon clipped against a window should result in one or more polygons.
The following example illustrates a simple case of polygon clipping.
One popular polygon clipping algorithm is the Sutherland-Hodgman polygon-clipping algorithm. The Sutherland-Hodgman polygon clipping algorithm clips a polygon against each edge of the window, one at a time. Specifically, for each window edge it inputs a list of vertices and outputs a new list of vertices, which is submitted to the algorithm for clipping against the next window edge.
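A compact illustrative Python sketch of the Sutherland-Hodgman idea, clipping a polygon (a list of (x, y) vertex tuples) against the four edges of a rectangular window in turn, is given below. It is an assumption-laden teaching sketch, not a production implementation, and it relies on the clip region being the rectangular window described in the notes.

# Sutherland-Hodgman polygon-clipping sketch against a rectangular window (illustrative only).
def clip_polygon(polygon, x_min, y_min, x_max, y_max):
    def inside(p, edge):
        x, y = p
        return {"left": x >= x_min, "right": x <= x_max,
                "bottom": y >= y_min, "top": y <= y_max}[edge]

    def intersect(p1, p2, edge):
        # Intersection of segment p1-p2 with one window edge.
        (x1, y1), (x2, y2) = p1, p2
        if edge in ("left", "right"):
            x = x_min if edge == "left" else x_max
            t = (x - x1) / (x2 - x1)
            return (x, y1 + t * (y2 - y1))
        y = y_min if edge == "bottom" else y_max
        t = (y - y1) / (y2 - y1)
        return (x1 + t * (x2 - x1), y)

    output = list(polygon)
    for edge in ("left", "right", "bottom", "top"):
        input_list, output = output, []
        if not input_list:
            break                               # polygon is entirely outside
        s = input_list[-1]                      # previous vertex
        for p in input_list:
            if inside(p, edge):
                if not inside(s, edge):
                    output.append(intersect(s, p, edge))   # entering the window
                output.append(p)
            elif inside(s, edge):
                output.append(intersect(s, p, edge))       # leaving the window
            s = p
    return output

# Example: a triangle partly outside the unit window.
print(clip_polygon([(0.5, 0.5), (1.5, 0.5), (0.5, 1.5)], 0, 0, 1, 1))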