Operations Management. Total Quality Management

Chapter 1

1.1. Presentation of the company

“IMPACT SA”

In 1991, when it was founded, the IMPACT SA company dealt with the rental of its own real estate, but beginning in 1995 the company's activity has concentrated mostly on the construction of houses in its own residential ensembles.

The company's launch on the real estate market began with its first residential project – the Alfa Ensemble, which contains 40 houses and was finalized in September 2001.

In the first ensemble, IMPACT built only luxury villas. The successful completion of this first real estate project brought the company recognition among the real estate developers operating on the Bucharest market.

By starting the project with its own resources and with the aid of a long-term bank loan (payable within 15 months) contracted in 1995, IMPACT succeeded in proving the viability and success of a well-thought-out business.

In 1998, IMPACT started a second project – the Beta Ensemble – with 70 houses: luxury villas and a new product, accessible villas. With this project the company truly began diversifying its products, adapting to market demand and offering a large number of people the possibility to build their own home.

The Gamma Ensemble, the third, was started in 1999, and the Delta Ensemble was started in 2000.

Together, these ensembles contain over 300 houses of different architectural styles and prices: luxury villas, accessible villas and others. In September 2001, construction of the first timber-made house began in the Delta Ensemble.

In July 2001 construction began on the Epsilon Ensemble, which, besides its 56 houses – mostly luxury and accessible villas – will comprise an office building for the company.

The year 2002 began with the expansion of the company's activity through the construction of its own commercial centre for materials and construction systems – DePact.

The main activities of the centre are the warehousing and sale of materials and construction systems. A large part of the materials are produced by DePact's subdivisions: SitPact, LePact, MePact and FerPact.

The services offered by IMPACT are construction consultancy, financial consultancy and real estate consultancy; the firm also has extensive experience in the construction of halls and warehouses, office buildings and commercial centres.

The company has a design team of 30 specialists – architects, structural engineers, plumbing engineers, systems engineers and project managers – with extensive experience in the construction of houses and residential ensembles.

The company's clients also benefit from high-performance working tools, design software, IT systems and standard design templates.

Work on the building site is supervised by a team with substantial experience in supervising house construction. As for quality, IMPACT guarantees its work.

In the field of interior and exterior design, IMPACT's professionals have seven years of experience, specialized in different degrees of finishing and in different types of villas – luxury and accessible ones.

A special team, organized on the "single office" principle, specialized and with know-how in the field, deals with notifications and authorizations.

The company has multiple subsidiaries organized by domain of activity: DePact, MePact, FerPact, SitPact and LePact.

DePact is the most important of them and offers a whole set of advantages:

Free consulting on the use of the products sold.

Packaging of the purchased materials for easy handling and to prevent deterioration.

Quick invoicing using a bar-code system.

Free transportation for large loads.

The possibility of delivery to an indicated address.

Optionally, payment can be made by instalments.

DePact sells: bricks, autoclaved aerated concrete (B.C.A.), cement, lime, plaster, gypsum and accessories, tile, timber, tego panels, rolled goods, iron plates, wire, fasteners (screws, nuts, washers, dowels), cables and electrical conductors, electrical machinery, technical lighting assemblies, polystyrene, mineral wool, tools (mechanical and electrical), thermal wall stations, radiators (steel and aluminium), pipes (polypropylene, polyethylene, PEX-AL, copper), fittings, plumbing and sanitary ware, washable paint, paint for wood and metal, finishing plaster coats, lacquers, adhesives, putties, silicone putties, polyurethane foam, parquetry, gritstones, faience, P.V.C. profiles, P.V.C. sheet and abrasive products.

MePact's offer is generous, covering all kinds of metallic constructions and accessories:

Metallic constructions for industrial halls.

Lattice work for doors, windows and balconies.

Fences and metallic gates.

Metallic hutments.

Metallic doors for industrial constructions, and fire-resistant doors.

Metallic containers, boxes and cassettes for electrical panels and gas regulators.

Spiral stairs and metallic banisters.

Metallic structures for roofing.

Connectors for wood constructions.

Metallic pillars for lighting.

Tinsmith work (drains, gutters, hooks).

Metallic plumbing, ventilation and stovepipes for fireplaces.

Metallic shuttering for prefabricated elements.

At the customer's request, the elements are specially treated: primer coating and painting. The firm provides transportation for voluminous elements only for distances under 15 km. The company also guarantees execution based on a project for every element. The machinery used is high-performance and the team has 10 years of experience, all the products having already been executed for IMPACT customers.

FerPact offers its clients a whole set of products, such as:

Reinforcement bars and metallic structures used in reinforcing the load-bearing elements of a construction.

Girdles

Beams

Pillars

Lintels

Cradle stirrups

Foundation girdles.

SitPact offers its clients all the elements of P.V.C. technology:

Windows and doors with P.V.C. joinery.

Tilting windows.

Fully glazed closures, fully glazed walls, greenhouse walls with a 90-120 degree opening angle, glazed entrance-hall walls, attic walls, bow windows.

Sliding doors.

General characteristics of the PLUS TEC profiles:

PLUS TEC profiles are manufactured in Germany by the Plus Plan Kunststoff- und Verfahrenstechnik GmbH company and are approved on the Romanian market by the Technical Approval Commission in the construction domain. The physical-mechanical characteristics of the profiles and of the products made from them were checked through specific tests according to the RAL RG 716/1 norm, elaborated by the RAL German Institute for Quality Standards and Trade Marks.

Composition.

The P.V.C. profiles are made from a polyvinyl composition without any trace of lead, stabilized with calcium-zinc in order to meet the most severe ecological standards to be adopted by the European Union countries.

Clear thermopane glazing, 4-16-4 mm, Saint-Gobain, Pilkington.

German steel reinforcement, found in the upright beams as well as in the crosspieces, provides optimal acoustic insulation (34-42 dB) and thermal insulation (1.4 W/m²K).

The P.V.C. profiles permit the installation of glazing up to 54 mm thick, with well-hidden gaskets and a modern geometry.

In the case of normal windows, the same type of reinforcement is used for the frame as well as for the transom.

The advantages offered by SitPact are:

German quality.

Transportation is included in the price.

Free dismantling of the existing windows.

Free measurements made by the company’s personnel.

A one-year warranty starting from the assembly date.

LePact's offer:

Interior products:

Interior timber stairs, treads, risers, handrails.

Flooring.

Attic panelling.

Doors.

Profiled mouldings.

Furniture – tables, chairs, benches, bunk beds.

Exterior products:

Roof elements – fascia board, wedge.

Rafters.

Fence.

IMPACT offers its customers a whole range of houses – luxury and accessible ones for all types of clients and budgets: luxury villas, Prestige villas, Freedom villas, vacation houses, warehouses and depots.

The company has a single branch, at Constanta, which is in charge of the construction of the Boreal ensemble. The project began in June this year and is estimated to be finished in 2007. The ensemble is the first residential ensemble in Constanta and the first real estate project started by IMPACT outside Bucharest.

The total surface of the Boreal ensemble is 90,000 m², containing a total of 238 parcels with luxury villas, accessible villas and vacation houses. The total value of the investment is $25 million.

The firm has also developed important projects such as the Class ensemble and the Jeans ensemble.

The Class ensemble is situated in the northern area of Bucharest and covers a large surface of 75,000 m². The project will comprise 23 Deluxe villas and 107 Prestige villas (accessible ones).

The architectural projects comprise 12 different types of houses with surfaces between 130 and 315 m².

The Jeans ensemble is intended for young families and teenagers. It comprises 2 hostels, each with a ground floor and 2 levels and 12 one-room flats. The total usable surface of the ensemble is 11,000 m².

The ensemble has a capacity of 96 places, each with a surface of 215 m². The total estimated value of the Jeans project is $5 million. The project was started in 2003 and is estimated to be finished in 2005.

1.2. Modalities of payment for the contracted houses

IMPACT offers its clients various flexible payment schedules for a house, depending on each customer's specific financial situation:

full payment at the moment the contract is signed – the client receives a 5% discount.

payment depending on the physical stages of the house:

foundations and structure up to level 0 – 20% payment.

The slab over the ground floor – 20% payment.

The slab over the first level and the framework – 20% payment.

Finishings – plumbing and sanitary installations – 40% payment.

instalments, with a minimum advance of 15%. The instalments can be contracted over a period of up to 6 years, with 14% interest (an illustrative instalment calculation is sketched below).
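
As a rough illustration of the instalment option, a standard annuity calculation can be applied to the terms above (15% minimum advance, 14% annual interest, up to 6 years). The sketch below is not from the source; the house price and function names are assumptions chosen for the example.

```python
# Minimal sketch (not IMPACT's actual method): fixed monthly instalment on
# the balance financed after the advance, via the standard annuity formula.

def monthly_instalment(price, advance_share=0.15, annual_rate=0.14, years=6):
    principal = price * (1 - advance_share)  # balance left after the advance
    r = annual_rate / 12                     # monthly interest rate
    n = years * 12                           # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Example: a $50,000 house bought with the minimum 15% advance
print(round(monthly_instalment(50_000), 2))  # ~875.74 per month
```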

The client also has the possibility to pay intermediate sums larger than the initial instalments, in which case one of the following alternatives can be adopted:

maintaining the contracted instalment value, but reducing the duration of the contract.

Maintaining the number of instalments initially established, but reducing their value together with the corresponding interest.

The client can also pay the instalments quarterly, namely in the second month of each quarter.

If the sum paid exceeds the value of the work carried out by Impact up to that moment, the client receives 10% interest until the sum paid falls below the value of the services rendered. The interest remains constant over the entire period of the contract.

After the signing of the contract and the payment of the advance, the instalments begin to fall due in the following month, no matter when the move takes place. Once the payment method has been established, it can be changed at the client's request, taking into account changes in his financial situation.

If the instalment payment method is chosen, the client does not need to present supplementary guarantees. He just has to contract a life and invalidity insurance policy, covering the remaining value to be paid and assigned to Impact.

After a certain period in which the contract has run under the instalment payment method, and if the client's financial situation allows it, he can pay the entire remaining value of the house. The same request can be made after obtaining a long-term bank loan from one of the banks that collaborate with Impact and offer such a product. For this procedure, customers can use the consultancy service of the company's economic department. The advantage of such a request is the reduction of the instalment, with the period increasing correspondingly. The guarantee requested by the banks for such a loan is a first-rank mortgage on the house, including the land on which it is built; thus, no supplementary guarantees are needed. Compulsory, of course, are a life insurance policy and a guarantor.

During the contract period, the house remains Impact's property or is mortgaged in the bank's favor.

There is no fixed, predefined schedule for paying the price. The method of payment for the house is established together with the beneficiary.

To guard against accidents or calamities, Impact contracts construction insurance policies, as well as fire policies after the construction is finished.

To cover the risk of exceeding the construction deadline, Impact has contracted policies covering deadline overruns caused by supplier fault.

In conclusion, the client has the assurance that the construction will be finished on time and that the sums paid can be reimbursed if the work is not finished – so the money is not lost.

BALANCE SHEET – IMPACT

1999

TOTAL ASSETS 67528696

Fixed assets (total) 28433799

Current assets (total) 35955871

Inventories 29790067

Cash and cash equivalents 3289444

Customers and similar accounts 1195559

Advances to suppliers 1399177

Other receivables 281624

Regulatory accounts and similar accounts 3139026

Discounts for bonds payable

TOTAL LIABILITIES AND SHAREHOLDERS’ EQUITY 67528696

Owner’s equity 31971952

Share capital 19428428

Subscribed and paid-in share capital 19428428

Régie’s patrimony

Public patrimony

Provisions for risks and charges

Total liabilities 35033339

Regulatory accounts 535404

PROFIT & LOSS INDICATORS FOR 1999

Turnover 37920980

Total revenues 44967054

Total expenses 30438622

Gross profit 14528432

Gross loss

Net profit 12350277

Net loss

BALANCE SHEET – IMPACT

2001

TOTAL ASSETS 75200062

Current assets 149860827

Inventories 91537046

Cash and cash equivalents 29061831

Customers and similar accounts 11391102

Other receivables 17870848

Owner’s equity 172039530

Subscribed and paid-in share capital 72194706

Régie’s patrimony

Public patrimony

Provisions for risks and charges 641069

TOTAL LIABILITIES 38317912

PROFIT & LOSS INDICATORS

2001

Turnover 221063460

Total revenue 255799419

Total expense 163961951

Gross profit 91837468

Gross loss

Net profit 78194447

Net loss

1.4. The financial situation at the beginning of the year 2003

The balance sheet, the profit and loss account and the cash flow analysis of IMPACT SA as of the 30th of July 2003 were audited and confirmed by SCC Scot & Company Consulting – Auditors, Accountants, Tax Advisors. Compared to the first six months of 2002, the profit and loss account and the balance sheet analysis at mid-2003 reveal the following:

- the increase of total revenues from 244 billion lei to 319 billion lei, an increase of 33.9%;

- the increase of total expenses from 132 billion lei to 198 billion lei, an increase of 53.3%;

- the increase of gross profit from 111 billion lei to 121 billion lei, an increase of 11.9%;

- the increase of net profit from 102 billion lei to 112 billion lei, an increase of 12.2%;

- the increase of current assets from 269 billion lei to 656 billion lei, an increase of 149.2%;

- the increase of total assets from 405 billion lei to 887 billion lei, an increase of 124.1%;

- the increase of share capital from 72 billion lei to 182 billion lei, an increase of 179.3%;

- the increase of owner's equity from 233 billion lei to 533 billion lei, an increase of 144.2%;

- the increase of liabilities from 178 billion lei to 310 billion lei, an increase of 79.7%.

In the first six months of 2003, IMPACT signed construction contracts with a total sales value of $7.3 million, compared to $4.589 million in the same period of 2002, representing an increase of 59%. The gross profit realized in the first six months was $3.726 million, representing 53.5% of the profit targeted for the whole year 2003 and approved by the AGAE on 25.04.2003.

Chapter 2

2.1. Designing For Customers’ Needs

2.1.1. Design: A Core Business Responsibility

A chronic weakness among many companies is the undermanagement of design and development. Perhaps part of the fault is that business schools have chronically underemphasized design as a core business responsibility. Ironically, we probably know more about poor design practices than we do about good ones. But design problems ripple into operations. Quality deteriorates, processing slows or stops, and costs mount. Customers look elsewhere. In short, competitiveness suffers.

Design as a Competitive Weapon

For an increasing number of companies, however, a lackadaisical attitude toward design is a thing of the past. Good product design makes a company more competitive, but when design efforts extend beyond products and into processes, the rewards are even greater.

Dual Focus: Outputs and Processes

Design has two main targets: the outputs (goods and services that customers want) and the processes to provide them.

Design of outputs largely defines the choice of processes and methods. In fact, as we shall see, one criterion of good product and service design is that it brings about ease of processing. Thus, we shall pay particular attention to how design-related activities affect operations and to the contributions operations managers make to design programs. For one thing, we recognize design as the first step in quality.

Designed-in Quality

The problem that plagued Black & Decker – tired product lines – is common to many well-regarded companies. Their goods fare well in the marketplace, which breeds the attitude, "Don't tamper with a winner!" But improved design concepts are taking root. Innovative competitors lurk, ready to beat the existing industry standards and to crow about it through performance-comparison advertising. For example, computer hardware and software providers almost always introduce new products by comparing their performance with that of the industry leaders.

Since new features stem from market feedback or reflect advances in technology, designs that incorporate those features are likely to advance quality in the eyes of customers. Thus, a strong design program, including one that wins prestigious design awards, is itself one mark of organizational quality. The quality of output goods and services – and the processes that provide them – begins during the initial design activities. The next example shows how an iterative action cycle of design, discovery, and improvement provides quality.

In practice, the five phases of the quality action cycle overlap, and sometimes we're even able to eliminate parts of the third (discovery) phase. Picture it this way: the ideal way to attain quality is to have perfectly designed outputs that are created without defect or variation by perfectly functioning processes. When perfection fails to materialize, the next best thing is to discover problems as soon as possible and work quickly to develop remedies for the underlying causes of those problems, before any "bad output" finds its way downstream to customers. Continuous improvement toward the ideal is the hallmark of total quality management.

Example 2.1. Quality Action Cycle

2.1.2. Research, Design, and Development Strategy

Often, design is financed through a company's research and development (R&D) budget. Research pushes the boundaries of science, aiming for new products, services, and processes, while development translates those innovations into useful tools for employees and/or into practical outputs for customers. Regardless of spending levels, however, an effective design strategy for any firm is one that overcomes the weaknesses inherent in conventional design efforts. But even that is not enough, for contemporary design programs must directly support immediate business needs. Also, customers want faster and better designs, which requires the use of modern design technology.

Weaknesses in Conventional Design

Sometimes customers sound off when they encounter poorly designed goods or services. They express themselves on the consumer information cards that accompany new products or on diverse feedback forms. The providers in these cases are fortunate, for they get the feedback necessary to make changes. But more often, customers just take their business elsewhere. Providers don't get the specific details, but – as was the case with Black & Decker – falling revenues signal possible design problems. Many of the problems deserving immediate attention can be traced to one or more of the historical weaknesses of conventional design:

Design is slow. Consequently, a product or service is late to market, arriving after competitors are entrenched. Or, ineffective transformation processes continue to operate because redesigns are delayed. Negative effects ripple through the three other primary line areas: (1) marketing must play catch-up in sales, and advertising can't tout the firm as the innovator; (2) operations also plays catch-up to competitors' post-introduction improvements; and (3) delayed financial returns prolong investment recovery. Exhibit 2.2 shows the financial consequences of being late to market.

Example 2.2 Cost of Arriving Late to Market

If a company is late to market by (months):   6     5     4     3     2     1
Then average gross profit lost is:            33%   25%   18%   13%   7%    3%

Design is myopic. This has been a common blind spot even for Western companies known for their commitment to research. Though the problem takes many forms, perhaps the classic example occurs when people take the word design to mean product design and pay little or no attention to the design of the processes – often made up largely of support services – needed to develop and commercialize discoveries. For service providers, design myopia acts in the same fashion: firms concentrate on the frontline service – the point of customer contact – and give scant attention to the design of back-office support services. Resulting problems in those areas detract from any high regard customers might have for the frontline service.

Design is staffed-off. In traditional design settings, designers are a breed apart – literally. They perform their work in isolation from their various customers, including the fellow employees who must transform the design into outputs and sell or distribute them. Designs falter because too many constituencies are not represented at the design table. When designs don't pan out, counterproductive finger-pointing ensues.

Design is unfocused. This problem may also appear in several forms, but all suggest the lack of clear guidelines for channeling the design program to remain true to function or purpose. Designers should devote primary attention to the core components that make the "thing" being designed perform its intended function. However, dedication to function has not been as strong in the history of North American design strategy. In their haste to embellish external appearance, designers have placed more emphasis on nonfunctional features than on the inner workings so critical to functional performance.

Individually, each design weakness is bad enough. Acting in concert, they give customers – internal as well as external – cause to scream. Frequently, the effects of weak design show up as shoddy performance in operations. For example, when a new product or service is a hit and demand exceeds expectations, operations scurries to increase production or service capacity. Undesirable results include poor recruiting, inadequate training, reliance on untested equipment and sources of supply, and postponement of essential maintenance. In these cases, all of the right people weren't included during the design discussions.

Broadly speaking:

Whenever a good or service can’t be easily built or installed, operated, delivered, or maintained, operations personnel are likely to take the heat for what is actually a manifestation of one or more of the traditional weaknesses of design.

When it won’t sell, marketing gets the blame.

If it costs too much to provide, accountants feel the pressure to sharpen their pencils.

In general, as industry has begun to recognize the effects of poor design programs, and as employees in all areas of organizations have begun to understand how design has been affecting their work, the reaction has been positive. Comprehensive design programs that attack traditional weaknesses – and do much more – have emerged.

Comprehensive Design Program

Design program specifics differ, of course, but superior companies seem to agree on several common characteristics that describe effective product, service, and process design. Exhibit 2.3. presents a model for a comprehensive design program composed of six integrated and overlapping parts.

The first two parts are strategic planning activities. First, senior managers decide which businesses to pursue and then select products and services to offer within the chosen industries; this determines the overall competitive environment in which the firm will operate. Next, the firm's design strategy must be positioned and implemented within that environment. This requires continuous environmental scanning and analysis – with specific attention to customers' changing wants and competitors' shifting abilities and weaknesses. Since environmental change often evokes modifications in business strategy, some reformulation of design strategy might be needed.

With strategy set, implementation puts that strategy into action. Among companies with leading R&D programs, the multi- or cross-functional team concept is the vehicle of choice for addressing all of the historical design weaknesses. When teams also include customers and suppliers – all close enough to share a coffeepot – design efforts become fully sensitized.

The bottom three boxes in Exhibit 2.3 represent broad categories of design team responsibilities. We've mentioned the collection of customer and competitor data as it affects design strategy, but such information also plays a tactical role during design.

Exhibit 2.3. Comprehensive Design Program


Technology in design

Arguably, the impact of computer-related technologies is at its most spectacular in the design arena. For decades, increasing processor power coupled with a vast array of commercial and personal software has made computer-aided design (CAD) a natural fit even for the smallest of companies. The impressive displays of CAD applications in the design of complex durable goods sometimes make us forget that the personal-computer-based applications for the design of business cards, forms, and so forth are really another form of CAD.

For manufacturers of complex products, more specific and powerful computer-aided engineering (CAE) applications assist with design concept development and with the specification and analysis of desirable functions. Process design may be enhanced through computer-aided process planning (CAPP), often aimed at extending design specifications into the operations activities needed to create the part. More broadly, process design is often accompanied by process simulation – trial runs of a computer model of a production flow-control process.
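
To make the idea of process simulation concrete, here is a minimal sketch of a trial run of a two-stage production flow; the stage times, run length, and function name are invented for illustration, not taken from any particular CAPP package.

```python
# Monte Carlo trial run of a two-stage production line with random task times.
import random

def simulate_line(jobs=1000, seed=1):
    random.seed(seed)
    finish1 = finish2 = 0.0  # completion time of the latest job at each stage
    for _ in range(jobs):
        finish1 += random.uniform(4, 6)                         # stage 1: 4-6 min/job
        finish2 = max(finish1, finish2) + random.uniform(3, 7)  # stage 2 waits for stage 1
    return finish2 / jobs    # average minutes per finished job

print(f"one job completed every {simulate_line():.2f} minutes on average")
```

Even a toy model like this shows where queues form and how variability at one stage throttles the whole flow – the kind of insight the designers seek before committing to a process.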

Design Teams

The power of teams throughout organizations' activities is something we will return to, but what specifically do multifunctional teams do to improve design? Quite a bit. Furthermore, benefits accrue to both providers and customers: providers overcome weaknesses in traditional design with the powerful design techniques and tools that teams make possible; customers enjoy improvements evident in final designs, to which they often have made direct contributions. We will discuss two of the more significant items in each category. First, teams facilitate concept development and make concurrent design possible. Second, when design is team based, products, services and processes are more likely to be socially responsible.

Concept Development

At the heart of any design program lies the notion of concept development. Crawford defines a design concept as "a combination of verbal and/or prototype form that tells what is going to be changed and how the customer stands to gain (and lose)." Three essential parts must exist for an idea or plan to be elevated to the status of a concept:

Form. This is the physical thing itself – its shape, materials content, and so on. In the case of a service, form is often described by the steps needed to provide the service.

Technology. The principles, techniques, equipment, mechanics, policies, and so forth to be employed in creating or attaining the good or service collectively constitute the technology. Examples include a particular assembly sequence or delivery plan.

Benefit. Benefit is the value the customer plans to derive from a good or service.

According to Ulrich and Eppinger, design concept development begins with concept generation, itself a procedure. It transforms a set of customer needs and target specifications into a set of possible design concepts from which the team will select the most promising alternative. One part of concept generation is competitive analysis – investigation of competitors' offerings.

For services, competitive analysis requires going to the competitor, being served, and taking extensive notes for later use by your own service design teams. In manufacturing, the usual procedure is to buy a competitor's product and bring it in for thorough study, perhaps including complete disassembly, which is called reverse engineering.

After selection of the most promising design concept, development continues with refinement of specifications, economic analyses, and other fine-tuning activities. As the design nears final form, interest naturally shifts to production and delivery systems. Again, with effective use of team-based design, the transition will be smooth, for those charged with process design and operation will have already been active contributors to the design effort. That is, concurrent design will be ongoing.

Concurrent Design

Concurrent design, also known as simultaneous engineering, occurs when contributors to an overall design effort provide their expertise at the same time, working as a team rather than as isolated functional specialists working in serial fashion. Concurrent design, sometimes with much input from suppliers, is the norm in many leading companies.

From an operations perspective, significant benefits stem from getting those who design, operate, and maintain transformation processes included early and on the same team as those who design products. Though our focus is operations, we emphasize that a full concurrent design team also includes people from marketing, finance, purchasing, human resources, and other inside departments; customers, suppliers, and freight carriers; and perhaps community and regulatory officials.

When a business is young or small, teaming up for effective design is easy and natural. But as the firm grows, people split off into functional specialties. In that overspecialized system, product designers are accused of "throwing the design over the wall" to manufacturing or service process designers and saying, in effect, "Let's see you figure out how to build that!" When process planners need to make changes, the design goes back "over the wall" to product designers, with the implication that "If you had been smart enough to create a producible design in the first place …"

Even when product life cycles are long, concurrent design has merit. For one thing, it avoids time-consuming misunderstandings and costly "do-overs" during the design phase. For another, it reduces costly bugs, errors, rework, and warranty claims during the production, delivery and customer-use phases.

Concurrent Design at Rubbermaid

Unlike many consumer-product companies, Rubbermaid does not use test marketing. Instead, it has created entrepreneurial teams of five to seven members in each of its four dozen product categories. Each team includes a product manager, research and manufacturing engineers, and financial, sales, and marketing executives. The teams conceive their own products, shepherding them from the design stage to the marketplace.

Socially Responsible Design

The team approach has yet another advantage. When we truly think about including all customer constituencies, a broader range of design functions emerges and that, in turn, suggests more design options. Environmentally friendly designs fall under the umbrella of social responsibility.

Sometimes reaction to a social concern opens up a set of promising new design options. For example, in attempting to design products easily operable by disabled consumers, designers have unearthed an attractive new approach called universal design.

Universal designs are good news for the maker because they mean greater standardization. Also, design teams can focus their energies on perfecting the fewer designs they work with. Provider and customer both benefit; it's a win-win situation. We will see next how those benefits and others can be integrated within the context of a powerful yet relatively new tool – quality function deployment.

Quality Function Deployment

Quality function deployment is a procedure for transforming customer requirements and competitors' capabilities into provider targets, extending from design research to operations, marketing and distribution.

Quality function deployment (QFD) provides a structured way of viewing the big picture and of organizing the details of both product and process design. The structure comes from a series of matrices. The first and most important matrix spells out customer needs – the "voice of the customer" – and compares the company's and key competitors' abilities to satisfy those needs. When the matrix is filled in and "roofed over", it takes the shape of a house – the house of quality.

The tough part for a design team is getting good data to enter into the matrix. Data sources may include focus groups, surveys, studies, comparison shopping, competitive analysis, public information, calculations and reckoning.

The design team uses the basic house of quality in the product-planning stage. More detailed matrices may be developed for the three remaining stages of design – that is, product design, process planning, and process control planning. (A small numerical sketch of the first matrix follows Exhibit 2.4.)

EXHIBIT 2.4. QFD Overview

Customer requirements

Design requirements

Part/item characteristics

Process operations

Operations requirements
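
As a hedged illustration of how the first QFD matrix turns the "voice of the customer" into ranked design priorities, the sketch below weights invented customer requirements against invented design requirements; none of the names or numbers come from the text.

```python
# Toy house-of-quality core: technical importance of each design requirement
# = sum over customer needs of (need importance) x (relationship strength).

customer_importance = {"warm house": 5, "low heating bill": 4, "quiet rooms": 3}

# relationship strengths: 9 = strong, 3 = medium, 1 = weak
relationships = {
    "wall insulation":    {"warm house": 9, "low heating bill": 9, "quiet rooms": 3},
    "thermopane glazing": {"warm house": 3, "low heating bill": 9, "quiet rooms": 9},
    "door sealing":       {"warm house": 3, "low heating bill": 3, "quiet rooms": 1},
}

scores = {design: sum(customer_importance[need] * strength
                      for need, strength in rel.items())
          for design, rel in relationships.items()}

for design, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(design, score)   # wall insulation 90, thermopane glazing 78, ...
```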

2.1.3 Design Objectives

Quality function deployment is at its best when the aim is to satisfy a prescribed set of customer needs in the face of rather well-defined competitors' capabilities. Design objectives flow somewhat naturally from specific performance targets. On a broader level, however, general guidelines have emerged and are gaining widespread acceptance among design teams across a variety of industries.

Design for Operations: Guidelines

Design lore is replete with examples of what have come to be known as DFX criteria: design for X, where X can mean quality, reliability, or other desired ends. Sometimes product designers overlook the realities of frontline operations: the moment of truth with an unpredictable customer, or the many sources of surprise, variation, agony and error in operations. The design team may be able to avoid some of these pitfalls by following design-for-operations guidelines, which have evolved from the work of professors Geoffrey Boothroyd and Peter Dewhurst. As a result of this work, tens of thousands of design engineers have studied design for manufacture and assembly (DFMA). Although the DFMA guidelines were aimed at manufacturing, they have proven general enough to apply well to services; thus, we use the more general term design for operations (DFO).

EXHIBIT 2.5 Design for Operations Guidelines

General Guidelines

1. Design to target markets and target costs.

2. Minimize the number of parts and the number of operations.

Quality Guidelines

3. Ensure that customer requirements are known and design to those requirements.

4. Ensure that process capabilities are known and design to those capabilities.

5. Use standard procedures, materials and processes with already known and proven quality.

Operability Guidelines

6. Design multifunctional/multiuse components, service elements and modules.

7. Design for ease of joining, separating and rejoining, and for ease of coupling/uncoupling services.

8. Design for one-way assembly, one-way travel.

9. Avoid special fasteners and connectors and off-line or misfit service elements.

10. Avoid fragile designs requiring extraordinary effort or attentiveness – or that otherwise tempt substandard or unsafe performance.

The first two guidelines are general in that they have wide-ranging benefits.

Target Markets and Target Costs. Customer and market representatives bring sales targets, profit data and competitors' pricing policies to the team.

A newer tool approaches costs from a different yet complementary view. By applying the concepts and mathematics associated with the Taguchi loss function, the team’s engineering and operating experts are able to project savings resulting from designs that reduce product or process variation. Taguchi’s broad concept includes losses to society, not just to the provider or the customer.
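
The Taguchi loss function itself is commonly written L(y) = k(y − m)², where m is the target value of a quality characteristic and k converts squared deviation into money. A minimal sketch, with invented numbers:

```python
# Quadratic Taguchi loss: cost grows with the square of deviation from target.

def taguchi_loss(y, target, k):
    return k * (y - target) ** 2

# Calibrate k from one known point: a 0.5 mm deviation is known to cost $8
k = 8 / 0.5 ** 2
print(taguchi_loss(10.2, target=10.0, k=k))  # unit at 10.2 mm -> $1.28 loss
```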

Minimize Parts and Operations. The Boothroyd-Dewhurst methodology focuses especially on this guideline: minimizing the number of parts or, outside of manufacturing, the number of operations.

The next three guidelines pertain to quality: quality requirements of the customer, quality capabilities of internal and external processes, and use of standardization to make quality easier to deliver.

Customer Requirements. Guideline 3 calls for the design team to discover customers' precise requirements and to keep abreast of changes during the design project. Requirements may specify physical characteristics, operating parameters or processing needs. The design team must be clear on the matter because one of its jobs is to transform requirements into specifications and tolerances.

Good suppliers, inside or outside the firm, do whatever they can to find out their customers' real requirements, avoid misunderstandings and make their customers look good. When communications are good, design tips flow both ways, and sometimes it's the customer who makes the supplier look good.

Process Capabilities. Guideline 4, designing to process capability, affects the design team in two ways. First, the team is held responsible if the design cannot easily be delivered or produced using available processes. Second, in being held responsible the design team must become familiar with process capabilities, which usually are measurable to some degree.

Capability measures might include years of experience, amount of cross-training, and educational attainment of associates; documentation of procedures; safety devices in place; low equipment failure rates; and the ability to achieve and hold tolerances. The latter may be measured using the process capability index, which has become important in manufacturing in recent years.
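
The process capability index referred to here is commonly defined (this definition is standard practice, though not spelled out in the text) as the distance from the process mean to the nearer specification limit, measured in units of three standard deviations:

```python
# Cpk: min distance from process mean to a spec limit, over 3 sigma.
# A value of 1.33 or more is a common benchmark for a capable process.

def cpk(mean, stdev, lsl, usl):
    return min(usl - mean, mean - lsl) / (3 * stdev)

# Invented example: tolerance 10.0 +/- 0.3 mm, process at 10.05 mm, sigma 0.08
print(round(cpk(10.05, 0.08, 9.7, 10.3), 2))  # 1.04 -> marginally capable
```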

Standard Procedures, Materials, and Processes. This guideline advises designers to favor standard procedures, materials, and processes. Related to standardization are questions about creativity, satisfying customer needs for variety, and new opportunities for global marketing. Let's briefly discuss each of these issues.

Standardization. Nonstandard designs are risky because of lack of knowledge about their performance. Xerox found this out the hard way. One consultant observed that, for lack of competition, the company’s large staff of bright engineers developed machines with incredibly complex technology. Design complexities led to costly field service to make copiers work properly. High costs opened the door to competition, which actually was good for Xerox.

Standardization and Personalized Design. Standardization may be the only way to make personalized design profitable: carefully design a small number of standard elements that can be delivered in volume at low cost, and have the flexibility to quickly customize them right in front of the customer.

Standardization and Globalization. Taken in conjunction with guideline 2, this guideline has strategic implications. As goods and services are designed with fewer, more standardized components and operations, costs go down and quality becomes more dependable. In turn, this increases their appeal, sometimes to the point where people around the globe know about and want the item.

Multifunctional/Multiuse Elements and Modules. The do-it-yourself industry is alive and thriving. Buy some plumbing modules, shelving components, or mix-and-match clothing and combine to taste. Good design in accordance with this guideline makes it possible.

Ease of Joining/Separating, Coupling/Uncoupling. Push, click, snap, whirr. That's the sound of a modern keyboard, clock or auto dashboard being assembled. That's good until someone needs to take off a cover for repair, or until the junked unit gets to the recycler to be separated into reusable materials. Today's designer needs extra ingenuity to make disassembly and separation as easy as push-and-snap assembly.

One-Way Assembly and Travel. Who hasn't had to stand in one line for a certain service element, wait in another line for the next element, and then later go back to the first line? Guideline 8 aims at avoiding that kind of backtracking and, in manufacturing, is helping to revitalize some assembly plants.

Avoid Special Fastening and Fitting. This guideline avoids special steps. In manufacturing, the guideline applies especially to connectors and fasteners.

Avoid Fragile Designs. Tendencies or temptations to take unsafe shortcuts, to be careless with sensitive equipment, to be brusque with customers, to steal, or otherwise to misperform are partly avoidable by using designs that make such tendencies difficult.

One approach is to design controls into the process. Examples: designing the process to maintain strict segregation of personal and business possessions, clearly labeled locations for all files and materials, easy access to backup help, and safety gates to keep associates from blundering into unsafe areas.

Another approach is the use of robust design concepts. Examples: shatter-resistant glass, a waterproof watch, carpeting that comes clean even if smeared with black grease, and keyboards you can spill Coke on.

Design for Reliability and Serviceability

The DFO quality guidelines, especially guideline 3, are deliberately quite broad – they must encompass diverse customer needs, and they must apply to both product and process quality.

Reliability is the probability that an item will function as planned over a given time period. It may be calculated as follows:

R = e^(-λt)

where:

R = reliability, a value from 0 to 1.0

e = the base of natural logarithms

λ = a constant failure rate

t = a specified point in time
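
A quick worked example of the formula, with invented figures:

```python
# R = e^(-lambda * t): probability of surviving to time t with a constant
# failure rate. The figures below are invented for illustration.
import math

def reliability(failure_rate, t):
    return math.exp(-failure_rate * t)

lam = 0.2 / 1000                         # 0.2 failures per 1000 operating hours
print(round(reliability(lam, 2000), 3))  # 0.67 -> 67% chance of no failure
```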

Serviceability is a bit harder to define; it means different things to various constituencies. One common thread, however, is the degree to which an item may be maintained – either kept in service through preventive maintenance or restored to service after a breakdown. Popular measures related to serviceability include:

Failure rate, denoted by the Greek letter lambda (λ), is the average number of times an item is expected to fail within a given time period. As we saw in the equation above, lambda is the critical determinant of reliability.

Mean time between failures (MTBF) is the average operating time between breakdowns. Mean time to repair (MTTR) is the average time required to repair an item, assuming that appropriate parts and sufficient expertise are available. In some circles, MTTR is used almost synonymously with serviceability.

Availability is the proportion of time that a resource is ready for use. One version of availability considers only designated operating time and combines MTBF and MTTR:

A = MTBF / (MTBF + MTTR)
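
A worked example, again with invented numbers:

```python
# A = MTBF / (MTBF + MTTR): share of designated operating time the
# resource is actually available. Numbers invented for illustration.

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A machine averaging 400 h between failures and 8 h per repair:
print(round(availability(400, 8), 3))  # 0.98 -> available ~98% of the time
```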

Design for Automation

Boothroyd and Dewhurst’s operability guidelines – 6 to 10 – are particularly useful as design teams cope with automation, a topic of continuing concern for operations managers. Three broad issues, affecting output and process design efforts, come into play:

Wasteful and unnecessary processes should not be automated. Clean up the former and eliminate the latter before considering automation.

The theme of the five guidelines is simplification, for good reason: the simpler the task, the easier it will be to design equipment to do it. Development will also be faster and cheaper.

When design teams strive for an easy-to-automate design, they sometimes get an unexpected dividend: following the operability guidelines might simplify the process to such an extent that the firm can avoid the time and expense needed to acquire and install the automation.

Design for the Environment

An unfortunate consequence of the operations that create our goods and services is that they also create unwanted outputs – especially waste and pollutants. Industry is not the only polluter, of course, but it is a major one. Fortunately, we can do something about that, and the effort begins in design. Design for the environment (DFE) – another of the DFX set of acronyms – refers to steps taken during the design process to minimize the negative environmental impact of output goods and services and of the processes that provide them.

One approach is to incorporate into designs and agreements specifications that define suppliers' environmental responsibilities. Increasingly, the world's large OEMs are requiring their suppliers to register to ISO 14000, the environmental management standards defined by the Geneva-based International Organization for Standardization (ISO). In the fall of 1999, for example, Ford and General Motors announced their timetables for expected supplier registration. Though this may sound a bit dogmatic, we should note that the objective is for all companies to create, enunciate, and activate their own environmental management system (EMS), a key requirement for obtaining ISO 14000 registration.

Design Review and Appraisal

Design and development is a loop. Preliminary designs are critiqued, improved, critiqued again, improved again, and so on. This commonly continues after the product designs are in production or the services are being delivered, and customers are sampling the results. The inescapable questions remain: "Has quality been designed in?" and "How do we know?"

We don't know. To find out, the extended design team must listen to customers, keep track of competitors and, when necessary, modify the design. External feedback begins to reach the design team as soon as the first customers begin to work with the design. Quite often, these first customers are operations personnel. Later, marketing and advertising associates get their turn; they must present the new designs to external customers – the final arbiters. Systematic, measurement-based design checking all along the chain provides a near-constant flow of design appraisal data.

Lately, the list of those who evaluate designs, the criteria they use and their sophistication all seem to be expanding. Several trends bear watching. First, businesses are increasingly aware of the financial impact of design on short- and long-term operations and marketing. As such, many are turning to outside expertise – outsourcing their design work; providing design services is a booming business. Second, customers all along the chain have become more willing to sound off about design. They know more about design, are more demanding of design excellence, and appear to be less intimidated by so-called experts who fail to listen. Perhaps it's fair to say that design appraisal has gone public. Finally, customers not only want design teams to do more – to cover more territory, so to speak – they also want to help.

2.2. The Quality Imperative

2.2.1 Quality: A Broad View

Does the word quality denote a desirable characteristic in output goods and services? Or, does it describe processes that make and deliver those outputs in ways that please customers? Or, especially when appended with the word management and preceded by the word total, does it refer to an even bigger picture – an overall approach to running organizations? The answers are yes, yes, and yes. The quality concept is both comprehensive and complex.

Quality terminology. Today, it's hard to find a business that doesn't have some manner of formal program for ensuring quality in the goods or services it provides. Hospitals, schools, universities and government agencies have also joined the movement. Unique names, acronyms, logos, and company- or customer-specific jargon abound. Thousands of publications on some facet of quality have appeared since the early 1980s.

Although consumers welcome the widespread attention to quality, some observers lament the lack of a clear definition. For example, Philip Crosby and the late W. Edwards Deming, both respected pioneers of the quality movement in the 20th century, avoided the term total quality management (TQM), arguing, respectively, that it lacks clear definition or even meaning. Joseph M. Juran, another respected voice in the quality field, says that part of the problem stems from a failure to distinguish quality goals from the steps taken to reach those goals; that is, quality and total quality management need to be defined separately. Let's consider quality first.

Quality. As the quality movement has evolved, so has the definition of quality. To keep demanding customers happy, businesses have expanded the concept of quality and at the same time have improved their ability to deliver on a wider array of quality dimensions. The following example contains two itemized lists that exemplify this broadened view. The first was proposed for services; the second is more goods oriented. Despite differences in wording, the lists share two characteristics:

Both reflect how customers think about quality.

Both suggest action – things managers at all levels need to address if quality is to happen.

Example. Dimensions of Quality

10 Dimensions of Service Quality

Access – approachability and ease of contact.

Courtesy – politeness, respect, consideration for property, clean and neat appearance.

Communication – educating and informing customers in language they can understand; listening to customers.

Reliability – consistency of performance and dependability.

Responsiveness – willingness or readiness to provide service; timeliness.

Competence – possession of the skills and knowledge required to perform the service.

Credibility – trustworthiness, believability; having the customer's best interest at heart.

Security – freedom from danger, risk, or doubt.

Understanding – making an effort to understand the customer's needs; learning the specific requirements; providing individualized attention; recognizing the regular customer.

Tangibles – the physical evidence of service (facilities, tools, equipment).

8 Dimensions of Quality

Performance – primary operating characteristics.

Features – little extras.

Reliability – probability of successful operation within a given time span.

Conformance – meeting preestablished standards.

Durability – length of usefulness, economically and technically.

Serviceability – speed, courtesy, competence, and ease of repair.

Aesthetics – pleasing to the senses.

Perceived quality – indirect evaluations of quality (e.g., reputation).

Total Quality Management (TQM). Juran provides a straightforward yet very inclusive definition of TQM: those actions needed to get to world-class quality. The word total is a contribution of Armand Feigenbaum and the late Kaoru Ishikawa – two additional respected quality pioneers. In top organizations, quality management is no longer treated as a staff responsibility or functional specialty tucked away somewhere behind a door labeled "Inspection Department." Instead, it is everybody's business, a total commitment – organizationally, as a competitive requirement; collectively, as people pool their skills and special talents as members of improvement teams; and singly, as each individual performs job tasks.

TQM in Practice

A broad view further manifests itself in the multitude of programs, techniques, and tools implemented under the banner of TQM and its close cousins. At about the same time some leading Western companies were fashioning their TQM agendas in the early 1980s, others were placing equal or greater emphasis on just-in-time (JIT) operations. From its inception in Japan, JIT had a strong quality improvement component in addition to its main emphasis on cycle-time reduction. Today, most competitive organizations embrace and extend that notion; they include benchmarking, reengineering, supplier development, total preventive maintenance, quick-response programs, and a host of team-related tools, along with the more "hard-science" quality management techniques such as statistical process control, design of experiments, and scientific problem solving.

Research evidence also supports a broad description of TQM. In one recent study, for example, researchers grouped quality management into four broad dimensions – relationships with suppliers, relationships with customers, product design, and transformation processes. They found that all four dimensions were important contributors to high quality performance.

As they will with any management initiative, however, some people prefer to focus their attention on flaws or failures in TQM implementations. And indeed, there have been many instances in which tools that might be considered part of a TQM program haven't worked out. How can we know whether the application of a new tool or procedure, training program, or process change is sound TQM? GEC Plessey Semiconductors (GPS), headquartered in Wiltshire, England, answers that question with a simple test based on three fundamentals: customers, teamwork, and improvement. GPS personnel evaluate all company programs on the seven dimensions shown in the following example:

Example: GEC Plessey Semiconductors’ TQM Test

Is it TQM?

Is there a clear link to customers?

Is there a clear link to company objectives?

Have improvement measures been defined?

Are managers and employees involved as a team?

Is the team using a TQM process and tools?

Is the team accomplishing and documenting its work?

Are team decisions derived from data?

2.2.2 TQM: History and Heritage

Quality Assurance

Interest in quality is centuries old. The Code of Hammurabi mandated death for any builder whose work resulted in the death of a customer. Other quality-related codes, often equally harsh, are found in the writings of the ancient Phoenicians, Egyptians, and Aztecs. Despite the harsh codes, however, it was the artisan’s pride, not fear, that contributed most significantly to supporting quality assurance for centuries to come.

Industrial Revolution

Long supplier–customer chains are a product of the Industrial Revolution. In order for the masses to enjoy a wider array of goods, production costs had to decrease. Demand and output increased as production costs decreased. Labor became specialized and disconnected from the big picture. Production people focused on quantity rather than quality; the reward system supported such behavior. In an attempt to stem the tide of deteriorating quality, managers assigned inspectors to check the work of line employees. Inspection, however, merely became another job specialty. Inspectors were unable to improve production quality; they could only find and remove some of the bad output after it had been produced. In many companies, quality fell apart and customers were angry.

Consumerism and Liability Laws

Product quality and safety began to capture public attention in the mid-1960s. Concern for safe consumption of goods and services also led to regulatory action. In 1972, the U.S. Congress passed the Consumer Product Safety Act, which aims at preventing hazardous or defective products from reaching the consumer. Most other Western countries have followed the same pattern. Some companies extended their product warranties, but well-publicized product recalls seemed to say that quality on the warranty paper was not quality in fact. At the same time a new and demanding type of consumer emerged: the consumer of average means, who prefers to do without rather than pay for second best.

The quality imperative is also rooted in the experiences, research and writings, and teachings of modern-era quality pioneers, some of whom we’ve already mentioned.

Quality Pioneers of the 20th Century

As the concepts and practice of total quality continue to evolve, the contributions of other leaders of the quality movement will emerge, but for now the work of six individuals stands out: W. Edwards Deming, Joseph M. Juran, Armand V. Feigenbaum, Kaoru Ishikawa, Philip B. Crosby, and Genichi Taguchi.

Though known, respectfully, as quality gurus, their thinking and influence extend well beyond the management of quality alone. They all speak of:

Company-wide integration of purpose – a shared culture manifested by a top-down commitment to quality that is embraced at all levels.

High regard for humans, as individuals and as vital components of teams.

Continuous improvement in all facets of operations – a never-ending program of looking for problems, then finding and eliminating the root causes of those problems.

Widespread service to all segments of society through sharing of total quality management ideas, programs, data and results.

Quality Pioneers – Major Contributions

W. Edwards Deming (1900 – 1993)

14 points for obtaining quality

Plan-Do-Check-Act cycle for continuing improvement

Ardent support for training and data-based problem solving

Joseph M. Juran (1904 – )

Editor-in-Chief, The Quality Handbook

Managerial Breakthrough

The quality trilogy – planning, control, and improvement

Armand V. Feigenbaum (1920 – )

Concept of total quality control

Clarification of quality costs – those associated with poor quality

Concept of the „hidden plant” – plant capacity required for rework

Kaoru Ishikawa (1915 – 1989)

Registered the first quality control circle (in 1962)

Cause-effect diagrams (fishbone charts)

Elemental statistical methods – simple but effective tools for data-based decision making

Philip B. Crosby (1926 -)

Concept of zero defects as the only acceptable quality goal

Published Quality Is Free; argued that lack of quality is what costs

Defined quality as meeting customer requirements.

Genichi Taguchi (1924 – )

Simplified pathway for greater efficiency in experimental design

Robust design to withstand rigors of production and customer use

Quality loss function; the idea that any deviation from the target value of a quality characteristic costs society in some way.

Operation Management and Total Quality Management: Contemporary Interfaces

Reflection on the concepts and heritage of TQM along with the principles of OM might prompt the question, „Isn’t there a great deal of common ground?” Indeed there is, and the flow of influence goes both ways. Scott Mitchel, director of operations for Troy, Michigan–based Delphi Automotive, puts the operations/quality interface into perspective: „…the major quality challenge facing the industry is speed – how to get the product to the consumer at a very rapid pace. Automakers are operating with shorter supply chains, or at least quicker ones, and they are putting a lot of emphasis on less inventory, lower costs and lean manufacturing… But when you reduce the supply base and shorten the reaction time, you cannot tolerate products that are inferior in quality.”

Operations in any organization contain primary transformation processes – key targets for obtaining built-in quality improvement. These same processes, however, are also targets for other changes such as increased productivity, greater flexibility or faster throughput. A change in methods, though initially intended to improve output timing, may surprise implementers by also yielding better output. Rather than quietly accepting the „good luck”, today’s enlightened firm will eagerly seek to understand the cause of the unforeseen improvement. A second round of changes, perhaps with a quality improvement aim this time, may also result in greater speed or in some other desired outcome.

2.2.3. TQM and Competitiveness

Dictionaries tell us that competition is effort expended by two or more parties to win the favor of a target individual or group.

In the late 1980s, Gabriel Pall, director of the IBM Quality Institute and former line manager, described two pathways through which improved quality enhances a company’s profitability. Market-route benefits begin when improved quality increases the product’s value in the eyes of customers. The provider may raise prices or – by holding prices steady – realize a gain in market share; revenue increases in either case. Cost-route benefits accrue because increased defect-free output cuts operating costs per unit, and lower costs also enhance profits.

Profitability is one time-honoured indicator that customers as a whole are satisfied with a company’s output. Demonstrating linkages from better quality to profits might seem a waste of time to the TQM believer, but it is often a necessary step in convincing budget-minded managers to cough up funds for process improvements.

Cost of Quality

Perhaps the earliest method of trying to incorporate costs of quality into modern managerial decision making was known simply as the cost-of-quality approach. Four categories of quality-related costs were identified (a brief tallying sketch follows the list):

Internal failure costs. Costs the provider incurs directly – prior to delivery or shipment to customers – as a result of defective output. Examples are scrap, rework, retest, downtime, and searching for something misplaced.

External failure costs. Costs to the provider when defects are discovered after delivery or shipment to customers. Included are returns, warranty expenses, allowances, returned material handling, complaint processing and service recovery. In extreme cases, liability settlements and legal fees would be included.

Appraisal costs. Costs of determining the degree of quality. They include monitoring, materials inspection and testing, maintenance of test equipment, and materials and other resources consumed during inspection and testing.

Prevention costs. Costs of efforts to minimize appraisal and failure costs. They include quality planning, training, new-product reviews, reporting, and improvement projects.
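To make the four categories concrete, here is a minimal Python sketch that tallies quality-related costs and expresses them as a share of revenue. Every figure in it is an assumption for illustration, not data from any company discussed here.

```python
# Cost-of-quality tally -- all figures are hypothetical.
quality_costs = {
    "internal failure": 420_000,  # scrap, rework, retest, downtime
    "external failure": 310_000,  # returns, warranty, complaint processing
    "appraisal":        180_000,  # inspection, testing, test equipment
    "prevention":        90_000,  # planning, training, improvement projects
}
annual_revenue = 8_000_000  # assumed

total = sum(quality_costs.values())
for category, cost in quality_costs.items():
    print(f"{category:>16}: ${cost:>9,}  ({cost / total:5.1%} of quality costs)")
print(f"Total cost of quality: ${total:,} = {total / annual_revenue:.1%} of revenue")
```

With these assumed numbers the total comes to 12.5 percent of revenue – squarely within the 10 to 20 percent range cited below.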

In sum, the term cost of quality is itself misleading. Taguchi’s quality loss function, shown in the following example, correctly points out that the worrisome costs are those associated with not having quality. Furthermore, Taguchi’s view accommodates a much broader concept of costs – he considers not just costs to a provider but costs to all of society.

Basically, Taguchi holds that unwelcome costs to some segment of society – producer, customer, end consumer, or even society at large – occur with any deviation of process performance from the intended or designed target. The smaller the amount of this deviation, the smaller the social costs and the more valuable the product or service.
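The standard form of Taguchi’s loss function is L(y) = k(y − T)², where T is the target value and k is a constant often calibrated from the known cost of a unit sitting right at a specification limit. The Python sketch below uses assumed numbers to show how the loss grows quadratically with deviation, even for units still inside the spec.

```python
# Taguchi quality loss function: L(y) = k * (y - T)**2
# All numbers are assumed for illustration.
T = 10.0             # target value of the quality characteristic
spec_limit = 10.5    # specification limit
cost_at_limit = 4.0  # dollars of societal loss at the spec limit (assumed)

k = cost_at_limit / (spec_limit - T) ** 2  # calibrate the loss coefficient

def loss(y: float) -> float:
    """Estimated societal loss for one unit measuring y."""
    return k * (y - T) ** 2

for y in (10.0, 10.1, 10.25, 10.5):
    print(f"y = {y:5.2f}  ->  loss = ${loss(y):.2f}")
```

A unit at the target incurs no loss; a unit at the spec limit already costs the assumed $4.00, and the loss climbs with the square of the deviation in between.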

In the early stages of their TQM efforts, quality advocates in a number of well-known firms used the cost-of-quality argument for shock value. Cost-minded senior managers were often startled to learn that „costs of un-quality” in their companies were 10 percent to 20 percent of annual revenue. When other arguments for managerial commitment to quality improvement programs failed, the cost-of-quality speech often got results.

Should firms that already have thriving TQM programs continue doing annual cost-of-quality audits? Probably not. Consider, for example, a process improvement that prevents defective output. In a TQM company, it’s a better-than-even bet that the change has other benefits: perhaps it results in faster or better engineering, reduces cycle times, or improves safety. Is the expense of the change a cost of quality? Or of engineering, production or employee safety? Under TQM, quality is everybody’s business; it’s woven into the fabric of every job. So, the amount spent to achieve quality is difficult to state precisely. But even if we could find it, it isn’t a cost we want to eliminate.

Value of Quality

As TQM gained momentum, companies began to pour large sums into its implementation. Not surprisingly, critics raised two good questions: „Is quality being improved?” and „If so, are the improvements contributing to the bottom line?” We address below a few of the studies that have been made.

The General Accounting Office, the investigative arm of Congress, studied data from finalists in the 1988 and 1989 Malcolm Baldrige National Quality Award competitions to ascertain whether TQM improved performance. The results show that, after beginning TQM programs, quality-oriented companies experienced general improvements in market share, profitability, customer satisfaction, quality, costs, and employee relations.

Additional survey data, including the widely quoted PIMS (Profit Impact of Market Strategy) studies, also show positive effects from quality improvement efforts.

In 1995, the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, created the „Baldrige Index”, a mythical stock fund made up of publicly traded companies that have won the Baldrige Award.

Professor James L. Heskett and a team of colleagues, all members of Harvard Business School’s service management interest group, studied successful service companies, including Banc One, Intuit, Southwest Airlines, Taco Bell, and MCI. The researchers place special emphasis on the roles played by the human elements – customers and service providers’ employees. Working backwards along what they refer to as the service-profit chain, they suggest that profit and growth are stimulated primarily by customer loyalty, which in turn flows from the satisfaction of customers served by satisfied, loyal, and productive employees.

In conclusion, there appears to be ample evidence of a quality-competitiveness linkage. Customers acknowledge quality with their loyalty, but quality can also lead to more formal recognition.

Recognizing Quality

As the quality imperative caught on, the scramble to discover and recognize excellence in quality was on. Benchmarking, supplier certification programs, ISO 9000 registration, and competition for quality awards have all contributed to global definitions of quality and quality management. They all serve to recognize quality.

Benchmarking

Benchmarking is the systematic search for best practices, from whatever source, to be used in improving a company’s own processes. The technique was first used and developed at Xerox Corporation in the late 1970s. At first, Xerox people called it competitive benchmarking. As the name suggests, they limited its application to finding their direct competitors’ best practices. Xerox benchmarking teams boldly contacted competing manufacturers of copiers, computers, and other Xerox products.

Now, benchmarking is in wide use by major hotels, accounting firms, transportation companies, banks, manufacturing companies, and others. For example, Marriott Hotels have benchmarked the hiring, training, and pay practices of fast-food companies because hotels hire out of the same labor pool. In the same way, corporate attorneys at Motorola have employed benchmarking.

With so many firms involved in benchmarking – and trying to visit some of the same high-performance firms – the idea of putting benchmarking data into data banks arose.

Xerox divided its initial benchmarking procedure into 10 steps. First comes planning and organization. The next step is all-important: selecting the process to be benchmarked and the team members. However, the team should not immediately set off to benchmark another company. First it needs to analyze its own processes, in the following terms: metrics, measurements, and practices.

The third step is collecting information on whom to benchmark and what questions to ask. The fourth is to gain approval and establish plans for exchange visits.

Fifth is the benchmarking itself, including a visit to the benchmarked firms’ sites. Last, the benchmarking team analyzes the data, develops plans for change, and follows through.

Benchmarking has spread from its North American origin to many corners of the globe, and like other tools, it has been improved. Benchmarking teams may now tap computer networks and the clearinghouse database for much information, but the site visit remains popular.

The overall benchmarking task really has two major components: a user process and a management process. The former consists of the steps that actually make up the benchmarking study. The latter is much broader, containing all those actions that support the user process before, during, and after the actual investigation. A „step zero” is added to the original benchmarking procedure to ensure consensus on key facets of benchmarking before it is begun. Benchmarking has greater success when launched in an environment already steeped in TQM philosophy and procedures.

Problem-based benchmarking – reaction to a specific trouble spot – was the right approach for early benchmarking efforts. Now, leading-edge firms realize that process-based benchmarking, which targets the key businesswide processes contributing most to company goals, offers greater payback.

Supplier Certification

We’ve seen that the third benchmarking step asks, „Who’s the best?” The team’s purchasing associate might suggest taking a look at the company’s own certified, high-quality suppliers.

Traditional assessments of supplier performance and capability were not stringent enough for TQM-driven companies. A quality-centered approach, called supplier certification, fills the need. Receiving an important customer’s highest certification is grounds for celebration at any supplier company. But someone should note that awards can also be lost if improvement does not continue. Other customers’ certifications are the next challenges. Marketers – always on the lookout for a competitive edge – quickly insert certification information into promotional materials.

The growth of certification programs parallels another, related trend: reduction in the number of suppliers. That movement has picked up steam as more companies, service as well as industrial, see the competitive advantages of dealing with but a few good suppliers. Which suppliers do they keep? Those that can meet their quality certification requirements.

Suppliers may grumble about coercion, but those that are certified by big customers may find that the effort has an extra payoff when the time comes for them to seek registration to the ISO 9000 standard.

Example: ISO 9000 Series Standard – 2000 Format

Standard Title

ISO 9000 Quality Management System – Fundamentals and Vocabulary

ISO 9001 Quality Management System – Requirements

ISO 9004 Quality Management System – Guidelines for Performance Improvements

ISO 9000 Series Standards

The ISO 9000 Standard is actually an umbrella name for three separate but related quality standards originally published in 1987 by the International Organization for Standardization, based in Geneva, Switzerland. Though support has been particularly strong within the European Community, its use is global.

Many companies require their suppliers to register to ISO 9000 and expect those suppliers to, in turn, require their own suppliers to register as well. As customers, their rationale is understandable: the quality imperative demands reliable suppliers. Under the ISO 9000 scheme, a company (or a division or plant within a company) arranges to have its quality systems documentation and implementation audited by an independent accredited registrar. The phrase „third-party registration” is used to refer to this objective assessment. The particular role to be played by a supplier determines which portion of the standards must be met.

If the quality systems – specifically, the plan, the implementation, and the documentation – are in order, the company is registered and is permitted to advertise that fact in its promotional materials and other documents. The registrar continues to survey the supplier and makes full reassessments every three or four years.

Is registration to ISO 9000 standards the ultimate quality performance achievement? No. As mentioned earlier, the standard does not certify quality of goods and services, but rather registers the existence of proper quality plans, programs, documentation, data, and procedures. Some customers may wish to probe deeper and require additional assurances of quality, but they might opt not to bother with a supplier that balks at going for ISO registration. Another supplier will want the business.

In the final analysis, registration is a form of service to customers. And managers who have taken companies through ISO 9000 registration offer another perspective: the only approach to ISO 9000 registration that works is to improve the company’s quality system for the benefit of those who function within it; ISO 9000 registration is a by-product of quality system improvement.

Deming Prize and Malcolm Baldrige National Quality Award

Until the late 1980s, the Deming Prize, named after W. Edwards Deming and administered by the Union of Japanese Scientists and Engineers, was the only quality award of note. First presented in 1951, the Deming Prize was largely unappreciated outside Japan for over 30 years. As the quality of Japanese goods caught the world’s attention in the 1970s and 1980s, however, the Deming Prize – along with a wealth of Japanese insight into quality philosophy and technique – gained international acclaim. It is the esteemed grandfather of other quality awards, including the Malcolm Baldrige Award in the United States.

On August 20, 1987, President Ronald Reagan signed Public Law 100-107, the Malcolm Baldrige National Quality Improvement Act. Named after the late secretary of commerce, the legislation reflected a growing belief that the U.S. government should take a more active role in promoting quality, and it established the Malcolm Baldrige National Quality Award (MBNQA) to recognize total quality management in American industry.

The annual awards for businesses were first given in 1988 and may go to no more than two winners in each of three categories – manufacturing, services, and small businesses. Many more companies ask for award applications than actually apply. In 1991, for example, 235,000 applications were sent out, but only 106 companies applied. Further, only 10 percent of the applicants merit a site visit by an examination team.

Today, many companies have accepted the premise that self-assessment has rewards of its own.

Although the basic structure, intent, and procedure associated with the Baldrige Award remain largely as originally designed, the award continues to evolve. The heart of the award is reflected in the core values and concepts, which, in turn, signify evolving ideas about quality of outputs and processes. The core values and concepts in the 2000 criteria were:

Visionary Leadership

Customer Driven

Organizational and Personal Learning

Valuing Employees and Partners

Agility

Focus on the Future

Managing for Innovation

Management by Fact

Public Responsibility and Citizenship

Focus on Results and Creating Value

Systems Perspective

The core values and concepts are embodied in seven broad categories containing 19 examination items. The following example shows the 2000 categories and items along with maximum point scores, reflecting the relative weight given to each item during scoring. J.M. Juran has expressed the opinion that the Baldrige Award criteria are the most comprehensive available list of actions needed to improve quality.

Example: MBNQA Examination Criteria – 2000 (maximum points per item)

1. Leadership 125

1.1. Organizational Leadership 85

1.2. Public Responsibility and Citizenship 40

2. Strategic Planning 85

2.1. Strategy Development 40

2.2. Strategy Deployment 45

3. Customer and Market Focus 85

3.1. Customer and Market Knowledge 40

3.2. Customer Satisfaction and Relationships 45

4. Information and Analysis 85

4.1. Measurement of Organizational Performance 40

4.2. Analysis of Organizational Performance 45

5. Human Resource Focus 85

5.1. Work Systems 35

5.2. Employee Education, Training, and Development 25

5.3. Employee Well-Being and Satisfaction 25

6. Process Management 85

6.1. Product and Service Processes 55

6.2. Support Processes 15

6.3. Supplier and Partnering Processes 15

7. Business Results 450

7.1. Customer-Focused Results 115

7.2. Financial and Market Results 115

7.3. Human Resource Results 80

7.4. Supplier and Partner Results 25

7.5. Organizational Effectiveness Results 115

(Total: 1,000 points)

As with the Deming Prize, the Baldrige has its critics. For example, some people lament the marketability of success that comes from winning the award, or feel that the Baldrige Award mandate for winners to share the secrets of success with other companies is unrealistic.

Winning a quality award is not the signal to relax. In 1990, consultant Richard Dobbins participated in a study of several Deming Prize-winning companies in Japan. The study team members were especially impressed by auto parts supplier Nippondenso. Its proud, highly involved workforce had thoroughly mastered the tools of process improvement, and the company made a lot of money and rarely produced a defective part. Dobbins said to a Nippondenso plant manager, „It’s easy to see how you won a Deming Prize with a management system like this.” The manager replied, „We won our Deming Prize in the 1960s; all of this we have learned since then.”

2.2.5. Employee-Driven Quality

Implementation has an essential role in quality, and the need for broad-based human involvement and commitment is clear. Gaining that commitment requires action on three fronts:

Training. Everyone needs training in the tools of continuous improvement, problem solving, and statistical process control. In addition, people require training in job skills, plus cross-training for an understanding of the bigger picture.

Organization. People need to be put into close contact with customers and suppliers. This calls for organization of multifunctional customer-, product-, or service-focused cells, teams and projects.

Local ownership. The management, control, and reward systems need to be realigned with the goals of employee- and team-driven, customer-centered quality and continuous improvement.

Time Out for Training

Quality is free, Philip Crosby says. It pays its own way – but not without an upfront investment. The investment is for training, the essential catalyst for action.

Amid all the evidence that businesses have taken the quality imperative to heart, the elevated commitment of certain firms to quality-oriented training and cross-training stands out.

Lack of training deters teamwork because, at least in Western cultures, people do not seem to be naturally team oriented.

Getting Organized – Team Formats

Team organization comes before team building. The first priority is getting the right people on the team.

Quality Circles. One useful kind of team is the quality circle (popularized by Kaoru Ishikawa in Japan, where it is called a quality control circle). Japan’s QC circles contributed as many as 100 times more suggestions per employee than Western companies could elicit. The results were favourable, but not much more so than certain other programs, such as suggestion plans. It now seems clear that most quality circles were organized in a way that avoided a customer focus – that is, they excluded next processes.

Cells. This type of circle is hard to organize because it requires moving people and equipment out of functional departments. A few companies call it a natural team.

Document processing cells and manufacturing cells are rarely organized for the purpose of creating quality circles. Rather, they are formed to quicken response time, cut out many clerical activities and transactions, eliminate bulk handling across long distances, and slash inventories along with potential rework and scrap.

Teams. As 1980s circles did, teams often employ a facilitator, whose job may include keeping team meetings on track and providing training. One set of team training topics includes team dynamics, communications, and other behavioral matters, including how to interview and choose new employees and how to evaluate one another’s performance. Another set aims at general problem-solving tools, such as brainstorming, nominal group techniques, role-playing, and multivoting.

To some extent the manufacturing sector already has passed through the phase of teams focused primarily on human issues, as was generally the case during Western industry’s first attempts in the early 1980s to implement quality circles. By now, many manufacturers have learned to use the full potential of teams – for both human and quality improvement issues.

Local Ownership of Total Quality Management

As personal quality checksheets lead the way and teams follow, everyone needs to feel a sense of ownership – of control, of improvements, and of results. Local ownership also means less control from on high and fewer levels of management to review improvement proposals. Further, the company must shift toward rewarding specific results at local levels rather than general ones at high levels.

To support local ownership, managers need to be out of their offices and visible locally, where they admire control charts and process experiments, help remove obstacles, and pass out awards. When local ownership has truly taken root, the evidence is likely to include charts of all kinds – on walls, doors, and partitions – in the workplace rather than in managers’ offices.

2.3. Process Control and Improvement

The quality imperative is just that – a demand or mandate from customers. But quality becomes reality through the efforts of skilled people using the tools of their trades and motivated to provide that quality.

2.3.1. Improving Outputs and Processes

When people do apply their skills and motivation to the improvement of customer-serving processes, positive results can extend beyond quality improvement and reach into the larger set of customer wants.

Targets: Quality and …

Process outputs can be many and varied. Improvement efforts may target – and achieve – better quality, faster response, lower variability, better service or combinations of these and other customer wants.

Sometimes, in fact, the definition of quality is time: that is the case when we stand fidgeting impatiently in long lines waiting for service. Usually, however, the time-quality connection is more subtle than that. The following points explain this apparent contradiction:

Quick response. Improving quality eliminates delays for rework, process adjustments, and placating customers, thus providing quick response for a greater percentage of customers.

On – time. Quality the first time – every time – removes a major cause of delays, late completions, and unpredictability, thereby improving on-time performance.

Quick feedback. All efforts to cut out delays provide quicker feedback on causes of bad quality, allowing earlier process improvement efforts. Anything that reduces delays is a powerful technique for process quality improvement.

Enough time for quality. The time saved by removing delays and making quality right must not be squandered. It needs to be reinvested in training, design collaboration, inspection and on-the-spot correction, feedback and consultation with people in earlier and later processes, data collection, and improvement projects. If those activities are neglected, for example, under pressure for more output, quality suffers, and a chain reaction of delays and variations results in less output and slower response. These interlinkages – time, quality, and problems in general – suggest that improvement should be looked at broadly.

Source – Oriented Improvement Cycle

Design. Design a capable, fail-safe process, for the best approach is prevention. Process capability means being capable of meeting customer requirements or specifications. Since no design is perfect, add backup protection: fail-safe devices or procedures. The aim of fail-safing is to equip a process with features that prevent a mishap from going forward or even happening at all. For example, an invoice-payment computer routine won’t write a check for an out-of-bounds amount. If the process is not fail-safe, the next best response is self-inspection and correction. Each frontline associate receives authority to correct a problem, such as placating an angry customer on the spot, or to stop the whole production line to avoid making bad products. And every work group takes responsibility for correcting its own mishaps; no passing problems on to a separate complaint or rework department.
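The invoice-payment routine mentioned above suggests the flavor of a software fail-safe. Here is a minimal, hypothetical sketch in Python (the ceiling and function name are assumptions) that refuses to let an out-of-bounds amount go forward, rather than catching the error later.

```python
# Hypothetical fail-safe: block out-of-bounds payments at the source.
MAX_CHECK = 10_000.00  # assumed authorization ceiling, dollars

def write_check(amount: float) -> str:
    """Issue a check only if the amount passes the fail-safe test."""
    if amount <= 0 or amount > MAX_CHECK:
        # The mishap cannot go forward; it is stopped where it occurs.
        raise ValueError(f"check for ${amount:,.2f} blocked by fail-safe")
    return f"check issued for ${amount:,.2f}"

print(write_check(250.00))      # normal case
try:
    write_check(45_000.00)      # out-of-bounds case is stopped cold
except ValueError as err:
    print(err)
```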

Detection. When problems cannot be fully contained at the source, we are pushed into the poor practice of inspection and discovery at a later stage. Delayed detection is costly and damaging to reputations. Feedback that is as quick as possible provides some damage control. The early-warning system should provide specific feedback from all subsequent error discovery points: in a later process in the same organization, within the next company, and by the final customer.

Improvement. Process improvement requires collection and use of data about process problems. Collecting process data cannot be a sporadic effort. Supervisors and operators need training in how to measure quality, collect quality data, and analyze quality statistics in order to isolate root causes. The collected data become the raw material for problem solving. Process improvement teams analyze the data and attack the problems. Improvement projects aim at making deficient processes capable and fail-safe, and the quality action cycle begins again.

Role of Quality Professionals

With a quality-at-the-source mind-set and frontline associates assuming primary responsibility for quality, is there still a need for a quality assurance department? Usually there is, except in very small organizations. However, the quality movement changes the role of that department.

2.3.2. The Process Focus

A process is the unique set of conditions that creates certain outcomes. Change a process and different results are likely to occur. In many processes, not all of the M’s (manpower, materials, methods, machines, maintenance) are apparent. Some human services involve virtually no materials. In some cases, maintenance might be considered a part of methods.

Process components change over time: one data entry person replaces another, materials come from a different supplier, a machine or its cutting tool is changed, a different maintenance schedule is started, and so on.

2.3.3. Coarse-Grained Analysis and Improvement

Process Flowcharts

A flowchart gets the analysis started. It helps the team visualize the value chain. The process flowchart pictures the full process flow in all its complexity.

Checksheets and Histograms

A checksheet is the simplest of all data collection tools. Just make a check mark each time a mishap occurs.

Histograms offer another way to display frequency data. A histogram, which is more structured than a checksheet, has equal-interval numeric categories on one axis, with occurrence frequency shown on the other.
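A minimal Python sketch of both tools, using made-up observations: the checksheet is just a tally per mishap type, and the histogram bins a measured characteristic into equal intervals.

```python
# Checksheet and text histogram -- all observations are made up.
from collections import Counter

# Checksheet: one entry per observed mishap (what a check mark records).
mishaps = ["late delivery", "scratch", "late delivery", "wrong part",
           "scratch", "late delivery", "late delivery", "scratch"]
for mishap, count in Counter(mishaps).most_common():
    print(f"{mishap:14} {'|' * count}  ({count})")

# Histogram: equal-interval numeric bins for a measured weight (kg).
weights = [9.8, 10.1, 10.0, 9.9, 10.3, 10.2, 10.0, 9.7, 10.1, 10.0]
lo, width, nbins = 9.6, 0.2, 4
bins = Counter(min(int((w - lo) / width), nbins - 1) for w in weights)
for b in range(nbins):
    lower = lo + b * width
    print(f"[{lower:.1f}, {lower + width:.1f})  {'#' * bins[b]}")
```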

Pareto Analysis

Pareto analysis helps separate the vital few from the trivial many. In process improvement, Pareto analysis proceeds as follows (a minimal sketch appears after the list):

Identify the factors affecting process variability or product quality.

Keep track of how often a measurable defect or nonconformity is related to each factor.

Plot the results on a bar chart, where the length of each bar stands for the number of times the causal factor occurs.
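Here is a minimal sketch of these three steps in Python, with hypothetical defect counts; sorting by frequency and accumulating the percentages makes the vital few stand out.

```python
# Pareto analysis sketch -- factor names and counts are hypothetical.
defect_counts = {"solder bridge": 57, "missing component": 23,
                 "misalignment": 11, "cracked board": 6, "other": 3}

total = sum(defect_counts.values())
cumulative = 0
for factor, count in sorted(defect_counts.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{factor:18} {'#' * count:<60} {count:3}  cum {cumulative / total:5.1%}")
```

With these assumed counts, two factors account for 80 percent of the defects – the vital few.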

2.3.4. Fine-Grained Analysis and Improvement

We have just seen how the coarse-grained tools can sometimes lead to solutions without further, more-detailed study. It's like the apple that fell on Sir Isaac Newton's head: he deduced the gravity concept from coarse-level "analysis."

The five more refined analysis tools from the example study one quality characteristic at a time. The purpose might be to further test a good idea, such as Newton's.

Fail-Safing

A common fail-safing example for suppliers of boxes of parts is egg-crate box dividers, so that only the correct number of parts – no more, no less – can be packed and sent to a user.

Unlike other tools of improvement, fail-safing does not rely on any particular sources of process data. Rather, it is a mind-set that can help direct associates, positively, toward a permanent, if never fully realizable, process fix: expand the improvement zone of the quality action cycle, and shrink the detection zone to nothing.

Fail-safing is best applied at the root-cause level of analysis – for example, at a third- or fourth-level sub-bone of a fishbone chart. It can prevent, for instance:

Leaving out parts or steps.

Fitting components or service elements together improperly.

Failing to follow the right process sequence.

Bad process result (e.g., the machine stops itself because of excess tool wear).

Passing errors along to the next process (because the root cause has been found and eliminated).

Fail-safe devices may be as simple as templates, egg-crate dividers, velcro, glue, and paint, or as fancy as limit switches, electric eyes, scales, locks, probes, timers, and scopes.

Fail-safing embraces a realistic view of people, processes, and errors. It recognizes that people need to be protected from their own naturally variable behaviors. This is an excellent attitude and often leads to the right solution. If people are unaware that processes can and should be fail-safed, their tendency is to hide a mishap when it occurs, to avoid the possibility of blame.

Design of Experiments (DOE)

Sometimes the improvement team is stumped: flowcharts, Pareto analyses, or fishbone charts have brought too many potential causes to the surface. "Which are the actual causes?" Statisticians or engineers who are seasoned veterans in the art and science of experimentation can help.

As noted earlier, the early 1990s witnessed renewed interest in design of experiments (DOE) as the quality movement progressed. The intricacies of DOE, including the Taguchi short-cut methods, are beyond the scope of this discussion, but some comments about how to bring experimentation into the improvement process are in order. The best way is to have those with DOE expertise join process improvement teams as consultants. It is important that experts and frontline associates rub shoulders. Experts have much to learn about applications, and front-liners need to learn what they can about advanced methods. Cross-learning is the key to joint ownership of processes and results.

While not everyone should try to master DOE techniques, another experimental tool, the scatter diagram, is easy for everyone to learn and use.

Scatter Diagram and Correlation

As used in TQM, a scatter diagram (scattergram for short) plots process output effects against experimental changes in process inputs. The correlation coefficient (discussed in Chapter 4) pins down the relationship in precise numeric terms. Often, we can adequately estimate the strength of the association just by looking at the scattergram.

Suppose that associates producing rubber inner tubes have noted wide variation in tube strength, as revealed by overfilling the tubes with air. They run an experiment seeking ways to reduce the variation. At the previous process in their work cell, in which formed tubes are cured in ovens, they vary the curing time. Next, they test the tubes, plot curing times against tube strength on a scatter diagram, and look for a correlation.
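A sketch of that experiment in Python, with invented data; the Pearson correlation coefficient summarizes numerically what the scattergram shows visually. All values are assumptions for illustration.

```python
# Curing time vs. tube strength -- hypothetical experimental data.
from math import sqrt

curing_time = [4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5]       # minutes (assumed)
strength = [31.0, 34.0, 36.0, 39.0, 41.0, 42.0, 45.0, 46.0]  # burst pressure (assumed)

n = len(curing_time)
mx = sum(curing_time) / n
my = sum(strength) / n
cov = sum((x - mx) * (y - my) for x, y in zip(curing_time, strength))
sx = sqrt(sum((x - mx) ** 2 for x in curing_time))
sy = sqrt(sum((y - my) ** 2 for y in strength))
print(f"Pearson r = {cov / (sx * sy):.3f}")  # near +1: longer cure, stronger tube
```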

Run Diagram

When we aren't doing experiments or solving problems, what should we be doing besides the task itself? Watching the process.

Here again we need tools; relying on impressions won't do. The run diagram is the simplest of the process-monitoring tools. A run diagram is simply a running plot of a certain measured quality characteristic. It could be the number of minutes each successive airplane departs late or the number of customers visiting the complaint desk of a store each day. In these cases, the company might specify an upper limit, which would be plotted on the run diagram. Any point above the upper specification (spec) limit would be obvious – and perhaps grounds for taking some kind of action.

In other cases, there might be both an upper and a lower specification limit. Consider a plastic part made in an injection-molding process; it might be a component for a toy, a home appliance, or a consumer electronics product. The customer's specifications are targets for the operation.
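The sketch below mimics a run diagram in Python: successive measurements of a hypothetical molded part are listed in production order and flagged when they fall outside assumed spec limits.

```python
# Run diagram sketch -- diameters (mm) and spec limits are assumed.
LSL, USL = 49.8, 50.2  # lower and upper specification limits
diameters = [50.00, 50.10, 49.90, 50.05, 50.22, 49.95, 50.00, 49.78, 50.10]

for unit, d in enumerate(diameters, start=1):
    flag = " <-- above USL" if d > USL else (" <-- below LSL" if d < LSL else "")
    print(f"unit {unit:2}: {d:6.2f}{flag}")
```

Units 5 and 8 would stand out on the plot – and perhaps be grounds for action.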

Process Control Chart

While the run diagram plots data on every unit, the process control chart, less precisely referred to as the quality control chart, relies on sampling. The method requires plotting statistical samples of measured process output for a quality characteristic. The sampling must be done initially to establish trial control limits. Then, sampling continues with the results plotted on the control charts. By watching the plotted points, the observer can monitor process variation, which may call for some kind of corrective action to the process.

Control charts come in several forms. We examine three of the more popular kinds: mean and range charts, which operate as a pair; the proportion-defective chart; and the number-of-defects chart. The mean and range charts rely on measurements of a continuous variable, such as weight, height, or specific gravity, and are therefore referred to as variables charts. Both the proportion-defective and the number-of-defects charts, on the other hand, count occurrences of a quality attribute within a sample and are called attribute charts. The underpinnings of the three charts are three different statistical probability distributions: the normal distribution for the mean chart, the binomial distribution for the proportion-defective chart, and the Poisson distribution for the number-of-defects chart.
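As one sketch of the variables-chart pair, the Python fragment below computes trial control limits for mean (x-bar) and range (R) charts from hypothetical samples, using the standard tabled factors for subgroups of five (A2 = 0.577, D3 = 0, D4 = 2.114).

```python
# Trial control limits for x-bar and R charts; sample data are made up.
samples = [
    [50.0, 50.1, 49.9, 50.0, 50.2],
    [49.8, 50.0, 50.1, 49.9, 50.0],
    [50.1, 50.2, 50.0, 49.9, 50.1],
    [49.9, 50.0, 49.8, 50.1, 50.0],
]
A2, D3, D4 = 0.577, 0.0, 2.114  # standard factors for subgroup size n = 5

xbars = [sum(s) / len(s) for s in samples]   # subgroup means
ranges = [max(s) - min(s) for s in samples]  # subgroup ranges
grand_mean = sum(xbars) / len(xbars)
rbar = sum(ranges) / len(ranges)

print(f"x-bar chart: LCL {grand_mean - A2 * rbar:.3f}  "
      f"CL {grand_mean:.3f}  UCL {grand_mean + A2 * rbar:.3f}")
print(f"R chart:     LCL {D3 * rbar:.3f}  CL {rbar:.3f}  UCL {D4 * rbar:.3f}")
```

Later samples plotted against these limits reveal whether the process variation remains in statistical control.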

2.4. Flow Control: Eliminating Process Waste

Better controls and process improvements are carry-over agendas as we continue our discussion of customer-serving processes. Having addressed design and quality, we now focus more directly on incorporating other basic customer wants – lower costs, faster response, and flexibility – into those processes.

A sound flow-control system is central to building and keeping customer-friendly processes; it ensures timely, accurate, value-adding movement of goods, services, or customers themselves through various stages of processing. Better flow control is one objective of several popular OM tools – benchmarking, reengineering, design for manufacturability, quick-response manufacturing (QRM), group technology/cellular layout, and just-in-time operations, for instance.

2.4.1. Flow-Control System Overview

Analogies have a way of being imperfect, but let's try one anyway: the flow-control system in a company is like the operating system in your personal computer. You don't know it's "doing its thing" until it isn't. Put another way, troubles in the flow-control system (or the computer's operating system) remind us that it is a part of the picture and that we mustn't take it for granted.

Let's take the analogy a step further: good flow control crosses all departments, links internal and external provider-customer chains, and facilitates attainment of the common purpose. In similar fashion, the computer's operating system "touches everything" as it fills a linking role; it links hardware, software, and information networks and helps the operator accomplish some purpose. And when they occur, flow-control system breakdowns – like computer operating system faults – tend to produce widely variable effects that can crop up in unpredictable places.

Process Variability

Flow-control problems can be caused by faulty equipment, too little or too much management, the wrong materials or the wrong quantity – in short, by any of the process components defined earlier. And, like deviation from a specification target for an output good, deviation from targeted process performance creates costs somewhere along the line. In any organization, multiple process elements can vary at the same time, and the combined variations can result in everything from mild annoyances to failures and delays to complete shutdowns.

Process variability annoys, both because it yields bad results and because it equals uncertainty. If a bus is late by 10 minutes dependably, we might be able to live with it. But if it's 10 minutes late on average – sometimes much later, on time, or early – we may give up on bus riding.

Keeping buses on time requires controlling just a few sources of variability. But flow control in a complex organization involves many interacting sources of variability, such as multiple internal processes using many different external materials and other resources. Reducing interacting variability requires a three-pronged attack:

System designers avoid complexity so that there are fewer sources of variability.

Every associate and team finds ways to control process variation.

Cross-functional teams develop ways to detect and plan around or adjust to sources of variability, thus producing a satisfactory result.

Developing the flow-control system requires a basic understanding of key relationships (or linkages) and overall impact.

Causes of Costs

We talked about OM's role in enhancing company margins – driving a larger wedge between the costs of providing goods and services and the prices we can command for them. As we noted there, process streamlining – getting rid of wastes – is a core ingredient of that effort.

The assumption, of course, is that those wastes are cost drivers, or true causes of costs, and thus cut into the company's profit margins. In this section, we examine a few of the difficulties associated with cost-driver classifications by focusing on inventory-related costs. First, though, we very briefly look at a traditional cost analysis tool.

Cost Variance

Unfortunately, not all cost statements reflect operating reality. Numerous respected authorities in the field of cost (managerial) accounting have called for substantive revisions to cost accounting techniques so that they better reflect true cost drivers. For years, standard practice has been to issue periodic cost variance reports as a form of feedback and control and, frankly, to serve as a warning to "errant managers."

The system sums up standard and actual costs for all work completed during a period. Standard cost represents what should have been spent for normal amounts of direct labor, materials, and so forth; actual cost comes from payroll expenditures, materials accounts payable, and other transaction records. If actual cost exceeds standard cost, a negative cost variance results and, if severe or frequent enough, can result in pressure for someone to shape up.

In theory, the cost variance makes sense. In practice, often it does not. While a detailed assessment is far beyond our scope, we will note that, in general, cost variance systems do not adequately account for overhead and can seriously misallocate direct costs as well. Though a bit of a simplification, we might sum up by saying that discretionary cost allocation – though perhaps not malicious – can be deceiving. For this reason, some companies worry less about trying to compute the cost savings associated with an improvement project than about keeping track of other, more objectively measured cost drivers.

When a company abandons detailed, period-cost measures, aren't there some risks? Perhaps so, if the company has no better way to deal with costs and wastes. A better way, however, has proven itself. It is a multifaceted system of identifying and then eliminating the causes of cost, poor quality, delays, and other undesirables.

Seven Deadly Wastes

This system introduces additional, detailed process improvement measures, and it takes aim at the so-called seven deadly wastes. Originally formulated for factories, the seven wastes are adapted for any organization as follows:

1. Waste of doing more than the customer wants or needs. In an office, this includes too many reports issued too often, too many meetings, too many interruptions. At retail it would include badgering the customer and demonstrating products and models that go beyond the customer's interests. In a factory it is, simply, overproduction.

2. Waste of waiting. This is wasting the time of clients, suppliers, or the workforce.

3. Transport waste. Ill-planned layouts of facilities can mean long travel distances from process to process for customers, suppliers, the workforce, materials, supplies, mail, tools, and equipment.

4. Processing waste. In the value-adding operations themselves, processing wastes can add up: Files are not properly cross-referenced. Procedures are not kept up-to-date. The task sequence is cumbersome or difficult to do.

5. Inventory waste. This includes all of the extra costs of holding and monitoring inventories, such as outdated catalogs and records, obsolete materials, and items bought or produced in excessive quantities too early for use.

6. Waste of motion or energy. Mere motion or consumption of energy does not equal useful work. The test is: Does it add value to the product? If not, it wastes motion or energy.

7. Waste of defects and mishaps. Any defect or mishap creates a chain reaction of other wastes – potentially all of the preceding six wastes – to "make it right." Included are wasting time, adding transport distance (e.g., return and do it over), inserting extra processing, requiring more inventories, and wasting motion or energy. Poor quality affects nearly everything negatively.

In effectively attacking these wastes, process-improvement teams apply a full array of problem-solving tools. As results of such efforts draw attention, competitors get interested in driving wastes out of their processes as well.

Success is contagious, and made even more so when competitive forces enter the picture. One spillover effect is that we begin to look at old assumptions – whatever they may be – with a more critical eye. We begin, for example, to see in a different light a piece of very necessary work-in-process (WIP) inventory that happens to be waiting in a queue to be fed into a machine. Yes, traditional accounting tells us that it is an asset, but our knowledge that its costs mount as it sits raises the word "liability" in our thoughts as well. That delay is costing us money!

Costs of Delays (Carrying Costs)

To reiterate, the work must flow. That advice becomes doubly important when cost is considered, for cost is like dust – it tends to settle on anything that is sitting around. Rapid processing allows little time for costs to accumulate. When material is idle, it incurs a cost above and beyond its unit price. That cost is called an inventory carrying cost.

Office work collects carrying costs, too. For example, office people sometimes spend more time searching (in-baskets, briefcases, computer files) for a document than working on it.

Costs of Idleness. What do those delays cost? For a client, the cost is hard to judge because most of it is poor-service cost, that is, the cost of the client's involuntary idleness.

Likewise, for documents and files, the cost of idleness is mostly the cost of slow service to the customer; the costs of storing and carrying the documents and files are minor.

What about materials in a hospital, restaurant, or factory? First are the physical costs of holding inventory and the financial costs of having working capital tied up in idle inventory. But those are the obvious carrying costs, which accounting and inventory management writings have always recognized. More recently, these writings have paid heed to less obvious and "hidden" costs.

Obvious Costs. In order to be a true inventory carrying cost, a cost must rise with the growth, and fall with the reduction, of inventory. Capital cost, first on the list, clearly qualifies. Company financial managers frequently attempt to secure bank loans or lines of credit to pay for more inventory. Banks often use the inventory as collateral for loans.

Example: Carrying-Cost Elements

Obvious carrying costs:

Capital cost – interest or opportunity costs of working capital tied up in stock.

Holding cost – stockroom costs of:

Space

Storage implements (e.g., shelving and stock-picking vehicles)

Insurance on space, equipment, and inventories

Inventory taxes

Stockkeepers' wages

Damage and shrinkage while in storage

Semiobvious carrying costs:

Obsolescence

Inventory planning and management

Stock record keeping

Physical inventory taking

Hidden carrying costs:

Cost of stock on production floor:

Space

Storage implements

Handling implements (e.g., conveyors, cranes, forklift trucks)

Inventory transactions and data processing support

Management and technical support for equipment used in storage, handling, and inventory data processing.

Scrap and rework

Lot inspections

Lost sales, lost customers because of slow processing

Only in abnormal situations can a company avoid capital costs.

Next on the list is holding cost, which is mainly the cost of running stockrooms.

While the accounting system may consider space and storage implements as fixed costs, they exist only to hold stock; therefore, they are true carrying costs. The other more or less obvious holding costs are insurance, taxes, material department wages, damage, and shrinkage.

Semiobvious Costs. Semiobvious carrying costs include inventory obsolescence and the costs of inventory management and clerical activities. People involved in inventory planning, stock record keeping, and physical inventory counting do not actually handle stock, and their offices often are far from stockrooms. Perhaps for these reasons, some companies include those costs as general or operating overhead. Clearly, however, they are inventory carrying costs.

Obsolescence cost is nearly zero when materials arrive just in time for use, but it can be high if companies buy in large batches and then find that the need for the items has dried up. High-fashion stores and high-tech electronics companies should be acutely aware of obsolescence as a cost of carrying inventory. Old-line manufacturers, however, might write off obsolete stock only once every 10 years; if so, they may fail to include obsolescence routinely in their calculated carrying-cost rate.

Hidden Costs. Carrying costs that commingle with other costs tend to be hidden. A prime example is stock released from a stockroom to operations (factory, sales floor, kitchen, etc.), where it sits idle between operations, tying up cash and occupying costly space. In manufacturing, idle in-process inventories commonly occupy half or more of factory floor space. Idle stock often sits on racks, conveyors, automatic storage systems, and other costly equipment, and it adds up to a major hidden carrying cost component.

Most companies once charged those costs, invalidly, as production costs. Today, accountants, operations managers, and associates are increasingly asking: Does it add value? Does the activity produce something saleable or directly serve a paying customer? If not, treat it as an inventory carrying cost. Illustration: a conveyor literally carries inventory and adds no value to the product.

Another so-called non-value-adding (NVA) activity is processing inventory transactions. This includes the cost of employees' time for entering inventory usage and scrap data into terminals, plus the cost of the terminals, usually treated incorrectly as operating costs. Much greater are the associated central processing costs (hardware, software, and computer operations) and the costs of corrections and report processing.

In inventory-intensive firms, inventory management is the dominant computer application; its costs have been conveniently bundled into the information system department's total costs, but they are actually hidden inventory carrying costs. Costs of management and technical support for storage, handling, and data-processing equipment are also carrying costs, but they are rarely treated as such.

Scrap and rework costs also fall with decreases in inventories, including decreases in lot sizes. This is true in processing perishables (such as cutting off rot from food items), in wholesaling and retailing (e.g., an entire lot of garments missing a buttonhole), in information processing, and in manufacturing.

As an information processing example, suppose telephone sales associates send sales orders forward once a day in batches averaging 800 orders. Order entry clerks in the next department might find numerous defects, such as a missing quantity, an incomplete address, or the lack of a promise date. Sometimes, especially for a new promotion, an entire lot of 800 orders will contain the same error. More commonly, errors will occur at some average percentage. Either way, order entry clerks end up sending the faulty forms back to the sales office for rework, probably the next day. Meanwhile, time has passed and salespeople are busy with other orders.

If salespeople processed and forwarded orders in lots of 20 instead of 800, the maximum damage would be 20 orders, which could be sent back while the trail of causes is still warm.

Much better still would be for a sales associate to hand the order directly to an order entry clerk. They become a team, intolerant of errors on order forms. Large defective lots are no longer possible. When an error occurs, the clerk usually discovers it right away, while the cause is still obvious. The team finds ways to permanently eliminate the causes, steadily driving down the rate of defective order forms. The arithmetic of lot size and error exposure is sketched below.
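A few lines of Python make the comparison concrete; the 2 percent defect rate is an assumption for illustration.

```python
# Lot size vs. error exposure -- defect rate is assumed.
defect_rate = 0.02   # fraction of order forms with a defect (assumed)
daily_orders = 800

for lot_size in (800, 20, 1):
    handoffs_per_day = daily_orders / lot_size
    worst_case_rework = lot_size               # a whole lot can share one error
    expected_defects = defect_rate * lot_size  # typical defects per lot
    print(f"lot of {lot_size:3}: {handoffs_per_day:5.0f} handoffs/day, "
          f"worst-case rework {worst_case_rework:3} orders, "
          f"expected defects per lot {expected_defects:5.2f}")
```

Smaller lots cap the damage and shorten the feedback loop from discovery back to cause.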

Inspection costs merit similar scrutiny. Inspectors facing large lots have the big job of sorting out the bad ones. However, some companies avoid large lots by adopting just-in-time techniques. They avoid large bad lots by implementing strict process controls to prevent rather than merely detect defects. The tie-in between inspection costs and lot-size quantities is becoming clear, and the conclusion is that even inspectors may be treated as a carrying cost.

Last and most important are the costs of lost sales and lost customer allegiance when the flow-control system is plagued by stalled orders. Thus, the negative impact of idle inventories on customer responsiveness is also a carrying cost. But by keeping lot sizes, queues, and transport distances short, the firm can ensure that the work flows through the system cleanly and quickly – perhaps surprising, delighting, and retaining the customer.

Uses of Carrying Costs. In some organizations, inventories are such a dominant cost that virtually every investment proposal has an inventory effect. Therefore, it is important to use a realistic carrying cost when doing a financial analysis for a proposal.

Traditionally, carrying cost has been stated as an annual rate based on the item's value. Older books on inventory management suggested a rate of 25 percent as a good average. Many North American manufacturers still use 25 percent (or 24 percent – 2 percent per month). But that rate is based on the obvious carrying costs and possibly some of the semiobvious costs.

If all carrying costs are included, as they should be, what is the proper rate? No studies have answered that question definitively. In 1990, Ernst & Young suggested a rate of at least 48 percent. Today, the rate is surely at least 50 percent. Indeed, several manufacturers have upped their rates to 50 percent or higher. When researchers have unearthed all carrying costs, more companies may use higher rates, perhaps as high as 100 percent. To see what 100 percent means, imagine a $50 chair sitting in a stockroom for a year. The owner would be paying another $50 for the chair in the form of the costs of carrying it.

Thinking about moving a machine and its operator across the building to team up with a machine and operator at the next process? How much inventory savings are there, and what carrying-cost rate is being used? Suppose that the cost of moving is $2,000 and that $3,000 of inventory would be eliminated. At a 25 percent rate, the savings are $750 per year (0.25 × $3,000); without doing a discounted cash flow analysis, payback on the investment will take about 2.7 years ($2,000 ÷ $750 per year), perhaps not very attractive.
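
To make the arithmetic explicit, here is a minimal Python sketch of that payback calculation, using the $2,000 and $3,000 figures from the example above; the 50 percent rate is added only for comparison and is an assumption, not part of the example.

    # Undiscounted payback: investment / annual carrying-cost savings.
    def payback_years(investment, inventory_eliminated, carrying_rate):
        return investment / (carrying_rate * inventory_eliminated)

    print(round(payback_years(2000, 3000, 0.25), 1))   # 2.7 years at a 25% rate
    print(round(payback_years(2000, 3000, 0.50), 1))   # 1.3 years at a 50% rate

At the higher, arguably more realistic rate, the same move pays back in about half the time, which is exactly why the choice of carrying-cost rate matters in such proposals.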

The best-known financial analysis that uses carrying-cost rates is the calculation of an economic order quantity. Another issue that bears on flow-control systems is the degree to which a company has developed quick-changeover flexibility, which makes smaller lots more economical.
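
To illustrate that link, the sketch below applies the standard EOQ formula, EOQ = sqrt(2DS/H), where D is annual demand, S is the cost per setup or order, and H is the annual holding cost per unit (unit value times the carrying-cost rate). The demand, setup-cost, and unit-value figures are invented for illustration.

    from math import sqrt

    def eoq(annual_demand, setup_cost, unit_value, carrying_rate):
        holding_cost_per_unit = unit_value * carrying_rate
        return sqrt(2 * annual_demand * setup_cost / holding_cost_per_unit)

    for rate in (0.25, 0.50, 1.00):
        print(f"{rate:.0%} carrying rate: EOQ = {eoq(10_000, 40, 20, rate):.0f} units")
    # 25% -> 400 units; 50% -> 283 units; 100% -> 200 units

The fuller the accounting of carrying costs, the smaller the economic lot; cutting the setup cost S through quick changeover shrinks it further still.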

2.4.2.Quick-Change Flexibility

How long does it take an Indy 500 pit crew to change four tires, fill the tank, clean the windshield, and squirt Gatorade into the driver's mouth? Fifteen seconds? Less? Regardless of how long, the workings of an efficient pit crew capture many concepts of quick-change teamwork and readiness.

Concern about changeover and readiness is not limited to pit crews. The Ritz-Carlton Hotel Company, a 1992 and 1999 winner of the Malcolm Baldrige National Quality Award, for example, switched from independent room cleaning to team cleaning as part of an effort to reduce the time needed to prepare guest rooms. By more than meeting its goal of a 50 percent reduction in cleaning-cycle time, Ritz-Carlton made a significant reduction in the time guests had to wait at the front desk for check-in.

From the famous racetrack in Indianapolis to the posh suites in some of the world's finest hotels, quick-change tactics are directly responsible for winning operational performances. The underlying concepts are simple and can be expressed as guidelines for action.

Guidelines

Although some businesses are famous for their quick-changeover expertise (e.g., stage crews and airline caterers), most organizations give the matter scant attention. But elevated competition in many businesses demands quicker, error-free service, enhancing the value of the firm's ability to continually reduce changeover and get-ready times. The training materials that address these concerns are based on a few guidelines, which we discuss next.

Changeover Avoidance. Guideline one is the special case of a single service, product model, or type of customer that gets its own dedicated process. If, say, three-quarters of McDonald's customers wanted a Big Mac and a medium Coke, the restaurant would set up a dedicated Mac-and-a-Coke line, with no flexibility or changeovers to worry about. All companies would love to have products that popular. The simplicity, low cost, and uniformly high quality of this mode of processing yield high profits and large numbers of loyal customers.

Be-Ready Improvements. The next three guidelines provide natural, low-cost improvement projects for teams of associates.

Guideline two is doing all possible setup steps while the process is engaged on its previous product model, type of customer, or service. That minimizes the time the process is stopped and unproductive. Alternatively stated: Convert internal setup steps (performed while the process is stopped) into external steps (done offline, while the process is running a prior job). At a laundromat, for example, have your next load sorted and the detergent and other additives measured out before the machine stops.
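
A small sketch can make the internal/external distinction concrete. The step names and minutes below are hypothetical; the point is simply that only the steps tagged internal stop the process.

    # Each setup step is tagged internal (process stopped) or external
    # (done while the prior job runs); only internal steps cost downtime.
    setup_steps = [
        ("fetch and stage next die", 12, "external"),
        ("premeasure additives",      5, "external"),
        ("remove old die",            8, "internal"),
        ("mount and align new die",  15, "internal"),
    ]

    downtime = sum(m for _, m, kind in setup_steps if kind == "internal")
    total = sum(m for _, m, _ in setup_steps)
    print(f"process stopped {downtime} of {total} minutes")   # stopped 23 of 40

Every step an improvement team reclassifies from internal to external comes straight off the downtime total.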

Guideline three provides the discipline of "A place for everything, and everything in its place." Have you had to wait to sign something while a clerk looks for a 49-cent pen? Or has one, but it won't write? By contrast, an Indy pit crew is ready with gasoline hoses, tire-changing devices, and tires correctly positioned and in tip-top shape. And room-cleaning teams at Ritz-Carlton hotels have towels, linens, bar stocks, and cleaning supplies stocked and ready for use long before cleaning activities begin. Surgical teams in operating rooms, rescue team personnel, and firefighters adopt the same kinds of readiness habits and discipline.

In factories, readiness may include hanging precleaned and sharpened hand tools on "shadow boards" at the workplace: no fumbling through a drawer or toolbox, or walking to a tool crib.

Where equipment is expensive (a race car, a surgical room, or a massive press line that stamps out automobile body parts), a sizable, well-trained changeover crew is justified. Guideline four is deftly applied, for example, in well-managed conference centers: Dozens of employees gather minutes before a conference ends and, quickly and acting in parallel, they dismantle the speaker's platform, remove water pitchers and other tabletop items, fold and stack tables and chairs, clean the area, and set up for an evening banquet or wedding party.

Too often the opposite occurs in factories of well-known companies: a $5 million packaging line for a headache remedy halted for four hours while one or two maintenance technicians make hundreds of adjustments, one by one (serially), for the next package size or type of tablet. The JIT movement has caused many manufacturers to change their human resource practices so that such expensive equipment can be set up efficiently.

Modifications. Guidelines five through eight generally require that the improvement team call on an expert for technical assistance. Since the modifications may be costly, these guidelines would usually take effect after the be-ready guidelines (two through four).

Guideline five calls for eliminating or immobilizing devices and adjusters that come with the equipment or that were once part of the process but are no longer needed. For example, an overhead projector has a focus knob, but if the projector stays in the same classroom anchored to a table facing the same screen year after year, the focus adjustment unit is an invitation for unnecessary, non-value-adding tampering and variable image quality. In one company, a conference room user had wound strapping tape around the adjustment knob at the right focus setting so that other users could skip the adjustment step.

Why not just order the projector with a fixed focal length to suit the room layout? Because it would be a costly special order, and the manufacturer would have to charge a higher price. Equipment designers usually include many adjustment features, which broadens appeal, increases demand, produces economies of scale, and lowers the price.

After the sale, however, teams of users should work on removing or immobilizing unneeded adjustment devices.

Guideline six is the opposite of five: adding special features not usually provided by the equipment manufacturer. For example, to make recycling easier, a team might come up with a plan to equip all the firm's pop machines with a bin that receives, crushes, and holds empty cans.

In manufacturing, setup teams frequently devise locator pins, stops, air-cushion glides, and guide paths that make it easier to change a mold or a die. Exhibit 10-6 shows huge "sleds" on rails, used for quickly and accurately moving multiton dies in and out of stamping presses.

Guideline seven calls for simplified, standardized designs. Too many brands of word processors, computers, typewriters, and drill presses (each obtained at a bargain price) expand exponentially the array of supporting devices and sets of instructions needed for setup and changeover. Standardization also applies to accessories: for example, if all fastening bolts on a machine are the same size, only one size of wrench is needed in machine changeover.

2.4.3.Quick Response: Managing Intersector Flows

QR links different companies in several stages of production, supply, and freight hauling to final points of sale. The ultimate aim is tight synchronization: Pick the cotton that's spun into thread that's woven into cloth that's dyed and finished into fabric that's cut and sewn into a shirt that's delivered to the store just before you walk in to buy it, all of this, and transportation too, in sync. Synchronization at each stage affects scheduling, purchasing, storage, logistics, capacity, marketing, and cash-flow planning.

Basic and Enhanced QR

Quick response's unofficial kickoff was in June 1986 in Chicago. Roger Milliken, chairman of textile manufacturer Milliken & Co., was instrumental in getting together a few dozen retailers and textile and apparel suppliers to discuss foreign competition. The main issue was how North American fiber-textile-apparel industries could compete with low-wage companies offshore. Participants at this and following meetings wanted to use technology to exploit the proximity of U.S. companies to the American market, and the goal was to set standards so everyone from raw material supplier to the retail store could speak the same electronic language and share data.

QR has rapidly expanded. An annual Quick Response convention and exhibition has emerged to promote the concept and the technology. The photo "Survival of the Quickest" suggests a sports metaphor that expresses the competitive spirit of QR programs.

Basic QR requires point-of-sale (POS) data only from selected stores. This is like predicting election results: Pollsters use voter intentions from key precincts to predict election winners with high accuracy. Similarly, manufacturers, even if several echelons removed from final sales points, can schedule production based on recent POS samples. Conventional scheduling, on the other hand, is always weeks or months out of date.
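
As a rough illustration of that "key precinct" logic, the sketch below scales recent POS data from a few sample stores up to a chain-wide production estimate. The store names, the sales figures, and the 5 percent sample share are all assumptions made for illustration.

    # Sampled point-of-sale data: units sold this week at three stores.
    weekly_pos_units = {"store_A": 120, "store_B": 95, "store_C": 140}
    sample_share = 0.05   # the sample stores carry about 5% of chain volume

    chain_estimate = sum(weekly_pos_units.values()) / sample_share
    print(f"schedule about {chain_estimate:.0f} units")   # about 7100 units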

In an advanced version of quick response, called vendor-managed inventory (VMI), retailers confer on producers the management of retail inventories. Retailers send point-of-sale data to producers daily, and producers have access to retailers' inventory files. Unlike basic QR, which requires scanning data only from a few stores, VMI can provide manufacturers with complete data from every store.

Still another advanced form of QR is called efficient customer response (ECR). ECR provides additional supply-chain linkages in four main ways.

Efficient replenishment. These are the practices already described for QR and VMI.

Efficient assortment. Retailers use sophisticated "category management" software to stock store areas with what consumers want most. The twin aims are more sales per square foot and improved customer satisfaction.

Efficient promotion. Order, produce, ship, and stock exactly what sells. Cease forward buying, trade loading, and BOGOs (buy one, get one free), which pay little heed to real customer needs or usage.

Efficient product introduction. Product development is a joint effort. Producers, distributors, brokers, and retailers team up to get the right products to market quickly.

Regarding efficient promotion, Ronald Zarrella, GM's head of sales and marketing, introduced the following policy: Promote cars throughout their life cycle, instead of spending lavishly in the introductory year and then starving the model after that.

Linking External and Internal Flow Control

QR is the offspring of just-in-time, and it embodies JIT's core concept of final customers "pulling the strings" to cause production and delivery back through the chain of supply. For QR to work, firms at each echelon in the supply chain must improve their internal processes, in office support, distribution, and freight areas as well as in frontline operations. These firms can use a broad array of proven JIT techniques for responding to customers' demands, plus TQM techniques for getting it right.

By involving retailers, QR uses sales scanning data that big retailers had collected for years but never used to good advantage. Before QR's introduction, firm-to-firm JIT arrangements were widespread but mostly limited to manufacturing: processed-material or component suppliers linked (by kanban, fax, EDI, etc.) to fabrication or assembly plants. QR establishes a common basis for sector-to-sector flow control, linking goods and service sectors seamlessly.

JIT was born in Japan and is now practiced worldwide, but QR is a uniquely North American contribution to good management. America's large open market and relatively efficient distribution system offered a favorable environment for QR's development. The necessary alliances have been difficult to attain in Japan, with its many layers of middlemen between manufacturers and retailers. In Europe, despite geographical difficulties (national borders), QR is catching on.

Logistics: Moving the Goods

As we've seen, QR and JIT both rely on efficient freight and distribution management, which businesses refer to as logistics or, more recently, supply-chain management. In fact, logistics has become a high-visibility partner of marketing and operations management after years of being left to disparate specialists. Producers and retailers increasingly see that they cannot be expert in managing the flow among parties in the supply chain. They must partner with freight movers (air, rail, truck, barge) and distributors.

A growing trend is for a single logistics company to take over both freight and some of the management of distribution centers. Federal Express receives and stores personal computer components for IBM and Compaq Computer. Then, when a company (or a person) orders a customized PC system, Federal Express pulls the components, boxes them (with manuals thrown in), and makes the delivery. This role expansion has transformed Federal Express from an overnight mail delivery company into a supplier of order-fulfillment services.

DSC Logistics, with a 1.2-million-square-foot warehouse in Melrose Park, Illinois, got itself into the business by a different route. It had been strictly a warehouser. Ann Drake, who runs the family-owned business, changed its name from Dry Storage Company in 1993.

Trucking companies are metamorphosing into full-service logistics providers as well:

Just behind a vast field of corn on Packerland Drive in Green Bay, Wisconsin, stands the brain center of Schneider National. Once strictly a trucking company, Schneider now focuses on its highly profitable logistics business. It has about 140 logistics contracts, ranging from $2 million to $200 million, the last for a deal with GM.

On one giant floor, hundreds of Schneider customer-service representatives in cubicles track freight using electronic data interchange technology. With a satellite system, a Schneider representative can tell customers exactly where its drivers are and, more importantly, what time a given shipment will be delivered.

Satellite-navigated trucks time their arrival at unloading docks to the hour, or even within a 15-minute window. Producers help by providing advance shipping notices to the freight haulers.

The retailer's distribution centers are shifting their roles as well, sometimes spurning their traditional storage role.

Measuring Flow-Control System Performance

What should a high-performance flow-control group strive to achieve? Exhibit 10-c summarizes traditional and newer answers to that question. Traditional (and still common) examples of flow-control measures include 95 percent on-time performance against internal schedules, 99 percent inventory accuracy, and five inventory turns per year.

Pipeline Efficiency

Quick response programs look beyond the department or company walls. QR-connected firms in the supply pipeline all work from the same scanning data: real customer demand. The supporting information system usually allows suppliers to send advance shipping notices to freight carriers via EDI, fax, or Internet transmission.

New pipeline-oriented measurements need to be devised to reflect flow control among suppliers, freight carriers, and retailers. Examples include, for suppliers, the time from ordering to receipt of material by the customer; for carriers, the time from receipt of the advance shipping notice to customer receipt of the goods; and for the retailer, the time from receipt of the goods to their availability on the sales floor.

Cycle Times/Lead Times

A prominent performance measure within the company walls is cycle time, or lead time, including time to process all information related to production or service. Aside from measuring quickness of response, cycle time serves as an overall indicator of flow control. Long cycle times (e.g., many weeks) are evidence that the work flow is out of control.

Better flow control means a smaller flow-control staff. That is, as cycle times fall and the work flow becomes more tightly controlled, the firm needs fewer expediters, schedulers, dispatchers, and clerical staff.

For example, JIT implementation teams at Physio Control (a manufacturer of defibrillators) were able to create 11 JIT cells (or team-built lines, as they are called at Physio). The focused cells used daily rate scheduling, revised monthly, thus eliminating thousands of work orders. Work-in-process (WIP) inventories plunged, emptied WIP stockrooms were torn out, and remaining small stocks became the property of each team-built line. Physio's 10-member production control department had no scheduling or inventory management left to do and was abolished; the 10 people were retrained for other duties, such as supplier certification and supplier development.

Response Ratios

Cycle time is a fine measurement of overall flow through several processes, but what about measurements within each of those processes? The response ratio fills the need.

The three response ratios are cycle time to work content, process speed to use rate, and pieces to workstations or operators. The ideal ratio for each is 1 to 1, but in practice it is typically 5, 10, 100, or 1,000 to 1. What does a ratio of, say, 100 to 1 mean? Examples for each ratio serve to illustrate, with a short calculation sketch after the third example:

In a drop-and-add line (at registration for college classes), there is an average of 99 minutes of delay for a 1-minute transaction to have a form signed. The 99 minutes of delay plus the 1 minute for the signature yield a ratio of 100 minutes of total cycle time to 1 minute of value-adding work content.

A wire-cutting machine currently is cutting 1,000 pieces of electrical power cord per hour for a certain model of lamp. Lamp assembly, the next process, installs that model of cut cord at only 10 per hour. The ratio of process speed to use rate, thus, is 1,000 to 10, or 100 to 1.

A clerk in a purchasing department typically has a stack of 99 invoices in an in-basket and just 1 invoice being worked on. This constitutes a pieces-to-operator ratio of 100 to 1.
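
Each ratio is the same simple division, total quantity over its value-adding counterpart, as this sketch of the three examples shows.

    def response_ratio(total, value_adding):
        return total / value_adding

    # Cycle time to work content: 99 min delay + 1 min signature vs. 1 min of work
    print(response_ratio(99 + 1, 1))    # 100.0

    # Process speed to use rate: 1,000 cords cut per hour vs. 10 used per hour
    print(response_ratio(1_000, 10))    # 100.0

    # Pieces to operators: 99 queued invoices + 1 being worked vs. 1 clerk
    print(response_ratio(99 + 1, 1))    # 100.0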

In each case, high ratios mean long queues of customers, documents, idle materials, or projects. Team members may calculate the ratio, post it in the workplace, and then work to lower it. But they cannot do so without making improvements that we have covered in this and previous chapters: cut changeover times, limit the queues, have a system for borrowing labor when lines get too long, eliminate disruptive rework by doing it right the first time, keep all areas clean and well organized, run equipment at the use rate instead of at maximum speed, and so forth.

A main advantage of the response ratio is that it is unitless, devoid of numbers of minutes, clients, truckloads, and so forth. The goal of 1 to 1 or 2 to 1 is the same for any kind of work, and it enables comparison of improvement rates across the enterprise. In short, the ratio is promising as a universal measure of service speed, flow control, and, conversely, non-value-added wastes and delays.

2.4.4.Inventory Control and Turnover


For manufacturers, wholesalers, and retailers, inventory turnover remains a good overall measure, accounting for many of the wastes tied up in inventory. Corporate management or improvement teams can use it to assess site performance, and site managers/teams can use inventory turnover, or turnover improvement, to measure their own performance. Low inventory turnover normally indicates poor performance, symptomatic of waste and inflexibility; high and increasing inventory turns indicate good performance and continuous improvement.
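
In its simplest form, the measure is annual cost of goods sold divided by average inventory value; the dollar figures in this sketch are hypothetical.

    # Turns per year: how many times inventory is sold and replaced.
    def inventory_turnover(annual_cogs, average_inventory_value):
        return annual_cogs / average_inventory_value

    print(inventory_turnover(10_000_000, 2_000_000))   # 5.0 turns per year
    print(inventory_turnover(10_000_000,   500_000))   # 20.0 turns: leaner, faster flow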

Moreover, research evidence suggests that the long-term trend in inventory turnover may be as good an indicator of company strength as the usual financial measures, such as profitability and sales revenue, or better. This suggestion is based on a study of inventory turnover for well-known manufacturers in the United States and the United Kingdom over a 45-year period beginning in 1950; partial supporting evidence comes from France and Australia. The research shows declining turnovers for about 25 years. Then, many manufacturers began to react. They learned about lean production concepts and put them to work. One result of all of these improvement methodologies is less dependence on inventories as a cover for problems, so inventory turnovers improve.

2.5. Timing – Another Imperative

2.5.1.Timing-Impact on OM

In 1990, Joseph D. Blackburn published Time-Based Competition: The Next Battleground in American Manufacturing, in which he warned that the battle for better quality, clearly under way at that time, should not be viewed as the final struggle in the war for competitive position. He reminded us that timing is also very much on the minds of customers.

Through the 1990s, time-based competition emerged into the reality that Blackburn had predicted, and today it is fair to say that all organizations face timing problems of one form or another. Moreover, operations managers find themselves in the thick of the timing battle.

Dual Output Requirements

Faster response times, suggested between the lines in the opening vignette, are one time-related requirement, but customers also want on-time performance. The Into Practice box illustrates faster turnaround (response) time at Dun & Bradstreet and better on-time performance at Canadian Airlines. We can toss in another requirement, perhaps better viewed as an extension of the first two: Customers also want consistently fast and consistently on-time performance. In other words, the basic requirement of less variability is very much a part of the timing requirement as well.

Greater customer awareness of timing performance is playing a role. Public transportation companies, government agencies, and other high-visibility organizations must publish schedules of their operations, meetings, projects, and so forth. Even a casual observer can compare what was promised to what was delivered. Being late or taking too long shows up quickly. And of course, consumer groups have a powerful communications link in the Internet: word spreads rapidly. Thus, the timing of an organization's grand outputs (those goods and services destined for external customers) can cast a positive or negative light on overall company operations. "Quick-look" analyses of outputs, however, fail to reveal the depth of impact that timing has on support operations.

Timing in Support Operations

Whenever people or materials are late, carefully crafted schedules go awry, and the negative effects ripple up and down chains of provider-customer linkages throughout the organization. Waiting, unplanned downtime, extra handling of materials and equipment, expensive rescheduling actions, personnel shuffling, and inventory excesses or outages add costs and thus negatively impact operations effectiveness and financial performance.

2.5.2. Pull- and Push-Mode Operations

As noted earlier, just-in-time and kanban are often associated with pull-system, or pull-mode, operations. The distinction between pull- and push-mode operations lies in whether the provider or the customer controls the flow. In this section we examine some of the differences between the two modes, consider why push-mode often dominates, and close with some observations regarding increased pull-mode operations.

Push-Pull Distinctions

In pull-mode operations, the recipient's signal is required before the provider sends the work along; that is, the customer controls the flow of work. Kanban signals, whether in the form of a card, container, or human being in a waiting line, tell the provider of a good or service that it is time for action. Pull-mode governs many machine-dispensed goods and services; vending machines, ATMs, and car washes are common examples. Also, most personal human services are (almost) pure pull operations; no action is taken until the customer presents the request for service. Many government services also qualify … response to fire alarms is a classic case of pull-mode.

In push-mode operations, the provider sends work along in the absence of any call from the customer. In push-mode, the provider determines when (and often what) work flows … "Ready or not, here it comes!" Some services have considerable push-mode components. Radio and television stations, for example, broadcast (literally) to the cosmos even if (in the short run, at least) no one is tuned in. And many manufactured goods flow because the provider chooses to produce them, not because a customer has ordered them. To be fair, push-mode providers usually act in anticipation of demand being "out there," but that demand is a far cry from being a firm customer order.

Numerous hybrid situations may be identified. Utility services (electricity, water, etc.) are pushed into the distribution channels, but the customer's flip of a switch or turn of a faucet is required to start the meter running and thus make the final (pull-signal) determination of flow control. Some fast-food outlets keep a small quantity of prepared sandwiches in a short slide chute: push-mode up to, say, kanban = 3, but not more. If these are top sellers, they move rather quickly, so food freshness is assured and response times are lowered.
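
A minimal sketch of that hybrid follows, using the kanban limit of 3 from the fast-food example; everything else in it is assumed.

    class Chute:
        """Push-mode up to a kanban limit, pull-mode thereafter."""
        def __init__(self, kanban_limit=3):
            self.kanban_limit = kanban_limit
            self.sandwiches = 0

        def replenish(self):
            # Push: prepare sandwiches, but never beyond the kanban limit.
            while self.sandwiches < self.kanban_limit:
                self.sandwiches += 1

        def sell(self):
            # Pull: a customer takes one, authorizing one replacement.
            if self.sandwiches > 0:
                self.sandwiches -= 1

    chute = Chute()
    chute.replenish(); print(chute.sandwiches)   # 3
    chute.sell();      print(chute.sandwiches)   # 2
    chute.replenish(); print(chute.sandwiches)   # 3 again; the queue limit holds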

Dominance of Push Systems

Signs of push-mode dominance appear in nearly every type of operations. Factories have conveyors, pallets, and storage racks filled with goods being pushed out with no queue limits in place. The mounds of unsought goods move out into distribution channels, to warehouses, and on to retailers, where they clutter storefronts until "special sales" are required to move them along, often at a loss. In services, long lines form and providers have no response plans in place to, again, limit the queues … servers just keep chugging along at the built-in pace. Response times (time in queue or in system) get extended.

Although apparently wasteful and insensitive to customer wants, push-mode operations have dominated for several reasons:

1. Inflexibility. A surge in customers or order arrivals will cause a queue to lengthen unless the provider can muster resources. Insufficient physical capacity, lack of cross-trained labor or a backup labor supply, and no outsourcing partners are frequently the culprits.

2. Geographical distances. In manufacturing or distribution, providers and users are often quite separated, and that tempts the supplier to keep producing and push product forward. Poor contact with customers down the line is to blame.

3. Erroneous costing. As we saw in Chapter 10, costs of producing an item in advance of need and carrying that item until it is needed are often grossly underestimated.

4. Period quotas. When managers feel pressure to attain period production or sales quotas, the result can be an end-of-period push to get goods out the door. Often described as "hockey-stick management," the flow pattern resembles a series of hockey sticks lying end to end with the blades extended upward to reflect increased output near period end.

5. Capacity/budget justification. Even if production or sales quotas aren't involved, the desire to "show high levels of resource utilization" as a way of justifying existing budget levels or capacity allocations can prompt managers to push unneeded work onto society. End-of-fiscal-year spending, perhaps to fund frivolous projects or research studies, by public and not-for-profit organizations is a classic example of push-mode waste.

Competitive forces are making continued disregard for customer desires a hazardous game. In response, many steps have been taken to improve operations so that they can be accomplished in a more pull-mode manner.

Chapter 3

Conclusions

Nowadays, firms in Romania and all over the world are struggling for survival and are trying to follow managerial techniques and principles ever more closely.

Being of such vital importance, the tasks of management are considered by some economic agents a Bible for their activity. This category of fortunate, enlightened people has little trouble achieving its goals in business. But there is still another category, one that is unaware of management's precepts and that simply has no chance of survival.

The first category comprises the so-called good students of society. They try to apply all the important principles of management and take care of their business as well as possible.

“IMPACT SA”, for example, is one company that tries to do its best, to better satisfy its clients and to realize profitable outcomes from the business. It mostly applies in practice what it has learned in theory.

Let’s take, for example, the design of the product, which is a core responsibility of the business.

A chronic weakness among many companies is undermanagement of design and development. Perhaps part of the fault is that business schools have chronically underemphasized design as a core business responsibility. Ironically, we probably know more about poor design practices than we do about good ones. But design problems ripple into operations. Quality deteriorates, processing slows or stops, and costs mount. Customers look elsewhere. In short, competitiveness suffers.

Indeed, as for design, IMPACT strives to get it right. The company gives design vital importance, trying to have the best possible designs for its products. Looking at its different house types, we can see that.

Probably the most important factor is quality. When a business does not provide quality in its products, its work is for nothing and makes no sense.

The lack of quality means the end of a business. Interest in quality is centuries old. The Code of Hammurabi mandated death for any builder whose work resulted in the death of a customer. Other quality-related codes, often equally harsh, are found in the writings of the ancient Phoenicians, Egyptians, and Aztecs. Despite the harsh codes, however, it was the artisan’s pride, not fear, that contributed most significantly to supporting quality assurance for centuries to come.

Juran provides a straightforward yet very inclusive definition of TQM: those actions needed to get to world-class quality. The word total is a contribution of Armand Feigenbaum and the late Kaoru Ishikawa, two additional respected quality pioneers: in top organizations, quality management is no longer treated as a staff responsibility or functional specialty tucked away somewhere behind a door labeled „Inspection Department.” Instead, it is everybody’s business, a total commitment: organizationally as a competitive requirement; collectively as people pool their skills and special talents as members of improvement teams; and singly as each individual performs job tasks.

ISO 9000 Series Standards

The ISO 9000 Standard is actually an umbrella name for three separate but related quality standards originally published in 1987 by the International Organization for Standardization, based in Geneva, Switzerland. Though support has been particularly strong within the European Community, its use is global.

Many companies require their suppliers to register to ISO 9000 and expect those suppliers, in turn, to require their own suppliers to register as well. As customers, their rationale is understandable: the quality imperative demands reliable suppliers. Under the ISO 9000 scheme, a company (or a division or plant within a company) arranges to have its quality systems documentation and implementation audited by an independent accredited registrar. The phrase „third-party registration” refers to this objective assessment. The particular role to be played by a supplier determines which portion of the standards must be met.

Quality driven by the employees is another important step that must be accomplished, and indeed IMPACT handles this well. Its employees make a major contribution to the company’s welfare. Everyone needs training in the tools of continuous improvement, problem solving, and statistical process control. In addition, people require training in job skills, plus cross-training for an understanding of the bigger picture. People need to be put into close contact with customers and suppliers. This calls for the organization of multifunctional customer-, product-, or service-focused cells, teams, and projects. The management, control, and reward systems need to be realigned with the goals of employee- and team-driven, customer-centered quality and continuous improvement.
