
University “Politehnica” of Bucharest
Faculty of Electronics, Telecommunications and Information Technology

Development of a Verification Environment for a
Programmable Interval Timer using UVM Methodology

Dissertation Thesis
submitted in partial fulfillment of the requirements for the Degree of
Master of Science in the domain Electronics and Telecommunications ,
study program Advanced Microelectronics

Thesis Advisor: [anonimizat], Lecturer Mariana ILAȘ
Student: Vlad Alexandru GEORGESCU

2015

Statement of Academic Honesty

I hereby declare that the thesis “Development of a Verification Environment
for a Programmable Interval Timer using UVM Methodology”, submitted to the
Faculty of Electronics, Telecommunications and Information Technology in partial
fulfillment of the requirements for the degree of Master of Science in the domain
Electronics and Telecommunications, study program Advanced Microelectronics,
is written by myself and was never before submitted to any other
faculty or higher learning institution in Romania or any other country.

I declare that all the information sources I used, including the ones I found
on the Internet, are properly cited in the thesis as bibliographical references. Text
fragments cited “as is” or translated from other languages are written between quotes
and are referenced to the source. Reformulation using different words of a certain text
is also properly referenced. I understand plagiarism constitutes an offence punishable
by law.

I declare that all the results I present as coming from simulations and
measurements I performed, together with the procedures used to obtain them, are real
and indeed come from the respective simulations and measurements. I understand
that data faking is an offence punishable according to the University regulations.

Bucharest, 15.06.2015

Vlad Alexandru GEORGESCU

_________________________
(student’s signature)

Table of Contents

Registration Form of the Thesis Topic
Statement of Academic Honesty
Figures List
Tables List
Acronyms List
Introduction
Context

Chapter 1 – Functional Verification
1.1 Hardware Verification Languages
1.1.1 OpenVera
1.1.2 SystemVerilog
1.1.3 e Language
1.2 Verification Methodology
1.2.1 e Reuse Methodology
1.2.2 Universal Verification Methodology
1.2.2.1 Transaction
1.2.2.2 Bus Functional Model
1.2.2.3 Driver
1.2.2.4 Monitor
1.2.2.5 Agent
1.2.2.6 UVM Verification Environment
1.3 Verification Workflow
1.3.1 Metric Driven Verification

Chapter 2 – 8254 Programmable Interval Timer
2.1 DUT Description
2.2 APB Interface Protocol
2.2.1 Write Transfers
2.2.2 Read Transfers
2.2.3 Transfers with Error

Chapter 3 – DUT Verification Flow
3.1 Verification Plan Elaboration
3.2 Verification Environment Implementation
3.2.1 APB eVC
3.2.2 vr_ad Package
3.2.3 Verification Environment
3.2.3.1 C_OUT Agent
3.2.3.2 Counter Golden Model
3.3 Metrics Implementation
3.4 Test Scenarios
3.5 Result Analysis
3.6 Verification Code Performance Review
Conclusions
References

Annex 1
Annex 2
Annex 3
Annex 4
Annex 5
Annex 6
Annex 7
Annex 8
Annex 9

Figures List

• Fig. 0.1: Digital design workflow
• Fig. 1.1: A simple Vera Program
• Fig. 1.2: Illustration of FV SystemVerilog aspects
• Fig. 1.3: e code fragment
• Fig. 1.4: History of FV methodologies
• Fig. 1.5: XSerial eVC – Dual-agent implementation
• Fig. 1.6: Verification Environment Example
• Fig. 1.7: UVM Monitor example
• Fig. 1.8: MDV Phases
• Fig. 1.9: MDV Workflow
• Fig. 1.10: Verification metrics
• Fig. 2.1: Timer device block diagram
• Fig. 2.2: Control Word Register format
• Fig. 2.3: APB AMBA 3 write transfer
• Fig. 2.4: APB AMBA 3 read transfer
• Fig. 3.1: Verification strategy impact on time saving
• Fig. 3.2: Functional Verification workflow
• Fig. 3.3: DUT verification plan extract
• Fig. 3.4: Counter golden model
• Fig. 3.5: Verification Environment architecture
• Fig. 3.6: APB Interface eVC architecture
• Fig. 3.7: APB transfer
• Fig. 3.8: Address map
• Fig. 3.9: C_OUT agent interface driving
• Fig. 3.10: Mode 0 simulation waveform
• Fig. 3.11: Mode 1 simulation waveform
• Fig. 3.12: Mode 2 simulation waveform
• Fig. 3.13: Mode 3 simulation waveform
• Fig. 3.14: Mode 4 simulation waveform
• Fig. 3.15: Mode 5 simulation waveform
• Fig. 3.16: Virtual sequencer
• Fig. 3.17: Incisive Enterprise Manager Interface
• Fig. 3.18: Mapping of the VP elements
• Fig. 3.19: Reference checking mechanisms analysis
• Fig. 3.20: Functional coverage analysis
• Fig. 3.21: Code coverage analysis
• Fig. 3.22: Profiling tool interface
• Fig. 3.23: Tests sorted by the inefficiency factor

Tables List

• Table 2.1: 8254 Programmable Interval Timer interface description
• Table 2.2: 8254 Programmable Interval Timer register description

Acronyms List

• FV – Functional Verification
• HVL – Hardware Verification Language
• HDL – Hardware Description Language
• eRM – e Reuse Methodology
• UVM – Universal Verification Methodology
• LRM – Language Reference Manual
• DM – Developer Manual
• VE – Verification Environment
• DUT – Design Under Test
• eVC – e Verification Component
• UG – User’s Guide
• CDV – Coverage Driven Verification
• MDV – Metric Driven Verification
• APB – Advanced Peripheral Bus
• AMBA – Advanced Microcontroller Bus Architecture
• VP – Verification Plan

Introduction

Context

The semiconductor industry has experienced a dramatic growth in the complexity of
digital integrated circuit designs: increasing integration density and die size have made it possible to
design chips with hundreds of millions of transistors. During its development, a digital design goes
through multiple transformations from the original set of specifications to the final product. Each of
these transformations corresponds to a different description of the system or module, which is
incrementally more detailed and which has its own specific semantics.

Fig. 0.1: Digital design workflow [Bertacco, 2003]

Due to the importance of design correctness, a significant fraction of engineering
development time and resources is devoted to it [Bertacco, 2003]. In this fast-evolving landscape,
digital ICs must be correct before fabrication because of the high cost of production [Luong, 2012];
ensuring functional correctness is also crucial because many applications, for example in
transportation or medical systems, are ones where a design flaw can lead to loss of life.
Functional verification is the task of demonstrating that the intent of a design is preserved in its
implementation. This is a complex task for most large system design projects and is widely
acknowledged as a major bottleneck in the design methodology, as up to 70% of the design
development time and resources are spent on FV. It is part of the more encompassing design
verification, which, besides functional correctness, considers non-functional aspects like timing,
layout and power.

The verification engineer faces four main challenges: the volume of possible testcases that
exist even in simple designs, detecting incorrect behavior when it occurs, the lack of a comprehensive
functional verification metric, and the absence of a reference golden model. To functionally verify the
correct behavior of a chip, it is necessary to check that each possible current state, in combination
with each possible input, results in a correct next state.
Meeting these challenges requires advanced technologies and methodologies that ensure the
highest design quality. Functional verification methodologies allow engineers to find bugs
quickly and easily. The chosen verification strategy significantly impacts the quality of even the most
complex designs and enables first-pass silicon success.
This paper describes an approach to module level functional verification using the Universal
Verification Methodology to elaborate a verification environment for a complex DUT with an APB
interface.
The goal is to perform a functional verification process using a metric-driven methodology;
to describe all the steps needed to build a UVM-compliant verification environment using
Specman e, the software tools used, and the UVM component hierarchy; to show how to plan and
build constrained-random test sequences that achieve the proposed metrics; and to apply a method
for determining the CPU performance consumption of the verification code.

Chapter 1 – Functional Verification

1.1 Hardware Verification Languages

Hardware description languages, logic synthesis, and automated place-and-route technology have
made it possible to build very large, complex integrated circuits. In order to validate these
designs, verification technology had to keep pace. Several EDA companies have produced languages
designed for writing testbenches and checking simulation coverage. These languages are called
hardware verification languages.
A hardware verification language is a programming language used to verify the designs
of electronic circuits written in an HDL. It includes features of a high-level programming
language like C++ or Java, as well as features for easy bit-level manipulation similar to those found
in HDLs. Many hardware verification languages provide constrained-random stimulus generation
and functional coverage constructs to assist with complex hardware verification.
SystemVerilog, OpenVera and e are the most commonly used HVLs. SystemVerilog attempts to
combine HDL and HVL constructs into a single standard.
The sections that follow describe languages that are currently public. Most of these
languages started as proprietary in-house or commercial products.

1.1.1 OpenVera

OpenVera was created mainly for writing testbenches for Verilog simulations and it was
released in 1995 as Vera by Systems Science [Edwards, 2004].

Fig. 1.1: A simple Vera Program [OpenVera LRM, 2004]

OpenVera is a concurrent, imperative language designed for writing testbenches. It executes in
parallel with a Verilog or VHDL simulator and can both provide stimulus to the simulator and
observe the results. It also provides facilities to drive biased random stimuli to the DUT and to check
signal values during the simulation in order to validate the collected coverage. Figure 1.1 illustrates
a mix of imperative and object-oriented styles and the ability to generate random stimuli.

1.1.2 SystemVerilog

To produce SystemVerilog, many aspects of the Vera, Sugar and ForSpec HVLs were
merged with Verilog, along with higher-level programming constructs from C and C++.
SystemVerilog adds enumerated types, record types (structs), typedefs, type casting, a
variety of operators, control-flow statements such as break and continue, as well as object-oriented
programming constructs such as classes, inheritance, and dynamic object creation and deletion. At
the very highest level, it also adds strings, associative arrays, concurrent process control (e.g.,
fork/join), semaphores, and mailboxes.
Figure 1.2 illustrates the most interesting verification-related features of SystemVerilog,
such as constrained-random stimulus generation. Other aspects related to FV are user-defined
coverage and checking, and temporal assertions.

Fig. 1.2: Illustration of FV SystemVerilog aspects [Edwards, 2004]

1.1.3 e Language

Developed by Verisity, the e HVL is part of the Specman tool for efficiently writing
testbenches [Edwards, 2004]. Like OpenVera, it is a concurrent, imperative language with
constructs for checking functional coverage, generating constrained-random stimuli and
mechanisms for checking temporal properties.
e code can be organized in multiple files. File names must be legal e names. The default file
extension is ".e". e code files are sometimes referred to as modules. Each module contains at least
one code segment and can also contain comments. A code segment is enclosed with a begin-code
marker <' and an end-code marker '>. Both the begin-code and the end-code markers must be
placed at the beginning of a line (leftmost), with no other text on that same line (no code and no
comments).
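
As a minimal illustration of this structure, the hypothetical module below defines a data item with generation constraints and extends the predefined sys struct; the struct and field names are invented for this sketch and are not taken from the environment described later in the thesis:

<'
// A data item with generation constraints.
struct simple_packet {
    length  : uint;
    payload : list of byte;
    keep soft length in [1..16];       // constrain the generated length
    keep payload.size() == length;     // tie the payload size to the length field
};

// Extend the predefined top-level struct so a few packets are generated.
extend sys {
    packets : list of simple_packet;
    keep packets.size() == 4;
};
'>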


Fig. 1.3: e code fragment [e LRM, 2003]

The e language is used to demonstrate the subjects proposed by this thesis because it is
tailored to implementing highly flexible and reusable verification testbenches, leading to a
significant productivity improvement. It is one of the most mature verification languages, used by
specialists for advanced verification. It is, therefore, the most mature in its coupling to overall
verification methodology, technology, and verification IP (VIP), and it can scale to the most
complex block/unit, chip, system, and project levels.

1.2 Verification Methodology

The objective of functional verification is to ensure a bug-free circuit with minimum delay
and with high flexibility. The verification quality of a chip is impacted by the chosen verification
strategy.
In order to become more productive and efficient in the functional verification area, a number of
verification guidelines are used. Figure 1.4 presents a brief history of the verification
methodologies used over time by the three largest verification vendors.
Fig. 1.4 : History of FV methodologies

In the following sections, the main advantages and features of eRM and UVM will be presented
in more detail.

1.2.1 e Reuse Methodology

The e Reuse Methodology ensures reusability by delivering the best known methods for
designing, coding and packaging e code as reusable components. eRM is about maximizing the
reusability of verification code written in the e language [eRM DM, 2003].
The ultimate form of reusable verification environment is an e Verification Component. An
eVC is a configurable, ready-to-use VE specific to a protocol or architecture (such as APB,
Ethernet, PCI or USB). A verification component can be applied to the DUT in order to verify
the implementation of the protocol or architecture, as it consists of a complete set of elements for
checking and collecting coverage information for that specific protocol or architecture, as shown
in Figure 1.5. In this representation of the XSerial eVC there are two types of agent: a receive agent
which collects data from the DUT transmit port, and a transmit agent which can send data to the
DUT’s receive port.

Fig. 1.5: XSerial eVC – Dual-agent implementation [eRM DM, 2003]

Figure 1.5 is a typical example of the architecture of a bus protocol eVC. For each port of the
interface, the eVC implements an agent. These agents can emulate the behavior of a legal device,
and they have standard construction and functionality. Each agent is commonly comprised of a
config group of fields used for setting the agent’s attributes and behavior, and a group of unit
members which represent the hardware signals that the agent must access while interacting with
the DUT. The sequence driver is the coordinator for running user-defined test scenarios, while the
bus functional model is a unit that interacts with the device under test. The monitor is a passive
unit which samples the DUT signals and provides an interpretation of the monitored activity to the
other components.
The e Reuse Methodology was widely accepted by verification engineers and formed the
basis of the URM (Universal Reuse Methodology) developed by Cadence Design Systems for the
SystemVerilog language. URM, together with contributions from Mentor Graphics' AVM, later
went on to become the OVM (Open Verification Methodology) and, finally, the UVM (Universal
Verification Methodology).

1.2.2 Universal Verification Methodology

The Universal Verification Methodology (UVM) is a standard developed by
Accellera, based on the Open Verification Methodology (OVM) 2.1.1 release, for the express
purpose of fostering universal verification IP (VIP) interoperability. UVM increases
productivity by eliminating the expensive interfacing that typically slows VIP reuse.
In this paper, the purpose is to present the advantages of using Accellera’s Universal
Verification Methodology and to provide some examples of how using these guidelines can
improve the verification quality. UVM provides a framework for Coverage Driven Verification
[Farkash, 2015 ] which combines automated checkers and coverage metrics in order to eliminate the
effort spent implementing hundreds of directed tests and to ensure that all aspects of the digital
circuit are verified.
The benefits of using UVM are diverse and important. This verification methodology
supports module-to-system reuse, runs on any simulator, is based on a base class library, provides
automation capabilities, enables multi-language VIP and is integrated with the coverage-driven
verification workflow.
The environment (Figure 1.6) is the top level structure of the verification component and it
is comprised of one or more agents and other components. Also, the configuration properties of the
VE enable reuse. The typical structure of a verification component is comprised of a complete set of
checking, stimulating and coverage collecting elements and follows a consistent architecture as it
will be described in the next sections.

1.2.2.1 Transaction (Data Item)

Data items represent the input to the DUT. By randomizing data item fields using
constraints, a large number of tests can be created and the coverage is maximized. Examples
include networking packets, bus transactions, and instructions. The fields and attributes of a data
item are derived from the data item’s specification. For example, the Ethernet protocol specification
defines valid values and attributes for an Ethernet data packet. [UVM UG, 2011]
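
A hedged sketch of such a data item in e is shown below; the type and field names are illustrative only and do not come from the thesis environment (the actual APB eVC item is described in Chapter 3 and Annex 2):

<'
// Hypothetical bus transfer item used as a sequence item.
type bus_direction_t : [READ, WRITE];

struct bus_transfer like any_sequence_item {
    direction : bus_direction_t;
    addr      : uint(bits: 2);
    data      : byte;
    keep soft direction == WRITE;    // default bias, can be overridden by tests
};
'>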

1.2.2.2 Bus Functional Model (Driver)

The BFM repeatedly receives a data item, drives it to the DUT and emulates the driving
logic. For example, a driver controls the read/write signal, address bus, and data bus for a number of
clock cycles to perform a write transfer.
No generation is done in the BFM. The BFM receives a data item and performs all
operations required to send the data item to the DUT according to the protocol rules. The item
should contain all necessary information to complete the transaction. To perform its task correctly,
the BFM should know the current state of the DUT. The BFM can sample the DUT signals directly
or use information extracted by the monitor.
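
A minimal sketch of this pull-mode loop in e is given below, assuming the hypothetical bus_transfer item from the previous section and the standard e sequence mechanism (get_next_item() and the item_done event of the generated sequence driver); the unit and event names are invented for illustration:

<'
// Create the sequence infrastructure for bus_transfer items; this statement
// generates the bus_sequence struct and the bus_sequence_driver unit.
sequence bus_sequence using item = bus_transfer, created_driver = bus_sequence_driver;

// Hypothetical BFM skeleton: pull items from the sequence driver and drive
// them onto the DUT interface, one at a time.
unit bus_bfm {
    !driver : bus_sequence_driver;      // pointer set by the enclosing agent
    event clk;                          // expected to be tied to the DUT clock

    drive_transfers() @clk is {
        while TRUE {
            var t : bus_transfer = driver.get_next_item();
            drive_one_transfer(t);      // protocol-specific signal driving
            emit driver.item_done;      // tell the driver the item was consumed
        };
    };

    drive_one_transfer(t : bus_transfer) @clk is empty;   // filled in per protocol
};
'>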

Fig. 1.6: Verification Environment Example [UVM UG, 2011]

1.2.2.3 Driver (Sequencer)

The driver controls the items that are provided to the BFM for execution and behaves as an
advanced stimulus generator.
The sequence driver is instantiated in an active agent and it is the test writer’s interface to
control the verification process. All test-specific control, including reset and error injection, should
be available through test sequences. At any given time, the monitor and BFM provide the current
DUT state to the sequence driver for generating sequence items.

1.2.2.4 Monitor

The monitor is a passive entity which collects coverage and performs checking. Even though
reusable drivers and sequencers drive bus traffic, they are not used for coverage and checking;
monitors are used instead.
A monitor extracts signal information from a bus and translates the information into a
transaction that can be made available to other components and to the test writer. The monitor
detects the availability of information (such as a transaction), structures the data, and emits an event
to notify other components of the availability of the transaction (Figure 1.7). Checking typically
consists of protocol and data checkers that verify that the DUT output meets the protocol
specification. Coverage is also collected in the monitor. Coverage and checkers are implemented in
subtypes of the monitor, such as has_checkers and has_coverage. This minimizes the performance
overhead when these features are not used. The has_checkers and has_coverage flags should be
propagated to the monitor so that the monitor code can be optimized.
A bus monitor handles all the signals and transactions on a bus, while an agent monitor
handles only the signals and transactions relevant to a specific agent. Typically, drivers and monitors
are built as separate entities (even though they may use the same signals) so they can work
independently of each other. [UVM UG, 2011]

Fig. 1.7: UVM Monitor example
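
The following hedged fragment sketches how a monitor can confine its coverage and checking to when-subtypes so that they cost nothing when disabled; the unit, event and field names are hypothetical and reuse the bus_transfer item sketched earlier:

<'
// Hypothetical monitor skeleton with coverage and checking in when-subtypes.
unit bus_monitor {
    has_checkers : bool;
    has_coverage : bool;

    event transfer_done;             // emitted when a complete transfer is observed
    !cur_transfer : bus_transfer;    // the transfer reconstructed from the signals

    when TRUE'has_coverage bus_monitor {
        cover transfer_done is {
            item direction : bus_direction_t = cur_transfer.direction;
            item addr      : uint(bits: 2)   = cur_transfer.addr;
            cross direction, addr;
        };
    };

    when TRUE'has_checkers bus_monitor {
        on transfer_done {
            check that cur_transfer != NULL else
                dut_error("transfer_done emitted without a collected item");
        };
    };
};
'>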

1.2.2.5 Agent

BFMs, drivers, and monitors can be reused independently, but this requires the environment
integrator to learn the names, roles, configuration, and hookup of each of these entities. To reduce
the amount of work and knowledge required by the test writer, UVM recommends that environment
developers create a more abstract container called an agent. Agents can emulate and verify DUT
devices. They encapsulate a driver, sequencer, and monitor. Verification components can contain
more than one agent. Some agents (for example, master or transmit agents) initiate transactions to
the DUT, while other agents (slave or receive agents) react to transaction requests. Agents should
be configurable so that they can be either active or passive. Active agents emulate devices and drive
transactions according to test directives. Passive agents only monitor DUT activity. [UVM UG,
2011]

1.2.2.6 UVM Verification Environment

The environment (env) is the top-level component of the verification component. It contains
one or more agents, as well as other components such as a bus monitor. The env contains
configuration properties that enable you to customize the topology and behavior and make it
reusable. For example, active agents can be changed into passive agents when the verification
environment is reused in system verification. Figure 1.6 illustrates the structure of a reusable
verification environment. A verification component may contain an environment-level monitor.
This bus-level monitor performs checking and coverage for activities that are not necessarily related
to a single agent. An agent’s monitors can leverage data and events collected by the global monitor.
The environment class (uvm_env) is architected to provide a flexible, reusable, and
extendable verification component. The main function of the environment class is to model
behavior by generating constrained-random traffic, monitoring DUT responses, checking the
validity of the protocol activity, and collecting coverage.

1.3 Verification Workflow

As design complexity increases, traditional verification methodologies become insufficient for
verifying hardware designs. Directed tests were used for a long time; later, the Coverage
Driven Verification (CDV) methodology appeared. In the directed-test approach, the verification
engineer states exactly what stimulus should be applied to the DUT. This works only for small
designs with very limited features.
As designs became more complex, verification engineers started looking for ways of
checking the effectiveness of the verification, in other words the features covered during
verification. This is the idea behind CDV: cover groups are set up for the features to be verified
and are used for coverage closure. In CDV, stimulus generation is random (using the
constrained-random generation method), so this approach is much more effective than directed
tests. CDV improves productivity and quality, but planning and estimating verification completion
is difficult: for complex designs there will be thousands of cover groups, and mapping them to the
specification is hard.
UVM provides the best framework to achieve coverage-driven verification. The purpose of
CDV is to eliminate the effort and time spent creating hundreds of directed tests, to ensure thorough
verification using up-front goal setting, and to receive early error notifications by deploying run-time
checking and error analysis that simplify debugging.
Significant efficiency and visibility into the verification process can be achieved through proper
planning. Creating an executable plan with concrete metrics enables the user to accurately measure
progress and thoroughness throughout the design and verification project. With this method,
sources of coverage can be planned, observed, ranked, and reported at the feature level. Using an
abstracted, feature-based approach (and not relying on implementation details) gives the
verification engineer a more readable, scalable, and reusable verification plan.

1.3.1 Metric Driven Verification (MDV)

Metric Driven Verification (MDV) is a proven methodology for verifying hardware designs,
introduced by Cadence. It is based on the CDV approach but overcomes its pitfalls. In the MDV
flow, features are stated in an executable verification plan. This is the first phase of verification;
later, the plan is correlated with the actual cover groups. MDV uses constrained-random stimulus
generation, which achieves better coverage than traditional simulation methods. Figure 1.8 shows
the four phases of MDV.
Metric driven verification ensures verification project predictability, productivity, and
quality. It uses specifications to create verification plans capturing verification intent, performs
metrics analysis and reporting, measures progress, and automates the verification tasks shown in
Figure 1.9. MDV enables coherent verification by driving convergence across digital, analog, and
low-power domains among IP and SoC teams to ensure high quality at every milestone, from
systems to silicon.

Fig. 1.8: MDV Phases [Balakrishnan, 2012]

The next step after the planning phase is to construct a verification environment. This is done
by reusing existing verification IP, reusing available UVM libraries and developing from scratch
the parts of the environment that are missing. The testbench and some of the test cases will be ready
by this time.
Once the verification environment is ready, test cases can be executed and the results checked.
The vManager tool from Cadence can launch the regression, capture the results and correlate them
with the verification plan, provided the vplan feature information is specified when defining the
coverage in the code. Incisive Metric Center is the default unified coverage browser, which clearly
shows which parts of the design have been exercised.

Once the coverage information is available, it should be analysed against the vplan. The
Cadence Incisive tool package has built-in features for mapping vplan features to the coverage
results. It also provides coverage-based ranking to see which tests are most effective and which
tests are redundant. With better verification planning and management, correlated with coverage,
the MDV flow significantly improves the productivity of the verification effort.

Fig. 1.9: MDV Workflow [Rosenberg, 2003]

Coverage is the key in the verification of today's complex designs. Knowing what has been
verified, how it relates to what needs to be verified, and where the holes are, adds precision,
efficiency and predictability to the verification process. Functional, code and assertion coverage,
being complementary in nature, are all needed to provide a reliable and thorough coverage metric.
When combined, they facilitate a coverage-driven verification methodology that finds more bugs,
faster. This in turn saves significant human and machine resources, shortens time-to-market and
eventually contributes to a mature and timely tape-out decision and a high-quality product.
Fig. 1.10: Verification metrics

MDV allows an engineer to capture all of the key metrics in the verification process, from
the coverage notions described previously for the device under test (DUT metrics) to the project
metrics that must be visible and managed (Figure 1.10). The list of available metrics is endless,
yet the MDV flow provides an executable verification plan to organize, manage, and view this
overwhelming amount of metrics in a form that is human readable and understandable.

Chapter 2 – 8254 Programmable Interval Timer

The Intel 8254 is a counter/timer device designed to solve the common timing control
problems in microcomputer system design. It provides three independent 16-bit counters, each
capable of handling clock inputs up to 10 MHz. All modes are software programmable. Figure 2.1
presents the block diagram of the timer device.

Fig. 2.1: Timer device block diagram [8254 PS, 1993]

2.1 DUT Description

The 8254 solves one of the most common problems in any microcomputer system, the
generation of accurate time delays under software control. Instead of setting up timing loops in
software, the programmer configures the 8254 to match the requirements and programs one of the
counters for the desired delay. After the desired delay, the 8254 will interrupt the CPU. Software
overhead is minimal and variable-length delays can easily be accommodated. Some of the other
counter/timer functions common to microcomputers which can be implemented with the 8254 are:
real-time clock, event counter, digital one-shot, programmable rate generator, square wave
generator, binary rate multiplier, complex waveform generator or complex motor controller.

The pin description of the DUT is given in Table 2.1. The AMBA APB bus is used for
interfacing this version of the timer core; it is the peripheral bus used by the ARM family of
processors.

Pin Name Width (bits) Direction Function
PCLK 1 I Primary clock used in the core
PRESETn 1 I Asynchronous reset. Active low
PSEL 1 I Peripheral select
PENABLE 1 I Peripheral transfer enable
PWRITE 1 I Peripheral write/read_n control
PADDR 2 I Peripheral address bus
PWDATA 8 I Peripheral data write bus
PRDATA 8 O Peripheral data read bus
CLK0 1 I Clock input for counter 0
GATE0 1 I Gate input for counter 0
OUT0 1 O Output of counter 0
CLK1 1 I Clock input for counter 1
GATE1 1 I Gate input for counter 1
OUT1 1 O Output of counter 1
CLK2 1 I Clock input for counter 2
GATE2 1 I Gate input for counter 2
OUT2 1 O Output of counter 2
Table 2.1: 8254 Programmable Interval Timer interface description

The 8254 Timer contains three identical presettable synchronous down counters. The
operation of each counter is based upon how it is programmed. Prior to being programmed, the
operation is undefined.
The Control Word Register (see Table 2.2) is selected by the read/write logic when the
address is 2'b11. If the CPU then does a write operation to the 8254, the data is stored in the
Control Word Register and is interpreted as a control word used to define the operation of the
counters. The Control Word Register can only be written to; status information is available with
the Read-Back Command.

Address R/W Function
0 R/W Read counter 0
1 R/W Read counter 1
2 R/W Read counter 2
3 W Control word write
Table 2.2: 8254 Programmable Interval Timer register description

Counters are programmed by writing a control word and then an initial count. The control
words are written into the Control Word Register, which is selected when the address is 2'b11. The
control word itself specifies which counter is being programmed. The functional description of the
Control Word Register can be found in Figure 2.2. By contrast, initial counts are written into the
counters, not the Control Word Register. The address input is used to select the counter to be
written into. The format of the initial count is determined by the control word used.
The programming procedure for the 8254 is very flexible. Only two conventions need to be
remembered: for each counter, the control word must be written before the initial count is written,
and the initial count must follow the count format specified in the control word (least significant
byte only, most significant byte only, or least significant byte and then most significant byte). Since
the Control Word Register and the three counters have separate addresses (selected by the address
input), and each control word specifies the counter it applies to (SC0, SC1 bits), no special
instruction sequence is required. Any programming sequence that follows the conventions in Figure
2.2 is accepted. A new initial count may be written to a counter at any time without affecting the
counter's programmed mode in any way. The new count must follow the programmed count format.
Fig. 2.2: Control Word Register format [8254 PS, 1993]
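
As a concrete illustration of these conventions, the hedged e sequence below (reusing the generic bus_transfer item and bus_sequence sketched in Chapter 1, not the actual eVC sequence library) programs Counter 0 for Mode 3, binary counting, LSB then MSB, with an initial count of 0x1234; the control word value 0x36 encodes SC=00, RW=11, M=011, BCD=0 according to Figure 2.2:

<'
extend bus_sequence_kind : [PROGRAM_COUNTER0_MODE3];

extend PROGRAM_COUNTER0_MODE3 bus_sequence {
    body() @driver.clock is only {
        // Control word 0x36 written to the Control Word Register (address 2'b11).
        do bus_transfer keeping { .direction == WRITE; .addr == 0b11; .data == 0x36; };
        // Initial count 0x1234, least significant byte first, to Counter 0 (address 2'b00).
        do bus_transfer keeping { .direction == WRITE; .addr == 0b00; .data == 0x34; };
        do bus_transfer keeping { .direction == WRITE; .addr == 0b00; .data == 0x12; };
    };
};
'>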

2.2 APB Interface Protocol

The APB is part of the AMBA 3 protocol family. It provides a low-cost interface that is
optimized for minimal power consumption and reduced interface complexity. The APB interfaces
to low-bandwidth peripherals that do not require the high performance of a pipelined bus interface;
the APB protocol is unpipelined.
All signal transitions are related only to the rising edge of the clock, so that APB peripherals
can be integrated easily into any design flow. Every transfer takes at least two cycles.

2.2.1 Write Transfers

The write transfer starts with the address, write data, write signal and select signal all
changing after the rising edge of the clock, as shown in Figure 2.3. The first clock cycle of the
transfer is called the Setup phase. After the following clock edge the enable signal, PENABLE, is
asserted, and this indicates that the Access phase is taking place. The address, data and control
signals all remain valid throughout the Access phase. The transfer completes at the end of this
cycle. The enable signal, PENABLE, is deasserted at the end of the transfer. The select signal,
PSELx, also goes LOW unless the transfer is to be followed immediately by another transfer to the
same peripheral.

Fig. 2.3: APB AMBA 3 write transfer [AMBA APB, 2004]
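
A minimal sketch of how a master BFM might drive these two phases in e is shown below; the unit, port and event names are invented for illustration and are not the APB eVC implementation, and the ports are assumed to be bound to the DUT pins elsewhere:

<'
unit apb_master_bfm_sketch {
    psel    : out simple_port of bit is instance;
    penable : out simple_port of bit is instance;
    pwrite  : out simple_port of bit is instance;
    paddr   : out simple_port of uint(bits: 2) is instance;
    pwdata  : out simple_port of byte is instance;

    event pclk_rise;   // expected to be defined on the rising edge of PCLK

    write(addr : uint(bits: 2), data : byte) @pclk_rise is {
        // Setup phase: address, data, direction and select become valid.
        psel$ = 1; pwrite$ = 1; paddr$ = addr; pwdata$ = data; penable$ = 0;
        wait cycle;
        // Access phase: PENABLE is asserted for exactly one clock cycle.
        penable$ = 1;
        wait cycle;
        // End of transfer: deassert select and enable.
        psel$ = 0; penable$ = 0;
    };
};
'>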

2.2.2 Read Transfers

Figure 2.4 shows a read transfer. The timing of the address, write, select, and enable signals
is as described previously for the write transfer. The slave must provide the data before the end of
the read transfer.

2.2.3 Transfers with Error

PSLVERR is used to indicate an error condition on an APB transfer. Error conditions can
occur on both read and write transactions. PSLVERR is only considered valid during the last cycle
of an APB transfer, when PSEL, PENABLE, and PREADY are all HIGH.
It is recommended that PSLVERR be driven LOW when it is not being sampled, that is, when any
of PSEL, PENABLE, or PREADY is LOW. Transactions that receive an error might or might not
have changed the state of the peripheral. This is peripheral-specific and either is acceptable.

When a write transaction receives an error this does not mean that the register within the
peripheral has not been updated. Read transactions that receive an error can return invalid data.
There is no requirement for the peripheral to drive the data bus to all 0s for a read error.
APB peripherals are not required to support the PSLVERR pin. This is true for both existing
and new APB peripheral designs. Where a peripheral does not include this pin, the corresponding
input to the APB is tied LOW.

Fig. 2.4: APB AMBA 3 read transfer [AMBA APB, 2004]

Chapter 3 – DUT Verification Flow

The verification quality of a chip is impacted by the chosen verification strategy. The impact
of the chosen strategy on the verification step duration is shown in Figure 3.1.
Fig. 3.1: Verification strategy impact on time saving [Inno-Logic, 2005]

The objective is to ensure that the block behaves as described in the specification, and the
expectation is to achieve 100% functional coverage, 100% RTL coverage and 0 bugs in the circuit.
This paper proposes a golden verification environment used to explain and highlight the
advantages of the Universal Verification Methodology on a complex DUT. The example can be
applied, irrespective of the complexity of the module to be verified, to create verification
components for checking, coverage collection and stimulus generation, together with test scenarios
sufficient for reaching the agreed metrics, in order to ensure a bug-free circuit with minimum delay
and with high flexibility.
Figure 3.2 presents the workflow which was followed in order to build this verification
environment. In the first stage, the verification plan defines all the DUT features and configurations
which have to be verified, and it also defines how the verification environment has to be
implemented. Based on the VP, the coverage which has to be collected and the desired test scenarios
are implemented.
During the second step, the verification environment is elaborated. The VE contains the
necessary configurations which have to be made in order to run the test scenarios for more than one
version of the DUT. Also, the verification environment is the top level of the verification
architecture and instantiates all the components used, such as agents and monitors. The goal of
functional verification is to achieve a functional coverage of 100%, and this gives the measure of
verification stage completion, so at the third step the metrics which need to be collected are
implemented. Further on, the test scenarios are elaborated based on the VP defined at the first stage.
At the last step, the verification environment quality is analyzed based on the collected
metrics, and the VE can be refined in order to achieve higher efficiency in terms of CPU resources
used and a better defect detection rate, if possible.


Fig. 3.2: Functional Verification workflow

3.1 Verification Plan Elaboration

The current methodology requires a verification plan (vplan) for module-level validation.
The APB eVC comes with a VP which covers the features required for peripheral interface
validation. This must be instantiated in a vplan which lists the coverage metrics for the DUT.
Sometimes the DUT can have different features that are not required for all products in
which it is used. In order to avoid configuration issues, the created verification plan, which contains
the functional aspects of the module, uses parameters for each feature.
In the end, a second vplan has to be created; it has two instances and is basically a read-only
document. The first instance is of the DUT verification plan, for which only the features that are
actually used by the current product are selected. The second instance is of the APB eVC VP, for
which the same thing has to be done: only the parameters required by the current product have to be
configured.
The DUT verification plan contains the planned checkers for the three output signals (one
for each counter) on the rising and falling edges. Coverage items are defined for counter underflow,
output rise and fall, end of test, mode change, reset, registers and control word writes. The VP
defines 17 testcases planned in order to collect the desired metrics. Figure 3.3 represents an extract
of the DUT vplan.

Fig. 3.3: DUT verification plan extract

3.2 Verification Environment Implementation

Figure 3.5 presents the VE architecture proposed for this verification approach; all the
components instantiated in it are presented in more detail in the following sections.
The timer environment is comprised of a signal map where all the DUT interface signals are
linked to Specman ports. There are also two other instances: one is the register map, where all the
registers are defined with the constraints according to the Product Architecture Specification, and
the other is a configuration structure where parameters for the VE can be set.
The VE has two agents: one is used for driving the APB interface traffic and the other is
used to change the clock frequencies and gate signals independently for each of the three counters.
The APB eVC will be presented in more detail in the following sections.
The checking is done by using a reference model. The DUT contains three identical down
counters, therefore a single golden model has been elaborated which models all the functions of the
programmable interval timer; it has been instantiated three times, once for each of the counters.
The reference structure is presented in Figure 3.4.

Figure 3.4: Counter golden model

Fig. 3.5: Verification Environment architecture

3.2.1 APB eVC

The APB e Verification Component implements the APB AMBA 3 protocol presented in
Chapter 2; its structure is presented in Figure 3.6. The eVC can generate master data items and
emulate a master driving them according to the protocol. It can also generate APB slave data items
as a response to data coming from the master and emulate a device driving them according to the
AMBA 3 protocol.
The user can configure the data width, address size and the number of slaves using a
configuration unit. Also, the checking and coverage collection capabilities can be activated or
deactivated using the switches has_checkers and has_coverage included in the configuration unit.
The eVC provides predefined checks for verifying that the DUT adheres to the protocol
rules in the AMBA APB specification. The checks are part of the Bus Monitor. The protocol checks
include address stability, data stability during transfer, transfer direction, response delay, reset
behavior, and the behavior of the PSEL and PENABLE protocol signals.
The eVC also provides predefined coverage groups related to the traffic on the bus. The
coverage groups are part of the Bus Monitor. Extracts of the coverage and checker extensions of the
monitor can be found in Annex 1.

Fig. 3.6: APB Interface eVC architecture

Read and write transfers are described by the transfer item, which is used by the BFM and the
monitor. The user can constrain the type of the item (master or slave), the number of the targeted
slave, the transfer direction, the data, the slave error value and the PREADY delay of the slave. The
transfer item definition and fields are presented in Annex 2.

Fig. 3.7: APB transfer

3.2.2 vr_ad Package

The register and memory package, vr_ad, models the behavior of registers and memory. It
contains some built -in mechanisms with predefined types for efficient modelling. The package
addresses three independent aspects: address management, register modelling, and memory
modelling.
In this section, the following elements are introduced to the basic UVM elements: register
file, address map (Figure 3.8), register sequence driver. The register file represents DUT agent
registers and contains a list of consecutive registers. The address map represents the address space
and it maps the register files and memory blocks (if any) in the address space. In this s imple
environment with one single register file, the address map may seem redundant. Address maps gain
importance in VEs with more than one register file. The RSD is a dedicated sequence driver for
register operations and its functionality resembles that o f a virtual driver. The register file
declaration for the programmable interval timer is presented in Annex 3.

Fig. 3.8: Address map

The data item that represents a register access is vr_ad_operation. The vr_ad_operation
struct carries specific information about the actual operation to be performed on the register,
such as the direction of the operation, backdoor access and so on. The same data item serves both
register and memory operations, implemented under different subtypes; the register-specific
attributes are defined under the REG subtype. As the syntax tends to get complicated when many
constraints are needed or when indirect accessing is required, it is recommended to use the macros
write_reg and read_reg, which provide a more efficient implementation.

3.2.3 Verification Environment

As presented in Figure 3.5, the programmable interval timer VE has another agent
instantiated alongside the APB agent, to deal with the traffic on the second interface. This agent is
named the C_OUT agent and is described in more detail in the following sections.

3.2.3.1 C_OUT Agent

The C_OUT agent has been implemented because of the need to drive signals present on the
interface of the DUT other than the ones included in the APB interface. This gives the VE
flexibility and modularity, an important feature in the context of the methodologies used.
This agent is comprised of a BFM, a driver, a bus monitor and a sequence library which is to
be used in the creation of testcases. The monitor samples the signals of this interface (clock
generator controls, the gate signals of the three counters and the three output signals) and sends this
information to the timer monitor, to be used for checking, collecting coverage and modelling the
functionality of the entire DUT.
The item which is generated by the driver contains information on the clock frequency, duty
cycle, offset and gate signal, which can be constrained by the user to take any legal value. The item
structure is presented in Annex 4. The number of the counter is selected by constraining the two-bit
field named cnt_flag.
The sequence library is comprised of three predefined sequences: a clock initialization
sequence, and gate enable and gate disable sequences. All three sequences use parameters which
are constrained from a higher level (from the test scenario or a higher-level sequence library) and
can be configured in order to exercise all the proposed checkers and implement the testcases
described in the VP. An example of how these signals are driven during the simulation is provided
in Figure 3.9.

Fig. 3.9: C_OUT agent interface driving
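
For illustration, a hedged sketch of such a clock/gate control item is given below; the field names are guesses based on the description above, and the actual declaration is the one in Annex 4:

<'
// Illustrative clock/gate control item for one counter.
struct cout_item_sketch like any_sequence_item {
    cnt_flag   : uint(bits: 2);    // which of the three counters is targeted
    frequency  : uint;             // clock frequency for the selected counter
    duty_cycle : uint;             // duty cycle of the generated clock, in percent
    offset     : uint;             // phase offset of the generated clock
    gate       : bit;              // value to drive on the GATE input
    keep cnt_flag in [0..2];       // only three counters exist
    keep duty_cycle in [1..99];
};
'>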

3.2.3.2 Counter Golden Model

There are three identical counter golden models instantiated in the reference. This decision
was taken because each counter can operate independently of the other two.
For more modularity and module-to-system reusability, the golden model is connected to the
monitor through method and event ports. Event ports are used to transfer events between two e units
or between an e unit and an external object.
The golden model functionalities include: resetting the model fields when a system reset is
issued, updating the status registers according to the written control words and the operating status
of the counters, updating the initial values of the counters using the RWx bits from the control word,
implementing the read-back mechanism in order to store important information on the timer module
status, and implementing a binary-to-BCD conversion method and a BCD counter model. In Annex 6
the counter functionality implementation is presented. For the purpose of checking and coverage
collection, each of the six functioning modes is modeled.
Mode 0 (interrupt on terminal count) is typically used for event counting. After the control
word is written, OUT is initially low, and will remain low until the counter reaches zero. OUT then
goes high and remains high until a new count or a new Mode 0 control word is written into the
counter. After the control word and initial count are written to a counter, the initial count will be
loaded on the next clock pulse. This clock pulse does not decrement the count, so for an initial
count of N, OUT does not go high until N + 1 clock pulses after the initial count is written.

Fig. 3.10: Mode 0 simulation waveform
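
The following hedged fragment sketches how such behavior can be modelled in the reference; the unit, field and event names are invented for illustration, and the complete implementation is the one in Annex 6:

<'
// Hypothetical Mode 0 fragment of a counter reference model.
unit counter_mode0_model_sketch {
    event clk_rise;                  // expected to be tied to the counter's CLK input

    !initial_count : uint;           // count value written by the CPU
    !count         : uint;
    !load_pending  : bool;           // set when a new initial count is written
    !gate          : bit;            // sampled GATE input
    !out_value     : bit;            // predicted OUT value

    on clk_rise {
        if load_pending {
            // The initial count is loaded on the clock pulse after it is written;
            // this pulse does not decrement the count (hence the N + 1 behavior).
            count        = initial_count;
            load_pending = FALSE;
            out_value    = 0;        // OUT stays low until terminal count
        } else if gate == 1 and count > 0 {
            count = count - 1;
            if count == 0 {
                out_value = 1;       // OUT goes high when the count reaches zero
            };
        };
    };
};
'>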

During the hardware retriggerable one-shot mode (Mode 1), OUT will be initially high. OUT
will go low on the clock pulse following a trigger, to begin the one-shot pulse, and will remain low
until the counter reaches zero. OUT will then go high and remain high until the clock pulse after the
next trigger. After writing the control word and initial count, the counter is armed. A trigger results
in loading the counter and setting OUT low on the next clock pulse, thus starting the one-shot pulse.
An initial count of N will result in a one-shot pulse N clock cycles in duration. The one-shot is
retriggerable, hence OUT will remain low for N clock pulses after any trigger. The one-shot pulse
can be repeated without rewriting the same count into the counter. The GATE signal has no effect
on OUT.
Fig. 3.11: Mode 1 simulation waveform

43
The rate generator mode (Mode 2, an example is shown in Figure 3.12) functions like a
divide-by-N counter. It is typically used to generate a real time clock interrupt. OUT will initially be
high. When the initial count has decremented to 1, OUT goes low for one clock pulse. OUT then
goes high again, the counter reloads the initial count and the process is repeated. Mode 2 is periodic:
the same sequence is repeated indefinitely. For an initial count of N, the sequence repeats every N
clock cycles. If GATE goes low during an output pulse, OUT is set high immediately. A trigger
reloads the counter with the initial count on the next clock pulse; OUT goes low N clock pulses after
the trigger. Thus the GATE input can be used to synchronize the counter. After writing a control
word and initial count, the counter will be loaded on the next clock pulse. OUT goes low N clock
pulses after the initial count is written. This allows the counter to be synchronized by software as well.

Fig. 3.12: Mode 2 simulation waveform

Mode 3 (square wave mode) is typically used for baud rate generation. Mode 3 is similar to
Mode 2 except for the duty cycle of OUT. OUT will initially be high. When half the initial count
has expired, OUT goes low for the remainder of the count. Mode 3 is periodic: this sequence is
repeated indefinitely. An initial count of N results in a square wave with a period of N clock
cycles. If GATE goes low while OUT is low, OUT is set high immediately; no clock pulse is
required. A trigger reloads the counter with the initial count on the next clock pulse. Thus the
GATE input can be used to synchronize the counter. After writing a control word and initial count,
the counter will be loaded on the next clock pulse. This allows the counter to be synchronized by
software as well.
Fig. 3.13: Mode 3 simulation waveform

44
When Mode 4 (software triggered strobe) is configured, OUT will be initially high. When
the initial count expires, OUT will go low for one clock pulse and then go high again. The counting
sequence is triggered by writing the initial count. After writing a control word and initial count, the
counter will be loaded on the next clock pulse. This clock pulse does not decrement the count, so
for an initial count of N, OUT does not strobe low until N + 1 clock pulses after the initial count is
written.
Fig. 3.14: Mode 4 simulation waveform

During Mode 5 (hardware triggered strobe, retriggerable), OUT will initially be high.
Counting is triggered by a rising edge of GATE. When the initial count has expired, OUT will go
low for one clock pulse and then go high again. After writing the control word and initial count, the
counter will not be loaded until the clock pulse after a trigger. This clock pulse does not decrement
the count, so for an initial count of N, OUT does not strobe low until N + 1 clock pulses after a
trigger. A trigger results in the counter being loaded with the initial count on the next clock pulse.
The counting sequence is retriggerable: OUT will not strobe low until N + 1 clock pulses after any
trigger. GATE has no effect on OUT.
Fig. 3.15: Mode 5 simulation waveform

3.3 Metrics Implementation

The metrics which are implemented by the timer VE monitor are specified in the
verification plan. They consist of checkers for the three output signals and fifteen coverage groups.

The implemented automated checking mechanisms verify that the three output signal values
behave as explained in the Programmable Interval Timer specification. There are two
complementary checkers for the rising edge event of each of the three output signals and another
two complementary checks for the falling edge event of the same outputs. An example of the
implemented checking mechanisms is detailed in Annex 7.
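
As an illustration of the idea (not the Annex 7 code), a hedged fragment of such a check could look as follows, assuming a monitor event emitted when the reference model predicts a rising edge and a field holding the sampled DUT output:

<'
// Hypothetical monitor fragment illustrating one of the complementary checks.
unit timer_checker_sketch {
    event predicted_out0_rise;     // emitted by the reference model
    !sampled_out0 : bit;           // OUT0 value sampled from the DUT

    on predicted_out0_rise {
        check that sampled_out0 == 1 else
            dut_error("OUT0 did not rise when the reference model expected it to");
    };
};
'>
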
Functional coverage is performed on user-defined functional coverage points specified using
cover group statements. These coverage points specify scenarios, error cases or corner cases to be
covered, and also specify an analysis to be done on different values of a variable. In the case of the
8254 Programmable Interval Timer, as can be seen in more detail in Annex 8, the coverage points
are represented by events emitted in the monitor unit. The selected events are: fall and rise of the
output signals of the three counters, control word update, reset fall, mode change, underflow
occurrence and end of test.
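
A hedged sketch of one such coverage group is shown below (the unit, event and field names are invented and differ from the Annex 8 code); it samples the programmed mode and the targeted counter every time a control word update is observed:

<'
// Hypothetical coverage fragment for control word updates.
unit timer_coverage_sketch {
    event control_word_update;     // emitted by the monitor on control word writes
    !mode    : uint(bits: 3);      // M2..M0 field of the written control word
    !counter : uint(bits: 2);      // SC1..SC0 field of the written control word

    cover control_word_update is {
        item mode;
        item counter;
        cross mode, counter;       // every mode exercised on every counter
    };
};
'>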

3.4 Test Scenarios

The VE is implemented to provide a reusable and flexible verification component. Its main
function is to generate constrained-random data items, monitor the activity of the DUT, check the
responses and collect coverage.
Many test scenarios have to be created in order to verify the features of the DUT. To do so,
a library of reusable sequences has to be created. Sequences are made up of several data items
which form the desired pattern or test scenario. This method reduces the length of the tests and
enables reuse for further, more complex test cases.
In a complex verification environment, more than one agent can be generating stimuli in
parallel. To coordinate timing and data between the different channels in a testbench, virtual
sequences are used; they have no data items of their own, but instead execute items and sequences
on the drivers of other agents. Figure 3.16 gives a more organized view of this approach.

Fig. 3.16: Virtual sequencer

Virtual sequences enable control over the activity of several agents connected to the
interfaces of the device under verification. A testcase example can be found in Annex 9. In the
presented testcase, the first operation issued configures the three counters using a configuration
sequence which has multiple fields that can be constrained or fully randomized. The next step of the
test configures and runs a selected counter in all six modes. In the last step, a counter is configured
and, after an underflow occurs, the status and counter value are read.

3.5 Result Analysis

The objective is to ensure that the block behaves as described in the specification; the
expectation is to achieve 100% functional coverage, 100% RTL coverage and 0 bugs in the circuit.
After the first four steps of the chosen verification flow, all the metrics planned to be
collected were implemented together with the testcases. In order to run all the test scenarios
multiple times to achieve 100% functional and code coverage, a tool from Cadence called Incisive
Enterprise Manager is used (Figure 3.17).

Fig. 3.17: Incisive Enterprise Manager Interface

The regression used to reach the desired metrics for this VE contains 330 tests (each of the
implemented test scenarios is started multiple times with different seeds). After the regression is
completed and all the failing tests are debugged, the VP elements have to be mapped to the metrics
implemented in the VE (Figure 3.18).

Fig. 3.18: Mapping of the VP elements

Even though the checking mechanisms are written, the verification engineer has to be sure that all
of them have triggered when they were supposed to and that none of them are failing. In order to do so,
the regression results stored by the vManager tool can be applied to the approved VP and
correlated with the present elements. After this, the results may be stored in .html format for easier
reading.
Fig. 3.19: Reference checking mechanisms analysis

After this step, the same method is applied to analyze the functional coverage collection.
Figure 3.20 shows how this analysis is done in an early stage, where the coverage collection is not
yet complete and more tests have to be run or implemented.
Fig. 3.20: Functional coverage analysis

The third and last verification metric described in this paper is code coverage. Code
coverage is a basic coverage type which is collected automatically. It tells the verification engineer
how well the HDL code has been exercised by the testbench, in other words how thoroughly the
design has been executed by the simulator using the tests in the regression. Code coverage is
measured using the IMC tool, as shown in Figure 3.21.
Fig. 3.21: Code coverage analysis

3.6 Verification Code Performance Review

Today’s complex Verification Environments sometimes require huge regressions, with
hundreds of test files, summing up to thousands of runs per regression and taking many hours to
finish.
For a complex design, the part of the Random Generation Based Verification Environment
code being stressed can be different from test to test and even from run to run. This makes it very
difficult to find out which combinations of test and seed are the most inefficient and to profile them.
Some tests can take a long time to run but also produce a high amount of stimuli, being in the end
very efficient; other tests could have a moderate duration but produce a low amount of stimuli,
being very inefficient.
This paper presents a method to reduce the duration of a complex regression just by
profiling the least efficient runs of the regression, regarding either CPU time or memory
consumption.

Fig. 3.22: Profiling tool interface

Profiling of simulations is required for analyzing the components which take the most
time during simulation. Information in the profiler report helps to identify inefficient HVL and/or
HDL coding practices. Once the most inefficient section of code is known, optimizing this code will
have the greatest effect on simulation performance.
Even though the Profiler is a great tool to improve the performance of simulations, there are some
limitations in profiling large regressions. Profiling of large regressions is not automated and each
profiling report needs to be visually analyzed. There is no automated way to point out the
performance issues.
It is difficult to choose the best tests on which to run the Profiler because the results can
differ significantly from test to test. In each test, various parts of the code are stimulated in different
ways and there is no way to know which tests will “reveal” the most important performance issues.
The goal of the proposed method is to maximize the number of important performance
issues found, while reducing the time spent to find them. The first step is to sort the tests from the
regression using the following formula:

Inefficiency = CPU Time [sec] / Simulation Time [ms]
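
For instance (illustrative numbers), a run that consumes 600 seconds of CPU time while advancing
simulation time by only 2 ms scores an inefficiency of 300, while a run with the same CPU time that
covers 60 ms of simulated activity scores only 10; the first run is therefore profiled first.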

Fig. 3.23: Tests sorted by the inefficiency factor

A possible improvement can be added if functional coverage is available. Each test’s contribution
to coverage can be considered when computing the inefficiency. A modified version of the
previously defined formula can be used:

Inefficiency = CPU Time [sec] / Simulation Time [ms] / Coverage_contribution [%]
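
With this refinement (again with illustrative numbers), a run with a raw inefficiency of 300 that
contributes only 0.5% of the functional coverage is ranked at 600, ahead of a run with the same raw
inefficiency that contributes 5% and is therefore ranked at only 60.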

While the results with classic profiling were relatively poor, with no significant impact
achieved, the results using this method, after the six performance issues found were fixed, were
better than expected. Code profiling on the presented VE resulted in an improvement of 23.4% in
terms of CPU time: the old CPU time was 1h 4m 32s and the new CPU time is 0h 49m 3s.
This method has some important advantages: it points out the least efficient tests, narrows
down the space of tests which need to be profiled, and maximizes the number of performance issues
found. For many code performance issues there are no universal rules. Some performance issues
are strictly related to the interaction between the functionality of the DUT and the Verification
Environment (they do not necessarily have anything to do with the code syntax or simulation
switches). The best results for profiling are achieved by the person who is most familiar with the
DUT and the Verification Environment code.

Significant performance improvements (up to 25%) can be achieved with a small amount of
effort from the VE developer, saving days of CPU time consumed during a large
regression. Profiling should be run periodically in order to detect low-performing
code as early as possible, because badly performing code can require a huge effort to improve
if addressed in a late phase. [Todea, 2014]

Conclusions

The benefits of using UVM are very diverse and important. This verification methodology
supports module-to-system reuse, runs on any simulator, is based on a base class library, provides
automation capabilities, enables multi-language VIP and is integrated with the coverage driven
verification workflow.
Metric driven verification ensures verification project predictability, productivity and
quality. It uses specifications to create verification plans capturing verification intent, performs
metrics analysis and reporting, measures progress, and automates verification tasks. Coverage is the key
in the verification of today's complex designs. Knowing what has been verified, how it relates to
what needs to be verified, and where the holes are, adds precision, efficiency and predictability to
the verification process. Functional, code and assertion coverage, being complementary in nature,
are all needed to provide a reliable and thorough coverage metric.
The presented verification environment is UVM compliant, and module-to-system reuse is
enabled by connecting the model to the monitor through method ports and event
ports. After analyzing the results, it can be concluded that a 100% pass rate of the tests in the
regression and 100% functional coverage were achieved.
Using the implemented VE, five bugs were found and fixed. It has been found that the
counter latch command blocks the APB bus (the pready signal stays at 0). Also, the read-back
command issued for latching only the value of the selected counter has the same effect as the
counter latch command. Another bug was signaled when the MSB-only mode for loading the initial
value is selected: the value is decremented twice when counting is enabled.
Complex Verification Environments sometimes require huge regressions, with hundreds of
test files. The method presented in this paper reduces the duration of a complex regression by up to
25% just by profiling the least efficient runs of the regression, regarding either CPU time or
memory consumption.
In the future, refinement of the profiling method is desired in order to further reduce the
time spent on profiling the code. It is also planned that the VE, now implemented in e
Language, will be reimplemented using System Verilog UVM in order to provide a golden model for
learning this HVL and for ramping up an SV verification environment in a short amount of time.

References

[Bertacco, 2003]      Valeria Bertacco, "Achieving Scalable Hardware Verification with Symbolic
                      Simulation", Ph.D. Dissertation, Stanford University, 2003

[Luong, 2012]         Anh Luong, Andrzej Forys, "Testing and Verification of Verilog-Based Digital
                      Circuit Design", University of Utah, 2012

[Edwards, 2004]       Stephen A. Edwards, "Design and Verification Languages", Columbia University,
                      New York, 2004

[Open Vera LRM, 2004] "OpenVera Language Reference Manual 1.3", Synopsys, 2004

[e LRM, 2003]         "e Language Reference Manual 4.2", Verisity, 2003

[eRM DM, 2003]        "e Reuse Methodology Developer Manual 4.3", Verisity, 2003

[Farkash, 2015]       Monica C. Farkash, Balavinayagam Samynathan, Bryan Hickerson, Michael Behm,
                      "Mining Coverage Data for Test Set Coverage Efficiency", DVCON Conference, 2015

[UVM UG, 2011]        "Universal Verification Methodology (UVM) 1.1 User's Guide", Accellera, 2011

[Balakrishnan, 2012]  Sini Balakrishnan, "A glimpse on Metric Driven Verification Methodology",
                      www.vlsi.pro, 2012, accessed on 15.06.2015

[Rosenberg, 2003]     Sharon Rosenberg, "Combined coverage methodology speeds verification",
                      http://www.eetimes.com/document.asp?doc_id=1201923, 2003, accessed on 10.05.2015

[8254 PS, 1993]       "8254 Programmable Interval Timer", Intel, 1993

[AMBA APB, 2004]      "AMBA 3 APB Protocol v1.0", ARM, 2004

[Inno-Logic, 2005]    "System Verilog Based Verification Methodology",
                      http://inno-logic.com/systemverilog-based-verification-methodology.php,
                      accessed on 21.05.2015

[Todea, 2014]         B. Todea, V. Georgescu, "It's About Time! Efficient Profiling of Large
                      Regressions", CDNLive Conference, Munich, 2014

Annex 1
APB eVC Checking

• APB eVC checking
<'
extend has_checker ifx_apb_monitor_u{
event read_data_e is {@filtered_pena_rise_e and not(true(smp.sig_psel$==0));
[..((IFX_APB_VG_MAX_RDY_DELAY)*2)];
(@pready_rise_e or true(smp.sig_pready$ == 1)) and
true(smp.sig_psel$!=0) and
true(smp.sig_penable$==1) and
not(@filtered_pena_fall_e)}@clk_change;
event write_data_e is @read_data_e;
event start_r_tcm_e is (@read_data_e and not(@prstn_fall_e) and true(smp.sig_pwrite$ == 0));
event check_r_tcm_e is (@filtered_pena_fall_e and (not(@prstn_fall_e) or
not(@prstn_active_e)));

on prstn_active_e {
check IFX_APB_ERR_00_RSTA that (smp.sig_paddr$ == 0 &&
smp.sig_pwrite$ == 0 &&
smp.sig_penable$ == 0 &&
smp.sig_psel$ == 0 &&
smp.sig_pwdata$ == 0 &&
smp.sig_prdata$ == 0 &&
smp.sig_pready$ == 0 &&
smp.sig_pslverr$ == 0) else
dut_error("DATA ON APB BUS IS NOT '0' DURING RESET");
};

//Check that penable was asserted 1 clock cycle after psel.
expect IFX_APB_ERR_01_ENACORR is rise(smp.sig_psel$)@clk_rise => {[0..4];
(rise(smp.sig_penable$) or @prstn_fall_e)}@clk_rise
else dut_error("PENABLE WAS NOT ASSERTED 1 CLOCK CYCLE AFTER PSELECT WAS");

//Check that only one slave is selected.
on psel_rise_e {
var select_cnt: uint;
select_cnt = 0;
for i from 0 to (IFX_APB_VG_MAX_NUMBER_OF_SLAVES -1){
select_cnt = select_cnt + smp.sig_psel$[i:i];
};
check IFX_APB_ERR_02_SGSEL that ( select_cnt <= 1 ) else dut_error("MORE THAN ONE
SLAVE SELECTED");
};

//Check that pready is asserted in a predefined maximum number of clock cycles.
expect IFX_APB_ERR_03_RDYASSERT is @filtered_pena_rise_e => {[..IFX_APB_VG_MAX_RDY_DELAY+2];
(( @pready_rise_e or true(smp.sig_pready$ == 1)) or @prstn_fall_e);}@clk_rise
else dut_error("PREADY WAS NOT ASSERTED IN T HE MAX_RDY_DELAY INTERVAL");

//Check that prdata is stable during the transfer.
mem_r_tcm()@clk_change is {
while TRUE {
wait (@psel_rise_e or @set_bus_bb_e) and true(smp.sig_pwrite$ == 0) and
true(smp.sig_prstn$==1);
var rdata : ifx_apb_vg_data_t;
message(NONE, "START RD CHECK");
wait @filtered_pena_rise_e and true(smp.sig_prstn$ == 1);
// message(NONE, "start wait");
if((smp.sig_prstn$ == 1) && (smp.sig_pready$ == 1)) then {
wait [1]*cycle;
} else {
while (( smp.sig_prstn$ == 1) && (smp.sig_penable$ == 1) &&
(smp.sig_pready$ == 0)) {
wait [1]*cycle;
// message(NONE, "waited 1 cycle");

};
wait [1]*cycle;
};
if(smp.sig_prstn$ == 1) then {
rdata = smp.sig_prdata$ ;
         wait @filtered_pena_fall_e; -- and (not(@prstn_fall_e) or
                                     --     not(@prstn_active_e));
if (smp.sig_prstn$ == 1) then {
// message(NONE, hex(rdata), " ", hex(smp.sig_prdata$));
check IFX_APB_ERR_04_RDATSTB that ( rdata == smp.sig_prdata$ )
else
dut_error("DATA ON APB BUS IS NOT STABLE DURING
PENABLE");
};
};
};
};

//Check that pwdata is stable during the transfer.
mem_w_tcm()@clk_change is {
while TRUE {
wait (@psel_rise_e or @set_bus_bb_e) and not(@prstn_fall_e) and
true(smp.sig_pwrite$ == 1);
var wdata : ifx_apb_vg_data_t;
var address : ifx_apb_vg_addr_t;
message(NONE, "START WR CHECK");
wait @filtered_pena_rise_e and true(smp.sig_prstn$ == 1);
wdata = smp.sig_pwdata$;
address = smp.sig_paddr$;
while ((smp.sig_prstn$ == 1) && (smp.sig_penable$ == 1) && (smp.sig_pready$
== 0)) {
wait [1]*cycle;
};
if ((smp.sig_prstn$ == 1) && (smp.sig_pwrite$ == 1)) then {
check IFX_APB_ERR_05_WDATSTB that ( wdata == smp.sig_pwdata$ &&
address == smp.sig_paddr$) else
dut_error("DATA ON APB BUS IS NOT STABLE DURING PENABLE");
};
};
};
'>

• APB eVC coverage collection
<'
extend has_coverage ifx_apb_monitor_u{
cover transfer_done is{
item write_transfer : uint (bits: 1) = transfer_data.rw_direction;
item address : uint (bits: IFX_APB_VG_ADDR_SIZE) = transfer_data.address using
address_buckets;
item slaves : uint [0..IFX_APB_VG_SLAVE_RANGE] = transfer_data.slave_no using ranges = {
range([0]); };
item slave_error : uint (bits: 1) = transfer_data.pslverr;
item p_write_transfer : uint (bits: 1) = prev_transfer_data.rw_direction;
item p_address : uint (bits: IFX_APB_VG_ADDR_SIZE) = prev_transfer_data.address using
address_buckets;
item p_slaves : uint [0..IFX_APB_VG_SLAVE_RANGE] = prev_transfer_data.slave_no using ranges =
{ range([0]); };
item p_slave_error : uint (bits: 1) = prev_transfer_data.pslverr;

item ready_delay : uint [0..IFX_APB_VG_MAX_RDY_DELAY] = rdy_cnt using ranges = {
range([0..IFX_APB_VG_MAX_RDY_DELAY],"",1); }, ignore = (ready_delay == 0);

cross write_transfer, slaves using name = no_of_w_transfers;
cross write_transfer, slave_error using name = transfers_with_error;

   };
};
'>

Annex 2
APB eVC Transfer Item

<'
// Item that describes an APB transaction, used by either monitor or bfm of SLAVE/MASTER
struct ifx_apb_item_s like any_sequence_item{
// kind: SLAVE, MASTER, MONITOR
const kind : ifx_apb_vg_item_kind_t;
// number of slave
slave_no : uint;
keep slave_no <= IFX_APB_VG_MAX_NUMBER_OF_SLAVES;
keep soft slave_no == 0;
// Type of transfer: READ or WRITE
rw_direction : ifx_apb_vg_transf_t;
// Address of the transfer
address : ifx_apb_vg_addr_t;
// Read or write transfer data
wr_data : ifx_apb_vg_data_t;
rd_data : ifx_apb_vg_data_t;
// Slave error
pslverr : uint (bits: 1);
// Ready response of the slave
!ready : uint (bits: 1);
// Ready delay of the slave
rdy_delay : uint;
keep rdy_delay <= IFX_APB_VG_MAX_RDY_DELAY;
// Transfer start delay for the APB MASTER
transfer_start_delay : uint;
keep soft transfer_start_delay == 0;
};

'>

Annex 3
Register File Declaration

<'

extend vr_ad_reg_file_kind : [TIMER];

extend TIMER vr_ad_reg_file{
keep size == 4;
post_generate() is also{
reset();
};
};

reg_def IFX_TIMER_CNT_0 TIMER 2'h0 {
reg_fld DATA : uint (bits : 8) : RW : 0 : cov;
};

reg_def IFX_TIMER_CNT_1 TIMER 2'h1 {
reg_fld DATA : uint (bits : 8) : RW : 0 : cov;
};

reg_def IFX_TIMER_CNT_2 TIMER 2'h2 {
reg_fld DATA : uint (bits : 8) : RW : 0 : cov;
};

reg_def IFX_TIMER_CTRL TIMER 2'h3 {
reg_fld SC : uint (bits : 2) : W : 0 : cov;
reg_fld RW_CTRL : uint (bits : 2) : W : 0 : cov;
reg_fld MODE : uint (bits : 3) : W : 0 : cov;
reg_fld BCD : uint (bits : 1) : W : 0 : cov;
};

'>

Annex 4
C_OUT Agent Driver Item

<'
// Item that sets the clock frequency for each of the three counters and
// the gate signals.
struct ifx_timer_clk_item_s like any_sequence_item{
// Flag for selecting the counter to be controlled
cnt_flag : uint (bits : 2);
// Clock frequency, duty cycle and offset for counter x
clk_freq : uint;
keep soft clk_freq in [0..50000];
clk_duty : uint;
keep soft clk_duty in [0..100];
clk_offset : uint;
// Gate for counter x
gate : uint (bits : 1);
};
'>

Annex 5
C_OUT Agent Sequence Library

<'
// Sequence definition for the clock agent
sequence ifx_timer_clk_sequence using
item = ifx_timer_clk_item_s,
created_driver = ifx_timer_clk_sequence_driver;

extend MAIN ifx_timer_clk_sequence {
!transfer : ifx_timer_clk_item_s;

body()@driver.clock is only {};
};

extend ifx_timer_clk_sequence_kind : [INIT_CLK_SEQ, EN_GATE, DSBL_GATE, CHG_CNT0_CLK];
// Sequence for clock initialization: gate LOW, frequency = 50 MHz, clock duty = 50%, offset = 0
extend INIT_CLK_SEQ ifx_timer_clk_sequence {
!transf : ifx_timer_clk_item_s;
clk_freq: uint;
keep soft clk_freq == IFX_TIMER_DEFAULT_FREQ;
body()@driver.clock is only {
for i from 0 to 2 do {
do transf keeping {
.cnt_flag == i;
.clk_freq == clk_freq;
.clk_duty == 50;
.clk_offset == 0;
.gate == 0;
};
};
};
};

// Gate enable sequence. The enabled counter is selected by constraining cnt_index
extend EN_GATE ifx_timer_clk_sequence {
!transf : ifx_timer_clk_item_s;
cnt_index : uint (bits : 2);
clk_freq: uint;
keep soft clk_freq == IFX_TIMER_DEFAULT_FREQ;
body()@driver.clock is only {
do transf keeping {
.cnt_flag == cnt_index;
.clk_freq == clk_freq;
.clk_duty == 50;
.clk_offset == 0;
.gate == 1;
};
};
};

// Gate disable sequence. The disabled counter is selected by constraining cnt_index
extend DSBL_GATE ifx_timer_clk_sequence {
!transf : ifx_timer_clk_item_s;
cnt_index : uint (bits : 2);
clk_freq: uint;
keep soft clk_freq == IFX_TIMER_DEFAULT_FREQ;
body()@driver.clock is only {
do transf keeping {
.cnt_flag == cnt_index;
.clk_freq == clk_freq;
.clk_duty == 50;
.clk_offset == 0;
.gate == 0;
};
};
};
'>

Annex 6
Counter Functionality Model

// Counter functionality TCM
cnt_op_tcm()@cnt_clk_rise_ep$ is {
while TRUE {
sync (@gate_rise_ep$ or @gate_active_ep$) and not(@reset_active_e) and
true(ld_init_val_flag == 0) and true(reset_flag == 0);
// Counter operation described according to the set mode (0 -5).
case cnt_cw[3:1] {
// Mode 0
0 : {
ovf_flag = 0;
while (cntx_gate$ == 1 && ld_init_val_flag == 0) {
// Set out signal according to mode 0 and counter operation
case cnt_cw[0:0] {
0 : { if (count == 0) then {
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
count = 0xFFFF;
} else {count[15:0] = count[15:0] - 1;};
};
1 : { count = bcd_counter_m(count);
if (count == 0) then {
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
};
};
};
b = a;
a = count;
// Write to the counter output field the corresponding value (MSB or LSB),
// controlled by the RWx bits in the control word
write_val_m(b);
// Out signal setup for mode 0
if(count == 1 || ovf_flag == 1) then {
out_s = 1;
} else if (count == 0) {
ovf_flag = 1;
out_s = 1;
} else { ovf_flag = 0;};
wait cycle;
};
write_val_m(a);
};
// Mode 1
1 : {
out_s = 0;
ovf_flag = 0;
wait [1]* cycle;
count = initial_value;
if (count == 0) then {
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
};
out_mode1_flag = 0;
while (cntx_gate$ == 1) {
// Set out signal according to mode 1 and counter operation
case cnt_cw[0:0] {
0 : { if (count == 1) then {
out_s = 0;
};
if (count == 0) then {
count = 0xFFFF;
out_s = 1;
} else {count[15:0] = count[15:0] - 1;};
};
1 : {
if (count == 1) then {
out_s = 0;
};
if (count == 0) then {
out_s = 1;
};

count = bcd_counter_m(count);
};
};
b = a;
a = count;
// Write to the counter output field the corresponding value (MSB or LSB),
// controlled by the RWx bits in the control word

write_val_m(b);
if (count == 0) then {
ovf_flag = 1;
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
} else if (ovf_flag == 1) then {
out_s = 1;
} else { out_s = 0;};
wait cycle;
};
if (count == 1) then { out_s = 1;};
write_val_m(a);
};
// Mode 2
2 : {
count = initial_value;
// null_cnt_bit = 0;
ovf_flag = 0;
if (count != 0) then {
wait [1]*cycle;
} else {
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
};
if(initial_value == 1) then {out_s = 0;};
while (cntx_gate$ == 1) {
case cnt_cw[0:0] {
0 : { count[15:0] = count[15:0] - 1; ovf_flag = 0; };
1 : { count = bcd_counter_m(count); };
};
// Set out signal according to mode 2 and counter operation
if(count == 1) then {
out_s = 0;
} else if(count == 0){
count = initial_value;
if(initial_value == 1) then { out_s = 0;
}else {out_s = 1;};
ovf_flag = 1;
nof_underflows_cnt = nof_underflows_cnt + 1 ;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
} else { out_s = 1;};
b = a;
a = count;
// Write to the counter output field the corresponding value (MSB or LSB),
// controlled by the RWx bits in the control word

write_val_m(b);
wait cycle;
};
write_val_m(a);
};
// Mode 3
3: {
count = initial_value;
// null_cnt_bit = 0;
if (count != 0) then {
wait [1]*cycle;
} else {
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
};
ovf_flag = 0;
out_mode3_flag = 1;
while (cntx_gate$ == 1 && reset_flag == 0) {
case cnt_cw[0:0] {
0 : { count[15:0] = count[15:0] - 1; ovf_flag = 0;};
1 : { count = bcd_counter_m(count); };

};
// Set out signal according to mode 3 and counter operation
if (cnt_cw[0:0] == 1) then {
mode3_treshold = bcd_conv_m(initial_value)/2;
conv_count = bcd_conv_m(count);
} else {
mode3_treshold = (initial_value / 2);
conv_count = count
};
if (conv_count > mode3_treshold && reset_flag == 0) then {
out_s = 1;
} else {out_s = 0;};
if (count == 0 && reset_flag == 0) then {
count = initial_value;
out_s = 1;
ovf_flag = 1;
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
};
b = a;
a = count;
// Write to the counter output field the corresponding value (MSB or LSB),
// controlled by the RWx bits in the control word

write_val_m(b);
wait cycle;
};
out_mode3_flag = 0;
if (count == 1) then {out_s = 1};
write_val_m(b);
count = initial_value;
};
// Mode 4
4: {
// count = initial_value;
if (count == 0) then {
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
};
out_s = 1;
ovf_flag = 0;
while (cntx_gate$ == 1) {
// Set out signal according to mode 4 and counter operation
case cnt_cw[0:0] {
0 : { if (count == 0) then {
count = 0xFFFF;
out_s = 1;
} else {count[15:0] = count[15:0] - 1; ovf_flag = 0; out_s = 1;};
if (count == 1) then {
out_s = 1;
} else if (count == 0) then {
out_s = 0;
ovf_flag = 1;
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
};
};
1 : { count = bcd_counter_m(count);
if (count == 0) then {
out_s = 0;
ovf_flag = 1;
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
} else if (count == 0x9999) then {
out_s = 1;
};
};
};
if (count == 1) then {
out_s = 1 ;
} else if (count == 0) then {
out_s = 0;
};
b = a;
a = count;

// Write to the counter output field the corresponding value (MSB or LSB),
// controlled by the RWx bits in the control word

write_val_m(b);
wait cycle;
};
write_val_m(a);
};
// Mode 5
5: {
count = initial_value;
if (count == 1) then {out_s = 0;};
wait [1]*cycle;
ovf_flag = 0;
while (cntx_gate$ == 1) {
// Set out signal according to mode 5 and counter operation
case cnt_cw[0:0] {
0 : { if (count == 1) then {
out_s = 0;
count[15:0] = count[15:0] - 1;
} else if (count == 0) then {
count = 0xFFFF;
out_s = 1;
ovf_flag = 1;
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
} else {count[15:0] = count[15:0] - 1; ovf_flag = 0; out_s = 1; };
};
1 : { count = bcd_counter_m(count);
ovf_flag = 0;
if (count != 0) then {
out_s = 1;
} else if (count == 0) then {
out_s = 0;
ovf_flag = 1;
nof_underflows_cnt = nof_underflows_cnt + 1;
nof_underflows_eot = nof_underflows_eot + 1;
emit cnt_underflow_e;
};
};
};
b = a;
a = count;
// Write to the counter output field the corresponding value (MSB or LSB),
// controlled by the RWx bits in the control word

write_val_m(b);
wait cycle;
};
if (count == 1 && reset_flag == 0) then { out_s = 1;
} else if (count == 0 && reset_flag == 0) then {out_s = 1};
write_val_m(a);
};
};

Annex 7
Automated Checking Mechanism

<'
extend has_checker ifx_timer_reference_u {

// Check that output signals are asserted correctly
expect IFX_APB_ERR_11_CNT0_RISE is @out0_rise_e => {[..10]; detach({@out0_sig_rise_e;[1]}) or
true(cnt0_model.check_mask==1)} @clk_agt_mon.uclk0_rise_e
else dut_error("OUT0 signal did not beh ave according to the set operation mode");
expect IFX_APB_ERR_12_CNT1_RISE is @out1_rise_e => {[..10]; detach({@out1_sig_rise_e;[1]}) or
true(cnt1_model.check_mask==1)} @clk_agt_mon.uclk1_rise_e
else dut_error("OUT1 signal did not behave according to the set operation mode");
expect IFX_APB_ERR_13_CNT2_RISE is @out2_rise_e => {[..10]; detach({@out2_sig_rise_e;[1]}) or
true(cnt2_model.check_mask==1)} @clk_agt_mon.uclk2_rise_e
else dut_error("OUT2 signal did not behave according to the set ope ration mode");

// Check that output signals are LOW when it is imposed
expect IFX_APB_ERR_14_CNT0_FALL is @out0_fall_e => {[..10]; detach({@out0_sig_fall_e;[1]}) or
true(cnt0_model.check_mask==1)} @clk_agt_mon.uclk0_rise_e
else dut_error("OUT 0 signal did not behave according to the set operation mode");
expect IFX_APB_ERR_15_CNT1_FALL is @out1_fall_e => {[..10]; detach({@out1_sig_fall_e;[1]}) or
true(cnt1_model.check_mask==1)} @clk_agt_mon.uclk1_rise_e
else dut_error("OUT1 signal did n ot behave according to the set operation mode");
expect IFX_APB_ERR_16_CNT2_FALL is @out2_fall_e => {[..10]; detach({@out2_sig_fall_e;[1]}) or
true(cnt2_model.check_mask==1)} @clk_agt_mon.uclk2_rise_e
else dut_error("OUT2 signal did not behave acco rding to the set operation mode");

// Check that output signals are asserted correctly
expect IFX_APB_ERR_17_CNT0_RISE is @out0_sig_rise_e => {[..10]; detach({@out0_rise_e;[10]}) or
true(cnt0_model.check_mask==1) or true(smp.sig_prstn$==0)} @clk_agt_mon.uclk0_rise_e
else dut_error("OUT0 signal did not behave according to the set operation mode");
expect IFX_APB_ERR_18_CNT1_RISE is @out1_sig_rise_e => {[..10]; detach({@out1_rise_e;[10]}) or
true(cnt1_model.check_mask==1) or true(smp.sig_prstn$==0)} @clk_agt_mon.uclk1_rise_e
else dut_error("OUT1 signal did not behave according to the set operation mode");
expect IFX_APB_ERR_19_CNT2_RISE is @out2_sig_rise_e => {[..10]; detach({@out2_rise_e;[10]}) or
true(cnt2_model.check_mask==1) or true(smp.sig_prstn$==0)} @clk_agt_mon.uclk2_rise_e
else dut_error("OUT2 signal did not behave according to the set operation mode");

// Check that output signals are LOW when it is imposed
expect IFX_APB_ERR_20_CNT0_FALL is @out0_sig_fall_e => {[..10]; detach({@out0_fall_e;[10]}) or
true(cnt0_model.check_mask==1) or true(smp.sig_prstn$==0)} @clk_agt_mon.uclk0_rise_e
else dut_error("OUT0 signal did not behave according to the set operation mode");
expect IFX_APB_ERR_21_CNT1_FALL is @out1_sig_fall_e => {[..10]; detach({@out1_fall_e;[10]}) or
true(cnt1_model.check_mask==1) or true(smp.sig_prstn$==0)} @clk_agt_mon.uclk1_rise_e
else dut_error("OUT1 signal did not behave according to the set oper ation mode");
expect IFX_APB_ERR_22_CNT2_FALL is @out2_sig_fall_e => {[..10]; detach({@out2_fall_e;[10]}) or
true(cnt2_model.check_mask==1) or true(smp.sig_prstn$==0)} @clk_agt_mon.uclk2_rise_e
else dut_error("OUT2 signal did not behave according t o the set operation mode");

// Check that all outputs are 0 after reset
on rst_fall_delayed_e {
check IFX_APB_ERR_12_OUTX_RST that ( smp.out0_timer$ == 0 && smp.out1_timer$ == 0 &&
smp.out2_timer$ == 0) else
dut_error("Output of t he timer is not 0 during reset");
};
};
'>

Annex 8
Coverage Group Example

<'
// Control word update coverage
cover ctrl_word_update_e is {
item curr_cnt : uint (bits : 2) = curr_setup.sel_cnt using ranges = { range([0..2],"",1); };
item prev_cnt : uint (bits : 2) = prev_setup.sel_cnt using ranges = { range([0..2],"",1); },
when = (first_collected == TRUE);
item curr_mode : uint (bits : 3) = curr_setup.mode using ranges = { range([0..5],"",1); },
ignore = (curr_mode > 5);
item prev_mode : uint (bits : 3) = prev_setup.mode using ranges = { range([0..5],"",1); },
ignore = (prev_mode > 5), when = (first_collected == TRUE);
item rw_mode : uint (bits : 2) = curr_setup.rwx using ranges = { range([1..3],"",1); },
ignore = (rw_mode == 2);
item BCD : uint (bits : 1) = curr_setup.bcd;
item clk0_freq : uint = clk_agt_mon.clk_0_freq using ranges = { range([0..50000],"",50001);
};
item clk1_freq : uint = clk_agt_mon.clk_1_freq using ranges = { range([0..50000],"",50001);
};
item clk2_freq : uint = clk_agt_mon.clk_2_freq using ranges = { range([0..50000],"",50001);
};
item gate0 : bit = cnt0_model.cntx_gate$ using ignore = (gate0 == 1);
item gate1 : bit = cnt1_model.cntx_gate$ using ignore = (gate1 == 1);
item gate2 : bit = cnt2_model.cntx_gate$ using ignore = (gate2 == 1);
cross curr_mode, prev_mode using name = cross__curr_mode__prev_mode, when = (first_collected
== TRUE);
};

// Reset fall coverage
cover rst_fall_e is {
item cnt0_mode : uint (bits : 3) = cnt0_model.cnt_mode using ranges = { range([0..5],"",1);
}, when = ((cnt0_model.rst_count > 1) && (cnt0_model.cnt_not_reconfigured == 1));
item cnt1_mode : uint (bits : 3) = cnt1_model.cnt_mode using ranges = { range([0..5],"",1);
}, when = ((cnt1_model.rst_count > 1) && (cnt1_model.cnt_not_reconfigured == 1 ));
item cnt2_mode : uint (bits : 3) = cnt2_model.cnt_mode using ranges = { range([0..5],"",1);
}, when = ((cnt2_model.rst_count > 1) && (cnt2_model.cnt_not_reconfigured == 1));
item cnt0_rw_mode : uint (bits : 2) = cnt0_model.rw_type_cov using ranges = {
range([1..3],"",1); }, when = ((cnt0_model.rst_count > 1) && (cnt0_model.cnt_not_reconfigured ==
1)), ignore = (cnt0_rw_mode == 0), ignore = (cnt0_rw_mode == 2);
item cnt1_rw_mode : uint (bits : 2) = cnt1_model.rw_type_cov using ranges = {
range([1..3],"",1); }, when = ((cnt1_model.rst_count > 1) && (cnt1_model.cnt_not_reconfigured ==
1)), ignore = (cnt1_rw_mode == 0), ignore = (cnt1_rw_mode == 2);
item cnt2_rw_mode : uint (bits : 2) = cnt2_model.rw_type_cov using ranges = {
range([1..3],"",1); }, when = ((cnt2_model.rst_count > 1) && (cnt2_model.cnt_not_reconfigured ==
1)), ignore = (cnt2_rw_mode == 0), ignore = (cnt2_rw_mode == 2);
item cnt0_BCD : uint (bits : 1) = cnt0_model.cnt_bcd using when = ((cnt0_model.rst_count >
1) && (cnt0_model.cnt_not_reconfigured == 1));
item cnt1_BCD : uint (bits : 1) = cnt1_model.cnt_bcd using when = ((cnt1_model.rst_count >
1) && (cnt1_model.cnt_not_reconfigured == 1));
item cnt2_BCD : uint (bits : 1) = cnt2_model.cnt_bcd using when = ((cnt2_model.rst_count >
1) && (cnt2_model.cnt_not_reconfigured == 1));
item clk0_freq : uint = clk_agt_mon.clk_0_freq using ranges = { range([0..50000],"",50001);
}, when = ((cnt0_model.rst_count > 1) && (cnt0_model.cnt_not_reconfigured == 1));
item clk1_freq : uint = clk_agt_mon.clk_1_freq using ranges = { range([0..50000],"",50001);
}, when = ((cnt1_model.rst_count > 1) && (cnt1_model.cnt_not_reconfigured == 1));
item clk2_freq : uint = clk_agt_mon.clk_2_freq using ranges = { range([0..50000],"",50001);
}, when = ((cnt2_model.rst_count > 1) && (cnt2_model.cnt_not_reconfigured == 1));
item gate0 : bit = cnt0_model.cntx_gate$ using when = ((cnt0_model.rst_count > 1) &&
(cnt0_model.cnt_not_reconfigured == 1)) ;
item gate1 : bit = cnt1_model.cntx_gate$ using when = ((cnt1_model.rst_count > 1) &&
(cnt1_model.cnt_not_reconfigured == 1));
item gate2 : bit = cnt2_model.cntx_gate$ using when = ((cnt2_model.rst_count > 1) &&
(cnt2_model.cnt_not_reconfigured == 1));
};
'>

Annex 9
Test Scenario Example

================================================================================
Title : TIMER VSEQ TEST
Project : IFX_TIMER_VG
File name : test_vseq.e
Created On : 09/05/2015
Developers : Vlad Georgescu
Purpose :
Description : The test uses predefined sequences from the
virtual sequence library
Notes :
================================================================================
<'
import ifx_timer_vg_tb/tests/ifx_timer_cfg;

extend MAIN ifx_timer_vsequence {
!cfg_cnt : CONFIG_CNT ifx_timer_vsequence;
!run_all_modes : RUN_ALL_MODES ifx_timer_vsequence;
!underflow : UNDERFLOW ifx_timer_vsequence;

body()@driver.cout_driver.clock is only{
wait [5] * cycle;
// Set values for clock frequency and clock duty
do cfg_cnt keeping {
.rw_control == 1; .bcd_cnt == 0; .mode_var == 1; .cnt_select == 0; .count_dur == 40;
//.driver == driver.cout_driver;
};
wait [200] * cycle;
do run_all_modes keeping {
.bcd_cnt == 0; .cnt_select == 1;
};
wait [200] * cycle;
do underflow keeping {
.mode_var == 1; .cnt_select == 2; .init_val == 152;
};
wait [30] * cycle;
message(NONE, "STOP RUN");
stop_run();
};
};

'>
