Abstract

Image and digital signal processing applications require high floating-point calculation throughput, and nowadays FPGAs are being used to perform these Digital Signal Processing (DSP) operations. Floating-point operations are hard to implement directly on FPGAs because of the complexity of their algorithms. On the other hand, many scientific problems require floating-point arithmetic with high levels of accuracy in their calculations. Therefore, we have explored FPGA implementations of multiplication for IEEE single-precision floating-point numbers. For floating-point multiplication in the IEEE single-precision format, we have to multiply two 24-bit mantissas.

Since 18-bit dedicated multipliers already exist in the Spartan-3, the main idea is to use these existing 18-bit multipliers to build a dedicated 24-bit multiplier, so as to perform floating-point arithmetic operations with utmost precision and accuracy, and to prototype the design on a Xilinx Spartan-3 FPGA using VHDL.

CHAPTER 1

INTRODUCTION

1.1 Introduction

Image and digital signal processing applications require high floating-point calculation throughput, and nowadays FPGAs are being used to perform these Digital Signal Processing (DSP) operations. Floating-point operations are hard to implement on FPGAs because their algorithms are quite complex. In order to combat this performance bottleneck, FPGA vendors including Xilinx have introduced FPGAs with nearly 254 dedicated 18x18-bit multipliers. These architectures can cater to the need for high-speed integer operations but are not well suited to floating-point operations, especially multiplication. Floating-point multiplication is one of the performance bottlenecks in high-speed and low-power image and digital signal processing applications. Recently, there has been significant work on the analysis of high-performance floating-point arithmetic on FPGAs, but so far no one has addressed the issue of replacing the dedicated 18x18 multipliers in FPGAs with an alternative implementation to improve floating-point efficiency. It is a well-known concept that the single-precision floating-point multiplication algorithm is divided into three main parts corresponding to the three parts of the single-precision format. In FPGAs, the bottleneck of any single-precision floating-point design is the 24x24-bit integer multiplier required for multiplication of the mantissas. In order to circumvent the aforesaid problems, we designed a floating-point multiplier.
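As an illustration of how the mantissa product can be obtained from the existing dedicated multipliers, the sketch below splits each 24-bit operand into two 12-bit halves so that every partial product fits within one 18x18 block. This is only a hedged, behavioral sketch of one possible decomposition; the entity and signal names are illustrative and not taken from the final design.

-- Behavioural sketch: 24x24 mantissa product built from four 12x12
-- partial products, each small enough for a Spartan-3 18x18 multiplier.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity mult24x24 is
  port (
    a, b : in  std_logic_vector(23 downto 0);  -- 24-bit mantissas (hidden bit included)
    p    : out std_logic_vector(47 downto 0)   -- full 48-bit product
  );
end entity;

architecture rtl of mult24x24 is
  signal ah, al, bh, bl : unsigned(11 downto 0);
  signal hh, hl, lh, ll : unsigned(23 downto 0);  -- 12x12 partial products
begin
  ah <= unsigned(a(23 downto 12));  al <= unsigned(a(11 downto 0));
  bh <= unsigned(b(23 downto 12));  bl <= unsigned(b(11 downto 0));

  hh <= ah * bh;   -- a_high * b_high
  hl <= ah * bl;   -- a_high * b_low
  lh <= al * bh;   -- a_low  * b_high
  ll <= al * bl;   -- a_low  * b_low

  -- recombine: p = hh*2^24 + (hl + lh)*2^12 + ll
  p <= std_logic_vector( shift_left(resize(hh, 48), 24)
                       + shift_left(resize(hl, 48), 12)
                       + shift_left(resize(lh, 48), 12)
                       + resize(ll, 48) );
end architecture;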

Although computer arithmetic is sometimes viewed as a specialized part of CPU design, discrete component design is also a very important aspect. A tremendous variety of algorithms have been proposed for use in floating-point systems. Actual implementations are usually based on refinements and variations of the few basic algorithms presented here. In addition to choosing algorithms for addition, subtraction, multiplication and division, the computer architect must make other choices. What precisions should be implemented? How should exceptions be handled? This report will give the background for making these and other decisions.

Our discussion of floating point will focus almost exclusively on the IEEE floating-point standard (IEEE 754) because of its rapidly increasing acceptance. Although floating-point arithmetic involves manipulating exponents and shifting fractions, the bulk of the time in floating-point operations is spent operating on fractions using integer algorithms. Thus, after our discussion of floating point, we will take a more detailed look at efficient algorithms and architectures.

The pivotal task that lies ahead is to design a floating-point multiplier in VHDL and implement it on an FPGA.

Why only floating point?

All data on microprocessors is stored in a binary representation at some level. After a good look at the kinds of real-number representations that could be used in processors, only two representations come close to fulfilling modern processor needs: the fixed-point and floating-point representations. Now, let us have a brief glance at these representations to understand what made us choose the floating-point representation.

Table 1.1 Comparison of Floating-Point and Fixed-Point Representations

Fixed Point / Floating Point
Limited range / Dynamic range
Number of bits grows for more accurate results / Accurate results
Easy to implement in hardware / More complex and higher cost to implement in hardware

Why only FPGA for prototyping?

Leading-edge ASIC designs are becoming more expensive and time-consuming because of the increasing cost of mask sets and the amount of engineering verification required. Getting a device right the first time is imperative. A single missed deadline can mean the difference between profitability and failure in the product life cycle. Figure 1.1 shows the impact that time-to-market delays can have on product sales.

Fig 1.1 Declining Product Sales Due to Late-to-Market Designs

Using an FPGA to prototype an ASIC or ASSP for verification of both register transfer level (RTL) and initial software development has now become standard practice to both decrease development time and reduce the risk of first silicon failure. An FPGA prototype accelerates verification by allowing testing of a design on silicon from day one, months in advance of final silicon becoming available. Code can be compiled for the FPGA, downloaded, and debugged in hardware during both the design and verification phases using a variety of techniques and readily available solutions. Whether you're doing RTL validation, initial software development, and/or system-level testing, FPGA prototyping platforms provide a faster, smoother path to delivering an end working product.

Table 1.2 Comparison between FPGA and ASIC:

Property / FPGA / ASIC
Digital and Analog Capability / Digital only / Digital and analog
Size / Larger / Smaller
Operating Frequency / Lower (up to 400 MHz) / Higher (up to 3 GHz)
Power Consumption / Higher / Lower
Design Cycle / Very short (a few minutes) / Very long (about 12 weeks)
Mass Production / Higher price per unit / Lower price per unit
Security / More secure / Less secure

VHDL

The VHSIC (Very High Speed Integrated Circuits) Hardware Description Language (VHDL) was first proposed in 1981. The development of VHDL was initiated by IBM, Texas Instruments, and Intermetrics in 1983. The result, contributed to by many participating EDA (Electronic Design Automation) groups, was adopted as the IEEE 1076 standard in December 1987.

VHDL is intended to provide a tool that can be used by the digital systems community to distribute their designs in a standard format.

As a standard description of digital systems, VHDL is used as input and output to various simulation, synthesis, and layout tools. The language provides the ability to describe systems, networks, and components at a very high behavioral level as well as at a very low gate level. It also represents a top-down methodology and environment. Simulations can be carried out at any level, from a general functional analysis to a very detailed gate-level waveform analysis.
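As a small, generic illustration of the behavioral style of description mentioned above (not part of the project design), a 2-to-1 multiplexer can be written in a few lines:

library ieee;
use ieee.std_logic_1164.all;

entity mux2 is
  port (
    a, b : in  std_logic;   -- data inputs
    sel  : in  std_logic;   -- select line
    y    : out std_logic    -- selected output
  );
end entity;

architecture behavioral of mux2 is
begin
  -- concurrent signal assignment: describes the behaviour, not the gates
  y <= a when sel = '0' else b;
end architecture;

The same component could equally be described structurally at the gate level, and both descriptions can be exercised by the same test bench.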

CHAPTER 2

PROJECT THESIS

2.1 NUMBER REPRESENTATIONS

There are two types of number representations:

  1. Fixed-point.
  2. Floating-point.

Now let us have a detailed glance at each of them.

2.1.1 Fixed-Point Representation

In fixed-point representation, a specific radix point (called a decimal point in English and written ".") is chosen, so there is a fixed number of bits to the right and a fixed number of bits to the left of the radix point. The bits to the left of the radix point are called the integer bits. The bits to the right of the radix point are called the fractional bits.

Fig 2.1 Fixed-Point Representation

In this example, assume a 16-bit fixed-point number with 8 integer bits and 8 fractional bits, which is typically referred to as an 8.8 representation. Like most signed integers, fixed-point numbers are represented in two's complement binary. Using a positive number keeps this example simple.

To encode 118.625, first find the value of the integer bits. The binary representation of 118 is 01110110, so this is the upper 8 bits of the 16-bit number. The fractional part of the number is represented as 0.625 x 2^n, where n is the number of fractional bits. Because 0.625 x 256 = 160, you can use the binary representation of 160, which is 10100000, to determine the fractional bits. Thus, the binary representation for 118.625 is 0111 0110 1010 0000. The value is typically referred to using the hexadecimal equivalent, which is 76A0.
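The same encoding can be checked in simulation. The sketch below assumes a VHDL-2008 tool with the ieee.fixed_pkg package available; the entity and constant names are illustrative only.

library ieee;
use ieee.std_logic_1164.all;
use ieee.fixed_pkg.all;

entity fixed_point_demo is
end entity;

architecture sim of fixed_point_demo is
  -- ufixed(7 downto -8): 8 integer bits and 8 fractional bits (8.8 format)
  constant C_VALUE : ufixed(7 downto -8) := to_ufixed(118.625, 7, -8);
begin
  process
  begin
    -- the bit pattern should match the hand-derived value 76A0
    assert to_slv(C_VALUE) = x"76A0"
      report "unexpected 8.8 encoding of 118.625" severity failure;
    wait;
  end process;
end architecture;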

The major advantage of using fixed-point representation for real numbers is that fixed-point adheres to the same basic arithmetic principles as integers. Therefore, fixed-point numbers can take advantage of the general optimizations made to the Arithmetic Logic Unit (ALU) of most microprocessors, and do not require any additional libraries or any additional hardware logic. On processors without a floating-point unit (FPU), such as the Analog Devices Blackfin Processor, fixed-point representation can result in much more efficient embedded code when performing mathematically heavy operations.
In general, the disadvantage of using fixed-point numbers is that fixed-point numbers can represent only a limited range of values, so fixed-point numbers are susceptible to common numeric computational inaccuracies. For example, the range of possible values in the 8.8 notation that can be represented is +127.99609375 to -128.0. If you add 100 + 100, you exceed the valid range of the data type, which is called overflow. In most cases, the values that overflow are saturated, or truncated, so that the result is the largest representable number.
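To make the saturation behaviour concrete, the following hedged sketch (not taken from the original design) adds two signed 8.8 values using ieee.numeric_std and clamps the result to the largest or smallest representable value on overflow:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity sat_add_8_8 is
  port (
    a, b : in  std_logic_vector(15 downto 0);  -- signed 8.8 operands
    y    : out std_logic_vector(15 downto 0)   -- saturated 8.8 sum
  );
end entity;

architecture rtl of sat_add_8_8 is
begin
  process (a, b)
    -- one extra bit so the intermediate sum cannot overflow
    variable sum : signed(16 downto 0);
  begin
    sum := resize(signed(a), 17) + resize(signed(b), 17);
    if sum > to_signed(32767, 17) then
      y <= x"7FFF";                        -- saturate to +127.99609375
    elsif sum < to_signed(-32768, 17) then
      y <= x"8000";                        -- saturate to -128.0
    else
      y <= std_logic_vector(sum(15 downto 0));
    end if;
  end process;
end architecture;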

2.1.2 Floating-Point Numbers

The floating-point representation is one way to represent real numbers. A floating-point number n is represented with an exponent e and a mantissa m, so that:

n = b^e × m, where b is the base number (also called the radix)

So for example, if we choose the number n = 17 and the base b = 10, the floating-point representation of 17 would be: 17 = 10^1 × 1.7

Another way to represent real numbers is to use fixed-point number representation. A fixed-point number with 4 digits after the decimal point could be used to represent numbers such as: 1.0001, 12.1019, 34.0000, etc. Both representations are used depending on the situation. For the implementation on hardware, the base-2 exponents are used, since digital systems work with binary numbers.

Using base-2 arithmetic brings problems with it: for example, decimal fractions such as 0.1 or 0.01 cannot be represented exactly in the binary floating-point format, while with a fixed-point format the decimal point can be thought away (provided the value is within range), giving an exact representation. Fixed-point arithmetic, which is faster than floating-point arithmetic, can then be used. This is one of the reasons why fixed-point representations are used for financial and commercial applications.

The floating-point format can represent a wide range of scales without losing precision, while the fixed-point format has a fixed window of representation. For example, in a 32-bit floating-point representation, numbers from 3.4 × 10^38 down to 1.4 × 10^-45 can be represented with ease, which is one of the reasons why floating-point representation is the most common solution.

Floating-point representations also include special values such as infinity and Not-a-Number (NaN, e.g. the result of taking the square root of a negative number).

A float consists of three parts: the sign bit, the exponent, and the mantissa. The division into these three parts is shown below for the single-precision floating-point format, which will be elaborated in detail at a later stage.

Fig 2.2 Floating-Point Representation

The sign bit is 0 if the number is positive and 1 if the number is negative. The exponent is an 8-bit field that represents exponent values from -126 to 127. The exponent is not stored in the typical two's complement representation because this would make comparisons more difficult. Instead, the value is biased: 127 is added to the actual exponent before it is stored, which makes it possible to represent negative exponents. The mantissa is the normalized binary representation of the number to be multiplied by 2 raised to the power defined by the exponent.

Now look at how to encode 118.625 as a float. The number 118.625 is a positive number, so the sign bit is 0. To find the exponent and mantissa, first write the number in binary, which is 1110110.101 (see the "Fixed-Point Representation" section for details on finding this number). Next, normalize the number to 1.110110101 × 2^6, which is the binary equivalent of scientific notation. The exponent is 6 and the mantissa is 1.110110101. The exponent must be biased, giving 6 + 127 = 133. The binary representation of 133 is 10000101.

Thus, the floating-point encoded value of 118.625 is 0100 0010 1111 0110 1010 0000 0000 0000. Binary values are often referred to by their hexadecimal equivalent; in this case, the hexadecimal value is 42F6A000.
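The three fields derived above can also be assembled and checked in a small simulation-only sketch (the constant names are illustrative, not from the design):

library ieee;
use ieee.std_logic_1164.all;

entity float_encode_demo is
end entity;

architecture sim of float_encode_demo is
  constant C_SIGN     : std_logic                     := '0';         -- positive number
  constant C_EXPONENT : std_logic_vector(7 downto 0)  := "10000101";  -- 133 = 6 + 127
  constant C_FRACTION : std_logic_vector(22 downto 0) := "11011010100000000000000";
  constant C_WORD     : std_logic_vector(31 downto 0) := C_SIGN & C_EXPONENT & C_FRACTION;
begin
  process
  begin
    -- the assembled word should equal the hand-derived encoding of 118.625
    assert C_WORD = x"42F6A000"
      report "unexpected single-precision encoding" severity failure;
    wait;
  end process;
end architecture;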

Floating point thus solves a number of representation problems. Fixed point has a fixed window of representation, which limits it from representing very large or very small numbers. Also, fixed point is prone to a loss of precision when two large numbers are divided.

Floating-point, on the other hand, employs a sort of "sliding window" of precision appropriate to the scale of the number. This allows it to represent numbers from 1,000,000,000,000 to 0.0000000000000001 with ease.

Comparison of Floating-Point and Fixed-Point Representations

Fixed Point / Floating Point
Limited range / Dynamic range
Number of bits grows for more accurate results / Accurate results
Easy to implement in hardware / More complex and higher cost to implement in hardware

2.1.3 Floating Point: Importance

Many applications require numbers that aren't integers. There are a number of ways that non-integers can be represented. Adding two such numbers can be done with an integer add, whereas multiplication requires some extra shifting. There are various ways to represent such number systems. However, only one non-integer representation has gained widespread use, and that is floating point. In this system, a computer word is divided into two parts, an exponent and a significand. As an example, an exponent of -3 and a significand of 1.5 might represent the number 1.5 × 2^-3 = 0.1875. The advantages of standardizing a particular representation are obvious.

The semantics of floating-point instructions are not as clear-cut as the semantics of the rest of the instruction set, and in the past the behavior of floating-point operations varied considerably from one computer family to the next. The variations involved such things as the number of bits allocated to the exponent and significand, the range of exponents, how rounding was carried out, and the actions taken on exceptional conditions like underflow and overflow. Nowadays the computer industry is rapidly converging on the format specified by IEEE standard 754-1985 (also an international standard, IEC 559). The advantages of using a standard variant of floating point are similar to those for using floating point over other non-integer representations. IEEE arithmetic differs from much previous arithmetic.

2.2 IEEE Standard 754 for Binary Floating-Point Arithmetic

2.2.1 Formats

The IEEE (Institute of Electrical and Electronics Engineers) has produced a standard to define floating-point representation and arithmetic. Although there are other representations, the IEEE format is the most common representation used for floating-point numbers.

The standard brought out by the IEEE came to be known as IEEE 754.

The standard specifies:

1) Basic and extended floating-point number formats

2) Add, subtract, multiply, divide, square root, remainder, and compare operations

3) Conversions between integer and floating-point formats

4) Conversions between different floating-point formats

5) Conversions between basic format floating-point numbers and decimal strings

6) Floating-point exceptions and their handling, including non-numbers (NaNs)

When it comes to precision and width in bits, the standard defines two groups: the basic and the extended format. The extended format is implementation dependent and does not concern this project.

The basic format is further divided into the single-precision format, which is 32 bits wide, and the double-precision format, which is 64 bits wide. The three basic components are the sign, exponent, and mantissa. The storage layout for single precision is shown below:

2.2.2 Storage Layout

IEEE floating point numbers have three basic components: the sign, the exponent, and the mantissa. The mantissa is composed of the fraction and an implicit leading digit (explained below). The exponent base (2) is implicit and need not be stored.

The following table shows the layout for single (32-bit) and double (64-bit) precision floating-point values. The number of bits for each field is shown (bit ranges are in square brackets):

Sign / Exponent / Fraction / Bias
Single Precision / 1 [31] / 8 [30-23] / 23 [22-00] / 127
Double Precision / 1 [63] / 11 [62-52] / 52 [51-00] / 1023

Table 2.1 Storage layouts
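The layout of Table 2.1 translates directly into bit slices in hardware. The fragment below is a hedged illustration of extracting the three single-precision fields; the entity and port names are assumptions, not the project's own signal names.

library ieee;
use ieee.std_logic_1164.all;

entity sp_fields is
  port (
    float_in : in  std_logic_vector(31 downto 0);
    sign     : out std_logic;
    exponent : out std_logic_vector(7 downto 0);
    fraction : out std_logic_vector(22 downto 0)
  );
end entity;

architecture rtl of sp_fields is
begin
  sign     <= float_in(31);            -- bit [31]
  exponent <= float_in(30 downto 23);  -- bits [30-23], stored with a bias of 127
  fraction <= float_in(22 downto 0);   -- bits [22-00], without the implicit leading 1
end architecture;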

The Sign Bit

The sign bit is as simple as it gets. 0 denotes a positive number; 1 denotes a negative number. Flipping the value of this bit flips the sign of the number.

The Exponent

The exponent field needs to represent both positive and negative exponents. To do this, a bias is added to the actual exponent in order to get the stored exponent. For IEEE single-precision floats, this value is 127. Thus, an exponent of zero means that 127 is stored in the exponent field. A stored value of 200 indicates an exponent of (200-127), or 73. For reasons discussed later, exponents of -127 (all 0s) and +128 (all 1s) are reserved for special numbers. For double precision, the exponent field is 11 bits, and has a bias of 1023.
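As a small hedged helper (illustrative only, not part of the project design), the bias can be removed from a stored single-precision exponent field with ordinary integer arithmetic:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package sp_exponent_pkg is
  function unbias(stored_exp : std_logic_vector(7 downto 0)) return integer;
end package;

package body sp_exponent_pkg is
  -- returns the actual exponent: e.g. a stored value of 200 yields 200 - 127 = 73
  function unbias(stored_exp : std_logic_vector(7 downto 0)) return integer is
  begin
    return to_integer(unsigned(stored_exp)) - 127;
  end function;
end package body;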

The Mantissa

The mantissa, also known as the significand, represents the precision bits of the number. It is composed of an implicit leading bit and the fraction bits. To find out the value of the implicit leading bit, consider that any number can be expressed in scientific notation in many different ways. For example, the number five can be represented as any of these: