Computer Arithmetic


Benstar


The exact algorithm an ALU uses to multiply and divide binary numbers depends on the number of transistors available in the processor and on the kind of binary number being processed. Moreover, not all computers have been built to process binary numbers. I once worked on an IBM 1620, similar to an IBM 1401, both predating the IBM 360, which had no ALU and did decimal arithmetic instead of binary. These machines loaded addition and multiplication tables into memory and added/multiplied digit by digit with carry. Subtraction is easy enough using the addition tables, but division must be done iteratively, for example with Newton-Raphson division.
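
For anyone curious what that digit-by-digit scheme looks like, here is a toy C sketch of table-driven decimal addition with carry. The table layout and digit storage are my own invention for illustration, not the 1620's actual memory format, and the carry split uses ordinary arithmetic rather than a second table.

#include <stdio.h>

enum { NDIGITS = 8 };

/* add_table[a][b] = a + b, looked up rather than computed */
static int add_table[10][10];

static void build_table(void)
{
    for (int a = 0; a < 10; a++)
        for (int b = 0; b < 10; b++)
            add_table[a][b] = a + b;
}

/* Digit-by-digit decimal addition with carry; digits are stored
 * least significant first. */
static void dec_add(const int x[NDIGITS], const int y[NDIGITS], int sum[NDIGITS])
{
    int carry = 0;
    for (int i = 0; i < NDIGITS; i++) {
        int s = add_table[x[i]][y[i]] + carry;  /* table lookup, then carry */
        sum[i] = s % 10;                        /* toy split of digit...    */
        carry  = s / 10;                        /* ...and carry             */
    }
}

int main(void)
{
    build_table();
    int x[NDIGITS] = {9, 9, 5, 1, 0, 0, 0, 0};  /* 1599 */
    int y[NDIGITS] = {3, 0, 4, 2, 0, 0, 0, 0};  /* 2403 */
    int s[NDIGITS];
    dec_add(x, y, s);
    for (int i = NDIGITS - 1; i >= 0; i--)
        printf("%d", s[i]);                     /* prints 00004002 */
    printf("\n");
    return 0;
}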

 

Modern microprocessors with ALUs do a hardware "crunch" multiply for integer arithmetic and use more complex algorithms for floating-point arithmetic. As number formats grow (e.g. 64-bit integers), a full crunch multiplier costs a great many transistors, so a combination of a smaller multiplier with a loop in microcode, a hardware pipeline, or both may be used instead. Division is usually done with an iterative technique (e.g. Newton-Raphson).
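
As a rough sketch of the Newton-Raphson idea (not any particular FPU's circuit): the reciprocal of the divisor is refined with x <- x*(2 - b*x), which roughly doubles the number of correct bits per step, then multiplied by the dividend. Real hardware seeds x from a small lookup table; the crude linear seed below is just for illustration and assumes a positive divisor.

#include <math.h>
#include <stdio.h>

/* Newton-Raphson division sketch: a/b computed as a * (1/b), with 1/b
 * refined iteratively. No special-case handling, positive b only. */
static double nr_divide(double a, double b)
{
    int e;
    double m = frexp(b, &e);        /* b = m * 2^e, with m in [0.5, 1) */
    double x = 3.0 - 2.0 * m;       /* crude linear seed for 1/m       */
    for (int i = 0; i < 5; i++)     /* each step ~doubles correct bits */
        x = x * (2.0 - m * x);
    return a * ldexp(x, -e);        /* 1/b = (1/m) * 2^-e              */
}

int main(void)
{
    printf("%.15g\n", nr_divide(355.0, 113.0));  /* ~3.14159292035398 */
    return 0;
}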


Don't forget about Setun and ternary computers, which cut the number of half-adder operations by roughly a factor of 1.5 for ordinary additions and removed the need for any conversion of negative numbers, while naturally covering almost all of binary logic and adding the ability to do ternary logic. Fun times :)


When numbers are coded in two's complement, negative and positive signed numbers add and multiply exactly like unsigned ones, on strictly the same hardware. Only the detection of overflow differs.
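
A quick C demonstration of that point (same bit patterns, only the interpretation differs):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t  sa = -7,           sb = 100000;
    uint32_t ua = (uint32_t)sa, ub = (uint32_t)sb;  /* same bits, reinterpreted */

    /* Both lines print two identical 32-bit words. */
    printf("%08x %08x\n", (uint32_t)(sa * sb), ua * ub);
    printf("%08x %08x\n", (uint32_t)(sa + sb), ua + ub);
    return 0;
}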

 

Though, for some reason I can't grasp, the IEEE 754 floating-point format stores the mantissa as sign and magnitude rather than two's complement.

 

Multiplier trees can use binary single-bit full adders or half adders. They can also pre-code bit pairs of the multiplier as -2, -1, 0, +1, +2 (radix-4 Booth recoding), which has some advantages. See Knuth's "Seminumerical Algorithms".
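
For illustration, here is a small C sketch of that pre-coding done in software: the multiplier is scanned two bits at a time with one bit of overlap, each group selecting a partial product of 0, ±1 or ±2 times the multiplicand, so a 16-bit multiplier needs only 8 partial products instead of 16. A hardware tree would generate all the recoded partial products in parallel rather than accumulate them in a loop.

#include <stdint.h>
#include <stdio.h>

static int64_t booth_multiply(int16_t multiplicand, int16_t multiplier)
{
    int64_t acc = 0;
    int32_t m = multiplier;             /* sign-extended two's complement  */
    int prev = 0;                       /* overlapping bit, initially 0    */
    for (int i = 0; i < 16; i += 2) {
        int b0 = (m >> i) & 1;
        int b1 = (m >> (i + 1)) & 1;
        /* recoded digit in {-2,-1,0,+1,+2} from the triple (b1, b0, prev) */
        int digit = b0 + prev - 2 * b1;
        acc += (int64_t)digit * multiplicand * ((int64_t)1 << i);
        prev = b1;
    }
    return acc;
}

int main(void)
{
    printf("%lld\n", (long long)booth_multiply(-1234, 5678));  /* -7006652 */
    printf("%lld\n", (long long)booth_multiply(30000, -3));    /* -90000   */
    return 0;
}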

 

Smaller processors save hardware by wiring only a 4*32-bit multiplier, for instance, or even just one add-and-shift, but nowadays a full multiplier isn't such a big chunk any more. One 64-bit*64-bit multiplier takes around 1.5*100*64*64 = 0.6 million transistors "only", which is far less than any cache memory. A typical PC processor has 2-4 cores, each with one multiplier and one adder, each operating on 2*64 bits (SSE) or 4*32 bits etc. - that's already 8 full 64*64 float multipliers on a chip, which crunch numbers at one-per-cycle throughput on a Core 2. The AVX hardware widens this from 128 bits to 256 bits.
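
As a small taste of the SSE side of that (assuming an SSE2-capable x86 compiler), one instruction multiplies two packed doubles at once; the wider AVX registers mentioned do the same on four:

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(3.0, 1.5);       /* lanes: {1.5, 3.0}   */
    __m128d b = _mm_set_pd(2.0, 10.0);      /* lanes: {10.0, 2.0}  */
    double out[2];
    _mm_storeu_pd(out, _mm_mul_pd(a, b));   /* one packed multiply */
    printf("%g %g\n", out[0], out[1]);      /* prints 15 6         */
    return 0;
}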

 

Integer division uses a slow method because it doesn't tolerate the residual approximation of the Newton-Raphson iteration used for floats, so it's usually faster if the programmer converts to floats, then back to integers. Intel claims to have sped up integer division on recent processors.
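
A C sketch of that convert-to-float trick, which is exact as long as both operands fit in the double's 53-bit mantissa (so 32-bit integers are safe, full 64-bit ones are not):

#include <stdint.h>
#include <stdio.h>

static uint32_t div_via_double(uint32_t a, uint32_t b)
{
    /* a and b are exact as doubles, and the correctly rounded quotient
     * cannot cross an integer boundary, so truncation gives a/b exactly. */
    return (uint32_t)((double)a / (double)b);
}

int main(void)
{
    printf("%u %u\n", div_via_double(1000000007u, 97u), 1000000007u / 97u);
    return 0;
}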

Edited by Enthalpy

You do skip the step where you first have to code the number as two's complement, and yes, processors are fast at it, but it's an operation you don't have to do at all with trits.
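
For illustration, here is a small C sketch of balanced ternary with trits in {-1, 0, +1} stored as ordinary ints (least significant first); the point is that negation is just flipping each trit, with no separate encoding of negative numbers:

#include <stdio.h>

enum { NTRITS = 12 };

static void to_balanced_ternary(int n, int t[NTRITS])
{
    for (int i = 0; i < NTRITS; i++) {
        int r = n % 3;                  /* C remainder, may be negative      */
        if (r == 2 || r == -1)      { t[i] = -1; n = (n + 1) / 3; }
        else if (r == 1 || r == -2) { t[i] =  1; n = (n - 1) / 3; }
        else                        { t[i] =  0; n = n / 3;       }
    }
}

static int from_balanced_ternary(const int t[NTRITS])
{
    int n = 0;
    for (int i = NTRITS - 1; i >= 0; i--)
        n = 3 * n + t[i];
    return n;
}

int main(void)
{
    int t[NTRITS];
    to_balanced_ternary(-46, t);
    for (int i = 0; i < NTRITS; i++)    /* negation: flip each trit */
        t[i] = -t[i];
    printf("%d\n", from_balanced_ternary(t));   /* prints 46 */
    return 0;
}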

 

Also, Alex (that's me) <3 SSE (SSE2 and SSE3), except when he has to reverse-engineer something that uses them, which can be a real pain...

Edited by AtomicMaster

AtomicMaster: "you first have to code the number as two's complement"

=> How complicated and lengthy do you imagine this operation is?

 

To convert an unsigned number into a 2's complement signed one: do nothing at all.

To convert a signed number into 2's complement: it's already done! Apart from IEEE 754 floats, do you see anywhere in a computer where signed numbers are not already coded in 2's complement?
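
Made concrete in C, the "do nothing at all" conversion is a reinterpretation of the same bits, not an operation:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int32_t  s = -1;
    uint32_t u = 0xFFFFFFFFu;
    /* Identical bytes, so the comparison prints 1. */
    printf("%d\n", memcmp(&s, &u, sizeof s) == 0);
    return 0;
}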

 

-----------------------------------------

 

One credible reason why IEEE 754 stores the mantissa of floats as sign and magnitude rather than two's complement: the mantissa's heaviest bit must always be 1 so that it can be left implicit. According to the standard, this heaviest bit is not represented in the number since it's known to be 1: every normal float (zero and subnormals are handled separately) begins with an implicit leading 1, and only the following fraction bits are explicit in the IEEE representation. A two's complement mantissa would not have a fixed leading bit, so the trick wouldn't work. This gives one more bit of accuracy for the same format width.
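
A small C check of that implicit bit, reconstructing a float's value from its stored sign, exponent and 23 fraction bits:

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 6.125f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint32_t frac =  bits & 0x007FFFFF;      /* 23 stored fraction bits   */
    int      exp  = (bits >> 23) & 0xFF;     /* biased exponent           */
    int      sign = (bits >> 31) & 1;        /* sign bit (sign-magnitude) */

    double value = 1.0 + frac / 8388608.0;   /* put back the implicit 1   */
    value = ldexp(value, exp - 127);         /* scale by 2^(exp-127)      */
    if (sign) value = -value;

    printf("%g %g\n", (double)f, value);     /* prints 6.125 6.125 */
    return 0;
}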

