Principle of a CPU


Nacelunk

Recommended Posts

Can someone explain to me how CPUs work? I know that information (bits/bytes) moves between memory (HDD or RAM) and the CPU, which processes that information, but how does it do that? How do we get solved equations and formulas instead of just numbers?


The processor has many, many adding circuits in it. It basically takes in two binary numbers, adds them, and outputs the result (it's more complex than this, but it can be simplified to this). It even does subtraction by adding: it adds the two's complement of the second number.
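A minimal sketch of that idea in Python, assuming an 8-bit word: the same adder does both addition and subtraction, because subtracting b is the same as adding the two's complement of b (invert all its bits, add 1). The function names are invented for illustration.

```python
MASK = 0xFF  # 8-bit word: keep only the low 8 bits

def add8(a, b):
    """Add two 8-bit values, discarding any carry out of bit 7."""
    return (a + b) & MASK

def sub8(a, b):
    """Subtract by adding the two's complement of b."""
    return add8(a, ((~b & MASK) + 1) & MASK)

print(add8(200, 100))  # 300 doesn't fit in 8 bits, so this wraps to 44
print(sub8(10, 3))     # 7
print(sub8(5, 7))      # 254, which is -2 when read as a signed byte
```

The wrap-around at 256 is exactly what a real 8-bit adder does when the carry out of the top bit is dropped.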

 

Basically we don't get anything other than 1s and 0s out; it's just how they are interpreted that gives us the answers to things. All the pixels on your monitor are generated by an interpretation of a set of 1s and 0s...
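To illustrate that "it's all in the interpretation" point: the very same 8 bits can be read as a number, a character, or a pixel brightness. Nothing about the bits changes, only what we decide they mean.

```python
bits = 0b01000001             # one byte of memory: the pattern 01000001

print(bits)                   # interpreted as an unsigned integer: 65
print(chr(bits))              # interpreted as an ASCII character: 'A'
print(f"{bits}/255 grey")     # interpreted as one pixel in an 8-bit greyscale image
```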

 

This isn't the best description in the world...


To get a grip on the very basics, you have to go back to the nuts and bolts.

The basic CPU can be modelled with a matrix of hard-wired, hand-operated on/off switches configured as logical AND gates, a useful search term. Investigate Boolean algebra, also useful. The most basic CPUs were called bit-slice processors, also searchable, still available, and dating from the 70s, using the most basic logic functions. Equations and formulas belong to the realm of software programming, which has little relevance to the operation of a basic CPU, sometimes called an ALU (arithmetic and logic unit), also a searchable term. At the heart of every modern computer, unless it is analogue, lies a very simple and logical principle. It has been buried by technological convenience and the economics of mass marketing.
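The bit-slice idea can be sketched in Python: one "slice" handles a single bit of each operand plus a carry-in, and chaining N slices gives an N-bit ALU. The opcode names and function layout here are invented for illustration, not taken from any real bit-slice chip.

```python
def alu_slice(op, a, b, carry_in):
    """One 1-bit slice: returns (result bit, carry out) for opcode `op`."""
    if op == "AND":
        return a & b, 0
    if op == "OR":
        return a | b, 0
    if op == "ADD":                       # a 1-bit full adder
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out
    raise ValueError(op)

def alu(op, a, b, width=4):
    """Chain `width` identical slices, least significant bit first."""
    carry = 0
    result = 0
    for i in range(width):
        bit, carry = alu_slice(op, (a >> i) & 1, (b >> i) & 1, carry)
        result |= bit << i
    return result

print(alu("ADD", 5, 3))   # 8
print(alu("AND", 12, 10)) # 8
print(alu("OR", 12, 10))  # 14
```

The carry ripples from one slice to the next, just as it did between the chained bit-slice chips of the 70s.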


You might want to look at something called a 'finite state machine', which I believe is a method used to model and create CPUs. The basic idea is to model the CPU as a black box with certain inputs and certain outputs. You also agree that the box can be in a finite number of states. You draw these individually and show what the processor does to the input in each state. Example: if the CPU is in an idle state and the input is the 'add' command plus two 8-bit numbers, the CPU will add them and go to a 'completed' state. The output of the 'completed' state is the answer. This simple example doesn't do justice to typical, extremely complicated CPUs with thousands of states.
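The idle/add/completed example above can be sketched as a tiny state machine in Python. The state names and the `step` interface are invented for illustration; a real CPU's state graph is vastly larger.

```python
class ToyCPU:
    """A two-state machine: 'idle' accepts an ADD command, 'completed' holds the answer."""

    def __init__(self):
        self.state = "idle"
        self.output = None

    def step(self, command=None, a=0, b=0):
        if self.state == "idle" and command == "ADD":
            self.output = (a + b) & 0xFF   # 8-bit result
            self.state = "completed"
        elif self.state == "completed":
            self.state = "idle"            # ready for the next command
        return self.state, self.output

cpu = ToyCPU()
print(cpu.step("ADD", 200, 100))  # ('completed', 44) -- the sum wraps at 8 bits
print(cpu.step())                 # ('idle', 44) -- back to idle, output still readable
```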

 

I don't have enough time to look for information, but if you don't find anything useful on it, I can find sites later.


The basic CPU can be modelled with a matrix of hard-wired, hand-operated on/off switches configured as logical AND gates

 

It was probably a typo, but it's NAND gates :)

and then the simple application of De Morgan's laws.
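A quick sketch of why NAND plus De Morgan's laws is enough: NOT, AND and OR can all be built from NAND alone, and De Morgan's laws let you trade AND-shaped logic for OR-shaped logic. Bits are modelled here as the integers 0 and 1.

```python
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))   # De Morgan: a OR b = NOT(NOT a AND NOT b)

# Check De Morgan's laws over every input combination.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a & b) == or_(not_(a), not_(b))   # NOT(a AND b) = NOT a OR NOT b
        assert not_(a | b) == and_(not_(a), not_(b))  # NOT(a OR b) = NOT a AND NOT b
```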


I had the impression he wanted to know what was actually inside the black box.

 

Well, knowing that it's made of NAND gates tells you nothing. You could then argue that he was actually wondering what's inside the NAND gates.

 

The main point I forgot to mention: once you've figured out the states and how each state transitions to another, you can model it completely with NAND gates (which can in turn be modelled completely with CMOS transistors).
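As a small taste of that reduction to gates: a half adder (the sum and carry of two bits) built from nothing but NAND, following the standard five-gate construction. This is only an illustrative fragment of the larger state-machine-to-gates mapping.

```python
def nand(a, b):
    return 1 - (a & b)

def half_adder(a, b):
    """Sum and carry of two bits, using only NAND gates."""
    n = nand(a, b)
    total = nand(nand(a, n), nand(b, n))  # a XOR b, from four NANDs
    carry = nand(n, n)                    # a AND b (NAND followed by NOT)
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))
```

Chaining half adders (plus a carry path) gives full adders, and chains of those give the multi-bit adders mentioned earlier in the thread.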


CPUs are of two types, RISC and CISC. Both are essentially based on the same fundamental laws of Boolean algebra.

If you have studied Boolean algebra, it should be easy to understand that processors, and everything binary, can be implemented using electronic components like transistors, capacitors and resistors. You could very well replace them with mechanical equivalents.

Now the role of the CPU in a modern computer is to keep the computer synchronized in its activities... it's a small computing machine with the grand responsibility of synchronizing everything in a way that makes the computer useful to us.

Basically you get registers and some binary operations, along with a clock.
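"Registers, binary operations and a clock" can be sketched as a toy fetch-execute loop: each iteration is one clock tick, fetching an instruction and updating a register. The instruction names (LOAD/ADD/HALT) and single register A are invented for illustration.

```python
def run(program):
    """Execute a list of (opcode, *args) tuples; return register A on HALT."""
    regs = {"A": 0}
    pc = 0                        # program counter
    while True:                   # each loop iteration = one clock tick
        op, *args = program[pc]
        pc += 1
        if op == "LOAD":
            regs["A"] = args[0] & 0xFF
        elif op == "ADD":
            regs["A"] = (regs["A"] + args[0]) & 0xFF
        elif op == "HALT":
            return regs["A"]

print(run([("LOAD", 40), ("ADD", 2), ("HALT",)]))  # 42
```

Everything here maps onto the post above: `regs` is the register file, the `if` chain is the binary operations, and the loop is the clock driving each step in sequence.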


Let's not forget the ALU, most important of all the registers :)

:)

Already mentioned, see my post #4. It was not referred to as a register, but as a bit-slice processor.

 

In my early days, before chips or DIL ICs (dual in-line integrated circuits), when all computers were built from individual, discrete transistors (I just about remember working on all-valve computers, too), the ALU was the processor. Registers called buffers and accumulators were simply accessories to hold intermediate results.


Interesting you should say that, Gcol. I also used to work on CPU boards where the entire board was the CPU, made of discrete components. In fact, I still have the circuit diagrams for a few of them, and I've also supplied parts for use in valve/tube (thermionic valve) based computers for a museum.

I also used to teach Computer architecture too :)

 

So I do understand exactly what's been said. Although, having said that, I've never worked on an all-transistor one; I'd have loved to have seen that :)

 

Although I did have to build the basic gates from transistors during my early educational years, and I also did the same with relays, just for the fun of it :)


I too have a set of diagrams rotting in the attic. Many pages of closely printed Boolean logic, not a shred of electronics. Diagnosing down to a single Boolean function was the hard part; replacing a transistor or diode was the easy bit. Writing our own test and diagnostic programs in machine code, wrestling with setting up read and write currents in ferrite-core memories... ah, a trip down memory lane. The hardware was easy, completely logical. Nowadays, it's the programming that is the pain in the gluteus maximus.


I've got a question - can anyone tell me what the difference is between 8-, 16-, 32- and 64-bit buses?

 

It is just the number of bits the CPU can handle in parallel in one go, or the number of side-by-side logic units in the processor. In general terms, the bigger (wider) the bus, the quicker the data throughput, but the more complicated and expensive the architecture. One pair of wires in a serial peripheral interface; many more in a parallel one.
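The throughput point can be made concrete with a back-of-the-envelope sketch: moving the same 64 bytes takes proportionally fewer bus cycles as the bus gets wider. This is an idealised model that ignores protocol overhead, caching, and everything else that matters in practice.

```python
def cycles(total_bytes, bus_bits):
    """Idealised bus cycles needed to move `total_bytes` over a `bus_bits`-wide bus."""
    bytes_per_cycle = bus_bits // 8
    return -(-total_bytes // bytes_per_cycle)   # ceiling division

for width in (8, 16, 32, 64):
    print(f"{width:2d}-bit bus: {cycles(64, width):2d} cycles for 64 bytes")
```

So an 8-bit bus needs 64 cycles for what a 64-bit bus moves in 8, all else being equal, which it never is.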


In general terms, the bigger (wider) the bus, the quicker the data throughput, but the more complicated and expensive the architecture

But I don't think that, for example, an Athlon 64 3000 is two times faster than an Athlon XP 3000, which has a 32-bit architecture. Also, the cost is almost the same.


Somebody on these forums said it was about 30% faster, but only if you have programs that utilise the extra functionality. On Windows, a 64-bit processor will perform no faster than a 32-bit one, and will suck just as much as it normally does.


On Windows, a 64-bit processor will perform no faster than a 32-bit one, and will suck just as much as it normally does.

That changed in WinXP SP2 (Service Pack 2). The original version of WinXP and all earlier versions of Windows could not handle a 64-bit processor (i.e. it would run like a 32-bit processor, because the OS (operating system - Windows) could not utilise the extra 32 bits), but when MS brought out SP2 they incorporated compatibility for 64-bit processors.
