
What's the speed limit and how close are we?


GrandMasterK


I'm somewhat at a loss at the moment as to why processor speed hasn't started ramping up again. I'm not sure if it's a matter of the industry's shift in focus to multi-core architectures, or if it's due to the approaching limitations of transistor gate size.

 

IBM produced a gate a couple years back that was only four atoms across, but IIRC it produced a lot of errors due to quantum effects. What I don't know is how close processors are getting to that size. I know they're still substantially larger than that, but I don't know how much larger. (How big is 45nm in terms of "# of atoms across the gate"? I've no idea.)

 

The "buzz" over the last couple of years has been that once Core 2 Duo was out that Intel would have more or less resolved its current architecture limitations and would be able to start ramping up processor speeds again. But so far that hasn't really happened. Instead they seem to be focusing on increasing L1 cache size (up to 12mb) and front-side bus speed (1333mhz!).

 

Intel's current roadmap calls for the Nehalem (45nm) architecture in 2008 and the Westmere (32nm) architecture in 2009. The number of transistors could approach 1 billion in these series and they'll all be 4- or 8-core processors. But nary a word is being said about clock speed.

 

I don't get it, but that's what's happening. Frankly I think the multi-core thing is getting out of hand. It's hard to imagine that an 8-core processor is really going to be all that much better for me while sitting here typing a message in a window in Internet Explorer, for example. Yeah, it might render Shrek 8 in 20 seconds, but will that really help the average computer user? But the way things are going we may see 16- or 32-core processors before some common sense returns to the industry.


I'm sure Intel want to get plenty of bang for their buck with the Core Duo 2 before they release anything significantly better. They won't even talk about the specifics of the next wave this early on in the CD2 marketing game if they think there is a chance potential customers will hang on to their cash instead of upgrading their architecture twice.


It's hard to imagine that an 8-core processor is really going to be all that much better for me while sitting here typing a message in a window in Internet Explorer, for example.

 

I have a nice Pentium III I could lend you for that.

 

Faster processors are for things like video encoding and gaming, at least for home users. Multiple cores help both of these enormously, although coding real-time games to take advantage of them is a non-trivial problem, and the industry hasn't yet settled on a universal solution.


I'm sure Intel want to get plenty of bang for their buck with the Core Duo 2 before they release anything significantly better. They won't even talk about the specifics of the next wave this early on in the CD2 marketing game if they think there is a chance potential customers will hang on to their cash instead of upgrading their architecture twice.

 

It's not early -- Nehalem is only one year out. Historically we know a lot more at this stage of development. Intel has become distinctly cagier about its roadmap in recent years. And BTW, not one CPU on the roadmap has a clock speed over 3.2 GHz. That's actually DOWN from 3.8.

 

(It's "Core2 Duo", btw, not "Core Duo 2". It may seem like a trivial distinction but it allows them to brand things like "Core2 Quad", etc.)

 

 

I have a nice Pentium III I could lend you for that.

 

Faster processors are for things like video encoding and gaming, at least for home users. Multiple cores help both of these enormously, although coding real-time games to take advantage of them is a non-trivial problem, and the industry hasn't yet settled on a universal solution.

 

Actually the underlying presumption of all this multi-core focus is that in fact multiple cores help everyone. And to a certain extent, of course, they're right -- we all run many programs at the same time these days, and operating systems and programs have become far more complex.

 

And faster processors are NOT just for specialized applications. When you increase the speed of the processor, you increase the speed of every subsystem in the box. That's because for all our efforts at decoupling systems from the processor, all we've really done is reduce overall CPU load. The CPU is still involved in every single aspect of the system. Bump the processor, and the video, disk, and memory all get (apparent) bumps too.

 

But my main point was that the industry seems to be saying "megahurtz bad, multicore good", and I don't think that's a smart plan. There's nothing really wrong with increasing processor speed (so long as you understand the limitations that go along with it), and going all-in on cores just puts all the eggs in a different basket, one that can never fully solve the problem either.

 

Adding cores can often cause an apparent speed bump, yes. Writing that write-behind cache file from that dump you took from the Flash drive a few minutes ago has never been smoother. But what are my OTHER 30 processor cores going to do over the next half hour? Search for extraterrestrial intelligence? :D


Computer speed and Internet speed? Will a home computer ever be able to render every shot in Shrek 3 in under a second? Will my home internet connection ever run at 1 TB/s?

 

I think it's because so much has to change, really. For instance, not all that long ago Windows didn't really support threading at any level that wasn't superficial. I went and bought the 64-bit processor AMD had out, thinking it should make some difference moving up from the 32-bit platform, but not much else has followed suit, and of course now all the rage is dual core. I think most anything can still be sensibly run at roughly 3 GHz, but that's from my perspective, and my first computer ran a 666 MHz Coppermine Pentium III. :D Try playing a first-person shooter on that; I think it killed my comp, to be honest.


It's not early -- Nehalem is only one year out. Historically we know a lot more at this stage of development. Intel has become distinctly cagier about its roadmap in recent years. And BTW, not one CPU on the roadmap has a clock speed over 3.2 GHz. That's actually DOWN from 3.8.

 

You appear to be a bit hung up on raw clockspeed, which is odd given that an E6320 (for example) is a lot faster at around 1.8GHz than my old 2GHz P4 from 2002.


You appear to be a bit hung up on raw clockspeed, which is odd given that an E6320 (for example) is a lot faster at around 1.8GHz than my old 2GHz P4 from 2002.

 

Jesus. I'm trying to elevate the discussion here. You know, computer science? Work with me, for pete's sake. :)

 

That is the popular meme today -- that clock speed is no longer relevant. The industry has done an excellent job deflecting attention from gigahertz, and initially that would seem to be a good thing. We should all be happy with 3 gigahertz, which as we all know is the universal "lightspeed" of computing. I believe this was first measured by Michelson back in 1885, wasn't it? :D

 

But in fact there's really no reason why computers can't also have faster clock speeds than they currently do. I suspect it has more to do with marketing and general engineering direction than actual physical properties of the chips, but one of the concerns I have is that it will become HARDER for them to increase clock speed in the future because of all the additional cores (with, presumably, different fail speeds).


That is the popular meme today -- that clock speed is no longer relevant.

 

I'm not saying that clock speed is irrelevant, just that, by and large, in recent times improvements in single-core performance have come from improvements in architecture rather than from simply upping the clock speed.

 

All other things being equal, an improvement in clock speed is definitely advantageous, and were we seeing claims that the new Core2s would be running at 4 GHz plus, I'm sure everyone would be very excited. However, it doesn't really mean much on its own, because we can't assume that all other things are equal.

 

But in fact there's really no reason why computers can't also have faster clock speeds than they currently do. I suspect it has more to do with marketing and general engineering direction than actual physical properties of the chips, but one of the concerns I have is that it will become HARDER for them to increase clock speed in the future because of all the additional cores (with, presumably, different fail speeds).

 

The point about failure rates is a good one, especially given that this would magnify the already vast differences in yields between AMD, Intel and Nvidia.

 

WRT clock speeds, I would make the distinction between experimental chips and production chips - enormously high clock rates can be and have been produced in the lab, but not on today's production chips, even with absurd levels of cooling.


Woohoo, actual computer science discussion! (/cheer) :)

 

Those are good points, and in general I think the multi-core trend has been positive. We'd reached a point where linear speed improvements were only producing modest overall gains. In simple terms, we don't need 4 GHz machines, we need *8 GHz* machines in order to see a real improvement in overall speed. Followed next year by 16 GHz chips, then 32 GHz chips, etc. It was a deadly spiral of radical demands piled one on top of another -- a no-win scenario.

 

But have we really solved this problem, or just replaced it with another, even more complex scenario? Chip complexity is greatly increased, and it's unclear (at least outside Intel's labs) what will happen when they begin to try to ramp up processor speeds again.

 

What I think is hoped is that a modest, linear progression in clock speed (adding, say, 500 MHz with each iteration), combined with doubling core counts, will now translate into a geometric progression in aggregate processing power (2x, 2x, 2x...), which will return us to a steady, linear progression in overall computer speed (50% faster each time you replace the old iron at CompUSA).
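Whether that aggregate power actually shows up as overall speed depends on how much of a given workload can run in parallel; Amdahl's law puts a ceiling on it. Here's a minimal, purely illustrative sketch (the parallel fractions and core counts below are my own toy numbers, not benchmarks) that computes the best-case speedup:

```haskell
-- Amdahl's law: best-case speedup on n cores when a fraction p of the
-- work can be parallelised (illustrative numbers only).
amdahl :: Double -> Double -> Double
amdahl p n = 1 / ((1 - p) + p / n)

main :: IO ()
main = mapM_ report [(p, n) | p <- [0.5, 0.9, 0.99], n <- [2, 8, 32]]
  where
    report (p, n) =
      putStrLn $ "p = " ++ show p ++ ", cores = " ++ show n
                 ++ " -> speedup " ++ show (amdahl p n)
```

Even with 90% of the work parallelisable, 32 cores give well under 10x, which is why the serial part of the workload still matters.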

 

But whether or not that actually turns out to be the case is, I think, anyone's guess at this point.


It's very dependent on the software side of things as well. A single-threaded application (which most "home" software still is at this point) is still going to be limited by single-core performance. And, although it might be a bit of a truism, the only things that really stress the processor are things that require a lot of number crunching, and only real-time programs (rendering, usually) make that into a significant technical challenge.

 

Going by the way the home market is heading, any improvements will only be observable on most computers as the latest version of Spider Solitaire using full 3D spinnionvision rather than the boring 2D of yesteryear.

 

In purely practical terms, AMD need a big hit in either the graphics card or processor markets soon. Losing the amount of money they're losing cannot be sustainable.


To address the OP, I suggest you have a look at Seth Lloyd's "ultimate laptop":

 

http://www.edge.org/3rd_culture/lloyd/lloyd_index.html

 

To address the topic at hand, it's a parallel future.

 

Functional languages are certainly ramping up to take advantage of multiple cores. Erlang, Haskell, JoCaml, and Scala all have excellent parallelism across multiple CPU cores.
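For a taste of what this looks like in Haskell, here is a minimal sketch using `par` and `pseq` from the `parallel` package. The naive Fibonacci is just a stand-in CPU-bound workload, and the cutoff value is an arbitrary assumption:

```haskell
import Control.Parallel (par, pseq)

-- Naive Fibonacci, used here only as a CPU-bound stand-in workload.
fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

-- `par` sparks the first argument for evaluation on another core,
-- `pseq` forces the second before the results are combined.
parFib :: Int -> Integer
parFib n
  | n < 25    = fib n            -- below this, sparking isn't worth the overhead
  | otherwise = a `par` (b `pseq` (a + b))
  where
    a = parFib (n - 1)
    b = parFib (n - 2)

main :: IO ()
main = print (parFib 35)
```

Compiled with `ghc -threaded` and run with `+RTS -N`, the two recursive calls can be evaluated on separate cores without any change to the program's meaning.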

 

The main problem plaguing multithreaded programming is the use of shared state concurrency, namely threads. Shared state management is a source of deadlocks and race conditions. Moving to a shared nothing architecture which uses asynchronous message passing dramatically reduces the potential for errors in concurrent programs.
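You can approximate that shared-nothing style even in a language with shared memory by giving each logical process its own mailbox and confining its state to a single thread. A rough Haskell sketch, where the counter "process" is hypothetical and exists only to show the shape (real Erlang-style systems add supervision, selective receive, and so on):

```haskell
import Control.Concurrent (Chan, forkIO, newChan, readChan, writeChan)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)

-- Messages the counter "process" understands; replies come back on an MVar.
data Msg = Increment | Get (MVar Int)

-- The counter owns its state; nothing else can touch it, so there is
-- no lock and no race -- only messages arriving in order.
counter :: Chan Msg -> Int -> IO ()
counter inbox n = do
  msg <- readChan inbox
  case msg of
    Increment   -> counter inbox (n + 1)
    Get replyTo -> putMVar replyTo n >> counter inbox n

main :: IO ()
main = do
  inbox <- newChan
  _ <- forkIO (counter inbox 0)
  mapM_ (\_ -> writeChan inbox Increment) [1 .. 1000 :: Int]
  reply <- newEmptyMVar
  writeChan inbox (Get reply)
  takeMVar reply >>= print   -- prints 1000
```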

 

With the use of a hybrid heap that does facilitate multiprocess shared data for message-passed items, messaging becomes zero-copy while the architecture remains shared-nothing from the programmer's perspective. This is truly the best of both worlds: shared-state concurrency without the potential for programmer error (at least from the perspective of an end user of the language).

 

Perhaps the most difficult part of reasoning about concurrent architectures is the inherent non-determinism, at least in languages which introduce side effects to facilitate concurrency (e.g. IPC).

 

As I understand it, JoCaml facilitates deterministic parallelism through the use of the join calculus. Perhaps the real solution is to parallelize declaratively instead of imperatively...
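Haskell's evaluation strategies are one concrete example of that declarative style: you say *what* may be evaluated in parallel, the result is deterministic, and no side effects are involved. A minimal sketch using `Control.Parallel.Strategies`, again with toy numbers:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A CPU-bound stand-in workload.
sumTo :: Int -> Int
sumTo n = sum [1 .. n]

main :: IO ()
main =
  -- parMap may evaluate each element on a different core; the answer is
  -- exactly the same as with plain map, however the work is scheduled.
  print (sum (parMap rdeepseq sumTo [100000, 200000 .. 2000000]))
```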


Actually, Pangloss, the extra clock speed doesn't do too much for you on its own; all it does is make the transistors cycle faster. In order to accomplish this you have to do things like make the processing pipeline longer, as Intel did with the P4 (anybody remember the 1 GHz P3 outperforming a 1.5 GHz P4?). For a while the industry was stuck in a bid to make processors look like they were running faster than they really were by ramping up clock speeds without many real gains.

 

AMD was the first to back out of that and start making good processors that were more than just clock speed. This is why, starting with the Athlon XPs, the AMDs were outperforming their Intel counterparts even though the Intels were running over a gigahertz faster than they were. My 2.2 GHz A64 could outpace Prescotts running at 3.5 GHz. The point is that GHz don't really determine processing power when you're looking at different processor designs, even though a Prescott at 3.5 will always beat a Prescott at 3.4, assuming all else is equal and they are both the same core.

 

The other thing that happened to shift the industry was that Intel couldn't get a processor out that ran at 4 GHz while AMD was labelling its processors 4000+. The Prescott core was supposed to hit 5 GHz, but in actuality capacitance effects between the circuits, combined with quantum tunnelling, limited the speed to a maximum of 3.8 GHz, and it took them a long time just to get those out.

 

So Intel went back to drawing up better architectures, and now our chips run faster.

 

 

Also, to see what we gave up for those 3.6 GHz clock speeds in terms of the single cores:

 

The P3 had a 13-stage pipeline, the P4 a 20-stage one, and the Prescott core a 32-stage pipeline, which means that every mispredicted branch cost the Prescott nearly 3x as many wasted cycles as the P3, because the whole pipeline has to be flushed and refilled.
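To see roughly why the deeper pipeline eats into the clock-speed gain, here's a back-of-the-envelope model. All of the figures (base IPC, mispredict rate, treating the flush penalty as one full pipeline's worth of cycles) are illustrative assumptions, not measurements:

```haskell
-- Very rough model: effective instructions-per-second when a fraction of
-- instructions are mispredicted branches that flush the pipeline.
effectiveIPS :: Double  -- clock in GHz
             -> Double  -- base instructions per cycle (IPC)
             -> Double  -- pipeline depth (flush penalty in cycles)
             -> Double  -- mispredicted branches per instruction
             -> Double
effectiveIPS ghz ipc depth missRate =
  ghz * 1e9 / (1 / ipc + missRate * depth)

main :: IO ()
main = do
  -- Hypothetical "short pipeline" chip: 1.0 GHz, 13 stages.
  print (effectiveIPS 1.0 1.0 13 0.02)
  -- Hypothetical "deep pipeline" chip: 3.6 GHz, 32 stages -- the higher
  -- clock buys less than the ratio of clock speeds would suggest.
  print (effectiveIPS 3.6 1.0 32 0.02)
```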

 

Also, the theoretical limit for silicon chips is somewhere around 11 nm, so in a few years we'll need a new building material if we want to keep improving transistor density.


Right, I think the last thing anybody wants is a rehash of the long-pipeline debacle. It wasn't truly until Core2 Duo (not even Core Duo!) that Intel fully recovered from that nonsense. I agree with you that better-designed processors are a good thing; I'm just concerned that we may have gone overboard on the "megahurtz is bad" angle. Making a chip faster is a perfectly valid way to speed things up. ALL chip improvements have their share of limitations.


Yes, and you can bet that the newer chips are going to be faster now that performance matters more than clock speed; given any processor design, they are going to make it go as fast as it can. However, I doubt we're going to see many chips coming out that go much faster than 3.0 GHz.


ALL chip improvements have their share of limitations.

 

A top-to-bottom MIMD architecture (utilizing the above-mentioned shared-nothing message passing) atop a multicore CPU with an internal crossbar is effectively limitless, and even overcomes the von Neumann bottleneck through extensive parallelism and distributed cache coherency.

 

To quote Erlang creator Joe Armstrong: "Your Erlang program should just run N times faster on an N core processor"


Multi-core processors are a huge improvement over single-core processors since modern operating systems run so many background processes. Virus scans and automatic updates significantly affected the speed of other programs on my Pentium 4, but I don't even notice them on my Core 2 Duo.

 

I don't believe that the typical PC user will benefit from more than two or four cores at the moment. Until more individual programs are structured to use more threads, the Core 2 Duo is probably the best choice for PCs.

 

In my opinion, computer hard drives need the most improvement for speed. My CPU usage rarely reaches 100%, but I'm always waiting for the hard drive to load programs and files.

