
Why do people need fast/strong computers


silverghoul1


That depends on what you are doing, really. A gaming PC is not fast or strong; at the end of the day it is merely OK. A fast computer can run a website or work through complicated equations. For something like a multimedia or film course you might need expensive cameras. If you were doing animation you might want drawing tablets and Flash. If you are doing math like stock-market calculations you need a fast CPU. If you are imaging people's brains or doing ultrasound you need to be able to process that information fast enough to actually run the software. Consider that most large-scale movies can use render farms of maybe 3,000 graphics processing units at one time to make a single movie. If you think your gaming PC could make Oblivion or Skyrim, you are wrong; it is nowhere near powerful enough. Disney's latest film, Big Hero 6, was rendered on a 55,000-core supercomputer. http://www.engadget.com/2014/10/18/disney-big-hero-6/

Some people prefer stable computers. That's what big brother does: he makes big, stable banking computers that don't lose numbers.

Edited by fiveworlds

Videos make use of your GPU. Some videos will not run if your GPU isn't powerful enough. I notice the difference on my mom's old laptop; it won't run the newer TV shows and so on. Realistically, what is beefy (OK) today will only be average in four years' time.

Edited by fiveworlds

What about a person who doesn't do animation, music, etc.? What purpose does a beefy computer serve for someone who uses it for general things, like emails, videos, websites, etc.?

None, as long as the computer is up to the task it's meant for. Why would you get a Ferrari to go grocery shopping?


None, as long as the computer is up to the task it's meant for. Why would you get a Ferrari to go grocery shopping?

 

Because not all driving is to the store. Sometimes you need more than the basic rolley wheels and a trunk.

 

Unless I'm absolutely certain I'm never going anywhere but the store, doesn't it make sense to think a bit more versatile when making a three-year investment?


 

Because not all driving is to the store. Sometimes you need more than the basic rolley wheels and a trunk.

 

Unless I'm absolutely certain I'm never going anywhere but the store, doesn't it make sense to think a bit more versatile when making a three-year investment?

Absolutely. But the question, as far as I can tell, is posed as "a person won't ever use this tool for anything other than these basic tasks, so why would they need top-of-the-line tools?". If you usually only surf and check e-mails, but sometimes you want to play games, you might want something better. But let's take my grandfather as an example. He considered getting a computer on which to store digital images. Store, mind you, not even view them. He would have been absolutely fine with an old, slow computer with enough hard drive space. Anything beyond that would be a waste of money.

 

Of course, the question is harder to answer when the person considering the three-year investment says "I only want to surf", and you go "do you really?".


From my first computer clocking 8 MHz and sporting 2 -count 'em, 2- 5 1/4" floppy drives to my current lappy clocking IDK* and sporting terabytes of drive storage, my need for speed is driven by math calculations. I have been running one search 24/7 for the last 17 months. Maybe that's not average. :blink: Ya think? :lol: I do like the power in the photography area compared to the old film and darkroom days, but all the e-mail, videos, games, and forums are just gravy.

 

*Looks like my computer runs @ 2.3 GHz

Edited by Acme

Buy the computer that does what you need it to do.

 

At the end of the day, that's the only rule of thumb. If all you want to do is surf the web and check your email, buy a tablet, not a computer. They are (normally) cheaper and easier to carry (even compared to a laptop).

 

If all you need to do is basic word processing for school or something like that (think: I need to run an office suite, but I'm not doing heavy spreadsheet work), then find the cheapest machine that will actually run the software you plan to use.

 

Where you need a high-end machine is for things like CAD and engineering applications, massive data processing (and even then you're better off throwing that at a farm of cheap boxes than trying to do it all on one machine - some people apparently repurpose GPUs for this sort of thing), and gaming, if you care about things like graphics quality and high frame rates.

Edited by Greg H.

  • 2 weeks later...

Most of the time, the "reasons" for stronger hardware are extremely bad, namely poor software. Windows 7 brings me little that Windows 95 didn't, and above all, I see no improvement (NTFS, the no-execute bit, a few more) that would require any extra power.

 

Sometimes the user has good reasons to need processing power. To compute the shape of a molecule, the composition of a flame, to find error-correcting codes... a PC could usefully be 1,000 times faster. That would serve individual users and doesn't result from poor programming.

 

Some special projects that don't fit on a PC have good reasons to need processing power and could use more.

Say, you search for distribution patterns in a catalogue of stars.

Or you just make a picture of the microwave sky, from many antennas at many wavelengths.

Or you search for the disintegration of unknown particles in the shower resulting from many collisions.

Or you try to build a bilingual dictionary - or better, a translator - by comparing en.wiki and es.wiki using machine learning.

And so on and so forth. There are many examples.


Banal things that I'd like to go faster on my PC:

  • Compare folders containing sub-sub-subfolders with 11,000 files of 0.2 MB each. Though my X25-E limits my E8600 in that operation, and maybe the application (FreeCommander) could improve.
  • 7-Zip 300 MB of data. This one is already multithreaded, an easy job for a new machine.
  • Search for a file by its contents in sub-sub-subfolders with 11,000 files. I know indexing exists, but I hate background tasks when I expect inactivity.
  • Run the antivirus over the disk.
  • Scroll instantly through PDF documents whose many JPEGs were chosen too big by the author and must be resized for display.
  • Check the authenticity of downloaded files. Better software supposedly suffices.
  • Start and stop the OS, of course, including Win2k.

Most of these tasks can easily be made parallel and will be rewritten in coming OS versions, so this is an easy job for a future machine. In fact, many could run on the video card.
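
For what it's worth, a rough sketch of how the "search a file by its contents" item could be spread over all cores in Python; the folder path and the search string are placeholders, not a real tool:

    import os
    from concurrent.futures import ProcessPoolExecutor

    NEEDLE = b"thermodynamics"          # placeholder search string

    def file_contains(path):
        # return the path if the file contains NEEDLE, None otherwise
        try:
            with open(path, "rb") as f:
                return path if NEEDLE in f.read() else None
        except OSError:
            return None

    def all_files(root):
        # walk the whole tree, yielding every file path
        for dirpath, _dirs, names in os.walk(root):
            for name in names:
                yield os.path.join(dirpath, name)

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:     # one worker per core by default
            hits = [p for p in pool.map(file_contains,
                                        all_files("D:/archive"),   # placeholder folder
                                        chunksize=64) if p]
        print(len(hits), "matching files")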

 

In contrast, the molecule and flame computation applications I have are for the i387, so a better machine shall please accelerate this code by a factor of 10^3, thank you. The latest Intel chips are only 25% faster on such code, so I keep my Core 2 Duo.

 

----------


Some prospective applications that could run on a PC:

  • CT scans run on modified video processors, I believe. The CPU would be cheaper where possible.
  • Decompress HTML code, if it gets compressed some day. It still makes up about 1/4 of the size of web pages, so we could save about 1/5 of the volume and throughput of the Internet.
  • Run a general chemical-equilibrium program when it exists (for gases, liquids, solids, in many solvents - the ones I've seen are for electrolytes or flames).
  • Estimate properly the melting point of a compound. There is no theory yet; it may need to try all possible stackings, if needed all possible conformations, and also check all vibration and rotation modes, so that's heavy computation.
  • Guess the structure of an organic molecule from its IR spectrum and more information.
  • Guess synthesis paths for an organic molecule.

The last two rely heavily on AI, so the necessary computing power is hard to predict.

Edited by Enthalpy

Decompress HTML code, if it gets compressed some day. It still makes up about 1/4 of the size of web pages, so we could save about 1/5 of the volume and throughput of the Internet.

 

Enthalpy...

Have you ever programmed an HTTP server, or an application connecting to one over the net?

The HTTP response *is* compressed most of the time.

The web browser sends a request listing the compression methods it supports, and a web server such as Apache returns an HTTP response compressed with a method the client supports.

https://en.wikipedia.org/wiki/HTTP_compression
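
For anyone curious, a minimal sketch of that negotiation in Python (standard library only; the URL is just the page linked above, and the User-Agent string is a throwaway placeholder):

    import gzip
    import urllib.request

    req = urllib.request.Request(
        "https://en.wikipedia.org/wiki/HTTP_compression",  # the page linked above
        headers={
            "Accept-Encoding": "gzip",          # the client's offer
            "User-Agent": "compression-demo",   # placeholder UA string
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        # the server signals with Content-Encoding when it compressed the body
        if resp.headers.get("Content-Encoding") == "gzip":
            body = gzip.decompress(body)
    print(len(body), "bytes of HTML after decompression")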

Edited by Sensei

How slowly does a big LaTeX document compile these days? That could be a reason for a faster machine, and that piece of software won't be rewritten any time soon.

10 pages in a couple of seconds, including running BibTeX and pdflatex a couple of times. I think my PhD thesis only took seconds. It never felt like a reason to increase performance.


I use supercomputing facilities for analyzing high-throughput sequence data. Output files are generally 3-5 GB per sample, so you need a decent amount of computing power to run analyses in a sensible time frame. I also have a reasonably powerful desktop for data visualization with CLC Workbench, R, Tableau, etc.


Same here, except I do not have the luxury of a supercomputing facility (just a few servers with Tesla GPUs). In addition to sequence data, MS data is even worse, and the same goes for the analysis of high-resolution microscopic images (especially time series).


  • 3 weeks later...

I'd like from my OS or file navigator these easily parallelized tasks:

 

Find all text files that strongly resemble a given one - including txt, rtf, doc, htm... optionally across different types. That is, I designate one text file, and the navigator finds all files in a big folder (10,000 files) that are probably edited versions of the reference. In more detail: some text portions may have been added, erased, or moved within the file...
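
A crude sketch of what I mean, using only Python's standard difflib; the reference filename, the folder name, and the 0.7 cutoff are made-up illustrations (a real tool would also handle .rtf, .doc and .htm, not just .txt):

    import difflib
    import pathlib

    # placeholder reference file and folder
    reference = pathlib.Path("report_v1.txt").read_text(errors="ignore")

    candidates = []
    for path in pathlib.Path("big_folder").rglob("*.txt"):
        text = path.read_text(errors="ignore")
        # quick_ratio() is a cheap upper bound on textual similarity
        ratio = difflib.SequenceMatcher(None, reference, text).quick_ratio()
        if ratio > 0.7:                       # crude "strongly resembles" cutoff
            candidates.append((ratio, path))

    for ratio, path in sorted(candidates, reverse=True):
        print(f"{ratio:.2f}  {path}")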

 

Find all images that resemble a given one. In a first approach, "resemble" can mean: cropped, scaled, rotated by multiples of 90°, with changed contrast and luminosity. And of course, across varied compression methods. Consider that an amateur photographer has many tens of thousands of pictures.
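
A sketch of that first approach using a perceptual hash; it assumes the third-party Pillow and ImageHash packages, the folder name is a placeholder, and the Hamming-distance threshold of 6 is arbitrary (plain pHash copes with scaling, recompression and mild contrast changes, but crops and 90° rotations would need extra work):

    import pathlib

    from PIL import Image            # third-party Pillow, assumed installed
    import imagehash                 # third-party ImageHash, assumed installed

    photos = list(pathlib.Path("pictures").rglob("*.jpg"))   # placeholder folder
    hashes = {p: imagehash.phash(Image.open(p)) for p in photos}

    reference = hashes[photos[0]]    # pick any photo as the query image
    # hash subtraction gives the Hamming distance between the two fingerprints
    similar = [p for p, h in hashes.items() if reference - h <= 6]
    print(similar)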

 

And in a later approach, "resemble" would mean: "find all my pictures where Tom the cat is" - not mistaking him for his fellows, of course. Do it however you want; I'm the idea man. You wrote "fast computers", didn't you?

 

==========

 

Above all, I'd like computers to run sequential applications faster. I know this is what Intel and AMD do not want to read, sorry for that, but my demanding applications were written for DOS and the 8086/8087, received a Windows graphical interface later, and nobody will rewrite them in the foreseeable future. This includes chemical equilibrium in rocket chambers and molecular conformation. The stone-old code is single-threaded and uses neither AVX nor SSE. So I have no reason to buy a 3.3 GHz twelve-core with AVX-512, but I'd be happy with a machine as cycle-efficient as the Wolfdale, even without SSE, at 6 GHz or 15 GHz.

 

Though, maybe the sequencer can further improve the parallel execution of apparently sequential code? Both pieces of software I gave as examples are optimization problems, which are linearized and then solved step by step. Implicitly, the code is hugely parallel - it was only written as sequential loops historically.
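
A toy illustration (not my actual applications) of what "implicitly parallel but written sequentially" looks like; the NumPy version is the form a vectorising optimizer would ideally reach on its own:

    import numpy as np

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    # historical, sequential style: one multiply-accumulate per loop pass
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]

    # the same computation in parallel-friendly form (SSE/AVX underneath)
    total_vec = np.dot(a, b)
    print(total, total_vec)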

 

The Wolfdale's sequencer already achieves one multiplication-accumulation per cycle on such code, hence it does the condition test, index incrementation, jump, data read and write in parallel with the float computation, taking nearly one cycle per loop pass. That's 3 to 20 times better than the Tualatin, and the Ivy Bridge gains a further 20%.

 

The step beyond would be for the sequencer to unroll loops to exploit parallelism among successive passes where available. Seemingly the Cores still don't do it, and this would bring them beyond one pass per cycle. The compute hardware for that has long been available and unused, including for software not relying on float calculations.

 

Unrolling the loops would normally be the compiler's job, but these applications provide only the executable. Another technical option would be an optimizer that reads the sequential executable and produces a parallel version of it for SSE and AVX; a legal difficulty is the copyright of the executable.

 

Such an optimizer could also seek multitask parallelism in executable code. This is less easy and potentially unsafe. It could check, for instance, whether functions can run independently of one another, then send the multiple calls to new tasks on different cores. Or cut millions of passes through a loop into chunks of 100,000 written as new tasks. This difficult optimizer would best be programmed in an artificial-intelligence style.

 

Marc Schaefer, aka Enthalpy


Sounds like what you're looking for, Enthalpy, is a processor with a very long pipeline.

The last processor Intel produced with a long, 32-stage pipeline was the P4D, designed with the hope of scaling it to 10 GHz.

Unfortunately it ran into a heat brick wall at 4 GHz, and AMD, with its shorter pipeline (where each stage does more work) and superscalar (issuing multiple instructions concurrently) Athlon 64, wiped the floor with them.

Intel had to go back, dust off the shorter-pipeline P3 architecture, and make it superscalar for the Core architecture.

Intel has since evolved through Core, Core 2, and Core i by adding multiple cores on the same die (up to 8 physical and 16 logical on its latest CPUs), and AMD has followed suit.

 

Unless you can find some sort of emulator, I'm afraid you're out of luck.


Yes, I certainly agree there are technological reasons to limit the clock frequency; semiconductors were my first occupation. Graphics cards go even further by running at only about 1 GHz, since for them parallelism is easy and power consumption is the limit.

 

It's just that... I can't do anything with the multiple cores or the AVX, because they don't fit my applications, and these won't evolve.

 

There is some headroom, though, because processes have improved, and it doesn't always need longer pipelines and their drawbacks. Present CPUs have 6 cores with 256-bit AVX at 3.4 GHz: that's 24 units on 64-bit floats able to work simultaneously without burning - plus the graphics coprocessor now. If power were the only limit (it's not), the f^2.40 law would permit a single 64-bit unit at 13 GHz.
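
For completeness, the arithmetic behind that 13 GHz figure, assuming power per unit grows as f^2.40:

    # 24 units at 3.4 GHz vs. a single unit in the same power budget,
    # assuming power per unit grows as f**2.4 (the law quoted above)
    print(3.4 * 24 ** (1 / 2.4))    # ~12.8, i.e. roughly 13 GHz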

 

Just for completeness, the A64 wasn't the first CPU to execute several operations in parallel. My Pentium 1 could already work on load/store, loops and numbers simultaneously, though less perfectly than the Core, and it already had 2 ALUs that computed simultaneously. At that time, I measured 2 or 3 cycles per loop pass containing many instructions.

 

And while the Core does have a pipe length similar to the P3's, it's an (excellent) redesign from scratch. Where the P3 took 2 cycles for a mul, the Core does a mul-acc per cycle, and it emulates the MMX instructions on its SSE hardware while the P3 had both. But the Core's subtle sequencer owes the P4 a great deal, with decoded instructions in L1 and micro-ops.

 

I put much hope in a parallelizer for existing machine code. That would be perfect. I'd even spend money on a many-core CPU then.

 

====================


One heavy trend in the near future is artificial intelligence. Be it useful to consumers or not, it will spread to home PCs simply because companies are beginning to use it on supercomputers (search the Web for Yann LeCun), so it will become fashionable and programmers will seek consumer applications, and possibly find some.

 

Different kinds of jobs need different capabilities from the machine. In addition, the necessary computing power is hard to predict.

 

AI programmes can apply established knowledge to a new situation. These are, for instance, expert systems. Their computing needs range from moderate to unobtainable; for instance, language translation should need speed. Such programmes typically process addresses and make comparisons, make unpredictable jumps in the programme and through the RAM, but compute little on floats - even integer multiplications are less common.

 

Or they can infer new knowledge from a set of observations, aka "machine learning", for which neural networks still seem to be the standard method. Their needs closely resemble scientific computing, with the optimization of huge linearized sets of equations. Yann ran the first attempts on a Cray-1, and now machines like Blue Gene are beginning to make sense of big amounts of data, like abundant sets of pictures, or scientific papers. It's the kind of programme I had in mind when asking to find the pictures featuring Tom the cat but not his fellows.
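
A toy example of why that workload looks like scientific computing: fitting even a simple linear model by gradient descent is dominated by big dense float matrix products (the sizes and step count here are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((10_000, 100))      # observations x features (arbitrary sizes)
    w_true = rng.standard_normal(100)
    y = X @ w_true + 0.01 * rng.standard_normal(10_000)

    w = np.zeros(100)
    for _ in range(200):                        # plain gradient descent on a linearised model
        grad = X.T @ (X @ w - y) / len(y)       # the matrix products dominate the cost
        w -= 0.1 * grad
    print(np.linalg.norm(w - w_true))           # should be close to zero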

 

While PFlops at home must wait a bit, TFlops are already available in a GPU, or nearly so, and much more is possible at a small company. For machine learning, more Flops mean learning more subtle knowledge, or learning from a bigger observation set.

Edited by Enthalpy
