
optical computers


saleha noor


I am an MS student...

I want to do my final-year thesis on optical computers. I just want to know: is it a hot research area?

Can I proceed with this topic? I need some start-up material to learn the major areas of optical computing where work can be done.

Please help me...

Regards


I saw more about photonic computing a decade ago than I do currently. The Wikipedia article says:

 

There are disagreements among researchers about the future capabilities of optical computers: will they be able to compete with semiconductor-based electronic computers on speed, power consumption, cost, and size? Opponents of the idea that optical computers can be competitive note that[6] real-world logic systems require "logic-level restoration, cascadability, fan-out and input–output isolation", all of which are currently provided by electronic transistors at low cost, low power, and high speed. For optical logic to be competitive beyond a few niche applications, major breakthroughs in non-linear optical device technology would be required, or perhaps a change in the nature of computing itself.

 

Today, research is oriented more towards nanotube and graphene gates than photonic gates. And, of course, there is research into quantum computers.

 

Sorry I cannot be more helpful.


Optical computing was a fashionable research topic two decades ago. Seems out of fashion now.

 

What computers absolutely need right now are better INTERCONNECTS, not computing elements and gates. I don't care one little bit whether they're electric, optic, magneto-something, or magic, but we NEED to transfer massive throughput with minimum delay through one chip - and between many chips wouldn't be bad either.

 

As processes shrink, gates get faster and more numerous, but interconnects get slower for the same distance (more latency), and their combined throughput (with more lines per chip) stays constant. This alone explains why clock speeds have stayed at 3 GHz since the Pentium 4, and why CPUs have made no progress since the Core 2.

 

Processing power would already be available in huge amounts with present processes; graphics chips exploit it less badly than CPUs do. Better gates with the same interconnects bring zero, nothing. Better interconnects with present, plain standard silicon MOS would bring everything.

 

-----

 

If you abandon in-chip interconnects with small latency, you could develop more flexible inter-chip interconnections that offer full-matrix connectivity. Check first whether it's already been done; it is easy with serial links.

http://saposjoint.net/Forum/viewtopic.php?f=66&t=2454#p28231

(posted Wed Aug 11, 2010) - without optical connections or electron beams, just silicon gates.

Any route from 1000+ to 1000+ nodes is allowed by one chip (put more as needed): no interlock, no limitation by a network architecture like a hypercube or hypertorus, more cumulative throughput. But with the usual latency of a serial communication.


Heat is another major problem that may limit making chips with many more cores than now exist. The Pentium instruction set and architecture use many transistors that generate much heat, requiring heat sinks and fans on the CPU chip. Active cooling with thermoelectric coolers might be used, or nanotube or graphene transistors that can run on less than a nanowatt of power may be an alternative. Otherwise, 64+ core chips are unlikely.

While thermoelectric cooling cannot increase the speed of transistors much, nanotube and graphene transistors may increase transistor speed; I've seen estimates of a 100x improvement. However, manufacturing common microchips with graphene or nanotube transistors cannot be done yet, and perhaps never will be. Time will tell.

 

If one uses extreme cooling, enough for superconductivity, then transistors become faster and lower power. Of course, the equipment to super-cool a computer will probably not be small enough for a laptop.


Currently, software does not take full advantage of multiple cores. My laptop has four cores and can process eight simultaneous threads, but few programs can use more than one thread at a time. Thus, my CPU would sit idle most of the time, except that I downloaded the BOINC Manager from berkeley.edu and run programs for scientific research on my otherwise idle computer, e.g., SETI@home and climate models.
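
As a side note, here is a minimal sketch of the idea in Python; the worker function and the numbers are invented for illustration, not taken from BOINC or any real workload. It shows how a single program can spread independent work units across all cores with the standard multiprocessing module.

```python
# Minimal sketch: spreading an embarrassingly parallel job across cores.
# The work function and inputs are invented for illustration.
from multiprocessing import Pool, cpu_count

def simulate_cell(seed: int) -> int:
    """Stand-in for one independent unit of work (e.g., one climate grid cell)."""
    x = seed
    for _ in range(100_000):
        x = (x * 1103515245 + 12345) % 2**31  # cheap pseudo-random churn
    return x

if __name__ == "__main__":
    inputs = range(32)
    with Pool(processes=cpu_count()) as pool:   # one worker process per core
        results = pool.map(simulate_cell, inputs)
    print(f"{len(results)} work units done on {cpu_count()} cores")
```

On a four-core machine this runs roughly four work units at a time, which is exactly the capacity most single-threaded programs leave idle.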

For a single program to take advantage of multiple cores, compilers really need to be modified, which probably is being done. However, additional research is also needed to identify methods for making use of multiple cores.

From: Compiler Parallelization Techniques for Tiled Multicore Processors

Tiled multicore processors were developed to provide scalable performance from ever increasing on-chip transistor counts. To maximize the benefit, however, it requires extensive supports from a compiler for tasks such as exploiting parallelism, communication management, memory locality management, and control-flow management.

In this work we have investigated various compiler parallelization techniques developed for tiled multicore processors. These parallelization techniques are distinguished from conventional parallelizing techniques in that they exploit fine-grained parallelism with the support of low-cost SONs in tiled multicore processors. Although the investigated techniques showed their effectiveness in some benchmarks, there are still many future research topics in this area. With the upcoming era of multicore processors, the applicability and effectiveness of these fine-grained parallelization techniques are likely to have a significant impact on the design of future microprocessors.

 

Technically this is not on topic, but saleha is looking for a topic and optical computing does not seem to be a good one. Perhaps s/he can get some ideas from us for another topic.


Easy things are the ones you already know. Since you are the one who knows what you know, you are the one to decide what that topic will be. On the other hand, a thesis is a learning project, and to learn the most you must challenge yourself to learn new things. IMO your best choice is something you want to know about and find interesting, which means you will feel joy when learning instead of pain.

 

You chose software engineering and operating systems. Further divisions are mobile computing, cloud computing, and software for wearable computers, which can be device drivers (such as using one's hands instead of a mouse for pointing) or applications rather than operating systems (such as software for bionic devices).

 

You can find your topic by searching Wikipedia for topics you know and like and following links to things that interest you to find new topics. Just as I did above.

 

As you find research papers, don't read them for full understanding at first; read the abstract and conclusions. You should be able to find many more papers than you want to include in your thesis. From the many, select a few that focus on an idea. Then read the ones you selected completely. You may need to discard some and select a few more. From the ones you keep, title your paper and write your thesis.


Thank you, sir, for your help.

But sir, my problem is "PROBLEM IDENTIFICATION" in all these fields.

Please guide me: what are the basic thesis problems in the operating systems and software engineering fields?

Regards

I'm not sure what you mean by PROBLEM IDENTIFICATION, and even if I understood, I'm not sure I could answer the question without doing the research. There are a few eternal problems in software engineering: for example, software is always released with bugs and no one knows how to eliminate every one, and there are always more projects to do than staff to do them.

 

Then, there are technology related problems, for example putting computers in phones created a desire to have more apps, but phones do not have the power or performance to run some apps; thus, cloud computing was developed with thin clients running on phones and applications running in the cloud. It is client-server computing, with a twist.

 

Then, as you dig deeper into any given technology, there are always things that can be solved with a computer and software, and things that cannot. Sometimes, hardware is insufficient, and sometimes software.

 

Research papers often have a conclusion that lists issues that are solved and open issues that need additional research. Other papers may have solutions to the issues listed in an earlier one. I think your task in identifying problems is to find the ones that have not been solved in any paper on whatever subject you choose. But if you are unsure, ask your instructor.


I think that is a currently active topic, but I have done no personal research in that area. You should be able to find a few papers on the subject in an hour or two if it is currently active. An online search should be enough at this point, because you should be able to get a good idea of whether a paper is relevant or not from the abstract.


I am not an expert in distributed file systems, but I can help you analyze the issues. I will use the Socratic method.

 

1. Why are distributed file systems needed?

2. In what ways may files be distributed, e.g., whole files, columns, rows, as updated, index, or others?

3. What kinds of synchronization problems will occur, and how may they be resolved?

4. How do access methods affect applications using distributed files?

5. How do distributed files affect applications?

6. How do applications affect distributing files?

7. What happens when two programs need to update the same row and column, and that row-column is distributed (copied) into several files?

 

Answer or partly answer these questions, and I'll ask additional questions. Read papers, but also do thought experiments of running imaginary applications that access files in various distributed configurations; be sure to consider multiple applications running at the same time.
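
To make questions 3 and 7 concrete, here is a minimal sketch in Python of the lost-update race that locking prevents; the "row", the lock, and the numbers are invented stand-ins for illustration, not any real DFS mechanism.

```python
# Sketch of the lost-update race behind questions 3 and 7.
# Two writers update the same "row"; without a lock, the two
# read-modify-write sequences can interleave and lose updates.
import threading

row = {"balance": 0}          # imaginary shared row (replicated in a real DFS)
row_lock = threading.Lock()   # stand-in for a distributed lock manager

def deposit(n_updates: int, use_lock: bool) -> None:
    for _ in range(n_updates):
        if use_lock:
            with row_lock:                     # read-modify-write is atomic
                row["balance"] = row["balance"] + 1
        else:
            current = row["balance"]           # read...
            row["balance"] = current + 1       # ...write: another thread may
                                               # have written in between

if __name__ == "__main__":
    for use_lock in (False, True):
        row["balance"] = 0
        threads = [threading.Thread(target=deposit, args=(100_000, use_lock))
                   for _ in range(2)]
        for t in threads: t.start()
        for t in threads: t.join()
        # Without the lock, the final value can fall below 200000.
        print(f"lock={use_lock}: balance={row['balance']} (expected 200000)")
```

In a real distributed file system the lock itself must be distributed, which is where many of the interesting problems (deadlock, lock-server failure, latency) live.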


You don't like the full interconnection matrix to connect many chips? This one is predictably doable, as it uses established silicon technology - nice for a thesis.

 

It has clear interest: if your chip can connect 100 or 1000 Core processors, server manufacturers will want it - some sort of Xeon processor that already has a fast serial link.

 

If no money is available for a prototype chip, you can breadboard a smaller demo with an FPGA. Or even simulate the chip.

A server farm or multiprocessor can use distributed files, but another topology is files distributed across long distances, with perhaps several multiprocessor server farms. Although interconnect topology is important and related, saleha's latest question was limited to distributed files.


Actually, I have no interest in hardware... so that is why I left that topic.

I have very good grades in distributed systems, so I have decided to go with DFS.

Tomorrow is my meeting with my supervisor; I wanted to do my homework before the meeting, which is why I was seeking help.

So please, Enthalpy, give me suggestions on a DFS topic if you can...

At this time I am seeking answers to the questions posted by EdEarl... I find them interesting.

The major problem is that we don't have access to ACM or IEEE papers, etc.

So I am finding it difficult to answer these questions, as well as to find my thesis topic.


Your library should have access to ACM, IEEE, and other papers, but it may take a while to get copies sent via snail mail.

How about, "The benefits, disadvantages, and strategies for deploying distributed file systems."

 

Additional Hints:

1. Why are distributed file systems needed? Multinational game files and corporate databases.

2. In what ways may files be distributed, e.g., whole files, columns, rows, as updated, index, or others?

3. What kinds of synchronization problems will occur, and how may they be resolved? What does it mean to lock a row (record)?

4. How do access methods affect applications using distributed files? Consider indexing, hashing, and searching.

5. How do distributed files affect applications? Does an application always need to know about distributed files?

6. How do applications affect distributing files? Would tax collections and distributions in Germany affect the same in Australia?

7. What happens when two programs need to update the same row and column, and that row-column is distributed (copied) into several files?

 

Also consider files on the Mars Curiosity Rover and similar ones at NASA on Earth.

 

Your imagination can be a great help. Don't forget the mind experiments.


I'm not sure that research should start with the thought of

I have very good grades in distributed systems, so I have decided to go with DFS.

Schooling for the purpose of good grades is the wrong approach to doing science, imho...

 

 

That said

 

DFS is a really hot topic right now. With the wide and ever-expanding adoption of Hadoop and HDFS, it is a good way to solve a storage throughput problem without going the traditional ways of faster drives, crazier RAIDs, and faster (and EXTREMELY EXPENSIVE) interconnects. For the same $250k/T SAN you get from EMC, you can have a sizable server farm capable of outperforming the SAN in every metric. There are plenty of problems that still need solving: file system indexes (in Hadoop, for example, indexes are still stored in memory on ONE node, without a good way of replicating that node), efficient searching, moving, fs corruption, better algorithms for file distribution, copy maintenance, even reliability between physical locales. There are tons of interesting problems to solve.
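
On the "better algorithms for file distribution" point, one classic technique worth knowing is consistent hashing, which spreads files over nodes so that adding a node moves only a fraction of them. Here is a minimal sketch in Python, with invented node names and md5 as the hash; it illustrates the technique, not HDFS's actual placement scheme.

```python
# Minimal consistent-hashing ring for distributing files over nodes.
# Node names and the choice of md5 are illustrative assumptions.
import bisect
import hashlib

def h(key: str) -> int:
    """Hash a string to a point on the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes: int = 100):
        # Each node gets many virtual points so the load spreads evenly.
        self.ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def node_for(self, filename: str) -> str:
        """The first virtual point clockwise from the file's hash owns it."""
        i = bisect.bisect(self.points, h(filename)) % len(self.ring)
        return self.ring[i][1]

if __name__ == "__main__":
    files = [f"file{i}" for i in range(1000)]
    ring3 = HashRing(["node-a", "node-b", "node-c"])
    ring4 = HashRing(["node-a", "node-b", "node-c", "node-d"])
    moved = sum(ring3.node_for(f) != ring4.node_for(f) for f in files)
    print(f"{moved} of {len(files)} files move when node-d joins")  # roughly 1/4
```

Roughly a quarter of the files move when the fourth node joins; a naive `hash(file) % n_nodes` scheme would move about three quarters of them.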


Mac OS and MS-DOS are not used much anymore. DOS was strictly single-threaded and did not support concurrency. Mac OS 9 did not support full multitasking (concurrent operations) but may have supported multiple users in some way; I've not used a Mac. Linux and Unix were always multitasking systems that supported concurrent operations. There are also mobile operating systems used by phones and tablets, and a few others, such as the one on HP Tandem systems and BSD. You may want to just lump operating systems into single-thread and multithread.



OK, I understand the original poster has no interest in hardware, even though the original query was about optical computers.

 

For those people interested in hardware:

 

I strongly believe a full interconnection matrix for many processors with serial links is easily achieved and interesting for servers and supercomputers.

 

A special chip connecting 500 processors with very fast links is a big and expensive project; a university, though, can make a demonstrator for little money and in reasonable time.

 

Instead of the biggest processors, just connect microcontrollers. Their links have a modest speed and fewer signals, and some are bidirectional from the beginning. One example is the I2C bus:

http://en.wikipedia.org/wiki/I%C2%B2C

 

The interconnection matrix can then be programmable logic: cheap.

The hundreds of processing elements can be affordable: an Arduino or some other board.

 

The reasonable project demonstrates all mechanisms for a full-speed chip: full matrix, collision avoidance at the processing elements, intermediate data storage...
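
For anyone tempted by this as a thesis, the mechanisms can even be tried in software first. Here is a minimal sketch in Python of a full N-by-N matrix with collision avoidance at the output ports, assuming an invented fixed-priority arbitration rule; it is an illustration of the idea, not Enthalpy's actual design.

```python
# Toy cycle-by-cycle model of a full interconnection matrix:
# any input may target any output; each output port grants one
# requester per cycle (collision avoidance) and the losers retry.
import random

N = 8  # processing elements

def simulate(max_cycles: int = 10) -> None:
    # Every element starts with one message for a random destination.
    pending = [{"src": s, "dst": random.randrange(N)} for s in range(N)]
    for cycle in range(max_cycles):
        grants = {}                              # output port -> winning message
        for msg in pending:
            grants.setdefault(msg["dst"], msg)   # first requester wins (fixed priority)
        winners = {id(m) for m in grants.values()}
        pending = [m for m in pending if id(m) not in winners]
        print(f"cycle {cycle}: {len(grants)} delivered, {len(pending)} retry")
        if not pending:
            break

if __name__ == "__main__":
    random.seed(0)
    simulate()
```

The same structure maps naturally onto programmable logic: the grants table becomes per-output arbiters, and the retry list becomes the intermediate data storage mentioned above.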

 

Marc Schaefer, aka Enthalpy


As far as I am aware, the HP Integrity NonStop System (originally Tandem Computers) is the commercial system with the greatest interconnect; it supports 6,320 cores per server node, interconnecting up to 256 nodes via a proprietary "Expand" high-speed network. In effect, all 256 nodes are a single fault-tolerant server with Intel Itanium® 9500 series processors. The interconnect network for NonStop systems has changed many times over the years; I do not know whether any part of it is optical. There are layers of interconnect, starting with the eight-core microprocessor, then interconnects among blades in a node, and finally interconnects among nodes. Nodes do have optical gigabit Ethernet ports.


My proposal is to have an interconnect network less ridiculous compared with the processing power:

- Full throughput of the processor's links, not Ethernet. But serial links, alas.

- A full interconnection matrix! Not some hierarchical network, hypercube, or other graph with bottlenecks.

- I claim that a chip can do that for a few hundred processors. No optics needed.

 

And as a demonstrator, small processors can use a programmable logic chip for the interconnection matrix.


Optical interconnects may have some advantages for interconnecting cores on a microprocessor, for example lower power dissipation and less area used. A single fiber may carry several different frequencies of light with little heat generated. And light-carrying fibers may act as insulators between traces that carry electricity. Thus, it may be possible to make parallel optical interconnects without increasing microprocessor size.

