
"Cyberization"


Zolar V

If any of you watch Ghost in the Shell, you will easily understand what I am talking about. If you don't, I would advise either watching a few episodes of Ghost in the Shell: Stand Alone Complex or trying to dig deep.

 

OK, in the show there is a cyberspace connection between a person's mind and an avatar in cyberspace. My question is: if you were connected to cyberspace and viewing, say, a video feed, would you see that feed in a separate "workspace" apart from your normal vision? Imagine your vision as a desktop workspace: you can only have so many windows open at once. So I would assume the video feed would play similarly to a memory, where you view the video from a separate "workspace".

 

There is a way to test this hypothesis too.

You know those machines and programs designed to read your brainwaves and translate them into cursor movement on a computer screen, or into visualized letters for typing? Well, you could reverse the process: take a baseline reading while the subject views some simple pictures, translate that baseline reading into a specific current delivered to the brain, and have the test subject view something different while the picture signal is transmitted back into the brain.
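The reversed pipeline could be sketched roughly like this. Everything here is hypothetical: the "reading" is just a normalized array standing in for brainwave data, and the linear gain standing in for signal-to-current transduction is invented for illustration; real neural recording and stimulation are nothing like this simple.

```python
import numpy as np

def baseline_reading(image):
    """Hypothetical stand-in for a brainwave reading taken while the
    subject views `image`: here just a flattened, normalized copy."""
    sig = image.astype(float).ravel()
    return sig / (np.linalg.norm(sig) + 1e-12)

def reading_to_current(reading, gain=1e-3):
    """Hypothetical linear map from a recorded reading to a stimulation
    current (in amps). Real transduction would not be a simple gain."""
    return gain * reading

# Record a baseline on a simple picture, then "play it back" as a
# current while the subject looks at something different.
picture = np.eye(4)  # a simple test pattern
current = reading_to_current(baseline_reading(picture))
```

The interesting measurement would then be whether the subject perceives the played-back pattern alongside, or on top of, what their eyes are seeing.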

 

In regards to memory, you may wonder why it is fuzzy and how our video feed would avoid being fuzzy like a memory. I would say the memory is fuzzy because we are generating it, whereas the video feed is constant, just like our vision.

 

If this post does not fit here or would be better posted somewhere else, please let me know, as I want this idea to fully propagate through this site.


I'm a big GITS fan, but I cannot visualize what you are speaking of.

No pun intended.

Care to render a drawing?

 

 

The cyborgs (more like androids) in GITS have had their biological parts removed. The idea is putting a person's consciousness into a robotic brain. With such hardware, I suspect the biological constraints are removed, so if you want to make a person more like a computer, with visual additions, you can.


Hmm, I'm not sure if I can render a drawing; I may have to try that. But let me try to explain a bit better, if I may.

 

Everything you currently see through your eyes I am going to call Workspace A.

My question is whether, if you were to watch a video feed from cyberspace, what you are watching would be visualized in Workspace A or in Workspace B, Workspace B being somewhere apart from Workspace A.

 

You know when the Major sends a report via an avatar to Ishikawa? You see that little ring thing with her avatar speaking, and you see it through the character's field of vision in the show. What I'm asking is whether that report could appear not in the character's field of vision but someplace else, not affecting the field of vision at all.


I see what you mean. I'm going to call it a window within Desktop 1.

 

If you have ever used a GNU/Linux OS, such as Debian with Metacity and GNOME, you'll be familiar with the idea of virtual desktops.

That, however, doesn't mean the cyberbrain couldn't have multiple desktops, putting that window's visual data in the background when switching to another desktop: Desktop 2.

 

Scene 1:

Person is viewing real world through eyes (Desktop 1).

Window appears in Desktop 1.

 

Scene 2:

Person decides to put away the window

Choices arise

Choice 1: Minimize or close window

Choice 2: Put window in back of mind (create background process)

 

My guess is that Choice 1 would conserve more energy, but Choice 2 would be feasible given the cybernetic possibilities of computer science and detailed knowledge of subconscious processes and visual neuroscience.

 

http://en.wikipedia.org/wiki/Virtual_desktop
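The two choices above can be modeled as a toy window manager (all class and method names here are invented for illustration, not any real windowing API):

```python
class Window:
    def __init__(self, name):
        self.name = name
        self.state = "visible"      # visible in the current desktop

    def minimize(self):             # Choice 1: cheap, window goes dormant
        self.state = "minimized"

    def background(self):           # Choice 2: keeps running out of sight
        self.state = "background"

class Desktop:
    def __init__(self, number):
        self.number = number
        self.windows = []

    def open(self, name):
        w = Window(name)
        self.windows.append(w)
        return w

    def active_processes(self):
        """Windows still consuming 'attention' (visible or background)."""
        return [w for w in self.windows if w.state != "minimized"]

# Scene 1: the real world is Desktop 1; a video window appears in it.
d1 = Desktop(1)
feed = d1.open("video feed")

# Scene 2: Choice 2 keeps the feed running as a background process,
# while Choice 1 takes a second window out of the attention budget.
feed.background()
report = d1.open("report window")
report.minimize()
```

In this sketch the energy argument shows up directly: minimized windows drop out of `active_processes()`, while backgrounded ones keep consuming resources.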


Another question arises from your truly simplified and understandable example, one that I was initially trying to get at.

 

Would you still be able to view the window in Desktop 2 while viewing Desktop 1?

I would equate this to driving (lol, classic example from GITS: SAC) while viewing, or talking in, the window in Desktop 2.

 

And yes, I am familiar with Linux, and not your standard Linux either...


Yes, but you must keep in mind a few things.

 

1. When the Major cyberjumped while driving, she later came back and Batou yelled at her.

 

He thought it was crazy that she was cyberjumping while driving.

 

2. Batou is also a cyborg, so you have to keep in mind that he must have some preconceived belief that it is impractical to cyberjump and drive at the same time.

 

That's my argument.

 

Even so, Motoko may have run driving as a subconscious process while focusing her conscious attention on the visual material in the cyber world.

 

My guess is that while driving (Workspace 1) one can have a visual window in Workspace 1.

I would equate that to driving while talking on a cell phone.

 

I'm going to assume a cyberbrain would be able to divide the visual cortex between two visual spaces.

 

With enough training it can be done. The problem with the human brain is the relation between visual focus and objects outside of focus.

Objects not within focus are not as sharp as those in focus. However, if a person can develop the ability to view and analyze things outside of focus while viewing what is in focus, then that person can multitask.

 

I'm guessing that kind of thinking would be possible. However, real-world complications would arise.

I would imagine that three workspaces would be open: one with the video feed (Workspace 2), another with the incoming driving data (Workspace 1), and one to display both windows (Workspace 3). Otherwise, there would be some serious subconscious activity going on whenever a person focused on Workspace 2 instead of Workspace 1.
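The three-workspace idea amounts to a trivial compositor: Workspace 3 just displays the other two side by side. A sketch with placeholder arrays standing in for video frames (the shapes and values are arbitrary):

```python
import numpy as np

# Workspace 1: incoming driving data; Workspace 2: the cyberspace feed.
driving = np.zeros((4, 4))   # placeholder frame from the eyes
video   = np.ones((4, 4))    # placeholder frame from cyberspace

# Workspace 3 composites both windows into a single displayed frame.
workspace3 = np.hstack([driving, video])
```

Whether a brain could attend to both halves of that composite frame at once is, of course, the open question of the thread.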

Edited by Genecks

I would have to disagree. Yes, it would take training to adequately use all the space allowed, but I don't think that removing your conscious focus from the visual field of view would make it become unfocused. Rather, I think it would stay focused, and the window you are viewing would also be focused, because it is in focus when it is transmitted to your brain.

Personally, I believe our brain is much more capable of adapting to technology like this than we realize. Currently our brain has adapted to using only one workspace, because that is all it can do; it does not have a separate workspace to switch between. If you were to look at how the brain sends and decodes information, you might come to the same opinion.


Yes, but the point is that Motoko's brain would be a robot brain.

The biological principles that constrain Homo sapiens simply wouldn't apply to the full extent.

 

We can, however, create a real world example these days.

There exist visual eyepieces that people can wear in order to view a monitor.

Let's put the eyepiece on the right eye.

Let's call the visual space it shows Desktop 2.

So, the left eye is viewing Desktop 1 (the real world through visual sensory).

 

If I can divide my attention between Desktop 1 and Desktop 2, then I can get a lot of work done.

Division of attention between the right eye and the left eye would be occurring.

I suspect that with training, a person could get better at driving while viewing the visual data on the right eye.

 

From such an example, you could view both Desktops.

Edited by Genecks

For a real flesh-and-blood brain, you would at the very least have to start with all the visual info passing through your visual cortex. By focusing on one thing you will lose focus on other things, and your attention will be divided. With some training, perhaps you could get to the point where you can pay attention to both things at the same time; however, any extra attention you pay to one will result in slightly less attention to the other. You could, for example, start off by overlaying a semi-transparent video feed over the real-world visual feed.
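That semi-transparent overlay is ordinary alpha blending; a minimal sketch, with arrays standing in for luminance frames:

```python
import numpy as np

def overlay(real, feed, alpha=0.3):
    """Blend a video feed over the real-world frame.
    alpha=0 shows only the real world; alpha=1 shows only the feed."""
    return (1.0 - alpha) * real + alpha * feed

real_world = np.full((2, 2), 100.0)   # placeholder luminance values
video_feed = np.full((2, 2), 200.0)
blended = overlay(real_world, video_feed, alpha=0.3)
# Each pixel: 0.7 * 100 + 0.3 * 200 = 130
```

Turning `alpha` up or down is exactly the "extra attention to one, less to the other" tradeoff described above.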

 

Now, if you simulate a brain in real time, the same will apply. However, suppose you run the simulation at double speed. Now you can focus on both things at once with as much attention as a human paying full attention to each (in theory at least; in practice we humans are very bad at task switching). It will still divide your attention, but you will have more attention to go around. And there's only so much attention you need to pay to driving for it to be safe, especially if your reaction time is far shorter than a normal human's.
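The double-speed point reduces to simple arithmetic: if the simulation runs k times faster than real time, the attention budget effectively scales by k. A toy model (the numbers are illustrative, not empirical):

```python
def attention_left(speedup, tasks):
    """Remaining attention budget. Each task's cost is the fraction of
    real-time attention it demands; the total budget scales with the
    simulation speedup relative to real time."""
    budget = 1.0 * speedup
    return budget - sum(tasks)

# A real-time brain cannot fully attend to driving AND a video feed:
deficit = attention_left(1.0, [1.0, 1.0])   # overcommitted by 1.0

# A double-speed simulation can give each task full attention:
surplus = attention_left(2.0, [1.0, 1.0])   # budget exactly spent
```

The model ignores task-switching overhead, which, as noted above, is exactly where real humans do badly.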



 

Following your example, it would be quite logical to build attention via your stimuli, and only if you were injecting an unprocessed signal.

 

I think the video signal coming from the eyes goes to the visual cortex for processing, then gets routed to other areas for review and memorization.

I would think that if you were transmitting a video feed to the brain, the transmitted signal could already be processed and would not have to pass through the visual cortex. It could therefore be a persistent visual feed that you see apart from where you are seeing normally, aka the window thingy.

 

I personally think our brain is much more capable of everything than we believe it is. Cybernetics and man-machine interfaces seem so realistic, considering the brain has evolved to control our body with precision and shows remarkable capacity in other areas, such as memory.

Does anyone want to try the experiment I described above? I would volunteer as the test subject but, unfortunately, I can't.

I have another experiment that would demonstrate nerve-signal translation into machine movement, similar to an amputee's prosthetic limb, except the subject doesn't have to be an amputee.



Yes. It seems our brain considers tools as an extension of our body. Controlling things via nerve impulses has also already been done.


Can you phrase your question as a reference to Neuromancer or Snow Crash? :D

 

what?


Yes. It seems our brain considers tools as an extension of our body. Controlling things via nerve impulses has also already been done.

 

Yes, I know, but I want to see it taken a step further, where we get feedback from the automated things we control, and I am not just talking about visual feedback either.

Imagine having a robotic eye that can see the entire electromagnetic spectrum (preferably 1.0 kHz to 18 GHz, plus whatever frequencies X-ray, infrared, and ultraviolet are). Well, I suppose that would just be visual feedback. :D

But you should understand my meaning.


But why a robotic eye? Recently, monkeys were cured of colorblindness by adding the missing color cone cells. If we can do that, it is just a small step to adding new cone cells, i.e., expanding the spectrum of light our natural eye can see. Considering current technology, I would consider that far superior to an artificial implant.


Oh, they're only the two greatest cyberpunk novels of all time...

 

READ??!

Good sir, I am at a terrible loss as to why anyone, including you, would use such archaic means to acquire knowledge or entertainment when such visually stimulating media exists.



Because there isn't anything more visually stimulating than your own hi-def, 3-D, fully explorable and interactive imagination.


But why a robotic eye? Recently, monkeys were cured of colorblindness by adding the missing color cone cells. If we can do that, it is just a small step to adding new cone cells, i.e., expanding the spectrum of light our natural eye can see. Considering current technology, I would consider that far superior to an artificial implant.

 

OK, how about a cybernetic arm used to increase the user's strength, for military applications where you need extreme mobility and strength, such as unloading extremely heavy cargo from a transport truck, e.g., tank rounds or tread-repair equipment?

How about a cybernetic leg that allows increased speed (or strength), letting the user run really fast? It could be used, for example, in spying, to sabotage something.

Of course, you might cite the example of the exoskeleton, but that thing is pretty bulky, and it requires a power cord.

Or you might bring up the isolated strength gene found in mice, but in that case the 1:1 ratio (our ratio) proved to be the best solution.


because there isn't anything more visually stimulating than your own hi-def, 3-D, fully explorable and interactive imagination.

 

I couldn't agree more. :D

I have a library of about 200 books at home, all of which I have read... twice...

BTW, I don't have a life...


OK, how about a cybernetic arm used to increase the user's strength, for military applications where you need extreme mobility and strength, such as unloading extremely heavy cargo from a transport truck, e.g., tank rounds or tread-repair equipment?

How about a cybernetic leg that allows increased speed (or strength), letting the user run really fast? It could be used, for example, in spying, to sabotage something.

Of course, you might cite the example of the exoskeleton, but that thing is pretty bulky, and it requires a power cord.

 

Nope. These things may work well, but they cannot easily be powered, and they involve tradeoffs rather than being purely superior. The exoskeleton has the advantage of being wearable, so you can take it off when you don't need it. I'm not familiar with any of these contraptions adding to your mobility, however. I know robotic surgical systems can remove hand tremor and translate large movements into small, precise movements. On the whole, though, if you ask the people designing these systems whether they would want one as an implant, I'm fairly confident you will get a good, solid NO! Things may change in the future; the first order of business would be better energy storage.

 

Or you might bring up the isolated strength gene found in mice, but in that case the 1:1 ratio (our ratio) proved to be the best solution.

 

Hm, but it does need testing. I think cheating athletes will volunteer.



I know we are a long way off from a cybernetic implant that increases our intrinsic abilities, but I do think we have the technology to start merging machine interfaces and feedback with biological interfaces.

Personally, I would like to see, or make myself, a robotic suit... yeah, yeah, I know, another crackpot on robotic suits.

But seriously, I have many, many designs in my head for such a suit, and the only problem is power. I hypothesize that with current technology we could in fact make a miniature nuclear power core, using a heat pump to cool the system. I have rendered a few sketches of the idea. On paper, that is.

I have another idea, this one reminiscent of Dr. Octavius: I think that if we were to add a mechanical arm to a human, our brain could in fact adapt and learn to use the arm(s) by sending nerve impulses to the man-machine interface, i.e., a silicon chip that translates nerve impulses into electrical pulses carried through wires to a CPU.

Edited by Zolar V

I know we are a long way off from a cybernetic implant that increases our intrinsic abilities, but I do think we have the technology to start merging machine interfaces and feedback with biological interfaces.

 

Oh definitely. And for that, we mostly have handicapped people to thank.



But how about distributing that technology to increase the abilities of non-handicapped people, thereby increasing their productivity?

 

So, I was reading the "Are dreams reality" thread in the Medical Science forum,

and I think a slight comment on that subject could be added to this topic.

-quoted from Zolar V-

"I have had more than a few lucid dreams, and I have noticed that there are just some parts of the dream I can't access. For example, I may have had a dream about a room and what I did in it, but when lucidity kicked in and I tried to open the door, I couldn't, and was just reset back to the previous event, before opening the door."

 

When I would try to open the door, things would get staticky or fuzzy.

 

"It's kind of interesting that we have consciousness during sleep. I would think that, with a lot of research, we might be able to use the time spent asleep gathering information, like reading a book or something. I mean, reality is simple: it consists of the data points we currently see/hear/feel/touch, which determine its current state (reality). So I would imagine that during sleep, when you know you're sleeping, you could inject your own data points for whatever you want."

 

You could inject your own data points, i.e., our video feed, to do something productive or entertaining.


READ??!

Good sir, I am at a terrible loss as to why anyone, including you, would use such archaic means to acquire knowledge or entertainment when such visually stimulating media exists.

 

Because if anyone actually succeeds at making a movie out of Neuromancer, it will be a complete abomination. (Though that doesn't stop people from trying.)

 

Neuromancer is a book that I am fairly convinced could never be successfully adapted into a movie.

 

Snow Crash, on the other hand, would probably make for an awesome movie.


I know we are a long way off from a cybernetic implant that increases our intrinsic abilities, but I do think we have the technology to start merging machine interfaces and feedback with biological interfaces.

Personally, I would like to see, or make myself, a robotic suit...

 

Perhaps you should read about Brain/Computer Interfaces.

Edited by bascule


I think I'm going to check those books out.

 

Perhaps you should read about Brain/Computer Interfaces.

 

Mayhap I should.

