Integrating a Standard OS and Framework Into AI


Xittenn

Recommended Posts

I am currently working on a widgets library, and the combination of this work with my thinking in a number of other posts I have made has me delving more deeply into the subject of integrating an operating system, as a standard consumer understands one, into artificial intelligence. To give an idea of where I am coming from, I will cover the thoughts that got me to this point and what it is I am hoping to accomplish.

 

A number of years ago I posted on SFN that I wished to do some work on machine vision, and three years later, although I still have not reached any of my goals, I am progressing towards them. Today I made a comment about a device that could turn the pages of a book and relay a video feed so I could read it. This got me thinking about my dolls project (referring back to machine vision) and how this could be a functional task it performs. While working on my widgets library I had one of those moments: I started tying widgets into AI, and also re-examined how I might use my DUALITY framework (a personal project for scientific computing) and its applications.

 

I guess the important part of this thought was the use of widgets in AI. For the most part, I believe, the operating system for an AI device is limited to data processing and does not extend to graphical user interfaces; the same holds true for most scientific computing frameworks. I like to think a bit outside the box, and since I like GUIs, I feel that my creations should enjoy them as well. So how would I make use of such processor-hungry entities?

 

The first thing I did was reduce the concept of the widget to a text-based entity, where nodes are developed that allow decisive actions to take place. Essentially, for an end user, this is all a widget is: a set of nodes that allow functional decisions to be made and result in the fulfillment of the necessary actions. So if I simply installed a Windows OS inside the mind of one of these dolls, reduced the graphical processing to nil, and let the widgets still be processed on a nodal basis, I think this in itself has potential as an easily developed system. Much of this is already being done with embedded systems used for machining in factories, which run reduced versions of different operating systems, including Windows CE and others (I <3 M$). The reality, though, is that there aren't any present systems that pull up Firefox in an AI processing center and browse the internets logically to gain insight into a given problem; even less so in this context, where nodal decisions are being processed based on the widgets made available by the installed software applications.
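To make the idea concrete, here is a minimal sketch of what "a widget reduced to a set of nodes" might look like; everything here (the `WidgetNode` class, the names, the toy menu) is hypothetical and just illustrates the concept of an agent traversing widgets textually with no graphical rendering involved:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class WidgetNode:
    name: str                                    # machine-friendly identifier
    label: str                                   # text the AI can reason over
    action: Optional[Callable[[], str]] = None   # what "clicking" the node does
    children: list = field(default_factory=list)

    def activate(self) -> str:
        """Fire the node's action, as a user click would."""
        return self.action() if self.action else ""

    def find(self, name: str) -> Optional["WidgetNode"]:
        """Depth-first search so an agent can locate a node by name."""
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit is not None:
                return hit
        return None

# A toy menu tree: the agent never sees pixels, only nodes and labels.
root = WidgetNode("menu", "Main menu", children=[
    WidgetNode("open", "Open document", action=lambda: "document opened"),
    WidgetNode("save", "Save document", action=lambda: "document saved"),
])

print(root.find("open").activate())  # -> document opened
```

The graphical layer is entirely absent here, which is the point: the decision-relevant structure of the widget survives the reduction.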

 

The second portion that I see as having potential is the graphical representation that was discarded earlier in this post. The potential here is in the development of a pseudo occipital lobe, where the iconic images already implemented in such a system could be used in combination with a database of images, serving as feed data for a pseudo prefrontal lobe. Combined, I think the node-based textual system would gain greater context through the addition of the graphical iconic data.
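One way to read the "pseudo occipital lobe" idea is nearest-match recognition: an icon captured from the system is compared against a database of known icons, and the best match's label is handed to the text/node layer as extra context. This is a deliberately naive sketch; the 3x3 "icons" and pixel-difference metric are stand-ins for real bitmaps and real feature matching:

```python
def difference(a, b):
    """Sum of absolute pixel differences between two same-sized icons."""
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
                          for x, y in zip(row_a, row_b))

def recognise(icon, database):
    """Return the label of the closest icon in the database."""
    return min(database, key=lambda label: difference(icon, database[label]))

# Toy 3x3 "icons" standing in for real bitmaps.
database = {
    "trash_can": ((0, 9, 0), (9, 9, 9), (9, 9, 9)),
    "folder":    ((9, 9, 0), (9, 0, 9), (9, 9, 9)),
}

seen = ((0, 9, 0), (9, 9, 9), (9, 8, 9))  # a slightly noisy trash can
print(recognise(seen, database))           # -> trash_can
```

The returned label ("trash_can") is exactly the kind of textual token the nodal system from the previous section could consume.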

 

These thoughts, for me at the very least, were rather exciting, even though I have had fragmented visions of such systems before. I will most definitely continue the development of my widgets library with the above in mind, as micromanagement is currently the only means I have of keeping my projects alive (it's like CPR). I am very interested in what others might have to say about this, and even in links to material that covers this stuff; I haven't really done a good search on the topic. :)

Edited by Xittenn

There are robotic agents and robotic parts you can get, and the parts come with a manual on how to connect them to your laptop.

But with everything you get, the model you create in programming is the most important part. You have to work through the AI agent life-cycle, and the key to success in this field is "did it do the right thing?", because with agents, even partial ones or game-playing ones, it's all about decision making.

 

good luck
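The agent life-cycle mentioned above is often pictured as a sense-decide-act loop, closed by exactly that "did it do the right thing?" check. The following is a minimal, entirely made-up sketch using the page-turning example from the opening post; the environment, rules, and goal check are illustrative assumptions, not anyone's actual design:

```python
def sense(environment):
    return environment["page_open"]

def decide(page_open):
    # Decision making: turn the page only when one is open.
    return "turn_page" if page_open else "open_book"

def act(environment, action):
    if action == "open_book":
        environment["page_open"] = True
    elif action == "turn_page":
        environment["page"] += 1
    return environment

def did_the_right_thing(environment, goal_page):
    """The key evaluation question for any agent."""
    return environment["page"] >= goal_page

env = {"page_open": False, "page": 0}
for _ in range(4):                     # a few cycles of the loop
    env = act(env, decide(sense(env)))

print(env["page"], did_the_right_thing(env, 3))  # -> 3 True
```

Everything else (widgets, vision, OS integration) plugs into the sense and act steps; the decide step is where the hard problems live.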



I have previously engineered complete robotic devices from scratch, from etching my own printed circuit boards to building the mechanical arm. The point? No point, just putting that into perspective. The robot I am developing is a biomimetic doll that will incorporate any number of technologies as it is developed. This isn't too important to the topic; it was simply an indication of where I am coming from.

 

The exciting part was the idea of creating a new framework for widgets that allows AI agents to navigate them in an optimized, reduced-cost fashion. Starting from scratch, this could mean any number of things and spans a variety of applications and frameworks, from HTML to desktop applications. The first big wall that I'm seeing is how to incorporate existing frameworks into such a process, especially when there are no injection points across modules :/

 

I just thought this was a pretty cool little project, possibly worth mentioning ..
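One hedged sketch of how a framework could span HTML and desktop widgets is a thin adapter layer: every backend maps its widgets onto one minimal common interface, which agents then walk without any rendering cost. All names here (`AgentWidget`, `from_html`, the element mapping) are invented for illustration, not an existing API:

```python
class AgentWidget:
    """Common face every backend adapts to."""
    def __init__(self, kind, text, children=()):
        self.kind, self.text, self.children = kind, text, list(children)

def from_html(tag, text, children=()):
    # Map an HTML element onto the common interface (very simplified).
    kinds = {"a": "link", "button": "button", "input": "field"}
    return AgentWidget(kinds.get(tag, "container"), text, children)

def navigate(widget, wanted_kind):
    """Cheap depth-first walk: no layout, no painting, just the tree."""
    if widget.kind == wanted_kind:
        yield widget
    for child in widget.children:
        yield from navigate(child, wanted_kind)

page = from_html("div", "", [
    from_html("a", "Next result"),
    from_html("button", "Search"),
])

print([w.text for w in navigate(page, "link")])  # -> ['Next result']
```

A desktop toolkit would get its own `from_*` adapter onto `AgentWidget`; the agent-side `navigate` code would not change, which is where the reduced cost comes from.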

Edited by Xittenn

Xittenn wrote: "I have previously engineered complete robotic devices from scratch ..."

 

Don't play with words, xittenn ...

 

Xittenn wrote: "The exciting part was the idea of creating a new framework for widgets that allowed AI Agents to navigate them ..."

 

Xittenn wrote: "I just thought this was a pretty cool little project, possibly worth mentioning ..."

 

Welcome to Computer Science! We won't find things ready for us to use in our little interesting projects; we have to design everything and implement it ourselves, and if nothing spans this variety, we make that span ourselves. There is nothing we cannot do, if it is planned well.

 

But things don't work just because we wrote some code and managed to get it working; we have to put our effort into the design phase.

 

It's not an easy process, as you know: you have an idea, then you need analysis, then modelling, then validation, then design, then implementation, then testing, and finally experimenting.

 

so, good luck ...

