What is causing the skyrocketing price of video cards?
First posted in Biology, but I don't know how to cross-post, so I will copy-paste it here too.
I came across this perspective when I realized that using CRISPR gene-editing technology in the context of synthetic biology is basically doing code/DLL injection. Another way of seeing it: it is akin to installing Chrome extensions (though this isn't really accurate, because Chrome extensions require explicit permission, but it is an easier example than a computer virus) to add or modify functionality in Chrome. Chrome extensions are able to modify and monitor everything you do in the browser, and with the genes we have designed and inserted into a genetic host, we can do something similar.
This got me thinking whether a new perspective can be drawn on to solve biology problems. Computer engineers design some of the most resilient and precise systems known to man (this may sound like a joke, but the reason your car and airplanes work as expected, with all the programs in them, is the effort engineers have put into making them secure and resilient). No matter how much you hammer away at or abuse your computer, you don't expect its billions of electronic parts to just give up on you. Hospital systems, rocket systems: these are all systems resilient to failure. Modern operating systems have integrity checks, auto-repair, and security schemes designed to protect the information in your computer and to prevent all the extra programs you install from going on a rampage. Sometimes there will be failures, but we can expect the program to be better the next time a security patch rolls out. I do not mention UI/UX because that is pretty subjective, but the security systems in and of themselves are something to behold.
There are many parallels that can be drawn between the human body and computer systems (from the network stack all the way down to kernel software). For example, protein networks/reaction chains can be thought of as computer ABIs and program control flows. One can hook onto them by hijacking one part of the flow. In biological systems this would be gene injection, as mentioned at the start of this post; in software it can be DLL injection.
Take, for example, cancer. I know cancer is a bit too complex to be taken as an example, but it is also, IMO, one of the easiest for illustrating this. I will skip a lot of details, because being nit-picky would only stifle the discussion. Cancer has two main high-level problems: the cancerous genetic mutation and the tumour micro-environment. A genetic mutation caused by external factors can be thought of in terms of software memory corruption, which can be caused by a faulty installed program or by accidental memory rewriting by another program. In modern software, this is mitigated by integrity checks and auto-repair systems, where the system simply copy-pastes a known-good environment. If this doesn't work, software engineers can opt to do in-memory patching of the faulty software. In biological terms, integrity checks are performed by our immune system, which also deletes faulty programs/cells. The immune system is capable of killing cancer cells too. However, there is another factor preventing this: the tumour micro-environment.
In summary, the tumour covers itself in a mesh of normal cells, creates an acidic environment, and builds up interstitial pressure. This is a major design problem that a lot of cancer medicine has to solve in drug delivery. In computer-engineering terms, the corrupted program has barricaded itself behind a locked segment of memory that high-level programs cannot reach (user privileges), where it also puts up resistance by overwriting and shifting its memory footprint dynamically. The easiest way software engineers deal with this is to nuke the memory segment: delete it from memory. We do this with cancer too, by removing the tumour surgically. However, the cancer cells / corrupted programs might still survive elsewhere in the body. Computers typically have scanning programs that check through all of memory; these are more commonly known as anti-viruses. Our body, though, has no proper full-body scan that checks every cell, apart from the over-worked white blood cells.
Essentially, we have developed a highly scalable solution for our computers; why can't we do the same for our bodies, e.g. develop new antivirus programs for our favourite OSes?
Having said all this, I would like to extend this perspective by working with people from both the biology and computer-engineering fields to suggest new places where it could help, and to spark a discussion about it.
In another thread, the author suggested using a jump table of subroutines as a method of optimizing a complex if() { } else if() { } chain.
I have an alternative suggestion: a hand-made binary tree for complex if() chains.
Suppose we have the following code:
if( state == 0 )
{
}
else if( state == 1 )
{
}
[... and so on for states 2 through 13 ...]
else if( state == 14 )
{
[... code executed for 14...]
}
else if( state == 15 )
{
[... code executed for 15...]
}
In the best case, it finds the appropriate code after one comparison; in the worst case it has to run through all 16 comparisons.
We can rearrange such code into:
if( state >= 8 ) // 50% of cases
{
    if( state >= 12 ) // 25% of cases
    {
        if( state >= 14 ) // 12.5% of cases
        {
            if( state == 15 ) // 6.25% of cases
            {
                [... code executed for 15...]
            }
            else
            {
                [... code executed for 14...]
            }
        }
        else // states 12-13
        {
            [...]
        }
    }
    else // states 8-11
    {
        [...]
    }
}
else // states smaller than 8
{
    [...]
}
Now any case is found after at most 4 executions of if(): up to a 400% speedup.
[math]\log_2(16)=4[/math]
If we had 256 cases in a flat if() / else if() chain,
this method would reduce them to just 8 executions of if(): up to a 3200% speedup.
[math]\log_2(256)=8[/math]
Using this makes sense when the cases all have approximately the same probability of being executed.
It shouldn't be hard to write a program that generates such code, with empty {} blocks to fill in, taking the number of cases as an argument.
Best Regards!
Chapter one is here
Here is the next installment (shorter and with more animation)
I would welcome any feedback.
8Z5*CD2%J$7SQRI@Z12
What will come in place of the question mark (?) in the following series, based on the above arrangement?
58C
*ZD
C52
?
Consider a 10 x 10 table with 10 variables on each axis and their correlation coefficients in the cells. Suppose that I want to select every combination of variables whose pairwise correlations are all smaller than 0.1. Is there a way in, let's say, Excel or SPSS to do this?
E.g., VAR1 has a correlation smaller than 0.1 with VAR2, VAR3, and VAR4.
VAR2 and VAR3 have a correlation smaller than 0.1, and VAR2 and VAR4 too, but VAR3 and VAR4 have a correlation of 0.2.
The resultant sets are (1) VAR1, VAR2, and VAR3, and (2) VAR1, VAR2, and VAR4.
You can imagine that this gets really complicated for 10 variables of which some have correlation smaller than 0.1 with at least 6 other variables.
In short: out of 10 variables, I need to find every possible set made from any combination of variables whose inter-variable correlations are all less than 0.1.
How can I do this?
Thank you very much!
F.
1.) Reasonably, evolution is optimising ways of contributing to the increase of entropy, as systems very slowly approach equilibrium (the universe's predicted end).
a.) Within that process, work or activities done through several ranges of intelligent behaviour are reasonably ways of contributing to the increase of entropy. (See source)
b.) As species got more and more intelligent, reasonably, nature was finding better ways to contribute to increases of entropy. (Intelligent systems can be observed as being biased towards entropy maximization)
c.) Humans are slowly getting smarter, but even if we augment our intellect with CRISPR-like routines or implants, we will reasonably be limited by how many computational units, neurons, etc., fit in our skulls.
d.) AGI/ASI won't be subject to the size of the human skull/human cognitive hardware. (The laws of physics/thermodynamics permit human-exceeding intelligence in non-biological form.)
e.) As AGI/ASI won't face the limits that humans do, they are a subsequent step (though non-biological), particularly in the regime of contributing to better ways of increasing entropy, compared to humans.
2.) The above is why the purpose of the human species is reasonably to create AGI/ASI.
[math]S_c(X,\tau) = -k_B \int_{x(t)} \Pr(x(t) \mid x(0)) \, \ln \Pr(x(t) \mid x(0)) \, \mathcal{D}x(t)[/math] Equation (2)
6.) In general, agents are demonstrated to approach more and more complicated macroscopic states (from smaller/earlier, less efficient entropy-maximization states called "microstates"), while activities occur that are "paths" in the total system space.
[math]F_0(X,\tau) = T_c \, \nabla_X S_c(X,\tau) \big|_{X_0}[/math] Equation (4)
7.) In the extension of equation (2), seen in equation (4) above, [math]T_c[/math] is a way to observe the various unique states that a highly intelligent agent may occupy over some time scale [math]\tau[/math]. (The technical way to say this is that "[math]T_c[/math] parametrizes the agent's bias towards entropy maximization".)
8) Beyond human intelligence, AGI/ASI are yet more ways that shall reasonably permit more and more access to activities or "paths" to maximise entropy increase.
A) Looking at item (8), one may see that the human objective/goal is reasonably to trigger the next step in the landscape of things that can access more ways to maximize entropy. (Science likes objectivity.)
B) The trend says nature doesn't just stop at one species; it finds more and more ways to access more entropy-maximization techniques. Humans are one way to get to whichever subsequent step will yield more ways (aka more intelligence, i.e. AGI/ASI) that shall generate additional "macrostates" or paths towards better entropy-maximization methods.
A Deep Learning overview, by gauge group notation:
Pseudocode for the "Supersymmetric Artificial Neural Network":
a. Initialize an input Supercharge-compatible special unitary matrix [math]SU(m|n)[/math]. [See source] (This is the atlas seen in b.)
b. Compute [math]\nabla C[/math] w.r.t. [math]SU(m|n)[/math], where [math]C[/math] is some cost manifold.
- Weight space is reasonably some Kähler-potential-like form: [math]K(\phi,\phi^*)[/math], obtained on some initial projective space [math]CP^{n-1}[/math]. (source)
- It is feasible that [math]CP^{n-1}[/math] (a [math]C^{\infty}[/math]-bounded atlas) may be obtained from charts of Grassmann manifold networks, where there exists some invertible submatrix entailing matrix [math]A \in \phi_i (U_i \cap U_j)[/math] for [math]U_i = \pi(V_i)[/math], where [math]\pi[/math] is a submersion mapping enabling some differentiable Grassmann manifold [math]GF_{k,n}[/math], and [math]V_i = \{u \in R^{n \times k} : det(u_i) \neq 0\}[/math]. (source)
c. Parameterize [math]SU(m|n)[/math] in [math]-\nabla C[/math] terms, by Darboux transformation.
d. Repeat until convergence.
References:
Although not on supermanifolds/supersymmetry but on manifolds instead, here's a talk by separate authors at Harvard University regarding curvatures in Deep Learning (2017).
A relevant debate between Yann LeCun and Gary Marcus, along with my commentary, on the importance of priors in machine learning.
DeepMind's discussion regarding Neuroscience-Inspired Artificial Intelligence.
1.a) Life's purpose is reasonably to do optimization.
2.a) Artificial General Intelligence (AGI) will probably arise in a decade or more, and it shall probably be a better optimizer than humans.
2.d) In fact, AGI is often referred to as the last invention mankind need ever make: https://youtube.com/watch?v=9snY7lhJA4c
3) Thus, our purpose as a species is reasonably to focus on AGI development.
Some benefits of AGI may be:
I) Solve many problems, including aging, death, etc.
II) AGI may be used to help to find a unified theory of everything in physics!
III) Enable a new step in the evolutionary landscape; i.e. general intelligence that's not limited to human brain power, where humans may perhaps no longer be required to exist because smarter, stronger artificial sentient things would instead thrive.
If you know a platform which fulfills these criteria, just click on the link below and add the address:
https://drive.google.com/open?id=1nF_1CqCbPCzikBf5tMaGnSjsZOOihCt1tjG1Q0MY_bM
Thanks!
Would a simulated universe be helpful for developing A.I.?
Also, note, with this universe I'm talking about really basic mechanics; maybe even only 2D.
How can we know if the resulting route is the most efficient? I.e., how can we test the method we have discovered to be the solution without actually solving all permutations for every problem?
Is there some formula that produces the minimum distance without giving the route?
How does an n-gram work? For instance, I can type:
"I drink a Coca Cola in my room today."
You run an n-gram for 2: "I drink" has no meaning, "drink a" no meaning, "a Coca" no meaning, but "Coca Cola" has meaning. And it goes on with all possible combinations in ascending order: "I Coca", "I Cola", "I in", etc.
Collecting all web pages and texts, you'll probably sum up the frequency for "Coca Cola" to be, say, 100, meaning "Coca Cola" carries meaning, along with all words associated with it. Therefore, when you type "I drink a Coca Cola", you would get a list from the database saying "in the room", "in the zoo", "is cold", "is hot", based on the frequencies, and the ones with the highest frequencies usually make sense. And there you have a huge database at your command.
I used the algorithm provided here and ran it on the text dump from Wikipedia: pure text, parsed into sentences, named "AI.txt". It generated a "B.txt" file, which I tried to shove into the database. Then I found out that most Wikipedia information only shows up once, which is not the best for frequency learning. So I've decided to scrape all web pages on the net by looping over IP addresses; if you know how to do that, do help me out. By the way, the file generated from a 3 MB text file is 3.9 GB, and that's only for 3-word grams, so be prepared to have a huge database. If you have no idea what this is, just ignore it; otherwise I'd like to hear your feedback and what you got, thanks.
P.S. No, this is not a homework assignment, just something I made in my free time.
So I would have a loop like 0.0.0.0 to 999.999.999.999 and just use urllib to download the web pages. Let me know if this is correct or if there is a better way, thanks.
Because currently I clock this processor to 4.0 GHz, to balance.
I know that is off by 1%; that is due to overheating above that speed with a standard fan cooler, which sits at a minimum of 44 °C and a maximum of 87 °C. That is below the 100-150 °C range where the chip takes damage, but I always stay below 99 °C to be safe. I have been told it can be OC'd to 5.0 GHz with watercooling.