
Sunday, March 10, 2013

11. Future of Humanity Institute. Artificial Intelligence Reptile Brain Function. Countdown Oops Cycorp.

Empathy
In the paper, "motivation selection in intelligent agents," that got Daniel Dewey hired at the Future of Humanity Institute, thoughts were butterflies. Working memory was analogized as the brain’s butterfly net, "to scoop our scattered thoughts into its attentional gaze." But the net needed to be bigger. Quantitatively, more is better, right? So since "the average human brain can juggle seven discrete chunks of information simultaneously; geniuses can sometimes manage nine," the idea was to multiply by ten. Dewey posited a "hard cap on the complexity of thought. If we could sift through 90 concepts at once, or recall trillions of bits of data on command, we could access a whole new order of mental landscapes." You can see the numbers churning, 7, 9, 90, until Kurzweil's trillion, trillion singular artilect, femtotech brain comes online.

Human cognition is a problem because it is built of "biochemical impulses" such as empathy..."not an essential component of intelligence." So, if you can split off intelligence from being and make it an entity unto itself, you can have an artificial intelligence, emphasis on artificial, a project derived from Nietzsche: human fission, split down the middle and divide the consequences. But here come the ants again: "AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent" (Nick Bostrom), "a super intelligence that might not take our interests into consideration, just like we don’t take root systems or ant colonies into account when we go to construct a building."

The perennial ant-man-mouse-butterfly analogy explicit in Existential Risk decrees that lower creatures exist only to serve the higher functions of intelligence and may be discarded at will, including man himself when superman comes. Hugo de Garis, Kurzweil, Dewey, Bostrom. It's easy for them to imagine an insane machine:

" ‘One day we might ask it how to cure a rare disease that we haven’t beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage — and then it would take that advantage and start doing what it wants to in the world."

As in the replacement of being with thought, of memory and contradiction with reason, and as in the ant-man-mouse analogy, Existential Risk worries that the artificial brain might suddenly go soft: "it is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal — something cuddly and utilitarian, like maximizing human happiness, except an AI might think that human happiness is a biochemical phenomenon."

Benevolence is ruled out of these straw arguments as a condition of existence; so is happiness, equivocated as merely biochemical, as necessary to survival. Survival is the only necessity for super intelligence so conceived. A machine that endangered its own survival by seeking happiness or benevolence would be just like the human who thinks that empathy and benevolence enhance life and that without them life is pointless.

The eRisk taker says, "Let’s say you have an Oracle AI that makes predictions," which they do, dozens of Web Bots, but this knowing, this predicting of the future, is a smoke screen for something else: "Bostrom and I discussed the Curiosity Rover, the robot geologist that NASA recently sent to Mars to search for signs that the red planet once [?] harbored life." Which is ridiculous, considering that outer science, sanitized white science, pretends to a 19th-century view of Mars. Unless you wish to wait until the press conference, you should not believe. What it adds up to is what it has been about for a century and more: a concoction of life other than human, whether from Nietzsche, the Red Planet, the galaxies, the aliens, the dolphin, all substitutes for the real. That's why Toby Ord, another great mind, says, ‘I am finding it increasingly plausible that existential risk is the biggest moral issue in the world.’

Contradiction

One could approach the Study of Existential Risk with the knowledge that the very Centre created to study it was the central risk itself. Existential Risk has invented a specialized language and mindset to give it identity. The identity of Existential Risk so defined must by its nature reject other identities of Risk: rule them out rather than incorporate them as dissent within its own study. Persons opposed to these assumptions, excluded from the Centre, would be exterminated, as it were, like pesky ants. The funding for the Centre, the minds of the Centre, its techniques are the risk. The Risk to ourselves is that we at least consider that the cure for ourselves must be ourselves, not another, whether alien or machine. Also we'd like to see the Centre confront the even Realer Risk of all the black-budget science that it denies exists. If Nick Bostrom is the handmaid of CIA and Uni-Corp, he embodies the Risk.
Dismiss ordinary empiricism: it has no access to secret information. The Cambridge Centre for the Study of Existential Risk does not examine its own nuances of existence but assumes them. Ordinary empiricism can know something of the purpose behind chemtrails, underground bases, the quantum leap of science in fifty years, only a hundred or two hundred years after. Too late. But for now the debasement of culture, government, and society, the weather wars, the effects of HAARP, GMO hybrids, mutation, all invite salvation from without as a forced choice. Civilization has lost its inner life. The best comfort the boundless can minister is ayahuasca for knowing and the hope that the infrared Lucifer telescope in Arizona finds the answer fast. New science believes the aliens will be like Social Democrats, for the good of mankind. New science believes Earth will be saved from pollution and overcrowding, that Nietzsche and Schopenhauer represent truth, even if Hawking suspected the putative alien (like the scientist) might not harbor opposition or dissent any better than Spanish priests among the Aztecs.

Existential Riskers also worry about the absence of life in space; witness Enrico Fermi's "where are they?" paradox. Millions of worlds but not one visit! Konstantin Tsiolkovsky argued that if intelligent civilizations are destined to expand out into the universe, then "scores of intelligent civilizations should be crisscrossing our skies." This is what is called the outer science. Inner science is forbidden until it is announced that everything you thought or believed, meaning what was told you, is wrong. Hook, line and sinker. In the meantime Robin Hanson, another DARPA scientist at the Future of Humanity Institute, thinks maybe the universe wears a condom, that there is some "great filter," "something about life itself that stops planets from generating galaxy-colonising civilisations." Maybe science will be seen in our putative future as an inversion of medievalism: "Maybe technologically advanced civilizations choose not to expand into the galaxy, or do so invisibly, for reasons we do not yet understand. Or maybe, something more sinister is going on. Maybe quick extinction is the destiny of all intelligent life." Futurists are a little cloudy today.

Protector of the Indians

From this tardiness of arrival, science had to go ahead and invent its own salvation, that is, super intelligence. Lack of intelligence was about the only problem left. Oh, one other: the roots and the ants. "If you had a machine that was designed specifically to make inferences about the world... a primordial force of nature, like a star system or a hurricane — something strong, but indifferent... a super intelligence that might not take our interests into consideration, just like we don’t take root systems or ant colonies into account when we go to construct a building" (Bostrom, quoted by Andersen). Super intelligence shares with the Aliens the instincts of the Spanish. Who says Earth is not an interesting place? Earth natives must summon a new Las Casas to represent them in the high councils of heaven and the low councils of the labs as Protector of the Humans. Plan B before the aliens land? Superman! Which do you think will let us live, the aliens or the AI? Be grateful.

 
* Dissenting Views: 

1) Maybe existential riskers are in cahoots that even their big brains cannot fathom. The loss of inner life and existential risk are defeated by reason, but even yet, reason both advances and betrays with false principles of inclusion and exclusion. Thus to scientific reason the Cambrian explosion of life disproves both science and the Bible, but in a wider principle of reconciliation another paradigm, call it Contradiction, rules in, holding two conflicting truths in the mind simultaneously without choosing. Reason says that would be worse than what you've already got, says then the inner science, the black science. Pretending, moving and not moving simultaneously, would have to be incorporated into the Centre in order to reach a truly human notion of existential risk, but there are no Keatsians at the Centre. They exterminated them.

2) It is a mere subtext that no technology was ever invented without use, so here comes the sun bomb, the HAARP bomb, the chem bomb.


3) But there is no average human brain if it is seen as made in the image of God; in other words, to apply descriptive techniques from analytical thought to synthesis-making is false. Another theory of the brain is that it thinks unknowing to itself and produces solutions to problems that it did not foresee. Any number of butterflies, "scattered thoughts" relative to the rest of the animal kingdom, is completely arbitrary. The analogy utterly misses the human coin, which is based upon unknowing, not knowing, as the only species of intelligence. To build a machine that can fulfill the quantitative measure on an assumption that first denies the unknowing and second rules out feeling straw-mans both machine and man. To take it as anything else is such a huge defect as to invalidate the Existential Risk project altogether. All these futurist scenarios are the thinking of flawed men who rule out everything except their flaw. Neither do they know that Intelligence is relative, depending on how far out you look. The wheel would have been better uninvented had it led to planetary extinction.
