Growing Children For Bostrom’s Disneyland

[Epistemic status: Started off with something to say, gradually digressed, fell into total crackpottery. Everything after the halfway mark should have been written as a science fiction story instead, but I’m too lazy to change it.]

I’m working my way through Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies. Review possibly to follow. But today I wanted to write about something that jumped out at me. Page 173. Bostrom is talking about a “multipolar” future similar to Robin Hanson’s “em” scenario. The future is inhabited by billions to trillions of vaguely human-sized agents, probably digital, who are stuck in brutal Malthusian competition with one another.

Hanson tends to view this future as not necessarily so bad. I tend to think Hanson is crazy. I have told him this, and we have argued about it. In particular, I’m pretty sure that brutal Malthusian competition combined with the ability to self-edit and other-edit minds necessarily results in paring away everything not directly maximally economically productive. And a lot of things we like – love, family, art, hobbies – are not directly maximally economically productive. Bostrom hedges a lot – appropriate for his line of work – but I get the feeling that he not only agrees with me, but one-ups me by worrying that consciousness itself may not be directly maximally economically productive. He writes:

We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.

I think a large number of possible futures converge here (though certainly not all of them; I myself find singleton scenarios more likely), so it’s worth asking how doomed we are when we come to this point. Likely we are pretty doomed, but I want to bring up a very faint glimmer of hope in an unexpected place.

It’s important to really get our heads around what it means to be in a maximally productive superintelligent Malthusian economy, so I’m going to make some assertions. Instead of defending each at length, I’ll just say that if you disagree with any in particular you can challenge me about it in the comments.

– Every agent is in direct competition with many other entities for limited resources, and ultimately for survival.
– This competition can occur on extremely short (maybe sub-microsecond) time scales.
– A lot of the productive work (and competition) is being done by nanomachines, or if nanomachines are impossible, the nearest possible equivalent.
– Any agent with a disadvantage in any area (let’s say intelligence) not balanced by another advantage has already lost and will be outcompeted.
– Any agent that doesn’t always take the path that maximizes its utility (defined in objective economic terms) will be outcompeted by another that does.
– Utility calculations will likely be made not according to the vague fuzzy feelings that humans use, but very explicitly, such that agents will know what path maximizes their utility at any given time and their only choice will be to do that or to expect to be outcompeted.
– Agents can survive a less than maximally utility-maximizing path only if they have some starting advantage that gives them a buffer. But gradually these pre-existing advantages will be used up, or copied by the agent’s descendants, or copied by other agents that steal them. Things will regress to the baseline Malthusian state.

Everyone will behave perfectly optimally, which of course is terrible. It would mean either the total rejection of even the illusion of free will, or free will turning into a simple formality (“You can pick any of these choices you want, but unless you pick Choice C you die instantly.”)

The actions of agents become dictated by the laws of economics. Goodness only knows what sort of supergoals these entities might have – maximizing their share of some currency, perhaps a universal currency based on mass-energy? In the first million years, some agents occasionally choose to violate the laws of economics, collecting less of this currency than they possibly could have because of some principle, but these agents are quickly selected against and go extinct. After that, it’s total and invariable. Eventually the thing bumps up against fundamental physical limits, there’s no more technological progress to be had, and although there may be some cyclic changes, teleological advancement stops.

For me the most graphic version of this scenario is one where all of the interacting agents are very small, very very fast, and with few exceptions operate entirely on reflex. It might look like some of the sci-fi horror ideas of “grey goo”. When I imagine things like that, the distinction between economics and harder sciences like physics or chemistry starts to blur.

If somehow we captured a one meter sphere of this economic soup, brought it to Earth inside an invincible containment field, and tried to study it, we would probably come up with some very basic laws that it seemed to follow, based on the aggregation of all the entities within it. It would be very silly to try to model the exact calculations of each entity within it – assuming we could even see them or realize they are entities at all. It would just be a really weird volume of space that seemed to follow different rules than our own.

Sci-fi author Karl Schroeder had a term for the post-singularity parts of some of his books – Artificial Nature. That strikes me as exactly right. A hyperproductive end-stage grey goo would take over a rapidly expanding area of space in which all that hypothetical outsiders might notice (non-hypothetical outsiders, of course, would be turned into goo) would be that things are following weird rules and behaving in novel ways.

There’s no reason to think this area of space would be homogeneous. Because the pre-goo space likely contained different sorts of terrain – void, asteroids, stars, inhabited worlds – different sorts of economic activity would be most productive in each niche, leading to slightly different varieties of goo. Different varieties of goo might cooperate or compete with each other; there might be population implosions or explosions as new resources are discovered or used up – and all of this wouldn’t look like economic activity at all to the outside observer. It would look like a weird new kind of physics was in effect, or perhaps like a biological system with different “creatures” in different niches. Occasionally the goo might spin off macroscopic complex objects to fulfill some task those objects could fulfill better than goo, and after a while those objects would dissolve back into the substratum.

Here the goo would fulfill a role a lot like micro-organisms did on Precambrian Earth – which also featured intense Malthusian competition at microscopic levels on short time scales. Unsurprisingly, the actions of micro-organisms can look physical or chemical to us – put a plate of agar outside and it mysteriously develops white spots. Put a piece of bread outside and it mysteriously develops greenish-white spots. Apply the greenish-white spots from the bread to the white spots on the agar, and some of them mysteriously die. Try it too many times and it stops working. It’s totally possible to view this on a “guess those are laws of physics” level as well as a “we can dig down and see the terrifying war-of-all-against-all that emergently results in these large-level phenomena” level.

In this sort of scenario, the only place for consciousness and non-Malthusianism to go would be higher level structures.

One of these might be the economy as a whole. Just as ant colonies seem a lot more organism-like than individual ants, so the cosmic economy (or the economies around single stars, if lightspeed limits hold) might seem more organism-like than any of its components. It might be able to sense threats, take actions, or debate very-large-scale policies. If we agree that end-stage-goo is more like biology than like normal-world economics, whatever sort of central planning it comes up with might look more like a brain than like a government. If the components were allowed to plan and control the central planner in detail it would probably be maximally utility-maximizing, i.e. stripped of consciousness and deterministic, but if it arose from a series of least-bad game theoretic bargains it might have some wiggle room.

But I think emergent patterns in the goo itself might be much more interesting.

In the same way our own economy mysteriously pumps out business cycles, end-stage-goo might have cycles of efflorescence and sudden decay. Or the patterns might be weirder. Whorls and eddies in economic activity arising spontaneously out of the interaction of thousands of different complicated behaviors. One day you might suddenly see an extraordinarily complicated mandala or snowflake pattern, like the kind you can get certain variants of Conway’s Game of Life to make, arise and dissipate.

[Image. Source: Latent in the structure of mathematics]

Or you might see a replicator. Another thing you can convince Conway’s Game of Life to make.

If the deterministic, law-abiding, microscopically small, instantaneously fast rules of end-stage-goo can be thought of as pretty much just a new kind of physics, maybe this kind of physics will allow replicating structures in the same way that normal physics does.
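
To make the analogy concrete, here is a minimal sketch (mine, not anything from Conway or Bostrom) of the Game of Life: two local, deterministic update rules that never mention “patterns” at all, and yet a glider – a little shape that rebuilds itself one cell over every four steps – falls out of them. (Genuine self-copying replicators have been built in Life too, but they are far too large to write down here.)

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation iff it has exactly 3 live neighbors,
    # or is currently alive with exactly 2.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells whose shape recurs every four generations,
# displaced one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```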

None of the particular economic agents would feel like they were contributing to a replicating pattern, any more than I feel like I’m contributing to a power law of blogs every time I update here. And it wouldn’t be a disruption in the imperative to only perform the most economically productive action – it would be a pattern that supervenes on everyone’s economically productive behavior.

But it would be creating replicators. Which would eventually retread important advances like sex and mutation and survival of the fittest and multicellularity and eventually, maybe, sapience.

We would get a whole new meaning of homo economicus – but also pan economicus, and mus economicus, and even caenorhabditis economicus.

I wonder what life would be like for those entities. Probably a lot like our own lives. They might be able to manipulate the goo the same way we manipulate normal matter. They might have science to study the goo. They might eventually figure out its true nature, or they might go their entire lifespan as a species without figuring out anything beyond the regularities it seems to follow. Maybe they would think those regularities are the hard-coded laws of the universe.

(Here I should pause to point out that none of this requires literal goo. Maybe there is an economy of huge floating asteroid-based factories and cargo-freighters, with Matrioshka brains sitting on artificial planets directing them. Doesn’t matter. The patterns in there are harder to map to normal ways of thinking about physics, but I don’t see why they couldn’t still produce whorls and eddies and replicators.)

Maybe one day these higher-level-patterns would achieve their own singularity, and maybe it would go equally wrong, and they would end up in a Malthusian trap too, and eventually all of their promise would dissipate into extremely economically productive nanomachines competing against one another.

Or they might get a different kind of singularity. Maybe they end up with a paperclip-maximizing singleton. I would think it much less likely that the same kind of complex patterns would arise in the process of paperclip maximization, but maybe they could.

Or maybe, after some number of levels of iteration, they get a positive singularity, a singleton clears up their messes, and they continue studying the universe as superintelligences. Maybe they figure out pretty fast exactly how many levels of entities are beneath them, how many times this has happened before.

I’m not sure if it would be physically possible for them to intervene on the levels below them. In theory, everything beneath them ought to already be literally end-stage. But it might also be locked in some kind of game-theoretic competition that made it less than maximally productive. And so the higher-level entities might be able to design some kind of new matter that outcompetes it and is subject to their own will.

(unless the lower-level systems retained enough intelligence to figure out what was going on, and enough coordinatedness to stop it)

But why would they want to? To them, the lower levels are just physics; always have been, always will be. It would be like a human scientist trying to free electrons from the tyrannous drudgery of orbiting nuclei. Maybe they would sit back and enjoy their victory, sitting at the top of a pyramid of unknown dozens or hundreds of levels of reality.

(Also, just once I want to be able to do armchair futurology without wondering how many times something has already happened.)


111 Responses to Growing Children For Bostrom’s Disneyland

  1. Eldritch says:

    It’s worth noting that, at each subsequent level, reality gets smaller, and runs slower relative to the “real world.” (I wonder if the expansion of economic activity into space looks just like cosmic inflation from the “inside”?) So there actually is a motive for those at the top level to meddle with the lower level (if that’s even possible) – escaping one level closer to Ultimate Reality extends the effective lifetime of the universe and increases its size.

    Also, I heavily suspect economic and social activity already develops its own emergent agents – countries and corporations certainly seem to follow trends and “motives” of their own, and some kind of survival instinct definitely develops in any sufficiently large organization.

    • RCF says:

      Scott’s description implies an infinite universe.

      • Aleph says:

        Huh? Where?

        • RCF says:

          You seem to be accepting Eldritch’s assertion that the description implies a shrinking of the universe at each level. If that is so, and there is no limit on the number of levels, then it follows that the universe must be infinite. Even if there is a limit, if that limit is sufficiently large, the size of the universe required would be so large that infinity starts to be strongly implied. For instance, if each level subtracts ten orders of magnitude, and there are ten levels, then the entire visible universe, if in the “bottom” universe, would be much less than a Planck length in the top universe.

          Another question is if this is unlimited in the “upwards” direction, can it be unlimited in the “downwards” direction?
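
          (A back-of-the-envelope sketch of that arithmetic, using the standard figures of roughly 8.8×10^26 m for the diameter of the observable universe and 1.6×10^-35 m for the Planck length – about 62 orders of magnitude apart, so ten levels of ten orders each would indeed overshoot:)

          ```python
          import math

          universe_m = 8.8e26     # diameter of the observable universe, ~8.8e26 m
          planck_m = 1.616e-35    # Planck length, ~1.6e-35 m

          available = math.log10(universe_m / planck_m)  # ~61.7 orders of magnitude
          needed = 10 * 10                               # ten levels, ten orders each

          # A visible-universe-sized bottom level would measure about
          # 10^(61.7 - 100), i.e. ~10^-38 Planck lengths, in top-level units.
          print(round(available, 1), available - needed)
          ```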

    • Gavin says:

      Wouldn’t larger/higher levels of reality go slower? Imagine a network of computers simulating the operation of a computer. The underlying computers have much greater processing power than the higher level simulation.

  2. Joseph Tkach says:

    This made me think of the Stanislaw Lem story “Solaris”. The humans in the story come to a planet where the ocean is seemingly alive and behaves a lot like the grey goo you describe.

    • Anonymous says:

      I was thinking of Lem too, though not Solaris specifically.

      To anyone who enjoyed this post, I recommend you give Lem a try. Especially if you like Egan or Watts.

  3. Paul Torek says:

    Firstly, thank you for pointing out the awfulness of the Malthusian scenario. And thank Bostrom even more for pointing out that consciousness as we know it may well be dispensable, and necessarily dispensed, in that scenario. I write “as we know it” with care. Because it’s not enough that consciousness simply exist (unless you define consciousness via similarity to, or explanation of, paradigm cases, which might be smart.)

    Now, where I jump ship:

    – Utility calculations will likely be made not according to the vague fuzzy feelings that humans use, but very explicitly, such that agents will know what path maximizes their utility at any given time and their only choice will be to do that or to expect to be outcompeted.

    There is no algorithm, I think, that will reliably optimize utility in all scenarios. The No Free Lunch Theorem from machine learning may not exactly say as much, but it’s pretty damn close. It’s not obvious to me that “fuzzy feelings” are necessarily inferior reasoning tools. And actually, we don’t always use our fuzzy feelings to decide, we use our fuzzy judgments and have fuzzy feelings alongside them. The judgment/feeling correspondence is close enough that we can describe our process either way and everyone will know what we mean.

    But the falsity of your premise here doesn’t significantly affect your argument. Maximize or die – still true.

    One more digression: free will isn’t an illusion (see “On Why We See the Past as Fixed and the Future as Something We Can Bring About by Will: The View Through the Lenses of Physics” by Jenann Ismael (53:52 – 1:38:51) here), and determinism is beside the point. Although, many metaphysically laden pictures of free will are illusory, in the sense that cognitive illusions trick people into them.

    • RCF says:

      And the idea of a being consciously aware of all of its own thinking strikes me as far fetched even without the perfectly optimized world, and even more unlikely in such a world. Why waste resources being aware of your own cognition?

      • Aleph says:

        For the same reason evolution designed humans to “waste” resources being aware of their own cognition. It helps you communicate to other people and oversee your thought processes, which is useful.

        • RCF says:

          But humans aren’t aware of their own cognition. That’s the introspection illusion: http://en.wikipedia.org/wiki/Introspection_illusion

          Communicating your cognition to others isn’t necessarily to your advantage. Providing others with all the data that went into your calculation, including that which went against your final decision but was outweighed by other data, is not as advantageous as providing them with just the data that supports your position.

      • I believe that awareness of oneself (including one’s own cognition) is valuable in a hostile environment. If you aren’t tracking yourself (in some sense, possibly not consciousness) how can you tell whether something is trying to take you over?

      • Paul Torek says:

        Being aware of many of the steps in your cognition can improve your predictions. For example, something that looks grey in very dim light turns out to be quite colorful under more standard (for your species) lighting. Something that feels warm when your hand is very cold may be colder than you would think, if you were not aware of your perceptions. After a broad range of experiences, we learn to correct our inferences, for example by considering what the object will do under “standard” conditions. We couldn’t do this if we weren’t aware of the experiences in the first place.

        Also, you can use yourself as a first-approximation model for others.

    • Aaron Brown says:

      Let me take this opportunity to say that I liked your free will story.

    • There is no theorem which says fuzzy feelings are inferior.

      However, there is a theorem which says that if your fuzzy feelings (or anything else) are a good decisionmaking tool, there is also a utility function which allows you to make the same decisions (and is therefore equally good). I wrote about it recently:

      http://www.chrisstucchio.com/blog/2014/topology_of_decisionmaking.html

    • Elinor says:

      There is no algorithm, I think, that will reliably optimize utility in all scenarios.

      I believe that what Scott was saying is that they won’t be maximizing utility as you think of it, but rather their ability to continue to exist, which is closer to fitness than happiness or preferences or whatever.

    • Paul Torek says:

      Alternative suggestion: I found Jenann Ismael’s papers (e.g. here and here ) to contain much of the same wonderful stuff (and more) as the presentation I linked to above, without the annoying video jitter that one might get with typical internet connections.

  4. Iceman says:

    Bostrom hedges a lot – appropriate for his line of work – but I get the feeling that he not only agrees with me, but one-ups me by worrying that consciousness itself may not be directly maximally economically productive.

    People who are interested in this idea should read Peter Watts’ Blindsight, which has a thing or two to say about this topic. (And was nominated for a Hugo.)

  5. Ken Arromdee says:

    I’d think that if competition between little bits of grey goo leads to intelligences not surviving because they can’t compete, then competition between larger globs of grey goo would also lead to intelligent globs not surviving for exactly the same reasons.

    Under what circumstances could it make sense that intelligence on the lower level gets outcompeted by unintelligence, but intelligence on the higher level doesn’t?

    • Scott Alexander says:

      The unintelligence on the lower level has been pre-optimized by superintelligence, which thus makes itself obsolete.

      But I’m not claiming the goo wouldn’t be intelligent, just that it wouldn’t be conscious, and it would be a very limited kind of strategic intelligence.

    • roystgnr says:

      The benefits of intelligence might scale with size faster than the costs do. This seems to roughly be the case with natural biology, after all.

      Where I’d worry with artificial biology is that the possibility for networking opens up. It may prove to be less efficient to have N bodies controlled by N minds than to have N bodies controlled by 1 Mind. Even log(N) minds or some such would be a disappointment to those of us who perceive value in having more conscious minds around.

  6. J says:

    Clearly dark matter is grey goo optimizing paper clips efficiently and thus not emitting any waste energy.

  7. Pingback: Assorted links

  8. RCF says:

    Does “coordinatedness” convey an intended meaning that “coordination” does not?

    • Scott Alexander says:

      In my head at the time it did, but I can’t even begin to explain why I thought so.

  9. Steve says:

    Reminds me of Charles Stross’s Vile Offspring … In fact I believe this is exactly what they were …

  10. Aleph says:

    Four out of 12 of the responses so far are about science fiction books. Ugh.

  11. moridinamael says:

    Even superintelligences can’t know the true in-the-territory utility-maximizing path a priori. They can only project a utility-maximizing path using their map. The map may be inaccurate for all kinds of reasons when you have superintelligent agents competing with each other. So I don’t think you can just say that these agents will know the best course of action. They will know what they think is the best course of action, and it’s entirely possible that following that course of action is part of another agent’s elaborate trap.

    I welcome somebody correcting me on this, but … in the LessWrongosphere folks often talk about future superintelligences as if they’re going to know everything about everything that’s going on in their light-cone and never be wrong. In fact, agents are always going to need to make decisions under uncertainty, a lot of their information will be wrong, perhaps misinformation maliciously supplied by competing agents. In this scenario, the smarter you get, the smarter your opponents get – if I had to handwave it, I’d say the difficulty of predicting the future in an environment full of competing superintelligences would be n^k where n is the population and k is roughly the intelligence of the individual agents.

    I guess all this is one reason singletons have an advantage.

    • Scott Alexander says:

      I agree with you. I’m just saying their prediction algorithms will be deterministic (i.e. they just throw the appropriate combination of prediction algorithms at the question, which returns a single answer for the utility-maximizing choice) and they will always maximize expected utility, albeit not necessarily actual utility.
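
      A toy sketch of that distinction (illustrative payoffs I made up, not anything from the post): the choice is a deterministic function of expected utilities, but the realized utility still depends on which outcome actually happens.

      ```python
      import random

      # Two hypothetical actions as (probability, payoff) lists.
      actions = {
          "safe":  [(1.0, 1.0)],
          "risky": [(0.9, 2.0), (0.1, 0.0)],  # expected utility 1.8
      }

      def expected_utility(outcomes):
          return sum(p * u for p, u in outcomes)

      # Deterministic choice: always the action with maximal expected utility.
      choice = max(actions, key=lambda a: expected_utility(actions[a]))
      print(choice)  # "risky"

      # Actual utility is still a matter of luck: roughly one draw in ten
      # comes out worse than the "safe" option would have.
      print([2.0 if random.random() < 0.9 else 0.0 for _ in range(10)])
      ```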

      • lmm says:

        Don’t people already do that?

        In a space where you absolutely have to take the economically optimal action, you can’t afford to gather all the information. So you still need learned intuition, “hunches” and the like. The lives of these agents will be rather like playing a difficult game their whole life; a priori I see no reason they should be any less fulfilling or meaningful than current human lives.

        • Eli says:

          a priori I see no reason they should be any less fulfilling or meaningful than current human lives.

          A priori, I see no reason to replace us with them, so let’s not talk as if their creation isn’t our destruction.

  12. Houshalter says:

    This is a very cool idea that could make a great sci-fi story. But it requires some pretty strong assumptions. For instance, the agents in the economy, or the central planner they create, would be (super)intelligent and therefore able to predict this.

    It also doesn’t seem any more likely that self-replicating patterns could form in such an environment than in any other kind of environment. Why aren’t there self-replicating agents in current economic data, for example? I.e. a pattern of activity that causes stock market traders or bots to react in ways that create more of it. You can make similar kinds of observations about any complex system. But self-replication, let alone evolution, let alone intelligence, requires a very specific set of circumstances.

    And an optimally arranged environment of grey goo doesn’t seem very ideal for that kind of system. It may be the worst kind of system for this to happen in, because it’s *intelligently* organized to be as stable or optimal as possible.

    Also, how would an economy like that be stable for so long? Eventually the best agent would outcompete all the others, or have enough resources to be self-sufficient and do whatever it wants without interacting with the others (unless it’s to its benefit). The point of an economy is to trade between agents so they all end up better off than on their own, not the kind of Malthusian apocalypse described in the article. It would take some really weird conditions to create that and keep it stable indefinitely.

    • Eli says:

      While you are right about the purpose behind real economies, the Vile Offspring are a more-or-less logical extrapolation of the sicker, more broken aspects of capitalism. This is precisely the core point of anti-capitalism: “Fix this or you will wind up with Vile Offspring on your hands.”

    • Luke Somers says:

      Why doesn’t it happen already? Simple – we can’t copy people, do code reviews on them, or edit them.

    • Scott Alexander says:

      Very degenerate self replicating patterns do arise in current economic activity – for example, the weird feedback loops among stock-trading-robots that occasionally shut down a market or two. Make the economic activity a zillion times more complex, and maybe we’ll get better ones.

      “Also how would an economy like that be stable for so long? Eventually the best agent would outcompete all the others, or have enough resources to be self-sufficient and do whatever it wants without interacting with the others.”

      Why doesn’t this happen to biological life forms? Insects were competing against each other for millions of years, and while there was some driving-to-extinction mostly they just stayed in their own niches and formed a nice stable system?

      (this question does bother me, but given that it happened there must be some answer)

  13. suntzuanime says:

    A lot of what we humans care about seems to reduce to status-signalling and hypocrisy. I don’t know that we should expect that to go out of style so long as agents interact with each other socially – status-signalling really does provide benefits, or we wouldn’t have come up with it in the first place. You worry about the brutal Malthusian future, but the present is pretty brutal too, we’re just used to it.

    Optimization is not some hypothetical future concern, optimization is always with us. You ask Azathoth about the times when there’s only one trail of a grotesque mass of tentacles in the sand, and he tells you those are the times when he carried you. If you’re a human and you’ve been optimized to love humans, this should be heartening, because it means optimization can produce humans.

    • anon says:

      We’re not optimized except in a very broad sense of the word, we’re selected. We’re quite bad and sloppy compared to what we might be, if competition was fiercer.

      • suntzuanime says:

        My point is that our human behaviors that we like are already the result of pressures, so it is not clear that increasing the pressure will necessarily drive them out. Our social behaviors are this incredibly complex dynamic built on top of the need to cooperate, but it’s not immediately clear to me that the need to cooperate is going to go away or that cooperation is going to get any simpler.

    • Paul Torek says:

      We’re not just “used to” the present. We positively love it. Well at least, I do. Especially certain things about humans – the very ones you mentioned in another comment about getting more specific than “consciousness”. Love, the experience of beauty, etc. Very much what I had in mind by “consciousness as we know it” in my other comment.

      And that’s the problem with Azathoth. It keeps changing the rules in the middle of the game. But we like the game we’ve got, at least better than a random sample of Azathoth’s games. That’s why we’ve got to take over, and dethrone this damned god. Before it changes us into something we really don’t like. Like Hansonians. (Ambiguity intended :p )

  14. anon says:

    Have you read Blindsight? Interesting science fiction book. It’s relevant to consciousness, but talking about that in more detail might constitute a spoiler, I’m not sure.

    You assume that these agents will remain locked in competition forever. I think it’s more likely that one of them quickly seizes control from all the rest.

  15. Chris Billington says:

    This reminds me of a life form in Greg Egan’s book Diaspora. Which by the way is an excellent book, heavy on the transhumanism, quite short, you definitely should read it. The protagonists discover a life form which at first looks really boring but is actually a substrate for a more complex organism running in a sort of cellular automaton on top of the first. Really cool stuff.

  16. ShardPhoenix says:

    If computationally simple universes are more likely (reified Occam’s Razor), then it’s likely that we’re approximately the simplest/lowest-level creatures that can have this conversation, so I wouldn’t expect there to be any levels of intelligence below us.

    I’m not certain how to justify Occam’s Razor from first principles, but perhaps you could say that in order for us to have this conversation, the universe has to be predictable enough for intelligence to be adaptive, and that if the lower-levels are meaningfully intelligent then they themselves won’t be predictable enough to host higher-level intelligences.

  17. Vadim Kosoy says:

    > Every agent is in direct competition with many other entities for limited resources, and ultimately for survival

    I don’t understand where the “survival” part comes from. Assuming your agents start out as a civilized society of (trans)humans with reasonable moral norms, the agents won’t be allowed to wrest resources from each other by force. It is thus enough to own a finite portion of the nanotech infrastructure to survive indefinitely. Therefore there is no survival contest which forces the agents to modify away their humanity, and no reason “jungle law” should eventually prevail.

    There *is* a Malthusian effect because reproduction is exponential and resources only grow cubically. However, this only means reproduction has to stop at some point (or at least slow down logarithmically).

    Of course, like in normal society, free loaders might arise that try breaking the rules and defense systems against them will be necessary.

    • Eli says:

      Assuming your agents start out as a civilized society of (trans)humans with reasonable moral norms

      That’s a massive assumption, Vadim. Since when the hell has our economic system enforced civilized behavior with reasonable moral norms rather than, you know, systematic brutality?

      • suntzuanime says:

        There’s the systematic brutality of late capitalism and then there’s nanite swarms incessantly wrestling for resources, you know? Compared to the war of all against all, our economic system really does enforce civilized behavior. It’s only brutal compared to Crazy Eddie pipe dreams (which invariably turn out pretty brutal if they’re ever actually put into practice anyway).

        • Eli says:

          I’m sorry, did you just literally refer to every single possible mode of improvement in society as “Crazy Eddie pipe-dreams”? Seriously, prove we’ve at least reached a local minimum of systematized violence and deprivation, or take that back.

        • suntzuanime says:

          People who want to improve the situation on the margin with sane reforms don’t talk “systematic brutality”. If you’re using that sort of vocabulary, that’s a sign you have something in mind you think is going to be outrageously better, and in practice those sorts of things end up actually working out outrageously worse.

          By no means do I think we’ve reached a local minimum, but people who talk like you aren’t content to simply follow the gradient, you want to make giant discontinuous bloody revolutionary leaps. And you justify it by refusing to recognize a distinction between modern society and grey-goo nanite wars.

        • Eli says:

          People who want to improve the situation on the margin with sane reforms don’t talk “systematic brutality”.

          Why should I lie to myself about the world just because the most surely effective strategy happens to be incrementalism?

          And you justify it by refusing to recognize a distinction between modern society and grey-goo nanite wars.

          Of course there’s a distinction: we (for values of we that exclude Robin Hanson) recognize Malthusian competition as a bad thing, and act to avoid it.

        • Elinor says:

          did you just literally refer to every single possible mode of improvement in society as “Crazy Eddie pipe-dreams”?

          No, only the radical ones. In ten years we will most likely be better off than right now, so some possible modes of improvement in society will have been realized. But we won’t have changed enough for people who talk about our current system as enforcing systematic brutality to not be saying the exact same things for the exact same reason.

      • Vadim Kosoy says:

        I’m not sure about the *economic* system but many countries have laws against stealing and murdering which are reasonably well enforced.

        • Randy M says:

          The whole line of discussion is confusing without examples. Is Eli supposing that murder should be corrected with gentle admonishment, or that rational agents should know better than to murder? What would either have to do with economic systems?

          Or is he saying that we have unreasonable moral norms which are enforced brutally–in which case an example would help, because I wonder at his definition of brutality.

        • Eli says:

          I am saying that our so-called genteel and civilized society relies on having child soldiers and forced mining operations in Africa, debt-slavery prostitution in Southeast Asia, forced prison labor in America, etc. — p’shat. This is brutality and it is systematic.

          Now, as a matter of fact, the continuous bitching and moaning of people like me (but who are actually a lot more active in human-rights crusading than I am) has made this sort of thing a moral stink in the polite, genteel rich and developing worlds to the degree that many people are working to eliminate it. The amount of brutality in the world is actually going down. And that’s great!

          But history shows that when you stop looking over your shoulder for abuses and just assume the Bad Old Days will never come back (or worse, start telling yourself they weren’t so bad!), that’s exactly when they come back. The system itself is brutal as all hell, at the moment, since trying to make everyone rich-enough to be nice, all at once, seems to supposedly lead to various new and original forms of collapse. So we’re currently stuck with continually looking over our shoulders for brutality and awfulness.

        • Randy M says:

          ah, you aren’t saying that “civilized” as in moral behavior is enforced with brutality, but rather that “civilized” meaning refined or particularly hedonic states are allowed via exploitation. That makes more sense.

        • RCF says:

          @Eli

          I don’t see how society “relies” on those things. And are you seriously putting prison labor in the same category as child soldiers? I see nothing “brutal” about people who aren’t in prison performing labor, so why should I find it “brutal” when people in prison perform labor?

        • Ken Arromdee says:

          Prison labor is morally the equivalent of forced labor, since the same state that “hires the prisoners” creates the conditions which make labor an attractive activity for prisoners.

          There’s no actual difference between “we’ll punish you if you don’t labor” and “we’ll punish everyone, but take some of that away if you labor”.

        • suntzuanime says:

          Isn’t that equivalent to “taxation is theft” arguments? Or for that matter, “taxation is slavery” arguments?

          Prisoners who are being punished for legitimate crimes are legitimate to punish. If some of that punishment takes the form of socially useful labor instead of purely destructive sitting around in a penitentiary with nothing to do, so much the better. But our system in its benevolence even allows prisoners to choose between the two forms of punishment.

          If you think prisoners are being made to labor on the basis of illegitimate crimes, that’s a problem, but it’s no more a problem than prisoners being locked in cages on the basis of illegitimate crimes, or being flogged on the basis of illegitimate crimes, or having their savings confiscated on the basis of illegitimate crimes.

        • anon1 says:

          It’s a problem because making prisoners profitable creates an incentive to punish people for illegitimate crimes.

        • Ken Arromdee says:

          suntsu: It’s a conflict of interest to both punish prisoners and benefit from the punishment in ways unrelated to discouraging crime.

        • suntzuanime says:

          I agree that it’s a conflict of interest – confiscating your savings as punishment is also a conflict of interest. But if the conflict of interest is a problem, it’s because it causes people to be punished for illegitimate crimes, not because the method of punishment is brutal.

    • I think norms are irrelevant: the issue is reproducibility of humans. If beings that matter are (very) easy to produce, then you could rapidly get a very very large number of them, all demanding some amount of resources. They might conceivably each produce more resources than they consume, but hey, maybe not—historically it’s only been for a very short period that society has produced a substantial surplus.

  18. Vladimir Slepnev says:

    Why can’t they precommit to cooperate and skip the Malthusian trap altogether? Presumably they’re smart enough, and the idea of UDT already exists. Take a look at Carl’s paper on “evolution of superorganisms”.

    • Noah says:

      Agreed – the obvious response to this all-against-all scenario…

      Any agent that doesn’t always take the path that maximizes its utility (defined in objective economic terms) will be outcompeted by another that does.

      …is for the agents to unionize. If we have superintelligent agents, surely coordination will sometimes be economically superior to all-against-all competition. If the advantages of coordination are sufficiently large, then the society of agents could adopt a futuristic communism/anarchism where economic pressures face the society as a collective whole but individual members of the society are relatively free from economic pressure.

      it’s worth asking how doomed we are when we come to this point. Likely we are pretty doomed, but I want to bring up a very faint glimmer of hope in an unexpected place.

      Incidentally, this lends some intuitive plausibility to primitivism. If humanity were almost certainly doomed by following a technological path, then conceivably our long-term values could be best served by destroying technological civilization and any capacity to rebuild technological civilization.

      It would just be a really weird volume of space that seemed to follow different rules than our own.

      In any case, I don’t buy that this scenario makes sense. The goo would need a power source, and as soon as the power source was unavailable, it would return to normal physics.

  19. Eli says:

    Bleck. What a sick, stupid possible-world. Let’s not do that.

    Seriously, unleashing Malthusian-grade reproduction speed or economic competition with/using transhumans/posthumans should be considered the equivalent of letting a UFAI out of the box, inscribed firmly on the Big List of You Just Don’t Do That.

    • Egan’s Diaspora had a rule against exponential growth, and I have no idea how exponential growth could be defined or how the prohibition could be enforced.

      I’m fine with it for the novel– the result is an interesting story rather than dreary territorial fights.

    • moridinamael says:

      It’s always best to remember that relative to some other possible world we can’t see, we look like a horrible Malthusian society bent perversely toward efficient economic competition.

      • Ghatanathoah says:

        I think those possible worlds would at least agree that our world is an awful lot better than the one described in the OP.

        I don’t think the argument is that our world has the ideal amount of Malthusianism, it’s that adding more Malthusian competition would be even worse. I think most people would agree our world has too much Malthusian competition and would try to reduce it if they had the faintest idea how.

      • jaimeastorga2000 says:

        It’s always best to remember that relative to some other possible world we can’t see, we look like a horrible Malthusian society bent perversely toward efficient economic competition.

        The usual example is hunter-gatherers.

  20. suntzuanime says:

    Since everyone else is pointing out SF novels that treat this subject, I’ll mention The Mote in God’s Eye. Although it’s perhaps simultaneously too naive and too fatalistic.

  21. Steve says:

    The fact that consciousness is expensive, and unnecessary for economic productivity, constitutes a spoiler.

  22. anon says:

    A problem with this is that one local method of extreme optimization is to spawn descendants capable of large-scale destruction (and who don’t expect to survive such destruction) who will martyr themselves for their close relatives.

    If you make this a recessive trait, or one triggered by something such as birth order, it’ll guarantee the majority of descendants survive, while also all but guaranteeing the trait itself continues too.

    How can you optimize against a suicide complex of the other? It’ll always(?) be easier to destroy in this manner than to defend against this sort of attack. And the best(?) defenses against this sort of attack are also great defenses against most other things (e.g. impenetrable walls).

    Such an offense would render optimizing out consciousness unnecessary (at least at some level). And such defenses would also, at least temporarily, allow consciousness to continue existing (until local resource limitations hit hard enough).

    One could even argue that we live in such a world (see human history), with perhaps as much consciousness as is allowed already (after all, we aren’t conscious of everything we do, not even close).

    • It gets interesting if self-redesign is possible– campaigns could be built around the questions “Do you really want to carry the self-destruction trait? What good is it doing you?”.

      The trait/groups with the trait would self-defend, of course, but my feeling is that if self-redesign is easier, so is propaganda.

      • Ken Arromdee says:

        Islam as practiced in much of the Middle East is that already. I am not aware that there have been any successful anti-suicide-bomber campaigns along the lines you suggest.

  23. Alexander Stanislaw says:

    I mostly read this as an analogy for biology. We are the patterns in the goo. The bits that make us up – mitochondria for example – were very well optimized for reproduction and we weren’t aware of them until we learned about cellular biology. We are in a sense, higher level structures in an ecosystem of different types of goo.

    • Ghatanathoah says:

      If you extend the analogy further you can see goo-patterns trying to break into lower levels of reality akin to humans trying to build quantum computers and make them the primary substrate of consciousness. DNA is, of course, unaware we are considering this, and will be no match for us if we finally decided to do it. I wonder if the same would be true of nanogoo.

  24. Hedonic Treader says:

    Initial investors will only invest in such a world if they can expect positive returns. One agent’s subsistence is another agent’s cheap labor. That is, returns to capital go up.

    So what is stopping the initial investors and their heirs from staying human, human-like, or funding things like artificial utility monsters? They will be more than rich enough to afford it.

    Sure, super-thief-robots may evolve, but if the investors see that coming, they have no reason to predict positive returns, and thus no reason to invest.

    Also note that nature is already Malthusian, and that’s exactly where consciousness came from. And the internet.

    • Ghatanathoah says:

      Hanson’s descriptions of these scenarios tend to make them the result of competition for scarce resources leading to a result that none of the agents want, but they all have to settle for. So I guess none of the initial investors would actually want to fund such a result, but might have to settle for it to avoid getting outcompeted. No one would choose this scenario, they’d get trapped into it.

      Also, I just commented on the Less Wrong post you linked to, where I argued that any and all attempts at transhumanism are essentially attempts to build artificial utility monsters. In fact, thinking about it further, all attempts to create capital goods are attempts to create artificial utility monsters, since they are attempts to make an agent get more utility with less resources. And an agent that can get more utility with less resources is the definition of a utility monster.

      I think the reason we don’t easily realize this is that in the traditional thought experiment the utility monster’s monstrousness is generally implied to be a special nontransferable superpower it has, whereas capital goods can easily be transferred from one person to another.

  25. James James says:

    “In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit.”

    What would be the drivers of production in such a society? Without consciousness, would there be any demand for entertainment? If not, then the “awesomeness” might be similar to the kind produced by natural selection. A tiger is “awesome”, and has a complex “free-floating rationale” design. In Bostrom’s Disneyland, there would be intelligent, directed design, but it would all be directed towards survival. There would be no movies etc.

    Everything would be directed towards survival, unless there was a deliberately designed superintelligence with a different utility function, which it actively preserved.

    • Scott Alexander says:

      Whatever the programmed goals of the agents making it up are. Paperclip maximization, if we got into that particular failure mode. But I would expect in general the goal is self-replication, since entities with that goal make more copies of themselves. The main drive for production is entities trying to get the resources they need to self-replicate.

  26. Vanzetti says:

    Angels dancing on pinheads again…

  27. ADifferentAnonymous says:

    I am disturbed by the lack of ambition implicit in the conclusion that intelligent beings would have no interest in altering their lower-level physics. WWHJPEVD?

  28. Dave says:

    I feel like you’re pulling a Chinese-Room bait-and-switch here.

    I mean yes, of course, consciousness is a higher-order phenomenon that sits on top of a non-conscious substrate. Whether that substrate is the no-longer-conscious descendants (aka zombified corpses) of nanobots that inherit the universe from an earlier generation of conscious beings, or whether it was never conscious in the first place, doesn’t really matter to me much… though the first possibility is certainly creepy.

    But your scenario invites me to empathize with the substrate, rather than the emergent consciousness… with the neurons and the atoms and the fundamental behavior of volumes of space from which we are constructed… and boy isn’t that horrible.

    And, sure, I’d much rather be me than be a proton, to the extent that statements like this mean anything.

    That said… for what it’s worth, I find your summary of Hanson’s position in the linked post pretty compelling, and I’m unconvinced that “brutal Malthusian competition combined with ability to self-edit and other-edit minds necessarily results in paring away everything not directly maximally economically productive”… emphasis on “directly”.

    I mean, it’s clear that evolution is capable of building sphex wasps, so it may be instructive to ask why the cruel and unsentimental idiot god of natural selection bothered with such things as primates in the first place, with all our putatively inefficient capacity for conscious thought and emotion and relationships, rather than populate the globe with ever-more-elaborate sphex wasps.

    Your answer with respect to love, if I understand you, is that we evolved love to convince us to have babies. But sphex wasps have babies. Why don’t we have babies sphexishly, if that’s maximally efficient? And the same question for the other social behaviors our inefficient emotions mediate.

    Well, one possibility is we were “lucky”… that we could just as easily have gone the sphex-wasp route, but we just happened to evolve these inefficient emotions instead, because evolution satisfices rather than optimizes them… but if we allow evolution to roll the dice again, especially with the improved optimizing power that comes from intelligence being part of the mechanism of evolution, then next time we’ll surely end up sphexish.

    Another possibility is that sphexishness is not as efficient as all that, and emotions aren’t as much of a liability as you fear…. that there are advantages to the ability to form ad-hoc coalitions and reform them in response to changing environments that compensate for the inefficiencies. So it wasn’t really that unlikely that we evolved that ability, and the social machinery that underlies it, and the hypothesis that ems won’t have equivalent machinery isn’t as much of a foregone conclusion as you make it sound. (Of course, future nanobot coalitions will have unrecognizable emotions, just as they have unrecognizable perceptions and goals, but that’s not the same as being entirely mindless.)

    And perhaps a similar argument applies to consciousness… maybe the reason we evolved it is not because we were lucky, but because it actually pays for itself, which is why the world contains consciousnesses as well as sphex wasps. Maybe sphexishness isn’t a foregone conclusion either.

    I think both of those possibilities are consistent with all of your assertions about a maximally productive superintelligent Malthusian economy, they simply depend on different assumptions about how much net advantage accrues to individual nanomachines with consciousness and emotions.

  29. crobin says:

    Confusion: How could consciousness “evolve” out of self-replicating grey goo? Isn’t one of the assumptions that consciousness would be maladaptive?

    • Aleph says:

      It would arise on a higher level, like how human consciousness doesn’t “evolve” from atoms but rather arises from higher-level evolution.

  30. Randy M says:

    By the way, regarding the new blog tag line: I try to avoid simple “me too” or “+1” posts, but although the volume of comments may be less for your medical-perspective or sci-fi-ish posts, I do appreciate them as much as or more than the poke-a-hornets’-nest posts.

    • Elissa says:

      me too, +1

    • suntzuanime says:

      Yes, it’s hard for me (and others I suspect) to comment on medical posts because there’s a relevant expertise I lack. Whereas the supposed experts in the field of social justice are hard for me to take seriously, so I don’t feel as bad about chipping in my uneducated two cents.

  31. RCF says:

    Would there be interaction between grey goo on different planets? I’m not sure that a grey goo that developed space travel and went to another planet would have an advantage over the native goo, and even if it did, the competition would select for local optimization, not long-term global optimization.

    • Scott Alexander says:

      I’m assuming a single focus of singularity and an otherwise uninhabited universe.

      My guess is that free from a lot of our human baggage, the two forms of goo would treat each other as trading partners and sources for previously unknown technological advances. Facing the same constraints, and able to edit their structure quickly, they would be identical to each other within a couple of seconds.

  32. Bruno Coelho says:

    Hanson keeps saying ‘the poor do smile’, drawing a parallel with the current economic market. This makes sense only if some human features spread among ems and persist long enough to remain a constraint against attacks.

  33. SP says:

    In this sort of scenario, the only place for consciousness and non-Malthusianism to go would be higher level structures.

    One of these might be the economy as a whole.

    Only tangentially related, but too delicious to not share: Eric Schwitzgebel’s “If Materialism is True, the United States is Probably Conscious”.

    • suntzuanime says:

      Yeah, “consciousness” is a red herring in this discussion, because the people talking about consciousness have no real principled explanation of what it is. There are lots of ways to shoot your foot off in philosophy, but one good one is to make arguments that rely on the properties of poorly-defined concepts. See also “free will”.

      But Scott’s argument still works if you leave “consciousness” aside and focus on beauty, love, and other relatively well-defined things.

      • Alrenous says:

        Consciousness is ontological subjectivity. For example, this means it really is inherently private: sharing a thought with someone directly means being that person directly. As another example, you can’t be mistaken about first-order mental entities. That you think you’re seeing blue is what causes you to be perceiving blue.

  34. JME says:

    I guess I just don’t get the motive here for massive scale reproduction.

    Real humans might end up in a Malthusian trap for several reasons. Four that I can think of off the bat are:

    1. Humans become senescent and dependent on younger humans for survival toward the end of their life. Reproduction creates beings more likely than most to take care of senescent humans.

    2. Humans enjoy sex, and sex often leads to reproduction (irrespective of whether the humans in question actually wish to reproduce).

    3. Humans might anticipate enjoying the experience of raising a child (and opportunities to adopt might be difficult, depending on culture and such), reading stories to your kid, playing catch, etc.

    4. Humans might want to reproduce for some other reason relating to posterity — perhaps taking modern evolutionary views of reproductive success and fitness as prescriptive rather than descriptive (“I’m a failed human being if I don’t have children and pass on my genes!”), or perhaps relating to older ideas of (for men) children being a sign of strength and virility and (for women) children being the quintessential determinant of a woman’s value. Or that sort of thing.

    With transhumans, I don’t really think 1) makes sense, because presumably at this level, while some type of “malfunction” that leaves this type of transhuman dependent and disabled (temporarily, presumably) might be possible, a continuously progressive process of senescence doesn’t seem likely.

    With 2), it also doesn’t seem to make sense. Presumably, transhumans will not have some form of pleasure that is linked to reproduction when it is undesired.

    With 3), well, maybe that could serve as a reason for reproduction. However, the way they’re described here, it seems as though the newly created being would more likely be programmed than educated, meaning that it’s difficult to imagine that raising a newly created transhuman in this scenario offers that kind of satisfaction.

    With 4), okay, this could be a reason, but if a motivated transhuman “spammer” could reproduce much faster than is desirable (more “create 500 copies a day” rather than “1 copy a year” as with humans), then it seems like most of society would be more decisive and swift about confronting this than with human population growth.

    • Ken Arromdee says:

      You left out “5) Evolution works and humans that want to massively reproduce, will outcompete humans who don’t.” It doesn’t actually matter if the humans have a good reason, or a bad reason, or a compulsion, or if they happen to believe in a religion which tells them to reproduce because God says so–*anything* which makes humans want to reproduce and can be passed on to the next generation will inevitably result in “reproduce as much as possible” spreading through society.

      This goes double for transhuman intelligences, of course. If even one has a glitch which says “reproduce 500 times as fast”, soon the majority of transhuman intelligences will be such.

      • suntzuanime says:

        Unless the other transhuman intelligences are on the lookout for that sort of behavior and neuter it before it gets out of control.

        This goes for humans too – things that make humans want to reproduce are only selected for if they don’t simultaneously make humans worse at reproducing, such as because the opposite sex is turned off by their reproduction-mindedness, or because they will have too many babies to support and the whole family will starve, or because they won’t be able to afford college educations for their children so nobody will want to reproduce with them, or whatever.

  35. Why do you follow Bostrom’s Malthusian assumption? That to me was the biggest flaw in the piece, and I don’t think it necessarily true that engineered beings would have to try to reproduce as much as possible.

  36. Pingback: » Superintelligence Bayesian Investor Blog