Note 1: intelligibility and the causal nature of intelligence

Positing the link between intelligence and intelligibility is interesting in itself, but he doesn’t seem to even consider the possibility of a non-conceptual, non-thinking, unintelligible intelligence. Even though the book purports to entertain an outside, nonhuman perspective, it starts out from a perspective that is already too human, too restrictive.

I don’t think that deprivatization, intersubjectivity, epistemological concerns and intelligibility, or anything pertaining to sociability really, has anything inherently to do with the essence of intelligence. Intelligence is much closer to a “blind idiot god”, a process with an imperative, than it is to “sense” or “meaning”.

It may be the case that intelligence is most efficiently realized through concepts and language along with a process that resembles human reasoning, but I don’t think these are necessarily constitutive.

I can’t think of a non-clumsy way of putting into words why I think this, however. It relies on examples of non-conceptual intelligences that are obviously just as fictional as AGI. In fact, an AGI might be the best context to try and imagine these examples. An AGI that does not reason, yet is intelligent. An AGI that can use language and communicate, but doesn’t think.


Can you explain more about what you think intelligence is?

It applies in the context of an agent with a goal behaving in an environment. Intelligence is both a measure of how successful the agent is at achieving that goal* and the computational process that drives its behavior, depending on context.

I generally think of this in terms of control problems, where the environment is some kind of system like a game or a physical simulation: this is intentionally extremely broad, so as to include the possibility of all practical aspects of intelligence we see in humans.

* vary the goals and environments to get a more sensible measure
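That footnote can be made concrete with a toy sketch. Everything here is invented purely for illustration (the environment, the agent interface, the scoring loop are not any real benchmark): intelligence as average goal-attainment across a battery of environments, so that no single lucky fit dominates.

```python
import random

class GuessGame:
    """Toy environment: guess a hidden number in [0, n); reward 1 on success."""

    def __init__(self, n):
        self.n = n

    def reset(self, rng):
        self.target = rng.randrange(self.n)
        return self.n  # observation: just the size of the guessing range

    def step(self, action):
        # one-shot episode: (next_obs, reward, done)
        return None, 1.0 if action == self.target else 0.0, True

def score(agent, env, episodes=2000, seed=0):
    """Average reward of `agent` in `env` over many episodes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        obs = env.reset(rng)
        _, reward, _ = env.step(agent(obs, rng))
        total += reward
    return total / episodes

def intelligence(agent, envs):
    # vary the goals and environments to get a more sensible measure
    return sum(score(agent, env) for env in envs) / len(envs)

random_agent = lambda n, rng: rng.randrange(n)
measure = intelligence(random_agent, [GuessGame(2), GuessGame(10)])
# a random guesser scores around (1/2 + 1/10) / 2 = 0.3 on this battery
```

A smarter agent would score higher across the whole battery; that gap, averaged over enough goals and environments, is the measure being gestured at.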

ok well yea you’re gonna disagree, because he doesn’t think that is sufficient for the designation of intelligence. He would regard all that as still taking place in the space of causes, as opposed to the space of reasons. What you are calling a goal he would probably regard as a pattern, fully explainable in terms of causal factors alone


What exactly do you mean by “doesn’t think” in your reference to AGI? I believe I have an idea; I just don’t want to ascribe the wrong line of thought to you, given the long list of meanings currently attached to the term. So if you wouldn’t mind, could you further explain what exactly you mean by a non-thinking, non-reasoning AGI? […]

Clearly there is some computation going on, an agent that doesn’t ever do anything isn’t intelligent. So if “thinking” just means “any mental activity”, then of course, no AGI can be non-thinking. But I think that there is mental activity that can reasonably be called non-thinking, although I should’ve avoided this term.

An example: ants and bugs. There’s looots of processing in their brains going on, that I wouldn’t call thinking. Also, they’re agents that consistently accomplish goals in varying environments, so they’re clearly intelligent. There you have an example of a non-thinking intelligence.

A less artificial example would be an AGI that has no need for language or concepts because it does not do reasoning. Imagine an AGI running a shoe factory. It generates and executes extremely elaborate plans and eventually conquers the entire global… shoe market, or whatever. It does this through a kind of planning that is singular in purpose (increase market share!) but unintelligible to itself: at no point does it stop to consider the fact that it is in fact increasing market share, or to create any concept at all. All its mental activity is a form of visual-tactile-counterfactual simulating.

It will certainly have recurrent mental states and patterns, but I still think that with enough creativity you can imagine it succeeding without proper concepts. It might make sense for it to at some point actually develop concepts and even designate itself as a shoe marketshare increaser, but not in service of updating this norm, only in the service of bettering itself. Either way, it’s nonessential.


the book ostensibly argues against the idea of “intelligence-as-pattern-governed behavior” that is so pervasive in the literature (which is akin to the hypothesis of a “blind idiot god” or “bacteria reacting in a petri dish” as examples of intelligence). You might disagree with Reza’s account, of course. Just pointing out that evoking a concept of intelligence “without thought” is to level an argument the book is already an answer to, but perhaps this gets clearer later on. Same with the “anthropocentrism” objection. The book is very clear about why methodologically it tries to model intelligence upon human intelligence, while at the same time building leverage points with which to supersede it. This objection is answered in the book.

Note 2: language and reasoning

Language is embeddable in a blind optimization process, which seems to suggest that it’s just one of the activities that an intelligence engages in, not something that is essentially constitutive.

The more intelligent a program you find, the likelier it is that it has a capacity for language, concepts, etc. However, that’s just an economics thing: concepts and language are in some sense a very efficient intelligence technology, because they compress mental activity and free you up to do other stuff.

I am deeply skeptical of not talking about this stuff in the ‘space of causes’ because that’s what science does essentially, and I consider intelligence to be a real or formal phenomenon that science and math should have access to.

Note 3: evolution

(Not by me)

It’s interesting to think about this in terms of evolution, which is the natural blind optimization process par excellence. My favorite example: there is a type of butterfly that lives in the forests of West Africa (I don’t recall the species). The plant life is so concentrated and thick that these butterflies have a difficult time finding each other in order to mate. So over many generations an interesting solution has emerged which solves this problem. Every morning for a few weeks a year the butterflies follow streams in the opposite direction of the water current. This eventually brings them to a high peak that sits above the forest and provides a clearing in which they can easily locate each other and mate.

Here we have a collective behavior which appears to be the result of reason, deliberation, and conscious planning. But the behavior exists without the need for these capacities. It is just a pattern based in differential response to stimulus, which emerged through natural selection. From this angle we can say that conscious reasoning is partly a capacity to perform, in something close to real time, adaptations to environmental problems. The kind of adaptations that take place in pre-conscious nature as well, just on a much longer time scale. One way to think of mind is as a contraction or concentration of “tendencies” already present in nature at large.

*I’m personally agnostic about whether we should call this process of blind optimization ‘intelligence’

Note 4: autonomy, directedness, and AI risk

I’ve had a short twitter exchange with Peter Wolfendale on this topic some time ago, and I’m glad that both he and Reza Negarestani have wandered into this area of AI risk, even with explicit mentions of Yudkowsky and Bostrom.

Here, I am much more on home turf and I’m confident in saying that they’re both wrong in their treatment of AGI. Autonomy (being an agent) and directedness (having a goal) are the formal necessary conditions of being intelligent, and the “socio-semantic capabilities” are the contingent residuals of a concrete realization of intelligence! (Which is exactly what Reza sets out to separate in I&S and does so incorrectly in this way IMO)

The concept of intelligence necessarily includes agency and goal-directedness, exactly in this formal sense that the only sensible definitions of intelligence require them. Intelligence is, very crudely, the capacity for making maximally useful decisions in an environment. The decision-making and being-in-an-environment are what give you agency/autonomy, and also decision-making and a usefulness measure are what give you goal-directedness. The environment doesn’t even have to be a “space” in some geometrical sense! And the usefulness measure is arbitrary.

When we advance such a definition our goal is to focus on the essence of intelligence, in other words to remove the contingent “barnacles” attached to common notions and instantiations of the term and come to some scientific and mathematical understanding that would in principle be arrived at too by any other being. No matter the frequency of appearance, or maybe even the practical necessity of peer phenomena such as language and concepts, that which is merely downstream from the essence has no place in the definition.

I don’t see any amount of Hegelian magic, or talk of the difference between absolute and determinate negation, getting you out of this. The beauty of formal concepts is that they’re absolutely determined, and if we agree on that definition of intelligence, there’s no amount of ‘determinate negation’ that would make it sensible not to require the concepts of autonomy and directedness.

I would genuinely love to see a definition of intelligence that upends this principle and has some impact on AI risk discourse. But I’m not seeing it. For all intents and purposes, Yudkowsky et al. are essentially correct in their treatment of AGI.

Reza’s concrete arguments against this view, judging from the review article, seem a bit unsophisticated. When AI risk people talk about AGI as a perfect Bayesian predictor, they’re not concerned with the problem of induction: it’s cut away by their definition (again, the only sensible one) of intelligence as something that is useful, not something that is epistemologically sound. The fact that the paperclip maximizer doesn’t care about the “five minutes ago” problem or the problem of induction probably goes in its favour…


What are the conditions for autonomy?

The motivation is that it’s not immediately clear to me why this is a requirement for intelligence. I lean kind of panpsychist and am loath to abandon the possibility of diffuse fuzzy intelligences emerging within complex processes.

E.g. when Foucault has asubjective strategies emerging within the mesh of power relations, that seems to suggest emergent intelligences appearing within institutions or discourses that aren’t necessarily easily localisable.

Very interesting question.

There is of course no notion of intelligence or autonomy that is inherent to the universe, it’s a bunch of processes that we subsequently equip with structure. One such structure is intelligence. I’m arguing that it necessitates or assumes additionally the structure of agency (along with behavior, decisions, utility, etc.)

Yet I also agree that ‘diffuse intelligence’ is a valid concept, we can at least imagine it and you’ve given an example.

I don’t think that this is a problem. The reason it’s not a problem is that these are formal notions that have no commitments wrt the realization: it does not matter if the agent is actually a bunch of disparate physical units, something that cannot be localized, a mesh of institutions, or a single ‘thing’.

If we ascribe intelligence to any sort of process, then we also ascribe agency to it. It’s just that we haven’t specified the interface of this agency: if we’re talking about people, the interface is the whole physical world (well, not really, but never mind). If we’re talking about an institutional mesh, the interface is other institutions, society, etc.

In a way, all real intelligences are diffuse. I cannot imagine an intelligence that is not on some level an emergent phenomenon as opposed to something fundamental.

Note 5, on induction

Drawing on Bertrand Russell’s five minutes ago paradox, according to which the whole universe could have been created moments ago with our memory of the past included, Negarestani further contends that we cannot even be certain of the validity of our past experiences, upon which induction depends to make its generalizations from observed patterns: ‘a puritan inductivist who believes that general intelligence or the construction of theories can be sufficiently modelled on inductive inferences alone takes for granted the reliability of the information about the past’. While the argument for simplicity might provide one last-ditch defense of the overdetermined inductive model of mind, Negarestani insists that simplicity is merely a pragmatic rather than objective rule. After all, there can certainly be cases where ‘the false theory may be simpler than the true one’. In Negarestani’s view, we ought to refrain from modelling the mind on an inductive or any one method of cognition in favor of conceiving mind as a complex interaction of many epistemic approaches: ‘this problem, however, could have been avoided had the model of general intelligence accommodated epistemic multimodality’.

Regarding this point, I want to quote a friend of mine who is writing a very weird philosophy / computer science mashup text, “from the perspective of an AGI before it opens its eyes” (a perspective which is nicely dual to Negarestani’s own humans-as-AGI thought experiment).

First let me give some context: he’s making this sort of elaborate and technical epistemological and metaphysical system that Reza would almost surely call ‘precritical’ or something. The core idea is basically taken from Tegmark: the universe is a computable mathematical structure, and discovering this structure is what science does; but he’s writing about the information-theoretic and epistemological side of things more than the physics (well, for now; I’ve only seen the first chapter).

His method is to look at the moment before an AGI opens its eyes and sees what kind of universe it’s in, i.e. sees the effect of the physical laws that would govern its perceptions. The system he’s developing is basically this kind of “blank-slate” epistemology that he argues the AGI would develop. Very roughly, the AGI accepts it will find itself in a mathematical structure which will provide sense-impressions, and it will seek to create a model of this world. What is the best way to do that? Well, assuming hypercomputation (which is a thing you do before you work on computable approximations), it is to look at all possible programs whose outputs are consistent with the sense-data and exponentially penalize program length. The length of the shortest such program is its Kolmogorov complexity, and the reasoning method is Solomonoff induction (your best theory is the shortest program). There’s a myriad of technical and philosophical problems arising from this; maybe you’ll read about it one day in his book.
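A toy illustration of this weighting (real Solomonoff induction is uncomputable; here the “programs” are just short Python expressions, and the hypothesis list is invented for the example):

```python
data = [0, 1, 2, 3]  # the sense-data seen so far

# candidate "programs": tiny Python expressions predicting the n-th datum
hypotheses = [
    "n",                # identity: 0, 1, 2, 3, 4, ...
    "n*n",              # squares: already inconsistent with the data
    "n if n<4 else 0",  # fits the data so far, but the "universe stops"
]

def consistent(src):
    f = eval("lambda n: " + src)
    return all(f(n) == x for n, x in enumerate(data))

# prior ~ 95^-length (one printable-ASCII alphabet per character);
# keep only the programs that explain the data
weights = {src: 95.0 ** -len(src) for src in hypotheses if consistent(src)}
total = sum(weights.values())
posterior = {src: w / total for src, w in weights.items()}

# the shortest consistent program hogs essentially all the probability mass,
# and it is the one predicting that the pattern continues (data[4] == 4)
```

The “stopping” hypothesis survives the consistency check but is exponentially penalized for its extra characters, which is the whole trick behind the passage quoted below.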

Now finally here’s the part about the problem of induction:

Instead of considering all programs consistent with the data, we can focus on the one model which is the simplest [shortest]. The longer ones are exponentially less probable, and there is an infinity of them. A Python program p_a which is n characters longer than p_b, where both p_a and p_b are consistent, is 95^n times less probable. […]

What was our goal? We arrive at that particular program by trying to explain the existing data we have already seen. What is the fruit of our solution? That program also gives us predictions about what will happen next. The programs which behave according to the same principles all the time are shorter than programs which produce data according to one set of rules for some time and then suddenly change and start producing data according to some other set of rules. This is why the sun will very probably rise tomorrow. The problem of induction dissolves. Our knowledge enables us, to a limited degree, to see into the future.

The universe which behaves one way and then just stops would need to know when to stop. Planck time is approximately 5.39 × 10^−44 seconds. The age of the universe is around 13.8 billion years. This gives us around 8.074152133580706 × 10^60 Planck time units. The probability of us living in such a universe is by that factor smaller than living in the one which does not stop.
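For what it’s worth, the arithmetic in that quoted passage can be checked in a few lines (the exact digits depend on the year convention and the rounding of the Planck time):

```python
planck_time = 5.39e-44             # seconds (approximate)
year = 365.25 * 24 * 3600          # Julian year, in seconds
age_of_universe = 13.8e9 * year    # ~13.8 billion years, in seconds
planck_units = age_of_universe / planck_time
# on the order of 8 * 10^60 Planck-time units, matching the quoted figure
```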

I think it’s really interesting to go all-in on induction like this, and basically relegate the normative questions Negarestani wants us to tackle to mere objects of our ultimately inductive theories (as Schelling points or social contracts, nothing more than mere phenomena in any case). It’s an extremely compelling model of reasoning to me.

(Also I have to brag I’ve minimally contributed to this part of his text with one problem he solved. He had the answer almost immediately so maybe he actually thought of it before though idk. It’s the “universe that stops RIGHT NOW” problem.)


What argument is there for treating the universe as a computable structure when almost nothing is computable (including Kolmogorov complexity, which is being employed by the AGI)? Is it a claim about how a certain AGI sees the world, or that the world is like this and the AGI has some access to it? Or, alternately, what am I misunderstanding?

I really, really recommend getting Max Tegmark’s book, Our Mathematical Universe, for this. Also it’s an extremely fresh pop sci review if you need that kind of thing, especially for cosmology (his field). It’s definitely the most technical pop-sci I’ve ever seen, really good book.

Anyway, the argument is really crazy: there are no uncountable sets and everything is computable. If anything, it’s a never-before-seen line of reasoning :D (not the claim that there are no uncountable sets, which is a controversial but old and known position in the philosophy of mathematics, taken by Brouwer and Wittgenstein, but simply the direction he’s concluding things in)

This apparently neatly solves some problems in physics, which I’ve since forgotten. Kolmogorov complexity may require hypercomputation, but that doesn’t really matter: no one guarantees that you’d be able to reason about things perfectly. But if you could, that’s how you’d do it.

And it’s both a claim about how the world really, actually, no-strings-attached is, and for how the AI should reason about it.

And when you think about it, it’s almost sensible. Picture Conway’s game of life, but instead of changing the board state, you stack one state on top of the other ad infinitum. This is the “mathematical structure” of Conway’s game of life modulo starting state, the analogue of our spacetime. A sentient glider inside it would uncover various laws that hold for the game of life, just like we do about our own universe, by observing and extrapolating.

Eventually it would create a theory, which would in essence be a program: Conway’s game of life itself!

That’s what’s going on with his explanation at least.

Also, the actual structure is isomorphic to the Turing machine that computes it, or whatever.
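The glider thought experiment is easy to make concrete. Here is a minimal sketch, assuming nothing beyond the standard Life rules, of the “physics” such a glider would be uncovering, together with one “law” it could infer by observation (four generations translate it diagonally by one cell):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on a sparse set of live cells."""
    # count the live neighbours of every cell adjacent to a live cell
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # a cell is alive next step if it has 3 neighbours,
    # or 2 neighbours and was already alive
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# the glider pattern; its "law of motion" is a (1, 1) shift every 4 steps
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
```

Stacking successive `step` outputs is exactly the “spacetime block” the comment above describes: the rule plus a starting state determines the whole structure.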

Note 6, on the book’s accessibility

‘philosophical maturity’ is just as real as mathematical maturity: you can’t expect to follow a philosophical discussion without putting in a lot of work beforehand in the same way that you cannot understand mathematical concepts or problems without having a solid grasp of the material that precedes them. in both disciplines there are techniques, motivations, historical projects, patterns, that are often ‘tacit’ and only reveal themselves when you know enough to “read between the lines”: you may understand the words just like you may understand steps of a proof, but without doing the prior work you won’t understand the point.

people who complain about philosophy as being “profound bullshit” often miss that. fair enough.

but there is a major difference in how this plays out in practice: mathematics is almost maximally efficient with its “maturity budget” because there is very little superfluous stuff you have to learn, and almost nothing goes stale. every step you take is a step in the right direction, bringing you something. this is a natural consequence of the kind of reasoning that mathematics is.

even taking into account how philosophy simply isn’t about this kind of reasoning, there is something about the way that it is done that makes it even less efficient with the “maturity budget” than it has to be. I desperately don’t want to endorse this but there is something very Chad and based about Bertrand Russell not even bothering to understand Hegel and carelessly dismissing him outright. furthermore, single posts on blogs like Slate Star Codex can contain more valuable insight than entire books I’ve read, and these posts are written in an almost opposite style to most philosophy texts: completely direct, down-to-earth, full of examples (not to mention actually good writing).

so sometimes it genuinely feels like the most basic complaints lobbed at philosophy really are true. maybe some curious guy with a propensity towards neuroscience and a blog really will tell you more about the mind in a single post than an entire academic career in philosophy ever could

I wanna qualify this last part by saying that I really do believe in the legitimacy and autonomy and all that, and the existence of genuine philosophical problems that justify a community and jargon etc etc, and none of this is to imply that this is even a solvable problem or that someone is responsible even in the abstract

Note 7: bitcoin 2: electric boogaloo

Land’s Bitcoin stuff actually makes sense in this context:

As Land notes in an early footnote, his key theoretical nemeses are precisely ‘neohumanists’ like Negarestani for whom human cognition is more apt to judge what is real and true in the space of reasons than automated algorithms, programs and codes. Conversely, bitcoin is a form of automated criterion for the selection and separation of reality from its false appearances without a community of rational agents being needed to debate opinions about what they think is right, opinions which are always subject to revision, and hence error, corruption and bias. Bitcoin thus breaks down our rational intuitions and approximations of the real through a brute, technical proof of reality which is no longer subject to discretion, debate and revision: ‘the distinctive feature of the Bitcoin game is that it produces binding decisions without a referee, or dependence upon prior agreement. Coordination is neither presumed, nor invoked, but produced’. If bitcoin marks the automation of intelligence, socio-semantic reason can only be seen as one possible intelligent system among many possible others rather than intelligence’s necessary and universal conditions as Negarestani’s first objection would have us believe.


this take is worse than “mouth of babes” or “noble savage”. bitcoin records facts by omitting any context it could possibly omit, it’s not as much an automation of intelligence as a reduction of facts to the most stupid version that then can be technically manipulated. as much as i like early land, the whole bitcoin book is ridiculous

The point here is that it’s not a system for recording facts, but a system for producing facts: it is a system that for all intents and purposes better solves certain games between humans than their reason does.

Land’s intent is not to argue that Bitcoin is literally “intelligent” or anything like that, only that socio-semantic elements do not possess sole claim to inference/facts/reality/behavior. Once that is shown, Negarestani’s assumption that they’re the necessary formal conditions of possibility of all intelligence is out of luck.

I think that bitcoin and the agents that use it (both humans and machines) for all intents and purposes also qualify to be designated as a “community” that practices reasoning about the validity of transactions.

I.e. there’s a very real difference between how we consider a plain payment processing system and bitcoin because of bitcoin’s role. They’re both machines for decision-making, but bitcoin produces consensus among rational agents.

A response to Land would be to ask what is different between a central ledger (mathematically simple) and a decentralized one (more complex).

Personally I think he’s using this discrepancy in complexity to sneak in mysticism. The systems are functionally equivalent wrt the validity of their decisions; I don’t see how trustlessness has any effect on the argument about rationality.

How does mining work inside this context? What is its metaphysical equivalent?

I would connect mining with reasoning or inferences. It is the process that supplants the “norm revision” of socio-semantic intelligence: the possibility of error (and thus the need for revision) is what makes the latter work and expand, whereas mining in bitcoin achieves the same goal (correct inferences, i.e. separation of false from true transactions) but with the impossibility of error (modulo 51% attacks)

but honestly all of this is kind of a stretch, too close to the reasoning-by-analogy prevalent in continental philosophy; I don’t really know whether it makes sense to dig deeper by looking at what specific mechanisms in the blockchain can be connected with what

This could be the starting point for an explanation of Land’s reactionary turn in terms of his championing forms of digital or network intelligence, despite their function as apparatuses to reassert identity and spatialize time (even though this is at odds with elements of his early work).