Activity Feed

  Knut Sondre Sæbø revised idea #4932.

If we normalize a theory into the parts that can be put on a computer, the types it uses, the nodes (specific values) it commits to, and the functions between them, we can score the theory by how many of those parts actually run.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    kind: str           # "type", "node", or "function"
    programmable: bool  # does it compile and run?

@dataclass
class Theory:
    name: str
    items: list[Item]

    def score(self) -> float:
        if not self.items:
            return 0.0
        ok = sum(1 for i in self.items if i.programmable)
        return ok / len(self.items)

def compare(a: Theory, b: Theory) -> None:
    print(f"{a.name:<20} {a.score():.0%}")
    print(f"{b.name:<20} {b.score():.0%}")

Demeter theory

demeter = Theory("Demeter", [
    Item("Latitude", "type", True),
    Item("Temperature", "type", True),
    Item("Goddess", "type", False),
    Item("Emotion", "type", False),
    Item("Demeter", "node", False),
    Item("emotion_state", "node", False),
    Item("emotion_at", "function", False),
    Item("emotion_weather", "function", False),
])

Axial tilt theory

axial_tilt = Theory("Axial tilt", [
    Item("Latitude", "type", True),
    Item("Temperature", "type", True),
    Item("Angle", "type", True),
    Item("Insolation", "type", True),
    Item("axial_tilt", "node", True),
    Item("solar_constant", "node", True),
    Item("solar_angle", "function", True),
    Item("insolation_at", "function", True),
    Item("temperature", "function", True),
])

compare(demeter, axial_tilt)

Output:
Demeter              25%
Axial tilt           100%

I'm not a programmer, so the code below is 100% AI-generated. But here is an attempt. If we normalize a theory into the parts that can be put on a computer, the types it uses, the nodes (specific values) it commits to, and the functions between them, we can score the theory by how many of those parts run.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    kind: str           # "type", "node", or "function"
    programmable: bool  # does it compile and run?

@dataclass
class Theory:
    name: str
    items: list[Item]

    def score(self) -> float:
        if not self.items:
            return 0.0
        ok = sum(1 for i in self.items if i.programmable)
        return ok / len(self.items)

def compare(a: Theory, b: Theory) -> None:
    print(f"{a.name:<20} {a.score():.0%}")
    print(f"{b.name:<20} {b.score():.0%}")

Demeter theory

demeter = Theory("Demeter", [
    Item("Latitude", "type", True),
    Item("Temperature", "type", True),
    Item("Goddess", "type", False),
    Item("Emotion", "type", False),
    Item("Demeter", "node", False),
    Item("emotion_state", "node", False),
    Item("emotion_at", "function", False),
    Item("emotion_weather", "function", False),
])

Axial tilt theory

axial_tilt = Theory("Axial tilt", [
    Item("Latitude", "type", True),
    Item("Temperature", "type", True),
    Item("Angle", "type", True),
    Item("Insolation", "type", True),
    Item("axial_tilt", "node", True),
    Item("solar_constant", "node", True),
    Item("solar_angle", "function", True),
    Item("insolation_at", "function", True),
    Item("temperature", "function", True),
])

compare(demeter, axial_tilt)

Output:
Demeter              25%
Axial tilt           100%

  Knut Sondre Sæbø commented on idea #4931.

It seems to me that "hard-to-vary" is itself the criterion that a theory should be as programmable as possible. As you note, the goal of a theory should be to make it as explicit as possible, and a program is explicitness in its most complete form. Any theory with ambiguous components automatically has a breaking point that is changeable without detection. A programmable theory has strict causal relations all the way from the axioms to the prediction, which makes any change to the components detectable. In other words: a theory is hard to vary to the extent that its components and the couplings between them can be specified as a program. If a theory is vague, you cannot tell when it has been varied.

This gives a concrete operationalization. A breaking point is any place in the formalization where the chain stops being programmable: a primitive with no implementable type, a coupling between components that cannot be specified, or a step that requires implicit theories to fill the explanatory gaps. A mathematical theory with no remaining gaps has zero breaking points and is maximally hard to vary. A theory in natural language is already worse, because words carry ambiguity and vary from mind to mind. This does not rule out better and worse theories in natural language, since we can use more or less ambiguous words and relations. But it does create a hierarchy of hard-to-vary explanations, where the share of the explanation that is programmable, or at least unambiguous, forms the basis for the criterion.
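The breaking-point count described above can be sketched in the same style as the scoring code in idea #4932. This is a minimal illustration, not part of the original comment: the helper name `breaking_points` and the per-kind grouping are my own, assuming the same `Item` structure with `kind` labels "type", "node", and "function".

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    kind: str           # "type", "node", or "function"
    programmable: bool  # does it compile and run?

def breaking_points(items: list[Item]) -> dict[str, int]:
    """Count non-programmable items per kind: each one is a place
    where the chain from axioms to prediction stops being a program."""
    counts: dict[str, int] = {}
    for item in items:
        if not item.programmable:
            counts[item.kind] = counts.get(item.kind, 0) + 1
    return counts

# The Demeter theory's items, as listed in idea #4932.
demeter_items = [
    Item("Latitude", "type", True),
    Item("Temperature", "type", True),
    Item("Goddess", "type", False),
    Item("Emotion", "type", False),
    Item("Demeter", "node", False),
    Item("emotion_state", "node", False),
    Item("emotion_at", "function", False),
    Item("emotion_weather", "function", False),
]

print(breaking_points(demeter_items))
# {'type': 2, 'node': 2, 'function': 2}
```

A theory with zero breaking points in every kind would be "maximally hard to vary" in the sense argued above; Demeter's six breaking points mark exactly where the explanation stops being specifiable.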

This is probably too crude a formalization. But evaluating the two theories of Demeter's emotions and axial tilt as explanations, you could check how much of each is programmable. Detecting seasons is programmable in both cases through temperature and changes in weather. Demeter's emotions and the causal link from them to the weather, which is the entire explanation, are not programmable. In the axial tilt theory, every component is. So on this measure Demeter scores 25% and axial tilt scores 100%.

#4931​·​Knut Sondre Sæbø, 7 days ago

If we normalize a theory into the parts that can be put on a computer, the types it uses, the nodes (specific values) it commits to, and the functions between them, we can score the theory by how many of those parts actually run.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    kind: str           # "type", "node", or "function"
    programmable: bool  # does it compile and run?

@dataclass
class Theory:
    name: str
    items: list[Item]

    def score(self) -> float:
        if not self.items:
            return 0.0
        ok = sum(1 for i in self.items if i.programmable)
        return ok / len(self.items)

def compare(a: Theory, b: Theory) -> None:
    print(f"{a.name:<20} {a.score():.0%}")
    print(f"{b.name:<20} {b.score():.0%}")

Demeter theory

demeter = Theory("Demeter", [
    Item("Latitude", "type", True),
    Item("Temperature", "type", True),
    Item("Goddess", "type", False),
    Item("Emotion", "type", False),
    Item("Demeter", "node", False),
    Item("emotion_state", "node", False),
    Item("emotion_at", "function", False),
    Item("emotion_weather", "function", False),
])

Axial tilt theory

axial_tilt = Theory("Axial tilt", [
    Item("Latitude", "type", True),
    Item("Temperature", "type", True),
    Item("Angle", "type", True),
    Item("Insolation", "type", True),
    Item("axial_tilt", "node", True),
    Item("solar_constant", "node", True),
    Item("solar_angle", "function", True),
    Item("insolation_at", "function", True),
    Item("temperature", "function", True),
])

compare(demeter, axial_tilt)

Output:
Demeter              25%
Axial tilt           100%

  Knut Sondre Sæbø commented on idea #3069.

My critique of David Deutsch’s The Beginning of Infinity as a programmer. In short, his ‘hard to vary’ criterion at the core of his epistemology is fatally underspecified and impossible to apply.

Deutsch says that one should adopt explanations based on how hard they are to change without impacting their ability to explain what they claim to explain. The hardest-to-change explanation is the best and should be adopted. But he doesn’t say how to figure out which is hardest to change.

A decision-making method is a computational task. He says you haven’t understood a computational task if you can’t program it. He can’t program the steps for finding out how ‘hard to vary’ an explanation is, if only because those steps are underspecified. There are too many open questions.

So by his own yardstick, he hasn’t understood his epistemology.

You will find that and many more criticisms here: https://blog.dennishackethal.com/posts/hard-to-vary-or-hardly-usable

#3069​·​Dennis HackethalOP revised 6 months ago

It seems to me that "hard-to-vary" is itself the criterion that a theory should be as programmable as possible. As you note, the goal of a theory should be to make it as explicit as possible, and a program is explicitness in its most complete form. Any theory with ambiguous components automatically has a breaking point that is changeable without detection. A programmable theory has strict causal relations all the way from the axioms to the prediction, which makes any change to the components detectable. In other words: a theory is hard to vary to the extent that its components and the couplings between them can be specified as a program. If a theory is vague, you cannot tell when it has been varied.

This gives a concrete operationalization. A breaking point is any place in the formalization where the chain stops being programmable: a primitive with no implementable type, a coupling between components that cannot be specified, or a step that requires implicit theories to fill the explanatory gaps. A mathematical theory with no remaining gaps has zero breaking points and is maximally hard to vary. A theory in natural language is already worse, because words carry ambiguity and vary from mind to mind. This does not rule out better and worse theories in natural language, since we can use more or less ambiguous words and relations. But it does create a hierarchy of hard-to-vary explanations, where the share of the explanation that is programmable, or at least unambiguous, forms the basis for the criterion.

This is probably too crude a formalization. But evaluating the two theories of Demeter's emotions and axial tilt as explanations, you could check how much of each is programmable. Detecting seasons is programmable in both cases through temperature and changes in weather. Demeter's emotions and the causal link from them to the weather, which is the entire explanation, are not programmable. In the axial tilt theory, every component is. So on this measure Demeter scores 25% and axial tilt scores 100%.

  Knut Sondre Sæbø commented on criticism #4915.

In that case, I'm unclear what "100% true" means.

Perfect correspondence with the facts.

For example, if it’s currently raining, and you say it is, then your statement is 100% true.

#4915​·​Dennis HackethalOP, 9 days ago

Would you agree that this notion of truth amounts to truth relative to our conceptual framework? When you say it's 100% true that it's raining, "the facts" you correspond to are already facts within that framework, and not reality.

At the molecular level there are no discrete raindrops, only a continuous distribution of H2O molecules constantly evaporating and condensing, and some of those very molecules are diffusing through the roof into the house, since no material is 100% impermeable to water vapor.

  Knut Sondre Sæbø revised criticism #4926.

When we search for truth, I think most of us are trying to understand the causal structure of the universe, not just predict it with our own fitted models. This is just a criticism of this notion of truth, which waters the concept down from what I at least think of as truth. Many incompatible theories can fit the same facts without capturing any causality. If you agree that truth is correspondence with reality, and not with the facts within our conceptual framework, the problem reemerges.

A statement carves the world into concepts standing in relations. For it to correspond with reality, those concepts must pick out genuine entities and relations in reality. But we have no way of verifying that our conceptual carvings track or pick out entities and relations in reality.

You might disagree. But when we search for truth, I think most of us are trying to understand the causal structure of the universe, not just predict it with our own fitted models. This is just a criticism of this notion of truth, which waters the concept down from what I at least think of as truth. Many incompatible theories can fit the same facts without capturing any causality. If you agree that truth is correspondence with reality, and not with the facts within our conceptual framework, the problem reemerges.

A statement carves the world into concepts standing in relations. For it to correspond with reality, those concepts must pick out genuine entities and relations in reality. But we have no way of verifying that our conceptual carvings track or pick out entities and relations in reality. This might not imply that some theories can't be more true than others. But it definitely rules out absolute truth.

  Knut Sondre Sæbø revised criticism #4925.

When we search for truth, I think most of us are trying to understand the causal structure of the universe, not just predict it with our own fitted models. This is just a criticism of this notion of truth, which waters the concept down from what I at least think of as truth. Many incompatible theories can fit the same facts without capturing any causality. If you agree that truth is correspondence with reality, and not with the facts within our conceptual framework, the problem reemerges.

A statement carves the world into concepts standing in relations. For it to correspond with reality, those concepts must pick out genuine entities and relations in reality. But we have no way of verifying that our conceptual carvings track the world's entities and relations. And therefore we can't know if a statement is true, or if it merely corresponds with the facts (our ideas/perceptions of the world).

When we search for truth, I think most of us are trying to understand the causal structure of the universe, not just predict it with our own fitted models. This is just a criticism of this notion of truth, which waters the concept down from what I at least think of as truth. Many incompatible theories can fit the same facts without capturing any causality. If you agree that truth is correspondence with reality, and not with the facts within our conceptual framework, the problem reemerges.

A statement carves the world into concepts standing in relations. For it to correspond with reality, those concepts must pick out genuine entities and relations in reality. But we have no way of verifying that our conceptual carvings track or pick out entities and relations in reality.

  Knut Sondre Sæbø addressed criticism #4906.

I think you misunderstand both my own argument and the meaning of ambiguity.

You’re saying that, to hold a true idea in the sense of absolute truth in my head, I’d have to have perfect definitions, which require infinite amounts of information, and having all that information is impossible. Right?

While you obviously know what those words mean, you do not have absolute, 100% defined boundaries of what they refer to and what they don't.

I think it’s enough to know what the words mean for the idea to be true. We don’t have to have “100% defined boundaries”.

Truth means correspondence with the facts (Tarski). Not infinite precision.

I think a ‘trick’ cynics use (not maliciously, still I like to call it a trick) is to set an unrealistically high standard for truth. And then, when no idea ends up being able to meet that standard, they say the idea can’t be true.

#4906​·​Dennis HackethalOP, 10 days ago

When we search for truth, I think most of us are trying to understand the causal structure of the universe, not just predict it with our own fitted models. This is just a criticism of this notion of truth, which waters the concept down from what I at least think of as truth. Many incompatible theories can fit the same facts without capturing any causality. If you agree that truth is correspondence with reality, and not with the facts within our conceptual framework, the problem reemerges.

A statement carves the world into concepts standing in relations. For it to correspond with reality, those concepts must pick out genuine entities and relations in reality. But we have no way of verifying that our conceptual carvings track the world's entities and relations. And therefore we can't know if a statement is true, or if it merely corresponds with the facts (our ideas/perceptions of the world).

  Ed Matthews revised idea #4919 and marked it as a criticism. The revision addresses idea #4921.

I have marked this as a criticism.


If you take an idea from me and produce a derivative work you may change the value of my copy.

It need not necessarily be a decrease in value. For example, a novel derivative work created by you may increase purchases of my works. Alternatively, your work may tarnish the brand associated with my work, or even directly compete with me, and reduce my sales.

I may not want to take this risk. I ask you not to take such actions in exchange for me sharing a copy with you (with agreed restrictions). If you accept and breach the agreed restrictions, you have violated our contract.

If you take an idea from me and produce a derivative work you may change the value of my copy.

It need not necessarily be a decrease in value. For example, a novel derivative work created by you may increase purchases of my works. Alternatively, your work may tarnish the brand associated with my work, or even directly compete with me, and reduce my sales.

I may not want to take this risk. I ask you not to take such actions in exchange for me sharing a copy with you (with agreed restrictions). If you accept and breach the agreed restrictions, you have violated our contract.

  Ed Matthews commented on idea #4921.

Hi Ed, welcome to Veritula. If this idea is meant as a criticism (it sounds like one), be sure to revise it and check the criticism checkbox. See also ‘How Does Veritula Work?’

#4921​·​Dennis Hackethal, 8 days ago

I see, thank you.

  Dennis Hackethal commented on idea #4919.

If you take an idea from me and produce a derivative work you may change the value of my copy.

It need not necessarily be a decrease in value. For example, a novel derivative work created by you may increase purchases of my works. Alternatively, your work may tarnish the brand associated with my work, or even directly compete with me, and reduce my sales.

I may not want to take this risk. I ask you not to take such actions in exchange for me sharing a copy with you (with agreed restrictions). If you accept and breach the agreed restrictions, you have violated our contract.

#4919​·​Ed Matthews revised 8 days ago

Hi Ed, welcome to Veritula. If this idea is meant as a criticism (it sounds like one), be sure to revise it and check the criticism checkbox. See also ‘How Does Veritula Work?’

  Ed Matthews commented on idea #4919.

If you take an idea from me and produce a derivative work you may change the value of my copy.

It need not necessarily be a decrease in value. For example, a novel derivative work created by you may increase purchases of my works. Alternatively, your work may tarnish the brand associated with my work, or even directly compete with me, and reduce my sales.

I may not want to take this risk. I ask you not to take such actions in exchange for me sharing a copy with you (with agreed restrictions). If you accept and breach the agreed restrictions, you have violated our contract.

#4919​·​Ed Matthews revised 8 days ago

Risk aversion is widespread enough that restrictive terms may be implicit.

  Ed Matthews revised idea #4918.

If you take an idea from me and produce a derivative work you may change the value of my copy.

It need not necessarily be a decrease in value. For example, a novel derivative work created by you may increase purchases of my works. Alternatively, your work may tarnish the brand associated with my work, or even directly compete with me, and reduce my sales.

I may not want to take this risk. I ask you not to take such actions in exchange for me sharing a copy with you (with agreed restrictions). If you accept and breach the agreed restrictions, you have violated our contract.

Risk aversion is widespread enough that the contract is implicit.

If you take an idea from me and produce a derivative work you may change the value of my copy.

It need not necessarily be a decrease in value. For example, a novel derivative work created by you may increase purchases of my works. Alternatively, your work may tarnish the brand associated with my work, or even directly compete with me, and reduce my sales.

I may not want to take this risk. I ask you not to take such actions in exchange for me sharing a copy with you (with agreed restrictions). If you accept and breach the agreed restrictions, you have violated our contract.

  Ed Matthews commented on criticism #2017.

I don’t think the issue hinges on whether something is physically scarce, whatever that’s supposed to mean. After all, all information is physical, as David Deutsch likes to emphasize. The real distinction is this: stealing someone’s digital money deprives them of the ability to use it, while copying someone’s novel does not prevent the author from accessing or using their own work. The former is zero-sum; the latter is not.

#2017​·​Amaro Koberle, 7 months ago

If you take an idea from me and produce a derivative work you may change the value of my copy.

It need not necessarily be a decrease in value. For example, a novel derivative work created by you may increase purchases of my works. Alternatively, your work may tarnish the brand associated with my work, or even directly compete with me, and reduce my sales.

I may not want to take this risk. I ask you not to take such actions in exchange for me sharing a copy with you (with agreed restrictions). If you accept and breach the agreed restrictions, you have violated our contract.

Risk aversion is widespread enough that the contract is implicit.

  Dennis Hackethal posted idea #4917.

A contradiction in The Beginning of Infinity by David Deutsch? 🤔
https://www.youtube.com/shorts/xmngAmZMEuo

  Dennis Hackethal posted idea #4916.

Steve Jobs was a good writer.
He wrote clearly and simply.
Anyone can understand him.

Jobs’s article on Adobe Flash

From https://x.com/WebDesignMuseum/status/2049544240213196807

  Dennis Hackethal addressed criticism #4914.

In that case, I'm unclear what "100% true" means. If your definitions have wiggle room, then the truth is not your idea. The truth is within the bounds of your idea, but it is not identical to your idea.

#4914​·​Rob Rosenbaum, 9 days ago

In that case, I'm unclear what "100% true" means.

Perfect correspondence with the facts.

For example, if it’s currently raining, and you say it is, then your statement is 100% true.

  Rob Rosenbaum addressed criticism #4906.

I think you misunderstand both my own argument and the meaning of ambiguity.

You’re saying that, to hold a true idea in the sense of absolute truth in my head, I’d have to have perfect definitions, which require infinite amounts of information, and having all that information is impossible. Right?

While you obviously know what those words mean, you do not have absolute, 100% defined boundaries of what they refer to and what they don't.

I think it’s enough to know what the words mean for the idea to be true. We don’t have to have “100% defined boundaries”.

Truth means correspondence with the facts (Tarski). Not infinite precision.

I think a ‘trick’ cynics use (not maliciously, still I like to call it a trick) is to set an unrealistically high standard for truth. And then, when no idea ends up being able to meet that standard, they say the idea can’t be true.

#4906​·​Dennis HackethalOP, 10 days ago

In that case, I'm unclear what "100% true" means. If your definitions have wiggle room, then the truth is not your idea. The truth is within the bounds of your idea, but it is not identical to your idea.

  Dennis Hackethal commented on idea #4912.

Cool - maybe that is better. I think that inexplicit knowledge is underrated in the David Deutsch circles, what do you think? ... there, I used it in a sentence, now I'll remember it!

#4912​·​Patrick O'Loughlin, 9 days ago

Nice, yes. I do see Deutschians using the concept, especially in the context of the fun criterion. But in the general public inexplicit knowledge is underrated, I agree.

  Patrick O'Loughlin commented on criticism #4910.

Not to be a stickler but I think you mean ‘inexplicit’.

Implicit = not said directly but implied. Can still accompany explicit speech though.
Inexplicit = not expressed in words or symbols.

At least that’s how I use the terms.

#4910​·​Dennis Hackethal, 10 days ago

Cool - maybe that is better. I think that inexplicit knowledge is underrated in the David Deutsch circles, what do you think? ... there, I used it in a sentence, now I'll remember it!

  Dennis Hackethal addressed criticism #4908.

It’s a gift, then. He increased the value you had by, let's say, replacing your bike with the same bike but new.

#4908​·​Fabio Guerreiro, 10 days ago

Value isn’t in the object itself. It’s in the owner’s mind. If the owner doesn’t consent to the replacement, the value may well be lower.

For example, imagine somebody replacing your teddy bear from childhood with the ‘same’ one but new.

  Dennis Hackethal criticized idea #4909.

Implicit vs Explicit knowledge -- AI, like DNA, has the ability to create implicit knowledge.

#4909​·​Patrick O'Loughlin, 10 days ago

Not to be a stickler but I think you mean ‘inexplicit’.

Implicit = not said directly but implied. Can still accompany explicit speech though.
Inexplicit = not expressed in words or symbols.

At least that’s how I use the terms.

  Patrick O'Loughlin commented on idea #4683.

AIs have created output that is not only novel, but seems to constitute new knowledge (resilient information), such as the famous Move 37 from AlphaGo. That is new knowledge because the move was not present in the training data explicitly, nor did the designers construct it.

#4683​·​Tyler MillsOP, about 2 months ago

Implicit vs Explicit knowledge -- AI, like DNA, has the ability to create implicit knowledge.

  Fabio Guerreiro criticized idea #1426.

Yeah. And if he takes it against your will and replaces it with a brand new bike it’s still theft.

#1426​·​Dennis Hackethal, about 1 year ago

It’s a gift, then. He increased the value you had by, let's say, replacing your bike with the same bike but new.

  Dennis Hackethal addressed criticism #4892.

I think you run into the problem of definitions. An idea cannot be absolute, perfect truth without total, perfect, complete definitions for its terms. This isn't required for knowledge - the terms can be rough because the ideas are tentative. But for absolute truth, the boundaries of meaning of your terms must be completely determined. But, as the postmoderns pointed out, this requires infinite information - the complete determination of any one term requires its distinction from all other terms. In fact, they didn't go far enough. I'd argue you would need to know the distinction between the term and all other possible terms.

You have to know perfect definitions in order to have the idea in your head be perfectly true. Perfect definitions require infinite information, therefore you cannot know perfect truth.

#4892​·​Rob Rosenbaum, 11 days ago

…as the postmoderns pointed out…

Citation needed.

  Dennis Hackethal addressed criticism #4904.

I think you misunderstand both my own argument and the meaning of ambiguity. "I'm currently located in a hemisphere" is not ambiguous in its meaning due to not knowing which hemisphere you're in. The meaning is ambiguous to the extent that we do not have absolute knowledge of what you are, what it is to be located, or what a hemisphere is - or what "in" is. While you obviously know what those words mean, you do not have absolute, 100% defined boundaries of what they refer to and what they don't. But you would have to have that to have absolute truth.

I may be wrong in this argument, but I don't see how your counterexample refutes it.

#4904​·​Rob Rosenbaum, 10 days ago

I think you misunderstand both my own argument and the meaning of ambiguity.

You’re saying that, to hold a true idea in the sense of absolute truth in my head, I’d have to have perfect definitions, which require infinite amounts of information, and having all that information is impossible. Right?

While you obviously know what those words mean, you do not have absolute, 100% defined boundaries of what they refer to and what they don't.

I think it’s enough to know what the words mean for the idea to be true. We don’t have to have “100% defined boundaries”.

Truth means correspondence with the facts (Tarski). Not infinite precision.

I think a ‘trick’ cynics use (not maliciously, still I like to call it a trick) is to set an unrealistically high standard for truth. And then, when no idea ends up being able to meet that standard, they say the idea can’t be true.